\section{How to learn fair models?} At a high level, the optimization problem that we seek to solve is written as,\begin{align} \min_{h\in\mathcal{H}} \mathbb{E}_{z:(x,y,s)\sim \mathcal{D}} \mathcal{L}(h;(x,y)) ~~\text{subject to}~~ h \in \mathcal{F}_{d_h}, \label{eq:fairopt} \end{align} where $\mathcal{L}$ denotes the loss function that measures the accuracy of $h$ in predicting $y$ from $x$, and $\mathcal{F}_{d_h}$ denotes the set of {\em fair} classifiers. Our approach to solve \eqref{eq:fairopt} {\em provably efficiently} involves two main steps: \begin{enumerate*}[label=(\roman*)]\item first, we reformulate problem \eqref{eq:fairopt} to compute a posterior distribution $q$ over $\mathcal{H}$; \item second, we incorporate fairness as {\em soft} constraints on the output of $q$ using the augmented Lagrangian of Problem \eqref{eq:fairopt}. \end{enumerate*} We assume that we have access to a sufficient number of samples to approximate $\mathcal{D}$ and solve the empirical version of Problem \eqref{eq:fairopt}. \subsection{From Fair Classifiers to Fair Posteriors} \label{sec:model} The starting point of our development is the following simple result, which follows directly from the definitions of fairness metrics in Section \ref{sec:nots}: \begin{obsn}\label{obs1:rand} Fairness metrics such as DP/EO are {linear} functions of $h$, whereas PP takes a linear {fractional} form due to the conditioning on $\hat{y}$, see \cite{celis2019classification}. \end{obsn} Observation \ref{obs1:rand} immediately implies that $\mathcal{F}_{d_h}$ can be represented using linear (fractional) equations in $h$. To simplify the discussion, we will focus on the case when $\mathcal{F}_{d_h}$ is given by the DP metric. Hence, we can reformulate \eqref{eq:fairopt} as, \begin{align} \label{eq:alm_twoplayer} \min_{q\in\Delta} \quad \sum_i q_i e_{h_i} \text{ s.t. 
} q_i (\mu_{h_i}^{s_0} - \mu_{h_i}^{s_1}) = 0 \quad \forall i \in [N], \end{align} where $q$ represents a distribution over $\mathcal{H}$. \subsection{Imposing Fairness via Soft Constraints} In general, there are two ways of treating the $N$ constraints $q_id_{h_i}=0$ in Problem \eqref{eq:alm_twoplayer}, namely \begin{enumerate*}[label=(\roman*)]\item as {\em hard constraints}; or \item as {\em soft constraints}. \end{enumerate*} Algorithms that can handle explicit constraints efficiently require access to an efficient oracle that can minimize a linear or quadratic function over the feasible set in {\em each} iteration. Consequently, algorithms that incorporate hard constraints come with a high per-iteration computational cost, since the number of constraints is (at least) linear in $N$, and are not applicable in large-scale settings. Hence, we propose to use algorithms that incorporate fairness as soft constraints. With these two choices -- a posterior over classifiers and soft constraints -- we now describe our approach to solve Problem \eqref{eq:alm_twoplayer}. \section{Fair Posterior from Proximal Dual} Following the reductions approach in \cite{agarwal2018reductions}, we first write the Lagrangian dual problem of the DP-constrained risk minimization problem \eqref{eq:alm_twoplayer} using dual variables $\lambda$ as, \begin{align} \label{eq: randomized_almminmax} \max_{\lambda\in\mathbb{R}^N}\min_{q\in\Delta} L(q, \lambda):= \langle q, e_h \rangle + \lambda \langle q, \mu_h^{s_0} - \mu_h^{s_1} \rangle \end{align} {\bf Interpreting the Lagrangian.} Problem \eqref{eq: randomized_almminmax} can be understood as a game between two players: a $q$-player and a $\lambda$-player \cite{cotter2018optimization}. We recall an important fact regarding the dual problem \eqref{eq: randomized_almminmax}:\begin{fact}\label{rmk:dualns} The objective function of the dual problem \eqref{eq: randomized_almminmax} is {\em always nonsmooth} with respect to $\lambda$ because of the inner minimization problem in $q$. 
\end{fact} Technically, there are two main reasons why optimizing nonsmooth functions can be challenging \cite{duchi2012randomized}: \begin{enumerate*}[label=(\roman*)]\item finding a descent direction in high dimensions $N$ can be challenging; and \item subgradient methods can be slow to converge in practice.\end{enumerate*} Due to these difficulties arising from Fact \ref{rmk:dualns}, using a first-order algorithm such as gradient descent to solve the dual problem in \eqref{eq: randomized_almminmax} directly can be problematic, and may be suboptimal. {\bf Accelerated optimization using Dual Proximal Functions.} To overcome the difficulties due to the nonsmoothness of the dual problem, we propose to {\em augment} the Lagrangian with a proximal term. Specifically, for some $\lambda_T$, the augmented Lagrangian function can be written as, \begin{align} \resizebox{0.9\hsize}{!}{$L_T(q, \lambda) = \langle q, e_h \rangle + \lambda \langle q, \mu_h^{s_0} - \mu_h^{s_1} \rangle - \frac{1}{2\eta} (\lambda - \lambda_T)^2$}\label{eq:alprox} \end{align} Note that, as per our simplified notation, $L_T \equiv L_{\lambda_T}$. The following lemma relates the standard Lagrangian in \eqref{eq: randomized_almminmax} with its proximal counterpart in \eqref{eq:alprox}. \begin{lemma} \label{lemma: alm_vs_lm} At the optimal solution $(q^*,\lambda^*)$ to $L$, we have $\max_{\lambda} \min_{q\in\Delta} L = \max_{\lambda}\min_{q\in\Delta} L_{\lambda^*}$. \end{lemma} This is a standard property of proximal objective functions, where $\lambda^*$ forms a fixed point of $\min_{q\in \Delta} L_{\lambda^*}(q, \lambda^*)$ (Section 2.3 of \cite{parikh2014proximal}). Intuitively, Lemma \ref{lemma: alm_vs_lm} states that $L$ and $L_T$ are equivalent for optimization purposes. \begin{remark}While the augmented Lagrangian $L_T$ still may be nonsmooth, the proximal (quadratic) term can be exploited to design {\em provably} faster optimization algorithms as we will see shortly. 
\end{remark} \begin{algorithm}[t] \caption{FairALM: Linear Classifier} \label{alg:fair_alm_linear} \begin{algorithmic}[1] \STATE {\em Notations:} Dual step size $\eta$ \\ \quad $h_t \in \{h_1, h_2, \hdots, h_N \}$. \STATE {\em Input:} Error Vector $e_\mathcal{H}$,\\ \quad Conditional mean vector $\mu_\mathcal{H}^{s}$\\ \STATE {\em Initializations: $\lambda_0 = 0$} \FOR{$t = 0, 1, 2, ...,T$} \STATE (Primal) $h_t \leftarrow \mathrm{argmin}_i (e_{h_i} + \lambda_t (\mu_{h_i}^{s_0} - \mu_{h_i}^{s_1}))$ \label{alg: ht_update} \STATE (Dual) $\lambda_{t+1} \leftarrow \lambda_t + \eta (\mu_{h_t}^{s_0} - \mu_{h_t}^{s_1}) / t$ \label{alg: lambda_t update} \ENDFOR \STATE {\em Output:} $h_T$ \end{algorithmic} \end{algorithm} \section{Our Algorithm -- FairALM} It is common \cite{agarwal2018reductions,cotter2018optimization,kearns2017preventing} to consider the minimax problem in \eqref{eq:alprox} as a zero-sum game between the $\lambda$-player and the $q$-player. The Lagrangian(s) $L_T$ (or $L$) specify the cost that the $q$-player pays to the $\lambda$-player after the latter makes its choice. An iterative procedure leads to a regret-minimizing strategy for the $\lambda$-player \cite{shalev2012online} and a best-response strategy for the $q$-player \cite{agarwal2018reductions}. While the $q$-player's move relies on the availability of an efficient \textit{oracle} to solve the inner minimization problem, $L_T(q, \lambda)$ is linear in $q$, which makes this step less challenging. We describe our algorithm in Alg.~\ref{alg:fair_alm_linear} and call it \textit{FairALM: Linear Classifier}. \subsection{Convergence Analysis} As the game with respect to $\lambda$ is a maximization problem, we get a reverse regret bound as shown in the following lemma. Due to space, proofs appear in the Appendix. \begin{lemma} \label{lemma: regretbound} Let $r_t$ denote the reward at each round of the game. 
The reward function $f_t(\lambda)$ is defined as $f_t(\lambda) = \lambda r_t - \frac{1}{2\eta} (\lambda - \lambda_t)^2$. We choose $\lambda$ in round $T+1$ to maximize the cumulative reward: $\lambda_{T+1} = \mathrm{argmax}_{\lambda} \sum_{t=1}^T f_t(\lambda)$. Define $L = \max_t |r_t|$. The following bound on the cumulative reward holds for any $\lambda$, \begin{align} \resizebox{0.9\hsize}{!}{$\sum_{t=1}^T \bigg(\lambda r_t - \frac{1}{2\eta} (\lambda - \lambda_t)^2 \bigg) \le \sum_{t=1}^T \lambda_t r_t + \frac{\eta}{2}L^2 \mathcal{O}(\log T)$} \end{align} \end{lemma} The above lemma indicates that the cumulative reward grows in time as $\mathcal{O}(\log T)$. The proximal term in the augmented Lagrangian gives us a {\em better} bound than an $\ell_2$ or an entropic regularizer (which provide an $\mathcal{O}(\sqrt{T})$ bound \cite{shalev2012online}). Next, we evaluate the cost function $L_T(q, \lambda)$ after $T$ rounds of the game. We observe that the average play of both players converges to a saddle point with respect to $L_T(q, \lambda)$. We formalize this in the following theorem. \begin{theorem} Recall that $d_h$ represents the difference of conditional means. Assume that $|| d_h||_{\infty} \le L$ and consider $T$ rounds of the game described above. Let the average plays of the $q$-player be $\bar q = \frac{1}{T}\sum_{t=1}^T q_t$ and the $\lambda$-player be $\bar \lambda = \frac{1}{T}\sum_{t=1}^T \lambda_t$. 
Then under the following conditions on $q$, $\lambda$ and $\eta$, we have $L_T(\bar q, \bar \lambda) \le L_T(q, \bar \lambda) + \nu \text{ and } L_T(\bar q, \bar \lambda) \ge L_T(\bar q, \lambda) - \nu$ \begin{compactitem} \item If $\eta = \mathcal{O}(\sqrt{\frac{B^2T}{L^2 (\log T+ 1)}})$, $\nu = \mathcal{O}(\sqrt{\frac{B^2 L^2 (\log T + 1)}{T}})$; $\forall |\lambda| \le B$, $\forall q \in \Delta$ \item If $\eta = \frac{1}{T}$, $\nu = \mathcal{O}(\frac{L^2(\log T + 1)^2}{T})$; $\forall \lambda \in \mathbb{R}$, $\forall q \in \Delta$ \end{compactitem} \end{theorem} The above theorem indicates that the average play of the $q$-player and the $\lambda$-player reaches a $\nu$-approximate saddle point. Our bound for the case $\eta = \frac{1}{T}$ and $\lambda \in \mathbb{R}$ is strictly better than that of \cite{agarwal2018reductions}.\\ \begin{algorithm}[t] \caption{FairALM: DeepNet Classifier} \label{alg:deepnets_alg} \small \begin{algorithmic}[1] \STATE {\em Notations:} Dual step size $\eta$, Primal step size $\tau$ \STATE {\em Input:} Training Set $D$ \STATE {\em Initializations:} $\lambda_0=0$, $w_0$ \FOR{$t = 0, 1, 2, ...,T$} \STATE Sample $z \sim D$ \STATE Pick $v_t \in \partial \Big(\hat e_{h_w}(z) + (\lambda_t + \eta) \hat \mu_{h_w}^{s_0} (z) - (\lambda_t-\eta) \hat \mu_{h_w}^{s_1} (z) \Big) $ \STATE (Primal) $w_t \leftarrow w_{t-1} - \tau v_t$ \label{alg:supp_ht_update} \STATE (Dual) $\lambda_{t+1} \leftarrow \lambda_t + \eta (\hat \mu_{h_{w_t}}^{s_0}(z) - \hat \mu_{h_{w_t}}^{s_1}(z)) $ \label{alg:supp_lambda_t update} \ENDFOR \STATE {\em Output:} $w_T$ \end{algorithmic} \end{algorithm} \subsection{Can we train Fair Deep Neural Networks by adapting Alg. \ref{alg:fair_alm_linear}?} The key difficulty, from the analysis standpoint, in extending these results to the deep network setting is that the number of classifiers $|\mathcal{H}|$ may be exponential in the number of nodes/layers. 
This creates a potential problem in computing Step~\ref{alg: ht_update} of Algorithm~\ref{alg:fair_alm_linear}: viewed mechanistically, it is not practical since an $\epsilon$-net over the family $\mathcal{H}$ (representable by a neural network) is exponential in size. Interestingly, notice that we often use over-parameterized networks for learning. This is a useful fact here because it means that there exists a solution where $\min_i (e_{h_i} + \lambda_t d_{h_i})$ is $0$. While iterating through all $h_i$s is intractable, we may still be able to obtain a solution via standard stochastic gradient descent (SGD) procedures \cite{zhang2016understanding}. The only unresolved question then is whether we can do posterior inference and obtain classifiers that are ``fair''. It turns out that the above procedure provides us with an approximation if we leverage two facts: first, SGD can find the minimum of $L(h,\lambda)$ with respect to $h$ and second, recent results show that SGD, in fact, performs variational inference, implying that the optimization provides an approximate posterior \cite{chaudhari2018stochastic}. Having discussed the issue of the exponentially sized $\mathcal{H}$ -- for which we settle for an approximate posterior -- we make three additional adjustments to the algorithm to make it suitable for training deep networks. First, the non-differentiable indicator function $\mathbbm{1}[\cdot]$ is replaced with a smooth surrogate function (such as a logistic function). Second, as it is hard to evaluate $e_h/\mu_h^{s}$ without access to the true data distribution, we instead calculate their empirical estimates from samples $z = (x, y, s)$, denoted by $\hat e_{h}(z)/\hat \mu_{h}^{s}(z)$. Third, by exchanging the ``$\max$'' and ``$\min$'' in \eqref{eq: randomized_almminmax}, we obtain an objective that {\em upper-bounds} our current objective in \eqref{eq: randomized_almminmax}. 
This provides us with a closed-form solution for $\lambda$, thus reducing the min-max objective to a single, simpler minimization problem. We present the algorithm for deep neural network training in Alg.~\ref{alg:deepnets_alg} and call it \textit{FairALM: DeepNet Classifier.} \subsection{More details on \textit{FairALM: DeepNet Classifier}} Recall that in $\S~5.2$ of the paper, we identified a key difficulty when extending our algorithm to deep networks. The main issue is that the set of classifiers $\mathcal{H}$ is not a finite set. We argued that leveraging stochastic gradient descent (SGD) on an over-parameterized network eliminates this issue. When using SGD, a few additional modifications of Alg.~$1$ (in the paper) are helpful, such as replacing the non-differentiable indicator function $\mathbbm{1}[\cdot]$ with a smooth surrogate function and computing the empirical estimates of the errors and conditional means, denoted by $\hat e_{h}(z)/\hat \mu_{h}^{s}(z)$ respectively. These changes modify our objective to a form that is not a zero-sum game, \begin{align} \label{eqsup:nzobj} \max_{\lambda} \min_w \Big( \hat e_{h_w} + \lambda (\hat \mu_{h_w}^{s_0} - \hat \mu_{h_w}^{s_1}) -\frac{1}{2\eta} (\lambda - \lambda_t)^2 \Big) \end{align} We use the DP constraint in \eqref{eqsup:nzobj}; the other fairness metrics discussed in the paper are valid as well. A closed-form solution for $\lambda$ can be obtained by solving an upper bound to \eqref{eqsup:nzobj}, derived by exchanging the ``max''/``min'' operations. 
\begin{align} \max_{\lambda} \min_w \Big( \hat e_{h_w} &+ \lambda (\hat \mu_{h_w}^{s_0} - \hat \mu_{h_w}^{s_1}) -\frac{1}{2\eta} (\lambda - \lambda_t)^2 \Big) \\ \label{eqsup:lagub} &\le \min_w \max_{\lambda} \Big( \hat e_{h_w} + \lambda (\hat \mu_{h_w}^{s_0} - \hat \mu_{h_w}^{s_1}) -\frac{1}{2\eta} (\lambda - \lambda_t)^2 \Big) \intertext{Substituting the closed-form solution $\lambda = \lambda_t + \eta(\hat \mu_{h_w}^{s_0} - \hat \mu_{h_w}^{s_1})$ in \eqref{eqsup:lagub},} \label{eqsup:convupperhalf} &\le \min_w \Big( \hat e_{h_w} + \lambda_t (\hat \mu_{h_w}^{s_0} - \hat \mu_{h_w}^{s_1}) + \frac{\eta}{2} (\hat \mu_{h_w}^{s_0} - \hat \mu_{h_w}^{s_1})^2 \Big) \intertext{Note that the surrogate function defined within $\hat \mu_{h_w}^{s}$ is convex and non-negative; hence, we can exploit Jensen's inequality to eliminate the square in \eqref{eqsup:convupperhalf}, giving us a convenient upper bound,} \label{eqsup:convupper} &\le \min_w \Big( \hat e_{h_w} + (\lambda_t + \eta) \hat \mu_{h_w}^{s_0} - (\lambda_t - \eta) \hat \mu_{h_w}^{s_1}\Big) \end{align} In order to obtain a good minimum in \eqref{eqsup:convupper}, it may be essential to run SGD on \eqref{eqsup:convupper} multiple times: for the ImSitu experiments, SGD was run on \eqref{eqsup:convupper} $5$ times. We also gradually increase the parameter $\eta$ with time as $\eta_{t} = \eta_{t-1} (1 + \eta_\beta)$ for a small non-negative value of $\eta_\beta$, e.g., $\eta_\beta\approx 0.01$. This is a common practice in augmented Lagrangian methods, see \cite{bertsekas2014constrained} (page $104$). The overall algorithm is available in the paper as Alg.~$2$. The key primal and dual steps can be seen in the following section. \clearpage \subsection{Algorithm for baselines} We provide the primal and dual steps used for the baseline algorithms for the ImSitu experiments from the paper. The basic framework for all the baselines remains the same as Alg.~$2$ in the paper. 
For the Proxy-Lagrangian baseline, only the key ideas in \cite{cotter2018two} were adopted in our implementation. \begin{empheq}[box={\GaryboxAlg[Unconstrained]}]{align*} \text{PRIMAL:} &\quad v_t \in \partial \hat e_{h_w} \\ \text{DUAL:} &\quad \text{None} \end{empheq} \begin{empheq}[box={\GaryboxAlg[$\ell_2$ Penalty]}]{align*} \text{PRIMAL:} &\quad v_t \in \partial \Big( \hat e_{h_w} + \eta (\hat \mu_{h_w}^{s_0} - \hat \mu_{h_w}^{s_1})^2 \Big) \\ \text{DUAL:} &\quad \text{None} \\ \text{Parameters:} &\quad \text{Penalty Parameter } \eta \end{empheq} \begin{empheq}[box={\GaryboxAlg[Reweight]}]{align*} \text{PRIMAL:} &\quad v_t \in \partial \Big( \hat e_{h_w} + \eta_0 \hat \mu_{h_w}^{s_0} + \eta_1 \hat \mu_{h_w}^{s_1} \Big) \\ \text{DUAL:} &\quad \text{None} \\ \text{Parameters:} &\quad \eta_i \propto 1 / (\# \text{ samples in } s_i) \end{empheq} \begin{empheq}[box={\GaryboxAlg[Lagrangian \cite{zhao2017men}]}]{align*} \text{PRIMAL:} &\quad v_t \in \partial \Big( \hat e_{h_w} + \lambda^{0\backslash 1}_t (\hat \mu_{h_w}^{s_0} - \hat \mu_{h_w}^{s_1} - \epsilon) + \lambda^{1\backslash 0}_t (\hat \mu_{h_w}^{s_1} - \hat \mu_{h_w}^{s_0} - \epsilon)\Big) \\ \text{DUAL:} &\quad \lambda^{i\backslash j}_{t+1} \leftarrow \max \big(0, \lambda^{i\backslash j}_t + \eta_{i\backslash j} (\hat \mu_{h_w}^{s_i} - \hat \mu_{h_w}^{s_j} - \epsilon) \big)\\ \text{Parameters:} &\quad \text{Dual step sizes } \eta_{0 \backslash 1}, \eta_{1 \backslash 0}, \text{ Tol. } \epsilon \approx 0.05. 
\hskip 5pt \mtiny{i\backslash j \in \{ 0\backslash 1, 1 \backslash 0\} } \end{empheq} \begin{empheq}[box={\GaryboxAlg[Proxy-Lagrangian \cite{cotter2018two}]}]{align*} \text{PRIMAL:} &\quad v_t \in \partial \Big( \hat e_{h_w} + \lambda^{0 \backslash 1}_t (\hat \mu_{h_w}^{s_0} - \hat \mu_{h_w}^{s_1} - \epsilon) + \lambda^{1 \backslash 0}_t (\hat \mu_{h_w}^{s_1} - \hat \mu_{h_w}^{s_0} - \epsilon)\Big) \\ \text{DUAL:} &\quad \theta^{i \backslash j}_{t+1} \leftarrow \theta^{i\backslash j}_{t} + \eta_{i\backslash j} (\hat \mu_{h_w}^{s_i} - \hat \mu_{h_w}^{s_j} - \epsilon) \\ &\quad \lambda^{i\backslash j}_{t+1} \leftarrow B \frac{\exp{\theta^{i\backslash j}_{t+1}}}{1 + \exp{\theta^{i\backslash j}_{t+1}} + \exp{\theta^{j\backslash i}_{t+1}}}\\ \text{Parameters:} &\quad \text{Dual step sizes } \eta_{0\backslash 1}/\eta_{1\backslash 0}. \text{ Tol. } \epsilon \approx 0.05, \text{ Hyperparam. } B \\ &\quad \text{No surrogates in DUAL for } \hat \mu^{s_0}_{h_w} / \hat \mu^{s_1}_{h_w}. \hskip 5pt \mtiny{i\backslash j \in \{ 0\backslash 1, 1 \backslash 0\} } \end{empheq} \begin{empheq}[box={\GaryboxAlg[FairALM]}]{align*} \text{PRIMAL:} &\quad v_t \in \partial \Big(\hat e_{h_w}(z) + (\lambda_t + \eta) \hat \mu_{h_w}^{s_0} (z) - (\lambda_t-\eta) \hat \mu_{h_w}^{s_1} (z) \Big) \\ \text{DUAL:} &\quad \lambda_{t+1} \leftarrow \lambda_t + \eta \big(\hat \mu_{h_{w}}^{s_0} - \hat \mu_{h_{w}}^{s_1} \big) \\ \text{Parameters:} &\quad \text{Dual Step Size } \eta \end{empheq} \subsection{Experiments on \textit{FairALM: Linear Classifier}} {\bf Data.} We consider four standard datasets, {\tt Adult}, {\tt COMPAS}, {\tt German} and {\tt Law Schools} \cite{donini2018empirical,agarwal2018reductions}. The {\tt Adult} dataset comprises demographic characteristics, and the task is to predict whether a person's income is higher (or lower) than $\$50$K per year. The protected attribute here is gender. 
In the {\tt COMPAS} dataset, the task is to predict the recidivism of individuals based on features such as age, gender, race, prior offenses and charge degree. The protected attribute here is race, specifically, whether the individual is white or black. The {\tt German} dataset classifies people as good or bad credit risks with the person being a foreigner or not as the protected attribute. The features available in this dataset are credit history, saving accounts, bonds, etc. Finally, the {\tt Law Schools} dataset, which comprises $\sim 20$K examples, seeks to predict a person's passage of the bar exam. Here, the binary attribute race is the protected attribute.\\ {\bf Setup.} We use Alg.~$1$ in the paper for experiments in this section. Recall from $\S~3$ of the paper that Alg.~$1$ requires the specification of $\mathcal{H}$. We use the space of logistic regression classifiers as $\mathcal{H}$. At the start of the algorithm we have an empty set of classifiers. In each iteration, we add a newly trained classifier $h \in \mathcal{H}$ to the set of classifiers only if $h$ has a smaller Lagrangian objective value than every classifier already in the set. \\ {\bf Quantitative Results.} For the {\tt Adult} dataset, FairALM attains a smaller test error and smaller DEO compared to the baselines considered in Table~\ref{tab:linear_classifier}. FairALM yields large improvements in the DEO measure on the {\tt COMPAS} dataset and in test error on the {\tt German} dataset. While the performance of FairALM on {\tt Law Schools} is comparable to other methods, it obtains a better false-positive rate than \cite{agarwal2018reductions}, which is a more informative metric here since this dataset is skewed towards its target class.\\ {\bf Summary.} We train Alg.~$1$ on standard datasets specified in \cite{donini2018empirical,agarwal2018reductions}. We observe that FairALM is competitive with the popular methods in the fairness literature. 
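The primal/dual loop of Alg.~1 over a finite pool of classifiers is short enough to sketch directly; the following is a minimal illustration (not the paper's code) in which the error vector and group-conditional means are assumed to be precomputed, and all names are ours:

```python
def fair_alm_linear(e, mu0, mu1, eta=1.0, T=100):
    """Sketch of the Alg.~1 loop over a finite classifier pool.

    e[i]   : error of classifier h_i
    mu0[i] : conditional mean of h_i's positive predictions on group s_0
    mu1[i] : same quantity on group s_1
    eta    : dual step size
    """
    lam = 0.0
    h_t = 0
    for t in range(1, T + 1):
        # (Primal) best response: argmin_i  e_i + lam * (mu0_i - mu1_i)
        h_t = min(range(len(e)), key=lambda i: e[i] + lam * (mu0[i] - mu1[i]))
        # (Dual) ascent on the DP gap, damped by 1/t as in Alg.~1
        lam += eta * (mu0[h_t] - mu1[h_t]) / t
    return h_t, lam
```

On a toy pool where one classifier is accurate but unfair and another is slightly less accurate but fair, the dual variable grows until the fair classifier becomes the primal best response.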
\begin{table}[!h] \centering \resizebox{0.9\linewidth}{!}{% \begin{tabular}{c@{\hskip 0.2in}cg@{\hskip 0.25in}cg@{\hskip 0.25in}cg@{\hskip 0.25in}cg} & \multicolumn{2}{c}{Adult} & \multicolumn{2}{c}{COMPAS} & \multicolumn{2}{c}{German} & \multicolumn{2}{c}{Law Schools} \\ \hline\hline & ERR & DEO & ERR & DEO & ERR & DEO & ERR & DEO \\ \hline\hline Zafar \textit{et al.} \cite{zafar2017fairness} & $22.0$ & $5.0$ & $31.0$ & $10.0$ & $38.0$ & $13.0$ & $-$ & $-$ \\ \hline Hardt \textit{et al.} \cite{hardt2016equality} & $18.0$ & $11.0$ & $29.0$ & $8.0$ & $29.0$ & $11.0$ & $4.5$ & $0.0$ \\ \hline Donini \textit{et al.} \cite{donini2018empirical} & $19.0$ & $1.0$ & $27.0$ & $5.0$ & $27.0$ & $5.0$ & $-$ & $-$ \\ \hline Agarwal \textit{et al.} \cite{agarwal2018reductions} & $17.0$ & $1.0$ & $31.0$ & $3.0$ & $-$ & $-$ & $4.5$ & $1.0$ \\ \hline \textbf{FairALM} & $15.8 \pm 1$ & $0.7 \pm 0.6$& $34.7 \pm 1$& $0.1 \pm 0.1$ & $24.3 \pm 2.7$ & $10.8 \pm 4.5$ &$4.8 \pm 0.1$ & $0.4 \pm 0.2$ \\ \hline\hline \end{tabular}% } \caption{\label{tab:linear_classifier} \footnotesize \textbf{Standard Datasets.} We report the test error (ERR) and the DEO fairness measure in $\%$. FairALM attains the smallest DEO measure on most datasets while maintaining a similar test error.} \end{table} \subsection{Supplementary Results on CelebA} {\bf Additional Results. } The dual step size $\eta$ is a key parameter in {\rm FairALM} training. Analogous to the dual step size $\eta$ in {\rm FairALM}, $\ell_2$-penalty training has a penalty parameter, also denoted by $\eta$. It can be seen from Figure~\ref{fig:fairalm_ablation_celeba} and Figure~\ref{fig:l2penalty_ablation_celeba} that {\rm FairALM} is more robust to different choices of $\eta$ than the $\ell_2$ penalty. The target class in this section is \textit{attractiveness} and the protected attribute is \textit{gender}. 
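For reference, the DEO measure used in Table~\ref{tab:linear_classifier} and in the ablations below is the gap in true positive rates between the two protected groups; a minimal sketch of its computation from binary predictions (function and variable names are ours):

```python
def deo(y_true, y_pred, s):
    """Difference of Equal Opportunity:
    |P(yhat=1 | y=1, s=0) - P(yhat=1 | y=1, s=1)|,
    i.e. the gap in true positive rates across the two protected groups."""
    def tpr(group):
        # predictions on the positive examples of this group
        preds = [p for yt, p, g in zip(y_true, y_pred, s) if yt == 1 and g == group]
        return sum(preds) / len(preds)
    return abs(tpr(0) - tpr(1))
```

The DP gap is computed analogously, without conditioning on $y = 1$.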
\begin{figure*}[!h] \centering \frame{ \begin{minipage}{0.2\linewidth} \includegraphics[width=\linewidth]{supp_figs/celeba/Unconstrained_FairALM_acc_eta20.png} \end{minipage} $(\eta = 20)$ \begin{minipage}{0.2\linewidth} \includegraphics[width=\linewidth]{supp_figs/celeba/Unconstrained_FairALM_deo_eta20.png} \end{minipage}} \hskip 4pt \frame{ \begin{minipage}{0.2\linewidth} \includegraphics[width=\linewidth]{supp_figs/celeba/Unconstrained_FairALM_acc_eta40.png} \end{minipage} $(\eta = 40)$ \begin{minipage}{0.2\linewidth} \includegraphics[width=\linewidth]{supp_figs/celeba/Unconstrained_FairALM_deo_eta40.png} \end{minipage}} % \vskip 4pt \frame{ \begin{minipage}{0.2\linewidth} \includegraphics[width=\linewidth]{supp_figs/celeba/Unconstrained_FairALM_acc_eta60.png} \end{minipage} $(\eta = 60)$ \begin{minipage}{0.2\linewidth} \includegraphics[width=\linewidth]{supp_figs/celeba/Unconstrained_FairALM_deo_eta60.png} \end{minipage}} \hskip 4pt \frame{ \begin{minipage}{0.2\linewidth} \includegraphics[width=\linewidth]{supp_figs/celeba/Unconstrained_FairALM_acc_eta80.png} \end{minipage} $(\eta = 80)$ \begin{minipage}{0.2\linewidth} \includegraphics[width=\linewidth]{supp_figs/celeba/Unconstrained_FairALM_deo_eta80.png} \end{minipage}}% \caption{\label{fig:fairalm_ablation_celeba}\footnotesize \textbf{FairALM Ablation on CelebA. } For a given $\eta$, the left image represents the test error and the right image shows the DEO measure. We study the effect of varying the dual step size $\eta$ on FairALM. 
We observe that the performance of FairALM is consistent over a wide range of $\eta$ values.} \end{figure*} \begin{figure*}[!h] \centering \frame{ \begin{minipage}{0.19\linewidth} \includegraphics[width=\linewidth]{supp_figs/celeba/Unconstrained_l2_penalty_acc_eta0p001.png} \end{minipage} $(\eta = 0.001)$ \begin{minipage}{0.19\linewidth} \includegraphics[width=\linewidth]{supp_figs/celeba/Unconstrained_l2_penalty_deo_eta0p001.png} \end{minipage}} \hskip 4pt \frame{ \begin{minipage}{0.19\linewidth} \includegraphics[width=\linewidth]{supp_figs/celeba/Unconstrained_l2_penalty_acc_eta0p01.png} \end{minipage} $(\eta = 0.01)$ \hskip 4pt \begin{minipage}{0.19\linewidth} \includegraphics[width=\linewidth]{supp_figs/celeba/Unconstrained_l2_penalty_deo_eta0p01.png} \end{minipage}} % \vskip 4pt \frame{ \begin{minipage}{0.19\linewidth} \includegraphics[width=\linewidth]{supp_figs/celeba/Unconstrained_l2_penalty_acc_eta0p1.png} \end{minipage} $(\eta = 0.1)$ \hskip 8pt \begin{minipage}{0.19\linewidth} \includegraphics[width=\linewidth]{supp_figs/celeba/Unconstrained_l2_penalty_deo_eta0p1.png} \end{minipage}} \hskip 4pt \frame{ \begin{minipage}{0.19\linewidth} \includegraphics[width=\linewidth]{supp_figs/celeba/Unconstrained_l2_penalty_acc_eta1.png} \end{minipage} $(\eta = 1)$ \hskip 12pt \begin{minipage}{0.19\linewidth} \includegraphics[width=\linewidth]{supp_figs/celeba/Unconstrained_l2_penalty_deo_eta1.png} \end{minipage}}% \caption{\label{fig:l2penalty_ablation_celeba} \footnotesize \textbf{$\bm{\ell_2}$ Penalty Ablation on CelebA.} For each $\eta$ value, the left image represents the test set errors and the right image shows the fairness measure (DEO). We investigate a popular baseline for imposing fairness constraints, the $\ell_2$ penalty, and study the effect of varying the penalty parameter $\eta$ in this figure. We observe that training with the $\ell_2$ penalty is quite unstable. 
For $\eta > 1$, the algorithm does not converge and raises numerical errors.} \end{figure*} \clearpage {\bf More Interpretability Results. } We present the activation maps obtained from the {\em FairALM} model, the unconstrained model, and a gender classification model. We show our results in Figure~\ref{more_acti}. The target class is \textit{attractiveness} and the protected attribute is \textit{gender}. We threshold the maps to show only the most significant colors. The maps from the gender classification task focus on gender-revealing attributes such as the presence of \textit{long hair}. The unconstrained model looks mostly at the entire image. {\em FairALM} looks only at a specific region of the face that is not gender-revealing. \begin{figure*}[!h] \centering {\setlength{\fboxsep}{4pt}\fbox{ \begin{minipage}{0.12\linewidth} \figuretitle{\colorbox{mylightgrey}{Gender}} \adjustbox{cfbox=mygrey 2pt 0pt}{\includegraphics[width=\linewidth]{supp_figs/celeba_cam/results_eccv_supp_women/G_camon1_p0_t1_s1_162874_Cam_On_Image.png}} \end{minipage} \begin{minipage}{0.12\linewidth} \figuretitle{\colorbox{mylightpink}{Unconstrained}} \adjustbox{cfbox=mypink 2pt 0pt}{\includegraphics[width=\linewidth]{supp_figs/celeba_cam/results_eccv_supp_women/N_camon1_p1_t1_s1_162874_Cam_On_Image.png}} \end{minipage} \begin{minipage}{0.12\linewidth} \figuretitle{\colorbox{mylightgreen}{FairALM}} \adjustbox{cfbox=mygreen 2pt 0pt}{\includegraphics[width=\linewidth]{supp_figs/celeba_cam/results_eccv_supp_women/F_camon1_p1_t1_s1_162874_Cam_On_Image.png}} \end{minipage}}} \quad {\setlength{\fboxsep}{4pt}\fbox{ \begin{minipage}{0.12\linewidth} \figuretitle{\colorbox{mylightgrey}{Gender}} \adjustbox{cfbox=mygrey 2pt 0pt}{\includegraphics[width=\linewidth]{supp_figs/celeba_cam/results_eccv_supp_men/G_camon1_p1_t1_s1_173141_Cam_On_Image.png}} \end{minipage} \begin{minipage}{0.12\linewidth} \figuretitle{\colorbox{mylightpink}{Unconstrained}} \adjustbox{cfbox=mypink 2pt 
0pt}{\includegraphics[width=\linewidth]{supp_figs/celeba_cam/results_eccv_supp_men/N_camon1_p1_t1_s1_173141_Cam_On_Image.png}} \end{minipage} \begin{minipage}{0.12\linewidth} \figuretitle{\colorbox{mylightgreen}{FairALM}} \adjustbox{cfbox=mygreen 2pt 0pt}{\includegraphics[width=\linewidth]{supp_figs/celeba_cam/results_eccv_supp_men/F_camon1_p1_t1_s1_173141_Cam_On_Image.png}} \end{minipage}}} \vskip 4pt {\setlength{\fboxsep}{4pt}\fbox{ \begin{minipage}{0.12\linewidth} \figuretitle{\colorbox{mylightgrey}{Gender}} \adjustbox{cfbox=mygrey 2pt 0pt}{\includegraphics[width=\linewidth]{supp_figs/celeba_cam/results_eccv_supp_women/G_camon1_p0_t1_s1_162878_Cam_On_Image.png}} \end{minipage} \begin{minipage}{0.12\linewidth} \figuretitle{\colorbox{mylightpink}{Unconstrained}} \adjustbox{cfbox=mypink 2pt 0pt}{\includegraphics[width=\linewidth]{supp_figs/celeba_cam/results_eccv_supp_women/N_camon1_p1_t1_s1_162878_Cam_On_Image.png}} \end{minipage} \begin{minipage}{0.12\linewidth} \figuretitle{\colorbox{mylightgreen}{FairALM}} \adjustbox{cfbox=mygreen 2pt 0pt}{\includegraphics[width=\linewidth]{supp_figs/celeba_cam/results_eccv_supp_women/F_camon1_p1_t1_s1_162878_Cam_On_Image.png}} \end{minipage}}} \quad {\setlength{\fboxsep}{4pt}\fbox{ \begin{minipage}{0.12\linewidth} \figuretitle{\colorbox{mylightgrey}{Gender}} \adjustbox{cfbox=mygrey 2pt 0pt}{\includegraphics[width=\linewidth]{supp_figs/celeba_cam/results_eccv_supp_men/G_camon1_p1_t1_s1_173453_Cam_On_Image.png}} \end{minipage} \begin{minipage}{0.12\linewidth} \figuretitle{\colorbox{mylightpink}{Unconstrained}} \adjustbox{cfbox=mypink 2pt 0pt}{\includegraphics[width=\linewidth]{supp_figs/celeba_cam/results_eccv_supp_men/N_camon1_p1_t1_s1_173453_Cam_On_Image.png}} \end{minipage} \begin{minipage}{0.12\linewidth} \figuretitle{\colorbox{mylightgreen}{FairALM}} \adjustbox{cfbox=mygreen 2pt 0pt}{\includegraphics[width=\linewidth]{supp_figs/celeba_cam/results_eccv_supp_men/F_camon1_p1_t1_s1_173453_Cam_On_Image.png}} \end{minipage}}} 
\vskip 4pt {\setlength{\fboxsep}{4pt}\fbox{ \begin{minipage}{0.12\linewidth} \figuretitle{\colorbox{mylightgrey}{Gender}} \adjustbox{cfbox=mygrey 2pt 0pt}{\includegraphics[width=\linewidth]{supp_figs/celeba_cam/results_eccv_supp_women/G_camon1_p0_t1_s1_162887_Cam_On_Image.png}} \end{minipage} \begin{minipage}{0.12\linewidth} \figuretitle{\colorbox{mylightpink}{Unconstrained}} \adjustbox{cfbox=mypink 2pt 0pt}{\includegraphics[width=\linewidth]{supp_figs/celeba_cam/results_eccv_supp_women/N_camon1_p1_t1_s1_162887_Cam_On_Image.png}} \end{minipage} \begin{minipage}{0.12\linewidth} \figuretitle{\colorbox{mylightgreen}{FairALM}} \adjustbox{cfbox=mygreen 2pt 0pt}{\includegraphics[width=\linewidth]{supp_figs/celeba_cam/results_eccv_supp_women/F_camon1_p1_t1_s1_162887_Cam_On_Image.png}} \end{minipage}}} \quad {\setlength{\fboxsep}{4pt}\fbox{ \begin{minipage}{0.12\linewidth} \figuretitle{\colorbox{mylightgrey}{Gender}} \adjustbox{cfbox=mygrey 2pt 0pt}{\includegraphics[width=\linewidth]{supp_figs/celeba_cam/results_eccv_supp_men/G_camon1_p1_t1_s1_173556_Cam_On_Image.png}} \end{minipage} \begin{minipage}{0.12\linewidth} \figuretitle{\colorbox{mylightpink}{Unconstrained}} \adjustbox{cfbox=mypink 2pt 0pt}{\includegraphics[width=\linewidth]{supp_figs/celeba_cam/results_eccv_supp_men/N_camon1_p1_t1_s1_173556_Cam_On_Image.png}} \end{minipage} \begin{minipage}{0.12\linewidth} \figuretitle{\colorbox{mylightgreen}{FairALM}} \adjustbox{cfbox=mygreen 2pt 0pt}{\includegraphics[width=\linewidth]{supp_figs/celeba_cam/results_eccv_supp_men/F_camon1_p1_t1_s1_173556_Cam_On_Image.png}} \end{minipage}}} \vskip 4pt {\setlength{\fboxsep}{4pt}\fbox{ \begin{minipage}{0.12\linewidth} \figuretitle{\colorbox{mylightgrey}{Gender}} \adjustbox{cfbox=mygrey 2pt 0pt}{\includegraphics[width=\linewidth]{supp_figs/celeba_cam/results_eccv_supp_women/G_camon1_p0_t1_s1_162906_Cam_On_Image.png}} \end{minipage} \begin{minipage}{0.12\linewidth} \figuretitle{\colorbox{mylightpink}{Unconstrained}} 
\adjustbox{cfbox=mypink 2pt 0pt}{\includegraphics[width=\linewidth]{supp_figs/celeba_cam/results_eccv_supp_women/N_camon1_p1_t1_s1_162906_Cam_On_Image.png}} \end{minipage} \begin{minipage}{0.12\linewidth} \figuretitle{\colorbox{mylightgreen}{FairALM}} \adjustbox{cfbox=mygreen 2pt 0pt}{\includegraphics[width=\linewidth]{supp_figs/celeba_cam/results_eccv_supp_women/F_camon1_p1_t1_s1_162906_Cam_On_Image.png}} \end{minipage}}} \quad {\setlength{\fboxsep}{4pt}\fbox{ \begin{minipage}{0.12\linewidth} \figuretitle{\colorbox{mylightgrey}{Gender}} \adjustbox{cfbox=mygrey 2pt 0pt}{\includegraphics[width=\linewidth]{supp_figs/celeba_cam/results_eccv_supp_men/G_camon1_p1_t1_s1_173638_Cam_On_Image.png}} \end{minipage} \begin{minipage}{0.12\linewidth} \figuretitle{\colorbox{mylightpink}{Unconstrained}} \adjustbox{cfbox=mypink 2pt 0pt}{\includegraphics[width=\linewidth]{supp_figs/celeba_cam/results_eccv_supp_men/N_camon1_p1_t1_s1_173638_Cam_On_Image.png}} \end{minipage} \begin{minipage}{0.12\linewidth} \figuretitle{\colorbox{mylightgreen}{FairALM}} \adjustbox{cfbox=mygreen 2pt 0pt}{\includegraphics[width=\linewidth]{supp_figs/celeba_cam/results_eccv_supp_men/F_camon1_p1_t1_s1_173638_Cam_On_Image.png}} \end{minipage}}} \vskip 4pt {\setlength{\fboxsep}{4pt}\fbox{ \begin{minipage}{0.12\linewidth} \figuretitle{\colorbox{mylightgrey}{Gender}} \adjustbox{cfbox=mygrey 2pt 0pt}{\includegraphics[width=\linewidth]{supp_figs/celeba_cam/results_eccv_supp_women/G_camon1_p0_t1_s1_162929_Cam_On_Image.png}} \end{minipage} \begin{minipage}{0.12\linewidth} \figuretitle{\colorbox{mylightpink}{Unconstrained}} \adjustbox{cfbox=mypink 2pt 0pt}{\includegraphics[width=\linewidth]{supp_figs/celeba_cam/results_eccv_supp_women/N_camon1_p1_t1_s1_162929_Cam_On_Image.png}} \end{minipage} \begin{minipage}{0.12\linewidth} \figuretitle{\colorbox{mylightgreen}{FairALM}} \adjustbox{cfbox=mygreen 2pt 
0pt}{\includegraphics[width=\linewidth]{supp_figs/celeba_cam/results_eccv_supp_women/F_camon1_p1_t1_s1_162929_Cam_On_Image.png}} \end{minipage}}} \quad {\setlength{\fboxsep}{4pt}\fbox{ \begin{minipage}{0.12\linewidth} \figuretitle{\colorbox{mylightgrey}{Gender}} \adjustbox{cfbox=mygrey 2pt 0pt}{\includegraphics[width=\linewidth]{supp_figs/celeba_cam/results_eccv_supp_men/G_camon1_p1_t1_s1_173789_Cam_On_Image.png}} \end{minipage} \begin{minipage}{0.12\linewidth} \figuretitle{\colorbox{mylightpink}{Unconstrained}} \adjustbox{cfbox=mypink 2pt 0pt}{\includegraphics[width=\linewidth]{supp_figs/celeba_cam/results_eccv_supp_men/N_camon1_p1_t1_s1_173789_Cam_On_Image.png}} \end{minipage} \begin{minipage}{0.12\linewidth} \figuretitle{\colorbox{mylightgreen}{FairALM}} \adjustbox{cfbox=mygreen 2pt 0pt}{\includegraphics[width=\linewidth]{supp_figs/celeba_cam/results_eccv_supp_men/F_camon1_p1_t1_s1_173789_Cam_On_Image.png}} \end{minipage}}} \caption{\label{more_acti} \footnotesize \textbf{Interpretability in CelebA.} We find that the unconstrained model picks up many gender-revealing attributes, whereas FairALM does not. The image labelled {\tt Gender} shows the activation map of a gender classification task. We observe overlap between the maps of the gender classification task and those of the unconstrained model. The activation maps are thresholded to highlight only the most significant regions used by a model.} \end{figure*} \subsection{Supplementary Results on ImSitu} {\bf Detailed Setup.} We use the standard ResNet-18 architecture for the base model. We initialize the weights of the conv layers from a ResNet-18 trained on ImageNet (ILSVRC). We train the model using the SGD optimizer and a batch size of $256$. For the first few epochs ($\approx20$), only the linear layer is trained, with a learning rate of $0.01/0.005$. Thereafter, the entire model is trained end to end with a lower learning rate of $0.001/0.0005$ until the accuracy plateaus.
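The two-phase schedule described above can be sketched as follows; the function name and the dictionary of per-parameter-group rates are our own illustrative choices, not taken from any released code:

```python
def lr_schedule(epoch, warmup_epochs=20, head_lr=0.01, full_lr=0.001):
    """Two-phase schedule sketched from the setup above: train only the
    linear head at a higher rate for ~20 epochs, then train end to end at
    a lower rate. The exact rates were tuned per task (0.01/0.005 and
    0.001/0.0005); we use one value of each here for illustration."""
    if epoch < warmup_epochs:
        return {"head": head_lr, "backbone": 0.0}  # backbone frozen
    return {"head": full_lr, "backbone": full_lr}

print(lr_schedule(5))   # {'head': 0.01, 'backbone': 0.0}
print(lr_schedule(30))  # {'head': 0.001, 'backbone': 0.001}
```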
{\bf Meaning of Target class $(+)$.} The target class $(+)$ is the class that a classifier tries to predict from an image. Recall the basic notation from $\S~2$ of the paper: $\mu_h^{s_i,t_j}:=\mu_h|(s=s_i,t=t_j)$ denotes the elementary conditional expectation of a function $\mu_h$ with respect to the two random variables $s,t$. When we say that we impose DEO for a target class $t_j$, we mean that we constrain the difference between the conditional expectations of the two groups of $s$ for the class $t_j$, that is, $d_h = \mu_h^{s_0,t_j} - \mu_h^{s_1,t_j}$. For example, in the \textit{Cooking $(+)$} vs. \textit{Driving $(-)$} problem, regarding \textit{Cooking $(+)$} as the target class means that $t_j=\textit{cooking}$, and hence the DEO constraint takes the form $d_h = \mu_h^{s_0,cooking} - \mu_h^{s_1,cooking}$. {\bf Supplementary Training Profiles.} We plot the test set errors and the DEO measure during the course of training for the verb pair classifications reported in the paper. We compare against the baselines discussed in Table~$1$ of the paper. The plots in Fig.~\ref{fig:fairalm_supp_plots} below supplement Fig.~$5$ in the paper.
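To make the constraint concrete, the following minimal sketch estimates $d_h$ for one target class from sample predictions; the helper name and the toy data are ours, not from the paper:

```python
import numpy as np

def deo_for_target(y_pred, y_true, s, target=1):
    # Empirical d_h = E[h(x) | s=0, y=target] - E[h(x) | s=1, y=target],
    # the difference of conditional expectations defined above.
    m0 = (s == 0) & (y_true == target)
    m1 = (s == 1) & (y_true == target)
    return y_pred[m0].mean() - y_pred[m1].mean()

# toy data: predictions are balanced across the two groups, so d_h = 0
y_true = np.array([1, 1, 1, 1, 0, 0])
s      = np.array([0, 0, 1, 1, 0, 1])
y_pred = np.array([1, 0, 1, 0, 0, 0])
print(deo_for_target(y_pred, y_true, s))  # 0.0
```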
\begin{figure}[!h] \centering \begin{minipage}{0.8\linewidth} \centering \frame{\includegraphics[width=0.24\linewidth]{supp_figs/imsitu_eccv/plots_eccv_supp/cooking_driving/acc_naive.png} \includegraphics[width=0.24\linewidth]{supp_figs/imsitu_eccv/plots_eccv_supp/cooking_driving/deo_naive.png}} \hskip 4pt \frame{\includegraphics[width=0.24\linewidth]{supp_figs/imsitu_eccv/plots_eccv_supp/cooking_driving/acc_lag.png} \includegraphics[width=0.24\linewidth]{supp_figs/imsitu_eccv/plots_eccv_supp/cooking_driving/deo_lag.png}} \vskip -1pt \par Cooking $\tiny(+)$ Driving $\tiny(-)$ \end{minipage} \vskip 6pt \begin{minipage}{0.8\linewidth} \centering \frame{\includegraphics[width=0.24\linewidth]{supp_figs/imsitu_eccv/plots_eccv_supp/shaving_moisturizing/acc_naive.png} \includegraphics[width=0.24\linewidth]{supp_figs/imsitu_eccv/plots_eccv_supp/shaving_moisturizing/deo_naive.png}} \hskip 4pt \frame{\includegraphics[width=0.24\linewidth]{supp_figs/imsitu_eccv/plots_eccv_supp/shaving_moisturizing/acc_lag.png} \includegraphics[width=0.24\linewidth]{supp_figs/imsitu_eccv/plots_eccv_supp/shaving_moisturizing/deo_lag.png}} \vskip -1pt \par Shaving $\tiny(+)$ Moisturizing $\tiny(-)$ \end{minipage} \vskip 6pt \begin{minipage}{0.8\linewidth} \centering \frame{\includegraphics[width=0.24\linewidth]{supp_figs/imsitu_eccv/plots_eccv_supp/washing_saluting/acc_naive.png} \includegraphics[width=0.24\linewidth]{supp_figs/imsitu_eccv/plots_eccv_supp/washing_saluting/deo_naive.png}} \hskip 4pt \frame{\includegraphics[width=0.24\linewidth]{supp_figs/imsitu_eccv/plots_eccv_supp/washing_saluting/acc_lag.png} \includegraphics[width=0.24\linewidth]{supp_figs/imsitu_eccv/plots_eccv_supp/washing_saluting/deo_lag.png}} \vskip -1pt \par Washing $\tiny(+)$ Saluting $\tiny(-)$ \end{minipage} \vskip 6pt \begin{minipage}{0.8\linewidth} \centering \frame{\includegraphics[width=0.24\linewidth]{supp_figs/imsitu_eccv/plots_eccv_supp/assembling_hanging/acc_naive.png} 
\includegraphics[width=0.24\linewidth]{supp_figs/imsitu_eccv/plots_eccv_supp/assembling_hanging/deo_naive.png}} \hskip 4pt \frame{\includegraphics[width=0.24\linewidth]{supp_figs/imsitu_eccv/plots_eccv_supp/assembling_hanging/acc_lag.png} \includegraphics[width=0.24\linewidth]{supp_figs/imsitu_eccv/plots_eccv_supp/assembling_hanging/deo_lag.png}} \vskip -1pt \par Assembling $\tiny(+)$ Hanging $\tiny(-)$ \end{minipage} \caption{\label{fig:fairalm_supp_plots}\footnotesize \textbf{Supplementary Training Profiles.} FairALM consistently achieves the lowest DEO across the different verb pair classifications.} \end{figure} {\bf Additional qualitative results.} We show the activation maps in Fig.~\ref{fig:imsitu-supp-plots} to illustrate that the features used by the FairALM model are more aligned with the action/verb present in the image and are not gender leaking. The verb pairs were chosen randomly from the list provided in \cite{zhao2017men}. In all cases, {\tt Gender} is the protected attribute. The activation maps are thresholded to highlight only the most significant regions used by a model to make a prediction.
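The maps shown here are class activation maps (CAM): a weighted sum of the final conv feature maps, followed by the thresholding just described. A minimal numpy sketch; the array shapes and the threshold value are illustrative assumptions, not the settings used for the figures:

```python
import numpy as np

def cam(features, fc_weights, cls, thresh=0.6):
    # CAM: weight the final conv feature maps (K, H, W) by the linear
    # classifier weights (num_classes, K) for class `cls`, normalize to
    # [0, 1], and zero out everything below the display threshold.
    m = np.tensordot(fc_weights[cls], features, axes=1)   # (H, W)
    m = (m - m.min()) / (m.max() - m.min() + 1e-8)
    return np.where(m >= thresh, m, 0.0)

rng = np.random.default_rng(0)
feats = rng.random((8, 7, 7))   # toy final-layer feature maps
w = rng.random((2, 8))          # toy linear classifier weights
heat = cam(feats, w, cls=1)
print(heat.shape)  # (7, 7)
```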
\begin{figure*}[!h] \centering \begin{minipage}{0.9\linewidth} \includegraphics[width=0.5\linewidth]{supp_figs/imsitu_eccv/cam_supp_filtered/microwaving_pumping_1.png} \hskip 4pt \includegraphics[width=0.5\linewidth]{supp_figs/imsitu_eccv/cam_supp_filtered/shooting_washing_1.png} \end{minipage}% \vskip 4pt \begin{minipage}{0.9\linewidth} \includegraphics[width=0.5\linewidth]{supp_figs/imsitu_eccv/cam_supp_filtered/driving_cooking_1.png} \hskip 4pt \includegraphics[width=0.5\linewidth]{supp_figs/imsitu_eccv/cam_supp_filtered/shoveling_shopping_1.png} \end{minipage} \vskip 4pt \begin{minipage}{0.9\linewidth} \includegraphics[width=0.5\linewidth]{supp_figs/imsitu_eccv/cam_supp_filtered/aiming_combing_1.png} \hskip 4pt \includegraphics[width=0.5\linewidth]{supp_figs/imsitu_eccv/cam_supp_filtered/combing_coaching_1.png} \end{minipage} \vskip 4pt \begin{minipage}{0.9\linewidth} \includegraphics[width=0.5\linewidth]{supp_figs/imsitu_eccv/cam_supp_filtered/shooting_washing_2.png} \hskip 4pt \includegraphics[width=0.5\linewidth]{supp_figs/imsitu_eccv/cam_supp_filtered/shaving_moisturizing_1.png} \end{minipage} \vskip 4pt \begin{minipage}{0.9\linewidth} \includegraphics[width=0.5\linewidth]{supp_figs/imsitu_eccv/cam_supp_filtered/lim_washing_saluting_1.png} \hskip 4pt \includegraphics[width=0.5\linewidth]{supp_figs/imsitu_eccv/cam_supp_filtered/lim_shaving_moisturizing_1.png} \end{minipage} \caption{\label{fig:imsitu-supp-plots} \footnotesize {\textbf{Additional qualitative Results in ImSitu dataset.}} Models predict the target class $(+)$. FairALM consistently avoids gender revealing features and uses features that are more relevant to the target class. Due to the small dataset sizes, a \textit{limitation} of this experiment is shown in the last row where both FairALM and Unconstrained model look at incorrect regions. 
The number of such cases in FairALM is far smaller than in the unconstrained model.} \end{figure*} \subsection{Proofs for theoretical claims in the paper} Prior to proving the convergence of the primal and dual variables of our algorithm with respect to the augmented Lagrangian $L_T(q, \lambda)$, we prove a regret bound on the function $f_t(\lambda)$, which is defined in the following lemma. As $f_t(\lambda)$ is a strongly concave function (which we shall see shortly), we obtain a bound on the negative regret. \begin{lemma} \label{lemma:regretbound} Let $r_t$ denote the reward at each round of the game. The reward function $f_t(\lambda)$ is defined as $f_t(\lambda) = \lambda r_t - \frac{1}{2\eta} (\lambda - \lambda_t)^2$. We choose $\lambda$ in round $T+1$ to maximize the cumulative reward, i.e., $\lambda_{T+1} = \mathrm{argmax}_{\lambda} \sum_{t=1}^T f_t(\lambda)$. Define $L = \max_t |r_t|$. We obtain the following bound on the cumulative reward, for any $\lambda$, \begin{align} \sum_{t=1}^T \bigg( \lambda r_t - \frac{1}{2\eta} (\lambda - \lambda_t)^2 \bigg) \le \sum_{t=1}^T \lambda_t r_t + \eta L^2 \mathcal{O}(\log T) \end{align} \end{lemma} \begin{proof} As we are maximizing the cumulative reward function, in the $(t+1)^{th}$ iteration $\lambda_{t+1}$ is updated as $\lambda_{t+1} = \mathrm{argmax}_{\lambda} \sum_{i=1}^t f_i(\lambda)$. This learning rule is also called the Follow-The-Leader (FTL) principle, which is discussed in Section $2.2$ of \cite{shalev2012online}. Emulating the proof of Lemma $2.1$ in \cite{shalev2012online}, a bound on the negative regret of FTL, for any $\lambda \in \mathbb{R}$, can be derived due to the concavity of $f_t(\lambda)$, \begin{align} \label{suppeq_lemma21} \sum_{t=1}^T f_t(\lambda) - \sum_{t=1}^{T} f_t(\lambda_t) \le \sum_{t=1}^{T} f_t(\lambda_{t+1}) - \sum_{t=1}^{T} f_t(\lambda_t) \end{align} Our objective now is to bound the RHS of \eqref{suppeq_lemma21}.
Solving $\mathrm{argmax}_{\lambda} \sum_{i=1}^t f_i(\lambda)$ for $\lambda$ shows how $\lambda_t$ and $\lambda_{t+1}$ are related, \begin{align} \lambda_{t+1} = \frac{\eta}{t} \sum_{i=1}^{t}r_i + \frac{1}{t} \sum_{i=1}^{t}\lambda_i \label{step:lambda_relation} \hskip 8pt \implies \lambda_{t+1} - \lambda_{t} = \frac{\eta}{t}r_t \end{align} Using \eqref{step:lambda_relation}, we obtain a bound on $f_t(\lambda_{t+1}) - f_t(\lambda_t)$, \begin{align*} f_t(\lambda_{t+1}) - f_t(\lambda_t) &\le \frac{\eta}{t}r_t^2 \end{align*} With $L = \max_t |r_t|$ and using the fact that $\sum_{i=1}^{T} \frac{1}{i} \le (\log T + 1)$, \begin{align} \label{suppeq:cum1} \sum_{t=1}^{T} \Big(f_t(\lambda_{t+1}) - f_t(\lambda_{t}) \Big) \le \eta L^2 (\log T + 1) \end{align} Denoting $\xi_T = \eta L^2 (\log T + 1)$, we bound \eqref{suppeq_lemma21} with \eqref{suppeq:cum1}, \begin{empheq}[box={\Garybox[Cumulative Reward Bound]}]{align} \label{suppeq:paper_lemma} \forall \lambda \in \mathbb{R} \quad \sum_{t=1}^{T} \Big( \lambda r_t - \frac{1}{2\eta}(\lambda-\lambda_t)^2 \Big) \le \Big(\sum_{t=1}^{T} \lambda_t r_t\Big) + \xi_T \end{empheq} \end{proof} Next, using the \textit{Cumulative Reward Bound}~\eqref{suppeq:paper_lemma}, we prove the theorem stated in the paper. The theorem gives the number of iterations required by Alg.~$1$ (in the paper) to reach a $\nu$-approximate saddle point. Our bounds for $\eta=\frac{1}{T}$ and $\lambda \in \mathbb{R}$ are strictly better than those of \cite{agarwal2018reductions}. We re-state the theorem here, \begin{theorem} Recall that $d_h$ represents the difference of conditional means. Assume that $|| d_h||_{\infty} \le L$ and consider $T$ rounds of Alg~$1$ (in the paper). Let $\bar q := \frac{1}{T}\sum_{t=1}^T q_t$ and $\bar \lambda := \frac{1}{T}\sum_{t=1}^T \lambda_t$ be the average plays of the $q$-player and the $\lambda$-player respectively.
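The FTL update and the \textit{Cumulative Reward Bound} can be verified numerically. A small sketch with synthetic rewards; the horizon, step size, and competitor values of $\lambda$ are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)
T, eta = 200, 0.5
r = rng.uniform(-1.0, 1.0, size=T)   # synthetic rewards, |r_t| <= 1

# FTL plays via the recurrence lambda_{t+1} = lambda_t + (eta/t) r_t,
# which is equivalent to the closed-form argmax above; lambda_1 = 0.
lam = np.zeros(T)
for t in range(1, T):
    lam[t] = lam[t - 1] + (eta / t) * r[t - 1]

L = np.max(np.abs(r))
xi_T = eta * L**2 * (np.log(T) + 1)  # the regret term xi_T from the lemma

# Cumulative Reward Bound, checked at a few competitor values of lambda
for lam_c in (-2.0, 0.0, 2.0):
    lhs = np.sum(lam_c * r - (lam - lam_c) ** 2 / (2 * eta))
    rhs = np.sum(lam * r) + xi_T
    assert lhs <= rhs
print("Cumulative Reward Bound holds")
```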
Then, we have $L_T(\bar q, \bar \lambda) \le L_T(q, \bar \lambda) + \nu \text{ and } L_T(\bar q, \bar \lambda) \ge L_T(\bar q, \lambda) - \nu$, under the following conditions, \begin{compactitem} \item If $\eta = \mathcal{O}(\sqrt{\frac{B^2T}{L^2 (\log T+ 1)}})$, $\nu = \mathcal{O}(\sqrt{\frac{B^2 L^2 (\log T + 1)}{T}})$; $\forall |\lambda| \le B$, $\forall q \in \Delta$ \item If $\eta = \frac{1}{T}$, $\nu = \mathcal{O}(\frac{L^2(\log T + 1)^2}{T})$; $\forall \lambda \in \mathbb{R}$, $\forall q \in \Delta$ \end{compactitem} \end{theorem} \begin{proof} Recall the definition of $L_T(q, \lambda)$ from the paper, \begin{align} L_T(q, \lambda) = \big(\sum_i q_i e_{h_i}\big) + \lambda \big(\sum_i q_i d_{h_i} \big) - \frac{1}{2\eta}\big(\lambda-\lambda_T\big)^2 \end{align} For the sake of this proof, let us define $\zeta_T$ in the following way, \begin{align} \label{suppeq:zeta_def} \zeta_T(\lambda) = \frac{1}{2\eta}\sum_{t=1}^{T}\Big( (\lambda - \lambda_t)^2 - (\lambda - \lambda_T)^2 + (\lambda_t - \lambda_T)^2 \Big) \end{align} Recollect from \eqref{suppeq:paper_lemma} that $\xi_T = \eta L^2 (\log T + 1)$. 
We \textbf{outline} the proof as follows, \begin{enumerate} \item \label{suppitem:ub} First, we compute an upper bound on $L_T(\bar q, \bar \lambda)$, \begin{empheq}[box={\Garybox[Average Play Upper Bound]}]{align} \label{suppeq:avgplay_ub} L_T(\bar q, \bar \lambda) &\le L_T(q, \bar \lambda ) + \frac{\zeta_T(\bar \lambda)}{T} + \frac{\xi_T}{T} \quad \forall q \in \Delta\\ \text{Also, } L_T(\bar q, \lambda) &\le L_T(q, \bar \lambda ) + \frac{\zeta_T(\lambda)}{T} + \frac{\xi_T}{T} \quad \forall \lambda \in \mathbb{R} ,\forall q \in \Delta \end{empheq} \item \label{suppitem:lb} Next, we determine a lower bound on $L_T(\bar q, \bar \lambda)$, \begin{empheq}[box={\Garybox[Average Play Lower Bound]}]{align} \label{suppeq:avgplay_lb} L_T(\bar q, \bar \lambda) \ge L_T(\bar q, \lambda ) - \frac{\zeta_T(\lambda)}{T} - \frac{\xi_T}{T} \quad \forall \lambda \in \mathbb{R} \end{empheq} \item \label{suppitem:lamb} We bound $\frac{\zeta_T(\lambda)}{T} + \frac{\xi_T}{T}$ for the case $|\lambda| \le B$ and show that a $\nu$-approximate saddle point is attained. \item \label{suppitem:lamr} We bound $\frac{\zeta_T(\lambda)}{T} + \frac{\xi_T}{T}$ for the case $\lambda \in \mathbb{R}$ and, again, show that a $\nu$-approximate saddle point is attained. \end{enumerate} We write the proofs of the above four parts one by one. Steps~\ref{suppitem:ub} and \ref{suppitem:lb} are intermediary results used to prove our main results in Steps~\ref{suppitem:lamb} and \ref{suppitem:lamr}. The reader can move directly to Steps~\ref{suppitem:lamb} and \ref{suppitem:lamr} for the main proof.\\\\ {\bf 1.
Proof for the result on \textit{Average Play Upper Bound}} \begin{align} L_T(q, \bar \lambda) &= {\textstyle \sum}_i q_i e_{h_i} + \Big(\frac{{\textstyle \sum}_t \lambda_t}{T}\Big)\Big({\textstyle \sum}_i q_i d_{h_i} \Big) - \frac{1}{2\eta}\Big(\frac{{\textstyle \sum}_t \lambda_t}{T} - \lambda_T\Big)^2\\ \intertext{Exploiting the convexity of $\frac{1}{2\eta}\Big(\frac{{\textstyle \sum}_t \lambda_t}{T} - \lambda_T\Big)^2$ via Jensen's inequality,} &\ge \frac{1}{T} \sum_t \Big({\textstyle \sum}_i q_i e_{h_i} + \lambda_t {\textstyle \sum}_i q_i d_{h_i} - \frac{1}{2\eta}(\lambda_t - \lambda_T)^2 \Big)\\ \intertext{As $h_t = \mathrm{argmin}_{q} L_T(q, \lambda_t)$, we have $L_T(q, \lambda_t) \ge L_T(h_t, \lambda_t)$, hence,} &\ge \frac{1}{T} \sum_t \Big( e_{h_t} + \lambda_t d_{h_t} - \frac{1}{2\eta} (\lambda_t - \lambda_T)^2 \Big)\\ \intertext{Using the \textit{Cumulative Reward Bound} \eqref{suppeq:paper_lemma},} &\ge \frac{{\textstyle \sum}_t e_{h_t}}{T} + \frac{\lambda{\textstyle \sum}_t d_{h_t}}{T} - \frac{1}{T} \sum_t \Big( \frac{(\lambda- \lambda_t)^2}{2\eta} + \frac{(\lambda_t- \lambda_T)^2}{2\eta} \Big)- \frac{\xi_T}{T}\\ \intertext{Add and subtract $\frac{1}{T}\sum_{t=1}^T\frac{1}{2\eta}(\lambda - \lambda_T)^2$, use $\zeta_T$ from \eqref{suppeq:zeta_def} and regroup the terms,} &= ({\textstyle \sum}_i \bar q_i e_{h_i}) + (\lambda{\textstyle \sum}_i \bar q_i d_{h_i}) - \frac{1}{2\eta}(\lambda-\lambda_T)^2 - \frac{\zeta_T(\lambda)}{T} - \frac{\xi_T}{T}\\ \label{suppeq:final_backward} &= L_T(\bar q, \lambda) - \frac{\zeta_T(\lambda)}{T} - \frac{\xi_T}{T} \end{align}\\ {\bf 2. Proof for the result on \textit{Average Play Lower Bound}} The proof is similar to Step~\ref{suppitem:ub}, so we skip the details. It involves finding a lower bound for $L_T(\bar q, \lambda)$ using the \textit{Cumulative Reward Bound} \eqref{suppeq:paper_lemma}.
With simple algebraic manipulations and exploiting the convexity of $L_T(\bar q, \lambda)$ via Jensen's inequality, we obtain the stated bound.\\ {\bf 3. Proof for the case $\bm{ |\lambda| \le B}$}\\ For the case $|\lambda| \le B$, we have $\zeta_T(\lambda) \le \frac{B^2 T}{\eta}$, which gives, \begin{align} \label{eqsup:caselamb} \frac{\zeta_T(\lambda)}{T} + \frac{\xi_T}{T} &\le \frac{B^2}{\eta} + \frac{\eta L^2 (\log T + 1)}{T} \end{align} Minimizing the RHS of \eqref{eqsup:caselamb} over $\eta$ gives us a $\nu$-approximate saddle point, \begin{empheq}[box={\Garybox[$\nu$-approximate saddle point for $|\lambda| \le B$ ]}]{align} L_T(\bar q, \bar \lambda) \le L_T(q, \bar \lambda) + \nu \quad &\text{ and }\quad L_T(\bar q, \bar \lambda) \ge L_T(\bar q, \lambda) - \nu\\ \text{ where } \nu = 2\sqrt{\frac{B^2 L^2 (\log T + 1)}{T}} \quad &\text{ and } \eta = \sqrt{\frac{B^2T}{L^2 (\log T+ 1)}} \end{empheq}\\\\ {\bf 4. Proof for the case $\bm{ \lambda \in \mathbb{R}}$}\\ We begin by bounding $\frac{\zeta_T(\lambda)}{T} + \frac{\xi_T}{T}$. Let $\lambda_* = \mathrm{argmax}_\lambda L_T(\bar q, \lambda)$. We have a closed form for $\lambda_*$ given by $\lambda_* = \lambda_T + \eta {\textstyle \sum}_i \bar q_i d_{h_i}$. Substituting $\lambda_*$ in $\zeta_T$ gives, \begin{align} \frac{\zeta_T(\lambda_*)}{T} + \frac{\xi_T}{T} \label{eqsup:lasmin} &= \frac{1}{2\eta}\frac{1}{T}\sum_t \Big(2(\lambda_t-\lambda_T)^2 + 2\eta (\lambda_T-\lambda_t)({\textstyle \sum}_i \bar q_i d_{h_i}) \Big) + \frac{\xi_T}{T}\\ \intertext{Recollect that $\lambda_{t+1}-\lambda_t = \frac{\eta}{t} d_{h_t}$ (from~\eqref{step:lambda_relation}). Using a telescoping sum on $\lambda_t$, we get $(\lambda_T-\lambda_t) \le \eta L (\log T + 1)$ and $(\lambda_T-\lambda_t)^2 \le \eta^2 L^2 (\log T + 1)^2$.
We substitute these in the previous equation \eqref{eqsup:lasmin},} \frac{\zeta_T(\lambda_*)}{T} + \frac{\xi_T}{T} &\le \eta L^2 (\log T + 1)^2 + \eta L^2 (\log T + 1) + \frac{\eta L^2 (\log T + 1)}{T}\\ \intertext{Setting $\eta=\frac{1}{T}$, we get} \label{suppeq:zeta_bound} \frac{\zeta_T(\lambda_*)}{T} + \frac{\xi_T}{T} &\le \mathcal{O}(\frac{L^2(\log T + 1)^2}{T}) := \nu \end{align} Using~\eqref{suppeq:zeta_bound}, we prove the convergence of $\lambda$ as follows, \begin{align} L_T(\bar q, \lambda) &\le L_T(\bar q, \lambda_*) \quad \text{\Big(as $\lambda_*$ is the maximizer of $L_T(\bar q, \lambda)$\Big)} \\ &\le L_T(\bar q, \bar \lambda) + \frac{\zeta_T(\lambda_*)}{T} + \frac{\xi_T}{T} \quad \text{\Big(\textit{Average Play Lower Bound} \Big)}\\ &\le L_T(\bar q, \bar \lambda) + \nu \quad \text{\Big(from~\eqref{suppeq:zeta_bound}\Big)} \end{align} We prove the convergence of $q$ as follows. For any $\lambda \in \mathbb{R}$, \begin{align} L_T(q, \bar \lambda) &\ge L_T(\bar q, \lambda_*) - \frac{\zeta_T(\lambda_*)}{T} - \frac{\xi_T}{T} \hskip 7.9pt \text{\Big(\textit{Average Play Upper Bound} \eqref{suppeq:final_backward}\Big)}\\ &\ge L_T(\bar q, \lambda_*) - \nu \quad \text{\Big(from \eqref{suppeq:zeta_bound}\Big)}\\ &\ge L_T(\bar q, \bar \lambda) - \nu \quad \text{\Big(as $\lambda_*$ is the maximizer of $L_T(\bar q, \lambda)$\Big)} \end{align} Therefore, \begin{empheq}[box={\Garybox[$\nu$-approximate saddle point for $\lambda \in \mathbb{R}$]}]{align} L_T(\bar q, \bar \lambda) \le L_T(q, \bar \lambda) + \nu \quad &\text{ and }\quad L_T(\bar q, \bar \lambda) \ge L_T(\bar q, \lambda) - \nu\\ \text{ where } \nu = \mathcal{O}\bigg(\frac{L^2(\log T + 1)^2}{T}\bigg) \quad &\text{ and } \quad \eta = \frac{1}{T} \end{empheq} \end{proof} \section{Conclusion} We introduced FairALM, an augmented Lagrangian framework to impose constraints on the fairness measures studied in the literature.
On the theoretical side, we provide strictly better bounds -- $\mathcal{O}\bigg(\frac{\log^2 T}{T}\bigg)$ versus $\mathcal{O}\bigg(\frac{1}{\sqrt{T}}\bigg)$ -- for reaching a saddle point. On the application side, we provide extensive evidence (qualitative and quantitative) on image datasets commonly used in vision to show the potential benefits of our proposal. Finally, we use FairALM to mitigate site-specific differences when performing analysis of pooled medical image datasets. In applying deep learning to scientific/biomedical problems, this is an important issue since sample sizes at individual sites/institutions are often smaller. The overall procedure is simple, which we believe will lead to broader adoption of, and follow-up work on, this socially relevant topic. \subsection{CelebA dataset} {\bf Data and Setup.} CelebA \cite{liu2018large} consists of $200$K celebrity face images from the internet annotated by a group of paid adult participants \cite{bohlen2017server}. There are $40$ labels available in the dataset, each of which is binary-valued. \begin{figure}[!b] \centering \frame{ \includegraphics[width=0.48\columnwidth]{final_figs/celeba/l2_penalty_FairALM_acc_eta0p001_eta40.png} \hfill \includegraphics[width=0.48\columnwidth]{final_figs/celeba/l2_penalty_FairALM_deo_eta0p001_eta40.png} } \caption{\label{fig:l2_penalty} \footnotesize \textbf{Comparison to $\ell_2$ penalty.} FairALM has a stable training profile in comparison to a naive $\ell_2$ penalty. The target label is \textit{attractiveness} and the protected attribute is \textit{gender}. } \end{figure} {\bf Quantitative results.} We begin our analysis by predicting each of the $40$ labels with a $3$-layer ReLU network. The protected variable $s$ is a binary attribute such as \textit{Male} or \textit{Young}, representing gender and age respectively.
We train with SGD for $5$ epochs and select the labels predicted with at least $70\%$ precision and with a DEO of at least $4\%$ across the protected variables. The biased set of labels thus estimated is shown in Table~\ref{tab:celeba_ablation}. These labels are consistent with other reported results \cite{ryu2017inclusivefacenet}. It is important to bear in mind that the bias in a label should not be attributed to its relatedness to a specific protected attribute alone; the bias could also be due to skew in the label distribution. When training a $3$-layer ReLU net with FairALM, the precision of the model remained almost the same ($\pm 5\%$) while the DEO measure reduced significantly, as indicated in Table~\ref{tab:celeba_ablation}. Next, choosing the most unfair label in Table~\ref{tab:celeba_ablation} (i.e., \textit{attractive}), we train a ResNet18 for a longer duration of about $100$ epochs and contrast the performance with a simple $\ell_2$-penalty baseline. The training profile is observed to be more stable for FairALM, as indicated in Fig.~\ref{fig:l2_penalty}. This finding is consistent with seminal works such as \cite{bertsekas2014constrained,nocedal2006numerical} that discuss the ill-conditioned landscape of non-convex penalties. Comparisons to more recent works such as \cite{sattigeri2018fairness,quadrianto2019discovering} are provided in Table~\ref{tab:celeba_sota}. Here, we present a new state-of-the-art result for the DEO measure with the label \textit{attractive} and protected attribute \textit{gender}.
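The screening rule just described (precision of at least $70\%$ and DEO of at least $4\%$) can be sketched as follows; the per-label statistics below are made-up placeholders, not values from Table~\ref{tab:celeba_ablation}:

```python
def screen_biased_labels(precision, deo, prec_min=0.70, deo_min=0.04):
    # Keep labels predicted with >= 70% precision AND >= 4% DEO across
    # the protected variable, mirroring the selection rule above.
    return [k for k in precision
            if precision[k] >= prec_min and deo[k] >= deo_min]

# hypothetical per-label statistics (illustrative only)
precision = {"attractive": 0.78, "smiling": 0.91, "blurry": 0.55}
deo       = {"attractive": 0.26, "smiling": 0.01, "blurry": 0.12}
print(screen_biased_labels(precision, deo))  # ['attractive']
```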
\begin{figure}[!t] \vskip -0.2in {\setlength{\fboxsep}{4pt}\fbox{ \begin{minipage}{0.12\linewidth} \figuretitle{\colorbox{mylightgrey}{Gender}} \adjustbox{cfbox=mygrey 2pt 0pt}{\includegraphics[width=\linewidth]{final_figs/celeba/all_celeb_images/G_camon1_p0_t1_s1_162787_Cam_On_Image.png}} \end{minipage} \begin{minipage}{0.12\linewidth} \figuretitle{\colorbox{mylightpink}{Unconstrained}} \adjustbox{cfbox=mypink 2pt 0pt}{\includegraphics[width=\linewidth]{final_figs/celeba/all_celeb_images/N_camon1_p1_t1_s1_162787_Cam_On_Image.png}} \end{minipage} \begin{minipage}{0.12\linewidth} \figuretitle{\colorbox{mylightgreen}{FairALM}} \adjustbox{cfbox=mygreen 2pt 0pt}{\includegraphics[width=\linewidth]{final_figs/celeba/all_celeb_images/F_camon1_p1_t1_s1_162787_Cam_On_Image.png}} \end{minipage}}} \hfill {\setlength{\fboxsep}{4pt}\fbox{ \begin{minipage}{0.12\linewidth} \figuretitle{\colorbox{mylightgrey}{Gender}} \adjustbox{cfbox=mygrey 2pt 0pt}{\includegraphics[width=\linewidth]{final_figs/celeba/all_celeb_images/G_camon1_p0_t1_s1_162821_Cam_On_Image.png}} \end{minipage} \begin{minipage}{0.12\linewidth} \figuretitle{\colorbox{mylightpink}{Unconstrained}} \adjustbox{cfbox=mypink 2pt 0pt}{\includegraphics[width=\linewidth]{final_figs/celeba/all_celeb_images/N_camon1_p1_t1_s1_162821_Cam_On_Image.png}} \end{minipage} \begin{minipage}{0.12\linewidth} \figuretitle{\colorbox{mylightgreen}{FairALM}} \adjustbox{cfbox=mygreen 2pt 0pt}{\includegraphics[width=\linewidth]{final_figs/celeba/all_celeb_images/F_camon1_p1_t1_s1_162821_Cam_On_Image.png}} \end{minipage}}} \vskip 4pt {\setlength{\fboxsep}{4pt}\fbox{ \begin{minipage}{0.12\linewidth} \figuretitle{\colorbox{mylightgrey}{Gender}} \adjustbox{cfbox=mygrey 2pt 0pt}{\includegraphics[width=\linewidth]{final_figs/celeba/all_celeb_images/G_camon1_p1_t1_s1_162803_Cam_On_Image.png}} \end{minipage} \begin{minipage}{0.12\linewidth} \figuretitle{\colorbox{mylightpink}{Unconstrained}} \adjustbox{cfbox=mypink 2pt 
0pt}{\includegraphics[width=\linewidth]{final_figs/celeba/all_celeb_images/N_camon1_p1_t1_s1_162803_Cam_On_Image.png}} \end{minipage} \begin{minipage}{0.12\linewidth} \figuretitle{\colorbox{mylightgreen}{FairALM}} \adjustbox{cfbox=mygreen 2pt 0pt}{\includegraphics[width=\linewidth]{final_figs/celeba/all_celeb_images/F_camon1_p1_t1_s1_162803_Cam_On_Image.png}} \end{minipage}}} \hfill {\setlength{\fboxsep}{4pt}\fbox{ \begin{minipage}{0.12\linewidth} \figuretitle{\colorbox{mylightgrey}{Gender}} \adjustbox{cfbox=mygrey 2pt 0pt}{\includegraphics[width=\linewidth]{final_figs/celeba/all_celeb_images/G_camon1_p1_t1_s1_162834_Cam_On_Image.png}} \end{minipage} \begin{minipage}{0.12\linewidth} \figuretitle{\colorbox{mylightpink}{Unconstrained}} \adjustbox{cfbox=mypink 2pt 0pt}{\includegraphics[width=\linewidth]{final_figs/celeba/all_celeb_images/N_camon1_p1_t1_s1_162834_Cam_On_Image.png}} \end{minipage} \begin{minipage}{0.12\linewidth} \figuretitle{\colorbox{mylightgreen}{FairALM}} \adjustbox{cfbox=mygreen 2pt 0pt}{\includegraphics[width=\linewidth]{final_figs/celeba/all_celeb_images/F_camon1_p1_t1_s1_162834_Cam_On_Image.png}} \end{minipage}}} \caption{\footnotesize \textbf{Interpretable Models for CelebA.} The Unconstrained/FairALM models predict the label \textit{attractiveness} while controlling for \textit{gender}. The heatmaps of the unconstrained model overlap with those of the gender classification task, indicating a gender leak. FairALM consistently picks non-gender-revealing features of the face.
Interestingly, these regions lie on the left side of the face, in accord with psychological studies suggesting that the face's left side is more attractive \cite{Blackburn2012}.} \vskip -0.2in \end{figure} \begin{table}[!b] \centering \setlength{\tabcolsep}{5pt} \resizebox{0.8\columnwidth}{!}{ \begin{tabular}[b]{cccc} \textbf{} & \begin{tabular}[c]{@{}c@{}}Fairness \\ GAN~\cite{sattigeri2018fairness}\end{tabular} & \begin{tabular}[c]{@{}c@{}}Quadrianto \\ et al.~\cite{quadrianto2019discovering}\end{tabular} & \textbf{FairALM} \\ \hline\hline ERR & 26.6 & 24.1 & 24.5 \\ \hline \rowcolor[rgb]{0.89, 0.89, 1}DEO & 22.5 & 12.4 & \textbf{10.4} \\\hline FNR Female & 21.2 & 12.8 & \textbf{6.6} \\ \hline FNR Male & 43.7 & 25.2 & \textbf{17.0} \\ \hline\hline \end{tabular}} \vspace{12pt} \caption{\label{tab:celeba_sota}\footnotesize \textbf{Quantitative Results on CelebA.} FairALM attains a lower DEO measure while remaining competitive on test set error (ERR). The target label is \textit{attractiveness} and the protected attribute is \textit{gender}.} \end{table} \begin{figure*}[!t] \centering \begin{minipage}{\linewidth} \includegraphics[width=0.45\linewidth]{supp_figs/imsitu_eccv/cam_supp_filtered/cooking_driving_1.png} \hskip 8pt \includegraphics[width=0.45\linewidth]{supp_figs/imsitu_eccv/cam_supp_filtered/cooking_driving_2.png} \end{minipage}% \vskip 4pt \begin{minipage}{\linewidth} \includegraphics[width=0.45\linewidth]{supp_figs/imsitu_eccv/cam_supp_filtered/shaving_moisturizing_2.png} \hskip 8pt \includegraphics[width=0.45\linewidth]{supp_figs/imsitu_eccv/cam_supp_filtered/shaving_moisturizing_3.png} \end{minipage} \vskip 4pt \begin{minipage}{\linewidth} \includegraphics[width=0.45\linewidth]{supp_figs/imsitu_eccv/cam_supp_filtered/washing_saluting_1.png} \hskip 8pt \includegraphics[width=0.45\linewidth]{supp_figs/imsitu_eccv/cam_supp_filtered/assembling_hanging_1.png} \end{minipage} \vskip -0.1in \caption{\label{fig:imsitu_interpretability} \footnotesize {\textbf{Interpretability in ImSitu.}} The activation
maps indicate that FairALM conceals gender-revealing attributes in an image. Moreover, the attributes it uses are more aligned with the label of interest. The target class predicted is indicated by a $+$. The activation maps in the examples shown in this figure are representative of the general behavior on this dataset. More examples can be found in the Appendix.} \vskip -0.1in \end{figure*} {\bf Qualitatively assessing Interpretability.} While the DEO measure obtained by FairALM is lower, we can ask an interesting question: when we impose the fairness constraint, precisely which aspects of the image are no longer ``legal'' for the neural network to utilize? This question can be approached by visualizing activation maps from models such as CAM \cite{DBLP:journals/corr/ZhouKLOT15}. As a representative example, our analysis suggests that, in general, an unconstrained model uses the entire face image (including the gender-revealing parts). We find some consistency between the activation maps for {\em attractiveness} and the activation maps of an unconstrained model trained to predict {\em gender}! In contrast, when we impose the fairness constraint, the corresponding activation maps are clustered around specific regions of the face that are {\em not} gender revealing. In particular, a surprising finding was that the left regions of the face were far more prominent, which turns out to be consistent with studies in psychology \cite{Blackburn2012}. {\bf Summary.} FairALM minimized the DEO measure without compromising the test error. It has a more stable training profile than an $\ell_2$ penalty and is competitive with recent fairness methods in vision. The activation maps in FairALM concentrate on non-gender-revealing features of the face when controlled for gender. \subsection{Chest X-Ray datasets} {\bf Data and Setup.} The datasets we examine here are publicly available from the U.S. National Library of Medicine \cite{jaeger2014two}. The images come from two sites/sources.
Images for the first site are collected from patients in Montgomery County, USA and include $138$ x-rays. The second set of images includes $662$ images collected at a hospital in Shenzhen, China. Our task is to predict pulmonary tuberculosis (TB) from the x-ray images. The images are collected from different x-ray machines with different characteristics, and have site-specific markings or artifacts, see Fig.~\ref{fig:chestxray_concept}. $25\%$ of the samples from the pooled dataset are set aside for testing. \begin{figure}[!b] \centering \includegraphics[width=0.5\columnwidth]{final_figs/chest_xray/chestxray_method.png} \caption{\label{fig:chestxray_concept} \footnotesize \textbf{FairALM for dataset pooling.} Data is pooled from two sites/hospitals, Shenzhen $s_0$ and Montgomery $s_1$. } \vskip-5pt \end{figure} \begin{figure}[!b] \centering \frame{ \includegraphics[width=0.3\columnwidth]{final_figs/chest_xray/U_F_acc.png} \includegraphics[width=0.3\columnwidth]{final_figs/chest_xray/U_F_deo.png} \includegraphics[width=0.3\columnwidth]{final_figs/chest_xray/convergence_plot.png} } \caption{\label{fig:chestxray} \footnotesize \textbf{Better Generalization with FairALM.} We compare an unconstrained model (\textbf{U}) and FairALM (\textbf{F}). Box-plots indicate a lower variance in testset error and the DEO measure for FairALM. Moreover, FairALM reaches $20\%$ testset error in fewer epochs.} \vskip-5pt \end{figure} {\bf Quantitative Results.} We treat the site information, Montgomery or Shenzhen, as a nuisance/protected variable and seek to decorrelate it from the TB labels. We train a ResNet18 network and compare an unconstrained model with a FairALM model. Our datasets of choice are small in size, and so deep models easily overfit to site-specific biases present in the training data.
Our results corroborate this conjecture: the training accuracies reach $100\%$ very early, and the test set accuracies for the unconstrained model have a large variance over multiple experimental runs. In contrast, as depicted in Fig.~\ref{fig:chestxray}, a FairALM model not only maintains a lower variance in the test set errors and DEO measure but also attains improved performance on these measures. What stands out in this experiment is that the number of epochs to reach a certain test set error is lower for FairALM, indicating that the model generalizes faster compared to an unconstrained model. {\bf Summary.} FairALM is effective at learning from datasets from two different sites/sources, minimizes site-specific biases and accelerates generalization. \subsection{ImSitu Dataset} {\bf Data and Setup.} ImSitu \cite{yatskar2016} is a situation recognition dataset consisting of $\sim100$K color images taken from the web. The annotation for each image is provided as a summary of the activity in the image and includes a verb describing it, the interacting agents and their roles. The protected variable in this experiment is gender. Our objective is to classify a pair of verbs associated with an image. The pair is chosen such that if one of the verbs is biased towards males then the other would be biased towards females. The authors in \cite{zhao2017men} report the list of labels in the ImSitu dataset that are gender biased: we choose our verb pairs from this list. In particular, we consider the verbs \textit{Cooking vs Driving}, \textit{Shaving vs Moisturizing}, \textit{Washing vs Saluting} and \textit{Assembling vs Hanging}.
We compare our results against multiple baselines such as \begin{inparaenum}[\bfseries (1)] \item Unconstrained \item \textit{$\ell_2$-penalty}, the penalty applied on the DEO measure \item \textit{Re-weighting}, a weighted loss function where the weights account for the dataset skew \item \textit{Adversarial} \cite{zhang2018mitigating} \item \textit{Lagrangian} \cite{zhao2017men} \item \textit{Proxy-Lagrangian} \cite{cotter2018two}. \end{inparaenum} The supplement includes more details of the baseline methods. \begin{figure*}[!t] \centering \begin{subfigure}[b]{0.45\linewidth} \includegraphics[width=\linewidth]{final_figs/imsitu/cooking_driving.png} \caption{ \footnotesize Cooking {\tiny (+)} Driving {\tiny (-)}} \end{subfigure}% \hskip8pt \centering \begin{subfigure}[b]{0.45\linewidth} \includegraphics[width=\linewidth]{final_figs/imsitu/assembling_hanging.png} \caption{\footnotesize Assembling {\tiny (+)} Hanging {\tiny (-)}} \end{subfigure}% \vskip -0.1in \caption{\label{fig:imsitu_plots}\footnotesize \textbf{Training Profiles.} FairALM achieves minimum DEO early in training and remains competitive on testset errors. More plots in appendix.} \vskip -0.2in \end{figure*} {\bf Quantitative results.} From Fig.~\ref{fig:imsitu_plots}, it can be seen that FairALM reaches a zero DEO measure very early in training and attains better test errors than an unconstrained model. Within the family of Lagrangian methods such as \cite{zhao2017men,cotter2018two}, FairALM performs better on the verb pair `Shaving vs Moisturizing' in both test error and DEO measure, as indicated in Table~\ref{tab:imsitu-accs}. While the results on the other verb pairs are comparable, FairALM was observed to be more stable to different hyper-parameter choices. This finding is in accord with recent studies \cite{asi2019stochastic}, which prove that proximal function models are robust to step-size selection. Detailed analysis is provided in the supplement.
Turning now to an adversarial method such as \cite{zhang2018mitigating}, results in Table~\ref{tab:imsitu-accs} show that the DEO measure is not controlled as competently as by FairALM. Moreover, complicated training routines and unreliable convergence \cite{barnett2018convergence} make model training harder. {\bf Interpretable Models.} We used CAM \cite{DBLP:journals/corr/ZhouKLOT15} to inspect the image regions used by the model for target prediction. We observe that the unconstrained model ends up picking features from locations that may not be relevant for the task description but merely co-occur with the verbs in this particular dataset (and are gender-biased). Fig.~\ref{fig:imsitu_interpretability} highlights this observation for the selected classification tasks. Overall, we observe that the semantic regions used by the constrained model are more aligned with the action verb present in the image, and this adds to the qualitative advantages of the model trained using FairALM in terms of interpretability. {\bf Limitations}. We also note that there are cases where both the unconstrained model and FairALM look at incorrect image regions for prediction, owing to the small dataset sizes. However, the number of such cases is far smaller for FairALM than for the unconstrained setup. {\bf Summary}. FairALM successfully minimizes the fairness measure while classifying verb/action pairs associated with an image. FairALM uses regions in an image that are more relevant to the target class and less gender revealing.
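The ERR and DEO quantities reported in these comparisons can be computed directly from model predictions on the test set. Below is a minimal sketch; the function name and data layout are ours (not from the paper's released code), and DEO is taken as the absolute between-group gap in false-negative rate, equivalently the gap in true-positive rate:

```python
import numpy as np

def err_and_deo(y_true, y_pred, s):
    """Test error (ERR) and difference of equality of opportunity (DEO).

    DEO is the absolute gap in false-negative rate between the two
    protected groups s = 0 and s = 1.
    """
    y_true, y_pred, s = (np.asarray(a) for a in (y_true, y_pred, s))
    err = float(np.mean(y_pred != y_true))
    fnr = []
    for g in (0, 1):
        pos = (s == g) & (y_true == 1)        # positives within group g
        fnr.append(float(np.mean(y_pred[pos] == 0)))
    return err, abs(fnr[0] - fnr[1])
```

A table row such as "ERR 3.6, DEO 0.0" then corresponds to this pair evaluated on the held-out split (scaled to percentages).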
\begin{table}[!t] \centering \resizebox{\columnwidth}{!}{% \begin{tabular}{c@{\hskip 0.1in}cg@{\hskip 0.25in}cg@{\hskip 0.25in}cg@{\hskip 0.25in}cg} \textbf{} & \multicolumn{2}{c}{\begin{tabular}[c]{@{}c@{}}Cooking{\tiny{(+)}} \\ Driving{\tiny{(-)}}\end{tabular}} & \multicolumn{2}{c}{\begin{tabular}[c]{@{}c@{}}Shaving{\tiny{(+)}} \\ Moisturize{\tiny{(-)}}\end{tabular}} & \multicolumn{2}{c}{\begin{tabular}[c]{@{}c@{}}Washing{\tiny{(+)}} \\ Saluting{\tiny{(-)}}\end{tabular}} & \multicolumn{2}{c}{\begin{tabular}[c]{@{}c@{}}Assembling{\tiny{(+)}} \\ Hanging{\tiny{(-)}}\end{tabular}} \\ \hline\hline & ERR & DEO & ERR & DEO & ERR & DEO & ERR & DEO \\ \hline\hline \begin{tabular}[c]{@{}c@{}}Unconstrained\end{tabular} & 17.9 & 7.1 & 23.6 & 4.2 & 12.8 & 25.9 & 7.5 & 15.0 \\ \hline \begin{tabular}[c]{@{}c@{}}$\ell_2$ Penalty\end{tabular} & 14.3 & 14.0 & 23.6 & 1.3 & 10.9 & 0.0 & 5.0 & 21.6 \\ \hline Reweight & 11.9 & 3.5 & 19.0 & 5.3 & 10.9 & 0.0 & 4.9 & 9.0 \\ \hline Adversarial & 4.8 & 0.0 & 13.5 & 11.9 & 14.6 & 25.9 & 6.2 & 18.3 \\ \hline Lagrangian & 2.4 & 3.5 & 12.4 & 12.0 & 3.7 & 0.0 & 5.0 & 5.8\\ \hline Proxy-Lagrangian & 2.4 & 3.5 & 12.4 & 12.0 & 3.7 & 0.0 & 14.9 & 26.0 \\ \hline \textbf{FairALM} & 3.6 & 0.0 & 20.0 & 0.0 & 7.3 & 0.0 & 2.5 & 0.0 \\ \hline\hline \end{tabular}% } \caption{\label{tab:imsitu-accs} \footnotesize \textbf{Quantitative Results on ImSitu.} Test errors (ERR) and DEO measure are reported in $\%$. The target class to be predicted is indicated by a $+$. FairALM always achieves a zero DEO while remaining competitive in ERR with the best method for a given verb-pair.} \vskip -0.2in \end{table} \section{Experiments} A central theme in our experiments is to assess whether our proposed algorithm, FairALM, can indeed obtain meaningful fairness measure scores {\em without} compromising the test set performance. We evaluate FairALM on a number of problems where the dataset reflects certain inherent societal/stereotypical biases.
Our evaluations are also designed with a few additional goals in mind.\\\\ {\bf Overview.} Our {\bf first} experiment on the CelebA dataset seeks to predict the value of a label for a face image while controlling for certain protected attributes (gender, age). We discuss how prediction of some labels is {\em unfair} in an unconstrained model and contrast it with FairALM. Next, we focus on the label where predictions are the most unfair and present comparisons against methods available in the literature. For our {\bf second} experiment, we use the ImSitu dataset where images correspond to a situation (activities, verb). As expected, some activities such as driving or cooking are more strongly associated with a specific gender. We inspect if an unconstrained model is {\em unfair} when we ask it to learn to predict two gender correlated activities/verbs. Comparisons with baseline methods will help measure FairALM's strengths/weaknesses. We can use heat map visualizations to qualitatively interpret the value of adding fairness constraints. We threshold the heat-maps to get an understanding of the general behavior of the models. Our {\bf third} experiment addresses an important problem in medical/scientific studies. Small sample sizes necessitate pooling data from multiple sites or scanners \cite{zhou2018statistical}, but this introduces a site- or scanner-specific nuisance variable which must be controlled for -- else a deep (or shallow) model may cheat and use site-specific (rather than disease-specific) artifacts in the images for prediction even when the cohorts are age or gender matched \cite{inproceedings_fafp}. We study one simple setting here: we use FairALM to mitigate site (hospital) specific differences in predicting ``tuberculosis'' from X-ray images acquired at two hospitals, Shenzhen and Montgomery (and recently made publicly available \cite{jaeger2014two}).
In all the experiments, we impose the Equality of Opportunity (EO) constraint (defined in Section~\ref{sec:EO}). We adopt the novel validation procedure (NVP) used in \cite{donini2018empirical} to evaluate FairALM. It is a two-step procedure: first, we search for the hyper-parameters that achieve the best accuracy, and then, we report the minimum fairness measure (DEO) for accuracies within $90\%$ of the highest accuracy. This offers some robustness of the reported numbers to hyper-parameter selection. We describe these experiments one by one.\\\\ {\bf \textit{Remark from authors.} } Certain attributes such as \textit{attractiveness}, obtained via crowd-sourcing, may have socio-cultural ramifications. Similarly, the gender attribute in the dataset is binary (male versus female), which may be insensitive to some readers. We clarify that our goal is to present evidence showing that our algorithm can impose fairness in a sensible way on datasets used in the literature, rather than to address the higher-level question of whether our community needs to invest in culturally sensitive datasets with more societally relevant themes. \section{Introduction} Fairness and non-discrimination are core tenets of modern society. Driven by advances in vision and machine learning systems, algorithmic decision making continues to permeate our lives in important ways. Consequently, ensuring that the decisions taken by an algorithm do not exhibit serious biases is no longer a hypothetical topic, but rather a key concern that has started informing legislation \cite{Goodman_Flaxman_2017} (e.g., Algorithmic Accountability Act). On one extreme, some types of biases can be bothersome -- a biometric access system could be more error-prone for faces of persons from certain skin tones \cite{buolamwini2018gender} or a search for {\tt \small homemaker} or {\tt \small programmer} may return gender-stereotyped images \cite{bolukbasi2016man}.
But there are serious ramifications as well -- an individual may get pulled aside for an intrusive check while traveling \cite{zuber2014critical} or a model may decide to pass on an individual for a job interview after digesting his/her social media content \cite{chin_2019,heilweil_2019}. Biases in automated systems in estimating recidivism within the criminal judiciary have been reported \cite{ustun2016learning}. There is a growing realization that these problems need to be identified and diagnosed, and then promptly addressed. In the worst case, if no solutions are forthcoming, we must step back and reconsider the trade-off between the benefits versus the harm of deploying such systems, on a case-by-case basis. \\ \begin{figure}[!t] \centering \includegraphics[width=0.9\columnwidth]{supp_figs/imsitu_eccv/cam_supp_filtered/microwaving_pumping_1.png} \caption{\label{fig:intro_fig} \footnotesize {The heat maps of an unconstrained model and a fair model are depicted in this figure. The models are trained to predict the target label \textit{Microwaving} (indicated by a $(+)$). The fair model attempts to make unbiased predictions with respect to the sensitive attribute \textit{gender}. In this example, it is observed that the heat maps of an unconstrained model are concentrated around gender-revealing attributes such as the face of the person. In contrast, the heat maps of the fair model are concentrated around non-gender-revealing attributes, such as the utensils and the microwave, which also happen to be more aligned with the target label.}} \vskip -0.2in \end{figure} {\bf What leads to unfair learning models?} One finds that learning methods in general tend to amplify biases that exist in the training dataset \cite{zhao2019gender}.
While this creates an incentive for the organization training the model to curate datasets that are ``balanced'' in some sense, from a practical standpoint, it is often difficult to collect data that is balanced along multiple predictor variables that are ``protected'', e.g., gender, race and age. If a protected feature is correlated with the response variable, a learning model can {\em cheat} and find representations from other features that are collinear or a good surrogate for the protected variable. A thrust in current research is devoted to devising ways to mitigate such shortcuts. If one does not have access to the underlying algorithm, a recent result \cite{hardt2016equality} shows the feasibility of finding thresholds that can impose certain fairness criteria. Such a threshold search can be post-hoc applied to any learned model. But in various cases, because of the characteristics of the dataset, a fairness-oblivious training will lead to biased models. An interesting topic is the study of mechanisms via which the {\em de novo} design or training of the model can be informed by fairness measures. {\bf Some general strategies for Fair Learning.} Motivated by the foregoing issues, recent work which may broadly fall under the topic of {\em algorithmic fairness} has suggested several concepts or measures of fairness that can be incorporated within the learning model. While we will discuss the details shortly, these include demographic parity \cite{yao2017beyond}, equal odds and equal opportunities \cite{hardt2016equality}, and disparate treatment \cite{zafar2017fairness}. In general, existing work can be categorized into a few distinct categories. The {\em first} category of methods attempts to modify the representations of the data to ensure fairness. 
While different methods approach this question in different ways, the general workflow involves imposing fairness {\em before} a subsequent use of standard machine learning methods \cite{calmon2017optimized,kamiran2010classification}. The {\em second} group of methods adjusts the decision boundary of an already trained classifier towards making it fair as a {\em post}-processing step while trying to incur as little deterioration in overall performance as possible \cite{goh2016satisfying,fish2016confidence,woodworth2017learning}. While this procedure is convenient and fast, it is not always guaranteed to lead to a fair model without sacrificing accuracy. Part of the reason is that the search space for a fair solution in the post-hoc tuning is limited. Of course, we may impose fairness during training directly as adopted in the {\em third} category of papers such as \cite{zafar2017parity,bechavod2017penalizing}, and the approach we take here. Indeed, if we are training the model from scratch and have knowledge of the protected variables, there is little reason not to incorporate this information directly {\em during} model training. In principle, this strategy provides the maximum control over the model. From the formulation standpoint, it is slightly more involved because it requires satisfying a fairness constraint derived from one or more fairness measure(s) in the literature, while concurrently learning the model parameters. The difficulty varies depending both on the primary task (shallow versus deep model) as well as the specific fairness criteria. For instance, if one were using a deep network for classification, we would need to devise ways to enforce constraints on the {\em output} of the network, efficiently. {\bf Scope of this paper and contributions.} Many studies on fairness in learning and vision are somewhat recent and were partly motivated in response to more than a few controversial reports in the news media. 
As a result, the literature on mathematically sound and practically sensible fairness measures that can still be incorporated while training a model is still in a nascent stage. In vision, current approaches have largely relied on training adversarial modules in conjunction with the primary classification or regression task, to remove the influence of the protected attribute. In contrast, the {\bf contribution} of our work is to provide a simpler alternative. We show that a number of fairness measures in the literature can be incorporated by viewing them as constraints on the {\em output} of the learning model. This view allows adapting ideas from constrained optimization, to devise ways in which training can be efficiently performed in a way that at termination, the model parameters correspond to a fair model. For a practitioner, this means that no changes in the architecture or model are needed: imposing fairness only requires specifying the protected attribute, and utilizing our proposed optimization routine. \section{Appendix} \input{appendix-algorithm-linear.tex} \input{appendix-proofs.tex} \input{appendix-algorithm-deepnet.tex} \input{appendix-experiments-celeba.tex} \input{appendix-experiments-imsitu.tex} \end{document} \section{A Primer on Fairness Functions} In this section, we introduce basic notations and briefly review several fairness measures described in the literature.\\ {\bf Basic notations.} \label{sec:nots} We denote classifiers using $h:x\mapsto y$ where $x$ and $y$ are random variables that represent the features and labels respectively. A {\em protected} attribute is a random variable $s$ on the same probability space as $x$ and $y$ -- for example, $s$ may be gender, age, or race. Collectively, a training example would be $z:= (x, y, s)$. So, our goal is to learn $h$ (predict $y$ given $x$) while {\em imposing fairness-type constraints} over $s$. 
We will use $\mathcal{H} = \{h_1, h_2, \hdots, h_N\}$ to denote a set/family of possible classifiers and $\Delta$ to denote the probability simplex in $\mathbb{R}^N$, i.e., $\Delta:=\{q:\sum_{i=1}^Nq_i=1,q_i\geq 0 \}$ where $q_i$ is the $i$-th coordinate of $q$. Throughout the paper, we will assume that the distribution of $s$ has finite support. Unless explicitly specified, we will assume that $y\in\{0,1\}$ in the main paper. For each $h\in\mathcal{H}$, we will use $e_h$ to denote the misclassification rate of $h$ and $e_{\mathcal{H}}\in\mathbb{R}^N$ to denote the vector containing all misclassification rates. We will use superscripts to denote conditional expectations. That is, if $\mu_h$ corresponds to the expectation of some function $\mu$ (that depends on $h\in\mathcal{H}$), then the conditional expectation/moment of $\mu_h$ with respect to $s$ will be denoted by $\mu_{h}^s$. With a slight abuse of notation, we will use $\mu_h^{s_0}$ to denote the elementary conditional expectation $\mu_h|(s=s_0)$ whenever it is clear from the context. We will use $d_{h}$ to denote the {\em difference} between the conditional expectations of the two groups of $s$, that is, $d_h := \mu_h^{s_0} - \mu_h^{s_1}$. For example, let $s$ be the random variable representing gender, that is, $s_0$ and $s_1$ may correspond to male and female. Then, $e_h^{s_i}$ corresponds to the misclassification rate of $h$ on group $s_i$, and $d_h=e_h^{s_0} - e_h^{s_1}$. Finally, $\mu_h^{s_i,t_j}:=\mu_h|(s=s_i,t=t_j)$ denotes the elementary conditional expectation with respect to two random variables $s,t$. \subsection{Fairness through the lens of Confusion Matrix} Recall that a {\em fairness} constraint corresponds to a performance requirement of a classifier $h$ on subgroups of features $x$ {\em induced} by a protected attribute $s$. For instance, say that $h$ predicts the credit-worthiness $y$ of an individual $x$.
Then, we may require that $e_h$ be ``approximately'' the same across individuals for different races given by $s$. Does it follow that functions/metrics that are used to evaluate fairness may be written in terms of the error of a classifier $e_h$ {\em conditioned} on the protected variable $s$ (or in other words $e_h^s$)? Indeed, it does turn out to be the case! In fact, many widely used functions in practice can be viewed as imposing constraints on the confusion matrix as our intuition suggests. We will now discuss a few common fairness metrics to illustrate this idea. {\bf (a) Demographic Parity (DP) \cite{yao2017beyond}.} A classifier $h$ is said to satisfy Demographic Parity (DP) if $h(x)$ is {\em independent} of the protected attribute $s$. Equivalently, $h$ satisfies DP if $d_h=0$ where we set $\mu_{h}^{s_i} = e_h^{s_i}$ (using notations introduced above). DP can be seen as equating the total false positives and false negatives between the confusion matrices of the two groups. We denote by DDP the difference in demographic parity between the two groups. {\bf (b) Equality of Opportunity (EO) \cite{hardt2016equality}.} \label{sec:EO} A classifier $h$ is said to satisfy EO if $h(x)$ is independent of the protected attribute $s$ conditioned on the label $y\in\{0,1\}$. Equivalently, $h$ satisfies EO if $d_h^{y}=0$ where we set $\mu_h^{s_i} = e_h^{s_i}|(y\in\{0,1\})=: e_h^{s_i,y_j}$ conditioning on both $s$ and $y$. Depending on the choice of $y$ in $\mu_h^{s_i}$, we get two different metrics: \begin{enumerate*}[label=(\roman*)] \item {\bf $y=0$} corresponds to $h$ with equal {\it False Positive Rate (FPR)} across $s_i$ \cite{chouldechova2017fair}, whereas \item {\bf $y=1$} corresponds to $h$ with equal {\it False Negative Rate (FNR)} across $s_i$ \cite{chouldechova2017fair}. \end{enumerate*} Moreover, $h$ satisfies {\it Equality of Odds} if $d_h^{0}+d_h^{1}=0$, i.e., $h$ equalizes both TPR and FPR across $s$ \cite{hardt2016equality}.
We denote the difference in EO by DEO. {\bf (c) Predictive Parity (PP) \cite{celis2019classification}.} A classifier $h$ satisfies PP if the likelihood of making a misclassification among the positive predictions of the classifier is independent of the protected variable $s$. Equivalently, $h$ satisfies PP if $d_h^{\hat{y}}=0$ where we set $\mu_{h}^{s_i} = e_h^{s_i}|(\hat{y}=1)$. It corresponds to matching the False Discovery Rate between the confusion matrices of the two groups.
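Each of the metrics above is a function of group-conditional confusion-matrix entries, so all of them can be evaluated in a few lines of code. The sketch below is ours (helper names are not from any released implementation) and assumes a binary protected attribute; the gaps it returns are the quantities that DDP, DEO (via the FPR/FNR gaps), and the PP difference constrain to zero:

```python
import numpy as np

def group_rates(y_true, y_pred, s, g):
    """Confusion-matrix summaries of the classifier restricted to s == g."""
    m = np.asarray(s) == g
    yt, yp = np.asarray(y_true)[m], np.asarray(y_pred)[m]
    return {
        "pos_rate": float(np.mean(yp == 1)),      # used by DP
        "fpr": float(np.mean(yp[yt == 0] == 1)),  # EO with y = 0
        "fnr": float(np.mean(yp[yt == 1] == 0)),  # EO with y = 1
        "fdr": float(np.mean(yt[yp == 1] == 0)),  # used by PP
    }

def fairness_gaps(y_true, y_pred, s):
    """Absolute between-group gaps |mu_h^{s_0} - mu_h^{s_1}| per metric."""
    r0 = group_rates(y_true, y_pred, s, 0)
    r1 = group_rates(y_true, y_pred, s, 1)
    return {k: abs(r0[k] - r1[k]) for k in r0}
```

A classifier satisfies DP, EO, or PP exactly when the corresponding entry of `fairness_gaps` is zero, which is the empirical counterpart of $d_h = 0$.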
\section{Introduction} The present contribution focuses on the dynamics of particles and their accumulation in electro-osmotic flow through micro-channel junctions, which are a basic element of microfluidic systems. This work thus extends previous studies \cite{takhistov_electrokinetic_2003,thamida_nonlinear_2002,yossifon_electro-osmotic_2006,eckstein_nonlinear_2009} that focused on describing the interesting hydrodynamic behavior, wherein beyond a critical level of external-field intensity, vortices are observed to appear within the flow around (sharp) corners of micro-channel junctions. The generation of such vortices is potentially useful in certain applications as a means to enhance and control microfluidic mixing \cite{wu_micromixing_2008,chen_vortex_2008,zhao_microfluidic_2007}. In other situations the appearance of vortices may need to be suppressed so as to avoid accumulation of suspended particles leading to the eventual jamming of the device \cite{thamida_nonlinear_2002}. In linear electro-osmosis, when the equilibrium zeta potential is uniform and independent of the external field, the resulting flow is irrotational. However, the small yet finite polarizability of the walls gives rise to an additional, non-linear, electro-kinetic mechanism termed induced-charge electro-osmosis (ICEO) \cite{squires_induced-charge_2004}. When an external field is applied to the system, it sets off transient Ohmic currents within the electrolyte solution, which create a non-uniform charge cloud near the walls. Thus, a non-uniform distribution of induced zeta potential is established whose magnitude is proportional to the external-field intensity. The Helmholtz-Smoluchowski slip velocity at the fluid-solid interface resulting from the interaction of the electric field and the induced-charge cloud is non-linear in the applied field and generates an electro-osmotic flow which is not necessarily irrotational.
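The distinction between linear electro-osmosis and ICEO can be made concrete via the Helmholtz-Smoluchowski relation $u=-\epsilon_{f}\zeta E/\mu$: with a fixed equilibrium zeta potential the slip is linear in $E$, whereas an induced zeta potential proportional to $E$ makes the slip quadratic in $E$. A minimal numerical sketch; the material constants and the proportionality constant $c$ are illustrative values, not fitted to the present device:

```python
EPS_F = 7.1e-10   # permittivity of water [F/m] (illustrative)
MU = 1.0e-3       # dynamic viscosity of water [Pa s] (illustrative)

def hs_slip(zeta, E, eps=EPS_F, mu=MU):
    """Helmholtz-Smoluchowski slip velocity u = -eps*zeta*E/mu [m/s]."""
    return -eps * zeta * E / mu

def iceo_slip(E, c, eps=EPS_F, mu=MU):
    """ICEO slip: the zeta potential is itself induced, zeta_i = c*E,
    so the slip velocity scales quadratically with the applied field."""
    return hs_slip(c * E, E, eps, mu)
```

Doubling the field thus doubles the linear electro-osmotic slip but quadruples the ICEO slip, consistent with the corner vortices appearing only beyond a critical field intensity.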
Thamida \& Chang \cite{thamida_nonlinear_2002} pointed out that the above mechanism produces opposite polarization of the respective upstream and downstream faces of the corner. Furthermore, for a sufficiently strong external field this ICEO not only dominates the local flow near the corner but produces downstream macro-scale vortices as well \cite{yossifon_electro-osmotic_2006}. Thamida \& Chang \cite{thamida_nonlinear_2002} also observed that an interesting colloid accumulation occurs at the corner; however, no attempt was made to address this particle trapping mechanism either theoretically or experimentally. As will be shown herein, the rapid trapping occurs due to a combination of the short-range DEP trapping force and the long-range induced electro-osmotic flow which feeds the particles. Previous studies have examined particle trapping and separation based on insulator-based dielectrophoresis (iDEP) using sharp tips among other geometries \cite{staton_characterization_2010,srivastava_dc_2011,liao_nanoscale_2012}. However, in contrast to the current study, these works have treated such structures as insulators, and as a result induced electrokinetic effects, emanating from the small but finite dielectric polarizability of the wall materials, were overlooked. Nevertheless, efficient trapping via the combination of a short-range DEP force and far-field hydrodynamics has been shown for a variety of flow generation mechanisms. AC electroosmosis (ACEO), which uses electrodes embedded within the microchannel, has attracted a lot of attention as a means for vortex generation and particle trapping through hydrodynamic forces that focus the particles into the stagnation point of the vortices \cite{ramos_ac_1998,ramos_ac_1999,wu_long-range_2005,hoettges_use_2003}.
Additionally, similar accumulation has recently been demonstrated \cite{green_dynamical_2013} at the stagnation points of electro-osmotic vortices of the second kind formed at the interface of a microchannel and a wide nanoslot. It is thus the focus of the current study to provide for the first time a thorough explanation, based on both theory and experiments, for the rapid particle trapping due to the combined forces of the DEP and ICEO vortex at the corner of a microchannel. In the following, we describe the experimental methods in sec. 2, the theoretical model in sec. 3, the results and discussion in sec. 4 and concluding comments in sec. 5. \section{Experimental Methods} \subsection{Fabrication of the device} An L-junction microchannel connected to two reservoirs at opposite ends (Fig. \ref{fig: the flow system}(a)) \begin{figure} \begin{centering} \includegraphics[width=0.95\columnwidth]{\string"system\string".png} \par\end{centering} \caption{ (a) Schematics of the L-junction microchannel device along with a microscope image of the junction region and (b) the experimental flow system setup. Dimensions are in \textmu{}m.\label{fig: the flow system}} \end{figure} with a depth (normal to the plane of view) of 120 \textmu{}m was fabricated from PDMS (Polydimethylsiloxane, Dow Corning Sylgard 184) using a rapid prototyping technique \cite{anderson_fabrication_2000}. We used a high resolution (3 \textmu{}m) chrome mask for the creation of the master. The radius of curvature of the corner is estimated to be \textasciitilde{}6 \textmu{}m. The channel was then sealed by a PDMS-coated (30 \textmu{}m thick) microscope glass slide, using a plasma bonding process \cite{haubert_pdms_2006}. Two large (19 mm diameter) reservoirs were inserted into the PDMS inlets so as to minimize possible pressure-driven flow due to induced pressure head and to allow the introduction of platinum wire electrodes (Fig. \ref{fig: the flow system}(b)).
\subsection{Experimental setup} A high-voltage DC power supply (Stanford Research Systems PS350) was connected to the platinum wire electrodes (0.5 mm platinum wire, Sigma-Aldrich). An electrolyte solution of 10\textsuperscript{-5}{[}M{]} potassium chloride (conductivity $\sigma_{f}=2$ \textmu{}S/cm) was seeded by negatively charged fluorescently tagged tracer particles (Fluoro-Max, Thermo-Scientific) of various sizes (0.48, 1 and 2 \textmu{}m) and volumetric concentrations of 0.0025\%, 0.01\% and 0.04\%, respectively. The particles were visualized dynamically using a spinning disc confocal system (Yokogawa CSU-X1) connected to a camera (Andor iXon3) and installed on an inverted microscope (Nikon TI Eclipse). Prior to the sudden application of the external field the system was equilibrated to minimize initial pressure-driven flow. \section{Theoretical model} \subsection{Electrostatics and hydrodynamics} Here we extend the theoretical analysis of Yossifon et al. \cite{yossifon_electro-osmotic_2006} for ICEO around a sharp corner to include DEP forces acting on particles. For brevity, in deriving the latter hydrodynamic contribution to the particle motion we will closely follow their derivation, skipping the details and showing only the main results. The current problem for the L-junction differs from that of the T-junction microchannel configuration of Yossifon et al. \cite{yossifon_electro-osmotic_2006} only in that the mid-narrow channel symmetry line is replaced by a wall (i.e., we apply an electroosmotic slip velocity instead of a symmetry condition at the $D_{\infty}E$ boundary, see Fig. \ref{fig:The-L-junction-configuration.}).
\begin{figure} \begin{centering} \includegraphics[width=0.5\columnwidth]{\string"sketch\string".png} \par\end{centering} \caption{\label{fig:The-L-junction-configuration.}The L-junction configuration} \end{figure} It is noted that for a low-conductivity electrolyte, electrothermal effects stemming from Joule heating (which scales linearly with the conductivity) become minimal \cite{tang_joule_2006}. Moreover, for the current geometry, it may also be shown that any in-plane temperature gradients (which would potentially control the 2D electrothermal forcing) resulting from such minimal Joule heating would be negligible, as most of the generated heat dissipates through the bottom slide, which has minimal thermal resistance. The introduction into the problem of a small but finite wall dielectric constant $\epsilon_{w}$ necessitates the simultaneous calculation of both $\phi_{f}$ and $\phi_{w}$, the electrostatic potentials within the fluid and wall domains, respectively. On the microscale of the electric double layer these potentials are coupled through the boundary conditions imposing continuity of the potential and specifying the jump in electric displacement across the true solid-liquid interface \cite{landau_electrodynamics_1984}. The thin-double-layer approximation was previously extended by Yossifon et al. \cite{yossifon_electro-osmotic_2006} to account for the electric-field leakage through the wall for an arbitrary value of the dielectric constants ratio $\epsilon_{w}/\epsilon_{f}$, to obtain an appropriate macro-scale boundary condition for a symmetric electrolyte solution relating $\phi_{f}$ just outside the double layer and $\phi_{w}$ at the surface of the wall: \begin{equation} \phi_{w}+\alpha\frac{\partial\phi_{w}}{\partial n}=\phi_{f}+\zeta^{eq}\;\; on\;\; A_{\infty}BC_{\infty}.\label{eq:Robin Cond. on wall-1} \end{equation} The dimensionless Robin-type boundary condition (\ref{eq:Robin Cond.
on wall-1}) is formulated in terms of the dimensionless electric potentials $\phi_{f}$, $\phi_{w}$ and the equilibrium zeta-potential $\zeta^{eq}$, all normalized by $E_{0}h_{1}$, wherein $h_{1}$ is the width of the narrow channel and $E_{0}$ is the magnitude of the externally applied electric field within the narrow channel far from the junction. Here $\frac{\partial}{\partial n}$ denotes the derivative in the direction normal to the wall, pointing into the fluid domain. Appearing in (\ref{eq:Robin Cond. on wall-1}) is the parameter $\alpha=\frac{\epsilon_{w}}{\epsilon_{f}}(\kappa h_{1})^{-1}$, wherein $\kappa^{-1}$ is the (presumed small) Debye length, which represents the thickness of the electric double layer. Under this macroscale boundary condition (\ref{eq:Robin Cond. on wall-1}), the dimensionless electric potentials within the bulk fluid and within the wall both satisfy the Laplace equation. In addition, the potential within the bulk fluid satisfies the Neumann-type boundary condition $\frac{\partial\phi_{f}}{\partial n}=0$ applied on the channel walls ($A_{\infty}BC_{\infty}$ and $D_{\infty}EF_{\infty}$ in Fig. \ref{fig:The-L-junction-configuration.}). These boundary conditions are supplemented by the requirements that $\frac{\partial\phi_{f}}{\partial x}=-1$ within the narrow channel far from the junction ($x\rightarrow-\infty$) (see Fig. \ref{fig:The-L-junction-configuration.}) and that, by conservation of electric-field flux, $\frac{\partial\phi_{f}}{\partial y}=-\frac{h_{1}}{h_{2}}$ far from the junction within the wider channel ($y\rightarrow\infty$), as well as by appropriate 'far field' ($x^{2}+y^{2}\rightarrow\infty$) decay conditions within the wall domain. The problems governing $\phi_{f}$ and $\phi_{w}$ are thus decoupled and may be solved successively. Making use of the Schwarz-Christoffel (e.g. 
Milne-Thomson \cite{milne-thomson_theoretical_2011}) transformation \begin{equation} z_{(t)}=\frac{1}{\pi}\left[\frac{h_{2}}{h_{1}}ln\left(\frac{t+1}{t-1}\right)+2tan^{-1}\left(\frac{h_{1}}{h_{2}}t\right)-\pi\right],\label{eq:SC mapping 1} \end{equation} and \begin{equation} \omega_{(t)}=\frac{1+\left(h_{1}/h_{2}\right)^{2}}{1-t^{2}},\label{eq:SC mapping 2} \end{equation} the physical complex $z=x+iy$ plane is mapped onto the upper half of the $\omega=\xi+i\eta$ plane, to obtain the 2D potential of a point charge located at $\omega=1$, \begin{equation} \phi_{f}=\frac{1}{\pi}Re\left\{ ln\left(\omega-1\right)\right\} ,\label{eq:Point charge potential in mapped plane} \end{equation} wherein $Re\left\{ \cdot\right\} $ denotes the real part. Expanding (\ref{eq:SC mapping 1},\ref{eq:SC mapping 2}) and (\ref{eq:Point charge potential in mapped plane}) for $\left|\omega\right|\ll1$ (corresponding to $\left|t\right|\gg\frac{h_{2}}{h_{1}}$) one obtains the following potential function: \begin{equation} \phi_{f}\backsim\frac{1}{\pi}\left(1+\left(\frac{h_{1}}{h_{2}}\right)^{2}\right)^{\unitfrac{1}{3}}Re\left\{ \left(-\frac{3\pi}{2}z\right)^{\unitfrac{2}{3}}\right\} ,\label{eq:Linear symmetric fluid potential} \end{equation} which holds in the vicinity of the corner $z=0$. Once $\phi_{f}$ is known, one can calculate $\phi_{w}$ so as to satisfy Laplace's equation within the wall domain together with (\ref{eq:Robin Cond. on wall-1}), from which we find the total zeta potential to be \begin{equation} \zeta=\phi_{w}-\phi_{f}=\zeta^{eq}+\zeta^{i}\;\; on\;\; A_{\infty}BC_{\infty}, \end{equation} wherein $\zeta^{i}=-\alpha\frac{\partial\phi_{w}}{\partial n}$ denotes the induced zeta potential. On the channel walls ($D_{\infty}EF_{\infty}$) opposite those forming the corner ($A_{\infty}BC_{\infty}$) the latter effect is negligible and it is assumed that $\zeta=\zeta_{w}^{eq}$. 
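As a quick numerical sanity check (ours, not part of the original analysis), the $r^{2/3}$ behavior of the local potential (\ref{eq:Linear symmetric fluid potential}) can be verified by evaluating it along an arbitrary ray from the corner; the prefactor and the experimental ratio $h_{1}/h_{2}=1/5$ are taken from the text.

```python
import cmath
import math

h_ratio = 1 / 5  # h1/h2, as in the experiments (h2/h1 = 5)

def phi_f_local(z: complex) -> float:
    """Local corner approximation of the bulk potential near z = 0."""
    pref = (1 + h_ratio**2) ** (1 / 3) / math.pi
    return pref * ((-1.5 * math.pi * z) ** (2 / 3)).real

# Along a fixed ray theta from the corner, phi_f should scale as r**(2/3):
theta = 2.0  # an arbitrary direction (radians)
z1 = 0.01 * cmath.exp(1j * theta)
z2 = 0.02 * cmath.exp(1j * theta)
ratio = phi_f_local(z2) / phi_f_local(z1)
print(ratio, 2 ** (2 / 3))  # both ≈ 1.587, confirming the 2/3 exponent
```

Doubling the radius multiplies the potential by $2^{2/3}$ regardless of the direction chosen, since the angular factor cancels in the ratio.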
The fluid velocity $\mathbf{v}$ and pressure $p$ are normalized by $\varepsilon_{0}\varepsilon_{f}E_{0}^{2}\nicefrac{h_{1}}{\eta}$ and $\varepsilon_{0}\varepsilon_{f}E_{0}^{2}$, respectively, where $\eta$ denotes the dynamic viscosity of the electrolyte solution and $\varepsilon_{0}$ is the permittivity of vacuum. The quasi-steady, small-Reynolds-number flow of the bulk electro-neutral fluid is governed by the continuity equation $\nabla\cdot\mathbf{v}=0$ and the Stokes equation $\nabla p=\nabla^{2}\mathbf{v}$. On the channel walls ($A_{\infty}BC_{\infty}$ and $D_{\infty}EF_{\infty}$ in Fig. \ref{fig:The-L-junction-configuration.}) $\mathbf{v}$ satisfies the vanishing of the fluid velocity normal to the wall and the Helmholtz-Smoluchowski slip-velocity condition \cite{lyklema_fundamentals_1995} \begin{equation} \mathbf{v}_{\mathbf{\parallel}}=-\zeta\mathbf{E}_{\mathbf{\parallel}}=-\zeta_{w}^{eq}\mathbf{E}_{\mathbf{\parallel}}-\zeta^{i}\mathbf{E}_{\mathbf{\parallel}},\label{eq:zeta=00003Deq+icd} \end{equation} where the subscript $'\Vert'$ denotes the vector component tangent to the wall. Far upstream and downstream of the corner the pressure and fluid-velocity vector become uniform across the channel. 
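As a brief order-of-magnitude check (ours; the velocity and channel-width scales are those quoted elsewhere in the text, combined with standard water properties), the Reynolds number is indeed small, justifying the Stokes approximation:

```python
# Order-of-magnitude check of the small-Reynolds-number assumption behind
# the Stokes equation; U0 and h1 are the scales quoted in the text, while
# rho_f and eta are standard water properties.
rho_f = 1e3   # fluid density [kg/m^3]
eta = 1e-3    # dynamic viscosity [Pa s]
U0 = 1e-3     # typical velocity scale [m/s]
h1 = 80e-6    # narrow-channel width [m]

Re = rho_f * U0 * h1 / eta
print(f"Re = {Re:.2f}")  # Re = 0.08 << 1
```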
We therefore choose to present the velocity field as the sum of the linear electro-osmotic flow (EOF) and induced contributions \begin{equation} \mathbf{v}=\zeta_{w}^{eq}\mathbf{\nabla}\phi_{f}+\mathbf{v_{\mathit{ICEO}}}=\mathbf{\mathbf{v}}_{\mathbf{\mathit{EOF}}}+\mathbf{v_{\mathit{ICEO}}}.\label{eq:V eq+icd} \end{equation} The first term on the right-hand side of (\ref{eq:V eq+icd}) is an irrotational field, and based on the local approximation (\ref{eq:Linear symmetric fluid potential}) may be written as \begin{equation} \begin{array}{c} \mathbf{\mathbf{v}}_{\mathbf{\mathit{EOF}}}=\zeta_{w}^{eq}\mathbf{\nabla}\phi_{f}\\ =\zeta_{w}^{eq}\left(1+\left(\frac{h_{1}}{h_{2}}\right)^{2}\right)^{\unitfrac{1}{3}}\left(\frac{3}{2}\pi r\right)^{-\unitfrac{1}{3}}\left[cos\left(\frac{2}{3}\left(\theta+\pi\right)\right)\hat{\mathbf{r}}-sin\left(\frac{2}{3}\left(\theta+\pi\right)\right)\hat{\mathbf{\mathbf{\mathbf{\boldsymbol{\theta}}}}}\right]. \end{array}\label{eq:linear velocity} \end{equation} The induced part of the velocity $\mathbf{v}_{ICEO}$ is derivable from the stream function $\psi$ satisfying the bi-harmonic equation $\nabla^{4}\psi=0$ together with the requirements that $\psi=const.$ on each segment of the boundary of the fluid domain ($A_{\infty}BC_{\infty}$ and $D_{\infty}EF_{\infty}$ in Fig. \ref{fig:The-L-junction-configuration.}) and the slip-velocity condition resulting from the second term of (\ref{eq:zeta=00003Deq+icd}). To obtain an approximate 'local' expression for $\psi$ in the vicinity of the corner, the plane polar ($r,\theta$) coordinates were employed such that the origin lies at the corner and $\theta=0$ corresponds to the bisector of the fluid domain (eq. 
3.8' in \cite{yossifon_electro-osmotic_2006}), from which one obtains \begin{equation} \begin{array}{c} \mathbf{v_{\mathit{ICEO}}}=\alpha\left(\frac{3}{2}\right)^{\unitfrac{3}{2}}\left\{ \frac{2}{3\pi}\left[1+\left(\frac{h_{1}}{h_{2}}\right)^{2}\right]\right\} ^{\unitfrac{2}{3}}\\ \cdot\frac{r^{-\unitfrac{2}{3}}}{3}\left[\left(cos\left(\frac{1}{3}\theta\right)+5cos\left(\frac{5}{3}\theta\right)\right)\hat{\mathbf{r}}-\left(sin\left(\frac{1}{3}\theta\right)+sin\left(\frac{5}{3}\theta\right)\right)\hat{\mathbf{\mathbf{\mathbf{\boldsymbol{\theta}}}}}\right]. \end{array}\label{eq:induced velocity} \end{equation} \subsection{Particle equation of motion} In the limit of low particle concentration, particle-particle interactions may be ignored and the governing kinetic equations for immersed particles include only terms related to the hydrodynamics of the fluid and to externally applied forces acting on individual particles. The former include the ICEO flow and the linear EOF, while the latter comprise the DEP and linear electrophoretic (EP) contributions. For a spherical, neutrally buoyant particle located at $x_{p}\left(t\right)$ and moving with velocity $\mathbf{v}_{\mathbf{p}}\left(t\right)$ within a fluid flow $\mathbf{u}$, the dimensionless equation of motion has the form \cite{babiano_dynamics_2000} \begin{equation} \begin{array}{c} St\frac{d\mathbf{v_{p}}}{dt}=St\frac{\rho_{f}}{\rho_{p}}\frac{D\mathbf{u}}{Dt}-St\frac{\rho_{f}}{2\rho_{p}}\left[\frac{d\mathbf{v_{p}}}{dt}-\frac{D}{Dt}\left(\mathbf{u}+\frac{3Fa}{5}\nabla^{2}\mathbf{u}\right)\right]\\ -\left(\mathbf{v_{p}-u-}Fa\nabla^{2}\mathbf{u}\right)-Ba\int_{0}^{t}\left[\frac{1}{\sqrt{t-\tau}}\frac{d}{d\tau}\left(\mathbf{v_{p}-u-}Fa\nabla^{2}\mathbf{u}\right)\right]d\tau+\mathbf{U_{Force}}, \end{array} \end{equation} where $\mathbf{U_{Force}}$ is a general term for additional forces that may act on the particle, e.g. the DEP and EP forces in our case. 
The typical orders of magnitude are: particle radius $a\backsim O\left(10^{-6}m\right)$, channel length scale $L\backsim O\left(10^{-4}m\right)$, velocity $U_{0}\backsim O\left(10^{-3}\nicefrac{m}{s}\right)$, particle and fluid densities $\rho_{f},\rho_{p}\backsim O\left(10^{3}\nicefrac{kg}{m^{3}}\right)$ and fluid dynamic viscosity $\mu\backsim O\left(10^{-3}Pa\cdot s\right)$. These result in negligible Stokes $\left(St=\frac{2a^{2}\rho_{p}U_{0}}{9\mu L}\thickapprox2\cdot10^{-6}\right)$, Faxen $\left(Fa=\frac{a^{2}}{6L^{2}}\thickapprox1.7\cdot10^{-5}\right)$, and Basset $\left(Ba=a\sqrt{\frac{\rho_{f}U_{0}}{\pi\mu L}}\thickapprox1.7\cdot10^{-3}\right)$ dimensionless numbers. Thus, the particle equation of motion reduces to \begin{equation} \mathbf{v}_{p}=\mathbf{\mathbf{v}_{\mathit{LINEAR}}}+\mathbf{v_{\mathit{ICEO}}}+\mathbf{\mathbf{v}_{\mathit{DEP}}},\label{eq:velocity components-1} \end{equation} wherein the linear component comprises both EOF and EP terms, \begin{equation} \mathbf{\mathbf{v}_{\mathit{LINEAR}}}=\mathbf{\mathbf{\mathbf{v}_{\mathit{EOF}}}+\mathbf{\mathbf{v}_{\mathit{EP}}}}=\left(\zeta_{w}^{eq}-\zeta_{p}^{eq}\right)\mathbf{\nabla}\phi_{f}=\zeta^{eq}\mathbf{\nabla}\phi_{f},\label{eq:Linear velocity components} \end{equation} with the subscripts $w$ and $p$ standing for the microchannel wall and particle, respectively. Since both the microchannel wall and the particle surface are negatively charged, the EOF and EP counteract each other, with the former being dominant (see section IV.E and Fig. \ref{fig:particle-aggregation-plots}). To account for particle trapping, a non-divergence-free attractive force term must be added \cite{liu_dynamic_2010} which, as was recently shown for the same tracer particles used in the current study \cite{rozitsky_quantifying_2013,green_dynamical_2013}, is provided by the short-range positive DEP attractive force existing under DC field conditions. 
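The order-of-magnitude estimates justifying this reduction of the equation of motion can be reproduced in a few lines (a sketch using exactly the scales quoted above):

```python
import math

# Dimensionless numbers controlling the reduction of the particle equation
# of motion, evaluated with the order-of-magnitude scales quoted in the text.
a = 1e-6              # particle radius [m]
L = 1e-4              # channel length scale [m]
U0 = 1e-3             # velocity scale [m/s]
rho_p = rho_f = 1e3   # particle and fluid densities [kg/m^3]
mu = 1e-3             # fluid dynamic viscosity [Pa s]

St = 2 * a**2 * rho_p * U0 / (9 * mu * L)            # Stokes number
Fa = a**2 / (6 * L**2)                               # Faxen number
Ba = a * math.sqrt(rho_f * U0 / (math.pi * mu * L))  # Basset number

print(f"St = {St:.1e}, Fa = {Fa:.1e}, Ba = {Ba:.1e}")
# St ~ 2e-6, Fa ~ 1.7e-5, Ba ~ 1.8e-3: all << 1, so the unsteady, Faxen
# and Basset-history terms may be dropped.
```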
The dimensional (denoted by a tilde) dielectrophoretic particle velocity contribution is \begin{equation} \mathbf{\tilde{v}}_{\mathbf{\mathit{DEP}}}=\frac{\mathbf{\mathbf{\tilde{F}}}_{\mathbf{\mathit{DEP}}}}{6\pi\eta a}=\frac{\varepsilon_{0}\varepsilon_{f}a^{2}f_{CM}\tilde{\nabla}\left|\mathbf{\tilde{E}}\right|^{2}}{3\eta},\label{eq:DEP dimensional velocity} \end{equation} wherein $a$ is the particle radius and $f_{CM}$ is the Clausius-Mossotti factor \cite{jones_electromechanics_1995}. Non-dimensionalization yields \begin{equation} \mathbf{v}_{\mathbf{\mathit{DEP}}}=\frac{1}{3}f_{CM}\left(\frac{a}{h_{1}}\right)^{2}\nabla\left|\mathbf{E}\right|^{2},\label{eq:DEP nondimensionalization} \end{equation} whence, from the local approximation of the bulk fluid potential (\ref{eq:Linear symmetric fluid potential}), \begin{equation} \mathbf{v}_{\mathbf{\mathit{DEP}}}=-f_{CM}\left(\frac{a}{h_{1}}\right)^{2}\frac{\pi}{3}\left[1+\left(\frac{h_{1}}{h_{2}}\right)^{2}\right]^{\unitfrac{2}{3}}\left(\frac{3\pi r}{2}\right)^{\unitfrac{-5}{3}}\hat{\mathbf{r}}.\label{eq:DEP velocity} \end{equation} Thus, the total particle velocity (\ref{eq:velocity components-1}) can be rewritten in terms of the \textit{relative} contributions of the above linear (\ref{eq:linear velocity},\ref{eq:Linear velocity components}), induced (\ref{eq:induced velocity}) and dielectrophoretic (\ref{eq:DEP velocity}) velocities, \begin{equation} \frac{\mathbf{v}_{p}}{\zeta^{eq}}=\mathbf{\mathbf{\nabla}}\phi_{f}+\lambda_{ICEO}\frac{\mathbf{v}_{ICEO}}{\alpha}+\lambda_{DEP}\frac{\mathbf{v}_{DEP}}{f_{CM}\left(\nicefrac{a}{h_{1}}\right)^{2}},\label{eq:velocity components lambdas} \end{equation} where $\lambda_{ICEO}=\left|\nicefrac{\alpha}{\zeta^{eq}}\right|$ represents the relative importance of the ``induced'' and ``linear'' respective parts of $\mathbf{v}_{p}$, while $\lambda_{DEP}=\left|\left(\frac{a}{h_{1}}\right)^{2}\nicefrac{f_{CM}}{\zeta^{eq}}\right|$ represents the relative importance of its 
``dielectrophoretic'' and ``linear'' respective parts. \section{Results and Discussion} \subsection{Experimental} The time evolution of the particle trapping is quantified in Fig. \ref{fig:particle-aggregation-plots} \begin{figure} \begin{centering} \includegraphics[width=0.95\columnwidth]{\string"experimental_intensity\string".png} \par\end{centering} \caption{\label{fig:particle-aggregation-plots}Time evolution of the total fluorescent intensity (normalized by that corresponding to t=0 {[}s{]}) within the control volume (depicted in inset g) obtained using confocal microscopy for blue 1{[}\textgreek{m}m{]} fluorescent particles with $\frac{h_{2}}{h_{1}}=5$. Insets a-d are confocal images taken at varying times under a constant applied field of 1388 {[}V/cm{]}, while insets d-g correspond to a constant time of \textasciitilde{}1.3{[}s{]} at various applied fields. The arrow in inset a indicates the direction of the electric field (i.e. $E_{0}<0$) and the net particle motion away from the corner.} \end{figure} by determining the overall fluorescent intensity within a constant control volume for various applied electric fields ($E_{0}<0$) and particle sizes. The net particle motion away from the corner is in the direction of the electric field, indicating the dominance of EOF over EP. Images were obtained using the confocal microscope system. It is clearly demonstrated, based on the slope of the curves in Fig. \ref{fig:particle-aggregation-plots}, that the accumulation becomes more rapid as the voltage increases. Also, it is seen that for low enough fields (e.g. below $\sim$555 {[}V/cm{]}) the accumulation is almost negligible. This observation is in qualitative agreement with (\ref{eq:DEP nondimensionalization}), where it can be seen that the DEP trapping force scales quadratically with the electric field. The insets a-g in Fig. 
\ref{fig:particle-aggregation-plots} are confocal images taken at discrete times and various voltages to illustrate the particle accumulation in the corner vicinity, assisted by an ICEO vortex downstream of the corner. Insets a-d, corresponding to a constant applied field of 1388 {[}V/cm{]}, depict the accumulation of the particles in time until a saturation-like behavior occurs (at \textasciitilde{}1.3 {[}s{]}), which may correspond to a finite capacity of trapped particles (see Movie\#1 in \cite{_supplementary_2013}). Insets d-g are images taken at a time of \textasciitilde{}1.3 {[}s{]}, which depict the downstream vortex patterns and particle accumulation for various applied fields. It is clear from Fig. \ref{fig:particle-aggregation-plots} that saturation is reached at longer times for decreased applied fields, as can be expected from the corresponding observed decrease in vortex intensities along with particle trapping (insets d-g). Curiously, upon reversal of the electric field direction, a pronounced asymmetry of the downstream ICEO vortex together with the particle trapping (Fig. \ref{fig:Asymmetry Experimental}) \begin{figure} \begin{centering} \includegraphics[width=0.95\columnwidth]{\string"experimental_asymmetry\string".png} \par\end{centering} \caption{\label{fig:Asymmetry Experimental}Particle dynamics of fluorescent red 1{[}\textgreek{m}m{]} particles upon reversal of the electric field polarity (bold arrow). $\frac{h_{2}}{h_{1}}=5$; the externally applied field $\left|E_{0}\right|$ is 833 {[}V/cm{]}.} \end{figure} is observed. For an electric field directed from the narrow to the wide channel ($E_{0}>0$) no trapping is seen downstream of the corner, and the dark region presumably consists of the downstream vortex (see Movie\#2 in \cite{_supplementary_2013}). As will be shown theoretically below, for $E_{0}>0$, DEP is no longer effective at trapping particles downstream of the corner. 
However, upstream of the corner, DEP trapping is observed irrespective of the field direction. That the trapping increases with the particle size (Fig. \ref{fig:Particle size effects}) \begin{figure} \begin{centering} \includegraphics[width=0.95\columnwidth]{\string"experimental_size\string".png} \par\end{centering} \caption{\label{fig:Particle size effects}The effect of the particle size on the time evolution of the total fluorescent intensity (normalized by that corresponding to t=0 {[}s{]}) at various applied electric fields.} \end{figure} is yet another indication that the mechanism is DEP-controlled, as the force scales linearly with the particle volume (\ref{eq:DEP dimensional velocity}) (see Movie \#3 in \cite{_supplementary_2013}). In the above figures the electric fields within the narrow channels were calculated from the geometry of the two channels and the applied voltage according to a simplified Ohmic model, $E_{0}=V/\left(L\left(1+h_{1}/h_{2}\right)\right)$, where $L$ is the length of each of the narrow and wide channels (i.e. 15 mm) and $V$ is the applied potential difference across the reservoirs. \subsection{Theoretical model} We examine the evolution of the topology of the particle pathlines in the vicinity of the channel corner for varying ratios $\lambda_{ICEO}=\left|\nicefrac{\alpha}{\zeta^{eq}}\right|$ and $\lambda_{DEP}=\left|\left(\frac{a}{h_{1}}\right)^{2}\nicefrac{f_{CM}}{\zeta^{eq}}\right|$. Alternatively, by the assumed scaling of $\zeta^{eq}$, the following also corresponds to the development of the flow and particle dynamics with increasing intensity of the external field. Fig. \ref{fig:gilads-fields} presents the local particle pathline pattern in the vicinity of the corner (i.e., $r\ll1$) for $\frac{h_{2}}{h_{1}}=5$ at the indicated values of $\lambda_{ICEO}$ and $\lambda_{DEP}$. Qualitatively similar figures are obtained when selecting other values of $\frac{h_{2}}{h_{1}}$. 
Inset (a) depicts the irrotational antisymmetric velocity field (the first term in (\ref{eq:velocity components lambdas})) corresponding to the linear part of $\mathbf{v}_{p}$ ($\lambda_{ICEO}=0,\,\lambda_{DEP}=0$), which is derivable from $\phi_{f}$ (\ref{eq:Linear symmetric fluid potential}) and represents the superposition of EOF and opposing EP. The induced part ($\lambda_{ICEO}\rightarrow\infty$) of the electro-osmotic flow, $\mathbf{v}_{\mathbf{\mathit{ICEO}}}$, is presented in inset (b). The symmetric field obtained from (\ref{eq:induced velocity}) demonstrates the formation of a jet by the convergent flow at the corner. Inset (d) of the same figure shows the flow resulting from the superposition of both the linear and induced flow components for $\lambda_{ICEO}=0.25$. As $r\rightarrow0$ the induced velocity ($\varpropto r^{-\unitfrac{2}{3}}$) prevails over the linear ($\varpropto r^{-\unitfrac{1}{3}}$) part. Accordingly, it produces a reverse flow along the wall immediately downstream of the corner. This is expected to create a domain of closed streamlines, as indeed appears in Fig. \ref{fig:gilads-fields}(d). \begin{figure} \begin{centering} \includegraphics[width=0.95\columnwidth]{\string"theory_symmetric\string".png} \par\end{centering} \caption{\label{fig:gilads-fields} Analytically derivable local flow streamlines (insets (a), (b) and (d)) and particle pathline patterns (insets (c) and (e)) in the vicinity of the corner for $\frac{h_{2}}{h_{1}}=5$, $E_{0}<0$, and the indicated values of $\lambda_{ICEO}$ and $\lambda_{DEP}$.} \end{figure} The extent of this domain diminishes with decreasing $\lambda_{ICEO}$, or equivalently with decreasing electric field intensity, in qualitative agreement with the experimental results shown in insets (d-g) of Fig. \ref{fig:particle-aggregation-plots}. Upstream of the corner the superposed velocities reinforce each other, and for this reason no reverse flow appears at the wall. 
As shown in the sequel, reverse flow and closed-streamline domains may nevertheless occur upstream of the corner (Fig. \ref{fig:COMSOL results}(a,c) and Yossifon et al. \cite{yossifon_electro-osmotic_2006}) for sufficiently large values of $\lambda_{ICEO}$. However, unlike the vortex occurring downstream of the corner, this reverse flow occurs near the opposite channel walls ($D_{\infty}EF_{\infty}$) and is therefore not a \textquotedblleft{}local\textquotedblright{} phenomenon. Inset (c) represents the motion of the particles dominated solely by DEP ($\lambda_{DEP}\rightarrow\infty$), whereby the colloids are attracted radially towards the corner, which acts as a sink. Finally, the particle pathlines resulting from the superposition of the dielectrophoretic contribution ($\lambda_{DEP}=0.125$) on the ICEO fluid flow ($\lambda_{ICEO}=0.25$) are shown in inset (e). Here, as $r\rightarrow0$, it is the DEP velocity ($\varpropto r^{-\unitfrac{5}{3}}$) that prevails over both the induced ($\varpropto r^{-\unitfrac{2}{3}}$) and linear ($\varpropto r^{-\unitfrac{1}{3}}$) velocity components. This manifests itself in pathlines that are attracted downstream towards the corner, where particles are trapped. Additionally, the DEP force \textquotedblleft{}breaks\textquotedblright{} the closed streamlines obtained for the ICEO flow (inset (d)) at the downstream side of the corner. This results in a significant enhancement of the trapping rate due to particles arriving from the upstream side. This prediction is in agreement with the experimental results shown in Figs. \ref{fig:particle-aggregation-plots},\ref{fig:Asymmetry Experimental}, which illustrate the DEP trapping of the particles both at the upstream side of the corner, to which particle pathlines are attracted, and at its downstream side, where rapid accumulation of particles is assisted by the ICEO vortex. 
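A crude estimate of where each contribution takes over (ours; angular factors are ignored, and the figure's weights $\lambda_{ICEO}=0.25$, $\lambda_{DEP}=0.125$ are assumed) follows from equating the radial magnitudes $r^{-1/3}$, $\lambda_{ICEO}\,r^{-2/3}$ and $\lambda_{DEP}\,r^{-5/3}$:

```python
# Crossover radii (dimensionless, lengths scaled with h1) below which each
# velocity contribution starts to dominate; angular prefactors are ignored,
# so these are order-of-magnitude estimates only.
lam_iceo, lam_dep = 0.25, 0.125   # weights used in the figure

# lam_iceo * r**(-2/3) = r**(-1/3)            ->  r = lam_iceo**3
r_induced_beats_linear = lam_iceo ** 3
# lam_dep * r**(-5/3) = r**(-1/3)             ->  r = lam_dep**(3/4)
r_dep_beats_linear = lam_dep ** 0.75
# lam_dep * r**(-5/3) = lam_iceo * r**(-2/3)  ->  r = lam_dep / lam_iceo
r_dep_beats_induced = lam_dep / lam_iceo

print(r_induced_beats_linear, r_dep_beats_linear, r_dep_beats_induced)
# The DEP term overtakes the induced and linear terms at considerably
# larger radii than the induced term overtakes the linear one, consistent
# with DEP-dominated trapping as r -> 0.
```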
The linear relationship between the DEP force and the particle volume (\ref{eq:DEP velocity}) is clearly visible in the experimental results (Fig. \ref{fig:Particle size effects}), wherein the rate of accumulation increases with increasing particle size. \subsection{Asymmetry of the problem} An intriguing feature of the experimental results is the distinct difference in particle dynamics and their eventual accumulation upon reversal of the electric field polarity (see Fig. \ref{fig:Asymmetry Experimental}). The theoretical predictions based on the approximate 'local' expression of the potential (\ref{eq:Linear symmetric fluid potential}; Fig. \ref{fig:gilads-fields}), which is symmetric with respect to $\theta$, cannot account for this effect, which stems from the geometrical asymmetry of the L-junction microchannel widths $\left(\frac{h_{2}}{h_{1}}\neq1\right)$. In contrast, eqs. (\ref{eq:SC mapping 1}-\ref{eq:Point charge potential in mapped plane}) are the 'global' solution of the potential, from which the linear asymmetric particle velocity field can be directly extracted to obtain \begin{subequations} \begin{equation} \mathbf{v}_{LINEAR}=\zeta^{eq}\mathbf{\mathbf{\nabla}}\phi_{f}=-\zeta^{eq}\frac{1}{\pi}Re\left\{ \frac{1}{\omega-1}\frac{d\omega}{dt}\frac{dt}{dz}\nabla z\right\} ,\label{eq:asymmetric linear - derivatives} \end{equation} which from equations (\ref{eq:SC mapping 1},\ref{eq:SC mapping 2}) yields \begin{equation} \mathbf{v}_{LINEAR}=\zeta^{eq}\frac{h_{1}}{h_{2}}\left[Re\left\{ it\right\} \hat{\mathbf{x}}-Re\left\{ t\right\} \hat{\mathbf{y}}\right].\label{eq:asymmetric linear velocity} \end{equation} \end{subequations} We combine this asymmetric linear field (\ref{eq:asymmetric linear velocity}) with the induced (\ref{eq:induced velocity}) and DEP (\ref{eq:DEP velocity}) velocities in their symmetric form, as they decay more rapidly than the linear part ($\varpropto r^{-\unitfrac{2}{3}}$ and $\varpropto r^{-\unitfrac{5}{3}}$, as opposed to $\varpropto 
r^{-\unitfrac{1}{3}}$, respectively). These combined fields were then calculated in the vicinity of the corner for both polarities of the electric field, and are illustrated in Fig. \ref{fig:asymmetric field}. \begin{figure} \begin{centering} \includegraphics[width=0.95\columnwidth]{\string"theory_assymetric\string".png} \par\end{centering} \caption{\label{fig:asymmetric field}Particle pathlines upon reversal of the applied electric field polarity, following an asymmetric linear field with symmetric DEP and ICEO fields, in the vicinity of the corner for $\frac{h_{2}}{h_{1}}=5$ and $\lambda_{ICEO}=0.25$. The narrow channel is on the bottom-left while the wide channel is on the top-right (see Fig. \ref{fig:The-L-junction-configuration.}).} \end{figure} From Fig. \ref{fig:asymmetric field} it is seen that, qualitatively, the DEP trapping mechanism is similar to that described in the symmetric case (Fig. \ref{fig:gilads-fields}(e)), except that when the field is directed from the narrow to the wide channel (Fig. \ref{fig:asymmetric field}(b)) the resulting downstream vortex is larger, and so is the region of particle pathlines downstream of the corner that terminate at the corner tip (Fig. \ref{fig:asymmetric field}(d)). This alone can hardly explain the distinct difference in particle entrapment seen in Fig. \ref{fig:Asymmetry Experimental}. However, it is clear that the enlargement of the downstream vortex in the wider channel, compared with that in the opposite direction, due to the asymmetric linear velocity field, occurs concurrently with the decrease of the DEP force due to the same asymmetry of the electric field. With further increase of the electric field, a 'global' solution accounting for the finite dimensions of the channel's width would further emphasize this asymmetry, as will be shown in Fig. 
\ref{fig:COMSOL results}, where the size of the vortex in the narrow channel is shown to quickly approach its maximum size, which is dictated by the channel width. \subsection{Numerical results} Using a commercial code (COMSOL), the decoupled electrostatic (Laplace equations within the bulk fluid and wall domains along with (\ref{eq:Robin Cond. on wall-1})) and hydrodynamic (Stokes and continuity equations along with (\ref{eq:zeta=00003Deq+icd})) equations were solved within the full microchannel domain, rather than only in the very vicinity of the corner as in the analytically derived eqs. (\ref{eq:linear velocity},\ref{eq:induced velocity},\ref{eq:DEP velocity}). On top of this solution, the particle DEP velocity component (\ref{eq:DEP nondimensionalization}) was added in order to obtain the particle streamlines. Fig. \ref{fig:COMSOL results} \begin{figure} \begin{centering} \includegraphics[width=0.95\columnwidth]{\string"numeric\string".png} \par\end{centering} \caption{\label{fig:COMSOL results} \textquotedblleft{}Global\textquotedblright{} numerical solution of the particle pathline patterns upon reversal of the applied electric field polarity for $\frac{h_{2}}{h_{1}}=5$ and $\lambda_{ICEO}=2.5$.} \end{figure} provides a \textquotedblleft{}global\textquotedblright{} description of the streamline pattern for $\lambda_{ICEO}=2.5$ for two opposite electric field polarities, with and without the DEP contribution. Insets (a) and (c) of the figure show the vortex formation downstream of the corner due to the combined induced and linear flows under reversed field polarities. While the induced ejection intensity is the same in both cases, the linear flows (and the electric fields) roughly differ by a factor of $\tfrac{h_{2}}{h_{1}}$, which corresponds to the ratio of the average velocities within the narrow and wide channels away from the corner, affecting the size of the formed downstream vortex. 
Insets (b) and (d) of the figure show the particle pathlines corresponding to the flow fields in insets (a) and (c), respectively, with the addition of the DEP force. That the DEP force attracts particles towards the corner is clearly seen both on the upstream side, where streamlines terminate at the corner, and on the downstream side, where the streamlines (insets (b) and (d)) also terminate at the corner after converging along the corner wall. These simulation results are in good agreement with both the theoretical results (Fig. \ref{fig:gilads-fields}, Fig. \ref{fig:asymmetric field}) and the experimental observations (Fig. \ref{fig:Asymmetry Experimental}). Interestingly, upon reversal of the field polarity (inset (d)) the particle streamlines look very different from those in inset (b). This stands in contrast to the analytically predicted asymmetric plot (Fig. \ref{fig:asymmetric field}), where, although the size of the downstream vortex was different in correspondence with the channel width, the same qualitative behavior of particle trapping was predicted. This discrepancy can be explained by the fact that the results shown in Fig. \ref{fig:COMSOL results} were calculated for relatively high applied fields, wherein the 'global' solution was strongly influenced by the ``far'' boundaries. In contrast, despite the global linear contribution, the analytical solution remains local (Fig. \ref{fig:asymmetric field}) due to the local approximations of the DEP and ICEO terms. This was confirmed by numerical simulations at low enough voltages (Fig. S1 in \cite{_supplementary_2013}), which approximate the local solution of Fig. \ref{fig:asymmetric field}. With the inclusion of the DEP force, the vortex center points (both upstream and downstream) become unstable. In insets (a),(c) the vortices have closed stream/pathlines, as opposed to (b),(d) where particles move in a spiral path into/away from the vortex center. 
This loss of stability of the fixed points is due to the attracting p-DEP force at the corner, which transforms the closed streamlines into unstable spirals that eject particles from the vortex center. The above arguments support the experimental findings (Fig. \ref{fig:Asymmetry Experimental}), where particles were not observed to be trapped upstream of the corner when the field points from the narrow to the wide channel (i.e. $E_{0}>0$), in contrast to the pronounced trapping occurring in the opposite direction. Inset (c) here resembles Fig. 4(c) in \cite{yossifon_electro-osmotic_2006} in the sense that an upstream vortex may develop, except that the same flow pattern occurs at yet larger $\lambda_{ICEO}$ values (or equivalently larger electric fields). The difference in the $D_{\infty}E$ (bottom) boundary condition (a Helmholtz-Smoluchowski slip condition at the wall instead of a symmetry line) delays the flow reversal within the narrow channel upstream of the corner, a reversal that results from the continuity of the mass flow rate and is necessary for obtaining the downstream vortex. \subsection{Scaling arguments regarding the different competing mechanisms} Here we use the following quantities: $\epsilon_{0}=8.85\cdot10^{-12}\unitfrac{F}{m}$ is the vacuum permittivity; $\epsilon_{f}=80$ and $\epsilon_{w}=3$ are the relative dielectric constants of the fluid and microchannel (PDMS), respectively; $R=8.314\unitfrac{J}{\left(K\cdot mol\right)}$ is the universal gas constant; $T=300\,K$ is the absolute temperature; $F=9.648\cdot10^{4}\unitfrac{C}{mol}$ is the Faraday constant; $c_{0}=10^{-5}M$ is the bulk ionic concentration; $h_{1}=80\cdot10^{-6}m$ is the narrow channel width; $a=0.5\cdot10^{-6}m$ is the particle radius; $\sigma_{p}=133.5\unitfrac{\mu S}{cm}$ (obtained for DI, i.e. 
$\sigma_{f}=0.05\unitfrac{\mu S}{cm}$ \cite{rozitsky_quantifying_2013}) and $\sigma_{f}=2\unitfrac{\mu S}{cm}$ (measured) are the conductivities of the particle (an effective conductivity due to surface conduction) and the medium, respectively; the zeta potential of the PDMS can be estimated as \cite{kirby_zeta_2004} $\tilde{\zeta}_{w}^{eq}=-50mV$ for pH$\thickapprox$5.5 (measured), while that of the polystyrene particles is $\widetilde{\zeta}_{p}^{eq}=-34mV$ (measured using a Zetasizer for 1\textgreek{m}m particles in a $3\cdot10^{-5}M$ solution, $\sigma_{f}=4\unitfrac{\mu S}{cm}$). Hence, $\kappa^{-1}=\lambda_{D}=\sqrt{\frac{\epsilon_{0}\epsilon_{f}RT}{2F^{2}c_{0}}}\thickapprox97nm$ is the Debye length. Using the expressions $\alpha=\frac{\epsilon_{w}\lambda_{D}}{\epsilon_{f}h_{1}}$, $\zeta^{eq}=\frac{\tilde{\zeta}^{eq}}{E_{0}h_{1}}$ (wherein the tilde stands for the dimensional zeta-potential) and $f_{CM}=\frac{\sigma_{p}-\sigma_{f}}{\sigma_{p}+2\sigma_{f}}\approx0.96$, then for a value of $E_{0}=1388\unitfrac{V}{cm}$ (corresponding to the case when the combined effect of all competing mechanisms is most clearly seen experimentally in Fig.3): \begin{subequations} \begin{equation} \lambda_{ICEO}=\frac{\alpha}{\zeta^{eq}}=\frac{\alpha}{\zeta_{w}^{eq}-\zeta_{p}^{eq}}=\frac{\epsilon_{w}\lambda_{D}}{\epsilon_{f}\left(\tilde{\zeta_{w}^{eq}}-\tilde{\zeta_{p}^{eq}}\right)}E_{0}\approx0.028 \end{equation} \begin{equation} \lambda_{DEP}=\left(\frac{a}{h_{1}}\right)^{2}\frac{f_{CM}}{\zeta^{eq}}=\frac{a^{2}}{h_{1}}\frac{f_{CM}}{\left(\tilde{\zeta_{w}^{eq}}-\tilde{\zeta_{p}^{eq}}\right)}E_{0}\approx0.023 \end{equation} \end{subequations} Interestingly, the ratio $\lambda_{ICEO}/\lambda_{DEP}\approx1.21$ is in agreement with the theoretical analysis, wherein a ratio of $O\left(1\right)$ was used. 
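These scaling estimates can be reproduced numerically (a sketch; all inputs are the measured or literature values quoted in this subsection, and the small differences from the quoted $\lambda$ values reflect rounding of the inputs):

```python
import math

# Scaling estimates of lambda_ICEO and lambda_DEP from the quantities
# quoted in the text (SI units throughout).
eps0, eps_f, eps_w = 8.85e-12, 80.0, 3.0
R, T, F = 8.314, 300.0, 9.648e4
c0 = 1e-2                            # 10^-5 M expressed in mol/m^3
h1, a = 80e-6, 0.5e-6                # channel width and particle radius [m]
sigma_p, sigma_f = 133.5e-4, 2e-4    # conductivities [S/m]
zeta_w, zeta_p = -50e-3, -34e-3      # equilibrium zeta potentials [V]
E0 = 1388e2                          # 1388 V/cm in V/m

lam_D = math.sqrt(eps0 * eps_f * R * T / (2 * F**2 * c0))  # Debye length
f_CM = (sigma_p - sigma_f) / (sigma_p + 2 * sigma_f)       # Clausius-Mossotti
dzeta = abs(zeta_w - zeta_p)

lam_iceo = eps_w * lam_D * E0 / (eps_f * dzeta)
lam_dep = (a**2 / h1) * f_CM * E0 / dzeta

print(f"lam_D ~ {lam_D*1e9:.0f} nm, f_CM ~ {f_CM:.2f}")
print(f"lam_ICEO ~ {lam_iceo:.3f}, lam_DEP ~ {lam_dep:.3f}, "
      f"ratio ~ {lam_iceo/lam_dep:.2f}")  # ratio ~ 1.2, an O(1) number
```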
These values are smaller by factors of $\thickapprox$9 and $\thickapprox$5 than the theoretical values of $\lambda_{ICEO}=0.25$ and $\lambda_{DEP}=0.125$ (Fig. 6), corresponding to the case when all competing mechanisms exist simultaneously so as to qualitatively reproduce the combined effect observed experimentally at high applied fields ($E_{0}=1388\unitfrac{V}{cm}$). Such a discrepancy is not unexpected for the very crude estimates of a scaling analysis. However, one plausible explanation for it could be that while DEP, ICEP and EP are local effects (the former two are non-negligible only in the vicinity of the corner while the latter depends only on the local electric field), the electroosmotic flow must obey the continuity equation (i.e. mass flux balance) everywhere in the system. Thus, it is more prone to disturbances, e.g. reservoir-microchannel entrance effects inducing back pressure, which may lower the net EOF, resulting in better agreement between theory and experiments. \section{Concluding remarks} The purpose of the present contribution is to study the colloid dynamics in conjunction with the appearance of ICEO vortices in a micro-channel junction configuration. Our main experimental results appear in Fig. \ref{fig:particle-aggregation-plots}, where we demonstrate the rapid particle trapping and accumulation in the vicinity of the corner. The exact nature of the interparticle forces {[}e.g. Posner 2009 \cite{posner_properties_2009}{]} existing in the corner vicinity when the particle accumulation becomes significant is beyond the scope of the current study, as we are mainly concerned with the physical mechanism that is responsible, in the first place, for the migration of particles towards the corner where they can potentially accumulate. 
This phenomenon was shown theoretically, at the single-particle level, to be due to the combination of a short-range, non-divergence-free DEP trapping force, which is assisted by a far-field electro-convection downstream vortex that feeds the corner with particles (Fig. \ref{fig:gilads-fields}). The analytical model is a local approximation of the global solution in the vicinity of the corner. Accordingly, the resulting expressions involve no indeterminate parameters whose evaluation would require the use of various ad hoc estimates, in particular with respect to obtaining a correct description of the dielectrophoretic contribution to the particle velocity superposed on the ICEO and linear flow velocities. Finally, we clearly demonstrate that the sharp corner geometry, in addition to its potential to greatly enhance forced convection due to the intensified velocity near the channel walls around the induced vortex, can also be used for rapid trapping of colloids. \section*{Acknowledgments} We thank Alicia Boymelgreen for her invaluable inputs. This work was supported by ISF grant 1078/10. The fabrication of the chip was made possible through the financial and technical support of the Technion RBNI (Russell Berrie Nanotechnology Institute) and MNFU (Micro Nano Fabrication Unit).
2105.00282
\section{Introduction} \label{sec:intro} Various ML pipeline composition and optimisation methods have been proposed in recent years to find valid and well-performing pipelines given both a problem (i.e., a dataset) and a set of ML components with tunable hyperparameters \cite{sabu18,zohu19,ngma20,kemu20}. One of the most successful ML pipeline composition and optimisation methods is based on sequential model-based algorithm configuration (SMAC) \cite{thhu13}. The key idea of this method is to find a balance between the exploration and exploitation of configuration spaces. The exploitation searches for ML pipelines that are similar to the current best-performing pipelines in terms of pipeline structures and values of hyperparameters, whereas the exploration randomly searches for pipelines within the configuration spaces to escape local optima potentially emerging from the exploitation part of the optimisation process. There are several automated machine learning (AutoML) tools that implement SMAC. AutoWeka version 0.5 \cite{thhu13} implements SMAC to search for one-component pipelines. The configuration spaces of AutoWeka 0.5 include only predictors and their hyperparameters. AutoWeka4MCPS extends the configuration spaces with data preprocessing components to construct multi-component pipelines \cite{sabu18}. This extension of the configuration space has the potential to explore a wider range of diverse, well-performing ML pipelines but comes with a number of challenges. One of the key challenges of large configuration spaces is that it is more difficult, and requires more time, to find valid and well-performing pipelines. There is therefore a need and great interest in approaches offering intelligent reduction of the configuration spaces by selecting only promising, well-performing ML components. 
Such a reduction has the crucial advantage of allowing the ML pipeline composition and optimisation methods to work in, and explore, smaller configuration spaces which are constructed from the most suitable and well-performing ML components. There have been several attempts to deal with the problem of configuration space reduction using two main approaches: predefined pipeline structures and meta-learning. Firstly, the approach using predefined pipeline structures reduces configuration spaces by defining fixed structures \cite{sabu18,fekl15} or ad-hoc specifications \cite{depi17,tsga12,olmo16,wemo18,giya18} (e.g., context-free grammars) to construct pipelines based on experts' knowledge. A drawback of this approach is the potential bias of the experts' knowledge used to construct such fixed pipeline structures. Therefore, it potentially limits the opportunity to find better pipelines residing outside of such predefined templates or structures. Secondly, the approach using meta-learning reduces the configuration spaces by learning from prior evaluations. Configuration spaces are reduced by selecting top well-performing components \cite{dbbr18,va19,lebu15,albu15}, or important hyperparameters and ranges of these hyperparameters, for a given problem \cite{albu15,rihu18,prbo19,wemu20}. This approach requires intensive evaluations of many ML components, their hyperparameters and many datasets that have diverse characteristics \cite{lega10,lega10a,albu15}. Because of the computational challenges, previous studies only investigated the importance of hyperparameters for up to six algorithms \cite{rihu18,prbo19,wemu20}. Therefore, generating a very reliable meta-knowledge base from prior evaluations is very difficult. 
Since prior evaluations are usually available from executing automated ML pipeline composition and optimisation methods, we would like to address the research questions of how many and which ML components we should select to design configuration spaces, taking into account the limited accuracy and reliability of a meta-knowledge base generated from such prior evaluations. Our study employs the relative landmarking method \cite{va19} to reduce configuration spaces. We construct a meta-knowledge base from prior evaluations of two-hour ML pipeline composition and optimisation runs of SMAC over 20 datasets. The meta-knowledge base contains the mean error rate of ML components for each prior dataset. A new dataset is evaluated by executing a certain number of landmarkers (i.e., single-component pipelines). The performance of the landmarkers on the new dataset is correlated with the performance of these landmarkers on prior datasets from the meta-knowledge base. The top \textit{k} well-performing ML components of the most similar dataset are selected to construct configuration spaces. To this end, this study makes two main contributions: \begin{itemize} \item A definition of the problem of ML pipeline composition and optimisation with dynamic configuration spaces. \item A study of the impact of the levels of the configuration space reduction (i.e., specified by different values of \textit{k}) on the performance of ML pipeline composition and optimisation methods given a certain level of reliability of the meta-knowledge base. \end{itemize} This paper is divided into 6 sections. After the Introduction, Section \ref{sec:related_work} presents previous studies on reducing configuration spaces in the context of AutoML. Section \ref{sec:search_space_modelling} presents the definition of the problem of ML pipeline composition and optimisation with dynamic configuration spaces. 
Section \ref{sec:relative_landmarking} presents the methodology of using the relative landmarking method to reduce configuration spaces. Section \ref{sec:experiment} presents experiments to investigate the problem of configuration space reduction. Finally, Section \ref{sec:conclusion} concludes this study. \section{Related Work} \label{sec:related_work} The growing number of available ML methods, with their often complex hyperparameters, leads to a rapid expansion of the ML pipeline configuration spaces. Intelligent reduction of these configuration spaces enables the ML pipeline composition and optimisation methods to find valid and well-performing ML pipelines faster given typical constraints of execution environments and time budget. We review two main approaches to reducing the configuration spaces in the context of ML pipeline composition and optimisation. \textit{Predefined ML pipeline structures and components' hyperparameters:} This approach can be implemented as fixed pipeline templates \cite{sabu18,fekl15} or ad-hoc specifications \cite{depi17,tsga12,olmo16,wemo18,giya18} such as context-free grammars. Moreover, specific ranges of hyperparameter values, which highly contribute to well-performing pipelines, are also predefined in these specifications. The advantage of this approach is that it reduces configuration spaces by restricting the length of ML pipelines, the ML components and the orders of these components based on experts' knowledge. However, the disadvantage of this approach is that there might be very well-performing ML pipelines outside of the predefined templates. \textit{Meta-learning:} This approach aims to reduce configuration spaces by learning the characteristics of prior evaluations. The main idea of this approach is to establish relationships between datasets' characteristics and the performance of ML components based on prior evaluations \cite{dbbr18,lebu15,lega10,albu15}. 
The representations of datasets' characteristics can be meta-features or the relative performance of landmarkers. The landmarkers are usually one-component pipelines that are used to evaluate the performance of ML components. The meta-learning approach can be used to reduce configuration spaces by selecting a number of well-performing ML components \cite{albu15,dbbr18,va19,lebu15} or important hyperparameters for tuning \cite{albu15,rihu18,prbo19,wemu20}. To select top well-performing pipelines, \textit{average ranking} and \textit{active testing} were used to recommend one-component pipelines for new datasets \cite{dbbr18}. However, these approaches have not been applied to AutoML yet. Moreover, these studies limit their scope to searching for one-component pipelines. To perform the hyperparameter optimisation for the recommended one-component pipelines, these studies also use grid search, which has been shown to be less effective than SMAC \cite{thhu13}. Previous studies have also investigated the importance of hyperparameters \cite{rihu18,prbo19,wemu20} from prior evaluations. An important hyperparameter can contribute to a high variance of the performance (i.e., error rate) of an ML algorithm. For example, gamma and complexity are the most important hyperparameters of SVM \cite{rihu18}. The results of these studies can be used to reduce configuration spaces by removing less important hyperparameters (i.e., setting less important hyperparameters to their default values) and less important ranges of hyperparameter values. This reduction of configuration spaces enables ML pipeline composition and optimisation methods to dedicate more time to searching for the best values of the important hyperparameters, as they have the highest impact on finding well-performing pipelines. A disadvantage of these studies is that the importance of hyperparameters of ML components is studied on small sets of algorithms (i.e., up to 6 algorithms). 
The reason is that it is extremely challenging and time-consuming to evaluate all possible hyperparameters of the available algorithms. Unlike these approaches, our approach acknowledges and accepts a limited level of reliability of the prior evaluations which are used to build a meta-knowledge base. We use the relative landmarking method to select well-performing ML components to construct configuration spaces for the ML pipeline composition and optimisation. We also investigate the impact of the levels of the configuration space reduction on the performance of ML pipeline composition and optimisation methods. \section{Modelling Configuration Spaces for ML Pipeline Composition and Optimisation } \label{sec:search_space_modelling} To control the extension and reduction of configuration spaces in the context of AutoML, we model configuration spaces using a generic tree structure. After that, we extend the problem of pipeline composition and optimisation \cite{sabu18} by adding a factor representing pipeline structures. Finally, we define the problem of pipeline composition and optimisation with dynamic configuration spaces as a constraint optimisation problem. \subsection{Configuration Space Modelling} Given a tree-structured set of nodes $\mathcal{X}$ and a set of ML components $\mathcal{A}$, for each tree node $x_i \in \mathcal{X}$: \begin{equation} \small{ x_i=(\mathcal{A}_i,\lambda_i, \epsilon_i) \label{eq:treenode} } \end{equation} where $\mathcal{A}_i$ is an ML component, $\lambda_i$ is the set of hyperparameters for component $\mathcal{A}_i$, and $\epsilon_i$ represents the active/inactive status of this node. If a node is a predictor and is inactive, all of its child nodes are recursively inactive. A tree-based configuration space $\mathcal{T}$ is defined as: \begin{equation} \small{ \mathcal{T}=(\mathcal{X},f_p) \label{eq:searchtree} } \end{equation} Function $f_p$ assigns each node $x_i$ to a parent node $f_p(x_i)$. 
A root node has $x_i=f_p(x_i)$. Each ML pipeline $p$ is a search path constructed by backtracking from an active leaf node to the root node. \begin{equation} \small{ p = (g,\vec{\mathcal{A}}, \vec{\mathcal{\lambda}}) \label{eq:pipeline} } \end{equation} where $g$ is a sequential pipeline structure which defines how the components are connected, $\vec{\mathcal{A}}\in \mathcal{A}$ is a vector of the selected components, and $\vec{\mathcal{\lambda}}\in \mathcal{\lambda}$ is a vector of the hyperparameters of all selected components. A pipeline $p$ can also be called a configuration. Configuration spaces can be extended by adding tree nodes, or reduced by deactivating tree nodes which form invalid or poorly performing pipelines. \subsection{The Problem of Pipeline Composition and Optimisation With Dynamic Configuration Spaces} The problem of pipeline composition and optimisation is to find the most promising machine learning pipeline $(g,\vec{\mathcal{A}}, \vec{\mathcal{\lambda}})^{*}$, including the pipeline structure, the selected components and their hyperparameters. The definition of this problem is extended from \cite{sabu18} as: \begin{equation} \small{ (g,\vec{\mathcal{A}}, \vec{\mathcal{\lambda}})^{*} = argmin\frac{1}{k}\sum_{i=1}^{k} \mathcal{L}((g,\vec{\mathcal{A}},\vec{\mathcal{\lambda}})^{(i)},\mathcal{D}^{(i)}_{train},\mathcal{D}^{(i)}_{valid}) } \label{eq:pipeline_problem} \end{equation} where $\mathcal{D}_{train}$ and $\mathcal{D}_{valid}$ are training and validation datasets. Equation \ref{eq:pipeline_problem} minimises the k-fold cross-validation error of the loss function $\mathcal{L}$. Equation \ref{eq:pipeline_problem} does not consider changes of the configuration space $\mathcal{T}$. 
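The tree-based configuration space of Section \ref{sec:search_space_modelling} can be illustrated in code. The following is a minimal sketch (the \texttt{Node} and \texttt{pipelines} helpers and the component names are hypothetical, not part of AutoWeka4MCPS): each pipeline is a root-to-active-leaf path, and deactivating a node removes every pipeline whose path passes through it, which is exactly how the space is reduced.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """A tree node x_i = (A_i, lambda_i, eps_i): component, hyperparameters, active flag."""
    component: str
    hyperparams: dict = field(default_factory=dict)
    active: bool = True
    parent: "Node" = None          # f_p(x_i); the root is its own parent
    children: list = field(default_factory=list)

def add_child(parent, node):
    node.parent = parent
    parent.children.append(node)
    return node

def pipelines(root):
    """Each pipeline is the path between the root and an active leaf node."""
    found = []
    def walk(n, path):
        if not n.active:
            return                 # deactivated subtrees are pruned from the space
        path = path + [n]
        if not n.children:
            found.append([(m.component, m.hyperparams) for m in path])
        for c in n.children:
            walk(c, path)
    walk(root, [])
    return found

# A toy space: a preprocessing root with two predictor leaves.
root = Node("ReplaceMissingValues"); root.parent = root
nb = add_child(root, Node("NaiveBayes"))
svm = add_child(root, Node("SMO", {"C": 1.0}))
svm.active = False                 # reducing the space = deactivating nodes
print(pipelines(root))             # only the NaiveBayes path remains
```

The sketch walks the tree top-down, which enumerates the same paths as backtracking from each active leaf to the root.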
If we reduce this configuration space by selecting promising well-performing pipelines to form the configuration space $\mathcal{T}^{*}$, the problem of pipeline composition and optimisation can be extended as a constraint optimisation problem as follows: \begin{equation} \small{ (g,\vec{\mathcal{A}}, \vec{\mathcal{\lambda}})^{*} = argmin\frac{1}{k}\sum_{i=1}^{k} \mathcal{L}((g,\vec{\mathcal{A}}, \vec{\mathcal{\lambda}})^{(i)} \in \mathcal{T}^{*},\mathcal{D}^{(i)}_{train},\mathcal{D}^{(i)}_{valid}) } \label{eq:extended_pipeline_problem} \end{equation} \textit{subject to} \begin{equation} \small{ \mathcal{T}^{*} \subset \mathcal{T} \:\:\:\:\: \textit{and} \:\:\:\:\: h(\mathcal{T}^{*}) = \emptyset \label{eq:extended_pipeline_problem_constraints_1} } \end{equation} where $h(\mathcal{T}^{*})$ is a function to find invalid pipelines. \section{Configuration Space Reduction Using the Relative Landmarking Method} \label{sec:relative_landmarking} In this section, we present the methodology to design configuration spaces for the ML pipeline composition and optimisation given a meta-knowledge base of uneven quality and a certain level of uncertainty. A relative landmarking method, inspired by the \textit{average ranking} \cite{dbbr18} and the \textit{relative landmarking} \cite{va19} approaches, is proposed. These methods have been used to recommend an ML algorithm for a given dataset, but they have neither considered the optimisation time budget nor been proposed in the context of the automated composition and optimisation of ML pipelines, which are both the subject of our primary investigations. Moreover, there is uncertainty in the meta-knowledge base, which describes the relationships between the average performance of ML components and the datasets' characteristics. This uncertainty is due to the lack of a thorough evaluation covering the combined effects of ML components in pipelines, the diversity of pipeline structures and the components' hyperparameters. 
More specifically, because the meta-knowledge base is generated from prior evaluations of the automated composition and optimisation, ML components are evaluated a different number of times. Some components have not been evaluated at all, or only a small number of times, due to the limit of the optimisation time budget. Thorough evaluations that would generate highly reliable meta-knowledge are prohibitively expensive in the context of the ML pipeline composition and optimisation task, and we are interested in investigating whether, and how, partial or less reliable meta-knowledge can be effectively used in our context. Firstly, we present the problem formulation of configuration space reduction using the relative landmarking method. Secondly, we present the landmarkers that are used to evaluate a new dataset to understand its characteristics. Thirdly, we present the algorithm to design configuration spaces given a new dataset and prior evaluations on other problems from which, we assume, useful knowledge can be extracted. \subsection{Problem Formulation} \label{sec:problem_formulation} We define the problem of pipeline composition and optimisation with dynamic configuration spaces in Equation \ref{eq:extended_pipeline_problem}. An optimal configuration space $\mathcal{T}^{*}$ can be designed at the initialisation of the ML pipeline composition and optimisation methods. The configuration space $\mathcal{T}^{*}$ of the new dataset is constructed from the \textit{k} best-performing ML components of the most similar dataset from prior evaluations. Given a new dataset $t_{new}$, we need to find a prior dataset $t^{*}_{i}$ which is similar to $t_{new}$ \cite{va19}. In order to do that, we evaluate a set of landmarkers (i.e., pipelines) $\Theta=\{\theta_i\}$ on $t_{new}$ and record their performance results $E_{new}$. 
We measure the relative similarity of the performance of the landmarkers between the new dataset $t_{new}$ and each prior dataset $t_{i}$. We select the most similar dataset $t^{*}_{i}$ based on the ranking of the similarity. The configuration space $\mathcal{T}_{new}$ of the dataset $t_{new}$ is constructed by selecting the top $k$ well-performing ML components of the previously tackled dataset $t^{*}_{i}$. \subsection{Landmarkers} Landmarkers \cite{va19} $\Theta=\{\theta_i\}$ are ML methods/pipelines. They are usually simple and efficient to execute. They are used to indirectly evaluate the characteristics of datasets through evaluations of their performance (e.g., classification accuracy), which can then be used as meta-features. These meta-features are used to find the most similar previously solved dataset by matching the relative performance of the landmarkers on the new dataset and on the previously solved datasets. Selecting a set of landmarkers, or more generally a set of meta-features describing complex problems, obviously has an impact on the final results, as shown in a number of previous studies on the subject \cite{va19,albu15}. Since, in the context of AutoML, the evaluation time of the landmarkers forms a part of the overall AutoML optimisation time, in this study we have made a deliberate decision to evaluate a limited set of landmarkers which are the fastest to execute. Therefore, in our experiments we use the five fastest predictors (i.e., RandomTree, ZeroR, IBk, NaiveBayes and OneR) from the compiled ranking of the average evaluation time of the pipelines containing these components in prior evaluations. We acknowledge that this choice is not optimal in any sense other than cumulative execution time, but it is part of our study goals to determine whether, and to what extent, such a crude approach to matching complex problems can still be effective in reducing the input search space for AutoML algorithms. 
\subsection{Dynamic Configuration Space Design} \begin{algorithm}[!htbp] \small \caption{\small Design Configuration Space Using The Relative Landmarking} \begin{algorithmic}[1] \Require \Statex $\Theta$: The set of landmarkers \Statex $t_{new}$: The new dataset \Statex $t_{prior}$: The prior dataset \Statex \For{$\theta_i$ \textbf{in} $\Theta$} \State $E_{new}\_i$ = \textit{evaluate}($\theta_i$, $t_{new}$) \EndFor \For{\textbf{each} $t_{prior}\_i$} \State $correlation\_coefficient_i$ = \textit{calculateCorrelation}($E_{new}$, $E_{prior}\_i$) \EndFor \State $t^{*}$ = \textit{getMostSimilarTask}($correlation\_coefficient$) \State $\mathcal{T}_{new}$ = \textit{selectMLComponents}($t^{*}$, $k$) \State \textbf{return} $\mathcal{T}_{new}$ \end{algorithmic} \label{algorithm:construct_configuration_space} \end{algorithm} Algorithm \ref{algorithm:construct_configuration_space} presents the algorithm to design a configuration space using the relative landmarking method. This configuration space is used as an input for AutoML pipeline composition and optimisation methods. Firstly, the algorithm evaluates the new dataset $t_{new}$ using each landmarker $\theta_i$. The result is the performance $E_{new}\_i$ (e.g., the 10-fold cross-validation error rate) of this landmarker (lines 1--3). Secondly, the algorithm calculates the Pearson correlation coefficient of $E_{new}\_i$ with the mean error rate of the landmarkers on previously solved datasets $E_{prior}\_i$ (lines 4--6). Thirdly, the algorithm ranks the correlation coefficients and selects the dataset $t^{*}$ that has the highest correlation coefficient (line 7). Finally, the configuration space is constructed from all preprocessing components and the top \textit{k} best-performing predictors (line 8) of the most similar dataset $t^{*}$. Note that the sum of the evaluation times of the landmarkers on the new dataset is deducted from the total time budget of the ML pipeline composition and optimisation task. 
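A compact Python rendering of Algorithm \ref{algorithm:construct_configuration_space} may make the data flow concrete. This is a sketch, not the AutoWeka4MCPS implementation: \texttt{evaluate}, \texttt{landmarker\_errors} and \texttt{predictor\_rank} are hypothetical stand-ins for the corresponding internals, and the error rates in the demonstration are made up.

```python
import statistics

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = (sum((x - mx) ** 2 for x in xs) * sum((y - my) ** 2 for y in ys)) ** 0.5
    return num / den

def design_configuration_space(evaluate, landmarkers, t_new, prior_evals, k):
    """Algorithm 1: reduce the configuration space via relative landmarking.

    prior_evals maps a prior dataset name to
      {"landmarker_errors": [...],   # aligned with `landmarkers`
       "predictor_rank": [...]}      # predictor components, best first
    """
    # Lines 1-3: run each landmarker on the new dataset.
    e_new = [evaluate(theta, t_new) for theta in landmarkers]
    # Lines 4-6: correlate with the recorded errors on every prior dataset.
    corr = {t: pearson(e_new, ev["landmarker_errors"])
            for t, ev in prior_evals.items()}
    # Line 7: the most similar prior dataset has the highest correlation.
    t_star = max(corr, key=corr.get)
    # Line 8: its top-k predictors define the reduced configuration space.
    return t_star, prior_evals[t_star]["predictor_rank"][:k]

# Toy demonstration with made-up error rates.
landmarkers = ["ZeroR", "OneR", "IBk"]
errors_on_new = {"ZeroR": 0.5, "OneR": 0.3, "IBk": 0.1}
prior_evals = {
    "datasetA": {"landmarker_errors": [0.6, 0.4, 0.2],
                 "predictor_rank": ["RandomForest", "SMO", "NaiveBayes"]},
    "datasetB": {"landmarker_errors": [0.1, 0.3, 0.5],
                 "predictor_rank": ["Logistic", "IBk", "OneR"]},
}
t_star, space = design_configuration_space(
    lambda theta, t: errors_on_new[theta], landmarkers, "new", prior_evals, k=2)
print(t_star, space)  # datasetA's landmarker profile correlates best
```

As in the paper, only the predictor set is reduced here; in the actual system all preprocessing components remain in the configuration space.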
\section{Experiments} \label{sec:experiment} In the experiments, we use the method described in Section \ref{sec:relative_landmarking} to explore the impact of the levels of configuration space reduction on the performance of the AutoML composition and optimisation method. To do so, we compare the mean error rate of five approaches to designing the configuration spaces: \begin{itemize} \item \textbf{baseline}: The full configuration space, constructed from the preprocessing and predictor components that are implemented in AutoWeka4MCPS \cite{sabu18}. The reduction of configuration spaces is effective only if it enables SMAC to find better pipelines than using the baseline configuration space. \item \textbf{r30}: A restricted configuration space using fixed pipeline structures extracted from the best pipelines found within 30 hours of optimisation time \cite{sabu18}. By using this configuration space, the AutoML composition and optimisation method dedicates its time to optimising the hyperparameters of the fixed ML pipelines. We use this configuration to illustrate the trade-off between only optimising hyperparameters of fixed complex pipelines and searching for both well-performing pipeline structures and their hyperparameters within the reduced configuration spaces. \item \textbf{avatar}: The full configuration space, which is similar to the \textbf{baseline} configuration space, but we use the AVATAR \cite{ngma20} to reduce configuration spaces by quickly ignoring invalid pipelines when using SMAC to exploit and explore the baseline configuration space. \item \textbf{oracle settings (\textit{O-k1}, \textit{O-k4}, \textit{O-k8}, \textit{O-k10}, and \textit{O-k19})}: The oracle configuration spaces designed by selecting the k (i.e., k in $\{1,4,8,10,19\}$) predictors which have the lowest mean error rate in prior evaluations for the datasets themselves. 
The purpose of using the oracle settings is to demonstrate that, even if we ignore the impact of using the landmarking method to find the most similar prior problem, the meta-knowledge base which is generated from prior evaluations of AutoML composition and optimisation is useful and meets a sufficient level of reliability to be used for the reduction of configuration spaces. \item \textbf{landmarking settings (\textit{L-k1}, \textit{L-k4}, \textit{L-k8}, \textit{L-k10}, and \textit{L-k19})}: The relative landmarking configuration spaces designed by selecting the k (i.e., k in $\{1,4,8,10,19\}$) predictors which have the lowest mean error rate in prior evaluations of the most similar dataset found using the relative landmarking. We use the landmarking settings to show how much we should reduce configuration spaces given the uncertainties of both the meta-knowledge base and the matching method (i.e., the landmarking method) we use to find the most similar prior problems. \end{itemize} \subsection{Experimental settings} For the experiments we use a variety of datasets, which are presented in Table \ref{tab:datasets}. The AutoML tool we use for the experiments is AutoWeka4MCPS\footnote{https://github.com/UTS-AAi/autoweka}, which implements the method of configuration space reduction using the relative landmarking. The ML pipeline composition and optimisation method is SMAC. We also use the AVATAR \cite{ngga20} to evaluate the validity of ML pipelines; it dynamically reduces configuration spaces by ignoring invalid pipelines which are generated during the exploration and exploitation of SMAC. We set the time budget to 2 hours and the memory to 1GB. We perform five experimental runs for each dataset and report the mean error rate of the best pipelines found. We use the leave-one-out strategy for the experiments with regard to the meta-data availability. 
For each dataset, we exclude the prior evaluations of this dataset from the meta-knowledge base when performing the relative landmarking method to reduce configuration spaces. The optimisation time for SMAC in the cases using the relative landmarking method is the time remaining from the 2 hours after using the landmarking method to construct the configuration spaces. We have conducted a preliminary study to investigate the feasibility of configuration space reduction using the relative landmarking method by extracting prior evaluations. We use Algorithm \ref{algorithm:construct_configuration_space} to design configuration spaces for each dataset. We extract the prior evaluations of the most similar dataset found using the relative landmarking method. The extracted evaluations exclude pipelines that are not in the reduced configuration spaces. We assume that these extracted evaluations are the results of the ML pipeline composition and optimisation without running the time-consuming optimisation tasks. By doing so, we have to accept that the total optimisation time of the extracted evaluations is less than the total time budget (i.e., 2h). However, we can quickly select a subset of values of \textit{k} which is used to perform both the configuration space reduction as well as the pipeline composition and optimisation with the reduced configuration spaces. For each dataset, we compare the ranking of predictors of different reduced configuration spaces \textit{k-space} and the configuration space constructed by selecting the top 8 well-performing predictor components of all datasets (\textit{avg-k8}). The size of \textit{avg-k8} is approximately 25\% of the full configuration space. 
We select 8 components for this exploratory study because we expect the landmarking method to be effective when it comes to constructing a configuration space consisting of ML components which are better than the top 25\% average well-performing components from all prior evaluations. Figure \ref{fig:k_Values} shows the number of cases (i.e., datasets) in which the ranking of the best ML component in \textit{k-space} is higher than or equal to that in \textit{avg-k8} (\textit{metric-k}). We can see that \textit{metric-k} increases when the value of \textit{k} increases because the possibility of the best ML component being among the \textit{k} selected components increases. We choose 5 values of \textit{k} (i.e., 1, 4, 8, 10 and 19) to generate the configuration spaces and run the ML pipeline composition and optimisation tasks using these configuration spaces. We choose these values of \textit{k} for the following reasons: \begin{itemize} \item The value of k=1 gives the minimum configuration space. \item The values of k from 19 to 30 have the same \textit{metric-k}. The values of k from 10 to 18 also have the same \textit{metric-k}. We choose the smallest values, k=19 and k=10, because we want to maximise the reduction of configuration spaces. \item The values of k=4 and k=8 were chosen as they lie between k=1 and k=10 and so offer the best coverage of the space. \end{itemize} \begin{figure*}[htbp] \centering \includegraphics[width=0.95\linewidth]{images/exp1_k_values.pdf} \caption{The number of cases (i.e., datasets) in which the ranking of the best ML component in \textit{k-space} is higher than or equal to that in \textit{avg-k8} (\textit{metric-k}).} \label{fig:k_Values} \end{figure*} \input{tables/tab_dataset} \subsection{Experiment Results} \input{tables/tab_error_all_methods} \input{tables/tab_pipelines_apart} Table \ref{tab:error_rate_all_methods} presents the mean error rate (\%) of the best pipelines found by SMAC using the different methods to design configuration spaces. 
The lowest mean error rate for each dataset is shown in bold. The `-' symbol represents a ``not found'' solution due to incomplete runs within the optimisation time. Figure \ref{fig:cd_diagram} shows the critical difference diagram of the average rankings of the mean error rate with different configuration spaces. Table \ref{tab:pipelines}\footnote{The details of the best ML pipelines found by SMAC with different configuration spaces for all data sets can be found at \url{https://github.com/UTS-AAi/autoweka/blob/master/autoweka4mcps/doc/landmarking_supplementary.pdf} } shows the best ML pipelines found by SMAC with different configuration spaces for the data sets \textit{amazon} and \textit{convex}. \begin{figure}[h] \centering \includegraphics[width=0.8\linewidth]{images/cd_all_methods.pdf} \caption{The critical difference diagram of the average ranking of the performance of SMAC with different configuration spaces.} \label{fig:cd_diagram} \end{figure} \begin{figure}[h] \includegraphics[width=1.00\linewidth]{images/mlcomponent_ranking_part.pdf} \caption{The ranking of ML predictor components based on the mean error rate of their pipelines from prior evaluations for selected datasets.} \label{fig:ranking_predictors} \end{figure} \textit{Extreme values of k=1 and k=19:} We can see that the values of \textit{k} that resulted in the worst performance are 1 and 19, in both cases of the relative landmarking and the oracle. We can also see that when \textit{k} equals 1 (i.e., \textit{L-k1} and \textit{O-k1}), SMAC finds the pipelines with the lowest error rate on 5 of the 20 datasets, which is more than with the other configuration spaces. Moreover, the number of incomplete optimisation tasks of \textit{L-k1} and \textit{O-k1} is 5 and 4 respectively, which is also higher than for the other methods. The reason is that the selection of the extreme value of \textit{k}=1 reduces the configuration spaces significantly. 
If the relative landmarking method can recommend a well-performing predictor component accurately, SMAC can spend more time on the hyperparameter optimisation of pipelines containing this component. Therefore, the extreme selection of k=1 has a greater chance of finding well-performing pipelines. However, SMAC has no opportunity to explore other solutions in cases where the predictor component recommended by the relative landmarking does not perform well, owing to the lack of thorough evaluations of this component. For example, for the dataset \textit{amazon}, \textit{L-k1}, whose configuration space is constructed from \textit{Logistic}, has the highest error rate; the pipeline with the minimum mean error rate instead contains \textit{NaiveBayesMultinomial}. We also see that for the datasets \textit{abalone, adult, car, dexter} and \textit{waveform}, \textit{L-k1} has the lowest mean error rate, although the average ranking of \textit{L-k1} is only better than that of the full configuration space (i.e., \textit{baseline}). The selection of k=19 makes its configuration space larger than those of k=1, k=4, k=8 and k=10. Therefore, SMAC has less time for the hyperparameter optimisation. It can even be worse than using the full configuration spaces if the top predictors are not selected into the configuration spaces due to the uncertainty of the meta-knowledge base. The average ranking of \textit{L-k19} is slightly better than those of \textit{L-k1} and \textit{baseline}: 8.25 compared with 8.80 and 9.65 (Figure \ref{fig:cd_diagram}). \textit{The middle values of k=4, k=8 and k=10}: The selection of k=4, k=8 and k=10 constructs better configuration spaces than the extreme values k=1 and k=19. It avoids both the extreme selection of only one predictor component, which may not perform well, and the design of large configuration spaces, which are difficult to explore within a time budget.
Figure \ref{fig:cd_diagram} shows that \textit{L-k10} is critically different from the \textit{baseline} configuration space, 5.50 compared with 9.56. In other words, the relative landmarking with k=10 can effectively reduce the configuration spaces to enable the ML composition and optimisation method to find better pipelines. \textit{The case of the dataset \textit{convex}:} In the case of using the fixed pipeline from the 30-hour optimisation (r30), the best pipeline structure is \textit{RandomForest} $\rightarrow$ \textit{AdaBoostM1}, with a mean error rate of 18.47, as reported in the study of Salvador et al. \cite{sabu18}. However, our experiment shows that the mean error rate of using \textit{r30} with 2-hour optimisation in the case of \textit{convex} is 46.72. The best pipeline structure of the other settings has only one component. L-k10 and O-k8 have the lowest mean error rates (i.e., 25.65 and 25.51). The best pipelines of L-k10 and O-k8 are \textit{RandomForest} and \textit{SMO} respectively. This clearly shows that performing hyperparameter optimisation for a complex pipeline may not produce a well-performing pipeline in comparison with performing AutoML composition and optimisation using reduced configuration spaces (i.e., in the cases of O-k1, O-k4, O-k8, O-k10, O-k19, L-k8 and L-k10). The reason is that it takes more time to optimise the hyperparameters of a complex pipeline and the given time budget is not enough. In these cases, performing AutoML composition and optimisation using reduced configuration spaces can find better pipelines. \textit{The case of the dataset \textit{amazon}:} The r30 configuration space has the lowest mean error rate, 29.33 (Table \ref{tab:error_rate_all_methods}). The best pipeline structure is CustomReplaceMissingValues $\rightarrow$ Normalize $\rightarrow$ RandomSubset $\rightarrow$ NaiveBayesMultinomial $\rightarrow$ RandomSubSpace.
Figure \ref{fig:ranking_predictors} shows that the ranking of \textit{RandomSubSpace} is 21 and that of \textit{NaiveBayesMultinomial} is 2 for the dataset \textit{amazon}. However, these two ML predictor components are never both selected into the configuration spaces from k=1 to k=19. Therefore, using the reduced configuration spaces of the oracle and landmarking settings is not as effective as optimising the hyperparameters of a fixed well-performing pipeline structure, because the meta-knowledge base is not accurate in this case, although the rankings of all landmarking and oracle settings are better than \textit{baseline}. We see that \textit{O-k4} and \textit{O-k8} have the best average rankings, followed by \textit{L-k10}; all three are statistically different from the baseline configuration space. The configuration space of \textit{L-k10} is reduced by 67\% in comparison with the full configuration space. Moreover, we see that the performance of SMAC decreases when increasing the value of \textit{k} in the oracle settings (\textit{O-k4} $>$ \textit{O-k8} $>$ \textit{O-k10} $>$ \textit{O-k19}). The reason is that a small \textit{k} value generates small configuration spaces and, in the oracle settings, the best-performing components are always selected into the configuration spaces. Therefore, SMAC can spend more time on optimising hyperparameters to find better pipelines. However, the results from the relative landmarking are different (\textit{L-k10} $>$ \textit{L-k8} $>$ \textit{L-k4} $>$ \textit{L-k19}). The average ranking of \textit{L-k10} is lower than those of \textit{O-k4} and \textit{O-k8}. The average rankings of \textit{L-k8}, \textit{L-k4} and \textit{L-k19} are lower than that of \textit{O-k19}.
If the landmarking were able to recommend well-performing components that generate configuration spaces identical to the oracle configuration spaces, the performance of SMAC should be the same in both settings (e.g., \textit{L-k4} should be similar to \textit{O-k4}). Note that the meta-knowledge base extracted from the prior evaluations does not reach an extreme level of reliability, which would require thorough evaluations of all ML components on many datasets with diverse characteristics. Therefore, the landmarking method may not select the best-performing ML components to generate configuration spaces with k=4 and k=8. Due to the uncertainty of the meta-knowledge base as well as of the problem matching method (i.e., the landmarking method), a larger value of k should be selected to guarantee that the best-performing ML components are always chosen (i.e., \textit{L-k10} is better than \textit{L-k8}, and \textit{L-k8} is better than \textit{L-k4}). This suggests that the landmarking method, which is used to find the most similar prior problem, can be improved in future to enable a reduction of configuration spaces as high as in the oracle settings. Although \textit{L-k19} is slightly better than the baseline configuration space, it has a lower performance in comparison with \textit{L-k10}, \textit{L-k8} and \textit{L-k4}. This shows that if two configuration spaces both consist of the best-performing components, SMAC has a better performance using the smaller configuration space. \section{Conclusion} \label{sec:conclusion} In this study, we empirically demonstrate the effectiveness of using relative landmarking to reduce configuration spaces under the uncertainty of a meta-knowledge base generated from prior evaluations of ML pipeline composition and optimisation tasks. We show that a reasonable value of \textit{k} is 10, which is equivalent to one third of the full configuration space.
This value of \textit{k} depends on factors including the reliability of the prior evaluations, the set of landmarkers, and the similarity matching method that we use. In future work, we will extend this study to dynamically reduce configuration spaces based on the level of similarity between the new dataset and prior datasets. If the similarity between the new dataset and prior datasets does not meet a certain threshold, we should reduce the configuration spaces by only a small fraction, or even use the full configuration spaces, to guarantee the selection of well-performing ML components into the configuration space. If the similarity is high, we can reduce the configuration spaces significantly to save time for hyperparameter optimisation. \section{Introduction} \label{sec:intro} Various ML pipeline composition and optimisation methods have been proposed in recent years to construct valid and well-performing multi-stage ML models, given both a problem (i.e., a dataset) and a set of ML components with tunable hyperparameters \cite{sabu18,zohu19,ngma20,kemu20}. Typically, this pool of ML components contains classification/regression predictors and other preprocessing operators, e.g. for imputation or feature generation/selection. Among ML pipeline composition and optimisation methods, one of the most successful is based on the Sequential Model-based Algorithm Configuration (SMAC) approach \cite{thhu13}. Like most optimisers, this method seeks a balance between the exploration and exploitation of configuration spaces. When exploiting, the procedure investigates ML pipelines that are similar to the current best performers in terms of pipeline structure and hyperparameter values. When exploring, the procedure instead selects random candidates within the configuration space, seeking to avoid entrapment in any local optima. There are several automated machine learning (AutoML) tools that implement SMAC, starting with AutoWeka version 0.5 \cite{thhu13}.
Most of these seek one-component pipelines and thus search through configuration spaces that only involve predictors and their hyperparameters. AutoWeka4MCPS is a rare exception, both extending configuration spaces for data preprocessing components and upgrading the implementation of SMAC, thus enabling the construction and optimisation of multiple-component pipelines \cite{sabu18}. While this extension of configuration space allows a wider range of diverse and possibly better ML solutions to be explored, it does come with a number of challenges. Key among them is that a large configuration space is more difficult to efficiently traverse for any optimiser. Given that every candidate ML pipeline must also be trained/queried on a dataset to evaluate its accuracy, and that training can be computationally expensive, this can be a substantial obstacle for using AutoML on a novel ML problem. There is therefore both a need for and a great interest in approaches that offer the intelligent reduction of configuration spaces by preemptively excluding unpromising ML pipelines or, more severely, ML components. Two approaches summarise the several attempts that have been made to deal with the problem of configuration space reduction: predefined pipeline structures and meta-learning. The former restricts ML-component composition to fixed structures \cite{sabu18,fekl15} or ad-hoc specifications \cite{depi17,tsga12,olmo16,wemo18,giya18} (e.g., context-free grammars) that leverage expert knowledge. This method for controlling pipeline structure is direct and relatively simple. However, the obvious drawback arises from the potential bias of expert knowledge, which may suppress the opportunity to find better ML solutions residing outside of predefined templates. The major alternative approach is to employ meta-learning, with the aim of reducing configuration spaces by learning from prior evaluations. 
Some previous attempts under this banner have sought to cull search spaces by preemptively identifying ML components that perform the best \cite{dbbr18,va19,lebu15,albu15}, while others have focussed on important hyperparameters and appropriate value ranges for a given problem \cite{albu15,rihu18,prbo19,wemu20}. The studies in this latter case faced computational challenges, only investigating hyperparameters for up to six algorithms. In general, assembling a reliable meta-knowledge base is very difficult. Ideally, the construction of one should involve intensive evaluations of many ML components, each one thoroughly sampled across a frequently multi-dimensional range of hyperparameter values. To make matters more complicated, the performance of ML components can vary substantially between two intrinsically dissimilar datasets, and it is not even clear what kind of dataset characteristics should be a metric for that dissimilarity \cite{lega10,lega10a,albu15}. In practice, data scientists do not have access to a tailor-made meta-knowledge base. On the other hand, in the natural course of executing AutoML pipeline composition/optimisation processes on a dataset, data scientists do implicitly acquire accuracy evaluations for numerous ML-pipeline candidates. So, we ask the question: are these evaluations opportunistically useful? Can they recommend how many and which ML components we should select when designing an ML-pipeline search space? To explore these research questions, we run a series of experiments with the AutoWeka4MCPS package \cite{sabu18}, which is accelerated by the ML-pipeline validity checker, AVATAR \cite{ngga20}, wherever specified. To accumulate `previous experience', we initially apply SMAC-based AutoML to 20 datasets, spending two hours on each, and extract a collection of ML-pipeline evaluations from the iterations of the SMAC optimisations.
We then make a very loose approximation that, in the absence of further information, the error of a pipeline represents a sampled error of its constituent predictor, thus enabling the compilation of mean-error statistics for 30 Weka predictors, both overall and per dataset. The assumption involved in this process is of course contentious, but a key question is whether rankings from the compiled statistics are still reliable enough to at least guide an improved search for an ML solution to a new dataset. Additionally, elements of our study attempt to improve upon general meta-knowledge by prioritising rankings from similar datasets. We employ the relative landmarking method \cite{va19} to quantify that similarity. Specifically, a newly encountered dataset is marked by a set of performance values, which are derived after executing a certain number of easy-to-evaluate landmarkers (i.e., single-component pipelines) on that dataset. These values are then correlated with the performance of those same landmarking predictors on previously encountered datasets; these error values are extracted from the compilation statistics in the meta-knowledge base. Ultimately, this study has two main contributions: \begin{itemize} \item A definition of the problem of ML pipeline composition and optimisation with dynamic configuration spaces. \item An investigation of the performance impact on an AutoML composition/optimisation process by varying levels of pipeline search space reduction, i.e. removing all but the `best' $k$ of 30 predictors from an ML-component pool, for variable $k$. In particular, the focus of this study is on the `best' being defined by predictor rankings for the most similar dataset, as derived from opportunistic and somewhat unreliable meta-knowledge. \end{itemize} This paper is divided into 6 sections. After the Introduction, Section \ref{sec:related_work} presents previous studies to reduce configuration spaces in the context of AutoML. 
Section \ref{sec:search_space_modelling} formalises the problem definition of ML pipeline composition and optimisation with dynamic configuration spaces. Section \ref{sec:relative_landmarking} presents the methodology of relative landmarking and its proposed use in reducing configuration spaces. Section \ref{sec:experiment} presents experiments to investigate the problem of configuration space reduction. Finally, Section \ref{sec:conclusion} concludes this study. \section{Related Work} \label{sec:related_work} The growing number of available ML methods with their often complex hyperparameters leads to a very rapid expansion, if not combinatorial explosion, of ML-pipeline configurations and associated search spaces. Intelligent reduction of these configuration spaces enables ML pipeline composition and optimisation methods to find valid and well-performing ML pipelines faster within the typical constraints of execution environments and time budgets. We review two main approaches to reduce configuration spaces in the context of ML pipeline composition and optimisation. \textit{Predefined ML pipeline structures and component hyperparameters:} This approach can be implemented as fixed pipeline templates \cite{sabu18,fekl15} or ad-hoc specifications \cite{depi17,tsga12,olmo16,wemo18,giya18} such as context-free grammars. Moreover, specific ranges of hyperparameter values, which highly contribute to well-performing pipelines, are also predefined in these specifications. The advantage of this approach is its simple nature, leveraging expert knowledge to reduce configuration spaces by directly restricting the length of ML pipelines, the pool of ML components, and their permissible orderings/arrangements. However, the disadvantage of this approach is that there might be strongly performing ML pipelines outside of the predefined templates. 
\textit{Meta-learning:} This approach aims to reduce configuration spaces by using prior knowledge to avoid wasting time with unpromising ML-solution candidates. Frequently, this involves assessing similarity between past and present ML problems/datasets, so as to hone in on the most relevant meta-knowledge available \cite{dbbr18,lebu15,lega10,albu15}. To quantify this similarity, characteristics are typically established for datasets, which can then be used in correlations. A characteristic can be directly derived from the dataset as a meta-feature, e.g. the number of raw features or data instances. Alternatively, relevant to this study, two datasets can be compared by the relative performance of landmarkers. These landmarkers are ideally simple one-component pipelines, i.e. predictors, that are of varying types; they estimate the suitability of varying modelling approaches for a dataset. For instance, the performance of a linear regressor theoretically quantifies whether an ML problem is linear. An ML problem that is estimated to be nonlinear will likely not benefit from methods serving a linear dataset. In any case, the meta-learning approach can be used to reduce configuration spaces by selecting a number of well-performing ML components \cite{albu15,dbbr18,va19,lebu15} or important hyperparameters for tuning \cite{albu15,rihu18,prbo19,wemu20}. For instance, both \textit{average ranking} and \textit{active testing} have previously been used to recommend ML solutions for new datasets \cite{dbbr18}. However, these approaches have not been applied to AutoML yet. Moreover, these studies limit their scopes by optimising predictors, not multi-component pipelines, and the optimisation method they use is grid search, proven not to be as effective as SMAC \cite{thhu13}. Other studies have investigated estimating the importance of hyperparameters \cite{rihu18,prbo19,wemu20} from prior evaluations. 
Specifically, some hyperparameters are more sensitive to perturbation than others; tuning them can contribute to a proportionally higher variability in ML-algorithm performance (i.e., error rate). As an example, gamma and complexity variable C are the most important hyperparameters for an SVM \cite{rihu18}. Consequently, the results of these studies can be used to reduce configuration spaces by constraining less-important hyperparameters, either by making their search ranges less granular or outright fixing them as default values. This frees up more time to seek the best values for important hyperparameters that have the highest impact on finding well-performing pipelines. However, a disadvantage of these studies is that the importance of ML-component hyperparameters has only been studied on small sets of up to six algorithms. This reflects how time-consuming it is to properly sample hyperparameter space across all available algorithms. In this study, our approach aligns with meta-learning principles. However, it differs from previous studies by refusing to carefully curate a tailor-made meta-knowledge base. Instead, accepting a degree of unreliability, we opportunistically derive assumptive statistics from numerous pipeline evaluations; these evaluations are non-exhaustive and simulate the remnants of AutoML optimisation processes intended to solve seemingly unrelated problems. Faced with this context, we focus on the relative landmarking method to draw similarities between past and present datasets, then we identify previously well-performing ML components, around which we constrain configuration subspaces for ML pipeline composition and optimisation. We also investigate how varying degrees of this recommendation-based search-space culling affects the performance of ML pipeline composition/optimisation methods. 
\section{Modelling Configuration Spaces for ML Pipeline Composition and Optimisation } \label{sec:search_space_modelling} To control the extension and reduction of configuration spaces in the context of AutoML, we model them using a generic tree structure. After that, we extend the problem of pipeline composition and optimisation \cite{sabu18} by adding a factor representing pipeline structures. Finally, we define the problem of pipeline composition and optimisation with dynamic configuration spaces as a constrained optimisation problem. \subsection{Configuration Space Modelling} To discuss the search space of ML pipelines, it is worth considering that each potential constituent within a pipeline is an instantiation of an ML component with a certain set of values for its hyperparameters. Specifically, if $\mathcal{A}$ is the pool of available ML components, a potential pipeline constituent indexed by $c$ can be represented as \begin{equation} \small{ x_c=(\mathcal{A}_c,\lambda_c, \epsilon_c), \label{eq:treenode} } \end{equation} where $\mathcal{A}_c\in\mathcal{A}$ is an ML component and $\lambda_c$ is a set of hyperparameter values for component $\mathcal{A}_c$. Notably, we also mark the candidate with binary variable $\epsilon_c$, which represents whether the potential constituent is accessible/inaccessible to a pipeline search algorithm. We now arrange the set of candidate constituents, $\mathcal{X}=\{x_c\}$, in a tree-based configuration space defined as \begin{equation} \small{ \mathcal{T}=(\mathcal{X},f_p). \label{eq:searchtree} } \end{equation} Function $f_p$ represents the assignment of each node $x_c$ to a parent node $f_p(x_c)$, where the root nodes of this tree-structured space are defined by $f_p(x_c)=x_c$. Given this space, an ML pipeline $p$, also called a configuration, is a path constructed by backtracking from an active leaf node to the root node.
Specifically, \begin{equation} \small{ p = (g,\vec{\mathcal{A}}, \vec{\mathcal{\lambda}}), \label{eq:pipeline} } \end{equation} where $\vec{\mathcal{A}}$ is a vector of ML components in $\mathcal{A}$, $\vec{\mathcal{\lambda}}$ is a vector of hyperparameter-value sets corresponding to $\vec{\mathcal{A}}$, and $g$ is a sequential pipeline structure that defines how the components are connected. Configuration space $\mathcal{T}$ can always be extended by adding tree nodes, but dynamically reducing pipeline search space to a subset of $\mathcal{T}$ is also possible by `deactivating' tree nodes, i.e. flipping their $\epsilon_c$ switches and those of any appropriate branches. Specifically, if a candidate pipeline contains a deactivated node, it is invalid and can be ignored by any search algorithm like SMAC \cite{thhu13}. \subsection{The Problem of Pipeline Composition and Optimisation with Dynamic Configuration Spaces} The aim of pipeline composition and optimisation is to find the best-performing ML pipeline $(g,\vec{\mathcal{A}}, \vec{\mathcal{\lambda}})^{*}$, which involves the optimal combination of pipeline structure, selected components, and associated hyperparameters. The formalism of the pipeline-search problem \cite{sabu18} can thus be written as \begin{equation} \small{ (g,\vec{\mathcal{A}}, \vec{\mathcal{\lambda}})^{*} = \mathrm{arg\,min}\frac{1}{k}\sum_{i=1}^{k} \mathcal{L}((g,\vec{\mathcal{A}},\vec{\mathcal{\lambda}})^{(i)},\mathcal{D}^{(i)}_{train},\mathcal{D}^{(i)}_{valid}), } \label{eq:pipeline_problem} \end{equation} where $\mathcal{D}_{train}$ and $\mathcal{D}_{valid}$ are training and validation datasets. Specifically, this equation minimises the k-fold cross validation error of loss function $\mathcal{L}$. The $k$ here is not to be confused with the $k$ variable used for predictor rankings elsewhere in this paper. Equation (\ref{eq:pipeline_problem}) does not consider changes to the configuration space $\mathcal{T}$. 
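The tree-based configuration space of Eqs. (\ref{eq:treenode})--(\ref{eq:searchtree}) and the node-deactivation mechanism described above can be sketched as follows. This is a minimal illustration only; the class and function names are ours, not taken from the AutoWeka4MCPS implementation:

```python
from dataclasses import dataclass

# Minimal sketch of the tree-based configuration space: each node is a
# candidate constituent x_c = (A_c, lambda_c, epsilon_c).
@dataclass
class Node:
    component: str            # ML component A_c
    hyperparams: dict         # hyperparameter values lambda_c
    active: bool = True       # accessibility switch epsilon_c
    parent: "Node" = None     # f_p(x_c); a root satisfies f_p(x_c) = x_c

def make_root(component, hyperparams):
    node = Node(component, hyperparams)
    node.parent = node        # root convention: the node is its own parent
    return node

def pipeline_from_leaf(leaf):
    """Backtrack from a leaf to the root; return the component sequence,
    or None if any node on the path has been deactivated."""
    path, node = [], leaf
    while True:
        if not node.active:
            return None       # a deactivated node invalidates the pipeline
        path.append(node.component)
        if node.parent is node:
            return list(reversed(path))
        node = node.parent

root = make_root("CustomReplaceMissingValues", {})
mid = Node("Normalize", {}, parent=root)
leaf = Node("NaiveBayesMultinomial", {}, parent=mid)
print(pipeline_from_leaf(leaf))
# → ['CustomReplaceMissingValues', 'Normalize', 'NaiveBayesMultinomial']
mid.active = False            # dynamically reduce the space
print(pipeline_from_leaf(leaf))
# → None
```

Flipping a single $\epsilon_c$ switch invalidates every pipeline whose path passes through that node, which is how a search algorithm such as SMAC can skip pruned regions of $\mathcal{T}$ without the tree itself being rebuilt.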
If we reduce this configuration space by selecting promising pipelines to form the configuration subspace $\mathcal{T}^{*}\subset \mathcal{T}$, the problem of pipeline composition/optimisation can be reformulated as a constrained version of Eq. (\ref{eq:pipeline_problem}), as follows: \begin{equation} \small{ (g,\vec{\mathcal{A}}, \vec{\mathcal{\lambda}})^{*} = \mathrm{arg\,min}\frac{1}{k}\sum_{i=1}^{k} \mathcal{L}((g,\vec{\mathcal{A}}, \vec{\mathcal{\lambda}})^{(i)} \in \mathcal{T}',\mathcal{D}^{(i)}_{train},\mathcal{D}^{(i)}_{valid}) } \label{eq:extended_pipeline_problem} \end{equation} subject to \begin{equation} \small{ \mathcal{T}' = \mathcal{T}^{*}\setminus h(\mathcal{T}^{*}), \label{eq:extended_pipeline_problem_constraints_1} } \end{equation} where $h(\mathcal{T}^{*})$ is an optional function to find the set of all invalid pipelines. This set exclusion can be considered to represent the role of AVATAR \cite{ngga20}. \section{Configuration Space Reduction Using the Relative Landmarking Method} \label{sec:relative_landmarking} In this section, we present the methodology for designing configuration spaces for ML pipeline composition and optimisation, given a meta-knowledge base of uneven quality and a certain level of uncertainty. A relative landmarking method, inspired by the \textit{average ranking} \cite{dbbr18} and \textit{relative landmarking} \cite{va19} approaches, is proposed. These earlier methods are used to recommend an ML algorithm for a given dataset, but they have neither considered the optimisation time budget nor been proposed in the context of the automated composition and optimisation of ML pipelines, both of which are the subject of our primary investigations. Moreover, there is uncertainty in the meta-knowledge base, which describes the relationships between the average performance of ML components and the datasets' characteristics.
This uncertainty is due to the lack of a thorough evaluation covering the combined effects of ML components in pipelines, the diversity of pipeline structures, and the components' hyperparameters. More specifically, because the meta-knowledge base is generated from prior evaluations of the automated composition and optimisation, the number of times that ML components are evaluated differs. Some components have not been evaluated at all, or only a small number of times, due to the limited optimisation time budget. Thorough evaluations to generate highly reliable meta-knowledge are prohibitively expensive in the context of ML pipeline composition and optimisation tasks, and we are interested in investigating whether, and how, partial or less reliable meta-knowledge can be used effectively in our context. Firstly, we present the problem formulation of configuration space reduction using the relative landmarking method. Secondly, we present the landmarkers that are used to evaluate a new dataset to understand its characteristics. Thirdly, we present the algorithm to design configuration spaces given a new dataset and prior evaluations on other problems, from which, we assume, useful knowledge can be extracted. \subsection{Problem Formulation} \label{sec:problem_formulation} We define the problem of pipeline composition and optimisation with dynamic configuration spaces in Equation \ref{eq:extended_pipeline_problem}. An optimal configuration space $\mathcal{T}^{*}$ can be designed at the initialisation of the ML pipeline composition and optimisation methods. The configuration space $\mathcal{T}^{*}$ for the new dataset is constructed from the \textit{k} best-performing ML components of the most similar dataset from prior evaluations. Given a new dataset $t_{new}$, we need to find a prior dataset $t^{*}_{i}$ which is similar to $t_{new}$ \cite{va19}.
In order to do that, we evaluate a set of landmarkers (i.e., pipelines) $\Theta=\{\theta_i\}$ on $t_{new}$ and record their performance results $E_{new}$. We measure the relative similarity of the performance of the landmarkers between the new dataset $t_{new}$ and each prior dataset $t_{i}$. We select the most similar dataset $t^{*}_{i}$ based on the ranking of the similarity. The configuration space $\mathcal{T}_{new}$ of the dataset $t_{new}$ is constructed by selecting the top $k$ well-performing ML components of the previously tackled dataset $t^{*}_{i}$. \subsection{Landmarkers} Landmarkers \cite{va19} $\Theta=\{\theta_i\}$ are ML methods/pipelines. They are usually simple and efficient to execute. They are used to indirectly evaluate the characteristics of datasets through evaluations of their performance (e.g., classification accuracy), which can then be used as meta-features. These meta-features are used to find the most similar previously solved dataset by matching the relative performance of the landmarkers on the new dataset and the previously solved datasets. Selecting a set of landmarkers, or more generally a set of meta-features describing complex problems, has an obvious impact on the final results, as shown in a number of previous studies on the subject \cite{va19,albu15}. Since, in the context of AutoML, the evaluation time of the landmarkers forms part of the overall optimisation time, in this study we have made a deliberate decision to evaluate a limited set of landmarkers that are the fastest to execute. Therefore, in our experiments, we use the five fastest predictors (i.e., RandomTree, ZeroR, IBk, NaiveBayes and OneR) according to the compiled ranking of the average evaluation time of the pipelines containing these components in prior evaluations.
We acknowledge that this choice is optimal in no sense other than cumulative execution time, but it is part of our study goals to determine whether, and to what extent, such a crude approach to matching complex problems can still be effective in reducing the input search space for AutoML algorithms. \subsection{Dynamic Configuration Space Design} \begin{algorithm}[!htbp] \small \caption{\small Design Configuration Space Using The Relative Landmarking} \begin{algorithmic}[1] \Require \Statex $\Theta$: The set of landmarkers \Statex $t_{new}$: The new dataset \Statex $t_{prior}$: The prior dataset \Statex \For{$\theta_i$ \textbf{in} $\Theta$} \State $E_{new}\_i$ = \textit{evaluate}($\theta_i$, $t_{new}$) \EndFor \For{\textbf{each} $t_{prior}\_i$} \State $correlation\_coefficient_i$ = \textit{calculateCorrelation}($E_{new}$, $E_{prior}\_i$) \EndFor \State $t^{*}$ = \textit{getMostSimilarTask}($correlation\_coefficient$) \State $\mathcal{T}_{new}$ = \textit{selectMLComponents}($t^{*}$, $k$) \State \textbf{return} $\mathcal{T}_{new}$ \end{algorithmic} \label{algorithm:construct_configuration_space} \end{algorithm} Algorithm \ref{algorithm:construct_configuration_space} presents the algorithm to design a configuration space using the relative landmarking method. This configuration space is used as an input for AutoML pipeline composition and optimisation methods. Firstly, the algorithm evaluates the new dataset $t_{new}$ using each landmarker $\theta_i$. The result is the performance $E_{new}\_i$ (e.g., the 10-fold cross-validation error rate) of this landmarker (lines 1-3). Secondly, the algorithm calculates the Pearson correlation coefficient of $E_{new}\_i$ with the mean error rate of the landmarkers on previously solved datasets $E_{prior}\_i$ (lines 4-6). Thirdly, the algorithm ranks the correlation coefficients and selects the dataset $t^{*}$ that has the highest correlation coefficient (line 7).
Finally, the configuration space is constructed from all preprocessing components and the top \textit{k} best-performing predictors (line 8) of the most similar dataset $t^{*}$. Note that the sum of the evaluation times of the landmarkers on the new dataset is deducted from the total time budget of the ML pipeline composition and optimisation task. \section{Experiments} \label{sec:experiment} In the experiments, we use the method described in Section \ref{sec:relative_landmarking} to explore the impact of different levels of configuration space reduction on the performance of the AutoML composition and optimisation method. To do so, we compare the mean error rate of five approaches to designing the configuration spaces: \begin{itemize} \item \textbf{baseline}: The full configuration space, constructed from the preprocessing and predictor components implemented in AutoWeka4MCPS \cite{sabu18}. The reduction of configuration spaces is effective only if it enables SMAC to find better pipelines than the baseline configuration space does. \item \textbf{r30}: A restricted configuration space using fixed pipeline structures extracted from the best pipelines found within 30 hours of optimisation time \cite{sabu18}. With this configuration space, the AutoML composition and optimisation method dedicates its time to optimising the hyperparameters of the fixed ML pipelines. We use this configuration to illustrate the trade-off between only optimising hyperparameters of fixed complex pipelines and searching for both well-performing pipeline structures and their hyperparameters within the reduced configuration spaces. \item \textbf{avatar}: The full configuration space, similar to the \textbf{baseline} configuration space, but with AVATAR \cite{ngma20} used to reduce the configuration space by quickly discarding invalid pipelines while SMAC explores and exploits the baseline configuration space.
\item \textbf{oracle settings (\textit{O-k1}, \textit{O-k4}, \textit{O-k8}, \textit{O-k10}, and \textit{O-k19})}: The oracle configuration spaces designed by selecting the $k$ (with $k \in \{1,4,8,10,19\}$) predictors which have the lowest mean error rate in prior evaluations on the datasets themselves. The purpose of the oracle settings is to demonstrate that, even if we ignore the impact of using the landmarking method to find the most similar prior problem, the meta-knowledge base generated from prior evaluations of AutoML composition and optimisation is useful and meets a certain level of reliability for the reduction of configuration spaces. \item \textbf{landmarking settings (\textit{L-k1}, \textit{L-k4}, \textit{L-k8}, \textit{L-k10}, and \textit{L-k19})}: The relative landmarking configuration spaces designed by selecting the $k$ (with $k \in \{1,4,8,10,19\}$) predictors which have the lowest mean error rate in prior evaluations of the most similar dataset found using the relative landmarking. We use the landmarking settings to show how much we should reduce configuration spaces when there is uncertainty in both the meta-knowledge base and the matching method (i.e., the landmarking method) used to find the most similar prior problems. \end{itemize} \subsection{Experimental settings} For the experiments we use a variety of datasets, presented in Table \ref{tab:datasets}. The AutoML tool we use for the experiments is AutoWeka4MCPS\footnote{https://github.com/UTS-AAi/autoweka}, which implements the method of configuration space reduction using the relative landmarking. The ML pipeline composition and optimisation method is SMAC. We also use AVATAR to evaluate the validity of ML pipelines \cite{ngga20}; it dynamically reduces configuration spaces by ignoring invalid pipelines generated during the exploration and exploitation of SMAC. We set the time budget to 2 hours and the memory to 1GB.
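The oracle and landmarking settings above differ only in which dataset's predictor ranking supplies the top-$k$ components; a toy sketch, with function names and data structures assumed purely for illustration:

```python
def oracle_space(dataset, rankings, k):
    """O-k: keep the k predictors with the lowest mean error rate
    in prior evaluations on the target dataset itself."""
    return rankings[dataset][:k]

def landmarking_space(dataset, rankings, most_similar, k):
    """L-k: keep the k predictors ranked best on the most similar
    prior dataset found by relative landmarking."""
    return rankings[most_similar[dataset]][:k]

# Toy rankings: predictors ordered best-first per dataset.
rankings = {
    "amazon": ["NaiveBayesMultinomial", "SMO", "Logistic"],
    "convex": ["RandomForest", "SMO", "IBk"],
}
most_similar = {"amazon": "convex"}  # e.g., landmarking matched convex
```

For example, `oracle_space("amazon", rankings, 2)` keeps `["NaiveBayesMultinomial", "SMO"]`, whereas `landmarking_space("amazon", rankings, most_similar, 2)` keeps `["RandomForest", "SMO"]` — any gap between the two reflects the error introduced by the similarity matching.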
We perform five experimental runs for each dataset and report the mean error rate of the best pipelines found. We use a leave-one-out strategy with regard to meta-data availability: for each dataset, we exclude the prior evaluations of this dataset from the meta-knowledge base when performing the relative landmarking method to reduce configuration spaces. The optimisation time for SMAC when using the relative landmarking method is the time remaining of the 2 hours after using the landmarking method to construct the configuration spaces. We have conducted a preliminary study to investigate the feasibility of configuration space reduction using the relative landmarking method by extracting prior evaluations. We use Algorithm \ref{algorithm:construct_configuration_space} to design configuration spaces for each dataset. We extract the prior evaluations of the most similar dataset found using the relative landmarking method. The extracted evaluations exclude pipelines that are not in the reduced configuration spaces. We make the assumption that these extracted evaluations are the results of ML pipeline composition and optimisation, without running the time-consuming optimisation tasks themselves. By doing so, we have to accept that the total optimisation time of the extracted evaluations is less than the total time budget (i.e., 2 hours). However, we can quickly select a subset of \textit{k} values to be used for both the configuration space reduction and the pipeline composition and optimisation with the reduced configuration spaces. For each dataset, we compare the ranking of predictors of different reduced configuration spaces \textit{k-space} and the configuration space constructed by selecting the top 8 well-performing predictor components across all datasets (\textit{avg-k8}). The size of \textit{avg-k8} is approximately 25\% of the full configuration space.
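The leave-one-out protocol and the budget accounting described above can be expressed compactly (the function names are ours, for illustration only):

```python
def leave_one_out_meta(meta_base, target_dataset):
    """Drop the target dataset's own prior evaluations from the
    meta-knowledge base before similarity matching."""
    return {name: evals for name, evals in meta_base.items()
            if name != target_dataset}

def smac_budget(total_seconds, landmarker_seconds):
    """SMAC optimisation time = total time budget minus the time already
    spent evaluating landmarkers on the new dataset."""
    return total_seconds - sum(landmarker_seconds)
```

For instance, with the 2-hour budget, landmarker evaluations taking 45 and 30 seconds leave `smac_budget(7200, [45, 30])` seconds for pipeline composition and optimisation.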
We select 8 components for this exploratory study because we expect the landmarking method to be effective if it constructs a configuration space consisting of ML components that are better than the top 25\% average well-performing components from all prior evaluations. Figure \ref{fig:k_Values} shows the number of cases (i.e., datasets) in which the ranking of the best ML component in \textit{k-space} is higher than or equal to the one in \textit{avg-k8} (\textit{metric-k}). We can see that \textit{metric-k} increases as the value of \textit{k} increases, because the possibility of the best ML component being among the \textit{k} selected components grows. We choose 5 values of \textit{k} (i.e., 1, 4, 8, 10 and 19) to generate the configuration spaces and run the ML pipeline composition and optimisation tasks using these configuration spaces. We choose these values of \textit{k} for the following reasons: \begin{itemize} \item The value of k=1 gives the minimum configuration space. \item The values of k from 19 to 30 have the same \textit{metric-k}, as do the values of k from 10 to 18. We choose the smallest values, k=19 and k=10, because we want to maximise the reduction of configuration spaces. \item The values of k=4 and k=8 were chosen because they lie between k=1 and k=10 and so offer good coverage of the space. \end{itemize} \begin{figure*}[htbp] \centering \includegraphics[width=0.95\linewidth]{images/exp1_k_values.pdf} \caption{The number of cases (i.e., datasets) in which the ranking of the best ML component in \textit{k-space} is higher than or equal to the one in \textit{avg-k8} (\textit{metric-k}).} \label{fig:k_Values} \end{figure*} \input{tables/tab_dataset} \subsection{Experiment Results} \input{tables/tab_error_all_methods} \input{tables/tab_pipelines_apart} Table \ref{tab:error_rate_all_methods} presents the mean error rate (\%) of the best pipelines found by SMAC using different methods to design configuration spaces.
The lowest mean error rate for each dataset is shown in bold. The `-' symbol represents a ``not found'' solution due to incomplete runs within the optimisation time. Figure \ref{fig:cd_diagram} shows the critical difference diagram of the average rankings of the mean error rate with different configuration spaces. Table \ref{tab:pipelines}\footnote{The details of the best ML pipelines found by SMAC with different configuration spaces for all data sets can be found at \url{https://github.com/UTS-AAi/autoweka/blob/master/autoweka4mcps/doc/landmarking_supplementary.pdf} } shows the best ML pipelines found by SMAC with different configuration spaces for the data sets \textit{amazon} and \textit{convex}. \begin{figure}[h] \centering \includegraphics[width=0.8\linewidth]{images/cd_all_methods.pdf} \caption{The critical difference diagram of the average ranking of the performance of SMAC with different configuration spaces.} \label{fig:cd_diagram} \end{figure} \begin{figure}[h] \includegraphics[width=1.00\linewidth]{images/mlcomponent_ranking_part.pdf} \caption{The ranking of ML predictor components based on the mean error rate of their pipelines from prior evaluations for selected datasets.} \label{fig:ranking_predictors} \end{figure} \textit{Extreme values of k=1 and k=19:} We can see that the values of \textit{k} that resulted in the worst performance are 1 and 19, in both the relative landmarking and the oracle settings. We can also see that when \textit{k} equals 1 (i.e., \textit{L-k1} and \textit{O-k1}), SMAC finds pipelines that have the lowest error rate on 5 of the 20 datasets, which is more than with the other configuration spaces. Moreover, the number of incomplete optimisation tasks of \textit{L-k1} and \textit{O-k1} is 5 and 4 respectively, which is also higher than for the other methods. The reason is that the extreme value of \textit{k}=1 reduces the configuration spaces significantly.
If the relative landmarking method recommends a well-performing predictor component accurately, SMAC can spend more time on the hyperparameter optimisation of pipelines containing this ML component. Therefore, the extreme selection of k=1 has a greater chance of finding well-performing pipelines. However, SMAC does not have an opportunity to explore other solutions in cases where the selected predictor component recommended by the relative landmarking is not well-performing, due to the lack of thorough evaluations of this component. For example, for the dataset \textit{amazon}, \textit{L-k1}, whose configuration space is constructed from \textit{Logistic}, has the highest error rate; however, the pipeline with the minimum mean error rate is one containing \textit{NaiveBayesMultinomial}. We also see that for the datasets \textit{abalone, adult, car, dexter} and \textit{waveform}, \textit{L-k1} has the lowest mean error rate, although the average ranking of \textit{L-k1} is only better than that of the full configuration space (i.e., \textit{baseline}). The selection of k=19 makes its configuration space larger than those of k=1, k=4, k=8 and k=10. Therefore, SMAC has less time for hyperparameter optimisation. It can even be worse than using the full configuration space if the top predictors are not selected into the configuration space due to the uncertainty of the meta-knowledge base. The average ranking of \textit{L-k19} is slightly better than those of \textit{L-k1} and \textit{baseline}: 8.25 compared with 8.80 and 9.65 (Figure \ref{fig:cd_diagram}). \textit{The middle values of k=4, k=8 and k=10}: The selection of k=4, k=8 and k=10 constructs better configuration spaces than the extreme values k=1 and k=19. It avoids both the extreme selection of only one predictor component, which may not be well-performing, and the design of large configuration spaces, which are difficult to explore within a time budget.
Figure \ref{fig:cd_diagram} shows that \textit{L-k10} is critically different from the \textit{baseline} configuration space, 5.50 compared with 9.56. In other words, the relative landmarking with k=10 can effectively reduce the configuration spaces to enable the ML composition and optimisation method to find better pipelines. \textit{The case of the dataset convex:} In the case of using the fixed pipeline from the 30-hour optimisation (r30), the best pipeline structure is \textit{RandomForest} $\rightarrow$ \textit{AdaBoostM1}, with a mean error rate of 18.47 as reported in the study of Salvador et al. \cite{sabu18}. However, our experiment shows that the mean error rate of \textit{r30} with 2-hour optimisation on \textit{convex} is 46.72. The best pipeline structure of the other settings has only one component. L-k10 and O-k8 have the lowest mean error rates (i.e., 25.65 and 25.51). The best pipelines of L-k10 and O-k8 are \textit{RandomForest} and \textit{SMO} respectively. This clearly shows that performing hyperparameter optimisation for a complex pipeline may not produce a well-performing pipeline in comparison with performing AutoML composition and optimisation using reduced configuration spaces (i.e., in the cases of O-k1, O-k4, O-k8, O-k10, O-k19, L-k8 and L-k10). The reason is that it takes more time to optimise the hyperparameters of a complex pipeline, and the given time budget is not enough. In these cases, performing AutoML composition and optimisation using the reduced configuration spaces can find better pipelines. \textit{The case of the dataset amazon:} The r30 configuration space has the lowest mean error rate, 29.33 (Table \ref{tab:error_rate_all_methods}). The best pipeline structure is CustomReplaceMissingValues $\rightarrow$ Normalize $\rightarrow$ RandomSubset $\rightarrow$ NaiveBayesMultinomial $\rightarrow$ RandomSubSpace.
Figure \ref{fig:ranking_predictors} shows that, for the dataset \textit{amazon}, the ranking of \textit{RandomSubSpace} is 21 and that of \textit{NaiveBayesMultinomial} is 2. These two ML predictor components are therefore never both selected into the configuration spaces from k=1 to k=19. Consequently, using reduced configuration spaces with the oracle and landmarking settings is not as effective as optimising the hyperparameters of a fixed well-performing pipeline structure, because the meta-knowledge base is not accurate in this case, though the rankings of all landmarking and oracle settings are still better than \textit{baseline}. We see that \textit{O-k4} and \textit{O-k8} have the best average rankings and exhibit a critical difference in comparison to the baseline configuration space, followed by \textit{L-k10}. \textit{O-k4}, \textit{O-k8} and \textit{L-k10} are statistically different from the baseline configuration space. The configuration space of \textit{L-k10} is reduced by 67\% in comparison with the full configuration space. Moreover, we see that the performance of SMAC decreases when increasing the value of \textit{k} in the oracle settings (\textit{O-k4} $>$ \textit{O-k8} $>$ \textit{O-k10} $>$ \textit{O-k19}). The reason is that a small \textit{k} value generates small configuration spaces, and the best-performing components are always selected into these configuration spaces. Therefore, SMAC can spend more time on optimising hyperparameters to find better pipelines. However, the results for the relative landmarking are different (\textit{L-k10} $>$ \textit{L-k8} $>$ \textit{L-k4} $>$ \textit{L-k19}). The average ranking of \textit{L-k10} is lower than those of \textit{O-k4} and \textit{O-k8}, and the average rankings of \textit{L-k8}, \textit{L-k4} and \textit{L-k19} are lower than that of \textit{O-k19}.
If the landmarking were able to recommend well-performing components that generate configuration spaces identical to the oracle configuration spaces, the performance of SMAC would be the same in both settings (e.g., \textit{L-k4} would be similar to \textit{O-k4}). Note that the meta-knowledge base extracted from the prior evaluations does not meet an extreme level of reliability, which would require thorough evaluations of all ML components on many datasets with diverse characteristics. Therefore, the landmarking method may not select the best-performing ML components to generate configuration spaces with k=4 and k=8. Due to the uncertainty of the meta-knowledge base as well as of the problem matching method (i.e., the landmarking method), a larger value of k should be selected to guarantee that the best-performing ML components are always chosen (i.e., \textit{L-k10} is better than \textit{L-k8}, and \textit{L-k8} is better than \textit{L-k4}). This suggests that the landmarking method used to find the most similar prior problem could be improved in future to enable a reduction of configuration spaces as high as in the oracle settings. Although using \textit{L-k19} is slightly better than the baseline configuration space, \textit{L-k19} has a lower performance in comparison with \textit{L-k10}, \textit{L-k8} and \textit{L-k4}. This shows that if two configuration spaces both contain the best-performing components, SMAC performs better with the smaller configuration space. \section{Conclusion} \label{sec:conclusion} In this study, we empirically demonstrate the efficiency of using relative landmarking to reduce configuration spaces under the uncertainty of a meta-knowledge base generated from prior evaluations of ML pipeline composition and optimisation tasks. We show that a reasonable value of \textit{k} is 10, which is equivalent to one-third of the full configuration space.
This value of \textit{k} depends on factors including the reliability of the prior evaluations, the set of landmarkers, and the similarity matching method that we use. In future work, we will extend this study to dynamically reduce configuration spaces based on the level of similarity between the new dataset and prior datasets. If the similarity between the new dataset and prior datasets does not meet a certain threshold, we should reduce the configuration space by only a small fraction, or even use the full configuration space, to guarantee the selection of well-performing ML components into the configuration space. If the similarity is high, we can reduce the configuration space significantly to save time for hyperparameter optimisation. \section*{Acknowledgment} \bibliographystyle{IEEEtran} \section{Introduction} \label{sec:intro} Various ML pipeline composition and optimisation methods have been proposed to construct valid and well-performing multi-stage ML models, given both a problem (i.e. a dataset) and a set of ML components with tunable hyperparameters \cite{sabu18,zohu19,ngma20,kemu20}. Typically, this pool of ML components contains classification/regression predictors and other preprocessing operators, e.g. for imputation or feature generation/selection. Among ML pipeline composition and optimisation methods, one of the most successful is based on the Sequential Model-based Algorithm Configuration (SMAC) approach \cite{thhu13}. Like most optimisers, this method seeks a balance between the exploration and exploitation of configuration spaces. When exploiting, the procedure investigates ML pipelines that are similar to the current best performers in terms of pipeline structure and hyperparameter values. When exploring, the procedure selects random candidates within the configuration space instead, seeking to avoid entrapment in any local optima.
There are several automated machine learning (AutoML) tools that implement SMAC, starting with AutoWeka version 0.5 \cite{thhu13}. Most of these seek one-component pipelines and thus search through configuration spaces that only involve predictors and their hyperparameters. AutoWeka4MCPS is a rare exception, both extending configuration spaces for data preprocessing components and upgrading the implementation of SMAC, thus enabling the construction and optimisation of multiple-component pipelines \cite{sabu18}. While this extension of configuration space allows a wider range of diverse and possibly better ML solutions to be explored, it does come with a number of challenges. Key among them is that a large configuration space is more difficult to efficiently traverse for any optimiser. Given that every candidate ML pipeline must also be trained/queried on a dataset to evaluate its accuracy, and that training can be computationally expensive, this can be a substantial obstacle for using AutoML on a novel ML problem. There is therefore both a need for and a great interest in approaches that offer the intelligent reduction of configuration spaces by preemptively excluding unpromising ML pipelines or, more severely, ML components. Several previous attempts have been made to deal with the problem of configuration space reduction, although few have considered the additional intricacies involving pipeline structure. These approaches typically lean one of two ways when culling the search space: hard restrictions defined by expert knowledge \cite{sabu18,fekl15,depi17,tsga12} and dynamic constraints based on meta-learning \cite{olmo16,wemo18,giya18,dbbr18,va19,lebu15,albu15,rihu18,prbo19,wemu20}. The latter notion is of particular appeal to AutoML as it is effectively hands-off; the solution to a new ML problem is aided by the automatic extraction of `meta-knowledge' from previous experience. 
Problematically, though, advocacy of meta-learning often hinges on the curation of an ideal meta-knowledge base, which, in the AutoML context, would need to involve intensive evaluations of many ML components, each one thoroughly sampled across a frequently multi-dimensional range of hyperparameter values. To make matters more complicated, the performance of ML components can vary substantially between two intrinsically dissimilar datasets, and it is not even clear what kind of dataset characteristics should be a metric for that dissimilarity \cite{lega10,lega10a,albu15}. In practice, data scientists do not have access to a tailor-made meta-knowledge base. On the other hand, in the natural course of executing AutoML pipeline composition/optimisation processes on a dataset, data scientists do implicitly acquire accuracy evaluations for numerous ML-pipeline candidates. So, we ask the question: are these evaluations opportunistically useful? Can they recommend how many and which ML components we should select when designing an ML-pipeline search space? To explore these research questions, we run a series of experiments with the AutoWeka4MCPS package \cite{sabu18}, which is accelerated by the ML-pipeline validity checker, AVATAR \cite{ngga20}, wherever specified. All experiments revolve around a meta-knowledge base that is built by using loose assumptions to convert limited SMAC-based AutoML runs across 20 datasets into mean-error statistics and associated performance rankings for 30 Weka predictors, both overall and per dataset. The meta-knowledge base is considered to be neither rigorous nor exhaustive. Despite this, the experiments seek to address whether rankings from the compiled statistics are still reliable enough to guide an improved search for an ML solution to a new problem. 
Some experiments additionally explore whether this meta-knowledge can be improved by considering dataset similarity; in these cases, we employ the relative landmarking method \cite{va19} to quantify that similarity. Because SMAC has not completed the ML pipeline composition and optimisation tasks for 9 datasets for at least one configuration space, we only present experiments on 11 datasets. Ultimately, the main contributions of this study are: \begin{itemize} \item An investigation of how the performance of an AutoML composition/optimisation process is affected by varying levels of recommended pipeline search space reduction, i.e. removing all but the `best' $k$ of 30 predictors from an ML-component pool for variable $k$. \item An exploration of how those results vary under different modes of recommendation, e.g. the best predictors over all datasets versus the best predictors for the most similar dataset, all derived from opportunistic and somewhat unreliable meta-knowledge. \end{itemize} Accordingly, this paper is divided into five sections. After the Introduction, Section \ref{sec:related_work} reviews previous attempts to reduce configuration spaces in the context of AutoML. Section \ref{sec:methodology} details the methodology used in this study, e.g. dynamic configuration spaces, meta-knowledge generation, and relative landmarking. Section \ref{sec:experiment} presents and analyses experiments assessing whether meta-learned recommendations for culling configuration space are beneficial to the performance of pipeline composition/optimisation. Finally, Section \ref{sec:conclusion} concludes this study. \section{Related Work} \label{sec:related_work} The growing number of available ML methods with their often complex hyperparameters leads to a very rapid expansion, if not combinatorial explosion, of ML-pipeline configurations and associated search spaces. 
Intelligent reduction of these configuration spaces enables ML pipeline composition and optimisation methods to find valid and well-performing ML pipelines faster within the typical constraints of execution environments and time budgets. We review two main approaches to reduce configuration spaces in the context of ML pipeline composition and optimisation. \textit{Predefined ML pipeline structures and component hyperparameters:} This approach can be implemented as fixed pipeline templates \cite{sabu18,fekl15,depi17,tsga12} or ad-hoc specifications \cite{olmo16,wemo18,giya18,dbbr18,va19,lebu15,albu15,rihu18,prbo19,wemu20} such as context-free grammars. Moreover, specific ranges of hyperparameter values, which highly contribute to well-performing pipelines, are also predefined in these specifications. The advantage of this approach is its simple nature, leveraging expert knowledge to reduce configuration spaces by directly restricting the length of ML pipelines, the pool of ML components, and their permissible orderings/arrangements. However, the disadvantage of this approach is that expert bias might obscure strongly performing ML pipelines outside of the predefined templates. \textit{Meta-learning:} This approach aims to reduce configuration spaces by using prior knowledge to avoid wasting time with unpromising ML-solution candidates. Frequently, this involves assessing similarity between past and present ML problems/datasets, so as to hone in on the most relevant meta-knowledge available \cite{lega10,albu15,lebu15,dbbr18,va19}. To quantify this similarity, characteristics are typically established for datasets, which can then be used in correlations. A characteristic can be directly derived from the dataset as a meta-feature, e.g. the number of raw features or data instances. Alternatively, relevant to this study, two datasets can be compared by the relative performance of landmarkers. These landmarkers are ideally simple one-component pipelines, i.e. 
predictors, that are of varying types; they estimate the suitability of varying modelling approaches for a dataset. For instance, the performance of a linear regressor theoretically quantifies whether an ML problem is linear. An ML problem that is estimated to be nonlinear will likely not benefit from methods serving a linear dataset. In any case, the meta-learning approach can be used to reduce configuration spaces by selecting a number of well-performing ML components \cite{albu15,dbbr18,va19,lebu15} or important hyperparameters for tuning \cite{albu15,rihu18,prbo19,wemu20}. For instance, both \textit{average ranking} and \textit{active testing} have previously been used to recommend ML solutions for new datasets \cite{dbbr18}. However, these approaches have not been applied to AutoML yet. Moreover, these studies limit their scopes by optimising predictors, not multi-component pipelines, and the optimisation method they use is grid search, proven not to be as effective as SMAC \cite{thhu13}. Other studies have investigated estimating the importance of hyperparameters \cite{rihu18,prbo19,wemu20} from prior evaluations. Specifically, some hyperparameters are more sensitive to perturbation than others; tuning them can contribute to a proportionally higher variability in ML-algorithm performance, i.e. error rate. As an example, gamma and complexity variable C are the most important hyperparameters for a support vector machine (SVM) with RBF kernel \cite{rihu18}. Consequently, the results of these studies can be used to reduce configuration spaces by constraining less-important hyperparameters, either by making their search ranges less granular or outright fixing them as default values. This frees up more time to seek the best values for important hyperparameters that have the highest impact on finding well-performing pipelines. 
However, a disadvantage of these studies is that the importance of ML-component hyperparameters has only been studied on small sets of up to six algorithms. This reflects how time-consuming it is to properly sample hyperparameter space across all available algorithms. In this study, our approach aligns with meta-learning principles. However, it differs from previous research by refusing to carefully curate a tailor-made meta-knowledge base. Instead, accepting a degree of unreliability, we opportunistically derive assumptive statistics from numerous pipeline evaluations; these evaluations are non-exhaustive and simulate the remnants of AutoML optimisation processes intended to solve seemingly unrelated problems. Accepting this context, we identify previously well-performing ML components, sometimes weighted by the past-and-present dataset similarity derived via the relative landmarking method, and we constrain configuration subspaces for ML pipeline composition and optimisation around these top performers. We also investigate how varying degrees of this recommendation-based search-space culling affects the performance of ML pipeline composition/optimisation methods. \section{Modelling Configuration Spaces for ML Pipeline Composition and Optimisation } \label{sec:search_space_modelling} To control the extension and reduction of configuration spaces in the context of AutoML, we model them using a generic tree structure. After that, we extend the problem of pipeline composition and optimisation \cite{sabu18} by adding a factor representing pipeline structures. Finally, we define the problem of pipeline composition and optimisation with dynamic configuration spaces as a constrained optimisation problem. \subsection{Configuration Space Modelling} To discuss the search space of ML pipelines, it is worth considering that each potential constituent within a pipeline is an instantiation of an ML component with a certain set of values for its hyperparameters. 
Specifically, if $\mathcal{A}$ is the pool of available ML components, a potential pipeline constituent indexed by $c$ can be represented as \begin{equation} \small{ x_c=(\mathcal{A}_c,\lambda_c, \epsilon_c), \label{eq:treenode} } \end{equation} where $\mathcal{A}_c\in\mathcal{A}$ is an ML component and $\lambda_c$ is a set of hyperparameter values for component $\mathcal{A}_c$. Notably, we also mark each candidate with a binary variable $\epsilon_c$, which represents whether the potential constituent is accessible/inaccessible to a pipeline search algorithm. We now arrange the set of candidate constituents, $\mathcal{X}=\{x_c\}$, in a tree-based configuration space defined as \begin{equation} \small{ \mathcal{T}=(\mathcal{X},f_p). \label{eq:searchtree} } \end{equation} Function $f_p$ represents the assignment of each node $x_c$ to a parent node $f_p(x_c)$, where the root nodes of this tree-structured space are defined by $f_p(x_c)=x_c$. Given this space, an ML pipeline $p$, also called a configuration, is a path constructed by backtracking from an active leaf node to the root node. Specifically, \begin{equation} \small{ p = (g,\vec{\mathcal{A}}, \vec{\mathcal{\lambda}}), \label{eq:pipeline} } \end{equation} where $\vec{\mathcal{A}}$ is a vector of ML components in $\mathcal{A}$, $\vec{\mathcal{\lambda}}$ is a vector of hyperparameter-value sets corresponding to $\vec{\mathcal{A}}$, and $g$ is a sequential pipeline structure that defines how the components are connected. Configuration space $\mathcal{T}$ can always be extended by adding tree nodes, but dynamically reducing pipeline search space to a subset of $\mathcal{T}$ is also possible by `deactivating' tree nodes, i.e. flipping their $\epsilon_c$ switches and those of any appropriate branches. Specifically, if a candidate pipeline contains a deactivated node, it is invalid and can be ignored by any search algorithm like SMAC \cite{thhu13}.
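A minimal sketch of this tree-structured space and the deactivation mechanism (for convenience we encode root nodes by `parent=None` rather than $f_p(x_c)=x_c$; all names are illustrative, not the AutoWeka4MCPS implementation):

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Node:
    """A candidate constituent x_c = (A_c, lambda_c, epsilon_c)."""
    component: str
    hyperparams: dict = field(default_factory=dict)
    active: bool = True              # the epsilon_c accessibility switch
    parent: Optional["Node"] = None  # None marks a root node

def pipeline_from_leaf(leaf: Node):
    """A pipeline is the path obtained by backtracking from a leaf to the
    root; it is valid only if every node on the path is active."""
    path, node = [], leaf
    while node is not None:
        path.append(node)
        node = node.parent
    path.reverse()
    if not all(n.active for n in path):
        return None  # contains a deactivated node: ignored by the search
    return [(n.component, n.hyperparams) for n in path]
```

Deactivating a node near the root (`root.active = False`) invalidates every pipeline passing through that branch at once, which is how a search-space reduction can be applied without rebuilding the tree.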
\subsection{The Problem of Pipeline Composition and Optimisation with Dynamic Configuration Spaces} The aim of pipeline composition and optimisation is to find the best-performing ML pipeline $(g,\vec{\mathcal{A}}, \vec{\mathcal{\lambda}})^{*}$, which involves the optimal combination of pipeline structure, selected components, and associated hyperparameters. The formalism of the pipeline-search problem \cite{sabu18} can thus be written as \begin{equation} \small{ (g,\vec{\mathcal{A}}, \vec{\mathcal{\lambda}})^{*} = \mathrm{arg\,min}\frac{1}{k}\sum_{i=1}^{k} \mathcal{L}((g,\vec{\mathcal{A}},\vec{\mathcal{\lambda}})^{(i)},\mathcal{D}^{(i)}_{train},\mathcal{D}^{(i)}_{valid}), } \label{eq:pipeline_problem} \end{equation} where $\mathcal{D}_{train}$ and $\mathcal{D}_{valid}$ are training and validation datasets. Specifically, this equation minimises the $k$-fold cross-validation error of loss function $\mathcal{L}$. The $k$ here is not to be confused with the $k$ variable used for predictor rankings elsewhere in this paper. Equation (\ref{eq:pipeline_problem}) does not consider changes to the configuration space $\mathcal{T}$. If we reduce this configuration space by selecting promising pipelines to form the configuration subspace $\mathcal{T}^{*}\subset \mathcal{T}$, the problem of pipeline composition/optimisation can be reformulated as a constrained version of Eq. (\ref{eq:pipeline_problem}), as follows: \begin{equation} \small{ (g,\vec{\mathcal{A}}, \vec{\mathcal{\lambda}})^{*} = \mathrm{arg\,min}\frac{1}{k}\sum_{i=1}^{k} \mathcal{L}((g,\vec{\mathcal{A}}, \vec{\mathcal{\lambda}})^{(i)} \in \mathcal{T}',\mathcal{D}^{(i)}_{train},\mathcal{D}^{(i)}_{valid}) } \label{eq:extended_pipeline_problem} \end{equation} subject to \begin{equation} \small{ \mathcal{T}' = \mathcal{T}^{*}\setminus h(\mathcal{T}^{*}), \label{eq:extended_pipeline_problem_constraints_1} } \end{equation} where $h(\mathcal{T}^{*})$ is an optional function to find the set of all invalid pipelines. 
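The constrained search of Eq. (\ref{eq:extended_pipeline_problem}) can be sketched as follows; the toy pipelines, their losses, and the \texttt{evaluate} callback are placeholders for the real training/validation machinery:

```python
# Hypothetical sketch: restrict the search to a promising subspace T*, exclude
# the invalid pipelines h(T*), and return the candidate minimising mean k-fold
# loss. `evaluate` stands in for training/validating one pipeline on one
# train/validation fold pair.

def cross_val_error(pipeline, folds, evaluate):
    """Mean loss over k folds: (1/k) * sum_i L(pipeline, D_train_i, D_valid_i)."""
    return sum(evaluate(pipeline, train, valid) for train, valid in folds) / len(folds)

def best_pipeline(subspace, invalid, folds, evaluate):
    """Arg-min over T' = T* minus h(T*)."""
    candidates = [p for p in subspace if p not in invalid]
    return min(candidates, key=lambda p: cross_val_error(p, folds, evaluate))

# Toy usage: pipelines are labels with fixed losses; 10 placeholder folds.
losses = {"A": 0.30, "B": 0.10, "C": 0.20}
folds = [(None, None)] * 10
evaluate = lambda p, train, valid: losses[p]
assert best_pipeline(["A", "B", "C"], {"B"}, folds, evaluate) == "C"
```

Note that excluding the invalid set changes the arg-min: pipeline "B" has the lowest loss but is removed before evaluation.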
This set exclusion can be considered to represent the role of AVATAR \cite{ngga20}. \section{Meta-learning Methodology for Configuration Space Reduction} \label{sec:methodology} Here, we present the methods used in the three major facets of our meta-learning study. Section \ref{sec:problem_formulation} describes how pipeline configuration space is designed and augmented for dynamic re-sizing. Section \ref{sec:meta_construction} details how we construct a meta-knowledge base, acknowledging its intentional limitations. Section \ref{sec:landmarkers} covers the specifics of relative landmarking and its use in identifying similar datasets. \subsection{Dynamic Configuration Spaces} \label{sec:problem_formulation} Broadly stated, an ML model is a mathematical object that attempts to approximate a desired function. It is typically paired with an ML algorithm that, via the process of training, feeds on encountered data to adjust certain variables, i.e. model parameters, so as to improve the accuracy of the approximation. This pairing of ML model and algorithm, an ML component, contains other variables, i.e. hyperparameters, that are fixed throughout the training process. Hyperparameter optimisation is thus the process of finding values for these training constants that optimise the performance of the trained model, usually via some iterative approach. Even at this level, the task is not trivial; hyperparameter space can involve many dimensions that are continuous or discrete, with varying ranges. When hyperparameter optimisation extends to variable ML components, a core facet of AutoML, configuration space becomes even more complex, involving so-called `conditional' hyperparameters. For instance, the polynomial degree of an SVM kernel is only non-null if the type of SVM kernel is set to polynomial. 
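A minimal sketch of such a conditional hyperparameter, with assumed parameter names, might look like:

```python
# Illustrative sketch of a conditional hyperparameter: the polynomial degree
# only participates in the configuration when the kernel type is 'poly'.
# Parameter names ('C', 'kernel', 'degree') are assumptions for illustration.

def active_hyperparams(config: dict) -> dict:
    """Return only the hyperparameters active under the given conditions."""
    params = {"C": config["C"], "kernel": config["kernel"]}
    if config["kernel"] == "poly":  # conditional branch of the space
        params["degree"] = config.get("degree", 3)
    return params

assert "degree" in active_hyperparams({"C": 1.0, "kernel": "poly", "degree": 2})
assert "degree" not in active_hyperparams({"C": 1.0, "kernel": "rbf"})
```

A search algorithm aware of such conditions avoids wasting evaluations on hyperparameter values that have no effect.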
Consequently, the search space for a single-component model is better represented by a tree structure, which SMAC is suited to handle \cite{thhu13}. The incorporation of pipeline structure in AutoML search space complicates matters. It has been done before \cite{sabu18}, but our study necessitated an auxiliary representation of pipelines, specified as paths through a tree-structured space of ML components. This allows nodes to be marked active/inactive at any time so that an augmented SMAC can include/avoid ML pipelines containing those components while searching through that space. In effect, ML components can be pruned from configuration space to leave a substantially smaller `active' subspace. The mathematical formalism for this approach is presented in Section \ref{sec:search_space_modelling}, where the equations for pipeline search \cite{sabu18} are reframed as a constrained optimisation problem. Moreover, while our investigation only ever culls the search space once per dataset, per experiment, our methodology is suited to more dynamic modulations of search space; we leave this to future research. \subsection{The Meta-knowledge Base} \label{sec:meta_construction} Prior to any meta-learning experiments and analysis, a meta-knowledge base must first be constructed. However, to simulate the desired `coincidental' nature of the metadata and its availability, we limit the collection of previous experience to SMAC-based AutoML applied across 20 datasets, and only a singular two-hour run per dataset at that. This is enough to generate numerous pipeline evaluations, extracted from iterations of the optimising algorithm SMAC, but it still falls far short of the exhaustive exploration that a meta-knowledge base ideally requires. This is especially true, as a single evaluation does not just fix pipeline structure; it also fixes values for a set of hyperparameters. 
In fact, to be technical, a single SMAC iteration is one-tenth of a 10-fold cross-validatory ML-pipeline evaluation; given enough time, up to ten SMAC iterations can be dedicated to the same pipeline/hyperparameter configuration. Moreover, per dataset, a single optimisation path is a very poor sampling of an entire configuration space. Some ML components may feature negligibly in the evaluated pipelines, if at all. Time budgets also complicate matters; some ML solutions may be more computationally expensive to train than others, and some SMAC runs may, via exploration/exploitation, end up in these regions of configuration space, leading to an unbalanced distribution of evaluations across datasets. In essence, the quality of accumulated experience is expected to be highly variable. Another issue is that, in raw form, pipeline evaluations are relatively useless; any one instantiation, hyperparameter sampling included, is unlikely to be visited again by SMAC in the future. Generalisations must thus be made if configuration space is to be effectively reduced. To that end, we make a loose assumption that, in the absence of further information, the error of a pipeline represents a sampled error of its constituent predictor. From this, mean-error statistics and associated performance rankings can be compiled for 30 Weka predictors, both overall and per dataset. These are much more practical, as a subspace forged around $k$ out of 30 predictors is a much more substantial reduction than excluding individual pipelines. Of course, the assumption behind the generalisation is very contentious, as the selection of preprocessors in a pipeline will obviously affect the accuracy of its predictor. So, intentionally working with limited meta-knowledge and presumptuous generalisations, the question is: are the compiled statistics still useful for narrowing in on promising subspaces? 
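The generalisation just described, pooling pipeline errors per constituent predictor and ranking predictors by mean error, can be sketched as follows (predictor names and error values are illustrative):

```python
from collections import defaultdict
from statistics import mean

# Sketch of the loose assumption above: each pipeline's error is treated as a
# sample of its constituent predictor's error, and predictors are then ranked
# by mean error. Predictor names and error values are illustrative.

def rank_predictors(evaluations):
    """evaluations: iterable of (predictor_name, pipeline_error_rate) pairs."""
    errors = defaultdict(list)
    for predictor, err in evaluations:
        errors[predictor].append(err)
    means = {p: mean(errs) for p, errs in errors.items()}
    return sorted(means, key=means.get)  # lowest mean error (best) first

evals = [("RandomForest", 0.12), ("RandomForest", 0.10),
         ("ZeroR", 0.48), ("NaiveBayes", 0.25), ("NaiveBayes", 0.27)]
assert rank_predictors(evals) == ["RandomForest", "NaiveBayes", "ZeroR"]
```

A configuration subspace is then forged around the top $k$ entries of such a ranking, whether compiled per dataset or overall.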
\subsection{Landmarkers} \label{sec:landmarkers} Typical reasoning in the field of meta-learning is that previous experience is most relevant to a problem at hand if past and present contexts are similar. Accordingly, it is routine to approach this by defining and compiling a set of so-called meta-features to describe a dataset, which are then compared between datasets. Naturally, identifying the most appropriate metrics to denote this similarity is a topic of active research, but landmarking has proved to be a popular option \cite{va19}; we employ this procedure in relevant experiments. A set of landmarkers, $\Theta=\{\theta_i\}$, is generally a collection of ML predictors that are simple and efficient to execute. Ideally, they represent a diversity of problem types. The theory is that, if a landmarker is well-suited for problem type $A$, and it produces an ML model with strong performance, e.g. good classification accuracy, on dataset $B$, then dataset $B$ belongs to the class of problems designated by $A$. Any ML pipeline that works well for one dataset in class $A$ is then presumed to work well for any other of that same problem type. However, in practice, it is challenging to pick a perfect set of landmarkers, especially as the choice of meta-features to describe complex problems has an impact on the effectiveness of similarity-based meta-learning \cite{va19,albu15}. Given that we include the evaluation of landmarkers as part of the overall AutoML optimisation time within relevant experiments, we have made a deliberate decision in this study to prioritise fast execution time. Therefore, sourced from the average evaluation time of all predictor-containing pipelines in our meta-knowledge base, we select the following five fastest predictors for our set of landmarkers: RandomTree, ZeroR, IBk, NaiveBayes, and OneR. 
We acknowledge that this choice is relatively crude, but it adheres to the opportunistic principles behind this study; are rough metrics for dataset similarity still useful in providing additional intelligence when reducing the input search space for AutoML pipeline selection? \begin{algorithm}[!htbp] \small \caption{\small Designing Configuration Space with Relative Landmarking} \begin{algorithmic}[1] \Require \Statex $\Theta$: The set of landmarkers \Statex $t_{new}$: The new dataset \Statex \{$t_{prior_j}$\}: The set of prior datasets \Statex \For{$\theta_i$ \textbf{in} $\Theta$} \State $E_{new\_i}$ = \textit{evaluate}($\theta_i$, $t_{new}$) \EndFor \For{\textbf{each} $t_{prior\_j}$} \State $c_j$ = \textit{calculateCorrelation}($E_{new}$, $E_{prior\_j}$) \EndFor \State $t^{*}$ = \textit{getMostSimilarTask}($c$) \State $\mathcal{T}_{new}$ = \textit{selectKBestMLComponents}($t^{*}$, $k$) \State \textbf{return} $\mathcal{T}_{new}$ \end{algorithmic} \label{algorithm:construct_configuration_space} \end{algorithm} Algorithm \ref{algorithm:construct_configuration_space} formalises how configuration space is constrained via the relative landmarking method, to then be used as input for AutoML pipeline composition and optimisation methods. Firstly, the algorithm evaluates the new dataset $t_{new}$ with each landmarker $\theta_i$, resulting in a 10-fold cross-validation error rate, $E_{new\_i}$, per landmarker (lines 1-3). Secondly, the algorithm calculates a Pearson correlation coefficient between the full performance vector of the new dataset, $E_{new}$, and a similarly landmarked vector of mean error rates, $E_{prior\_j}$, for each prior dataset $t_{prior\_j}$ (lines 4-6). Thirdly, the algorithm ranks the correlation coefficients and selects the dataset, $t^{*}$, that has the highest correlation coefficient (line 7). 
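A minimal Python sketch of the correlation-based selection in Algorithm \ref{algorithm:construct_configuration_space} (lines 1-7), assuming the landmarker error vectors have already been computed, might read:

```python
from statistics import mean, stdev

# Hypothetical sketch: dataset names and error rates are illustrative, and
# the landmarker evaluations are assumed to be precomputed error vectors.

def pearson(u, v):
    """Sample Pearson correlation coefficient of two equal-length vectors."""
    mu, mv = mean(u), mean(v)
    cov = sum((a - mu) * (b - mv) for a, b in zip(u, v)) / (len(u) - 1)
    return cov / (stdev(u) * stdev(v))

def most_similar_dataset(e_new, prior):
    """prior: {dataset: landmarker error vector}; pick the highest correlation."""
    return max(prior, key=lambda t: pearson(e_new, prior[t]))

e_new = [0.30, 0.55, 0.20, 0.25, 0.50]            # five landmarker error rates
prior = {"abalone": [0.32, 0.60, 0.22, 0.27, 0.49],
         "car":     [0.55, 0.20, 0.45, 0.60, 0.15]}
assert most_similar_dataset(e_new, prior) == "abalone"
```

Selecting the $k$ best predictors for the returned dataset then completes the subspace construction.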
Finally, the resulting configuration space to explore is constructed from all preprocessing components and the \textit{k} best performing predictors (line 8) for the most similar dataset, $t^{*}$. We emphasise that, for landmarker-based experiments on a newly encountered dataset, the net evaluation time of landmarkers is deducted from the total time budget assigned to ML pipeline composition/optimisation processes. \section{Experiments} \label{sec:experiment} To explore the opportunistic utility of a limited meta-knowledge base, constructed according to Section \ref{sec:meta_construction}, we run a series of experiments with different settings, all described in Section \ref{sec:settings}. The results are described in Section \ref{sec:results}. \subsection{Experimental settings} \label{sec:settings} All experiments take the form of running AutoWeka4MCPS\footnote{https://github.com/UTS-AAi/autoweka} across 20 datasets listed in Table \ref{tab:datasets}. The AutoML package uses SMAC \cite{thhu13} for ML pipeline composition/optimisation and is applied for two hours of runtime and 1 GB of memory per dataset, although this two-hour process is itself done five times over for statistical purposes. Logged results typically note the 10-fold cross-validated error rate of the best ML pipeline across the five repeated runs. Essentially, the experiments only vary in how the searchable configuration space has been recommended to the AutoML package, i.e. which $k$ out of 30 predictors to utilise. First, we define the `control' contexts, in which meta-learning does not feature: \begin{itemize} \item \textbf{baseline}: The full configuration space constructed from all preprocessing and predictor components available to AutoWeka4MCPS \cite{sabu18}. 
\item \textbf{avatar}: The full configuration space as per \textbf{baseline}, but where the AutoML solution search process is significantly boosted by AVATAR \cite{ngma20}, identifying and ignoring invalidly composed ML pipelines before they can waste evaluation time. \item \textbf{r30}: An extreme case, where the pipeline structure of an ML solution, per dataset, is fixed to the best that was found after a previous 30-hour optimisation of AutoML, as reported in \cite{sabu18}. The two hours in this `continuation' experiment are solely dedicated to optimising hyperparameters that have been re-initialised to their default values. \end{itemize} Next, we define the contexts that pull information from the meta-knowledge base, noting that the AVATAR speed-up is employed for all: \begin{itemize} \item \textbf{global leaderboard (\textit{M-k1}, \textit{M-k4}, \textit{M-k8}, \textit{M-k10}, and \textit{M-k19})}: Untargeted meta-knowledge. For each dataset, AutoML explores pipelines containing $k$ predictors, for $k$ in $\{1,4,8,10,19\}$, that performed the best across all datasets. This global leaderboard is constructed by averaging the rank numbers of a predictor from each individual dataset, then sorting these averages for all predictors. \item \textbf{landmarked (\textit{L-k1}, \textit{L-k4}, \textit{L-k8}, \textit{L-k10}, and \textit{L-k19})}: Targeted meta-knowledge. For each dataset, AutoML explores pipelines containing $k$ predictors, for $k$ in $\{1,4,8,10,19\}$, that performed the best on the most similar dataset, where similarity is defined by landmarkers; see Section \ref{sec:landmarkers}. \item \textbf{oracle (\textit{O-k1}, \textit{O-k4}, \textit{O-k8}, \textit{O-k10}, and \textit{O-k19})}: A representation of direct memory. For each dataset, AutoML explores pipelines containing $k$ predictors, for $k$ in $\{1,4,8,10,19\}$, that performed the best on the same dataset, according to previous runs in the meta-knowledge base. 
\end{itemize} Notably, we restrict the set of $k$ values for the meta-learning experiments due to the computational expense in running them. The particular spread of $\{1,4,8,10,19\}$ is the consequence of an initial exploratory experiment, which we exclude detailing here for the sake of brevity. As a result, these numbers may seem unusual, but they have no grander significance beyond being one possible way to sample the broad spectrum of predictor-culling scenarios. \input{tables/tab_dataset_11datasets} In any case, conventional wisdom and the no-free-lunch theorem suggest that, controlling for variability, \textit{baseline} should result in the worst model performance, with \textit{avatar} being an improvement due to an effective decrease in runtime wastage. Scenario \textit{r30} is difficult to predict \textit{a priori}, but, with a 30-hour head-start on optimising pipeline structure, it should outperform \textit{baseline} as well. As for the meta-learning experiments, they are ordered in increasing relevance of meta-knowledge to the dataset/problem on hand; the solutions found by AutoML should improve along this ordering. In effect, for $k=n$ and with $E(x)$ representing the cross-validated error of the optimal ML pipeline, we would naively expect \begin{eqnarray} \label{eq:hierarchy} \nonumber && E(\mathrm{baseline})>E(\mathrm{r30}),\\ \nonumber && E(\mathrm{baseline})>E(\mathrm{avatar}),\\ && E(\mathrm{avatar})>E(\operatorname{M-kn})>E(\operatorname{L-kn})>E(\operatorname{O-kn}). \end{eqnarray} Accordingly, the results, and any deviation from expectation, are analysed with Eq. \ref{eq:hierarchy} in mind. \subsection{Experiment Results} \label{sec:results} In the course of running the 18 experimental settings detailed in Section~\ref{sec:settings}, SMAC encountered 9 datasets for which its optimisation process was, for at least one configuration space, unable to evaluate any ML pipeline the 10 times required by cross-validation. 
While these `incomplete' runs are still worthy of discussion in future work, all results presented in this section relate to the remaining datasets, i.e. the first 11 in Table~\ref{tab:datasets}. \input{tables/tab_error_11datasets} \begin{figure*}[htbp] \centering \includegraphics[width=0.95\linewidth]{images/violin_chart_2.pdf} \caption{The violin plots capturing the average rankings and ranking distribution for the configuration spaces on 11 datasets.} \label{fig:violin_chart} \end{figure*} \input{tables/tab_pipelines_apart} With this acknowledged, Table \ref{tab:error_rate_all_methods} presents the mean error rate (\%) of the best pipelines found by SMAC for each different method of designing a searchable configuration space; the lowest value per dataset is shown in bold. Table \ref{tab:pipelines}\footnote{The details of the best ML pipelines found by SMAC with different configuration spaces for all data sets can be found at \url{https://github.com/UTS-AAi/autoweka/blob/master/autoweka4mcps/doc/landmarking_supplementary.pdf} } demonstrates what these optimal ML pipelines look like for the specific dataset \textit{abalone}. As for the performance measures, they allow experimental settings to be ranked in utility from 1 to 18 for each dataset. Equal rankings are averaged out, e.g. the top four settings for dataset \textit{car} are all ranked $(1+2+3+4)/4=2.5$. Immediately, given the rankings for each configuration scenario, the distribution across all 11 datasets can be displayed in Fig. \ref{fig:violin_chart}. A comparison of their averages is likewise depicted in Fig. \ref{fig:cd_diagram} via a critical difference (CD) diagram. 
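The tie-averaging scheme above corresponds to fractional ranking, which can be sketched as:

```python
# Sketch of the tie handling described above: settings sharing an error rate
# receive the average of the rank positions they jointly occupy.

def fractional_ranks(errors):
    """errors: list of error rates, one per setting; lower is better."""
    order = sorted(range(len(errors)), key=lambda i: errors[i])
    ranks = [0.0] * len(errors)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and errors[order[j + 1]] == errors[order[i]]:
            j += 1
        avg = (i + 1 + j + 1) / 2  # average of positions i+1 .. j+1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

# Four settings tied for best are each ranked (1+2+3+4)/4 = 2.5.
assert fractional_ranks([0.1, 0.1, 0.1, 0.1, 0.3]) == [2.5, 2.5, 2.5, 2.5, 5.0]
```

These per-dataset ranks are what the violin plots and the CD diagram aggregate.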
\begin{figure}[h] \centering \includegraphics[width=0.90\linewidth]{images/cd_diagram_updated_mk_remove_incomplete_embededfont.pdf} \vspace*{-1mm} \caption{The critical difference diagram of the average ranking of the performance of SMAC with different configuration spaces.} \label{fig:cd_diagram} \end{figure} \begin{figure}[htbp] \includegraphics[width=0.90\linewidth]{images/mlcomponent_ranking_mean_11datasets.pdf} \vspace*{-3mm} \caption{The ranking of ML predictor components based on mean error rate of their pipelines from prior evaluations for selected datasets.} \label{fig:ranking_predictors} \end{figure} Reassuringly, the expectation in Eq. (\ref{eq:hierarchy}) is upheld, if loosely, in both the CD diagram and the ranking distributions. Crudely averaging the averages already presented in Fig.~\ref{fig:cd_diagram} provides the following mean rankings: 9.728 for \textit{M-kn}, 8.918 for \textit{L-kn}, and 7.926 for \textit{O-kn}. When assessed against values of 15.09 for \textit{baseline}, improved to 12.27 for \textit{r30} and 10.77 for \textit{avatar}, the benefits of meta-learning seem both evident and additive with respect to refined targeting. Indeed, using \textit{global leaderboard} configuration spaces is a decent strategy, likely to yield `good enough' performance for the majority of datasets, with \textit{M-k4} and \textit{M-k1} proving quite a bit better than the \textit{baseline}, \textit{r30} and \textit{avatar} scenarios. The \textit{landmarked} configuration spaces are an upgrade beyond the \textit{global leaderboard}, using dataset similarity to select the best ML components that are, hopefully, relevant to a problem on hand; sure enough, \textit{L-k10} and \textit{L-k8} appear better than \textit{M-k4} and \textit{M-k1}. Finally, \textit{oracle} configuration spaces ideally trump all, leveraging the best predictors found previously for each dataset on the same dataset. 
It is no surprise then that \textit{O-k4}, \textit{O-k8} and \textit{O-k10} are highly ranked. Nonetheless, there is more to unpack, specifically with the distributions in Fig.~\ref{fig:violin_chart}. First of all, a severe culling of $k=1$ is a risky proposition, regardless of meta-learning targeting strategy. If the remaining predictor is a strong choice for a dataset, the severely reduced search space allows this predictor to be hyperparametrically fine-tuned with greater focus than in any other scenario, meaning that $k=1$ configuration spaces are often considered most beneficial, i.e. ranked 1st. Small predictor search spaces are also more supportive of multi-component pipeline evaluations, e.g. the \textit{L-k1} structure for \textit{abalone} in Table \ref{tab:pipelines}. However, $k=1$ spaces are also frequently the least beneficial, i.e. ranked 18th, especially if the sole predictor is not well-suited for an ML problem. In fact, Fig.~\ref{fig:violin_chart} suggests that AutoML optimisation should actually hedge its bets with an elite tier of predictors, i.e. $k>1$. This is particularly evident for \textit{landmarked} and \textit{oracle} configuration spaces, where the ranking distributions shift substantially towards the top-performing half of the 18 scenarios, i.e. closer to the bottom axis. Of course, these meta-learning gains eventually dissipate for extreme values of $k$, as evidenced by the $k=19$ distributions. After all, a culling scenario with $k=30$ would be equivalent to the standard \textit{avatar} setting. Further discussion is specific to each type of meta-learning experiment. \textit{Global leaderboard configuration spaces:} Figure \ref{fig:ranking_predictors} shows 30 predictors sorted by their average performance-ranking across 11 ML problems, as derived from the meta-knowledge base, additionally displaying how they ranked on each individual dataset. 
As is evident from this figure, the benefit of searching throughout the top $k$ components of such an ordering is that the top tier of predictors is selected for consistency. Indeed, random forest is a ubiquitous benchmark in Kaggle competitions for that very reason. Likewise, complex ensemble methods, listed in Fig.~\ref{fig:ranking_predictors} as `meta.X', are generally not to be recommended for small optimisation time budgets. Accordingly, the \textit{M-kn} distributions in Fig.~\ref{fig:violin_chart} are relatively unimodal, with their recommendations stably performant for AutoML across all datasets. Additionally, the `all-rounders' suggested by both \textit{M-k1} and \textit{M-k4} seem a good choice for any dataset. On the other hand, without targeting the characteristics of an ML problem, the benefits of versatile predictors are quickly lost to the increased search spaces, as evidenced by \textit{M-k8} and \textit{M-k10}. \textit{Oracle configuration spaces:} The \textit{oracle} setting is a theoretical ideal, unlikely to be used in practice. Discounting the compounded uncertainties in our meta-learning experiments, \textit{O-kn} should be the optimal recommendation procedure for establishing configuration space, given that it leverages what worked previously on the ML problem on hand in a form of direct memory. As already noted, the risk of selecting the wrong predictor, i.e. $k=1$, is evident in the broad distribution of \textit{O-k1}. However, once bets are hedged with a top tier of predictors, i.e. $k>1$, the spaces defined by \textit{O-k4}, \textit{O-k8} and \textit{O-k10} are all very effective culling strategies, with worst-case rankings higher than any alternative. Moreover, as Fig.~\ref{fig:cd_diagram} shows, the average ranking of 5.27 for \textit{O-k4} is more than 7.942 away from \textit{baseline}, i.e. it is critically different and thus a significant result despite the uncertainty inherent to the meta-knowledge base. 
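For reference, a critical-difference threshold of this magnitude is consistent with the Nemenyi test, $CD = q_\alpha\sqrt{k(k+1)/(6N)}$, for $k=18$ settings and $N=11$ datasets; the critical value $q_{0.05}\approx 3.49$ used below is an assumed constant, back-derived for illustration:

```python
from math import sqrt

# Sketch of the Nemenyi critical-difference threshold underlying a CD diagram:
# CD = q_alpha * sqrt(k(k+1) / (6N)). The critical value q_0.05 ~= 3.49 for
# 18 compared methods is an assumption for illustration.

def nemenyi_cd(num_methods: int, num_datasets: int, q_alpha: float) -> float:
    return q_alpha * sqrt(num_methods * (num_methods + 1) / (6 * num_datasets))

cd = nemenyi_cd(18, 11, 3.49)  # 18 settings ranked across 11 datasets
assert abs(cd - 7.94) < 0.05
```

Two settings whose average ranks differ by more than this threshold are deemed significantly different.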
\textit{Landmarked configuration spaces:} More so than either \textit{global leaderboard} or even \textit{oracle}, the landmarker-based recommendations of configuration space appear significantly bimodal. In effect, they either work well or fail badly. It is possible that the bifurcation is partially due to the meta-learning approach we employed; a refined selection of landmarkers or another method of determining dataset similarity could potentially weight the distributions more towards the high-ranking mode. Regardless, unlike \textit{global leaderboard}, landmarked recommendations remain competitive for a greater range of $k$, possibly because the marginal utility of an extra component in the search space is greater for \textit{L-kn} than \textit{M-kn} due to the predictor being more relevant to the problem on hand. As a result, while increasing values of $k$ diminish the best-case utility of a culled space, i.e. the maximum performance ranking, the average actually increases for a while, such that \textit{L-k10} proves to be the second-most reliable recommendation scheme of all 18 scenarios. In fact, according to the statistical tests underlying the CD diagram in Fig.~\ref{fig:cd_diagram}, the average rank for \textit{L-k10} of 6.68 is more than 7.942 away from the \textit{baseline} rank of 15.09, i.e. a critical difference. This result is significant and strongly supports the validity of using meta-knowledge in an opportunistic fashion to boost AutoML optimisation processes. Finally, we emphasise that these results are intrinsically dependent on the time budget used for ML pipeline composition/optimisation, which, in these experiments, has been two hours per SMAC run. The consequences of this choice can be counter-intuitive. For instance, settings \textit{r30} and \textit{O-k1} are effectively identical, except that the former previously had 30 hours to select its best pipeline, while the latter only had two. 
So, it would seem that \textit{r30} has an advantage. Instead, the violin plots in Fig.~\ref{fig:violin_chart} show that \textit{O-k1} is superior. Admittedly, as Table~\ref{tab:pipelines} implies, the extended exploration time does allow \textit{r30} to recommend far more complex ML pipelines in its culled space than \textit{O-k1}. However, a one-component pipeline can have its hyperparameters optimised far more effectively in two hours. In fact, two hours may not be enough to even train the alternative, let alone iterate through its hyperparameter configurations. On that topic, we note the following. With infinite time, an encompassing search space of $k=30$ will always provide the best solutions that $k<30$ risks missing out on. At the other extreme, with negligible time, the only feasible option is $k=1$, requiring a desperate choice of predictor that is hopefully informed by strong meta-knowledge. For any other practical choice of runtime, there will be a `sweet spot' for tier size $k$. In our particular experiments, and based on the average rankings of cull strategies, these were denoted by \textit{M-k4}, \textit{L-k10} and \textit{O-k4}. Such results may have a large degree of uncertainty, but the underlying principle is clear: although imperfect, even weakly-biasing meta-knowledge can boost the search for ML solutions that is subject to strict time limits. \section{Conclusion} \label{sec:conclusion} In this study, we have investigated whether the routine process of AutoML optimisation, previously applied to a collection of datasets, can provide any useful information to support model search in the future. Specifically, we opportunistically harvested numerous evaluations of ML pipelines, substantial but non-exhaustive, to produce a meta-knowledge base of 30 predictors and their ranked performance on each of 20 datasets. 
We then ran a series of experiments with AutoML package AutoWeka4MCPS and its pipeline composition/optimisation algorithm SMAC, in which the solution search space for a target ML problem was culled to varying sets of predictors informed by the meta-knowledge base. These recommendations could be based on how predictors performed overall, how they performed on the most similar dataset to the one on hand, or how they performed on the dataset itself in a previous run. Dataset similarity, where relevant, was determined by the method of relative landmarking. We found that, despite the intended unreliability of the meta-knowledge base, meta-learning does, as a generalisation, improve the outcome of SMAC. Moreover, AutoML solution search appears to do better the more relevant the meta-knowledge is to a dataset on hand. The impact of landmarker-based search-space recommendation even proved critically different from our baseline strategy, although future experiments will be required to further establish the statistics of these results. Ultimately, we find that our studied form of opportunistic meta-knowledge, compiled with a minimal level of thoroughness, is risky to depend on when selecting the best predictor for a dataset; its optimised performance is frequently `all or nothing'. In contrast, the meta-knowledge proves much more useful in culling away the worst performers, so as to leave behind a top tier of potential ML models, the optimal size of which depends on the runtime available for optimisation. In effect, our research suggests that AutoML should seek a risk-averse balance that ensures promising candidates are not disregarded, while also dedicating enough time to properly trial them.
\section{Introduction} Deep learning is the prevailing paradigm for machine learning. Over the course of its meteoric rise, its many differences from human learning have become increasingly clear. Chief among these are gaps in data efficiency, robustness, generalizability, and energy efficiency --- all unlikely to narrow with growing computation power alone. This has motivated a renewed search for brain-inspired learning algorithms. However, the current software infrastructure needs improvement to support productive exploration. Two common choices today for designing novel learning algorithms are TensorFlow \cite{abadi2016tensorflow} and PyTorch \cite{paszke2019pytorch}. These general deep learning frameworks provide powerful abstractions for calculating gradients and building deep neural networks, but there is no intermediate layer between these two levels. For high-level development, backpropagation is the only learning algorithm offered and is in fact coupled with the training process. Software in neuromorphic computing, on the other hand, has traditionally focused more on simulating neurons and spiking neural networks \cite{carnevale2006neuron,gewaltig2007nest,bekolay2014nengo,stimberg2019brian}, interfacing with neuromorphic hardware \cite{davison2009pynn,sawada2016truenorth,lin2018programming,rueckauer2021nxtf}, and converting pre-trained deep learning models to spiking neural networks for inference \cite{rueckauer2017conversion,rueckauer2018conversion}. Learning has not been a key part of these libraries. The few supported learning rules such as spike-timing-dependent plasticity are not competitive on large problems. As a result, new learning algorithms are developed in independent codebases that are not easily reusable. In this work, we present Neko, a software library under active development for exploring learning rules. We build on the popular autograd frameworks, and our goal is to implement key building blocks to boost researcher productivity. 
By decoupling the learning rules from the training process, we aim to provide an abstraction model that enables mixing and matching of various design ideas. To arrive at the right abstraction level, we need to sample a wide range of learning algorithm research. Below are the three directions and exemplars we have prioritized in this initial code release. The first class of learning rules are gradient-based methods. They approximate backpropagation with various levels of biological plausibility \cite{lillicrap2020backpropagation,lee2016training,sacramento2018dendritic,neftci2019surrogate,zenke2018superspike,marschall2020unified,lillicrap2016random,akrout2019deep,sornborger2019pulse}. From this category, we study the e-prop algorithm \cite{bellec2020solution} in detail and provide a complete reimplementation. The second direction is based on the hypothesis that the brain keeps track of probabilistic distributions over weights and rewards \cite{aitchison2021synaptic,dabney2020distributional}. This line of exploration may offer important clues towards achieving learning efficiency and robustness in the face of uncertainty. We develop a sampling-based learning rule on spiking neural networks (SNN). The third class is concerned with hardware constraints on plasticity mechanisms. For this class, we include the classic example of Manhattan rule training for memristive crossbar circuits. In all three exemplars, we seek consistent implementation in the Neko library. \section{Library design} The Neko library is designed to be modular, extensible, and easy to use. Users can select from a collection of neuron models and encoding methods to build a spiking or regular artificial neural network, and train it with one of the implemented learning rules. Alternatively, they could supply their own networks from PyTorch or Keras \cite{chollet2015keras} or develop new learning algorithms based on the provided intrinsics. 
The following code snippet provides an example of solving MNIST \cite{lecun1998mnist} with the e-prop algorithm on a recurrent network of 128 hidden adaptive leaky integrate-and-fire (ALIF) neurons. \begin{lstlisting}[caption={Train an SNN model of ALIF neurons with e-prop.},captionpos=b,frame=single,language=python,breaklines]
from neko.backend import pytorch_backend as backend

rsnn = ALIF(128, 10, backend, task_type='classification')
model = Evaluator(rsnn, loss='categorical_crossentropy', metrics=['accuracy', 'firing_rate'])
learning_rule = Eprop(model, mode='symmetric')
trainer = Trainer(learning_rule)
trainer.train(x_train, y_train, epochs=30)
\end{lstlisting} The training process illustrated in this example can be broken down into a series of high-level Neko modules: the \emph{layer} includes pre-implemented recurrent SNNs and adaptors for existing Keras and PyTorch models; the \emph{evaluator} associates a model with a loss function and optional metrics; the \emph{learning rule} implements backpropagation and a growing list of neuromorphic learning rules; and the \emph{trainer} handles training logistics as well as special logic to apply multiple learning rules for gradient comparison between models. Besides these core components, auxiliary modules include the data loader, spike encoder, optimizer, and functions for loss, activation, and pseudo-derivative calculations. To help users define custom algorithms, Neko also provides a unified API for accessing frequently used features in TensorFlow and PyTorch such as low-level tensor operations. Switching the backend is straightforward, which helps detect occasional framework-dependent behavior and is useful for code verification and performance analysis. The multi-backend support is reminiscent of the earlier Keras framework. However, Neko is different in that it provides more fine-grained abstraction layers such that users can replace the learning algorithm by changing a single line of code.
Taken together, these features also simplify the process of porting code to hardware accelerators, since implementing a backend for the hardware is sufficient to run all models in Neko on it. \section{Use cases} In this section, we present results on the three representative learning rules introduced earlier. We also provide gradient analysis as an example of Neko's cross-cutting utilities that we are building to help design, debug, and compare new learning algorithms. \subsection{Credit assignment with local signals} A key mystery in the brain is how it implements credit assignment. The standard backpropagation through time (BPTT) algorithm is unrealistic as we cannot expect a biological neuron to be aware of all past synaptic strengths. Bellec et al. \cite{bellec2020solution} proposed e-prop, a local online learning algorithm for recurrent SNNs. The method exploits the mathematical formula of BPTT, deriving an approximation which only requires a recursive accumulative \emph{eligibility trace} and a local \emph{learning signal}. These properties make the algorithm one step closer to biologically realistic on-chip learning. In Neko, we implemented full-featured e-prop algorithms including the three variants: symmetric, random, and adaptive. Whereas the paper manually derived the e-prop formulas for some networks, we took a different approach: separating the model from the learning rules. In the layer module, the regular recurrent neural networks and recurrent SNNs, with leaky integrate-and-fire (LIF) or ALIF neurons, were all defined as standard models. Meanwhile, they inherited from an \emph{Epropable} class, which defined general symbolic gradient formulas according to recurrent cell dynamics. Specifying this extra information was all it took to perform e-prop, and in a network-agnostic way. This design enabled the error-prone formula derivation to be automated. It also sped up experiments with new network architectures or e-prop variants. 
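To make the two ingredients concrete, here is a minimal NumPy sketch (not Neko's implementation) of an e-prop-style update for a single leaky neuron: an eligibility vector is accumulated forward in time by a purely local recursion and combined with a per-step learning signal. The leak factor, threshold, and pseudo-derivative shape are illustrative choices, not values from the paper.

```python
import numpy as np

def eprop_update(x_seq, err_seq, w, alpha=0.9, lr=1e-2):
    """Simplified e-prop-style update for one leaky neuron.

    x_seq:   (T, d) input sequence
    err_seq: (T,) per-step learning signal (e.g. a broadcast error)
    w:       (d,) input weights; returns the updated weights
    """
    v = 0.0                       # membrane potential (leaky integrator)
    eps = np.zeros_like(w)        # low-pass filtered presynaptic activity (local)
    dw = np.zeros_like(w)
    for x, L in zip(x_seq, err_seq):
        v = alpha * v + float(w @ x)
        eps = alpha * eps + x                         # forward-in-time recursion
        psi = 0.3 * max(0.0, 1.0 - abs(v - 1.0))      # pseudo-derivative at threshold 1
        e = psi * eps                                 # eligibility trace
        dw += L * e                                   # learning signal x eligibility trace
    return w - lr * dw
```

Unlike BPTT, no quantity here depends on past synaptic strengths; everything is computable online from locally available signals.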
We compared the Neko implementation of e-prop to the original implementation on the TIMIT benchmark \cite{garofolo1992timit} for framewise speech recognition. The authors reported the results on a hybrid network of 100 ALIF and 300 LIF neurons \cite{bellec2020solution}. In our experiment, we used an ALIF-only network of 200 neurons and otherwise kept the setup identical. We report closely reproduced accuracy in Fig. \ref{fig:timit}. Notably, Neko's error rate dropped by $27\%$, after tuning regularization and batch size, while keeping the firing rate low at 10 Hz. To the best of our knowledge, this is the best SNN accuracy obtained with a local learning rule, which in fact reaches the level of an LSTM baseline trained with the precise gradients from BPTT (\cite{bellec2020solution} Fig. S4). Additionally, Neko is faster (training time measured on an Nvidia V100 GPU) and convenient for iterative development. \begin{figure} \centering \includegraphics[width=\linewidth]{timit.png} \caption{TIMIT results. \textmd{We reproduce e-prop accuracy on speech recognition in Neko with a smaller network. Neko is faster with slight tuning and reduces error by $27\%$ to reach the nonspiking baseline performance of a BPTT-trained LSTM model.} } \label{fig:timit} \end{figure} \subsection{Probabilistic learning} Bayesian statistics has captured much attention in the computational neuroscience community, both as an explanation for neural behavior \cite{Knill2004} as well as a means of performing inference in neural networks. In Neko, we develop a Hybrid Monte Carlo, or HMC \citep{Neil2011HMC}, algorithm to perform Bayesian inference on spiking neural networks based on Metropolis-adjusted Langevin diffusion \cite{Rossky1978}. Fundamentally, HMC algorithms are simply Metropolis-Hastings samplers \cite{Hoff2009Bayes} where the proposal distribution is based on the gradient.
Though spiking neurons are non-differentiable by definition, \textit{surrogate gradients} can be defined by considering smoothed versions of the spiking activation function \cite{neftci2019surrogate}. State-of-the-art learning algorithms for spiking neurons have used these surrogate gradients successfully, and we also find success in deploying them in HMC to form our proposal. In fact, this two-stage approach is especially appealing for spiking neurons, since the theoretical underpinnings of HMC place only very weak restrictions on what the proposal direction should be, and certainly do not require an exact gradient. Thus, from a theoretical perspective, running our algorithm for sufficiently long will result in a sample from our true posterior. Empirically, of course, it is not practical to explore the entire nonconvex, high-dimensional posterior. We therefore verify our implementation numerically. The MNIST-1D \cite{greydanus2020scaling} data is a derivative of the popular MNIST dataset of handwritten digits that transforms the image recognition problem into a sequence learning problem (see Figure \ref{fig:hmc}, Left). We train a spiking neural network with 1,000 hidden neurons using our proposed HMC algorithm\footnote{Using an adaptive step size \cite{Andrieu2008Adaptive} with a diffusion standard deviation of 0.01 scaled by the norm of the surrogate gradient, which was obtained via standard backpropagation.}, and record the posterior mean as well as uncertainty for the train set examples. As shown in Figure \ref{fig:hmc} (Right), we find that the model displayed significantly more uncertainty on test examples for which its best guess was incorrect than when it was correct. This validates our algorithm, as we would like errors to be associated with high uncertainty.
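The Metropolis-adjusted Langevin step at the core of this scheme can be sketched in a few lines of NumPy. The sketch below works on a generic differentiable log-posterior; in the SNN setting, `grad_logp` would be the surrogate gradient, and the fixed step size here is an illustrative simplification of the adaptive scheme described in the footnote.

```python
import numpy as np

def mala_sample(logp, grad_logp, theta0, sigma=0.1, n_steps=2000, seed=0):
    """Metropolis-adjusted Langevin sampler (a sketch).

    The Metropolis correction keeps the chain targeting logp exactly, so
    grad_logp only needs to be a useful proposal direction (e.g. a
    surrogate gradient), not an exact gradient.
    """
    rng = np.random.default_rng(seed)
    theta = np.array(theta0, dtype=float)

    def q_log(a, b):
        # log density (up to a constant) of proposing a from b
        mu = b + 0.5 * sigma ** 2 * grad_logp(b)
        return -np.sum((a - mu) ** 2) / (2 * sigma ** 2)

    samples = []
    for _ in range(n_steps):
        prop = (theta + 0.5 * sigma ** 2 * grad_logp(theta)
                + sigma * rng.normal(size=theta.shape))
        log_acc = logp(prop) - logp(theta) + q_log(theta, prop) - q_log(prop, theta)
        if np.log(rng.uniform()) < log_acc:
            theta = prop
        samples.append(theta.copy())
    return np.array(samples)
```

On a standard normal target, for example, the long-run sample mean of the chain approaches zero regardless of the starting point.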
As future work, we intend to compare HMC and other MCMC algorithms to other probabilistic learning approaches such as Variational Bayes \cite{Graves2011} and Monte Carlo Dropout \cite{gal2016} within the Neko framework. \begin{figure} \centering \includegraphics[scale=0.193]{figs/oned_example.png} \includegraphics[scale=0.8]{figs/hmc_boxplot.pdf} \caption{ Uncertainty Quantification. \textmd{\textbf{Left:} An example input representing the number 3 for the MNIST-1D data. \textbf{Right:} Posterior uncertainty among test examples which were correctly versus incorrectly predicted. Uncertainty is higher when errors are made.}} \label{fig:hmc} \end{figure} \subsection{Analog neural network training} Memristors have emerged as a new platform for neuromorphic learning \cite{thomas2013memristor,hu2014memristor}. These devices represent the synapse weights in the tunable conductance states of large crossbar architectures. Compared with digital implementations of neural networks, these analog circuits offer promising advantages in parallel processing, in-situ learning, and energy efficiency \cite{fuller2019parallel,li2018efficient}. However, they also place constraints on how the weights can be updated. A classic way to train these networks is with the Manhattan rule learning algorithm \cite{7139171}. Although training with backpropagation on device is theoretically possible, tuning individual weights with a feedback algorithm can be prohibitively time-consuming, especially for larger scale neural networks \cite{Alibart_2012}. As an alternative, the Manhattan rule simply updates network weights by a fixed amount according to the sign of the gradients, where the actual change magnitude may depend on the state of the material. This learning rule has been applied successfully to simple machine learning benchmarks in simulated or fully hardware-implemented analog neural networks \cite{yao2020fully}.
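The sign-based update is simple enough to state in one line of NumPy. In the sketch below, the increment `delta` and the conductance bounds `g_min`/`g_max` are illustrative device parameters, not values from the paper.

```python
import numpy as np

def manhattan_update(weights, grads, delta=0.01, g_min=-1.0, g_max=1.0):
    """Manhattan rule: move each weight by a fixed increment opposite to
    the sign of its gradient, then clip to the device's conductance range."""
    new_w = weights - delta * np.sign(grads)
    return np.clip(new_w, g_min, g_max)
```

Because only the sign of the gradient is used, the update is insensitive to gradient magnitude and maps naturally onto fixed-pulse programming of memristive devices.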
Neko implements a family of Manhattan rules to simulate the training process. It includes the basic algorithm and an extended version that supports a specified range of material conductance constraints. Because these learning rules do not have special requirements for the network architecture, users can directly supply existing Keras and PyTorch models with Neko's adaptors. Our preliminary results show that both the simple Manhattan rule and the constrained version could train a simple 2-layer multi-layer perceptron (with 64 and 32 neurons) to 96\% accuracy on the MNIST dataset, which is 2\% lower than backpropagation. \subsection{Gradient comparison analysis} Many learning rules depend on gradients explicitly or implicitly. Yet, gradient estimates are not intuitive to developers. Debugging learning rules sometimes requires noticing subtle differences in gradient estimates and following their trends over the course of training. In Neko, we have designed a gradient comparison tool that can enumerate the gradients or weight changes for multiple learning rules with the same model state and input data. It can also track this information batch by batch. Visualizing this information can help inspect approximation quality differences caused by algorithm tweaks and identify equivalence in formula transformations. Outside the context of debugging, the change in gradient estimates throughout the training process can also reveal potential biases and other properties of the learning algorithm. The gradient comparison tool is made possible by Neko's separation of the learning algorithm and trainer module. It is implemented as a special trainer that takes multiple learning rules and clones of the same model. While the primary model follows the usual training process, the others' parameters are synced with the primary at each training step, and the weight changes are saved.
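The sync-and-record logic of such a comparison trainer can be illustrated with a hypothetical stand-in (this is not Neko's API): every learning rule is evaluated at the primary model's current weights on the same batch, all per-step weight changes are recorded, and only the primary rule's step is applied.

```python
import numpy as np

def compare_weight_changes(w0, batches, rules, primary=0, lr=0.1):
    """Evaluate several learning rules from an identical state on every batch.

    `rules` is a list of functions (w, batch) -> gradient estimate. The
    primary rule drives training; the others are evaluated at the primary's
    weights each step, and all per-step weight changes are saved.
    """
    w = np.array(w0, dtype=float)
    history = {i: [] for i in range(len(rules))}
    for batch in batches:
        # sync: every rule sees the same (primary) weights
        steps = [-lr * np.asarray(rule(w, batch), dtype=float) for rule in rules]
        for i, dw in enumerate(steps):
            history[i].append(dw.copy())
        w = w + steps[primary]     # only the primary model advances
    return w, history
```

Plotting the recorded `history` entries against each other is the kind of per-batch deviation analysis the tool supports.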
The equivalence of gradient changes and weight changes can be established using the built-in \emph{naive optimizer}, which applies gradients directly without a learning rate. Gradient analysis offers insights into how learning rules behave relative to each other and backpropagation. Fig. \ref{fig:grads} illustrates this with an example of training spiking MNIST models with three variants of e-prop. While symmetric e-prop was the best at gradient approximation, the relationship between the random and adaptive versions was somewhat unexpected. The adaptive version produced gradients with larger deviation and bias, which could explain its weaker performance on the benchmark (not shown). \begin{figure} \centering \includegraphics[width=\linewidth]{grads.png} \caption{Gradient analysis tool. \textmd{This example illustrates the differences in approximate gradients among e-prop variants for training MNIST: (top) a snapshot of the distributions of gradient deviations, (bottom) how the gradient deviations change over time.} } \label{fig:grads} \end{figure} \section{Supporting utilities} To further enable neuromorphic-centric exploration, we integrate the SpikeCoding toolbox \cite{SpikeCoding2021}, which enables simple encoding of continuous value sequences into spikes with nearly a dozen algorithms. We present experimental results (Table \ref{tab:surg-ecg_table}) on two temporal data applications using three encoding schemes \cite{Petro2020}: \begin{itemize} \item \emph{Temporal contrast (TC)} encoding compares the absolute value of a signal with a threshold derived from the derivative and standard deviation of the full sequence multiplied by a tunable parameter. \item \emph{Step-forward (SF)} encoding generates positive/negative spikes by comparing values in a sequence to a moving baseline plus a tunable threshold; the baseline is initialized to the first value of the sequence and updated at each spike.
\item \emph{Moving window (MW)} encoding uses a similar moving baseline and threshold to determine spiking, but the baseline is set to the mean of values in a tunable time window. \end{itemize} All models were trained with e-prop learning except for the Benchmark RNN model trained with BPTT. While we note that there was often a sizable decrease in accuracy using these encodings, the sparsity of the input signal was significantly increased. Spike encodings may enable the use and development of learning algorithms more suited to or dependent on event-based input. \begin{table} \setlength{\tabcolsep}{7pt} \centering \caption{Testing two classification exemplars using temporal spike encoding schemes} \label{tab:surg-ecg_table} \begin{tabular}{lccccc} \toprule Encoding & None & TC & SF & MW & Benchmark \\ \midrule Surgery$^{1}$ & 0.675 & 0.620 & 0.687 & 0.563 & \textbf{0.766} \\ ECG$^{2}$ & \textbf{0.813} & 0.763 & 0.699 & 0.685 & 0.811 \\ \bottomrule \end{tabular} \begin{flushleft} $^{1}$A surgery kinematic dataset measuring the positions and orientations of surgical instruments during labeled simulated exercises. Data available upon request. $^{2}$A public ECG heartbeat categorization dataset \cite{kachuee2018ecg} subsampled for class balance. \end{flushleft} \end{table} \section{Conclusions} We presented the design of a software library for researching learning algorithms. Through three examples, we demonstrated its capability and ease of use in diverse scenarios. Our reference implementations introduced a new state of the art in local temporal credit assignment with SNNs, a sampling-based learning rule for estimating weight and prediction posteriors, as well as simulations for constrained training of analog neural networks on memristive hardware. Additionally, we showed a cross-cutting example to support learning rule inspection with gradient comparison analysis. Two directions emerge for future work.
First, we will extend learning rules to complex neuron models (e.g., dendritic computation, structured neurons) and network architecture. Second, we will port learning algorithms to emerging hardware platforms. Both processes will be facilitated by the abstraction of learning algorithms and the multi-backend support in the Neko library\footnote{https://github.com/cortical-team/neko}. \pagebreak \begin{acks} We thank Sihong Wang and Shilei Dai for helpful discussions. This work is partially supported by Laboratory Directed Research and Development (LDRD) funding from Argonne National Laboratory, provided by the Director, Office of Science, of the U.S. Department of Energy under Contract No. DE-AC02-06CH11357. \end{acks} \bibliographystyle{ACM-Reference-Format}
1912.11583
\section{Introduction} \label{Sec1} Matrix data are commonly encountered in various big data applications. For example, many science and social applications consist of individuals with complicated interaction systems. Such a system can often be modeled using a network with the nodes representing the $n$ individuals and the edges representing the connectivity among individuals. The overall connectivity can thus be recorded in an $n\times n$ adjacency matrix whose zero and nonzero entries indicate that the corresponding pair of nodes is unconnected or connected, respectively. Examples include the friendship network, the citation network, the predator-prey interaction network, and many others. There has been a large literature on statistical methods and theory proposed for analyzing matrix data. In the network setting, the observed adjacency matrix is frequently modeled as the summation of a deterministic low rank mean matrix and a random noise matrix, where the former stores all useful information in the data and is often of primary interest. One popular assumption made in most studies is that the rank $K$ of the latent mean matrix is known. However, in practice, such $K$ is generally unknown and needs to be estimated. This paper focuses on estimation and inference on the low rank $K$ in a general model setting including many popularly used network models as special cases. In our model, the data matrix ${\bf X}$ can be ``roughly'' decomposed as a low rank mean matrix ${\bf H}$ with $K$ spiked eigenvalues and a noise matrix ${\bf W}$ whose components are mostly independent. Here, $K$ is assumed to be fixed but unknown. To infer $K$ with quantified statistical uncertainty, we propose a universal approach for Rank Inference by Residual Subsampling (RIRS). Specifically, we consider the hypothesis test \begin{equation}\label{eq:hypothesis} H_0: K=K_0 \text{ vs. } \ H_1: K>K_0 \end{equation} with $K_0$ some pre-specified positive integer.
The spiked mean matrix with rank $K_0$ can be estimated by eigen-decomposition; subtracting this estimate from the observed data matrix yields the residual matrix. Then by appropriately subsampling the entries of the residual matrix, we can construct a test statistic. We prove that under the null hypothesis, the test statistic converges in distribution to the standard normal, and under the alternative hypothesis, some spiked structure remains in the residual matrix and the constructed test statistic behaves very differently. Thus, the hypothesis test in \eqref{eq:hypothesis} can be successfully conducted. Then by sequentially testing the hypothesis \eqref{eq:hypothesis} for $K_0 = 1,\cdots, K_{\max}$ with $K_{\max}$ some large enough positive integer, we can estimate $K$ as the first integer at which our test fails to reject. We provide theoretical justifications on the effectiveness of our procedure. A key for RIRS to work well is the carefully designed subsampling scheme. Although the noise matrix ${\bf W}$ has mostly independent components, the residual matrix is only an estimate of ${\bf W}$ and has correlated components. Intuitively speaking, if too many entries of the residual matrix are sampled, the accumulated estimation error and the correlation among sampled entries would be too large, rendering the asymptotic normality invalid. We provide both theoretical and empirical guidance on how many entries to subsample. In the special case where the diagonals of the data matrix ${\bf X}$ are nonzero independent random variables (which corresponds to self-loops in network models), a special deterministic sampling scheme can also be used and the RIRS test takes a simpler form.
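As a rough illustration (not the authors' code), the sequential procedure can be sketched in NumPy as follows. The statistic matches the universal test $T_n$ defined later in the paper; the subsampling rate $m$ and the normal cutoff are arbitrary illustrative choices rather than the theoretically prescribed ones.

```python
import numpy as np

def rirs_estimate_k(X, k_max=5, m=None, seed=0):
    """Sequentially test H0: K = K0 for K0 = 1, ..., k_max and return the
    first K0 at which the test fails to reject (a sketch of RIRS)."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    if m is None:
        m = max(2, int(np.sqrt(n)))   # illustrative subsampling rate
    z = 1.96                          # standard normal cutoff, level 0.05
    vals, vecs = np.linalg.eigh(X)
    order = np.argsort(-np.abs(vals))  # eigenvalues by decreasing magnitude
    off = ~np.eye(n, dtype=bool)
    for k0 in range(1, k_max + 1):
        top = order[:k0]
        # residual matrix: data minus the rank-k0 spiked estimate
        W_hat = X - (vecs[:, top] * vals[top]) @ vecs[:, top].T
        # symmetric Bernoulli(1/m) subsampling of off-diagonal entries
        Y = np.triu(rng.random((n, n)) < 1.0 / m, 1)
        Y = Y | Y.T
        T = np.sqrt(m) * W_hat[Y].sum() / np.sqrt(2.0 * (W_hat[off] ** 2).sum())
        if abs(T) < z:
            return k0
    return k_max
```

On a strongly structured network the loop keeps rejecting until the residual is spike-free, so the returned value tracks the number of spiked eigenvalues.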
The structure of low rank mean matrix plus noise matrix is very general and includes many popularly used network models such as the Stochastic Block Model (SBM, \cite{SBM1983, WangWong1987, Abbe2017CommunityDA}), the Degree Corrected SBM (DCSBM, \cite{DCSBM2011}), the Mixed Membership (MM) Model, and the Degree Corrected Mixed Membership (DCMM) Model \citep{airoldi2008} as special cases. The RIRS test is applicable to all of these network models and, in fact, goes beyond them. Substantial efforts have been made in the literature on estimating $K$ in some specific network models, where $K$ is referred to as the number of communities. For example, \cite{Mc2013} proposed an MCMC algorithm based on the allocation sampler to cluster the nodes in the SBM and simultaneously estimate $K$. \cite{airoldi2008} developed a general variational inference algorithm to estimate the parameters in the MM model with $K$ chosen according to some BIC criterion. \cite{2019J} considered testing \eqref{eq:hypothesis} with $K_0=1$ and proposed a signed polygon statistic which can accommodate the degree heterogeneity in the DCMM model. \cite{gao2017} proposed EZ statistics constructed from ``frequencies of three-node subgraphs'' to test \eqref{eq:hypothesis} with $K_0=1$ in the setting of the DCSBM. \cite{ba2017} introduced a linear spectral statistic to test $H_0:K=1$ vs. $H_1: K=2$ under the SBM. Compared to these works, we consider a more general model and a general positive integer $K_0$ that can be larger than 1. There is also a popular line of work that uses likelihood-based methods to estimate $K$; see, for example, \cite{Daudin2008}, \cite{Latouche-etal2012}, \cite{saldana2017many}, and \cite{wang2017}, among others. \cite{ChenLei2018} proposed a network cross-validation method for estimating $K$ and proved the consistency of the estimator under the SBM. \cite{LL15} proposed to estimate $K$ using the spectral properties of two graph operators -- the non-backtracking matrix and the Bethe Hessian matrix.
\cite{Zhao7321} proposed to sequentially extract one community at a time by optimizing some extraction criterion, based on which they proposed a hypothesis test for the number of communities via a permutation method. \cite{BickelSarkar16} proposed a new test based on the asymptotic distribution of the largest eigenvalue of the appropriately rescaled adjacency matrix for testing whether a network is Erd\H{o}s--R\'{e}nyi or not, and suggested a recursive bipartition algorithm for estimating $K$. \cite{L16} generalized the test in \cite{BickelSarkar16} for testing whether a network is an SBM with some specific $K_0$, and proposed a sequential testing idea to estimate the true number of communities. Among the existing literature reviewed above, the works by \cite{BickelSarkar16} and \cite{L16} are most closely related to ours. The main idea in both papers is that under the null hypothesis, which assumes that the matrix data follow an SBM with $K_0$ communities, the model parameters can be estimated and then the residual matrix can be rescaled. The rescaled residual matrix will be close to a generalized Wigner matrix whose extreme eigenvalues (after recentering and rescaling) converge in distribution to the Tracy-Widom distribution. However, under the alternative hypothesis, the extreme eigenvalues behave very differently. At a high level, this idea is related to ours in the sense that our proposal is also based on the residual matrix. The RIRS test differs from the literature in the way of using the residual matrix. Instead of investigating the spectral distribution of the residual matrix, we construct the RIRS test by subsampling just a fraction of the entries in the residual matrix. The subsampling idea ensures that the noise accumulation caused by estimating the mean matrix does not dominate the signal, which guarantees the good performance of our test.
Compared to the existing literature, the RIRS test behaves more like a nonparametric one in the sense that we do not assume any specific structure of the low rank mean matrix. Yet, it is also simple and fast to implement. Our asymptotic theory is also new to the literature. It is built on the recent developments in random matrix theory in \cite{FF18}, which establishes the asymptotic expansions of the eigenvectors for a very general class of random matrices. This powerful result allows us to establish the sampling properties of the RIRS test in an equally general setting. The remainder of the paper is organized as follows: Section \ref{Sec2} presents the model setting and motivation for RIRS. We introduce our new approach and establish its asymptotic theoretical results in Section \ref{Sec: RRI}. Simulations under various models are conducted to demonstrate the performance of RIRS in Section \ref{Sec4}. We further apply RIRS to a real data example in Section \ref{Sec:real}. All proofs are relegated to the Appendix and the Supplementary Material. \subsection{Notations} We introduce some notations that will be used throughout the paper. We use $a\ll b$ to represent $a/b\rightarrow0$ and write $a\lesssim b$ if there exists a positive constant $c$ such that $0\le a\le c b$. We say that an event $\mathcal{E}_n$ holds with high probability if $\mathbb{P}(\mathcal{E}_n)=1-O(n^{-l})$ for some positive constant $l$ and sufficiently large $n$. For a matrix ${\bf A}$, we use $\lambda_j({\bf A})$ to denote the $j$-th largest eigenvalue, and $\|{\bf A}\|_F$, $\|{\bf A}\|$, and $\|{\bf A}\|_{\infty}$ to denote the Frobenius norm, the spectral norm, and the maximum elementwise infinity norm, respectively. In addition, denote by ${\bf A}(k)$ the $k$th row of the matrix ${\bf A}$. For a unit vector ${\bf x} = (x_1,\cdots, x_n)^T$, let $d_x=\|{\bf x}\|_{\infty}=\max_i|x_i|$ represent the vector infinity norm.
\section{Model setting and motivation} \label{Sec2} \subsection{Model setting} \label{Sec2.1} Consider an $n\times n$ symmetric random matrix $\widetilde{\bf X}$ which admits the following decomposition \begin{equation}\label{0908.1} \widetilde {\bf X}={\bf H}+{\bf W}, \end{equation} where ${\bf H} = \mathbb E(\widetilde {\bf X})$ is the mean matrix with some fixed but unknown rank $K\ll n$ and ${\bf W}$ is the noise matrix with bounded and independent entries on and above the diagonals. As mentioned in the introduction, model \eqref{0908.1} includes popularly used network models as special cases. In such applications, the observed matrix ${\bf X}$ is the adjacency matrix and can be either $\widetilde{\bf X}$ or $\widetilde{\bf X} - \mathrm{diag}(\widetilde{\bf X})$, with the former corresponding to networks with self-loops and the latter corresponding to networks without self-loops, respectively. An important and interesting question is inferring the unknown rank $K$, which corresponds to the number of communities in network models. We address the problem by testing the hypotheses \eqref{eq:hypothesis} under the universal model \eqref{0908.1}. We note that with some transformation, model \eqref{0908.1} can accommodate nonsymmetric matrices. In fact, for any matrix $\widetilde {\bf X}$ that can be written as the summation of a rank $K$ mean matrix and a noise matrix of independent components, we can define a new matrix as $$\left( \begin{array}{ccc} {\bf 0} &\ \ \widetilde{\bf X}\\ \widetilde{\bf X}^T& \ \ {\bf 0}\\ \end{array} \right).$$ It is seen that this new matrix has the same structure as in \eqref{0908.1} with rank $2K$, and our new method and theory both apply. For simplicity of presentation, hereafter we assume the symmetric matrix structure for $\widetilde{\bf X}$ and ${\bf X}$. 
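This symmetrization can be checked numerically. The following NumPy sketch (with arbitrary illustrative dimensions) builds the block matrix and verifies that it is symmetric with exactly twice the rank of the original matrix.

```python
import numpy as np

def symmetrize(X):
    """Embed an n1 x n2 (possibly nonsymmetric) matrix into the symmetric
    model: the block matrix [[0, X], [X^T, 0]] is symmetric, and its
    nonzero eigenvalues are plus/minus the singular values of X, so its
    rank is 2K when X has rank K."""
    n1, n2 = X.shape
    return np.block([[np.zeros((n1, n1)), X],
                     [X.T, np.zeros((n2, n2))]])
```

Consequently, the symmetric theory applies to nonsymmetric data with $K$ replaced by $2K$.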
Write the eigen-decomposition of ${\bf H}$ as ${\bf V}{\bf D}{\bf V}^T$, where ${\bf D}=\mathrm{diag}(d_1,...,d_K)$ collects the nonzero eigenvalues of ${\bf H}$ in decreasing magnitude and ${\bf V}= ({\bf v}_1,\cdots, {\bf v}_K)$ is the matrix collecting the corresponding eigenvectors. Denote by $\hat d_1, \cdots, \hat d_n$ the eigenvalues of ${\bf X}$ in decreasing magnitude and $\hat{{\bf v}}_1,\cdots, \hat{{\bf v}}_n$ the corresponding eigenvectors. We next discuss the motivation of RIRS. \subsection{Motivation}\label{sec: motivation} To gain insights, consider the simple case when the observed data matrix ${\bf X} = \widetilde{{\bf X}}$ and follows model \eqref{0908.1}. Then $\mathbb E {\bf W} = \bf 0$. Thus intuitively, as $n\rightarrow\infty$, the normalized statistic $ \sum_{i=1}^nw_{ii}/\sqrt{\sum_{i=1}^n\mathbb{E}w_{ii}^2} $ converges in distribution to standard normal. Meanwhile, we expect $ \sum\limits_{i=1}^n\mathbb{E}w_{ii}^2/\sum\limits_{i=1}^nw_{ii}^2 $ to converge to 1 in probability as $n\rightarrow \infty$. The above two results entail that \begin{equation}\label{eq:norm-trueW} \frac{\sum_{i=1}^nw_{ii}}{\sqrt{\sum_{i=1}^nw_{ii}^2}} \end{equation} is asymptotically normal as the matrix size $n\rightarrow \infty$. In the ideal case where the eigenvalues $d_1, \cdots, d_K$ and eigenvectors ${\bf v}_1$, $\cdots$, ${\bf v}_K$ are known, a test of the form \eqref{eq:norm-trueW} can be constructed by replacing $w_{ii}$ with $\tilde w_{ii}$ where $\widetilde{\bf W} = (\tilde w_{ij})= {\bf X} - \sum_{k=1}^{K_0}d_k{\bf v}_k{\bf v}_k^T$. Under the null hypothesis, $\widetilde{\bf W}={\bf W}$ and the corresponding test statistic (constructed in the same way as \eqref{eq:norm-trueW}) is asymptotically normal. However, under the alternative hypothesis, $\widetilde{\bf W}$ still contains some information from the $K-K_0$ smallest spiked eigenvalues and the corresponding eigenvectors and the test statistic is expected to exhibit different asymptotic behavior. 
Thus, the hypotheses in \eqref{eq:hypothesis} can be successfully tested by using this statistic. In practice, the eigenvalues and eigenvectors of ${\bf H}$ are unavailable and need to be estimated. A natural estimate of $\widetilde {\bf W}$ takes the form \begin{equation}\label{eq: resid-mat} \widehat {\bf W}= (\hat w_{ij})={\bf X}-\sum_{k=1}^{K_0}\hat d_k\hat{\bf v}_k\hat{\bf v}_k^T. \end{equation} Under $H_0$, the residual matrix $\widehat{\bf W}$ is expected to be close to ${\bf W}$, which motivates us to consider a test of the form \begin{equation}\label{s1} \widetilde T_n = \frac{\sum_{i=1}^n\hat w_{ii}}{\sqrt{\sum_{i=1}^n\hat w^2_{ii}}}. \end{equation} Intuitively, the asymptotic behavior of the above statistic is expected to be close to the one in \eqref{eq:norm-trueW}. Thus, by examining the asymptotic behavior of $\widetilde T_n$ we can test the desired hypotheses. In fact, it will be made clear later that one form of the RIRS test is based on this intuition. The statistic in \eqref{eq:norm-trueW} only uses the diagonals of ${\bf W}$. In theory, the asymptotic normality remains true if we aggregate any and all entries of the matrix ${\bf W}$ (instead of just the diagonals) and normalize properly, thanks to the independence of the entries on and above the diagonals of ${\bf W}$. However, this does not translate into the asymptotic normality of the test based on $\widehat{{\bf W}}$ for at least two reasons: First, in applications without self-loops, the observed data matrix ${\bf X}$ takes the form $\widetilde{\bf X} - \mathrm{diag}(\widetilde{\bf X})$ and thus $\widehat{{\bf W}}$ estimates ${\bf W} - \mathrm{diag}(\widetilde{{\bf X}})$, which has nonrandom diagonals. Consequently, a test constructed using the diagonals of $\widehat {\bf W}$ becomes invalid. Second, the entries of $\widehat {\bf W}$ are all correlated and have errors coming from estimating the corresponding entries of ${\bf W}$.
Aggregating too many entries of $\widehat {\bf W}$ will cause too much noise accumulation. This, together with the correlations among the $\hat w_{ij}$, makes the asymptotic normality of the corresponding test statistic invalid. This heuristic argument is formalized in Section \ref{Sec3}. To overcome these difficulties, we need to carefully choose which and how many entries to aggregate. These issues are formally addressed in the next section. \section{Rank inference via residual subsampling} \label{Sec: RRI} \subsection{A universal RIRS test} The key ingredient of RIRS is subsampling the entries of $\widehat{\bf W}$. Specifically, define i.i.d. Bernoulli random variables $Y_{ij}$ with $\mathbb{P}(Y_{ij}=1)=\frac{1}{m}$ for $1\leq i<j\leq n$, where $m$ is some positive integer diverging with $n$ at a rate that will be specified later. In addition, set $Y_{ji}=Y_{ij}$ for $i<j$. A universal RIRS test that works under the broad model \eqref{0908.1} takes the following form \begin{equation}\label{eq: test-general} T_n=\frac{\sqrt{m}\sum_{i\neq j}\hat w_{ij}Y_{ij}}{\sqrt{2\sum_{i\neq j}\hat w_{ij}^2}}. \end{equation} The parameter $m$ controls, on average, how many entries of the residual matrix are aggregated in the test statistic. It will be made clear in a moment that $m$ needs to grow to infinity in order for the central limit theorem to kick in. However, the growth rate cannot be too fast, because otherwise the noise accumulation and the correlation in $\hat w_{ij}$ will make the asymptotic normality invalid. The following conditions will be used in our theoretical analysis. \begin{cond}\label{cond1} ${\bf W}$ is a symmetric matrix with independent and bounded upper triangular entries (including the diagonals) and $\mathbb{E}w_{ij}=0$ for $i\neq j$. \end{cond} \begin{cond}\label{cond2} There exists a positive constant $c_0$ such that $\frac{|d_i|}{|d_{j}|}\ge 1+c_0$ for all $1\le i<j\le K, d_i\neq -d_j$.
\end{cond} \begin{cond}\label{cond3} There exists a positive sequence $\theta_n$, which may tend to $0$ as $n\rightarrow \infty$, such that $\sigma_{ij}^2=\mathrm{var}(w_{ij})\le \theta_n$ and $\max\limits_{1\le i\le n}|h_{ii}|\lesssim\theta_n$, where $h_{ii}$'s are the diagonal entries of matrix ${\bf H}$. In addition, $\alpha_n^2=\max\limits_{i}\sum\limits_{j=1}^n\sigma_{ij}^2\rightarrow \infty$ as $n\rightarrow \infty$, $|d_K|\gtrsim \alpha_n^2$ and $\frac{|d_K|}{\alpha_n}\gtrsim n^{\epsilon}$ for some positive constant $\epsilon$. \end{cond} \begin{cond}\label{cond4} $\|{\bf V}\|_{\infty}\lesssim\frac{1}{\sqrt n}$. \end{cond} \begin{cond}\label{cond5} It holds that $\sum_{i\neq j} \sigma_{ij}^2\gg m$ and $\sum_{i\neq j}\sigma_{ij}^2\gtrsim n^{\epsilon}(\frac{n\sum_{k=1}^{K_0}(\mathbf{1}^T{\bf v}_k)^2}{m} + \alpha_n^2 + \frac{n^2\alpha_n^2}{md_K^2})$ for some positive constant $\epsilon$. \end{cond} Conditions \ref{cond1}-\ref{cond2} are also imposed in \cite{FF18}, where asymptotic expansions of spiked eigenvectors are established. The results therein serve as the theoretical foundation of RIRS. A random matrix satisfying Condition \ref{cond1} is often termed a generalized Wigner matrix in the literature. Conditions \ref{cond2} and \ref{cond3} restrict the spiked eigenvalues of the low rank mean matrix. The constraint $|d_K|\gtrsim \alpha_n^2$ in Condition \ref{cond3} is a technical condition for controlling the noise accumulation in our test caused by estimating $w_{ij}$. It is easily satisfied by many network models with low rank structure. To see this, note that if $w_{ij}$, $j\ge i\ge1$, follows a Bernoulli distribution and $\sigma_{ij}^2\sim \theta_n$, then $\alpha_n^2\sim n \theta_n$.
Since $h_{ij}$'s and $\sigma_{ij}^2$'s are the means and variances of Bernoulli random variables, respectively, we have $h_{ij}\sim \sigma_{ij}^2\sim \theta_n$ and $\|{\bf H}\|_F = \{\sum_{i,j}h_{ij}^2\}^{1/2} \sim n\theta_n$. Note also that $\|{\bf H}\|_F = \{\sum_{i=1}^Kd_i^2\}^{1/2}$ and $K$ is finite. These facts, together with $\alpha_n^2\sim n \theta_n$ derived earlier, show that $|d_K|\gtrsim \alpha_n^2$ is not hard to satisfy. In fact, if in addition $d_1\sim d_K$ and $\theta_n \lesssim 1$, then $|d_K|\gtrsim \alpha_n^2$ holds. Condition \ref{cond4} is a technical condition needed to prove the key Lemmas \ref{keylem}-\ref{keylem2}. Condition \ref{cond5} characterizes which choices of $m$ make RIRS succeed. A more detailed discussion on the choice of $m$ will be given in a later section. \begin{theorem}\label{thma} Assume Conditions \ref{cond1}-\ref{cond5}. Under the null hypothesis in \eqref{eq:hypothesis} we have \begin{equation}\label{0307.3} T_{n}\stackrel{d}{\rightarrow } N(0,1), \text{ as } n\rightarrow\infty. \end{equation} \end{theorem} \begin{theorem}\label{thmb} Assume Conditions \ref{cond1}-\ref{cond5} and the alternative hypothesis in \eqref{eq:hypothesis}. If $\sum_{i\neq j}\big(\sum_{k=K_0+1}^{K}d_k{\bf v}_k(i){\bf v}_k(j)\big)^2\ll \sum_{i\neq j}\sigma_{ij}^2$, then as $n\rightarrow \infty$, \begin{equation}\label{eq: alt-distr} \frac{\sqrt m\left(\sum_{i\neq j}\hat w_{ij}Y_{ij}-\sum_{k=K_0+1}^{K}d_k\sum_{i\neq j}{\bf v}_k(i){\bf v}_k(j)Y_{ij}\right)}{\sqrt{2\sum_{i\neq j}\hat w_{ij}^2}}\stackrel{d}{\rightarrow } N(0,1). \end{equation} If instead, \begin{equation}\label{eq: m-cond2} \left|\sum_{k=K_0+1}^{K}d_k\sum_{i\neq j}{\bf v}_k(i){\bf v}_k(j)\right|\gg \sqrt{m}\left(\sqrt{\sum_{i\neq j}\sigma_{ij}^2}+\sum_{k=K_0+1}^{K}|d_k|\right), \end{equation} we have \begin{equation}\label{0307.5} \mathbb P( |T_n| > C) \rightarrow 1, \text{ as } n\rightarrow\infty \end{equation} for an arbitrarily large positive constant $C$.
\end{theorem} By Theorems \ref{thma} and \ref{thmb}, we have the following corollary on the size and power of RIRS. \begin{corollary}\label{co1} Under the conditions of Theorem \ref{thma}, we have $$\lim_{n\rightarrow \infty}\mathbb{P}(|T_n|\geq \Phi^{-1}(1-\alpha/2)|H_0)=\alpha,$$ where $\Phi^{-1}(t)$ is the inverse of the standard normal distribution function, and $\alpha$ is the pre-specified significance level. Under the conditions of Theorem \ref{thmb} ensuring \eqref{0307.5}, we have $$\lim_{n\rightarrow \infty}\mathbb{P}(|T_n|\geq\Phi^{-1}(1-\alpha/2)|H_1)=1.$$ \end{corollary} We remark that the above theoretical results hold even under extreme degree heterogeneity in network models. In fact, in the degree corrected mixed membership (DCMM) model the mean matrix takes the form \begin{equation}\label{0105.1} {\bf H}={\boldsymbol \Theta}{\boldsymbol \Pi}{\bf B}{\boldsymbol \Pi}^T{\boldsymbol \Theta}, \end{equation} where ${\bf B}$ is a $K\times K$ nonsingular matrix with all entries taking values in $[0,1]$, ${\boldsymbol \Theta}=\mathrm{diag}(\vartheta_1,...,\vartheta_n)$ with $\vartheta_i> 0$ is the degree heterogeneity matrix, and ${\boldsymbol \Pi}=({\boldsymbol \pi}_1,...,{\boldsymbol \pi}_n)^T$ is an $n\times K$ matrix of probability mass vectors. Since we do not impose any direct constraints on the smallest variance of $w_{ij}$, all our theoretical results continue to hold even when $\max_j \vartheta_j/ \min_j \vartheta_j \rightarrow \infty$. \subsection{Choice of $m$} It is seen from the previous two theorems that the tuning parameter $m$ plays a crucial role for RIRS to achieve the desired size with high power. Condition \ref{cond5} provides general conditions on the choice of $m$ for ensuring the null and alternative distributions in \eqref{0307.3} and \eqref{eq: alt-distr}. For \eqref{0307.5} to hold, we also need the additional assumption \eqref{eq: m-cond2}.
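To make the construction concrete, the following minimal sketch (in Python with numpy; the function name and inputs are illustrative and not taken from any released implementation) computes the universal statistic $T_n$ of \eqref{eq: test-general}: it extracts the $K_0$ leading eigenpairs of ${\bf X}$ by magnitude, forms the residual matrix $\widehat{\bf W}$ as in \eqref{eq: resid-mat}, draws a symmetric Bernoulli$(1/m)$ mask $Y$, and aggregates the subsampled off-diagonal residuals.

```python
import numpy as np

def rirs_statistic(X, K0, m, rng=None):
    """Universal RIRS statistic T_n (illustrative sketch).

    X  : (n, n) symmetric observed data matrix
    K0 : hypothesized rank under H0
    m  : subsampling parameter; each pair (i, j), i < j, is kept w.p. 1/m
    """
    rng = np.random.default_rng(rng)
    n = X.shape[0]
    # Top-K0 eigenpairs of X, ordered by decreasing magnitude.
    vals, vecs = np.linalg.eigh(X)
    idx = np.argsort(np.abs(vals))[::-1][:K0]
    d_hat, V_hat = vals[idx], vecs[:, idx]
    # Residual matrix: W_hat = X - sum_k d_hat_k v_hat_k v_hat_k^T.
    W_hat = X - (V_hat * d_hat) @ V_hat.T
    # Symmetric Bernoulli(1/m) subsampling mask with Y_ji = Y_ij.
    Y = np.triu(rng.random((n, n)) < 1.0 / m, 1)
    Y = Y + Y.T
    off = ~np.eye(n, dtype=bool)
    num = np.sqrt(m) * np.sum(W_hat[off] * Y[off])
    den = np.sqrt(2.0 * np.sum(W_hat[off] ** 2))
    return num / den
```

Per Corollary \ref{co1}, one would then compare $|T_n|$ with the critical value $\Phi^{-1}(1-\alpha/2)$.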
In some special cases, these conditions boil down to simpler forms which provide more specific guidance on the choice of $m$. As an example, we consider the special case where \begin{equation}\label{eq: cond-choose-m} \min_{i\neq j}\sigma_{ij}^2\sim \max_{i\neq j}\sigma_{ij}^2,\ |d_1|\lesssim n\theta_n, \ \text{and for $K_0<K$, } \end{equation} \[ \sum_{i\neq j}\sigma_{ij}^2 \lesssim \left|\sum_{k=K_0+1}^{K}d_k\sum_{i\neq j}{\bf v}_k(i){\bf v}_k(j)\right|. \] For an SBM with $K$ communities, there are at most $K(K+1)/2$ different variances among the entries of the adjacency matrix, and hence the first condition in \eqref{eq: cond-choose-m} is not hard to satisfy. For other network models, this condition may also be satisfied under additional assumptions. Note that $\|{\bf H}\|_F = \{\sum_{i,j}h_{ij}^2\}^{1/2} = \{\sum_{i=1}^Kd_i^2\}^{1/2}$. If the entries of $\widetilde{\bf X}$ follow Bernoulli distributions then $h_{ij}\sim \sigma_{ij}^2$, and thus the second condition in \eqref{eq: cond-choose-m} is satisfied in view of Condition \ref{cond3}. To understand the intuition behind the third condition, note that under the alternative hypothesis we have the following decomposition $$ \widetilde{{\bf X}}= \sum_{k=1}^{K_0}d_k{\bf v}_k{\bf v}_k^T + \sum_{k=K_0+1}^{K}d_k{\bf v}_k{\bf v}_k^T + {\bf W}. $$ The second term on the right-hand side corresponds to the signal missed by the null hypothesis, and the third term corresponds to the noise. Thus, the third condition in \eqref{eq: cond-choose-m} intuitively says that under the alternative hypothesis, the cumulative missed signal $\sum_{k=K_0+1}^{K}d_k\sum_{i\neq j}{\bf v}_k(i){\bf v}_k(j)$ cannot be dominated by the noise accumulation. The next theorem specifies which choices of $m$ satisfy the two inequalities in Condition \ref{cond5} and \eqref{eq: m-cond2}. \begin{theorem}\label{thm: m-choice} Set $\theta_n = \max_{i\neq j}\sigma_{ij}^2$. Assume \eqref{eq: cond-choose-m}.
Then any $m$ satisfying the following condition \begin{equation}\label{eq:m-cond1} \frac{n^{\epsilon}}{\theta_n}\log n+n^{\epsilon-1}\theta_n^{-2}(\log n)^2\ll m\ll n^{2}\theta_n (\log n)^{-2} \end{equation} makes Condition \ref{cond5} and inequality \eqref{eq: m-cond2} hold. Consequently, \eqref{0307.3} and \eqref{0307.5} hold under Conditions \ref{cond1}--\ref{cond4}. Moreover, a sufficient condition for \eqref{eq:m-cond1} is $n^{1-\epsilon}\ll m \ll n^{1+2\epsilon}(\log n)^{-1}$ under Conditions \ref{cond1}--\ref{cond4}. \end{theorem} It is seen that Theorem \ref{thm: m-choice} allows for a wide range of values of $m$. In theory, any $m$ satisfying \eqref{eq:m-cond1} guarantees the asymptotic size and power of our test. In implementation, we found that smaller $m$ in this range yields better empirical size. It is also seen from \eqref{eq:m-cond1} that RIRS works with very sparse networks. In fact, the only sparsity condition imposed by \eqref{eq:m-cond1} is that $\theta_n = \max_{i\neq j}\sigma_{ij}^2 \gg n^{-1+\epsilon/2}$, where $\epsilon$ is a constant that can be arbitrarily small. In the SBM, this corresponds to the very sparse setting with edge probabilities of order $n^{-1+\epsilon/2}$, that is, average degree of order $n^{\epsilon/2}$. Our sparsity condition is significantly weaker than the ones in related work in the literature. In particular, both \cite{BickelSarkar16} and \cite{L16} considered dense SBMs with $\theta_n$ bounded below by some constant. We remark that sparser models have been considered in the network literature, though mostly for estimation rather than inference problems.
For example, \cite{wang2017} proposed a model selection criterion for estimating $K$ under the sparse setting of SBM with $n\theta_n/\log n \rightarrow \infty$. \cite{LL15} established the consistency of their method for estimating $K$ under the setting $n\theta_n = O(1)$. We need a slightly stronger assumption on the sparsity level because we consider the statistical inference problem of hypothesis testing, which involves more delicate analyses for establishing the asymptotic distributions of the test statistic. \subsection{A special case: networks with self-loops} We now formalize the heuristic arguments in Section \ref{sec: motivation} about the ratio statistic $\widetilde T_n$ in \eqref{s1} when the network admits self-loops. In this case, the general test \eqref{eq: test-general} still works; however, the simpler statistic $\widetilde T_n$ enjoys similar asymptotic properties. \begin{theorem}\label{thm2} Suppose that Conditions \ref{cond1}-\ref{cond4} hold, the network contains self-loops, and $\sqrt{\sum_{i=1}^n\sigma_{ii}^2}\gg n^{\epsilon}$ for some positive constant $\epsilon$. (i) Under the null hypothesis we have \begin{equation}\label{ti1} \widetilde T_n \stackrel{d}{\rightarrow } N(0,1), \text{ as } n\rightarrow \infty. \end{equation} (ii) Under the alternative hypothesis, if further $\sum_{i=1}^n(\sum_{k=K_0+1}^{K}d_k{\bf v}^2_k(i))^2\ll \sum_{i=1}^n\sigma_{ii}^2$, we have \begin{equation}\label{ti2} \frac{\sum_{i=1}^n\hat w_{ii}-\sum_{k=K_0+1}^{K}d_k}{\sqrt{\sum_{i=1}^n\hat w^2_{ii}}}\stackrel{d}{\rightarrow } N(0,1), \text{ as } n\rightarrow\infty. \end{equation} If instead, $|\sum_{k=K_0+1}^{K}d_k|^2\gg \sum_{i=1}^n\sigma^2_{ii}+\sum_{i=1}^n(\sum_{k=K_0+1}^{K}d_k{\bf v}^2_k(i))^2$, then \begin{equation}\label{0614.1} \mathbb P( |\widetilde{T}_n|>C) \rightarrow 1, \end{equation} for an arbitrarily large positive constant $C$.
\end{theorem} It is seen that with the same critical value $\Phi^{-1}(1-\alpha/2)$, $\widetilde T_n$ enjoys the same size and power properties as $T_n$. In addition, since the construction of $\widetilde T_n$ does not depend on any tuning parameter, its implementation is much easier. \subsection{Estimation of $K$}\label{sec: K-est} RIRS naturally suggests a simple method for estimating the rank $K$. The idea is similar to the one in \cite{L16}. That is, we sequentially test the following hypotheses $$H_0: K=K_0 \quad \text{vs.} \quad H_1:K>K_0,$$ for $K_0=1,2,...,K_{\max}$ at significance level $\alpha$ using RIRS. Here, $K_{\max}$ is some prespecified positive integer. Once RIRS fails to reject a value of $K_0$, we stop and use it as the estimate of the rank. Since we assume the true value of $K$ is finite, it is easy to see from Corollary \ref{co1} that with asymptotic probability $1-\alpha$ we identify the true value of the rank. \subsection{Networks without self-loops: why subsampling?}\label{Sec3} In this section, we formalize the heuristic arguments given in Section \ref{sec: motivation} on why subsampling is necessary. We show theoretically that $T_n$ is no longer a valid test for $H_0$ without the subsampling ingredient. We start by introducing some additional notation that will be used in this subsection. For any matrices ${\bf M}_1$ and ${\bf M}_2$ of appropriate dimensions, let $$\mathcal{R}({\bf M}_1,{\bf M}_2,t)=-\sum_{l=0,l\neq 1}^L\frac{{\bf M}_1^T\mathbb{E}{\bf W}^l{\bf M}_2}{t^{l+1}}, \ \mathcal{P}({\bf M}_1,{\bf M}_2,t)=t \mathcal{R}({\bf M}_1,{\bf M}_2,t),$$ where $L$ is a positive integer such that $$\frac{\alpha_n^{L+1}(\log n)^{\frac{L+1}{2}}}{|d_K|^{L-2}}\rightarrow 0.$$ By Lemma 1 and Theorem 1 of \cite{FF18}, there exists a unique $t_k$ such that $\frac{t_k}{d_k}\rightarrow 1$, $1\le k\le K$, and $\hat d_k-t_k={\bf v}_k^T{\bf W}{\bf v}_k+O_p(\frac{\alpha_n}{|d_k|})$.
Define \begin{align*} &{\bf b}^T_{{\bf e}_i,k,t}={\bf e}_i^T-\mathcal{R}({\bf e}_i,{\bf V}_{-k},t)\Big(({\bf D}_{-k})^{-1}+\mathcal{R}({\bf V}_{-k},{\bf V}_{-k},t)\Big)^{-1}{\bf V}_{-k}^T,\\ &{\bf s}_{k,i}={\bf b}_{{\bf e}_i,k,t_k}-{\bf e}_i^T{\bf v}_k{\bf v}_k, \quad {\bf s}_k=\sum_{i=1}^n{\bf s}_{k,i},\ {\bf s}_k(i)={\bf e}_i^T{\bf s}_k, \\ &\text{and} \quad {\bf r}_k={\bf V}_{-k}(t_k{\bf D}_{-k}^{-1}-{\bf I})^{-1}{\bf V}_{-k}^T\mathbb{E}{\bf W}^2{\bf v}_k, \end{align*} where ${\bf V}_{-k}$ is the submatrix of ${\bf V}$ obtained by removing the $k$th column, and we slightly abuse notation and use ${\bf D}_{-k}$ to denote the submatrix of ${\bf D}$ obtained by removing the $k$th diagonal entry. Further define $a_k=\sum_{i=1}^n{\bf v}_k(i)$, $k=1,\cdots, K$ and \begin{align} R(K)&=2\sum_{k=1}^{K}\frac{\mathbf{1}^T\mathbb{E}{\bf W}^2{\bf v}_ka_k}{t_k}+2\sum_{k=1}^{K}\frac{a_k^2{\bf v}_k^T\mathbb{E}{\bf W}^2{\bf v}_k}{d_k}\\ &+\sum_{k=1}^{K}{\bf v}_k^T\mathrm{diag}({\bf W}){\bf v}_ka_k^2+2\sum_{k=1}^{K}a_k{\bf s}_k^T\mathrm{diag}({\bf W}){\bf v}_k+2\sum_{k=1}^{K}a_k\frac{\mathbf{1}^T{\bf r}_k}{t_k}.\nonumber \end{align} We have the following theorems. \begin{theorem}\label{thm1} Suppose that Conditions \ref{cond1}--\ref{cond4} hold and {\small\begin{equation}\label{eq: cond-var} \sum_{i<j}\sigma^2_{ij}\left(1-\sum_{k=1}^{K_0}a_k^2{\bf v}_k(i){\bf v}_k(j)-\sum_{k=1}^{K_0}a_k\Big({\bf v}_k(j){\bf s}_k(i)+{\bf v}_k(i){\bf s}_k(j)\Big)\right)^2\ge n^{\epsilon_1}\Big(1+\frac{n^2\alpha_n^2}{d_{K_0}^2}\Big), \end{equation}} for some positive constant $\epsilon_1$.
Under the null hypothesis we have, as $n\rightarrow\infty$, {\small $$\frac{\sum\limits_{i\neq j}\hat w_{ij}+R(K_0)}{2\sqrt{\sum\limits_{i<j}\sigma^2_{ij}\left(1-\sum\limits_{k=1}^{K_0}a_k^2{\bf v}_k(i){\bf v}_k(j)-\sum\limits_{k=1}^{K_0}a_k\big({\bf v}_k(j){\bf s}_k(i)+{\bf v}_k(i){\bf s}_k(j)\big)\right)^2}}\stackrel{d}{\rightarrow } N(0,1).$$} \end{theorem} \begin{theorem}\label{thm3} Suppose that Conditions \ref{cond1}--\ref{cond4} hold. In addition, assume \eqref{eq: cond-var} holds with $K_0$ and $d_{K_0}$ replaced by $K$ and $d_K$, respectively. Under the alternative hypothesis we have, as $n\rightarrow\infty$, {\small\begin{align*} \frac{\sum\limits_{i\neq j}\hat w_{ij}+R(K)-\sum\limits_{k=K_0+1}^{K}d_ka_k^2}{2\sqrt{\sum\limits_{i<j}\sigma^2_{ij}\left(1-\sum\limits_{k=1}^{K}a_k^2{\bf v}_k(i){\bf v}_k(j)-\sum\limits_{k=1}^{K}a_k\Big({\bf v}_k(j){\bf s}_k(i)+{\bf v}_k(i){\bf s}_k(j)\Big)\right)^2}} \stackrel{d}{\rightarrow } N(0,1). \end{align*} } \end{theorem} It is seen from Theorems \ref{thm1} and \ref{thm3} that aggregating all entries of the residual matrix leads to a statistic whose bias and variance take very complicated forms under both the null and alternative hypotheses. These complicated forms limit the practical usage of the above results. In addition, and more importantly, these results may even fail to hold in some cases. To understand this, note that the variance of $\sum_{i\neq j}\hat w_{ij}+R(K_0)$ in Theorem \ref{thm1} is approximately equal to $$4\sum_{i<j}\sigma^2_{ij}\left(1-\sum_{k=1}^{K_0}a_k^2{\bf v}_k(i){\bf v}_k(j)-\sum_{k=1}^{K_0}a_k({\bf v}_k(j){\bf s}_k(i)+{\bf v}_k(i){\bf s}_k(j))\right)^2.$$ Condition \eqref{eq: cond-var} is imposed to put a lower bound on this variance. Without it, the asymptotic normality in Theorem \ref{thm1} no longer holds. However, we next give an example where inequality \eqref{eq: cond-var} fails to hold.
Consider networks whose leading eigenvector takes the form ${\bf v}_1 = \frac{1}{\sqrt n}\mathbf{1}$. Then $a_1=\sqrt n$. Since ${\bf v}_k$, $k\ge 2$, are orthogonal to ${\bf v}_1$, we have $a_k=0$, $k\ge 2$. By Condition \ref{cond4} and {Theorem \ref{0113-1}} in the Supplementary file, we have $\max_i|{\bf s}_1(i)|\lesssim \frac{\alpha_n^2}{\sqrt n d_1^2}$. Combining this with Condition \ref{cond3} and using the fact ${\bf v}_1 = \frac{1}{\sqrt n}\mathbf{1}$, we have \begin{align} &\sum_{i<j}\sigma^2_{ij}\left(1-\sum_{k=1}^{K_0}a_k^2{\bf v}_k(i){\bf v}_k(j)-\sum_{k=1}^{K_0}a_k({\bf v}_k(j){\bf s}_k(i)+{\bf v}_k(i){\bf s}_k(j))\right)^2\\ &= \sum_{i<j}\sigma^2_{ij}\left(1-n{\bf v}_1(i){\bf v}_1(j)-\sqrt{n}\big({\bf v}_1(j){\bf s}_1(i)+{\bf v}_1(i){\bf s}_1(j)\big)\right)^2\nonumber \\ &= \sum_{i<j}\sigma^2_{ij}\left({\bf s}_1(i)+{\bf s}_1(j)\right)^2\nonumber \\ &\lesssim \frac{\alpha_n^4}{nd_1^4}\sum_{i<j}\sigma^2_{ij}\le \frac{\alpha_n^6}{d_1^4}\lesssim \frac{n^2\alpha_n^2}{d_1^4}\lesssim \Big(1+\frac{n^2\alpha_n^2}{d_{K_0}^2}\Big),\nonumber \end{align} where in the last line we have used $\sum_{i<j}\sigma_{ij}^2\leq n\alpha_n^2$ and $\alpha_n^2\lesssim n$. This contradicts \eqref{eq: cond-var}. Therefore, in this case the central limit theorem fails to hold under the null hypothesis. In fact, by inspecting the proof of Theorem \ref{thm1}, we see that the intrinsic problem is that when aggregating too many terms from the residual matrix, the noise accumulation is no longer negligible; it cancels the first-order term $\sum_{i\neq j}\sigma_{ij}^2$ and consequently makes the central limit theorem fail. A similar phenomenon occurs under the alternative hypothesis as well. This justifies the necessity of the subsampling step.
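Before turning to simulations, the sequential estimation scheme of Section \ref{sec: K-est} can be summarized in a short self-contained sketch (Python with numpy; function names and inputs are illustrative, not the authors' implementation). The helper recomputes the subsampled statistic $T_n$ of \eqref{eq: test-general}, and the estimator returns the first $K_0$ that RIRS fails to reject:

```python
import numpy as np
from statistics import NormalDist

def rirs_stat(X, K0, m, rng):
    """Subsampled RIRS statistic T_n at hypothesized rank K0 (sketch)."""
    n = X.shape[0]
    vals, vecs = np.linalg.eigh(X)
    idx = np.argsort(np.abs(vals))[::-1][:K0]   # K0 leading eigenpairs by magnitude
    W_hat = X - (vecs[:, idx] * vals[idx]) @ vecs[:, idx].T
    Y = np.triu(rng.random((n, n)) < 1.0 / m, 1)
    Y = Y + Y.T                                  # symmetric Bernoulli(1/m) mask
    off = ~np.eye(n, dtype=bool)
    return np.sqrt(m) * np.sum(W_hat[off] * Y[off]) / np.sqrt(2.0 * np.sum(W_hat[off] ** 2))

def estimate_rank(X, m, alpha=0.05, K_max=10, seed=0):
    """Sequentially test H0: K = K0 for K0 = 1, 2, ...; stop at the
    first K0 not rejected at level alpha."""
    rng = np.random.default_rng(seed)
    z = NormalDist().inv_cdf(1 - alpha / 2)      # critical value Phi^{-1}(1 - alpha/2)
    for K0 in range(1, K_max + 1):
        if abs(rirs_stat(X, K0, m, rng)) < z:
            return K0
    return K_max
```

By Corollary \ref{co1}, this stopping rule recovers the true finite rank with asymptotic probability $1-\alpha$.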
\section{Simulation studies} \label{Sec4} In this section, we use simulations to evaluate the performance of RIRS in testing and estimating $K$, where Section \ref{Sec4-1} considers network models and Section \ref{Sec4-2} considers more general low rank plus noise matrices. The nominal level is fixed at $\alpha = 0.05$ in all settings. \subsection{ Network models}\label{Sec4-1} Consider the DCMM model \eqref{0105.1}. We simulate two types of nodes: pure nodes with ${\boldsymbol \pi}_i$ chosen from the set of unit vectors \begin{equation*} \text{PN}(K)=\{{\bf e}_1,\cdots,{\bf e}_K\}, \end{equation*} and mixed membership nodes with ${\boldsymbol \pi}_i$ chosen from \begin{equation*} \text{MM}(K,x)=\Big\{(x,1-x,\underset{K-2}{\underbrace{0,\cdots,0}} ),\quad (1-x,x,\underset{K-2}{\underbrace{0,\cdots,0}} ),\quad (\underset{K}{\underbrace{\frac{1}{K},\cdots,\frac{1}{K}}})\Big\} \end{equation*} where $x\in (0,1)$. Note that the DCMM model \eqref{0105.1} includes the SBM, DCSBM, and MM models as special cases. \begin{itemize} \item[\bf 1).]\emph{ \textbf{SBM}} \end{itemize} When all rows of ${\boldsymbol \Pi}$ are chosen from the pure node set $\text{PN}(K)$ and the degree heterogeneity matrix ${\boldsymbol \Theta}=\sqrt{r}\,{\bf I}_n$, the DCMM model \eqref{0105.1} reduces to the SBM with the following mean matrix structure \begin{equation}\label{0106.1} {\bf H}=r{\boldsymbol \Pi}{\bf B}{\boldsymbol \Pi}^T,\quad r\in (0,1),\quad {\boldsymbol \pi}_i\in \text{PN}(K),\ i=1,\cdots, n. \end{equation} We generate 200 independent adjacency matrices, each with $n=1000$ nodes and $K$ equal-sized communities, from the above SBM \eqref{0106.1}. We set ${\bf B}=(B_{ij})_{K\times K}$ with $B_{ij}=\rho^{|i-j|}$, $i\neq j$ and $B_{ii}=(K+1-i)/K$. We experiment with $\rho=0.1$ and 0.9. The value of $r$ ranges from 0.1 to 0.9, with smaller $r$ corresponding to a sparser network.
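For reproducibility, the SBM data-generating mechanism \eqref{0106.1} can be simulated with the following short sketch (Python with numpy; function names are illustrative). It builds ${\bf B}$ with $B_{ij}=\rho^{|i-j|}$ off the diagonal and $B_{ii}=(K+1-i)/K$, then draws a symmetric 0--1 adjacency matrix whose edge probabilities are the entries of ${\bf H}=r{\boldsymbol \Pi}{\bf B}{\boldsymbol \Pi}^T$ for equal-sized communities:

```python
import numpy as np

def sim_B(K, rho):
    """Community matrix B: B_ij = rho^{|i-j|} for i != j, B_ii = (K+1-i)/K."""
    i = np.arange(1, K + 1)
    B = rho ** np.abs(i[:, None] - i[None, :])
    np.fill_diagonal(B, (K + 1 - i) / K)
    return B

def sbm_adjacency(n, K, B, r, self_loops=False, rng=None):
    """Draw a symmetric adjacency matrix with mean H = r * Pi B Pi^T
    for K equal-sized communities; no self-loops unless requested."""
    rng = np.random.default_rng(rng)
    labels = np.repeat(np.arange(K), n // K)
    P = r * B[np.ix_(labels, labels)]            # edge probability matrix H
    U = rng.random((n, n))
    A = (np.triu(U, 1) < np.triu(P, 1)).astype(int)
    A = A + A.T                                  # symmetrize; zero diagonal
    if self_loops:
        A[np.diag_indices(n)] = (rng.random(n) < np.diag(P)).astype(int)
    return A
```

For instance, `sbm_adjacency(1000, 2, sim_B(2, 0.1), 0.5)` draws one replicate of the $K=2$, $\rho=0.1$, $r=0.5$ setting used in Tables \ref{tab:1} and \ref{tab:2}.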
For all values of $K$, we choose $m=\sqrt n$ in calculating the RIRS test statistics $T_n$ and $\widetilde T_n$ for networks without and with self-loops, respectively. The performance of RIRS is compared with the methods in \cite{L16}, where two versions of a test (one with and one without bootstrap correction) were proposed for networks without self-loops (i.e., $X_{ii}=0, i=1,...,n$). The empirical sizes and powers of both methods when $\rho=0.1$ are reported in Tables \ref{tab:1} and \ref{tab:2} for $K=2$ and $3$, respectively. The corresponding computation times are reported in Table \ref{tab:time}. We also compare the performance of $T_n$ and $\widetilde T_n$ when $\rho=0.1$ and 0.9, respectively, in Table \ref{tab:34} in the presence of self-loops. From Tables \ref{tab:1} and \ref{tab:2}, we observe that the performance of RIRS is relatively robust to the sparsity level $r$, with size close to the nominal level and power close to 1 in almost all settings. In contrast, the method in \cite{L16} without bootstrap has much worse performance when the network is sparse or when the number of communities is large. In fact, when $K=2$, the method in \cite{L16} without bootstrap correction suffers from size distortion for smaller $r$ (sparser setting). This phenomenon becomes even more severe when $K=3$, where the sizes are equal or close to one at all sparsity levels. With such distorted size, it is no longer meaningful to compare the power; therefore we omit its power in Table \ref{tab:2}. With bootstrap correction, the method in \cite{L16} performs much better and is comparable to RIRS, except for the setting of $r=0.1$ and $K=3$, where its size is severely distorted. However, from Table \ref{tab:time} we see that the computational cost of the bootstrap method in \cite{L16} is much higher than that of RIRS.
Table \ref{tab:34} suggests that when $\rho$ is large, that is, when connections between communities are denser, $\widetilde T_n$ performs better than $T_n$, and vice versa. Finally, we present in Figure \ref{fig:sbm1} the histogram plots as well as the fitted density curves of our test statistics from 1000 repetitions when $K=2$, $\rho=0.1$, and $r=0.7$ under the null hypothesis. The standard normal density curves are also plotted for reference. The figure visually confirms that the asymptotic null distribution is standard normal.
\begin{table}[htbp] \centering {\small \caption{ Empirical size and power under SBM with $K=2$ and $\rho=0.1$. }
\begin{tabular}{|c|cc|cc|cc|cc|} \hline & \multicolumn{6}{c|}{No selfloops} & \multicolumn{2}{c|}{Selfloops} \\ \hline & \multicolumn{2}{c|}{RIRS ($T_n$)} &\multicolumn{2}{c|}{Lei (no bootstrap)} &\multicolumn{2}{c|}{Lei (bootstrap)} & \multicolumn{2}{c|}{RIRS ($\widetilde{T}_n$)}\\ \hline r & size & $\underset{(K_0=1)}{\text{power}}$& size & $\underset{(K_0=1)}{\text{power}}$ & size & $\underset{(K_0=1)}{\text{power}}$& size & $\underset{(K_0=1)}{\text{power}}$\\ \hline
0.1 & 0.025 & 1 & 0.995 & 1 & 0.035 & 1 & 0.085 & 0.815 \\
0.3 & 0.025 & 1 & 0.24 & 1 & 0.02 & 1 & 0.06 & 1 \\
0.5 & 0.045 & 1 & 0.07 & 1 & 0.025 & 1 & 0.065 & 1 \\
0.7 & 0.065 & 1 & 0.1 & 1 & 0.055 & 1 & 0.05 & 1 \\
0.9 & 0.04 & 1 & 0.045 & 1 & 0.065 & 1 & 0.075 & 1 \\ \hline
\end{tabular}% \label{tab:1}% } \end{table}%
\begin{table}[htbp] \centering {\small \caption{Empirical size and power under SBM for $K=3$ and $\rho=0.1$.
} \resizebox{\textwidth}{18mm}{ \begin{tabular}{|c|ccc|c|ccc|ccc|} \hline & \multicolumn{7}{c|}{No selfloops} & \multicolumn{3}{c|}{Selfloops} \\ \hline & \multicolumn{3}{c|}{RIRS ($T_n$)} &$\underset{\text{(no bootstrap)}}{\text{Lei}}$&\multicolumn{3}{c|}{$\underset{\text{( bootstrap)}}{\text{Lei}}$} & \multicolumn{3}{c|}{RIRS($\widetilde{T}_n$)}\\ \hline $r$ &size & $\underset{(K_0=1)}{\text{power}}$ &$\underset{(K_0=2)}{\text{power}}$ &size &size & $\underset{(K_0=1)}{\text{power}}$ &$\underset{(K_0=2)}{\text{power}}$&size & $\underset{(K_0=1)}{\text{power}}$ &$\underset{(K_0=2)}{\text{power}}$ \\ \hline 0.1 & 0.065 & 1 & 0.36 & 1 & 0.895 & 1 & 1 & 0.1 & 0.98 & 0.19 \\ 0.3 & 0.075 & 1 & 0.795 & 1 & 0.06 & 1 & 1 & 0.065 & 1 & 0.625 \\ 0.5 & 0.045 & 1 & 0.98 & 0.99 & 0.02 & 1 & 1 & 0.075 & 1 & 0.94 \\ 0.7 & 0.045 & 1 & 0.985 & 0.925 & 0.04 & 1 & 1 & 0.065 & 1 & 1 \\ 0.9 & 0.05 & 1 & 1 & 0.69 & 0.015 & 1 & 1 & 0.05 & 1 & 1 \\ \hline \end{tabular}}% \label{tab:2}% } \end{table} \begin{table}[htbp] \centering {\small \caption{Size and power of $T_n$ and $\widetilde T_n$ under SBM with selfloops when $K=3$, $\rho=0.1$ or $\rho=0.9$. 
} \begin{tabular}{|c|cccc|cccc|} \hline & \multicolumn{4}{c|}{RIRS ($T_n$)} & \multicolumn{4}{c|}{RIRS($\widetilde{T}_n$)}\\ \hline \multicolumn{9}{|c|}{$\rho=0.1$}\\ \hline $r$ &Size & $\underset{(K_0=1)}{\text{Power}}$ &$\underset{(K_0=2)}{\text{Power}}$& $\widehat K$ &Size & $\underset{(K_0=1)}{\text{Power}}$ &$\underset{(K_0=2)}{\text{Power}}$& $\widehat K$ \\ \hline 0.1 & 0.045 & 1 & 0.33 & 0.285 & 0.085 & 0.99 & 0.12 & 0.155 \\ 0.3 & 0.04 & 1 & 0.755 & 0.695 & 0.085 & 1 & 0.585 & 0.595 \\ 0.5 & 0.06 & 1 & 0.96 & 0.94 & 0.04 & 1 & 0.955 & 0.9 \\ 0.7 & 0.05 & 1 & 1 & 0.945 & 0.035 & 1 & 1 & 0.91 \\ 0.9 & 0.07 & 1 & 1 & 0.965 & 0.03 & 1 & 1 & 0.945 \\ \hline \multicolumn{9}{|c|}{$\rho=0.9$}\\ \hline 0.1 & 0.025 & 0.8 & 0.15 & 0.1 & 0.075 & 1 & 0.66 & 0.72 \\ 0.3 & 0.05 & 0.995 & 0.345 & 0.28 & 0.04 & 1 & 1 & 0.955 \\ 0.5 & 0.065 & 1 & 0.645 & 0.535 & 0.05 & 1 & 1 & 0.97 \\ 0.7 & 0.045 & 1 & 0.765 & 0.77 & 0.07 & 1 & 1 & 0.935 \\ 0.9 & 0.055 & 1 & 0.915 & 0.895 & 0.02 & 1 & 1 & 0.955 \\ \hline \end{tabular} % \label{tab:34} } \end{table} \begin{figure}[!htb] \includegraphics[width=5.5in, trim=0 0.8in 0 0.3in, clip]{figs/SBM_plot.pdf} \caption{Histogram plots and the estimated densities (red curves) of RIRS test statistic when $K=2$ and $r=0.7$. Left: $T_n$ when no selfloop; Right: $\widetilde T_n$ when selfloops exist.} \label{fig:sbm1} \end{figure} \begin{table}[htb] \centering \caption{Average computation time (in seconds) for test statistics in Table \ref{tab:1} and Table \ref{tab:5} in one replication under SBM with no selfloops, $K=2$ and $r=0.5$.} \begin{tabular}{|c|cc|cc|cc|} \hline & \multicolumn{2}{c|}{RIRS ($T_n$)} &\multicolumn{2}{c|}{Lei (no bootstrap)} &\multicolumn{2}{c|}{Lei (bootstrap)}\\ \hline & Size &$\widehat K$& Size &$\widehat K$& Size & $\widehat K$\\ \hline Time& 0.504&0.906 & 0.432 & 2.88 &14.410&147.142 \\ \hline \end{tabular}% \label{tab:time}% \end{table}% \begin{itemize} \item [\bf 2).] 
\emph{ \textbf{DCMM}} \end{itemize} Next consider the general DCMM model (\ref{0105.1}). The number of repetitions is still 200. We simulate the node degree parameters $\vartheta_j$'s independently from the uniform distribution over $[0.5,1]$. The vectors ${\boldsymbol \pi}_i$ are chosen from $\text{PN}(K)\cup\text{MM}(K,0.2)$, with $n_0$ pure nodes from each community and $(n-Kn_0)/3$ nodes from each mixed membership probability mass vector in $\text{MM}(K,0.2)$. We select $n_0=0.35n$ when $K=2$ and $n_0=0.25n$ when $K=3$. The matrix ${\bf B}$ is chosen to be the same as in the SBM setting with $\rho=0.1$. The network size $n$ ranges from 800 to 2000. The empirical sizes and powers are summarized in Table \ref{tab:new1}. Since \cite{L16} only considers the SBM, the tests therein are not applicable in this setting. RIRS performs well, and similarly to the SBM setting. Figure \ref{fig:dcmm1} presents the histogram plots as well as the fitted density curves of RIRS under the null hypothesis from 1000 repetitions when $K = 3$ and $n=1500$. These results corroborate our theoretical findings.
\begin{table}[!htb] \centering {\small \caption{Empirical size and power of RIRS under DCMM model.} \resizebox{\textwidth}{18mm}{ \begin{tabular}{|c|cc|cc||ccc|ccc|} \hline & \multicolumn{4}{c||}{$K=2$} & \multicolumn{6}{c|}{$K=3$} \\ \hline & \multicolumn{2}{c|}{No Selfloop ($T_n$)} & \multicolumn{2}{c||}{Selfloop ($\widetilde T_n$)} & \multicolumn{3}{c|}{No Selfloop ($T_n$)} & \multicolumn{3}{c|}{Selfloop ($\widetilde T_n$)} \\ \hline $n$ &Size& $\underset{(K_0=1)}{\text{Power}}$ & Size &$\underset{(K_0=1)}{\text{Power}}$ &Size & $\underset{(K_0=1)}{\text{Power}}$ &$\underset{(K_0=2)}{\text{Power}}$ &Size & $\underset{(K_0=1)}{\text{Power}}$ &$\underset{(K_0=2)}{\text{Power}}$ \\ \hline 800 & 0.045 & 1 & 0.08 & 1 &0.05 & 1 & 0.58 & 0.08 & 1 & 0.845 \\ 1000 & 0.04 & 1 & 0.05 & 1 &0.025 & 1 & 0.68 & 0.06 & 1 & 0.92 \\ 1200 & 0.065 & 1 & 0.05 & 1 &0.045 & 1 & 0.77 & 0.07 & 1 & 0.92 \\ 1500 & 0.045 & 1 & 0.03 & 1 &0.075 & 1 & 0.9 & 0.055 & 1 & 0.98 \\ 1800 & 0.075 & 1 & 0.055 & 1 &0.045 & 1 & 0.98 & 0.065 & 1 & 0.995 \\ 2000 & 0.075 & 1 & 0.065 & 1 &0.05 & 1 & 0.965 & 0.045 & 1 & 1 \\ \hline \end{tabular}}% \label{tab:new1}% } \end{table}% \begin{figure}[!htp] \centering \includegraphics[width=5.2in, trim=0 0.8in 0 0.3in, clip]{figs/DCMM_plot1500.pdf} \caption{DCMM. Histogram plots and the estimated densities (red curves) of RIRS when $K=3$ and $n=1500$. Left: $T_n$ when no selfloop; Right: $\widetilde T_n$ when selfloops exist.} \label{fig:dcmm1} \end{figure} \begin{itemize} \item [\bf 3).] \emph{ \textbf{Estimating the Number of Communities}} \end{itemize} We use the method discussed in Section \ref{sec: K-est} to estimate the number of communities $K$. Since the approaches in \cite{L16} are not applicable to the DCMM model, we only compare the performance of RIRS with \cite{L16} in SBM setting in the absence of selfloops. 
The proportions of correctly estimated $K$ are calculated over 200 replications and tabulated in Table \ref{tab:5} for the SBM and in Table \ref{tab:6} for the DCMM model. Table \ref{tab:5} shows that RIRS generally has estimation accuracy comparable to Lei's method under the SBM, while for the DCMM model (Table \ref{tab:6}), RIRS also estimates the number of communities with high accuracy. In particular, the estimation accuracy approaches the expected value of 95\% as $n$ increases, which is consistent with our theory. \begin{table}[htbp] \centering {\small \caption{Proportion of correctly estimated $K$ under SBM. } \resizebox{\textwidth}{17mm}{ \begin{tabular}{|c|ccc|c||ccc|c|} \hline &\multicolumn{4}{c||}{$K=2$}&\multicolumn{4}{c|}{$K=3$}\\ \hline &\multicolumn{3}{c|}{No Selfloop}&Selfloop&\multicolumn{3}{c|}{No Selfloop}& Selfloop\\ \hline $r$&$\underset{T_n}{\text{RIRS}}$&$\underset{\text{(no bootstrap)}}{\text{Lei}}$&$\underset{\text{(bootstrap)}}{\text{Lei}}$&$\underset{\widetilde T_n}{\text{RIRS}}$& $\underset{T_n}{\text{RIRS}}$&$\underset{\text{(no bootstrap)}}{\text{Lei}}$&$\underset{\text{(bootstrap)}}{\text{Lei}}$&$\underset{\widetilde T_n}{\text{RIRS}}$\\ \hline 0.1 & 0.93 & 0 & 0.97 & 0.815 & 0.285 & 0 & 0.165 & 0.105 \\ 0.3 & 0.94 & 0.795 & 0.955 & 0.96 & 0.745 & 0 & 0.935 & 0.645 \\ 0.5 & 0.945 & 0.925 & 0.925 & 0.95 & 0.9 & 0.005 & 0.98 & 0.895 \\ 0.7 & 0.97 & 0.915 & 0.945 & 0.96 & 0.955 & 0.065 & 0.995 & 0.955 \\ 0.9 & 0.94 & 0.94 & 0.935 & 0.955 & 0.93 & 0.275 & 0.975 & 0.945 \\ \hline \end{tabular}}% \label{tab:5}% } \end{table}% \begin{table}[htbp] \centering {\small \caption{Proportion of correctly estimated $K$ under DCMM.} \resizebox{\textwidth}{12.4mm}{ \begin{tabular}{|C{0.8cm}|C{0.66cm}C{0.66cm}C{0.66cm}C{0.66cm}C{0.66cm}C{0.66cm}||C{0.66cm}C{0.66cm}C{0.66cm}C{0.66cm}C{0.66cm}C{0.66cm}|} \hline &\multicolumn{6}{c||}{$K=2$}&\multicolumn{6}{c|}{$K=3$}\\ \hline $n$ & 800 & 1000 & 1200 & 1500 & 1800 & 2000 & 800 & 1000 & 1200 & 1500
& 1800 & 2000 \\ \hline &\multicolumn{12}{c|}{No Selfloop ($T_n$)}\\ \hline RIRS & 0.935 & 0.935 & 0.93 & 0.965 & 0.935 & 0.95 & 0.505 & 0.625 & 0.805 & 0.865 & 0.915 & 0.935 \\ \hline &\multicolumn{12}{c|}{Selfloop ($\widetilde T_n$)}\\ \hline RIRS & 0.935 & 0.94 & 0.955 & 0.955 & 0.955 & 0.945 & 0.79 & 0.85 & 0.93 & 0.895 & 0.935 & 0.965 \\ \hline \end{tabular}}% \label{tab:6}% } \end{table}% \subsection{Low rank data matrix}\label{Sec4-2} RIRS can be applied to other low rank data matrices beyond the network model. In this section, we generate an $n\times n$ data matrix ${\bf X}$ from the following model $${\bf X}={\bf H}+{\bf W}= {\bf V}{\bf D}{\bf V}^T+{\bf W},$$ where the residual matrix ${\bf W}$ is symmetric with upper triangle entries (including the diagonal ones) i.i.d.\ from the uniform distribution over $(-1,1)$. Let ${\bf V}=\frac{1}{\sqrt{2}}\left( \begin{array}{ccc} {\bf V}_1\\ {\bf V}_2 \\ \end{array} \right) $, where ${\bf V}_1$ and ${\bf V}_2$ are $n_1\times K$ and $(n-n_1)\times K$ matrices respectively. We randomly generate an $n_1\times n_1$ Wigner matrix and collect its $K$ eigenvectors corresponding to the largest $K$ eigenvalues to form ${\bf V}_1$. We set ${\bf V}_2=\frac{\sqrt K}{\sqrt{ n-n_1}}{\boldsymbol \Pi}$ with ${\boldsymbol \Pi}=({\boldsymbol \pi}_1,...,{\boldsymbol \pi}_{n-n_1})^T$, where ${\boldsymbol \pi}_i\in \text{PN}(K)$ and the number of rows taking each distinct value from $\text{PN}(K)$ is the same. The diagonal matrix ${\bf D}=n\times \mathrm{diag}(K,K-1,...,1)$. Note that each column of ${\bf V}$ has unit norm, so the multiplier $n$ in the construction of ${\bf D}$ makes the nonzero eigenvalues of ${\bf H}$ of order $n$. We set $n_1=n/2$ and let $n$ range from 100 to 500. When $K=2$, the empirical sizes and powers as well as the proportions of correctly estimated $K$ over 500 repetitions are recorded in Table \ref{tab:lowrank}. It is seen that both $T_n$ and $\widetilde{T}_n$ perform well, with $\widetilde{T}_n$ having slightly higher power.
This higher power further translates into better estimation accuracy (closer to 95\%) of the estimated $K$. \begin{table}[htbp] \centering {\footnotesize\caption{Empirical size and power, and the proportion (Prop) of correctly estimated $K$ over 500 replications. } \begin{tabular}{|l|ccccc||ccccc|} \hline &\multicolumn{5}{c||}{No Selfloop ($T_n$)}&\multicolumn{5}{c|}{Selfloop ($\widetilde T_n$)}\\ \hline $n$&100&200&300&400&500&100&200&300&400&500\\ \hline Size& 0.048 & 0.042 & 0.05 & 0.054 & 0.052 & 0.05 & 0.05 & 0.032 & 0.062 & 0.052 \\ Power& 0.612 & 0.914 & 0.994 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\ Prop&0.588 & 0.856 & 0.944 & 0.95 & 0.954 & 0.95 & 0.95 & 0.968 & 0.938 & 0.948 \\ \hline \end{tabular}% \label{tab:lowrank}% } \end{table}% \section{Real data analysis} \label{Sec:real} We consider a widely studied network of political blogs assembled by \cite{Adamic05}. The nodes are blogs over the period of two months before the 2004 U.S. Presidential Election. The edges are the web links between the blogs. These blogs have known political divisions and were labeled into two communities ($K=2$) by \cite{Adamic05} -- the liberal and conservative communities. This blog data has been frequently used in the literature, see \cite{Karrer2011}, \cite{zhao2012} and \cite{L16} among others. It is widely believed to follow a degree corrected block model. For the readers' convenience, we reproduce a figure (Figure \ref{fig:blogdata}) from \cite{Karrer2011}, which models the data using the degree corrected block model. Following the literature, we ignore the directions and study only the largest connected component, which has $n=1222$ nodes. Consider the following two hypothesis tests: \begin{eqnarray*} &&{\textbf {(HT1)}:}\quad H_0: K=1\quad vs\quad H_1: K>1.\\ &&{\textbf {(HT2)}:}\quad H_0: K=2\quad vs\quad H_1: K>2.
\end{eqnarray*} \cite{L16} considered {\textbf{(HT2)}} and obtained test statistic values 1172.3 and 491.5, corresponding to the test without bootstrap and with bootstrap, respectively. Both are much larger than the critical value (about 1.454) from the Tracy-Widom distribution, and thus the null hypothesis in {\textbf{(HT2)}} was strongly rejected. This is not surprising because the testing procedure in \cite{L16} is based on the SBM. It is possible that the model is misspecified when applying the tests therein. RIRS does not depend on any specific network model structure and is expected to be more robust to model misspecification. Since most of the diagonal entries of ${\bf X}$ are zero, we use the test statistic $T_n$. Since the observed data matrix ${\bf X}$ is non-symmetric, we consider two simple transformations: \begin{equation}\label{0107.1} \text{Method 1}: \widetilde{{\bf X}}_1={\bf X}+{\bf X}^T;\qquad \text{Method 2}:\widetilde{{\bf X}}_2=\left( \begin{array}{ccc} {\bf 0} &\ \ {\bf X}\\ {\bf X}^T& \ \ {\bf 0}\\ \end{array} \right)_{2n\times 2n}. \end{equation} The transformation in Method 2 is general and can be applied even to a non-square data matrix ${\bf X}$. After the transformations, $\mathrm{rank}(\mathbb{E}(\widetilde{{\bf X}}_1))=K$ and $\mathrm{rank}(\mathbb{E}(\widetilde{{\bf X}}_2))=2K$. The results of applying $T_n$ to the two hypothesis testing problems \textbf {(HT1)} and \textbf {(HT2)}, together with the number of communities estimated by the sequential testing procedure, are reported in Table \ref{tab:realdata}. We can see that for both transformations, RIRS estimates the number of communities to be 2, in agreement with the common belief in the literature.
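The two transformations in \eqref{0107.1} are straightforward to implement. The sketch below is an illustration only (the function names `method1` and `method2` and the toy matrix are ours, not the paper's): it checks the two rank claims numerically on a toy symmetric rank-$K$ mean matrix rather than on the blog data.

```python
import numpy as np

def method1(X):
    """Method 1: X + X^T (X must be square)."""
    return X + X.T

def method2(X):
    """Method 2: the symmetric dilation [[0, X], [X^T, 0]];
    applies to any, possibly non-square, X."""
    m, n = X.shape
    return np.block([[np.zeros((m, m)), X],
                     [X.T, np.zeros((n, n))]])

# Toy check of the rank claims: if E(X) is symmetric of rank K, then
# rank of method1(E(X)) is K and rank of method2(E(X)) is 2K.
rng = np.random.default_rng(0)
K, n = 2, 40
L = rng.standard_normal((n, K))
EX = L @ np.diag([3.0, 1.0]) @ L.T        # symmetric, rank K
r1 = np.linalg.matrix_rank(method1(EX))   # expected: K
r2 = np.linalg.matrix_rank(method2(EX))   # expected: 2K
```

The dilation in `method2` has singular values of $\mathbf{X}$ appearing in $\pm$ pairs, which is why its expected rank doubles.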
\begin{table}[htbp] \centering {\footnotesize\caption{Hypothesis testing and estimation results for the political blog data.} \begin{tabular}{|c|c|c||c|c|c|} \hline &\multicolumn{2}{c||}{Method 1}&\multicolumn{2}{c|}{Method 2}&\multirow{ 2}{*}{Decision}\\ \cline{2-5} &Test Statistic&P-value&Test Statistic&P-value&\\ \hline {\textbf{(HT1)}}&3.3527&0.0008&2.7131&0.0067&Reject $H_0$ in \textbf{(HT1)}\\ \hline {\textbf{(HT2)}}& -1.2424&0.2141&-0.8936&0.3716& Accept $H_0$ in \textbf{(HT2)}\\ \hline Estimate&\multicolumn{2}{c||}{2}&\multicolumn{2}{c|}{2}& $K=2$\\ \hline \end{tabular}% \label{tab:realdata}% } \end{table}% \begin{figure}[!htp] \centering \includegraphics[width=4.3in, trim=0 1.8in 0 1.8in, clip]{figs/blogdata0.pdf} \caption{(FIG.2. in \cite{Karrer2011}). Divisions of the blog network data using the degree corrected block model. The node colors reflect community labels.} \label{fig:blogdata} \end{figure} \newpage
\section{Introduction} Let $\mathfrak{g}$ be a~f\/inite-dimensional complex simple Lie algebra with highest root $\theta$. The current algebra $\mathfrak{g}[t]$ associated to $\mathfrak{g}$ is equal to $\mathfrak{g}\otimes \mathbb{C}[t]$, where $\mathbb{C}[t]$ is the polynomial ring in one variable. The degree grading on $\mathbb{C}[t]$ gives a~natural $\mathbb{Z}_{\geq 0}$-grading on $\mathfrak{g}[t]$ and the Lie bracket is given in the obvious way such that the zeroth grade piece $\mathfrak{g}\otimes 1$ is isomorphic to $\mathfrak{g}$. Let $\widehat{\mathfrak{g}}$ be the untwisted af\/f\/ine Lie algebra corresponding to~$\mathfrak{g}$. In this paper, we shall be concerned with the~$\mathfrak{g}[t]$-stable Demazure modules of integrable highest weight representations of~$\widehat{\mathfrak{g}}$. The Demazure modules are actually modules for a~Borel subalgebra $\widehat{\mathfrak{b}}$ of $\widehat{\mathfrak{g}}$. The $\mathfrak{g}[t]$-stable Demazure modules are known to be indexed by pairs $(l,\lambda)$, where~$l$ is a~positive integer and~$\lambda$ is a~dominant integral weight of~$\mathfrak{g}$ (see~\cite{FoL,Naoi}). We denote the corresponding module by $D(l, \lambda)$ and call it the level~$l$ Demazure module with highest weight~$\lambda$; it is in fact a~f\/inite-dimensional graded $\mathfrak{g}[t]$-module. The study of the category of f\/inite-dimensional graded $\mathfrak{g}[t]$-modules has been of interest in recent years for a~variety of reasons. An important construction in this category is that of the fusion product. The fusion product of f\/inite-dimensional graded $\mathfrak{g}[t]$-modules~\cite{FL} is, by def\/inition, dependent on the given parameters. In recent years, many authors have worked to prove the independence of the parameters for the fusion product of certain~$\mathfrak{g}[t]$-modules, see for instance~\cite{CSVW,CV,FoL,Naoi,V}.
These works mostly considered the fusion product of Demazure modules of the same level and gave explicit def\/ining relations for them. We ask the most natural question: Can one give similar results for the fusion product of dif\/ferent level Demazure modules? In this paper, we answer this question for some important cases; namely we prove (Corollary~\ref{c2}) that the fusion product of~$m$ copies of the level one Demazure module $D(1, \theta)$ with~$n$ copies of the adjoint representation $\ev_0 V(\theta)$ is independent of the parameters, and we give explicit def\/ining relations. We note that $\ev_0 V(\theta)$ may be thought of as a~Demazure module $D(l,\theta)$ of level $l\geq 2$. More generally, the following is the statement of our main theorem (see Section~\ref{section3} for notation). \begin{theorem} \label{MT} Let $k\geq 1$. For $0\leq i \leq k$, we have the following: \begin{enumerate}\itemsep=0pt \item[$1)$] a~short exact sequence of $\mathfrak{g}[t]$-modules, \begin{gather*} 0\rightarrow \tau_{2k+1-i} \big(D(1, k\theta)/\big\langle \big(x^{-}_\theta \otimes t^{2k-i}\big) \overline{w}_{k\theta}\big\rangle\big) \\ \phantom{0} \xrightarrow{\phi^{-}} D\big(1,(k+1)\theta\big)/\big\langle \big(x^{-}_\theta \otimes t^{2k+2-i}\big) \overline{w}_{(k+1)\theta}\big\rangle \\ \phantom{0} \xrightarrow{\phi^+} D\big(1,(k+1)\theta\big)/\big\langle \big(x^{-}_\theta \otimes t^{2k+1-i}\big) \overline{w}_{(k+1)\theta}\big\rangle \rightarrow 0; \end{gather*} \item[$2)$] an isomorphism of $\mathfrak{g}[t]$-modules, \begin{gather*} D\big(1,(k+1)\theta\big)/\langle \big(x^{-}_\theta \otimes t^{2k+2-i}\big) \overline{w}_{(k+1)\theta}\rangle \cong D(1,\theta)^{* (k+1-i)}* \textup{ev}_0 V(\theta)^{*i}. 
\end{gather*} \end{enumerate} \end{theorem} We obtain the following two important corollaries: \begin{corollary} \label{c1} Given $k\geq 1$ and $0\leq i \leq k $, we have the following short exact sequence of $\mathfrak{g}[t]$-modules, \begin{gather*} 0 \rightarrow \tau_{2k+1-i}\big(D(1,\theta)^{*(k-i)} * \textup{ev}_0 V(\theta)^{*i}\big) \rightarrow D(1,\theta)^{*(k+1-i)}* \textup{ev}_0 V(\theta)^{*i} \\ \phantom{0} \rightarrow D(1,\theta)^{*(k-i)} * \textup{ev}_0 V(\theta)^{*(i+1)} \rightarrow 0. \end{gather*} \end{corollary} \begin{corollary} \label{c2} Given $m,n\geq 0$, we have the following isomorphism of $\mathfrak{g}[t]$-modules, \begin{gather*} D(1,\theta)^{*m} * \textup{ev}_0 V(\theta)^{*n} \cong D\big(1,(m+n)\theta\big)/\big\langle \big(x^{-}_\theta \otimes t^{2m+n}\big) \overline{w}_{(m+n)\theta}\big\rangle. \end{gather*} \end{corollary} Corollary~\ref{c2} generalizes a~result of Feigin (see~\cite[Corollary~2]{F}), where he only considers the case $m=0$. Theorem~\ref{MT}, Corollaries~\ref{c1} and~\ref{c2} are proved in Section~\ref{section4}. In~\cite{CV}, Chari and Venkatesh introduced a~large collection of indecomposable graded $\mathfrak{g}[t]$-modules (which we call Chari--Venkatesh or CV modules) such that all Demazure mo\-du\-les~$D(l, \lambda)$ belong to this collection. In the case when $\mathfrak{g}$ is simply laced, Theorem~\ref{MT} enables us to obtain (see Theorem~\ref{T2}) interesting exact sequences between CV modules and to show that the fusion product of a~special family of CV modules is again a~CV module. Theorem~\ref{T2} generalizes results of Chari and Venkatesh (see~\cite[\S~6]{CV}), where they only consider the case $\mathfrak{g}=\mathfrak{sl}_2$. For $n\geq 1$, let $\mathcal{A}_n=\mathbb{C}[t]/(t^n)$ be the truncated algebra. We consider for $k\geq1$ the local Weyl modules $W_{\mathcal{A}_n}(k\theta)$ for the truncated current algebra $\mathfrak{g}\otimes\mathcal{A}_n$.
These modules are known to be f\/inite-dimensional, but they are still far from being well understood; even their dimensions are not known. As a~consequence of Theorem~\ref{MT}, we are able to obtain the following description of truncated Weyl modules in terms of local Weyl modules $W(k\theta)$, $k\geq 1$, for the current algebra~$\mathfrak{g}[t]$. The latter modules $W(k\theta)$ are very well understood. \begin{corollary} \label{truncated} Assume that $\mathfrak{g}$ is simply laced. Given $k,n\geq 1$, we have the following isomorphism of $\mathfrak{g}[t]$-modules, \begin{gather*} W_{\mathcal{A}_n}(k\theta) \cong \begin{cases} W(\theta)^{*(n-k)} * \textup{ev}_0 V(\theta)^{*(2k-n)}, & k\leq n < 2k, \\ W(k\theta), & n\geq 2k. \end{cases} \end{gather*} \end{corollary} Corollary~\ref{truncated} is proved in Section~\ref{section5}. \section{Preliminaries}\label{section2} Throughout the paper, $\mathbb{C}$ denotes the f\/ield of complex numbers, $\mathbb{Z}$ the set of integers, $\mathbb{Z}_{\geq 0}$ the set of non-negative integers, $\mathbb{N}$ the set of positive integers and $\mathbb{C}[t]$ the polynomial ring in an indeterminate~$t$. {\bf 2.1.}~Let $\mathfrak{a}$ be a~complex Lie algebra, $\mathbf{U}(\mathfrak{a})$ the corresponding universal enveloping algebra. The current algebra associated to $\mathfrak{a}$ is denoted by $\mathfrak{a}[t]$ and def\/ined as $\mathfrak{a} \otimes \mathbb{C}[t]$, with the Lie bracket \begin{gather*} [a \otimes t^r, b \otimes t^s]=[a, b]\otimes t^{r+s}, \qquad \text{for all} \quad a, b \in \mathfrak{a} \quad \text{and} \quad r, s \in \mathbb{Z}_{\geq 0}. \end{gather*} We let $\mathfrak{a}[t]_{+}$ be the ideal $\mathfrak{a}\otimes t\mathbb{C}[t]$.
The degree grading on $\mathbb{C}[t]$ gives a~natural $\mathbb{Z}_{\geq 0}$-grading on~$\mathbf{U}(\mathfrak{a}[t])$ and the subspace of grade~$s$ is given~by \begin{gather*} \mathbf{U}(\mathfrak{a}[t])[s]= \spn \Big\{(a_1\otimes t^{r_1})\cdots (a_k\otimes t^{r_k}):k\geq 1,\, a_i\in\mathfrak{a},\, r_i\in \mathbb{Z}_{\geq 0}, \sum r_i=s\Big\}, \quad \forall\, s\in \mathbb{N}, \end{gather*} and the subspace of grade zero $\mathbf{U}(\mathfrak{a}[t])[0]=\mathbf{U}(\mathfrak{a})$. {\bf 2.2.}~Let $\mathfrak{g}$ be a~f\/inite-dimensional complex simple Lie algebra, with Cartan subalgebra~$\mathfrak{h}$. Let~$R$ (resp.~$R^+$) be the set of roots (resp.\ positive roots) of $\mathfrak{g}$ with respect to $\mathfrak{h}$ and $\theta \in R^+$ be the highest root in~$R$. There is a~non-degenerate, symmetric, Weyl group invariant bilinear form $(\cdot|\cdot)$ on $\mathfrak{h}^*$, which we assume to be normalized so that the square length of a~long root is two. For $\alpha\in R$, $\alpha^{\vee}\in\mathfrak{h}$ denotes the corresponding co-root and we set $d_{\alpha}=2/(\alpha|\alpha)$. For $\alpha\in R$, let $\mathfrak{g}_{\alpha}$ be the corresponding root space of $\mathfrak{g}$ and f\/ix non-zero elements $x^{\pm}_{\alpha}\in \mathfrak{g}_{\pm\alpha}$ such that $[x^{+}_{\alpha},x^{-}_{\alpha}]=\alpha^{\vee}$. We set $\mathfrak{n}^{\pm}=\oplus_{\alpha\in R^{+}} \mathfrak{g}_{\pm\alpha}$. Let $P^{+}$ be the set of dominant integral weights of $\mathfrak{g}$. For $\lambda\in P^+$, let $V(\lambda)$ be the corresponding f\/inite-dimensional irreducible $\mathfrak{g}$-module generated~by an element $v_{\lambda}$ with the following def\/ining relations: \begin{gather*} x^{+}_{\alpha}v_\lambda=0, \qquad h v_\lambda = \langle \lambda, h \rangle v_\lambda, \qquad (x^{-}_{\alpha})^{\langle \lambda, \alpha^{\vee} \rangle +1} v_\lambda=0, \qquad \text{for all} \quad \alpha\in R^+, \quad h\in\mathfrak{h}.
\end{gather*} {\bf 2.3.}~A graded $\mathfrak{g}[t]$-module is a~$\mathbb{Z}$-graded vector space \begin{gather*} V=\bigoplus_{r\in\mathbb{Z}} V[r] \qquad \text{such that} \quad (x\otimes t^s) V[r]\subset V[r+s], \quad x\in\mathfrak{g}, \quad r\in\mathbb{Z}, \quad s\in\mathbb{Z}_{\geq0}. \end{gather*} For $\mu\in\mathfrak{h}^*$, an element~$v$ of a~graded $\mathfrak{g}[t]$-module~$V$ is said to be of weight~$\mu$, if $(h\otimes 1)v=\langle \mu, h\rangle v$ for all $h\in \mathfrak{h}$. We def\/ine a~morphism between two graded $\mathfrak{g}[t]$-modules as a~degree zero morphism of $\mathfrak{g}[t]$-modules. For $r\in\mathbb{Z}$, let $\tau_r$ be the grade shift operator: if~$V$ is a~graded $\mathfrak{g}[t]$-module then~$\tau_r V$ is the graded $\mathfrak{g}[t]$-module with the graded pieces shifted uniformly by~$r$ and the action of~$\mathfrak{g}[t]$ remains unchanged. For any graded $\mathfrak{g}[t]$-module~$V$ and a~subset~$S$ of~$V$, $\langle S\rangle$ denotes the submodule of~$V$ generated by~$S$. For $\lambda\in P^+$, let $\ev_0 V(\lambda)$ be the irreducible graded $\mathfrak{g}[t]$-module such that $\ev_0 V(\lambda)[0]\cong_{\mathfrak{g}} V(\lambda)$ and $\ev_0 V(\lambda)[r]=0$ $\forall\, r\in \mathbb{N}$. In particular, $\mathfrak{g}[t]_{+}(\ev_0 V(\lambda))=0$. {\bf 2.4.}~For $r,s\in\mathbb{Z}_{\geq0}$, we denote \begin{gather*} \mathbf{S}(r,s)=\bigg\{(b_p)_{p\geq0}: b_p\in\mathbb{Z}_{\geq0}, \; \sum\limits_{p\geq 0}b_p =r, \; \sum\limits_{p\geq0}pb_p=s\bigg\}. \end{gather*} For $\alpha\in R^{+}$ and $r,s\in\mathbb{Z}_{\geq0}$, we def\/ine an element $\mathbf{x}^{-}_{\alpha}(r,s)\in\mathbf{U}(\mathfrak{g}[t])[s]$~by \begin{gather*} \mathbf{x}^{-}_{\alpha}(r,s)=\sum\limits_{(b_p)\in\mathbf{S}(r,s)} (x^{-}_{\alpha} \otimes 1)^{(b_0)}(x^{-}_{\alpha} \otimes t)^{(b_1)}\cdots(x^{-}_{\alpha} \otimes t^s)^{(b_s)}, \end{gather*} where for any non-negative integer~$b$ and any $x\in\mathfrak{g}[t]$, we understand $x^{(b)}=x^b/b!$.
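To make the combinatorics of $\mathbf{S}(r,s)$ concrete, the set is small and easy to enumerate. The following Python snippet (an illustration only, not part of the paper) lists all admissible tuples $(b_0,\dots,b_s)$; note that $b_p=0$ is forced for $p>s$, so a tuple of length $s+1$ suffices, and $|\mathbf{S}(r,s)|$ equals the number of partitions of $s$ into at most $r$ parts.

```python
def S(r, s):
    """Enumerate all tuples (b_0, ..., b_s) of non-negative integers
    with sum_p b_p = r and sum_p p*b_p = s."""
    sols = []

    def rec(p, acc, r_left, s_left):
        if p == s:
            # the last entry b_s is forced to equal r_left by the sum
            # constraint; the weight constraint requires s*r_left == s_left
            if s * r_left == s_left:
                sols.append(tuple(acc + [r_left]))
            return
        max_b = r_left if p == 0 else min(r_left, s_left // p)
        for b in range(max_b + 1):
            rec(p + 1, acc + [b], r_left - b, s_left - p * b)

    rec(0, [], r, s)
    return sols

# S(2, 3) has exactly two elements: b_1 = b_2 = 1 and b_0 = b_3 = 1,
# matching the two partitions of 3 into at most 2 parts.
sols = S(2, 3)
```

Each tuple indexes one monomial in the sum defining $\mathbf{x}^{-}_{\alpha}(r,s)$, with $b_p$ the divided power of $x^{-}_{\alpha}\otimes t^p$.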
The following was proved in~\cite{G} (see also~\cite[Lemma 2.3]{CV}). \begin{lemma}\label{garland} Given $s\in\mathbb{N}$, $r\in\mathbb{Z}_{\geq0}$ and $\alpha\in R^{+}$, we have \begin{gather*} (x^{+}_{\alpha} \otimes t)^{(s)}(x^{-}_{\alpha} \otimes 1)^{(s+r)}-(-1)^s\mathbf{x}^{-}_{\alpha}(r,s)\in\mathbf{U}(\mathfrak{g}[t])\mathfrak{n}^{+}[t]\bigoplus \mathbf{U}(\mathfrak{n}^{-}[t])\mathfrak{h}[t]_{+}. \end{gather*} \end{lemma} \section{Weyl, Demazure modules and fusion product}\label{section3} In this section, we recall the def\/initions of local Weyl modules, level one Demazure modules and fusion products. \subsection{Weyl module}\label{section3.1} The def\/inition of the local Weyl module was given originally in~\cite{CP} and later in~\cite{CFK} and~\cite{FL}. \begin{definition} Given $\lambda \in P^+$, the local Weyl module $W(\lambda)$ is the cyclic $\mathfrak{g}[t]$-module generated by an element $w_\lambda$, with the following def\/ining relations: \begin{gather} \mathfrak{n}^{+}[t] w_\lambda=0, \qquad (h \otimes t^s) w_\lambda = \langle \lambda, h \rangle \delta_{s,0} w_\lambda, \qquad s\geq0, \qquad h\in\mathfrak{h}, \nonumber \\ (x^{-}_\alpha \otimes 1)^{\langle \lambda, \alpha^{\vee} \rangle + 1}w_\lambda=0, \qquad \alpha \in R^{+}. \label{w2} \end{gather} \end{definition} \noindent We note that the relation~\eqref{w2} implies \begin{gather} \label{w2'} \big(x^{-}_\alpha \otimes t^{\langle \lambda, \alpha^{\vee} \rangle}\big) w_\lambda=0, \qquad \alpha \in R^{+}, \end{gather} which is easy to see from Lemma~\ref{garland}. We set the grade of $w_\lambda$ to be zero; then $W(\lambda)$ becomes a~$\mathbb{Z}_{\geq 0}$-graded module with \begin{gather*} W(\lambda)[0] \cong_{\mathfrak{g}} V(\lambda). \end{gather*} Moreover, $\textup{ev}_0 V(\lambda)$ is the unique graded irreducible quotient of $W(\lambda)$. We now specialize to the case $\lambda\in \mathbb{N}\theta$, and obtain some further useful relations that hold in~$W(\lambda)$.
\begin{lemma} \label{g} Let $k\in\mathbb{N}$. The following relations hold in the local Weyl module $W((k+1)\theta)$: \begin{enumerate}\itemsep=0pt \item[$1)$] $(x^{-}_{\theta}\otimes 1)^{2k+1}\big(x^{-}_{\theta}\otimes t^{2k+1-i}\big)w_{(k+1)\theta}=0, \qquad \forall\, 0\leq i \leq k$; \item[$2)$] $(x^{-}_{\theta}\otimes t^m)(x^{-}_{\theta} \otimes t^{m+1}) w_{(k+1)\theta} \in \big\langle \big(x^{-}_{\theta} \otimes t^{m+2}\big) w_{(k+1)\theta} \big\rangle, \qquad \forall\, m\geq k$. \end{enumerate} \end{lemma} \begin{proof} To prove part (1), consider $(x^{+}_{\theta}\!\otimes t^{2k{+}1{-}i}) (x^{-}_{\theta}\otimes 1)^{2k{+}3}w_{(k{+}1)\theta}$. Since $(x^{+}_{\theta}\!\otimes t^{2k{+}1{-}i})w_{(k{+}1)\theta}$ $=0$, we get \begin{gather*} \big(x^{+}_{\theta}\otimes t^{2k+1-i}\big) (x^{-}_{\theta}\otimes 1)^{2k+3}w_{(k+1)\theta} =\big[x^{+}_{\theta}\otimes t^{2k+1-i}, (x^{-}_{\theta}\otimes 1)^{2k+3}\big] w_{(k+1)\theta} \\ \qquad{} =\sum\limits_{j=1}^{2k+3} (x^{-}_{\theta}\otimes 1)^{j-1}\big(\theta^{\vee}\otimes t^{2k+1-i}\big)(x^{-}_{\theta}\otimes 1)^{2k+3-j}w_{(k+1)\theta}. \end{gather*} Since $(\theta^{\vee}\otimes t^{2k+1-i})w_{(k+1)\theta}=0$, we may replace $(\theta^{\vee}\otimes t^{2k+1-i})(x^{-}_{\theta}\otimes 1)^{2k+3-j}$~by \begin{gather*} \big[\theta^{\vee}\otimes t^{2k+1-i}, (x^{-}_{\theta}\otimes 1)^{2k+3-j}\big] =(-2)(2k+3-j)(x^{-}_{\theta}\otimes 1)^{2k+2-j}\big(x^{-}_{\theta}\otimes t^{2k+1-i}\big). \end{gather*} After simplifying, we get \begin{gather*} \big(x^{+}_{\theta}\otimes t^{2k+1-i}\big) (x^{-}_{\theta}\otimes 1)^{2k+3}w_{(k+1)\theta} \\ \qquad{} =(-1)(2k+2)(2k+3)(x^{-}_{\theta}\otimes 1)^{2k+1}\big(x^{-}_{\theta}\otimes t^{2k+1-i}\big)w_{(k+1)\theta}. \end{gather*} Now, using $(x^{-}_{\theta}\otimes 1)^{2k+3}w_{(k+1)\theta}=0$ in $W((k+1)\theta)$ completes the proof of part~(1).
Part (2) follows easily by putting $r=2$, $s=2m+1$ and $\alpha=\theta$ in Lemma~\ref{garland}, and using the fact that $(x^{-}_{\theta}\otimes 1)^{2m+3}w_{(k+1)\theta}=0$, $\forall\, m\geq k$ by~\eqref{w2}. \end{proof} \subsection{Level one Demazure module}\label{section3.2} Let $\lambda\in P^{+}$ and $\alpha\in R^{+}$ with $\langle \lambda, \alpha^{\vee}\rangle > 0$. Let $s_\alpha, m_\alpha \in \mathbb{N}$ be the unique positive integers such that \begin{gather*} \langle \lambda, \alpha^{\vee}\rangle = (s_\alpha-1)d_\alpha + m_\alpha, \qquad 0<m_\alpha\leq d_\alpha. \end{gather*} If $\langle \lambda, \alpha^{\vee}\rangle = 0$, set $s_\alpha=0=m_\alpha$. We take the following as a~def\/inition of the level one Demazure module. \begin{definition} \textup{(see~\cite[Corollary 3.5]{CV})} The level one Demazure module $D(1,\lambda)$ is the graded quotient of $W(\lambda)$ by the submodule generated by the union of the following two sets: \begin{gather} \big\{(x^{-}_{\alpha} \otimes t^{s_{\alpha}}) w_{\lambda}: \alpha\in R^{+}~\textup{such that}~d_{\alpha} > 1\big\}, \label{dm1} \\ \big\{\big(x^{-}_{\alpha} \otimes t^{s_{\alpha}-1}\big)^2 w_{\lambda}: \alpha\in R^{+}~\textup{such that}~d_\alpha =3~\textup{and}~m_\alpha=1\big\}. \label{dm2} \end{gather} In particular, for $\mathfrak{g}$ simply laced, $D(1,\lambda)\cong_{\mathfrak{g}[t]} W(\lambda)$. We denote~by $\overline{w}_\lambda$, the image of $w_\lambda$ in~$D(1,\lambda)$. \end{definition} The following proposition gives explicit def\/ining relations for $D(1,k\theta)$. 
\begin{proposition} \label{WvsD} Given $k\geq 1$, the level~$1$ Demazure module $D(1,k\theta)$ is the graded $\mathfrak{g}[t]$-module generated by an element $\overline{w}_{k\theta}$, with the following defining relations: \begin{gather*} \mathfrak{n}^{+}[t]\, \overline{w}_{k\theta}=0, \qquad (h \otimes t^s) \overline{w}_{k\theta} = \langle k\theta, h\rangle \delta_{s,0} \overline{w}_{k\theta}, \qquad s\geq0, \quad h\in\mathfrak{h}, \\ (x^{-}_{\alpha}\otimes 1) \overline{w}_{k\theta}=0, \qquad \alpha\in R^+, \quad (\theta|\alpha)=0, \\ (x^{-}_{\alpha} \otimes 1)^{kd_{\alpha}+1} \overline{w}_{k\theta}=0, \qquad \big(x^{-}_{\alpha} \otimes t^k\big) \overline{w}_{k\theta}=0, \qquad \alpha\in R^+, \quad (\theta|\alpha)=1, \\ (x^{-}_{\theta} \otimes 1)^{2k+1} \overline{w}_{k\theta}=0. \end{gather*} \end{proposition} \begin{proof} Observe that, from the abstract theory of root systems $(\theta|\alpha)= 0$ or~1, $\forall\, \alpha \in R^{+}\setminus \{\theta\}$. This implies that $\langle k\theta, \alpha^{\vee}\rangle = 0$ or $kd_{\alpha}$, $\forall\, \alpha \in R^{+}\setminus \{\theta\}$. Hence the relations~\eqref{dm2} do not occur in $D(1,k\theta)$ and the relations~\eqref{dm1} are \begin{gather*} \big(x^{-}_{\alpha} \otimes t^k\big) \overline{w}_{k\theta}=0, \qquad \alpha\in R^+, \quad \alpha~~\text{short}, \quad (\theta|\alpha)=1. \end{gather*} For a~long root $\alpha\in R^+$ with $(\theta|\alpha)=1$, by~\eqref{w2'} it follows that $(x^{-}_{\alpha} \otimes t^k) \overline{w}_{k\theta}=0$. Now the other relations are precisely the def\/ining relations of $W(k\theta)$. This proves Proposition~\ref{WvsD}. \end{proof} We record below a~well-known fact, for later use: \begin{gather*} D(1, \theta)\cong_{\mathfrak{g}} V(\theta) \oplus \mathbb{C}. \end{gather*} In particular, \begin{gather} \label{dimd} \dim D(1, \theta)= \dim V(\theta)+1. \end{gather} The following is a~crucial lemma, which we use in proving Theorem~\ref{MT}. 
\begin{lemma} \label{crucial} Let $k\geq 1$ and $0\leq i \leq k$. The following relations hold in the module $D(1,(k+1)\theta)$: \begin{enumerate}\itemsep=0pt \item[$1)$] $(x^{-}_{\alpha} \otimes 1)^{kd_{\alpha}+1}\big(x^{-}_{\theta} \otimes t^{2k+1-i}\big) \overline{w}_{(k+1)\theta}=0$, $\forall\, \alpha \in R^{+}$, $(\theta|\alpha)=1$; \item[$2)$] $\big(x^{-}_{\alpha} \otimes t^{k}\big) (x^{-}_{\theta} \otimes t^{2k+1-i}) \overline{w}_{(k+1)\theta} \in \big\langle \big(x^{-}_{\theta} \otimes t^{2k+2-i}\big) \overline{w}_{(k+1)\theta} \big\rangle$, $\forall\, \alpha \in R^{+}$, $(\theta|\alpha)=1$; \item[$3)$] $\big(x^{-}_{\theta} \otimes t^{2k-i}\big) \big(x^{-}_{\theta} \otimes t^{2k+1-i}\big) \overline{w}_{(k+1)\theta} \in \big\langle \big(x^{-}_{\theta} \otimes t^{2k+2-i}\big) \overline{w}_{(k+1)\theta} \big\rangle$. \end{enumerate} \end{lemma} \begin{proof} Let $\alpha \in R^{+}$ with $(\theta|\alpha)=1$. This implies that $\theta-\alpha$ is also a~root of $\mathfrak{g}$ and $(\theta|\theta-\alpha)=1$. We now prove part (1). Observe that $(x^{-}_{\theta} \otimes t^{2k+1-i}) \overline{w}_{(k+1)\theta}$ is an element of weight $k\theta$. Further $(x^{+}_{\alpha} \otimes 1) (x^{-}_{\theta} \otimes t^{2k+1-i}) \overline{w}_{(k+1)\theta}=0$, since $(x^{+}_{\alpha} \otimes 1)\overline{w}_{(k+1)\theta}=0$ and $(x^{-}_{\theta-\alpha} \otimes t^{2k+1-i}) \overline{w}_{(k+1)\theta}=0$, for all $0\leq i \leq k$. Considering the copy of $\mathfrak{sl}_2$ spanned by $x^{+}_{\alpha} \otimes 1$, $x^{-}_{\alpha} \otimes 1$, $\alpha^{\vee}\otimes 1$, we obtain part (1) by standard $\mathfrak{sl}_2$ arguments. We now prove part (2).
Putting $r=2$, $s=(3k+1-i)$ and $\alpha=\theta$ in Lemma~\ref{garland}, we get \begin{gather} \big(x^{-}_{\theta} \otimes t^{k}\big) \big(x^{-}_{\theta} \otimes t^{2k+1-i}\big) \overline{w}_{(k+1)\theta} + \sum\limits_{\substack{k+1\leq p\leq q \leq 2k-i\\p+q= 3k+1-i}} \frac{1}{(2\delta_{p, q})!} \big(x^{-}_{\theta} \otimes t^{p}\big) \big(x^{-}_{\theta} \otimes t^{q}\big) \overline{w}_{(k+1)\theta} \nonumber\\ \qquad{} \in \big\langle\big(x^{-}_{\theta}\otimes t^{2k+2-i}\big) \overline{w}_{(k+1)\theta} \big\rangle, \label{e} \end{gather} since $(x^{-}_{\theta}\otimes 1)^{3k+3-i}\overline{w}_{(k+1)\theta}=0$, $\forall\, 0\leq i \leq k$. Now we act on both sides of~\eqref{e} by $x^{+}_{\theta-\alpha}$ and use the relation $(x^{-}_{\alpha} \otimes t^r) \overline{w}_{(k+1)\theta}=0$, for all $r\geq(k+1)$, which gives part~(2). Part~(3) is immediate from the part~(2) of Lemma~\ref{g}. \end{proof} \subsection{Fusion product}\label{section3.3} In this subsection, we recall the def\/inition of the fusion product of f\/inite-dimensional graded cyclic $\mathfrak{g}[t]$-modules given in~\cite{FL} and give some elementary properties. For a~cyclic $\mathfrak{g}[t]$-module~$V$ generated by~$v$, we def\/ine a~f\/iltration $F^{r}V$, $r\in\mathbb{Z}_{\geq 0}$~by \begin{gather*} F^{r}V=\sum\limits_{0\leq s \leq r} \mathbf{U}(\mathfrak{g}[t])[s] v. \end{gather*} We say $F^{-1}V$ is the zero space. The associated graded space $\gr V=\bigoplus_{r\geq 0} F^{r}V/ F^{r-1}V $ naturally becomes a~cyclic $\mathfrak{g}[t]$-module generated by $v+F^{-1}V$, with action given by \begin{gather*} (x\otimes t^s)\big(w+F^{r-1}V\big):= (x\otimes t^s)w+F^{r+s-1}V, \qquad \forall\, x\in \mathfrak{g}, \quad w\in F^{r}V, \quad r, s\in \mathbb{Z}_{\geq0}. \end{gather*} Observe that, $\gr V \cong V$ as $\mathfrak{g}$-modules. The following lemma will be useful. \begin{lemma} \label{f1} Let~$V$ be a~cyclic $\mathfrak{g}[t]$-module. 
For $r,s\in \mathbb{Z}_{\geq0}$, the following equality holds in the quotient space $F^{r+s}V/ F^{r+s-1}V$: \begin{gather*} (x\otimes t^s)\big(w+F^{r-1}V\big)=\big((x\otimes (t-a_1)\cdots (t-a_s))w\big) + F^{r+s-1}V, \end{gather*} for all $a_1,\dots,a_s\in\mathbb{C}$, $x\in \mathfrak{g}$, $w\in F^{r}V$. \end{lemma} Given a~$\mathfrak{g}[t]$-module~$V$ and $z\in\mathbb{C}$, we def\/ine another $\mathfrak{g}[t]$-module action on~$V$ as follows: \begin{gather*} (x\otimes t^s)v=\big(x\otimes (t+z)^s\big)v, \qquad x\in\mathfrak{g}, \qquad v\in V, \qquad s\in \mathbb{Z}_{\geq0}. \end{gather*} We denote this new module by $V^z$. Let $V_i$ be a~f\/inite-dimensional cyclic graded $\mathfrak{g}[t]$-module generated by $v_i$, for $1\leq i\leq m$, and let $z_1,\dots,z_m$ be distinct complex numbers. We denote~by \begin{gather*} \mathbf{V}={V_1}^{z_1}\otimes\dots\otimes {V_m}^{z_m}, \end{gather*} the corresponding tensor product of $\mathfrak{g}[t]$-modules. It is easily checked (see~\cite[Proposition 1.4]{FL}) that $\mathbf{V}$ is a~cyclic $\mathfrak{g}[t]$-module generated by $v_1\otimes\dots\otimes v_m$. The associated graded space $\gr \mathbf{V}$ is called the fusion product of $V_1,\dots,V_m$ w.r.t.\ the parameters $z_1,\dots,z_m$, and is denoted by ${V_1}^{z_1}*\cdots*{V_m}^{z_m}$. We denote $v_1*\cdots*v_m=(v_1\otimes\cdots\otimes v_m)+F^{-1}\mathbf{V}$, a~generator of $\gr \mathbf{V}$. For ease of notation, we mostly just write $V_1*\cdots*V_m$ for ${V_1}^{z_1}*\cdots*{V_m}^{z_m}$. Unless explicitly stated otherwise, it is assumed that the fusion product does depend on these parameters. The following lemma will be needed later. \begin{lemma} \label{f2} Given $1\leq i\leq m$, let $V_i$ be a~finite-dimensional cyclic graded $\mathfrak{g}[t]$-module generated by $v_i$, and $s_i\in\mathbb{Z}_{\geq0}$. Let $x\in\mathfrak{g}$. If $(x\otimes t^{s_i})v_i=0$, $\forall\, 1\leq i\leq m$ then $(x\otimes t^{s_1+\dots+s_m}) v_1*\dots*v_m=0$.
\end{lemma} \begin{proof} Let $z_1,\dots,z_m$ be distinct complex numbers and let $\mathbf{V}$ be as above. By using Lemma~\ref{f1}, we get the following equality in $\gr \mathbf{V}$, \begin{gather*} \big(x\otimes t^{s_1+\cdots+s_m}\big) \big((v_1\otimes\cdots\otimes v_m)+F^{-1}\mathbf{V}\big) \\ \qquad{} =\big(\big(x\otimes (t-z_1)^{s_1}\cdots(t-z_m)^{s_m}\big)v_1\otimes\dots\otimes v_m\big) + F^{s_1+\cdots+s_m-1}\mathbf{V}. \end{gather*} Now the proof follows by the def\/inition of the fusion product. \end{proof} \section{Proof of the main theorem}\label{section4} In this section, we prove the existence of maps $\phi^{+}$ and $\phi^{-}$ from Theorem~\ref{MT} and then prove our main theorem (Theorem~\ref{MT}). {\bf 4.1.}~Given $k\geq 1$ and $0\leq i \leq k $, we denote~by \begin{gather*} \mathbf{V}_{i,k} = D(1, k\theta)/\big\langle \big(x^{-}_\theta \otimes t^{2k-i}\big) \overline{w}_{k\theta} \big\rangle, \end{gather*} and let $\overline{v}_{i,k}$ be the image of $\overline{w}_{k\theta}$ in $\mathbf{V}_{i,k}$. Using Proposition~\ref{WvsD}, $\mathbf{V}_{i,k}$ is the cyclic graded $\mathfrak{g}[t]$-module generated by the element $\overline{v}_{i,k}$, with the following def\/ining relations: \begin{gather} (x^{+}_{\alpha}\otimes t^s)\overline{v}_{i,k}=0, \qquad s\geq 0,\quad \alpha\in R^+, \label{r1} \\ (h \otimes t^s) \overline{v}_{i,k} = \langle k\theta, h\rangle \delta_{s,0} \overline{v}_{i,k}, \qquad s\geq0, \quad h\in\mathfrak{h}, \label{r2} \\ (x^{-}_{\alpha}\otimes 1) \overline{v}_{i,k}=0, \qquad \alpha\in R^+, \quad (\theta|\alpha)=0, \label{r3} \\ (x^{-}_{\alpha}\otimes 1)^{kd_{\alpha}+1} \overline{v}_{i,k}=0, \qquad \big(x^{-}_{\alpha} \otimes t^k\big) \overline{v}_{i,k}=0, \qquad \alpha\in R^+, \quad (\theta|\alpha)=1, \label{r4} \\ (x^{-}_{\theta}\otimes 1)^{2k+1} \overline{v}_{i,k}=0, \qquad \big(x^{-}_{\theta} \otimes t^{2k-i}\big) \overline{v}_{i,k}=0. \label{r5} \end{gather} The existence of $\phi^{+}$ is immediate; we record it below.
\begin{proposition} The map $\phi^{+} : \mathbf{V}_{i,k+1} \rightarrow \mathbf{V}_{i+1,k+1}$ which takes $\overline{v}_{i,k+1} \rightarrow \overline{v}_{i+1,k+1}$ is a~surjective morphism of $\mathfrak{g}[t]$-modules with $\ker \phi^{+} = \big\langle \big({x^{-}_\theta} \otimes t^{2k+1-i}\big) \overline{v}_{i,k+1}\big\rangle$. \end{proposition} Now we prove the existence of $\phi^{-}$ in the following proposition. \begin{proposition} There exists a~surjective morphism of $\mathfrak{g}[t]$-modules $\phi^{-} : \tau_{2k+1-i} \mathbf{V}_{i,k} \rightarrow \ker \phi^{+}$, such that $\phi^{-}(\overline{v}_{i,k}) = \big(x^{-}_\theta \otimes t^{2k+1-i}\big) \overline{v}_{i,k+1}$. \end{proposition} \begin{proof} We only need to show that $\phi^{-}(\overline{v}_{i,k})$ satisf\/ies the def\/ining relations of $\mathbf{V}_{i,k}$. We start with the relation~\eqref{r1}. First, for $\alpha=\theta$ it is clear. Let $\alpha \in R^{+}\setminus\{\theta\}$; if $(\theta|\alpha)=0$ then it is also clear. If $(\theta|\alpha)=1$ then $(\theta-\alpha)\in R^{+}\setminus\{\theta\}$ and $(\theta|\theta-\alpha)=1$; the claim now follows from the relations $(x^{-}_{\theta-\alpha}\otimes t^r)\overline{v}_{i,k+1}=0$ for all $r\geq (k+1)$ in $\mathbf{V}_{i,k+1}$. The relations~\eqref{r2},~\eqref{r3} are trivially satisf\/ied by $\phi^{-}(\overline{v}_{i,k})$. Finally, the last two relations~\eqref{r4},~\eqref{r5} are also satisf\/ied by $\phi^{-}(\overline{v}_{i,k})$; in fact these are exactly the statements of Lemmas~\ref{g} and~\ref{crucial}. \end{proof} {\bf 4.2.}~The existence of the surjective maps $\phi^{+}$ and $\phi^{-}$ gives the following: \begin{gather} \label{diml} \dim \mathbf{V}_{i,k+1} \leq \dim \mathbf{V}_{i,k} + \dim \mathbf{V}_{i+1,k+1}. \end{gather} The following proposition helps in proving the reverse inequality.
\begin{proposition} The map $\psi :\mathbf{V}_{i,k+1}\rightarrow D(1,\theta)^{* (k+1-i)} * \textup{ev}_0 V(\theta)^{*i}$ such that $\psi(\overline{v}_{i,k+1}) = \overline{w}_{\theta}^{* (k+1-i)} * v_{\theta}^{*i}$ is a~well-defined and surjective morphism of $\mathfrak{g}[t]$-modules. In particular, \begin{gather} \label{dimg} \dim \mathbf{V}_{i,k+1}\geq (\dim D(1, \theta))^{k+1-i} (\dim V(\theta))^{i}. \end{gather} \end{proposition} \begin{proof} We only need to show that $\psi(\overline{v}_{i,k+1})$ satisf\/ies the def\/ining relations of $\mathbf{V}_{i,k+1}$. But they follow easily from the following relations: \begin{gather*} ((h \otimes 1)-\langle (k+1)\theta, h\rangle)\big(\overline{w}_{\theta}^{\otimes (k+1-i)} \otimes v_{\theta}^{\otimes i}\big)=0, \qquad \forall\, h\in\mathfrak{h}, \\ (x^{-}_{\alpha}\otimes 1)^{\langle (k+1)\theta, \alpha^{\vee} \rangle + 1}\big(\overline{w}_{\theta}^{\otimes (k+1-i)}\otimes v_{\theta}^{\otimes i}\big)=0, \qquad \forall\, \alpha\in R^{+} \end{gather*} (which hold in $D(1,\theta)^{\otimes (k+1-i)} \otimes \textup{ev}_0 V(\theta)^{\otimes i}$) and $(h \otimes t^s)\psi(\overline{v}_{i,k+1})=0$, $\forall\, s\geq 1$, $h\in\mathfrak{h}$ (which holds in $D(1,\theta)^{* (k+1-i)}*\textup{ev}_0 V(\theta)^{*i}$). The remaining relations follow from Lemma~\ref{f2}, by using the relations \begin{gather*} (x^{+}_{\alpha}\otimes t^s)\overline{w}_{\theta}=0=(x^{+}_{\alpha}\otimes t^s)v_{\theta}, \qquad \forall\, s\geq 0, \quad \alpha\in R^{+}, \\ (x^{-}_{\alpha}\otimes t)\overline{w}_{\theta}=\big(x^{-}_{\theta}\otimes t^{2}\big)\overline{w}_{\theta}=0=(x^{-}_{\theta}\otimes t)v_{\theta}=(x^{-}_{\alpha}\otimes t)v_{\theta}, \qquad \forall\, \alpha \in R^{+}\setminus\{\theta\}, \end{gather*} which hold in $D(1,\theta)$ and $\textup{ev}_0 V(\theta)$ respectively. \end{proof} We record below a~result from~\cite{F} and use this in proving our main theorem.
\begin{proposition}\textup{\cite[Corollary 2]{F}} \label{F} Given $k\geq 1$, the following is an isomorphism of $\mathfrak{g}[t]$-modules, \begin{gather*} \textup{ev}_0 V(\theta)^{* k} \cong D(1, k\theta)/\big\langle \big({x^{-}_\theta} \otimes t^{k}\big) \overline{w}_{k\theta}\big\rangle. \end{gather*} \end{proposition} {\bf 4.3.}~We now prove Theorem~\ref{MT}, proceeding by induction on~$k$. First, for $k=1$, we prove Theorem~\ref{MT} for $0\leq i \leq 1$. Let $i=1$, observe that $\mathbf{V}_{1,1} \cong_{\mathfrak{g}[t]} \ev_{0} V(\theta)$. Using Proposition~\ref{F},~\eqref{dimd},~\eqref{diml} and~\eqref{dimg} this case follows. Let $i=0$, now observe that $\mathbf{V}_{0,1} \cong_{\mathfrak{g}[t]} D(1,\theta)$. Using part (2) of Theorem~\ref{MT} for $i=1$ and $k=1$,~\eqref{dimd},~\eqref{diml} and~\eqref{dimg} this case also follows. Now let $k\geq2$, and assume Theorem~\ref{MT} holds for $(k-1)$. We prove the assertion for~$k$, proceeding by induction on~$i$. For $i=k$, it follows from Proposition~\ref{F},~\eqref{dimd},~\eqref{diml} and~\eqref{dimg}. Now let $i\leq (k-1)$, and assume Theorem~\ref{MT} holds for $(i+1)$. We now prove for~$i$. Using part (2) of Theorem~\ref{MT}, for $(i+1)$ and~$k$, also for~$i$ and $(k-1)$, and~\eqref{diml}, we get \begin{gather*} \dim \mathbf{V}_{i,k+1} \leq (\dim D(1, \theta))^{k-i} (\dim V(\theta))^{i+1} + (\dim D(1, \theta))^{k-i} (\dim V(\theta))^{i}. \end{gather*} Together with~\eqref{dimd}, we see \begin{gather*} \dim \mathbf{V}_{i,k+1} \leq (\dim D(1,\theta))^{k+1-i} (\dim V(\theta))^{i}. \end{gather*} Now the proof of Theorem~\ref{MT} in this case follows by~\eqref{dimg}. This completes the proof of Theorem~\ref{MT}. Combining parts (1) and (2) of Theorem~\ref{MT}, we get Corollary~\ref{c1}. Using part (2) of Theorem~\ref{MT} and Proposition~\ref{F}, we obtain Corollary~\ref{c2}. 
\section{CV modules and truncated Weyl modules}\label{section5} We start this section by recalling the def\/inition of CV modules given in~\cite{CV}. For $\mathfrak{g}$ simply laced, we shall restate Theorem~\ref{MT} in terms of these modules. At the end, we also discuss truncated Weyl modules. {\bf 5.1.}~Given $\lambda\in P^{+}$, we say that $\pmb{\xi}=(\xi(\alpha))_{\alpha\in R^+}$ is a~$\lambda$-compatible $|R^{+}|$-tuple of partitions, if \begin{gather*} \xi(\alpha)=\big(\xi(\alpha)_1\geq\dots \geq\xi(\alpha)_j\geq\dots\geq 0\big), \qquad |\xi(\alpha)|= \sum\limits_{j\geq 1}\xi(\alpha)_j = \langle \lambda, \alpha^{\vee} \rangle, \qquad \forall\, \alpha \in R^+. \end{gather*} \begin{definition}[see~\protect{\cite[\S~2]{CV}}] Let $\lambda\in P^{+}$ and $\pmb{\xi}$ be a~$\lambda$-compatible $|R^{+}|$-tuple of partitions. The Chari--Venkatesh module or CV module $V(\pmb\xi)$ is the graded quotient of $W(\lambda)$ by the submodule generated by the following set \begin{gather*} \bigg\{\mathbf{x}^{-}_{\alpha}(r,s)w_{\lambda}: \alpha\in R^+, s, r \in \mathbb{N}~\textup{such that}~s+r\geq 1+rk+\sum\limits_{j\geq k+1} \xi(\alpha)_j~\textup{for some}~k\in \mathbb{N}\bigg\}. \end{gather*} \end{definition} The following lemma (implicit in the proof of Theorem~1 of~\cite{CV}) is useful in understanding CV modules. \begin{lemma} \label{c-v} Let $\lambda\in P^{+}$, $r\in\mathbb{N}$ and $\pmb\xi=(\xi(\alpha))_{\alpha\in R^+}$ a~$\lambda$-compatible $|R^{+}|$-tuple of partitions. If $r\geq \xi(\alpha)_1$ then $\mathbf{x}^{-}_{\alpha}(r,s)w_{\lambda}=0$ in $W(\lambda)$, for all $\alpha\in R^+$, $s, k \in \mathbb{N}$, $s+r\geq 1+rk+\sum\limits_{j\geq k+1} \xi(\alpha)_j$. \end{lemma} \begin{proof} Let $\alpha\in R^+$ and $s, k \in \mathbb{N}$ such that $s+r\geq 1+rk+\sum\limits_{j\geq k+1} \xi(\alpha)_j$. Given $r\geq \xi(\alpha)_1$, it follows that $s+r \geq 1 + \sum\limits_{j\geq 1} \xi(\alpha)_j=1+\langle \lambda, \alpha^{\vee} \rangle$. 
Now the proof follows by using Lemma~\ref{garland} and~\eqref{w2}. \end{proof} For $\lambda\in P^{+}$, we associate two~$\lambda$-compatible $|R^{+}|$-tuples of partitions as follows: \begin{gather*} \{\lambda\}:=\big(\big(\langle \lambda, \alpha^{\vee} \rangle\big)\big)_{\alpha\in R^{+}}, \qquad \pmb\xi(\lambda):=\big(\big(1^{\langle \lambda, \alpha^{\vee} \rangle}\big)\big)_{\alpha\in R^+}. \end{gather*} Each partition of $\{\lambda\}$ has at most one part, and each part of each partition of $\pmb\xi(\lambda)$ is 1. The CV modules corresponding to these two have nice descriptions, which we record below for later use: \begin{gather} \label{e1} V(\{\lambda\})\cong_{\mathfrak{g}[t]} \ev_0 V(\lambda), \qquad V(\pmb\xi(\lambda)) \cong_{\mathfrak{g}[t]} W(\lambda). \end{gather} The f\/irst isomorphism follows by taking $s=r=k=1$ in the def\/inition of the CV mo\-du\-le~$V(\{\lambda\})$ and the second isomorphism follows from Lemma~\ref{c-v}. {\bf 5.2.}~Given $k\geq 1$ and $0\leq i \leq k $, we def\/ine the following $|R^{+}|$-tuples of partitions: \begin{alignat*}{3} &\pmb\xi_{i}^{-}:=(\xi^{-}_{i}(\alpha))_{\alpha\in R^{+}}, \qquad&& \text{where} \quad \xi^{-}_{i}(\alpha)= \begin{cases} \big(1^{\langle k\theta, \alpha^{\vee} \rangle}\big), & \alpha\neq \theta, \\ \big(2^{i}, 1^{2(k-i)}\big), & \alpha=\theta, \end{cases}& \\ & \pmb\xi_{i}:=\big(\xi_{i}(\alpha)\big)_{\alpha\in R^{+}}, \qquad && \text{where} \quad \xi_{i}(\alpha)= \begin{cases} \big(1^{\langle (k+1)\theta, \alpha^{\vee} \rangle}\big), & \alpha\neq \theta, \\ \big(2^{i}, 1^{2(k+1-i)}\big), & \alpha=\theta, \end{cases}& \\ & \pmb\xi_{i}^{+}:=\big(\xi^{+}_{i}(\alpha)\big)_{\alpha\in R^{+}}, \qquad && \text{where} \quad \xi^{+}_{i}(\alpha)= \begin{cases} \big(1^{\langle (k+1)\theta, \alpha^{\vee} \rangle}\big), & \alpha\neq \theta, \\ \big(2^{i+1}, 1^{2(k-i)}\big), & \alpha=\theta.
\end{cases}& \end{alignat*} For $\mathfrak{g}$ simply laced, we can restate Theorem~\ref{MT} in terms of CV modules as follows: \begin{theorem} \label{T2} Assume that $\mathfrak{g}$ is simply laced. Given $k\geq 1$ and $0\leq i \leq k $, we have the following: \begin{enumerate}\itemsep=0pt \item[$1)$] a~short exact sequence of $\mathfrak{g}[t]$-modules, \begin{gather*} 0 \rightarrow \tau_{2k+1-i}V(\pmb\xi^{-}_{i}) \rightarrow V(\pmb\xi_{i}) \rightarrow V(\pmb\xi^{+}_{i}) \rightarrow 0; \end{gather*} \item[$2)$] an isomorphism of $\mathfrak{g}[t]$-modules, \begin{gather*} V(\pmb\xi_{i}) \simeq V(\pmb\xi(\theta))^{* (k+1-i)} * V(\{\theta\})^{*i}. \end{gather*} \end{enumerate} \end{theorem} \begin{proof} This follows from Theorem~\ref{MT}, by using Lemma~\ref{c-v} and~\eqref{e1}. \end{proof} {\bf 5.3.}~For $n\geq 1$, we def\/ine $\mathcal{A}_n= \mathbb{C}[t]/(t^{n})$. The {\em truncated current algebra} $\mathfrak{g}\otimes \mathcal{A}_n$, can be thought of as the graded quotient of the current algebra $\mathfrak{g}[t]$: \begin{gather*} \mathfrak{g}\otimes \mathcal{A}_n\cong\mathfrak{g}[t]/\big(\mathfrak{g}\otimes t^n \mathbb{C}[t]\big). \end{gather*} Let $k\geq1$. The local Weyl module $W_{\mathcal{A}_n}(k\theta)$ for the truncated current algebra $\mathfrak{g}\otimes \mathcal{A}_n$ is def\/ined in~\cite{CFK}, and we call it the {\em truncated Weyl module}. It is easy to see that $W_{\mathcal{A}_n}(k\theta)$ naturally becomes a~$\mathfrak{g}[t]$-module and the following is an isomorphism of $\mathfrak{g}[t]$-modules, \begin{gather} \label{trul} W_{\mathcal{A}_n}(k\theta) \cong W(k\theta)/\langle (x^{-}_{\theta}\otimes t^{n}) w_{k\theta}\rangle. \end{gather} Now Corollary~\ref{truncated} is immediate from Corollary~\ref{c2}, by using~\eqref{w2'} and~\eqref{trul}. \subsection*{Acknowledgements} The author thanks Vyjayanthi Chari, K.N.~Raghavan and S.~Viswanath for many helpful discussions and encouragement. 
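To illustrate Theorem~\ref{T2} in the smallest case, take $\mathfrak{g}=\mathfrak{sl}_2$, so that $R^{+}=\{\theta\}$ and each tuple reduces to a single partition, and let $k=1$, $i=0$. Then $\pmb\xi^{-}_{0}=\big(\big(1^{2}\big)\big)=\pmb\xi(\theta)$ and $\pmb\xi_{0}=\big(\big(1^{4}\big)\big)=\pmb\xi(2\theta)$, so by~\eqref{e1} the theorem specializes to the short exact sequence and fusion product decomposition \begin{gather*} 0 \rightarrow \tau_{3}W(\theta) \rightarrow W(2\theta) \rightarrow V(\pmb\xi^{+}_{0}) \rightarrow 0, \qquad W(2\theta)\simeq W(\theta)* W(\theta), \end{gather*} with $\pmb\xi^{+}_{0}=\big(\big(2,1^{2}\big)\big)$.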
Part of this work was done when the author was visiting the Centre de Recherche Mathematique (CRM), Montreal, Canada, during the thematic semester on New Directions in Lie Theory. The author acknowledges the hospitality and f\/inancial support extended to him by CRM. The author also thanks the anonymous referees for their valuable comments, due to which the paper is much improved. The author acknowledges support from CSIR under the SPM Fellowship scheme.
\section{Introduction} In recent years dynamical systems with delays have evolved as a major topic in nonlinear sciences \cite{bookThomas,E}. Time delays arise naturally, and might play a role in many areas of physics, biology and technology, such as nonlinear optics \cite{RMPIngo,Nix12}, gene regulatory circuits \cite{Chen2002}, population dynamics \cite{NEL02,Mie12}, traffic flows \cite{Orosz04,Saf02}, neuroscience \cite{C}, and social or communication networks \cite{WAN06,Lu11}. A well established effect of a delay in the dynamics is the possibility to induce multistability \cite{Mas02.1,Kim97}. In oscillatory systems a delay gives rise to coexistent periodic orbits with different frequencies \cite{sch89.1,Yanchuk09,Sie13} and possibly different oscillation patterns \cite{ikke,choe10,Per10,Wil13,Vuellings14}. Such coexistent patterns could be related to memory storage and temporal pattern recognition, especially in neural networks \cite{Fos96,Fos97,RMPRab}. However, noise, which is unavoidable in real networks, can place important limitations to the capacity of a memory element, as it can induce mode hoppings between coexistent attractors. We study the statistical properties of such mode hoppings in small networks of oscillators. We consider a single oscillator with delayed feedback, two delay-coupled oscillators and a unidirectional ring, and we briefly discuss globally coupled elements. The number of possible frequencies scales with the roundtrip delay time, but the noisy system visits only a fraction of these frequencies, which scales with the square root of the delay time. While without noise the range of frequencies also scales with the coupling strength, we find that in the stochastic system it does not depend on the coupling strength. In contrast, the robustness of the orbits to noise, measured by the average residence time, is mainly determined by the coupling strength, while the delay has a minor effect. 
Complementary to local stability analysis, the study of coupled systems subject to noise also provides information about the robustness of certain oscillation patterns. We find that depending on network topology, an oscillation pattern might dominate: in unidirectional rings the oscillators spend equal time in all the possible phase configurations, whereas a globally coupled network shows a clear preference for in-phase synchrony. This paper is organized as follows. In section II we discuss stochastic switching for a single phase oscillator. We compare our numerical results to those obtained by a potential model \cite{Mork90b}, and discuss the model in the limit of strong coupling and large delay. We discuss stochastic switching of two coupled phase oscillators in section III, and extend the potential model. In section IV we extend our results to a unidirectional ring of delay-coupled oscillators. Finally, we demonstrate the generality of our results with delay-coupled FitzHugh-Nagumo oscillators in section V. We discuss our results in section VI. \section{Stochastic switching in a single phase oscillator with feedback} The most basic delay network is a single oscillator with delayed feedback. We consider a Kuramoto oscillator, which describes the oscillating dynamics by a single phase variable. It is a universal model, as many oscillators can be reduced to phase oscillators in the weak coupling regime \cite{kur97.1,dai97.1,AcebronKuramoto}. Thanks to its simplicity, the Kuramoto model allows for analytical insights while still capturing many essential features of synchronization. A Kuramoto oscillator with delayed feedback and noise is modelled by \begin{equation} \dot{\phi}(t)=\omega_0+\kappa\sin(\phi(t-\tau)-\phi(t)+\theta)+\xi(t)\label{eq:model}\,. \end{equation} The oscillator has a natural frequency $\omega_0$; the other parameters are the coupling delay $\tau$, the coupling strength $\kappa>0$ and the coupling phase $\theta$.
The system is subject to additive Gaussian white noise $\xi(t)$, with $\langle\xi(t)\xi(t_0)\rangle=2D\delta(t-t_0)$. As the dynamics is invariant under a transformation $\phi(t)\rightarrow \phi(t)+\tilde{\omega}t,\omega_0\rightarrow \omega_0+\tilde{\omega},\theta\rightarrow\theta-\tilde{\omega}\tau$, we can omit the coupling phase $\theta$ without loss of generality. We first briefly discuss the deterministic dynamics of this system \cite{ear03.1,AmpPhase}. Without noise, the oscillator resides in one of the frequencies $\dot{\phi}=\omega_k$ given by \begin{equation} \omega_k=\omega_0-\kappa\sin(\omega_k\tau)\label{eq:omega}\,. \end{equation} A graphical determination of the frequencies $\omega_k$ is shown in Fig. \ref{fig:kura1}. The orbits for which $\kappa\tau\cos(\omega_k\tau)>-1$ holds are stable. For large coupling or long feedback delay $\kappa\tau\gg1$, the stable frequencies close to $\omega_0$ are approximated as $\omega_k\tau\approx 2k\pi$, whereas the spacing is given by $\omega_{k+1}-\omega_k\approx2\pi/\tau$. As all solutions of Eq.~\eqref{eq:omega} are limited by $\omega_0-\kappa\le\omega_k\le\omega_0+\kappa$, the number of coexistent stable orbits is estimated as $\kappa\tau/\pi$. \begin{figure}[!ht] \includegraphics[width=0.5\columnwidth]{kura1.eps} \caption{Graphical determination of the different coexisting frequencies of a single oscillator with delayed feedback (Eq. \eqref{eq:omega}). The intersections with the thick decreasing slopes of the sine function correspond to stable orbits, and are marked with a circle. The coloring of the circles relates to the probability distribution $p(\omega(t))$ of the corresponding stochastic oscillator (shown in Fig.
\ref{fig:var}): the probability that the oscillator has a frequency $\omega(t)\approx \omega_k$ is large for the most central frequencies $\omega_k\approx\omega_0$, marked with a black circle, while the probability to find the system's frequency $\omega(t)$ close to the outer frequencies $\omega_k\approx\omega_0\pm\kappa$, marked with an empty circle, is negligible. Parameters are $\omega_0=6$, $\kappa=2$, $\tau=10$ and $D=0.5$} \label{fig:kura1} \end{figure} If we add noise to the system, the oscillator switches between these coexistent orbits. We simulated a Kuramoto oscillator with delayed feedback, using a Heun algorithm adapted to delayed interactions, with a timestep of $h=0.01$. For our choice of parameters ($\kappa=2$, $\omega_0=6$, $\tau=10$), without noise, the oscillator has six stable periodic orbits, with respective frequencies $\omega_1\approx 4.48$, $\omega_2\approx 5.07$, $\omega_3\approx 5.67$, $\omega_4\approx 6.27$, $\omega_5\approx 6.87$ and $\omega_6\approx 7.46$, shown in Fig. \ref{fig:kura1}. A typical timetrace of the phase evolution, with multiple mode hoppings between $\omega_3$ and $\omega_4$, is shown in Fig. \ref{fig:kura1tt}(a). As an indicator for mode hoppings we use the frequency measure $\omega(t)=(\phi(t)-\phi(t-\tau))/\tau$, which is the driving term of the dynamics, and corresponds to the average of the instantaneous frequency $\dot{\phi}(t)$ over the past delay interval. Moreover, this definition of $\omega(t)$ respects the origin of the frequency locking, which lies in the auto-phase locking of the instantaneous phase $\phi(t)$ onto the delayed phase $\phi(t-\tau)$. The time evolution of $\omega(t)$ is shown in Fig.~\ref{fig:kura1tt}(b), exhibiting clear jumps between the deterministic frequencies $\omega_k$. \begin{figure}[!ht] \includegraphics[width=\columnwidth]{tt1.eps} \caption{(a) Phase evolution $\phi(t)-\omega_0 t$ of a Kuramoto oscillator with delayed feedback and noise. 
We subtracted the natural frequency $\omega_0 t$ for better visibility of the mode hoppings. (b) The frequency measure $\omega(t)=(\phi(t)-\phi(t-\tau))/\tau$ is a good indicator for the mode hoppings. Parameters are $\omega_0=6$, $\kappa=2$, $\tau=10$ and $D=0.5$.} \label{fig:kura1tt} \end{figure} The distribution of frequencies $p(\omega(t))$, with $\omega(t)$ defined as above, is shown in Fig. \ref{fig:var}(a). One can clearly distinguish multiple maxima, corresponding to the deterministic frequencies $\omega_2$, $\omega_3$, $\omega_4$ and $\omega_5$. The frequencies closest to the eigenfrequency $\omega_0$ of the oscillator, i.e. $\omega_3$ and $\omega_4$, are most often visited, while the oscillator spends a negligible amount of time in the orbits with frequencies $\omega_{1,6}\approx \omega_0\mp\kappa$. \begin{figure}[!hb] \includegraphics[width=\columnwidth]{freqdistr.eps} \caption{(Color online) The frequency distributions (grey) for (a) an oscillator with feedback, (b) two coupled identical oscillators and (c) two detuned oscillators. The analytical approximations (Eq. \eqref{eq:frequencydistribution} and Eq. \eqref{eq:freqdistr2}) are plotted in black, the (blue) dashdotted lines show the respective Gaussian envelopes. The parameters are $\kappa=2$, $\tau=10$, $\omega_0=6$, $D=0.5$, and (c) $\Delta=0.8$.} \label{fig:var} \end{figure} To calculate the residence times of the orbits, we apply the following procedure: at the starting point $t_0$ the oscillator is considered to reside in the orbit with a frequency $\omega_k$ for which the distance $|\omega(t_0)-\omega_k|$ is minimal, and it stays there as long as $|\omega(t)-\omega_k|<\epsilon$. After a transition, we determine the new locking frequency again as the frequency at minimal distance.
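This bookkeeping can be sketched in a few lines (an illustrative helper, not the authors' code; the list of deterministic frequencies $\omega_k$, the threshold $\epsilon$ and the sampling step are inputs):

```python
import numpy as np

def residence_times(omega_t, omegas, eps, dt):
    """Assign each sample of the frequency measure omega(t) to the nearest
    deterministic frequency omega_k and record how long the oscillator stays
    locked; a new orbit is entered once |omega(t) - omega_k| > eps."""
    omegas = np.asarray(omegas)
    k = int(np.argmin(np.abs(omegas - omega_t[0])))   # initial orbit
    times, t_enter = [], 0.0
    for n, w in enumerate(omega_t):
        if abs(w - omegas[k]) > eps:                  # transition detected
            times.append((k, n * dt - t_enter))
            k = int(np.argmin(np.abs(omegas - w)))    # re-lock to nearest orbit
            t_enter = n * dt
    times.append((k, len(omega_t) * dt - t_enter))
    return times
```

Averaging the recorded durations per orbit then gives the mean residence times discussed below.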
We chose $\epsilon=2/3(\omega_k-\omega_{k-1})$; for weak noise $\omega(t)$ does not show large fluctuations around the locking frequency $\omega_k$ and the choice of $\epsilon$ does not largely affect the residence times of the orbits. In our simulations we obtained around $10^6$ transitions. The residence time distributions of two of the orbits ($\omega_2$ and $\omega_3$) are shown in Fig. \ref{fig:rtd}(a). The distribution is exponential. Superimposed on the exponential decay are signatures of the delay time; these are shown in the inset. The peaks can be understood as delay echoes which result from a known stochastic resonance effect in delay systems~\cite{Ohi99,Mas02.1,Mas02.2}: A mode hopping causes a perturbation, which increases the probability for a mode hopping at multiples of the feedback delay. Moreover, the average residence times, shown in Fig. \ref{fig:rtd}(b), are largest for orbits $\omega_3$ and $\omega_4$ with a frequency close to the natural frequency $\omega_0$. \begin{figure}[!th] \includegraphics[width=\columnwidth]{rtdinset.eps} \caption{(Color online)(a) Logarithm of the residence time distribution $\ln(p(T))$, for a Kuramoto oscillator with delayed feedback, for the orbits with frequencies $\omega_2$ and $\omega_3$. (b) Mean residence time of the orbits $\omega_{2,3,4,5}$ versus their frequency for a single oscillator (upper black dots) together with the theoretical approximation (Eq. \eqref{eq:rtd1}) (upper dashed pink curve). The lower pink dots and the lower blue dashed curve represent the mean residence times of the orbits and their theoretical approximation (Eq. \eqref{eq:rtd2}) respectively for two identical coupled systems. The parameters are $\kappa=3$, $\tau=10$, $\omega_0=6$ and $D=0.5$.} \label{fig:rtd} \end{figure} In order to interpret the mode hopping dynamics, we approximate the delay system by an undelayed system.
It is then possible to define a Langevin equation and to compute the frequency distributions and average residence times of the different periodic orbits. Such an approach is possible thanks to the simplicity of the Kuramoto oscillator, as the dynamics of the oscillator is only characterized by a frequency. A similar method has been suggested in the context of mode hopping between external cavity modes in a single laser with delayed feedback \cite{Mork90b,Lenstra91}. Using this approximation, we show analytically how the frequency distribution and average residence times scale with the feedback strength, delay, and frequency of the orbit. Thereby we focus on the regime $\kappa\tau\gg 1$, in which a multitude of orbits coexists. In order to simplify the system, we first rewrite the dynamics in terms of the delay phase difference $x(t)=\phi(t)-\phi(t-\tau)$: \begin{equation} \dot{x}(t)=\omega_0-\kappa\sin x(t)-\dot{\phi}(t-\tau)+\xi(t)\,. \end{equation} The main step is the following: We approximate the instantaneous frequency $\dot{\phi}(t-\tau)$ by the frequency averaged over the future delay interval plus its noise source \begin{equation} \dot{\phi}(t-\tau)\approx\frac{1}{\tau}\int_{t-\tau}^{t}\dot{\phi}(t')dt'+\xi(t-\tau)=\frac{x(t)}{\tau} + \xi(t-\tau)\,. \end{equation} Such an assumption is justified for weak noise, when the oscillator resides in one of the periodic orbits during a delay interval. But also in the case of a random walk ($\kappa=0$) it leads to the correct stationary distribution. In this way we obtain a closed equation without delay for the phase difference $x(t)$, that can be written in terms of a potential \cite{Mork90b}, \begin{eqnarray} \dot{x}(t) & = & -\frac{dV(x)}{dx} + \tilde{\xi}(t)\mbox{ with }\nonumber\\ V(x) & = & \frac{1}{2\tau}(x-x_0)^2 - \kappa\cos x\label{eq:potential}\,, \end{eqnarray} \noindent with $x_0=\omega_0\tau$ and $\tilde{\xi}(t)=\xi(t)-\xi(t-\tau)$.
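The local minima of this potential reproduce the deterministic frequencies of Eq.~\eqref{eq:omega}; for the parameters of Fig.~\ref{fig:kura1} they can be located numerically with a plain bisection (a sketch, not the authors' code; parameter values taken from the text):

```python
import numpy as np

omega0, kappa, tau = 6.0, 2.0, 10.0   # parameters of Fig. 1
x0 = omega0 * tau

def dV(x):   # V'(x) = (x - x0)/tau + kappa*sin(x)
    return (x - x0) / tau + kappa * np.sin(x)

def d2V(x):  # V''(x) = 1/tau + kappa*cos(x)
    return 1.0 / tau + kappa * np.cos(x)

# scan for sign changes of V' over the range of possible frequencies
xs = np.linspace(x0 - kappa * tau, x0 + kappa * tau, 40001)
stable_freqs = []
for a, b in zip(xs[:-1], xs[1:]):
    if dV(a) * dV(b) < 0:
        lo, hi = a, b
        for _ in range(60):               # bisection refinement
            mid = 0.5 * (lo + hi)
            lo, hi = (lo, mid) if dV(lo) * dV(mid) <= 0 else (mid, hi)
        x_star = 0.5 * (lo + hi)
        if d2V(x_star) > 0:               # local minimum <-> stable orbit
            stable_freqs.append(x_star / tau)

print(np.round(stable_freqs, 2))          # six stable frequencies omega_k
```

The six minima recover the values $\omega_1,\dots,\omega_6$ listed above; the local maxima (with $V''<0$) correspond to the unstable solutions.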
As the noise sources $\xi(t)$ and $\xi(t-\tau)$ are uncorrelated, the simplified oscillator is effectively subject to a magnified noise strength of $\langle\tilde{\xi}^2(t)\rangle=4D$. The approximation by white noise in Eq. \eqref{eq:potential} does not preserve correlations around multiples of $\tau$, like those shown in Fig. \ref{fig:rtd}(a). The potential $V(x)$ is shown in Fig. \ref{fig:kura1pot}. It is a parabolic potential modulated by a cosine function. The local minima $x_k=\omega_k\tau$ correspond to the frequencies in the noise-free case $D=0$. Our reduction procedure does not affect them and their calculation by the potential extrema reveals Eq.~\eqref{eq:omega}. The local maxima $x_m$ correspond to unstable solutions of the deterministic system. \begin{figure}[!h] \centering \includegraphics[width=0.6\columnwidth]{potential-parabool.eps} \caption{Potential for a Kuramoto oscillator with feedback (Eq. \eqref{eq:potential}). Parameters are $\omega_0=6$, $\kappa=2$, $\tau=10$.} \label{fig:kura1pot} \end{figure} The phase difference $x(t)$ relates in a simple way to the frequency measure $x(t)/\tau=\omega(t)$. Hence, the stationary distribution of frequencies $p(\omega)$ is given by a Boltzmann factor \cite{Kampen} \begin{equation} p(\omega)\propto e^{-\frac{V(\omega\tau)}{2D}}=e^{-\frac{\tau}{4D}(\omega-\omega_0)^2}e^{\frac{\kappa\cos\omega\tau}{2D}}\label{eq:frequencydistribution}\,. \end{equation} We recognize a Gaussian envelope with mean $\omega_0$ and variance $\sigma^2=2D/\tau$. This envelope corresponds to the probability distribution of a random walk. Thus, while the total frequency range is given by $2\kappa$, the range of visited frequencies scales with $\sqrt{D/\tau}$. As the spacing between the orbits scales inversely with the feedback delay, the number of attended orbits grows as $\sqrt{D\tau}$. The coupling function, which appears in the second factor of Eq. 
\eqref{eq:frequencydistribution}, determines the location and the shape of the different peaks. As the feedback strength increases, the peaks in the distribution become more pronounced. In Fig. \ref{fig:var} we compare our analytical result for the simplified system (Eq. \eqref{eq:frequencydistribution}) with numerical simulations of the original delay system (Eq. \eqref{eq:model}). We find that our theoretical results provide an excellent approximation for the distribution of frequencies and thus prove the validity of the applied reduction method. Also the average residence times can be approximated by the potential model \eqref{eq:potential}: In the limit of low noise, the escape rates from an orbit with frequency $\omega_k$ are given by the Kramers rate \cite{Kra40,McN89} $$r_{\pm}(\omega_k)= \frac{\sqrt{-V''(\omega_k\tau)V''(x_m)}}{2\pi}e^{-\frac{\Delta V}{2D}}\,,$$ \noindent where the suffix denotes whether the oscillator hops to a mode with a higher or a lower frequency. The average residence time $T_0(\omega_k)$ reads then $$T_0(\omega_k)\approx\frac{1}{r_+(\omega_k)+r_-(\omega_k)}\,.$$ For strong coupling and large feedback delay $\kappa\tau\gg 1$, a multitude of orbits are stable, with $\omega_k\tau\approx 2n\pi$ and $x_m=(2n+1)\pi$. The average residence time is then further approximated as \begin{equation} T_0(\omega_k)\approx\frac{\pi}{\kappa}\frac{e^{\frac{\kappa}{D}+\frac{\pi^2}{4\tau D}}}{\cosh\left(\frac{\pi(\omega_k-\omega_0)}{2D}\right)}\label{eq:rtd1}\,. \end{equation} \noindent We compared the average residence time of the different periodic orbits with our theoretical result (Eq. \eqref{eq:rtd1}) in Fig. \ref{fig:rtd}, and the approximation gives good results. Consequently, the average residence time $T_0(\omega_k)$ increases with the feedback strength $\kappa$, which determines the depth of the potential wells, and decreases with the noise strength $D$. 
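Equation~\eqref{eq:rtd1} is straightforward to evaluate; a short sketch (parameters as in Fig.~\ref{fig:rtd}, function name illustrative):

```python
import numpy as np

def mean_residence_time(w_k, omega0=6.0, kappa=3.0, tau=10.0, D=0.5):
    """Kramers estimate of the average residence time T0(omega_k),
    valid in the strong-coupling / long-delay limit kappa*tau >> 1."""
    prefactor = (np.pi / kappa) * np.exp(kappa / D + np.pi**2 / (4 * tau * D))
    return prefactor / np.cosh(np.pi * (w_k - omega0) / (2 * D))
```

The cosh denominator makes the residence time maximal at the natural frequency and symmetric about it, as seen in Fig.~\ref{fig:rtd}(b).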
For a fixed frequency $\omega_k$ the feedback delay $\tau$ has a limited influence on the residence times; for long delays the delay dependence even vanishes. Only the orbits with a frequency close to the natural frequency have a considerable average residence time, and are in this sense robust to noise. The range of these frequencies scales with $D$, and does not depend on the delay time or the coupling strength. Due to the frequency difference of $2\pi/\tau$, the number of orbits that are robust to noise scales approximately as $D\tau$. Moreover, there is a difference in the mode hopping behavior at long and short delay times: For long delays, the $\sqrt{D/\tau}$-range of attended orbits is much smaller than the range of robust frequencies, so that all visited orbits have a similar average residence time. If the delay is shorter, as is the case for our choice of parameters, significant differences in the residence times of the orbits are observed. \section{Two mutually coupled phase oscillators} More common than a single oscillator driven by its own delayed feedback are coupled oscillators. We consider here the simple case of two mutually delay-coupled oscillators with independent noise sources. This system is modelled by \begin{eqnarray} \dot{\phi}_1(t) & = & \omega_{01}+\kappa\sin(\phi_{2}(t-\tau)-\phi_1(t))+\xi_1(t)\nonumber\\ \dot{\phi}_2(t) & = & \omega_{02}+\kappa\sin(\phi_{1}(t-\tau)-\phi_2(t))+\xi_2(t)\label{eq:Zn}\,, \end{eqnarray} with $\omega_{01,02}=\omega_0\mp\Delta/2$, and $\Delta$ being the detuning between the oscillators. We first review the case of identical oscillators ($\omega_{01}=\omega_{02}\equiv \omega_0$) without noise ($D=0$) \cite{AmpPhase,ikke}. The system not only has in-phase synchronized oscillations $\phi_1(t)=\phi_2(t)=\omega_k t$, but also anti-phase synchronized orbits $\phi_1(t)=\phi_2(t)+\pi=\omega_k t$.
The frequencies of the in-phase orbits are identical to those of the single feedback system; they are given by $\omega_k=\omega_0-\kappa\sin(\omega_k\tau)$. For the anti-phase orbits the frequencies can be found by solving $\omega_k=\omega_0+\kappa\sin(\omega_k\tau)$. The coupled system thus has twice as many coexisting periodic orbits as the single system. In-phase orbits are stable for $\cos(\omega_k\tau)>0$ and anti-phase orbits for $\cos(\omega_k\tau)<0$. A graphical determination of the frequencies is shown in Fig. \ref{fig:kura2}(a): stable in-phase and anti-phase frequencies alternate with each other. For large $\kappa\tau\gg1$ the frequencies $\omega_k$ close to the natural frequency $\omega_0$ are approximated by $\omega_k\tau\approx n\pi$, so that the separation between the frequencies approaches $\pi/\tau$. Without noise, nonidentical oscillators still synchronize to a common frequency if the coupling is strong enough, $|\Delta|<2\kappa$ \cite{sch89.1}. Detuned oscillators, however, are no longer exactly in-phase or anti-phase with each other, but exhibit a phase difference $\delta$ depending on the locking frequency and the detuning. We find for the frequencies $\omega_k$ and the phase difference $\delta$ \begin{eqnarray} \omega_k &= & \omega_0 - \kappa\sin(\omega_k\tau)\cos\delta\nonumber\\ \sin\delta & = & \frac{\Delta}{2\kappa\cos(\omega_k\tau)}\label{eq:omegadet}\,; \end{eqnarray} an orbit is stable if the conditions $\cos(\omega_k\tau+\delta)>0$ and $\cos(\omega_k\tau-\delta)>0$ hold. We solve Eq. \eqref{eq:omegadet} graphically in Fig. \ref{fig:kura2}(b). For nonidentical oscillators the frequency range is reduced to $2\kappa-\Delta$; for large $\kappa\tau$, however, neither the locking frequencies $\omega_k$ nor their stability is strongly affected by the detuning.
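The graphical construction of Fig. \ref{fig:kura2}(a) can be mimicked numerically by locating the zeros of $g(\omega)=\omega-\omega_0\pm\kappa\sin(\omega\tau)$. The sketch below is our own implementation (grid size and tolerances chosen ad hoc): it scans for sign changes, refines them by bisection, and keeps only the stable solutions.

```python
import math

def locking_frequencies(omega0=6.0, kappa=2.0, tau=10.0, anti=False, n_grid=20000):
    """Solve omega = omega0 -/+ kappa*sin(omega*tau) by a sign-change scan plus
    bisection. In-phase orbits (anti=False) are kept if cos(omega*tau) > 0,
    anti-phase orbits (anti=True) if cos(omega*tau) < 0."""
    sign = 1.0 if anti else -1.0
    g = lambda w: w - omega0 - sign * kappa * math.sin(w * tau)
    lo, hi = omega0 - kappa, omega0 + kappa        # all roots lie in this band
    grid = [lo + (hi - lo) * i / n_grid for i in range(n_grid + 1)]
    roots = []
    for a, b in zip(grid, grid[1:]):
        if g(a) * g(b) <= 0.0:
            for _ in range(60):                    # bisection refinement
                m = 0.5 * (a + b)
                if g(a) * g(m) <= 0.0:
                    b = m
                else:
                    a = m
            w = 0.5 * (a + b)
            if math.cos(w * tau) * (-1.0 if anti else 1.0) > 0.0:
                roots.append(w)
    return roots
```

For the parameters of Fig. \ref{fig:kura2}(a), the stable in-phase and anti-phase frequencies returned by the two calls interleave within the band $|\omega-\omega_0|\le\kappa$, as in the graphical construction.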
\begin{figure} \includegraphics[width=0.49\columnwidth]{kura2.eps} \includegraphics[width=0.49\columnwidth]{kura2-det.eps} \caption{(Color online) Graphical determination of the locking frequencies of two (a) identical and (b) nonidentical delay-coupled phase oscillators. Intersections with the thick full (dashed) line correspond to stable in-phase (anti-phase) orbits. The filling of the black (magenta) circles relates to the relative probability that the in-phase (anti-phase) orbit is visited by the corresponding stochastic system; darker labeling corresponds to a higher probability to find a frequency $\omega(t)\approx \omega_k$. The corresponding probability distributions are shown in Fig. \ref{fig:var} (b,c). In panel (b) a stable orbit is labeled as in-phase if $-\pi/2<\delta<\pi/2$. Just like for a single feedback oscillator, the frequencies $\omega_k\approx\omega_0$ are most often visited; the width of the frequency distribution is, however, smaller. Parameters are $\omega_0=6$, $\kappa=2$, $\tau=10$ and (b) $\Delta=0.8$.} \label{fig:kura2} \end{figure} We show the phase evolution of two identical delay-coupled oscillators in Fig. \ref{fig:kura2tt}. Mode hopping happens in two stages: if one oscillator, the leader, changes its frequency, the other oscillator, the laggard, follows a delay time later. Looking at the evolution of the driving terms $\phi_{1,2}(t)-\phi_{2,1}(t-\tau)$, shown in Fig. \ref{fig:kura2tt}(b), it is clear that during a transition the driving term of the leader changes by $2\pi$, while the laggard changes its frequency without a phase jump in its drive. As a frequency measure for the coupled system we use the mean frequency of the two oscillators averaged over the past delay interval, $\omega(t)=(\phi_1(t)+\phi_2(t)-\phi_1(t-\tau)-\phi_2(t-\tau))/(2\tau)$; we thus capture the frequency transition of the leading oscillator. The two oscillators initiate equally many transitions, hence the role of leader and laggard changes randomly.
For nonidentical oscillators, however, if the system speeds up, the fast oscillator is more often the leader, while if the oscillators slow down, the slow oscillator leads the dynamics. \begin{figure} \includegraphics[width=\columnwidth]{tt2.eps} \caption{(Color online)(a) The phase evolutions $\phi_{1}-\omega_0t$ (pink) and $\phi_{2}-\omega_0t$ (black) of two identical noisy Kuramoto oscillators coupled with delay. We subtracted the natural frequency $\omega_0$ for better visibility of the mode hoppings. (b) The time evolution of the frequency $\omega(t)=(\phi_1(t)+\phi_2(t)-\phi_1(t-\tau)-\phi_2(t-\tau))/(2\tau)$ for two coupled oscillators (black), together with the phase differences $x_1(t)/\tau=(\phi_1(t)-\phi_2(t-\tau))/\tau$ (upper pink curve) and $x_2(t)/\tau=(\phi_2(t)-\phi_1(t-\tau))/\tau$ (lower blue curve). The dashed lines indicate the mode hoppings. Parameters are $\omega_0=6$, $\kappa=3$, $\tau=10$ and $D=0.5$.} \label{fig:kura2tt} \end{figure} Also for mutually coupled oscillators it is possible to define a delay-free Langevin formalism. We rewrite the system as a function of the driving terms $x_1(t)$ and $x_2(t)$, defined as $x_{1,2}(t)=\phi_{1,2}(t)-\phi_{2,1}(t-\tau)$. We then assume that the oscillators are locked to the same fixed frequency over the delay interval, and as such, that $\dot{\phi}_1(t-\tau)$ and $\dot{\phi}_2(t-\tau)$ only differ in the contribution of the noise. This leads to the main reduction \begin{equation} \dot{\phi}_{1,2}(t-\tau)\approx(x_1(t)+x_2(t))/(2\tau)+\xi_{1,2}(t-\tau)\,.
\end{equation} In this way we can rewrite the system as a gradient flow in a two-dimensional potential: \begin{eqnarray} \dot{x}_{1,2}(t)&=&-\frac{\partial V}{\partial x_{1,2}}+\tilde{\xi}_{1,2}(t) \mbox{ with }\nonumber\\ V(x_1,x_2) &=& \frac{1}{4\tau}(2x_0-x_1-x_2)^2+\frac{\Delta}{2}(x_1-x_2)\nonumber\\ & & -\kappa\left(\cos x_{1}+\cos x_2\right) \label{eq:potential-det}\,, \end{eqnarray}% with $x_0=\omega_0\tau$ and $\tilde{\xi}_{1,2}(t)=\xi_{1,2}(t)-\xi_{2,1}(t-\tau)$. This potential is shown in Fig. \ref{fig:potential-det}. The wells are located at $(x_1,x_2)=(\omega_k\tau +2n\pi-\delta,\omega_k\tau-2n\pi+\delta)$. The frequency of the system is then given by the average frequency $\omega=(x_1+x_2)/(2\tau)$. As the phase difference between the oscillators is only determined up to a multiple of $2\pi$, the potential is $4\pi$-periodic with respect to $x_1-x_2=x_A$. For identical oscillators there are thus two equally probable pathways for a transition: either $x_1$ changes by almost $2\pi$ while $x_2$ remains almost constant, so that $\phi_1(t)$ leads the dynamics, or vice versa. These pathways are indicated by arrows in Fig. \ref{fig:potential-det}. Transitions typically take place between orbits with a minimal frequency difference, and therefore with a different oscillation pattern. If the oscillators are identical, we obtain the frequency distribution $p(\omega)$ by integrating over the phase difference $x_A$. We find \begin{eqnarray} p(\omega) & \propto & \int_0^{4\pi}dx_A e^{-\frac{V(\omega\tau,x_A)}{2D}}\nonumber\\ & \propto & e^{-\frac{\tau}{2D}(\omega-\omega_0)^2}I_0(\kappa\cos\omega\tau/D)\label{eq:freqdistr2}\,, \end{eqnarray} with $I_0(y)$ being the modified Bessel function of the first kind, $I_0(y)=\sum \frac{y^{2n}}{2^{2n}(n!)^2}$. \begin{figure}[t] \includegraphics[width=\columnwidth]{potential2d.eps} \caption{Two-dimensional potential for two coupled Kuramoto oscillators, without (a) and with (b) detuning.
The arrows indicate the two pathways for a transition between two frequencies; thicker arrows correspond to more probable pathways. Parameters are $\omega_{0}=6$, $\kappa=2$, $\tau=10$ and (b) $\Delta=0.8$.} \label{fig:potential-det} \end{figure} As for the single oscillator, the frequency distribution can be written as a Gaussian envelope multiplied by a factor determining the separate peaks. However, the variance of the envelope decreases by a factor $1/2$ compared to the single feedback system. The Bessel function $I_0(y)$ is symmetric: we find alternating peaks corresponding to in-phase and anti-phase orbits, whose height only depends on their respective frequencies $\omega_k$ and not on the oscillation pattern. We compare our numerical and theoretical results for the frequency distribution in Fig. \ref{fig:var}(b). The agreement is excellent. The oscillators are thus always synchronized (except during the delay interval following a transition), but they spend a proportion of time in in-phase as well as in anti-phase orbits. As a result, for long enough delays, the overall correlation between the oscillators vanishes at zero lag, but shows maxima at odd multiples of the coupling delay. For nonzero detuning $\Delta>0$, the potential (Eq. \eqref{eq:potential-det}) is tilted, as shown in Fig. \ref{fig:potential-det}(b). Consequently the phase difference $x_A(t)$ between the oscillators preferentially increases during a mode hopping. The most probable and the least probable transition pathways between two frequencies are also sketched in Fig. \ref{fig:potential-det}(b). The ratio between the transition rates is approximated by $e^{-\pi\Delta/(2D)}$, so that for large detunings it is reasonable to assume that all the transitions to a higher frequency are induced by the faster oscillator $x_2$, and the transitions to a lower frequency by the slower one $x_1$.
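The two ingredients of Eq. \eqref{eq:freqdistr2} can be checked numerically. The sketch below evaluates $p(\omega)$ using the series for $I_0(y)$ quoted above, and integrates the reduced Langevin equations of the potential \eqref{eq:potential-det} with an Euler--Maruyama scheme. This is our own illustrative implementation: the delayed cross-correlations of the effective noises $\tilde{\xi}_{1,2}$ are neglected (they are treated as independent white noises of intensity $4D$, consistent with a stationary density $\propto e^{-V/(2D)}$), and the parameter values loosely follow the figure captions.

```python
import math, random

def bessel_i0(y, terms=60):
    # I0(y) = sum_n y^(2n) / (2^(2n) (n!)^2), the series quoted in the text
    total, term = 1.0, 1.0
    for n in range(1, terms):
        term *= (y / (2.0 * n)) ** 2
        total += term
    return total

def p_omega(omega, omega0=6.0, kappa=2.0, tau=10.0, D=0.5):
    """Unnormalized frequency distribution, Eq. (freqdistr2): Gaussian envelope
    times a peak factor; peaks sit at omega*tau = n*pi, alternating between
    in-phase and anti-phase orbits."""
    envelope = math.exp(-tau * (omega - omega0) ** 2 / (2.0 * D))
    return envelope * bessel_i0(kappa * math.cos(omega * tau) / D)

def simulate_reduced(omega0=6.0, kappa=2.0, tau=10.0, D=0.5, delta=0.0,
                     dt=1e-3, n_steps=200_000, seed=1):
    """Euler-Maruyama integration of dx_i/dt = -dV/dx_i + noise, with V from
    Eq. (potential-det); returns the trace of omega(t) = (x1+x2)/(2*tau)."""
    rng = random.Random(seed)
    x0 = omega0 * tau
    x1 = x2 = x0
    sigma = math.sqrt(4.0 * D * dt)   # independent-noise approximation
    omegas = []
    for _ in range(n_steps):
        pull = (2.0 * x0 - x1 - x2) / (2.0 * tau)   # from the quadratic term
        x1 += (pull - delta / 2.0 - kappa * math.sin(x1)) * dt + sigma * rng.gauss(0.0, 1.0)
        x2 += (pull + delta / 2.0 - kappa * math.sin(x2)) * dt + sigma * rng.gauss(0.0, 1.0)
        omegas.append((x1 + x2) / (2.0 * tau))
    return omegas
```

A histogram of the returned frequency trace reproduces the alternating in-phase/anti-phase peaks under the Gaussian envelope of Eq. \eqref{eq:freqdistr2}.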
For $\kappa\tau$ sufficiently large, we can approximate the envelope by assuming detailed balance \begin{eqnarray} p(\omega_k)r_+(\omega_k) & = & p(\omega_{k+1})r_-(\omega_{k+1})\Leftrightarrow\nonumber\\ \frac{p(\omega_{k+1})}{p(\omega_k)} & \approx & e^{-\frac{\Delta V(\omega_{k+1}\rightarrow \omega_k)-\Delta V(\omega_{k}\rightarrow \omega_{k+1})}{2D}}\nonumber\\ & \approx & e^{-\frac{\tau}{2D}\left((\omega_{k+1}-\omega_0)^2-(\omega_{k}-\omega_0)^2+\frac{2\delta}{\tau}(2\omega_0-\omega_{k+1}-\omega_k)\right)}\nonumber\\ & \approx & e^{-\frac{\tau}{2D}\left(1-\frac{2\delta}{\pi}\right)\left((\omega_{k+1}-\omega_0)^2-(\omega_k-\omega_0)^2\right)}\,. \end{eqnarray} This corresponds to a Gaussian envelope of the frequency distribution with mean $\omega_0=(\omega_{01}+\omega_{02})/2$ and variance $\sigma^2=D/(\tau(1-\epsilon))$, with $\epsilon=2\arcsin(\Delta/2\kappa)/\pi>0$. The distribution of frequencies thus becomes broader due to the detuning, in agreement with the numerical results for the full delay system. In Fig. \ref{fig:var}(c) we show the approximated Gaussian envelope together with the simulated distribution of frequencies. For identical oscillators, we approximate the residence times of the orbits by assuming that all transitions take place via the two optimal pathways. We then obtain for the mean residence time \begin{equation} T_0(\omega_k)=\frac{1}{2r_+(\omega_k)+2r_-(\omega_k)}\approx\frac{\pi}{2\kappa}\frac{e^{\frac{\kappa}{D}+\frac{\pi^2}{8\tau D}}}{\cosh\left(\frac{\pi(\omega_k-\omega_0)}{2D}\right)}\label{eq:rtd2}\,. \end{equation} This corresponds to half of the lifetime of the orbits of a single oscillator with a roundtrip delay $2\tau$. We compare numerical and theoretical results in Fig. \ref{fig:rtd}(b). \section{Extension to a ring of Kuramoto oscillators} It is possible to extend these results to a unidirectional ring of $N$ oscillators.
Such a system is modelled by \begin{equation} \dot{\phi}_n(t)=\omega_0 + \Delta_n + \kappa\sin(\phi_{n+1}(t-\tau)-\phi_n(t))+\xi_n(t)\label{eq:uniring}\,, \end{equation} with $N+1\equiv 1$. Without detuning, the coupling topology allows for in-phase oscillations $\phi_n(t)=\omega_k t$ and several out-of-phase oscillation patterns $\phi_n(t)=\omega_k t+n\Delta\phi$, with $\Delta\phi=2m\pi/N$. The corresponding frequencies are given by $\omega_k=\omega_0-\kappa\sin(\omega_k\tau-\Delta\phi)$, and they are stable if $\cos(\omega_k\tau-\Delta\phi)>0$ \cite{ikke}. This results in alternating orbits with different oscillation patterns. For strong coupling and long delay, the frequencies are separated by $2\pi/(N\tau)$. Defining $x_n=\phi_n(t)-\phi_{n+1}(t-\tau)$, and assuming that the instantaneous frequencies of the oscillators can be approximated by the mean frequency averaged over the delay interval plus their noise source, $$\dot{\phi}_n(t-\tau)\approx\frac{1}{N\tau}\displaystyle\sum\limits_{l=1}^N x_l +\xi_n(t-\tau)\,,$$ we find an $N$-dimensional potential \begin{equation} V(x_1,\hdots, x_N)=\frac{N}{2\tau}(x_0-x_S)^2 +\displaystyle\sum\limits_{n=1}^N\Delta_n x_n - \kappa\displaystyle\sum\limits_{l=1}^N\cos x_l\label{eq:potentialuni}\,, \end{equation} with $x_S=\frac{1}{N}\displaystyle\sum\limits_{l=1}^N x_l$. The frequency of the system is then measured by $\omega(t)=x_S(t)/\tau$. It is no longer possible to express the frequency distribution $p(\omega)$ in terms of simple analytical expressions as above. However, for identical oscillators ($\Delta_n=0$) it is straightforward to see that the parabolic term in Eq. \eqref{eq:potentialuni} leads to a Gaussian envelope. The variance of this envelope is given by $\sigma^2=2D/(N\tau)$, and thus scales inversely with the total roundtrip delay $N\tau$. As the frequency difference between the orbits is approximated by $2\pi/(N\tau)$, the number of visited orbits scales as $\sqrt{N\tau}$.
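The scaling of the number of visited orbits follows from dividing the envelope width by the orbit spacing. A tiny numeric sketch of this estimate (our own illustration; the convention of four standard deviations for the full envelope width is an ad hoc choice):

```python
import math

def visited_orbits(N, D=0.5, tau=10.0):
    """Estimate of the number of orbits visited in a unidirectional ring:
    envelope width ~ 4*sigma with sigma^2 = 2*D/(N*tau), divided by the
    frequency spacing 2*pi/(N*tau) between neighboring orbits."""
    sigma = math.sqrt(2.0 * D / (N * tau))
    spacing = 2.0 * math.pi / (N * tau)
    return 4.0 * sigma / spacing   # proportional to sqrt(D*N*tau)

counts = [visited_orbits(N) for N in (1, 2, 4, 8)]
```

Doubling the ring size twice (from $N=1$ to $N=4$) doubles the estimated number of visited orbits, reflecting the $\sqrt{N\tau}$ scaling at fixed noise strength.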
Moreover, the potential is symmetric with respect to the different oscillation patterns, so that each pattern is equally often visited in the long delay limit. For low noise, zero detuning and large $\kappa\tau$, we find that the average residence times scale inversely with the number of oscillators in the ring, and depend weakly on the total roundtrip delay. They are approximated by \begin{equation} T_0(\omega_k)=\frac{1}{Nr_+(\omega_k)+Nr_-(\omega_k)}\approx\frac{\pi}{N\kappa}\frac{e^{\frac{\kappa}{D}+\frac{\pi^2}{4N\tau D}}}{\cosh\left(\frac{\pi(\omega_k-\omega_0)}{2D}\right)}\label{eq:rtdN}\,. \end{equation} We compared the frequency distributions and residence times of the simplified non-delay system with simulations of three, four and five delay-coupled oscillators, and the agreement is excellent (not shown). \section{General periodic systems with delayed coupling} In order to investigate whether our results are valid in a broader context, we compare the switching behavior of other nonlinear delay-coupled oscillators to our results for phase oscillators. The Kuramoto model is a weak-coupling limit, which only describes the phase dynamics, and does not take any influence on the amplitude into account; therefore we expect that our results mainly apply for weak coupling. First, we sketch the deterministic periodic solutions in a general delay system. For a single oscillator, it is known that a feedback delay induces coexisting periodic orbits, with a frequency separation of $2\pi/\tau$ \cite{Yanchuk09}. We show here briefly that in a unidirectional ring of identical oscillators, a delay gives rise to alternating in-phase and out-of-phase orbits, in a similar way as for phase oscillators. For general limit cycle systems, unlike for phase oscillators, it is not so straightforward to determine the respective orbits and their stability properties.
Extending the approach of Yanchuk and Perlikowski for a single feedback system \cite{Yanchuk09}, we consider a set of $N$ identical nonlinear systems coupled in a unidirectional ring with delay \begin{equation} \dot{x}_n(t)=f(x_n(t),x_{n+1}(t-\tau))\label{unialg}\,, \end{equation} where $x_{N+1}\equiv x_1$. In the following we assume that this network allows for an in-phase synchronized periodic solution $x_n(t)=x_{n-1}(t)=x_n(t+T)$ at a coupling delay $\tau=\tau_0$. Shifting the delay to $\tau_1=\tau_0+T/N$, the same periodic orbit is a solution of the system; the oscillators, however, exhibit a phase difference $x_n(t)=x_{n+1}(t-T/N)=x_n(t+T)$. Similarly, we find the same waveform appearing with all the other out-of-phase patterns that are allowed in the ring: a pattern corresponding to $x_n(t)=x_{n+1}(t-kT/N)=x_n(t+T)$ can be found at a delay $\tau_k=\tau_0+kT/N$. An orbit with a period $T$ thus reappears when shifting the delay by an amount $T/N$. The periodic solutions are organized in branches: as the delay increases, the period $T$ of an orbit varies continuously between a minimal period $T_{\min}$ and a maximal period $T_{\max}$. For a fixed delay $\tau$, the number of coexisting orbits resulting from a single branch can then be estimated in the following way: we have $\tau\approx nT_{\max}/N\approx mT_{\min}/N$. The number of periodic states is then estimated as $m-n=N\tau(T_{\min}^{-1}-T_{\max}^{-1})$, with in-phase and out-of-phase orbits alternating with each other. The frequency difference between two orbits is approximated by $2\pi/(N\tau)$, just like for phase oscillators. It is possible to show that the stability of these orbits depends on their period, but not on the oscillation pattern. In the long delay limit the stability no longer depends on the number of oscillators in the ring, or on the coupling delay. As an exemplary system, we numerically investigate stochastic switching between such coexisting orbits in FitzHugh-Nagumo oscillators.
We simulated a single oscillator ($N=1$) with delayed feedback, and two identical mutually delay-coupled oscillators ($N=2$). Our oscillator is modelled by \begin{eqnarray} \epsilon\dot{v}_n(t) & = & v_n(t)-\frac{{v_n}^3(t)}{3}-w_n(t)+k(v_{n+1}(t-\tau)-v_n(t))\nonumber\\ \dot{w}_n(t) & = & v_n(t)+a +\xi_n(t)\,, \end{eqnarray} with $(v_{N+1},w_{N+1})\equiv (v_1,w_1)$, and $\xi_n(t)$ being Gaussian white noise with a variance given by $\langle\xi^2(t)\rangle=2\tilde{D}$. We choose our parameters such that without delayed coupling and without noise the oscillator(s) show periodic spiking dynamics. A typical time trace of an oscillator with noise and feedback, which undergoes a mode hopping, is shown in Fig. \ref{fig:FHNtt}(a). \begin{figure}[t] \includegraphics[width=0.49\columnwidth]{FHNtt.eps} \includegraphics[width=0.49\columnwidth]{FHNphase.eps} \caption{(a) Time trace and (b) frequency evolution of a FitzHugh-Nagumo oscillator with feedback. Parameters are $\epsilon=0.01$, $a=0.9$, $k=0.2$, $\tau=20$ and $\tilde{D}=0.0145$.} \label{fig:FHNtt} \end{figure} We analyze the mode hopping in a similar way as for phase oscillators. We define the phase of the oscillators by the Hilbert transform of the fast variable, $\phi_n(t)=\arg(\mathcal{H}(v_n(t)))$, but similar results were obtained by using the alternative definition $\phi_n(t)=\arctan(v_n(t)/w_n(t))$. In both cases the waveform is very different from sinusoidal, so the frequency shows large fluctuations within a period even without noise. The frequency measure is given by $\omega(t)=\sum(\phi_n(t)-\phi_n(t-\tau))/(N\tau)$, similar to the Kuramoto oscillators. We show the evolution of the frequency $\omega(t)$ in Fig. \ref{fig:FHNtt}(b). Although the mode hopping event is hard to detect in the time trace (Fig. \ref{fig:FHNtt}(a)), it is clearly visible in the frequency. Comparing Figs.
\ref{fig:FHNtt}(b) and \ref{fig:kura1tt}(b), we find that the irregular waveform of the FitzHugh-Nagumo oscillator results in larger and asymmetric excursions from the deterministic frequency. Fig. \ref{fig:FHN}(a,b) compares the frequency distributions $p(\omega)$ of a single oscillator with feedback and of two mutually coupled oscillators. For the single oscillator, shown in Fig. \ref{fig:FHN}(a), we find five different peaks, separated by a frequency difference of $2\pi/\tau$. The shape of the different peaks is asymmetric; this feature results from the asymmetric waveform of the spikes. The frequency distribution $p(\omega)$ for two mutually coupled oscillators is shown in Fig. \ref{fig:FHN}(b): we find peaks at the same frequencies as for the single element; they correspond to in-phase orbits. Between the in-phase peaks we find maxima that can be associated with anti-phase orbits. Just like for phase oscillators, the frequency distribution of the coupled system has the same mean and half the variance of the single system. The corresponding average residence times of each orbit are shown in Fig. \ref{fig:FHN}(c). The average residence times of the single oscillator (black dots) are larger than those of the coupled system (pink dots). Moreover, the average residence times show the same trend as for coupled phase oscillators: the orbits with a central frequency are most robust against noise. The agreement with the Kuramoto oscillators is even quantitative: In Fig. \ref{fig:FHN}(a,b) we compare the frequency distributions with Gaussians (blue dashed lines) which have the same mean and variance as the original distributions; the maxima of the peaks approximately lie on this Gaussian curve, both for the single feedback oscillator and for the two delay-coupled oscillators. From the mean and variance, we identify the natural frequency $\omega_0$ and the noise strength $D$ of the corresponding Kuramoto model with the same delay $\tau$.
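The phase definition $\phi_n(t)=\arg(\mathcal{H}(v_n(t)))$ used in this analysis can be sketched in a few lines. For self-containedness we build the analytic signal from a plain $O(n^2)$ DFT; in practice an FFT-based routine such as scipy.signal.hilbert would be used. This is our own illustrative implementation:

```python
import cmath, math

def analytic_phase(signal):
    """Phase from the analytic signal v + i*H(v): compute the DFT, zero the
    negative frequencies and double the positive ones, transform back, and
    take the argument of the resulting complex trace."""
    n = len(signal)
    spec = [sum(signal[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
            for k in range(n)]
    for k in range(n):
        if k == 0 or (n % 2 == 0 and k == n // 2):
            pass                      # DC and Nyquist bins are kept as-is
        elif k < (n + 1) // 2:
            spec[k] *= 2.0            # positive frequencies doubled
        else:
            spec[k] = 0.0             # negative frequencies removed
    analytic = [sum(spec[k] * cmath.exp(2j * math.pi * k * t / n) for k in range(n)) / n
                for t in range(n)]
    return [cmath.phase(z) for z in analytic]
```

For a pure cosine the extracted phase advances linearly at the oscillation frequency; for the spiking FitzHugh-Nagumo waveform the same construction yields the fluctuating phase whose delay-averaged derivative defines $\omega(t)$.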
The coupling phase can be found from the position of the in-phase peaks, $\theta\approx \omega_k\tau$. In Fig. \ref{fig:FHN}(c) we also compare the Kuramoto residence times for a single oscillator (Eq. \eqref{eq:rtd1}, pink dashed curve) and for two coupled oscillators (Eq. \eqref{eq:rtd2}, blue dashed curve) to the residence times for FitzHugh-Nagumo elements. We thereby used the parameters $D$ and $\omega_0$ determined from the frequency distributions; the coupling strength $\kappa$ can then be estimated from the average residence times. We find that, for the same coupling strength $\kappa$ for the single and the coupled system, the residence times are well approximated by the phase model. Moreover, we find a single parameter set $(\omega_0,\kappa,D,\theta)$ which models the frequency distribution and the average residence times for both a single feedback and two coupled oscillators. Hence, the scaling properties of the stochastic periodic dynamics with the delay time and the oscillator number are reproduced. \begin{figure}[t] \includegraphics[width=\columnwidth]{FHNfreqrtd.eps} \caption{(Color online) Frequency distribution $p(\omega)$ for one (a) and two (b) coupled FitzHugh-Nagumo oscillators with delay, with $\epsilon=0.01$, $a=0.9$, $k=0.2$, $\tau=40$ and $\tilde{D}=0.0145$. The blue dashed curve shows the Gaussian envelope for the corresponding Kuramoto oscillator(s) with $\omega_0=2.55$, $D=0.2$ and $\tau=40$. In panel (c) the corresponding average residence times are shown for one (upper black dots) and two (lower pink dots) oscillators; the dashed curves represent the Kuramoto approximation for one (Eq. \eqref{eq:rtd1}, upper pink curve) and two (Eq. \eqref{eq:rtd2}, lower blue curve) elements for $\omega_0=2.55$, $D=0.2$, $\tau=40$ and $\kappa=0.815$.} \label{fig:FHN} \end{figure} \section{Summary and discussion} We have studied the influence of additive noise on a single phase oscillator with delayed feedback, on two delay-coupled phase oscillators, and on a unidirectional ring of delay-coupled phase oscillators.
In such systems multiple periodic orbits coexist, and under the influence of noise the oscillators hop from one orbit to another. We approximated the system as a noisy particle in a potential well; both for the distribution of frequencies and for the residence times the approximation is excellent. Although our approximation only applies for weak noise, we obtain a good agreement for the distribution of frequencies for all noise strengths. However, it should be remarked that in the simplified model some dynamical phenomena are not reproduced. The most prominent example is the appearance of delay stochastic resonance peaks in the residence time distribution, shown in Fig. \ref{fig:rtd}(a). Also frequency oscillations with a periodicity of a roundtrip delay time, which are typically present in the system, are no longer visible. Transient behavior is different as well: in the delay system a transient decays at a rate proportional to the inverse delay time, while in the reduced system the decay happens much faster. We found that the oscillators only visit a fraction of the deterministic stable orbits: whereas the number of deterministic orbits scales with the oscillator number $N$, the delay time $\tau$ and the coupling strength $\kappa$, the number of visited orbits scales as $\sqrt{DN\tau}$, and does not depend on the coupling strength $\kappa$. The orbit with a frequency closest to the natural frequency is the most probable, irrespective of its oscillation pattern. Our results on the average residence times indicate the robustness of the orbits against weak noise. The most robust orbits are those with a frequency close to the natural frequency, also irrespective of the oscillation pattern. The sensitivity of an orbit to noise depends strongly on the coupling strength, while the coupling delay plays only a minor role. The number of robust orbits scales as $DN\tau$.
For two delay-coupled oscillators and for unidirectional rings the system does not show any preference for a particular oscillation pattern; in the long delay limit, the different oscillation patterns are equally often visited. However, this symmetry between in-phase and out-of-phase patterns depends on the coupling topology. We also simulated three, four and five delay-coupled Kuramoto oscillators in an all-to-all configuration. In this case, however, a description as a noisy particle in a potential is not accurate, as not only periodic dynamics is observed. The distribution of frequencies looks different: the peaks associated with in-phase orbits are considerably higher than those corresponding to out-of-phase dynamics. For long delays only in-phase periodic orbits remain visible in the frequency distributions. We find that the frequency distribution narrows with the number of oscillators: the variance of the frequency distribution scales as $1/(N-1)$. The Kuramoto model is a weak coupling approximation for limit cycle oscillators. Therefore we expect our results to apply to delay-coupled nonlinear systems showing stable periodic dynamics. In particular, the Kuramoto approximation applies when the coupling mainly influences the oscillation phase, while the waveform or oscillation amplitude is hardly affected. We indeed found a good correspondence between Kuramoto and FitzHugh-Nagumo oscillators in this case. However, we expect the approximation to break down as the coupling strength increases and amplitude instabilities play a role in the dynamics. Not only in stable oscillatory systems, but also in a chaotic attractor, a delay has the effect of inducing multiple periodic orbits. Hence, the chaotic attractor of two delay-coupled chaotic systems contains in-phase as well as anti-phase orbits, and they have similar stability properties (for long enough delay).
Therefore, it is not surprising that we find the same correlation pattern, with a high correlation at the delay time but no correlation at zero lag, for coupled noisy oscillators and for chaotic systems with delay \cite{hei01.1}. However, chaotic and stochastic systems show different scaling behavior with the delay time and the number of coupled elements. It is worth noting that the envelopes of the frequency distributions are the same as those for a random walk. The delayed feedback only imposes restrictions on the two-point distribution of $x(t)=\phi(t)-\phi(t-\tau)$, but it does not affect the envelope. On timescales much shorter than the delay, $t_0\ll\tau$, the influence of the feedback is not even visible: the two-point distribution of $\phi(t)-\phi(t-t_0)$ is identical to the one of a random walk. A possible explanation of this surprising phenomenon lies in the fact that the equations of motion do not impose any restrictions on this phase difference, as long as $t_0$ is sufficiently different from $\tau$. Hence, the random walk can explore the full range. However, on timescales equal to or larger than the delay, the dynamics (i.e., the time trace) of an oscillator with delayed feedback differs significantly from a random walk. The two-point distributions also show a clear fingerprint of the delay time and, for $t_0>\tau$, a larger variance than a random walk. We believe that the issue of two-/$N$-point distributions in delay systems is worth being studied in more detail. \begin{acknowledgements} This work was initiated during a visit at TU Berlin, and O.D. is grateful to Eckehard Sch\"oll and Andrea V\"ullings for their hospitality and many ideas. Furthermore, O.D. thanks Ido Kanter and Jordi Zamora for fruitful discussions. T.J. acknowledges support by a fellowship within the Postdoc-Programme of the German Academic Exchange Service (DAAD). \end{acknowledgements}
\section{Introduction} The Casimir effect was discovered \cite{1} as an attractive force which arises between two parallel uncharged ideal metal planes in vacuum and depends only on the Planck constant $\hbar$, speed of light $c$, and interplane distance $a$. At zero temperature of the planes this effect is entirely caused by the zero-point oscillations of the quantized electromagnetic field whose spectrum is altered by the presence of boundary conditions on the planes as compared to the free Minkowski space. More recently, the Casimir effect was generalized to the case of metallic or dielectric plates kept at arbitrary temperature $T$. In the framework of the Lifshitz theory, the free energy and force of the Casimir interaction between real-material plates are represented as some functionals of the reflection coefficients expressed via the frequency-dependent dielectric permittivities of plate materials. Detailed information on calculation of the Casimir free energies and forces using the Lifshitz theory, as well as about a comparison between experiment and theory, can be found in the monograph \cite{2}. There are also generalizations of the Lifshitz theory for bodies of arbitrary shape and alternative derivations of the Casimir interaction in the literature (see, e.g., Refs.~\cite{2,2a,2b,2c}). During the last few years, much attention is given to graphene which is a one-atom-thick layer of carbon atoms possessing unusual physical properties \cite{3}. It has been shown that at energies below 1--2~eV graphene is well described by the Dirac model as a set of massless or very light electronic quasiparticles. The corresponding fermion field satisfies the relativistic Dirac equation in (2+1)-dimensions where the speed of light $c$ is replaced with the Fermi velocity $v_F\approx c$/300 \cite{3,4}. 
This allowed application of the methods developed earlier in planar quantum electrodynamics \cite{5,6,7,8} to the investigation of various quantum effects in graphene systems \cite{9,10,11,12,13,13a,14}. One of these effects is the Casimir attraction between two parallel graphene sheets, which can be calculated using the Lifshitz theory \cite{2}. For this purpose, one should know the response function of graphene to the electromagnetic field, which does not reduce to the standard dielectric permittivities of metallic and dielectric materials. It is important to keep in mind that the permittivities of ordinary materials are usually derived using the kinetic theory or the Kubo formula under several assumptions which are not universally applicable \cite{14.0}. These and some other theoretical approaches have been used in approximate calculations of the response functions and the Casimir force in graphene systems \cite{14.1,14.2,14.3,14.4,14.5,14.6,14.7,14.8,14.9,14.10,14.11,14.12,14.13,14.14,14.15,14.16,14.17,14.18}. In the framework of the Dirac model, however, the dielectric response of graphene can be described exactly by means of its polarization tensor found on the basis of the first principles of thermal quantum field theory. Although the polarization tensor of graphene was considered in many papers (see, e.g., Ref.~\cite{15} and the literature therein), the exact expression for it at zero temperature, as well as the corresponding formulas for the reflection coefficients, were found in Ref.~\cite{16}. The polarization tensor of gapped graphene (the energy gap $\Delta$ arises for quasiparticles of nonzero mass) at any temperature was derived in Ref.~\cite{17}. The expressions of Ref.~\cite{17} are valid at the pure imaginary Matsubara frequencies and were used to investigate the Casimir effect in many graphene systems \cite{17,18,19,20,21,22,23,24,25,26,27}.
In Ref.~\cite{28} another form of the polarization tensor of graphene at nonzero temperature was derived, valid over the entire plane of complex frequencies. It was generalized to the case of nonzero chemical potential $\mu$ in Ref.~\cite{29}. This form of the polarization tensor was also successfully used in calculations of the Casimir force in various graphene systems \cite{29,30,31,32,33,34}, as well as for the investigation of the reflectivity and conductivity properties of graphene \cite{35,36,37,38}. Interest in the thermodynamic aspects of the Lifshitz theory in application to graphene systems arose from the so-called Casimir puzzle. It turned out that the theoretical predictions for the Casimir force between both metallic and dielectric test bodies are excluded by the measurement data if one takes into account in calculations the dissipation of free electrons and the conductivity at a constant current, respectively (see the reviews in Refs.~\cite{2,39,40} and the most recent experiments \cite{41,42,43,44}). As to thermodynamics, it was found that an account of the dissipation of free electrons for metals with perfect crystal lattices and of the dc conductivity for dielectrics results in a violation of the third law of thermodynamics, also known as the Nernst heat theorem (see the reviews in Refs.~\cite{2,39} and the most recent results in Refs.~\cite{45,46,47,48,49,50}). In the single experiment on measuring the Casimir interaction in graphene systems performed to date \cite{51}, the measurement data were found to be in good agreement with theoretical predictions using the polarization tensor \cite{52}.
Taking into consideration that the polarization tensor of graphene results in two spatially nonlocal dielectric permittivities, the longitudinal one and the transverse one, each of which is complex and takes dissipation into account, the question arises whether the Casimir free energy and entropy of graphene systems are consistent with the requirements of thermodynamics. To answer this question, the low-temperature behavior of the Casimir free energy and entropy of two sheets of pristine graphene with $\Delta=\mu=0$ was found in Ref.~\cite{53}. It was shown that in this case the Casimir entropy vanishes with vanishing temperature, i.e., the Nernst heat theorem is satisfied. The same result was obtained for the Casimir-Polder entropy of an atom interacting with a sheet of pristine graphene \cite{54}. For an atom interacting with a real graphene sheet possessing nonzero $\Delta$ and $\mu$, it was shown that the Nernst heat theorem is satisfied for $\Delta>2\mu$ \cite{55} and $\Delta<2\mu$ \cite{55,56}. As to the case $\Delta=2\mu$, the Casimir-Polder entropy at zero temperature was found to take a nonzero value depending on the parameters of the system, i.e., an entropic anomaly arises \cite{56} (the low-temperature behavior of the Casimir-Polder free energy for $\Delta,\mu\neq 0$ was also considered in Ref.~\cite{57}). In this paper, we derive the low-temperature analytic asymptotic expressions for the Casimir free energy and entropy of two real graphene sheets possessing nonzero values of $\Delta$ and $\mu$. This is a more complicated problem than for an atom interacting with a real graphene sheet because the free energy of the atom-graphene interaction is a linear function of the reflection coefficients, which is not the case for two parallel graphene sheets. The Casimir free energy is given by the Lifshitz formula, where the reflection coefficients are expressed via the polarization tensor of graphene in (2+1)-dimensional space-time.
The thermal correction to the Casimir energy at zero temperature is separated into two contributions. In the first of them, the temperature dependence is determined exclusively by a summation over the Matsubara frequencies, whereas the polarization tensor is defined at zero temperature. The temperature dependence of the second contribution is determined by an explicit dependence of the polarization tensor on temperature as a parameter. We find the asymptotic behavior at low temperature of each of these contributions under different relationships between $\Delta$ and $2\mu$. It is shown that the leading terms determining the low-temperature behavior of the total Casimir free energy originate from the first contribution to the thermal correction for both $\Delta>2\mu$ and $\Delta<2\mu$, and from the second contribution for $\Delta=2\mu$. As a result, for $\Delta>2\mu$ and $\Delta<2\mu$ the Nernst heat theorem is satisfied, whereas for $\Delta=2\mu$ it is violated. The physical meaning of this anomaly is discussed in the context of problems considered earlier in the literature on the Casimir effect between metals and dielectrics. The paper is organized as follows. In Sec.~II, we briefly summarize the necessary formalism of the polarization tensor. Section III is devoted to the perturbation expansion of the Lifshitz formula at low temperature. In Secs.~IV, V, and VI, the derivation of the asymptotic expressions for the Casimir free energy and entropy at low temperature is presented for the cases $\Delta>2\mu$, $\Delta=2\mu$, and $\Delta<2\mu$, respectively. Section VII contains our conclusions and a discussion. In the Appendix, the reader will find some calculation details. \section{The polarization tensor of graphene and the reflection coefficients} We consider two parallel graphene sheets separated by a distance $a$ at temperature $T$ in thermal equilibrium with the environment.
The electronic quasiparticles in graphene, considered in the framework of the Dirac model \cite{3,4}, are characterized by a small but nonzero mass which results in an energy gap $\Delta$ taking typical values of 0.1--0.2~eV. The energy gap arises due to structural defects, interelectron interactions, and interaction with a substrate, if any \cite{15,58}. We also assume that the graphene sheets under consideration possess some value of the chemical potential $\mu$, which depends on the doping concentration \cite{59} (for pristine graphene $\Delta=\mu=0$). The polarization tensor of graphene describes its response to an external electromagnetic field in the one-loop approximation. The values of this tensor at the pure imaginary Matsubara frequencies $\xi_l=2\pi k_BTl/\hbar$ (where $k_B$ is the Boltzmann constant and $l=0,\,1,\,2,\,\ldots$) are usually denoted as \begin{equation} \Pi_{mn}(\ri\xi_l,k_{\bot},T,\Delta,\mu)\equiv \Pi_{mn,l}(k_{\bot},T,\Delta,\mu), \label{eq1} \end{equation} \noindent where $m,\,n=0,\,1,\,2$ are the tensor indices and $k_{\bot}$ is the magnitude of the wave vector projection on the plane of graphene. Below it is convenient to consider the dimensionless polarization tensor, frequencies, and wave vector projection defined as \begin{equation} \tp_{mn,l}=\frac{2a}{\hbar}\Pi_{mn,l}, \quad \zeta_l=\frac{\xi_l}{\omega_c}, \quad \omega_c\equiv\frac{c}{2a}, \quad y=2a\left(k_{\bot}^2+\frac{\xi_l^2}{c^2}\right)^{1/2}. \label{eq2} \end{equation} In fact, only two components of the polarization tensor are independent. As an example, the 00 component $\tp_{00}$ and the trace $\tp_{m}^{\,m}$ are often used for a full characterization of this tensor \cite{17}.
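As an illustrative numerical sketch (the values of $a$, $T$, and $k_{\bot}$ below are hypothetical, chosen only for orientation), the dimensionless variables of Eq.~(\ref{eq2}) can be evaluated as follows:

```python
# Sketch: dimensionless Matsubara frequency zeta_l and wave vector y, Eq. (2).
# The numerical values of a, T and k_perp are hypothetical, for illustration only.
import math

hbar = 1.054571817e-34  # Planck constant, J s
k_B = 1.380649e-23      # Boltzmann constant, J/K
c = 2.99792458e8        # speed of light, m/s

def dimensionless_variables(a, T, l, k_perp):
    """Return (zeta_l, y) for separation a, temperature T, Matsubara index l,
    and wave vector projection k_perp, following Eq. (2)."""
    xi_l = 2.0 * math.pi * k_B * T * l / hbar   # Matsubara frequency xi_l
    omega_c = c / (2.0 * a)                     # characteristic frequency
    zeta_l = xi_l / omega_c
    y = 2.0 * a * math.sqrt(k_perp**2 + (xi_l / c)**2)
    return zeta_l, y

# Example: a = 100 nm, T = 300 K, l = 1, k_perp = 1e7 1/m
zeta_1, y_1 = dimensionless_variables(100e-9, 300.0, 1, 1e7)
```

One can check that $\zeta_l=\tau l$ with $\tau=4\pi k_BTa/(\hbar c)$, the small parameter used in Sec.~III.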
For our purposes it is more convenient to use $\tp_{00}$ and the following linear combination of the 00 component and the trace: \begin{equation} \tp_l\equiv\tp(\ri\zeta_l,y,T,\Delta,\mu) = (y^2-\zeta_l^2)\tp_m^{\,m}(\ri\zeta_l,y,T,\Delta,\mu) -y^2\tp_{00}(\ri\zeta_l,y,T,\Delta,\mu). \label{eq3} \end{equation} The reason is that the reflection coefficients on the graphene sheets for the transverse magnetic (TM) and transverse electric (TE) polarizations of the electromagnetic waves take the following simple form \cite{16,17,28,29}: \begin{eqnarray} && \rM\zy=\frac{y\tp_{00,l}\yt}{y\tp_{00,l}\yt+2(y^2-\zeta_l^2)}, \nonumber \\[-1mm] &&\label{eq4}\\[-2mm] && \rE\zy=-\frac{\tp_{l}\yt}{\tp_{l}\yt+2y(y^2-\zeta_l^2)}, \nonumber \end{eqnarray} \noindent where we omitted the parameters $\Delta$ and $\mu$ in the notations of the reflection coefficients for the sake of brevity. Now we present the exact expressions for $\tp_{00,l}$ and $\tp_l$ obtained in the literature. First of all, it is convenient to write each of them as the respective quantity defined at $T=0$ plus the thermal correction to it \begin{eqnarray} && \tp_{00,l}\yt=\tp_{00,l}\yo+\dT\tp_{00,l}\yt, \nonumber \\ && \tp_{l}\yt=\tp_{l}\yo+\dT\tp_{l}\yt. \label{eq4a} \end{eqnarray} \noindent It is also useful to present $\tp_{00,l}$ and $\tp_l$ as sums of contributions which do not depend on $\mu$ and $T$ and contributions which do \cite{34} \begin{eqnarray} && \tp_{00,l}\yt= \tp_{00,l}^{(0)}(y,\Delta)+ \tp_{00,l}^{(1)}\yt, \nonumber \\ && \tp_{l}\yt= \tp_{l}^{(0)}(y,\Delta)+ \tp_{l}^{(1)}\yt.
\label{eq5} \end{eqnarray} As the first contributions on the right-hand side of Eq.~(\ref{eq5}) we choose the 00 component and the combination (\ref{eq3}) of the polarization tensor of gapped ($\Delta\neq 0$) but undoped ($\mu=0$) graphene defined at zero temperature \cite{16,34} \begin{eqnarray} && \tp_{00,l}^{(0)}(y,\Delta)=\alpha\frac{y^2-\zeta_l^2}{p_l} \Psi\left(\frac{D}{p_l}\right), \nonumber \\[-1mm] && \label{eq6}\\[-2mm] && \tp_{l}^{(0)}(y,\Delta)=\alpha(y^2-\zeta_l^2){p_l} \Psi\left(\frac{D}{p_l}\right), \nonumber \end{eqnarray} \noindent where $\alpha=e^2/(\hbar c)$ is the fine structure constant, $D\equiv\Delta/(\ho)$, and the following notations are introduced: \begin{equation} \Psi(x)=2\left[x+(1-x^2)\arctan(x^{-1})\right], \quad p_l=\left[\vF^2y^2+(1-\vF^2)\zeta_l^2\right]^{1/2}, \quad \vF=\frac{v_F}{c}. \label{eq7} \end{equation} In accordance with our choice, \begin{eqnarray} && \tp_{00,l}^{(0)}(y,\Delta)=\tp_{00,l}(y,0,\Delta,0), \nonumber \\ && \tp_{l}^{(0)}(y,\Delta)=\tp_{l}(y,0,\Delta,0). \label{eq8} \end{eqnarray} \noindent In so doing, $\tp_{00,l}^{(1)}$ and $\tp_{l}^{(1)}$ acquire the meaning of thermal corrections to the polarization tensor of undoped graphene defined at $T=0$: \begin{eqnarray} && \tp_{00,l}^{(1)}(y,T,\Delta,0)=\dT\tp_{00,l}(y,T,\Delta,0), \nonumber \\ && \tp_{l}^{(1)}(y,T,\Delta,0)=\dT\tp_{l}(y,T,\Delta,0). \label{eq9} \end{eqnarray} \noindent These corrections vanish in the limit of zero temperature.
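A minimal numerical sketch of the function $\Psi(x)$ of Eq.~(\ref{eq7}), checking its limiting values $\Psi(0)=\pi$ and $\Psi(x)\approx 8/(3x)$ at large $x$ (the large-argument expansion is used repeatedly below):

```python
# Sketch of the function Psi(x) defined in Eq. (7) and of its limiting behavior:
# Psi(0) = pi and Psi(x) ~ 8/(3x) at large x.
import math

def Psi(x):
    """Psi(x) = 2*[x + (1 - x^2)*arctan(1/x)], Eq. (7)."""
    if x == 0.0:
        return math.pi          # limiting value, since arctan(1/x) -> pi/2
    return 2.0 * (x + (1.0 - x * x) * math.atan(1.0 / x))

large_x = 1000.0
approx = 8.0 / (3.0 * large_x)  # leading large-argument term
```

The next correction to the large-argument behavior is of relative order $x^{-2}$, so the approximation is already accurate to better than $10^{-6}$ at $x=1000$.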
The second contributions on the right-hand side of Eq.~(\ref{eq5}) can be explicitly presented in the form \cite{34,56} \begin{eqnarray} && \tp_{00,l}^{(1)}\yt=\frac{4\alpha D}{\vF^2}\int_1^{\infty}\!\!\!dt w(t,T,\Delta,\mu)X_{00,l}(t,y,D), \nonumber\\[-1mm] &&\label{eq10}\\[-2mm] && \tp_{l}^{(1)}\yt=-\frac{4\alpha D}{\vF^2}\int_1^{\infty}\!\!\!dt w(t,T,\Delta,\mu)X_{l}(t,y,D), \nonumber \end{eqnarray} \noindent where the $\mu$-dependent factor is given by \begin{equation} w(t,T,\Delta,\mu)=\left(e^{\frac{t\Delta+2\mu}{2k_BT}}+1\right)^{-1}+ \left(e^{\frac{t\Delta-2\mu}{2k_BT}}+1\right)^{-1}, \label{eq11} \end{equation} \noindent and the functions $X_{00,l}$ and $X_l$ are defined as follows: \begin{widetext} \begin{eqnarray} && X_{00,l}(t,y,D)=1-{\rm Re} \frac{p_l^2-D^2t^2+2\ri\zeta_lDt}{\left[p_l^4-p_l^2D^2t^2+\vF^2(y^2-\zeta_l^2)D^2 +2\ri\zeta_lp_l^2Dt\right]^{1/2}}, \nonumber\\[-1mm] &&\label{eq12}\\[-2mm] && X_{l}(t,y,D)=\zeta_l^2-{\rm Re} \frac{\zeta_l^2p_l^2-p_l^2D^2t^2+\vF^2(y^2-\zeta_l^2)D^2+ 2\ri\zeta_lp_l^2Dt}{\left[p_l^4-p_l^2D^2t^2+\vF^2(y^2-\zeta_l^2)D^2 +2\ri\zeta_lp_l^2Dt\right]^{1/2}}. \nonumber \end{eqnarray} \end{widetext} It has been shown \cite{33,34} that for doped and gapped graphene satisfying the condition $\Delta\geqslant2\mu$ the polarization tensor at $T=0$ also does not depend on $\mu$. As a result, one obtains equalities similar to those in Eqs.~(\ref{eq8}) and (\ref{eq9}), \begin{equation} \tp_{00,l}(y,0,\Delta,\mu)=\tp_{00,l}^{(0)}(y,\Delta), \quad \tp_{l}(y,0,\Delta,\mu)=\tp_{l}^{(0)}(y,\Delta), \label{eq13} \end{equation} \noindent and \begin{equation} \dT\tp_{00,l}\yt=\tp_{00,l}^{(1)}\yt, \quad \dT\tp_{l}\yt=\tp_{l}^{(1)}\yt, \label{eq14} \end{equation} \noindent where the thermal corrections vanish with vanishing temperature.
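The exponential suppression of the factor $w(t,T,\Delta,\mu)$ of Eq.~(\ref{eq11}) at low temperature for $\Delta>2\mu$ can be illustrated by the following sketch (energies in eV; the values of $\Delta$, $\mu$, and $k_BT$ are hypothetical):

```python
# Sketch of the thermal weight w(t, T, Delta, mu) of Eq. (11), showing its
# exponential suppression at low T when Delta > 2*mu. Energies are in eV and
# the parameter values are hypothetical, for illustration only.
import math

def w(t, kT, Delta, mu):
    """Eq. (11): the sum of two Fermi-type occupation factors."""
    return (1.0 / (math.exp((t * Delta + 2.0 * mu) / (2.0 * kT)) + 1.0)
            + 1.0 / (math.exp((t * Delta - 2.0 * mu) / (2.0 * kT)) + 1.0))

Delta, mu = 0.1, 0.02
w_cold = w(1.0, 0.002, Delta, mu)   # k_B*T = 2 meV
w_warm = w(1.0, 0.004, Delta, mu)   # k_B*T = 4 meV
```

At $t=1$ and $k_BT\ll\Delta-2\mu$, the second term dominates and behaves as $e^{-(\Delta-2\mu)/(2k_BT)}$, which is the origin of the exponentially small factors found in Secs.~IV--VI.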
It is significant that under the condition $\Delta<2\mu$ the polarization tensor of doped and gapped graphene at $T=0$ depends on both $\Delta$ and $\mu$, and Eqs.~(\ref{eq13}) and (\ref{eq14}) are no longer valid. In this case, the 00 component of the polarization tensor at $T=0$ and the combination of its components (\ref{eq3}) are given by \cite{33} \begin{widetext} \begin{eqnarray} && \tp_{00,l}(y,0,\Delta,\mu)=\frac{8\alpha\mu}{\vF^2\ho}- \frac{2\alpha(y^2-\zeta_l^2)}{p_l^3}\left\{ \vphantom{\left[\frac{\pi}{2}\right]}(p_l^2+D^2) {\rm Im}\left(z_l\sqrt{1+z_l^2}\right)\right. \nonumber \\ &&~~~~~~~\left. +(p_l^2-D^2)\left[ {\rm Im}\ln\left(z_l+\sqrt{1+z_l^2}\right)-\frac{\pi}{2}\right]\right\}, \nonumber \\[-1mm] &&\label{eq15}\\[-1mm] && \tp_{l}(y,0,\Delta,\mu)=-\frac{8\alpha\mu\zeta_l^2}{\vF^2\ho}+ \frac{2\alpha(y^2-\zeta_l^2)}{p_l}\left\{ \vphantom{\left[\frac{\pi}{2}\right]}(p_l^2+D^2) {\rm Im}\left(z_l\sqrt{1+z_l^2}\right)\right. \nonumber \\ &&~~~~~~~\left. -(p_l^2-D^2)\left[ {\rm Im}\ln\left(z_l+\sqrt{1+z_l^2}\right)-\frac{\pi}{2}\right]\right\}, \nonumber \end{eqnarray} \end{widetext} where \begin{equation} z_l\equiv z_l(y,\Delta,\mu)=\frac{p_l}{\vF\sqrt{p_l^2+D^2}\sqrt{y^2-\zeta_l^2}} \left(\zeta_l+\ri\frac{2\mu}{\ho}\right). \label{eq16} \end{equation} The thermal corrections to the polarization tensor of graphene satisfying the condition $\Delta<2\mu$ are immediately obtained from Eqs.~(\ref{eq4a}) and (\ref{eq5}) \begin{eqnarray} && \dT\tp_{00,l}\yt=\tp_{00,l}\yt-\tp_{00,l}\yo \nonumber \\ && \phantom{\dT\tp_{00,l}\yt}=\tp_{00,l}^{(1)}\yt-\tp_{00,l}^{(1)}\yo, \nonumber \\[-1.mm] &&\label{eq16a}\\[-1.3mm] && \dT\tp_{l}\yt=\tp_{l}\yt-\tp_{l}\yo \nonumber\\ && \phantom{\dT\tp_{l}\yt}=\tp_{l}^{(1)}\yt-\tp_{l}^{(1)}\yo. \nonumber \end{eqnarray} As to the case of the exact equality $\Delta=2\mu$, it is considered in Sec.~V.
\section{Perturbation expansion of the Lifshitz formula at low temperature} Using the reflection coefficients (\ref{eq4}) expressed above via the polarization tensor, one can represent the Casimir free energy per unit area of the graphene sheets by means of the Lifshitz formula \cite{2,60} \begin{equation} \cF=\frac{k_BT}{8\pi a^2}\sum_{l=0}^{\infty} {\vphantom{\sum}}^{\prime} \int_{\zeta_l}^{\infty}\!\!\!\!ydy\sum_{\lambda} \ln\left[1-r_{\lambda}^2\zy e^{-y}\right], \label{eq17} \end{equation} \noindent where the prime on the summation sign means that the term with $l=0$ is divided by 2, and the sum over $\lambda$ runs over the two polarizations of the electromagnetic field, transverse magnetic and transverse electric ($\lambda={\rm TM,\,TE}$). We are in fact interested not in the total Casimir free energy but in its temperature-dependent part, i.e., in the thermal correction to the Casimir energy defined as \begin{equation} \dT\cF=\cF-E(a), \label{eq18} \end{equation} \noindent where the Casimir energy at zero temperature is given by \cite{2,60} \begin{equation} E(a)=\frac{\hbar c}{32\pi^2 a^3}\int_0^{\infty}\!\!d\zeta \int_{\zeta}^{\infty}\!\!\!\!ydy\sum_{\lambda} \ln\left[1-r_{\lambda}^2(\ri\zeta,y,0) e^{-y}\right]. \label{eq19} \end{equation} \noindent Here, the reflection coefficients are expressed by Eq.~(\ref{eq4}), in which one should replace the Matsubara frequencies with the continuous frequency $\zeta$ and put $T=0$ \begin{eqnarray} && \rM(\ri\zeta,y,0)= \frac{y\tp_{00}(\ri\zeta,y,0,\Delta,\mu)}{y\tp_{00}(\ri\zeta,y,0,\Delta,\mu) +2(y^2-\zeta^2)}, \nonumber \\[-1mm] &&\label{eq20}\\[-2mm] && \rE(\ri\zeta,y,0)= -\frac{\tp(\ri\zeta,y,0,\Delta,\mu)}{\tp(\ri\zeta,y,0,\Delta,\mu) +2y(y^2-\zeta^2)}. \nonumber \end{eqnarray} \noindent Note that both the propagating waves, which are on the mass shell, and the evanescent waves, which are off the mass shell, contribute to Eqs.~(\ref{eq17}) and (\ref{eq19}).
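As a numerical illustration of the Matsubara representation (\ref{eq17}) (a toy sketch only, not the graphene calculation of this paper), one may replace the reflection coefficients by the frequency-independent ideal-metal values $r_{\rm TM}^2=r_{\rm TE}^2=1$ and work in units $\hbar=c=k_B=a=1$; the primed sum then reproduces the known zero-temperature result $E=-\pi^2/720$ at low $T$:

```python
# Toy check of the Matsubara representation, Eq. (17), in units
# hbar = c = k_B = a = 1. For illustration only, the reflection coefficients
# are replaced by the ideal-metal values (r^2 = 1 for both polarizations),
# for which E(a) = -pi^2/720 in these units; graphene coefficients differ.
import math

def integral_tail(x, r2=1.0, y_max=40.0, n=4000):
    """Trapezoidal approximation of int_x^infty y*ln(1 - r2*e^(-y)) dy."""
    lo = max(x, 1e-8)          # the integrand behaves as y*ln(y) near y = 0
    if lo >= y_max:
        return 0.0
    h = (y_max - lo) / n
    s = 0.0
    for i in range(n + 1):
        y = lo + i * h
        f = y * math.log(1.0 - r2 * math.exp(-y))
        s += 0.5 * f if i in (0, n) else f
    return s * h

def casimir_free_energy(kT, l_max=320):
    """Primed Matsubara sum of Eq. (17) with the toy reflection coefficients."""
    tau = 4.0 * math.pi * kT                   # zeta_l = tau*l in these units
    total = 0.5 * 2.0 * integral_tail(0.0)     # l = 0 term (prime: halved)
    for l in range(1, l_max + 1):
        total += 2.0 * integral_tail(tau * l)  # factor 2: TM and TE
    return kT / (8.0 * math.pi) * total

F = casimir_free_energy(kT=0.01)
E0 = -math.pi**2 / 720.0
```

At this low temperature the free energy differs from $E$ only by a tiny thermal correction, so the sum recovers the zero-temperature value to better than one percent.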
In the case $\Delta>2\mu$, following Eq.~(\ref{eq13}), one should substitute into Eq.~(\ref{eq20}) the expressions for $\tp_{00}$ and $\tp$ defined in Eq.~(\ref{eq6}), making there the above replacement $\zeta_l\to\zeta$. If, however, the condition $\Delta<2\mu$ is fulfilled, it is necessary to substitute into Eq.~(\ref{eq20}) the quantities (\ref{eq15}) with the same replacement. Now we identically rearrange Eq.~(\ref{eq18}) to the form \begin{equation} \dT\cF=\daT\cF+\deT\cF, \label{eq21} \end{equation} \noindent where \begin{widetext} \begin{eqnarray} && \daT\cF=\frac{k_BT}{8\pi a^2}\sum_{l=0}^{\infty} {\vphantom{\sum}}^{\prime} \int_{\zeta_l}^{\infty}\!\!\!\!ydy\sum_{\lambda} \ln\left[1-r_{\lambda}^2\ozy e^{-y}\right]-E(a) \label{eq22}\\ &&\hspace*{-2.cm}\mbox{and} \nonumber\\ && \deT\cF=\cF-\frac{k_BT}{8\pi a^2}\sum_{l=0}^{\infty} {\vphantom{\sum}}^{\prime} \int_{\zeta_l}^{\infty}\!\!\!\!ydy\sum_{\lambda} \ln\left[1-r_{\lambda}^2\ozy e^{-y}\right]. \label{eq23} \end{eqnarray} \end{widetext} \noindent As is seen from Eqs.~(\ref{eq21})--(\ref{eq23}), we have simply added to and subtracted from Eq.~(\ref{eq18}) a quantity having the same form as the Casimir free energy in Eq.~(\ref{eq17}) but containing the reflection coefficients (\ref{eq4}) taken at $T=0$. An advantage of Eq.~(\ref{eq21}) is that the implicit temperature dependence of the first term, $\daT\ocF$, is entirely determined by a summation over the Matsubara frequencies, whereas the polarization tensor is taken at $T=0$. As to the second term, $\deT\ocF$, it simply vanishes for temperature-independent polarization tensors. Thus, the dependence of this term on $T$ can be called explicit. We now turn to the perturbation expansion of the Casimir free energy at low temperature.
Taking into account that the thermal corrections $\dT\tp_{00,l}$ and $\dT\tp_{l}$ go to zero with vanishing $T$, we substitute Eq.~(\ref{eq4a}) in Eq.~(\ref{eq4}), expand up to the first order in the small parameters \begin{equation} \frac{\dT\tp_{00,l}\yt}{\tp_{00,l}\yo}\ll 1, \quad \frac{\dT\tp_{l}\yt}{\tp_{l}\yo}\ll 1, \label{eq24} \end{equation} \noindent and obtain \begin{equation} r_{\rm TM(TE)}\zy=r_{\rm TM(TE)}\ozy+\dT r_{\rm TM(TE)}\zy, \label{eq25} \end{equation} \noindent where the first contributions are given by Eq.~(\ref{eq4}) taken at $T=0$ and the thermal corrections to the reflection coefficients are given by \begin{eqnarray} && \dT\rM\zy=\frac{2y(y^2-\zeta_l^2)\dT\tp_{00,l}\yt}{[y\tp_{00,l}\yo +2(y^2-\zeta_l^2)]^2}, \nonumber \\[-0.8mm] &&\label{eq26}\\[-2mm] && \dT\rE\zy=-\frac{2y(y^2-\zeta_l^2)\dT\tp_{l}\yt}{[\tp_{l}\yo +2y(y^2-\zeta_l^2)]^2}. \nonumber \end{eqnarray} \noindent This approach is applicable under the conditions $\tp_{00,l}\yo\neq 0$ and $\tp_{l}\yo\neq 0$, which are valid for the cases $\Delta\geqslant 2\mu$ considered in Secs.~IV and V. For the case $\Delta<2\mu$, however, one cannot use perturbation theory in the parameters (\ref{eq24}) for the contribution of the Matsubara term with $l=0$ (see Sec.~VI). The implicit thermal correction $\daT\ocF$ defined in Eq.~(\ref{eq22}) is the difference between the sum in $l$ and the integral (\ref{eq19}) with respect to $\zeta$. From Eq.~(\ref{eq2}) it is seen that $\zeta_l=\tau l$, where $\tau\equiv 4\pi k_BTa/(\hbar c)$. By replacing the integration variable $\zeta$ in Eq.~(\ref{eq19}) with $t=\zeta/\tau$, one can bring Eq.~(\ref{eq22}) to the form \begin{equation} \daT\cF=\frac{k_BT}{8\pi a^2}\left[ \sum_{l=0}^{\infty}{\vphantom{\sum}}^{\prime} \Phi(\tau l)- \int_{0}^{\infty}\!\!\!dt\Phi(\tau t)\right], \label{eq27} \end{equation} \noindent where \begin{equation} \Phi(x)=\int_{x}^{\infty}\!\!\!\!ydy\sum_{\lambda} \ln\left[1-r_{\lambda}^2(\ri x,y,0) e^{-y}\right].
\label{eq28} \end{equation} By applying the Abel-Plana formula \cite{2,60a}, Eq.~(\ref{eq27}) can be rewritten as \begin{equation} \daT\cF=\frac{\ri k_BT}{8\pi a^2} \int_{0}^{\infty}\!\frac{dt}{e^{2\pi t}-1}\left[ \Phi(\ri \tau t)-\Phi(-\ri \tau t)\right]. \label{eq29} \end{equation} \noindent In the next sections, Eq.~(\ref{eq29}) is used to find the asymptotic behavior of $\daT\ocF$ at arbitrarily low $T$. In order to determine the low-temperature behavior of the second thermal correction to the Casimir energy, $\deT\ocF$, we substitute Eq.~(\ref{eq25}) into its definition (\ref{eq23}) and use the identity \begin{widetext} \begin{eqnarray} && \ln\left\{1-\left[r_{\lambda}\ozy+\dT r_{\lambda}\zy\right]^2e^{-y}\right\} -\ln\left[1-r_{\lambda}^2\ozy e^{-y}\right] \nonumber \\ && =\ln\left\{1-\frac{2r_{\lambda}\ozy\dT r_{\lambda}\zy+ [\dT r_{\lambda}\zy]^2}{1-r_{\lambda}^2\ozy e^{-y}}\,e^{-y}\right\}. \label{eq30} \end{eqnarray} \end{widetext} \noindent Then, expanding the logarithm up to the first power of the small parameter and preserving only the term of the first power in $\dT r_{\lambda}\zy$, one arrives at \begin{equation} \deT\cF=-\frac{k_BT}{4\pi a^2} \sum_{l=0}^{\infty}{\vphantom{\sum}}^{\prime} \int_{\zeta_l}^{\infty}\!\!\!ydye^{-y} \sum_{\lambda} \frac{r_{\lambda}\ozy\dT r_{\lambda}\zy}{1-r_{\lambda}^2\ozy e^{-y}}. \label{eq31} \end{equation} \noindent This equation, which is valid under the condition that $\tp_{00,l}$ and $\tp_l$ are nonzero at $T=0$ and, thus, $r_{\lambda}\ozy\neq 0$, is used below to determine the behavior of $\deT\ocF$ at low temperature. \section{Low-temperature behavior of the Casimir free energy and entropy for graphene sheets with {\boldmath$\Delta>2\mu$}} We assume that the graphene sheets under consideration in this section satisfy the condition $\Delta>2\mu$ and start with the thermal correction $\daT\cF$ to the Casimir energy defined in Eq.~(\ref{eq22}) and expressed by Eqs.~(\ref{eq27}) and (\ref{eq29}).
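The Abel-Plana formula underlying the passage from Eq.~(\ref{eq27}) to Eq.~(\ref{eq29}) can be checked numerically with a simple test function, e.g., $\Phi(x)=e^{-x}$ (an illustration only; the actual $\Phi$ is defined in Eq.~(\ref{eq28})):

```python
# Numerical sanity check of the Abel-Plana formula that converts Eq. (27)
# into Eq. (29), using the test function Phi(x) = exp(-x) (illustration only).
import math

tau = 0.5

# Left-hand side: primed sum minus integral, as in Eq. (27) with Phi = exp(-x);
# the integral int_0^infty exp(-tau*t) dt equals 1/tau.
lhs = 0.5 + sum(math.exp(-tau * l) for l in range(1, 200)) - 1.0 / tau

# Right-hand side, as in Eq. (29): for Phi(x) = exp(-x) one has
# i*[Phi(i*tau*t) - Phi(-i*tau*t)] = 2*sin(tau*t).
def integrand(t):
    if t == 0.0:
        return tau / (2.0 * math.pi)   # finite limit of sin(tau*t)/(e^(2*pi*t)-1)
    return math.sin(tau * t) / (math.exp(2.0 * math.pi * t) - 1.0)

n, t_max = 20000, 10.0
h = t_max / n
rhs = 2.0 * h * (0.5 * (integrand(0.0) + integrand(t_max))
                 + sum(integrand(i * h) for i in range(1, n)))
```

For this test function both sides equal $\frac12\coth(\tau/2)-1/\tau$, which the sketch reproduces to high accuracy.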
In accordance with Eq.~(\ref{eq28}), the function $\Phi$ entering Eq.~(\ref{eq27}) is defined as the sum of contributions from the TM and TE modes \begin{equation} \Phi(x)=\Phi_{\rm TM}(x)+\Phi_{\rm TE}(x). \label{eq32} \end{equation} \noindent As a result, $\daT\cF$ becomes the sum of $\daT\ocF_{\rm TM}(a,T)$ and $\daT\ocF_{\rm TE}(a,T)$. Under the condition $\Delta>2\mu$, the polarization tensor at $T=0$ is given by Eq.~(\ref{eq6}). By replacing $\zeta_l$ with $x$ in Eq.~(\ref{eq6}) and substituting the obtained expressions in Eq.~(\ref{eq20}), where $\zeta$ is also replaced with $x$, one obtains \begin{eqnarray} && \rM(\ri x,y,0)= \frac{\alpha y\Psi(Dp^{-1})}{\alpha y\Psi(Dp^{-1})+2p(x,y)}, \nonumber \\[-1mm] &&\label{eq33}\\[-2mm] && \rE(\ri x,y,0)= -\frac{\alpha p(x,y)\Psi(Dp^{-1})}{\alpha p(x,y)\Psi(Dp^{-1})+2y}, \nonumber \end{eqnarray} \noindent where the quantity $p$ is defined as \begin{equation} p\equiv p(x,y)=[\vF^2y^2+(1-\vF^2)x^2]^{1/2}. \label{eq33a} \end{equation} In the analytic asymptotic expressions here and below we use the condition $\Delta>\hbar\omega_c$ (i.e., $D>1$), which is satisfied at not too small separations between the graphene sheets. Under this condition, at sufficiently small $x$ (low $T$) one can safely use the inequality $D\gg p(x,y)$ because the dominant contribution to the integrals in Eq.~(\ref{eq28}) is given by $y\sim 1$. We consider first the case $\lambda={\rm TM}$. By expanding Eq.~(\ref{eq28}) in a Taylor series around $x_0=0$ with the help of the first formula in Eq.~(\ref{eq33}) and the above condition, we find \begin{eqnarray} && \Phi_{\rm TM}(x)=\Phi_{\rm TM}(0)+\frac{4\alpha^2}{9D^2}x^4+ \frac{16\alpha^2(8\alpha+3D)}{135D^3}x^5+O(x^6) \nonumber \\ && ~~~~\approx \Phi_{\rm TM}(0)+\frac{4\alpha^2}{9D^2}x^4+ \frac{16\alpha^2}{45D^2}x^5+O(x^6).
\label{eq34} \end{eqnarray} The first two terms on the right-hand side of this equation do not contribute to Eq.~(\ref{eq29}), whereas the third term leads to \begin{equation} \Phi_{\rm TM}(\ri\tau t)-\Phi_{\rm TM}(-\ri\tau t)= \ri\frac{32\alpha^2}{45D^2}\tau^5t^5. \label{eq35} \end{equation} \noindent Substituting this result in Eq.~(\ref{eq29}), one arrives at \begin{equation} \daT\ocF_{\rm TM}(a,T)= -\frac{16\alpha^2\pi^4a(k_BT)^6}{315\Delta^2(\hbar c)^3}. \label{eq36} \end{equation} We continue with the case $\lambda={\rm TE}$. The function $\Phi_{\rm TE}(x)$ cannot be expanded in a Taylor series around the point $x_0=0$. Because of this, we substitute the second line of Eq.~(\ref{eq33}) in Eq.~(\ref{eq28}), expand the integrand in powers of $x$, and integrate with respect to $y$ thereafter. The result is \begin{eqnarray} && \Phi_{\rm TE}(x)=\left(\frac{4\alpha}{3D}\right)^2\left[ \vphantom{\left(1-\frac{3}{4}\vF^2\right)} -6\vF^4-2\vF^2(1-\vF^2)x^2 \right. +\vF^2\left(1-\frac{3}{4}\vF^2\right)x^4+(1-\vF^2)x^4{\rm Ei}(-x) \nonumber \\ &&~~~~~~~~ \left. -\frac{2\vF^2}{3} \left(1-\frac{7}{10}\vF^2\right)x^5+O(x^6)\right], \label{eq37} \end{eqnarray} \noindent where ${\rm Ei}(z)$ is the exponential integral. The first three terms on the right-hand side of this expression do not contribute to Eq.~(\ref{eq29}). The dominant contribution is given by the term containing the exponential integral, which leads to \begin{equation} \Phi_{\rm TE}(\ri\tau t)-\Phi_{\rm TE}(-\ri\tau t)= \ri\pi\left(\frac{4\alpha}{3D}\right)^2 \tau^4t^4. \label{eq38} \end{equation} \noindent Substituting this equation in Eq.~(\ref{eq29}) and integrating, one arrives at the result \begin{equation} \daT\ocF_{\rm TE}(a,T)= -\frac{32\zeta(5)\alpha^2(k_BT)^5}{3\pi^2\Delta^2(\hbar c)^2}.
\label{eq39} \end{equation} Comparing this with Eq.~(\ref{eq36}), we conclude that the dominant term in the asymptotic behavior of $\daT\ocF$ at low $T$ is given by Eq.~(\ref{eq39}) and determined by the contribution of the TE mode, i.e., \begin{equation} \daT\cF=\daT\ocF_{\rm TE}(a,T)\sim -\frac{\alpha^2(k_BT)^5}{\Delta^2(\hbar c)^2}. \label{eq39a} \end{equation} We now turn to the asymptotic behavior at low $T$ of the second thermal correction, $\deT\ocF$, which takes into account the explicit dependence of the polarization tensor on temperature as a parameter. This correction is presented in Eq.~(\ref{eq31}). It is convenient to express $\deT\ocF$ as a sum of two contributions \begin{equation} \deT\cF=\dbT\cF+\dcT\cF, \label{eq40} \end{equation} \noindent where the first one contains the term of Eq.~(\ref{eq31}) with $l=0$ and the second one --- all terms with $l\geqslant 1$. We start from the first contribution on the right-hand side of Eq.~(\ref{eq40}). According to Eq.~(\ref{eq31}), it contains the zero-temperature reflection coefficients and the thermal corrections to them, both taken at the zero Matsubara frequency. The reflection coefficients at $l=0$ are obtained from Eq.~(\ref{eq33}) by putting $x=0$ \begin{eqnarray} && \rM(0,y,0)= \frac{\alpha \Psi(D\vF^{-1}y^{-1})}{\alpha \Psi(D\vF^{-1}y^{-1})+2\vF}, \nonumber \\[-1mm] &&\label{eq41}\\[-2mm] && \rE(0,y,0)= -\frac{\alpha \vF\Psi(D\vF^{-1}y^{-1})}{\alpha \vF\Psi(D\vF^{-1}y^{-1})+2}. \nonumber \end{eqnarray} \noindent Taking into account that for $y\sim 1$ one has $\vF y\ll D$, we expand the function $\Psi$ in powers of the small parameter $\vF y/D$ and obtain \begin{equation} \Psi(D\vF^{-1}y^{-1}) \approx\frac{8}{3}\,\frac{\vF y}{D}.
\label{eq42} \end{equation} \noindent As a result, Eq.~(\ref{eq41}) reduces to \begin{eqnarray} && \rM(0,y,0)\approx \frac{\alpha y}{\alpha y+\frac{3}{4}D} \approx \frac{4\alpha y}{{3}D}, \nonumber \\[-1mm] &&\label{eq43}\\[-2mm] && \rE(0,y,0)\approx -\frac{\alpha \vF^2 y}{\alpha \vF^2 y+\frac{3}{4}D} \approx -\frac{4\alpha \vF^2 y}{{3}D}. \nonumber \end{eqnarray} \noindent {}From Eq.~(\ref{eq43}) it is seen that \begin{equation} \rE(0,y,0)\approx -\vF^2 \rM(0,y,0), \label{eq44} \end{equation} \noindent i.e., the magnitude of the TE reflection coefficient taken at zero frequency and temperature is negligibly small as compared to the TM one. Next, we consider the thermal corrections to the reflection coefficients (\ref{eq43}) entering Eq.~(\ref{eq31}). By putting $l=0$ in Eq.~(\ref{eq26}), one obtains \begin{eqnarray} && \dT\rM(0,y,T)=\frac{2y\dT\tp_{00,0}\yt}{[\tp_{00,0}\yo +2y]^2}, \nonumber \\[-0.8mm] &&\label{eq45}\\[-2mm] && \dT\rE(0,y,T)=-\frac{2y^3\dT\tp_{0}\yt}{[\tp_{0}\yo +2y^3]^2}. \nonumber \end{eqnarray} \noindent Under the condition $\Delta>2\mu$ we can use Eq.~(\ref{eq14}) and, thus, the quantities $\dT\tp_{00,0}$ and $\dT\tp_0$ can be obtained from Eq.~(\ref{eq10}) taken at $l=0$. Taking into account that under the condition $\Delta>2\mu$ the first contribution to Eq.~(\ref{eq11}) leads to an additional exponentially small factor $\exp[-2\mu/(k_BT)]$, one can preserve only the second contribution. As a result, we have \begin{equation} \dT\tp_{00,0}\yt=\frac{4\alpha D}{\vF^2}\left[I_{00,0}^{(1)}+ \frac{1}{\vF y}I_{00,0}^{(2)}\right], \label{eq46} \end{equation} \noindent where \begin{eqnarray} && I_{00,0}^{(1)}=\int_1^{\infty}\!\!dt \left(e^{\frac{t\Delta-2\mu}{2k_BT}}+1\right)^{-1}, \label{eq47} \\ && I_{00,0}^{(2)}=\int_1^{f(y,D)}\!\!dt \left(e^{\frac{t\Delta-2\mu}{2k_BT}}+1\right)^{-1}\!\! 
\frac{D^2t^2-\vF^2y^2}{[\vF^2y^2-D^2(t^2-1)]^{1/2}} \nonumber \end{eqnarray} \noindent and the function $f(y,D)$ is defined as \begin{equation} f(y,D)=\sqrt{1+\frac{\vF^2y^2}{D^2}}. \label{eq48} \end{equation} For the thermal correction $\dT\tp_0$ from the second line in Eq.~(\ref{eq10}) one obtains \begin{equation} \dT\tp_{0}\yt=-\frac{4\alpha D^3y}{\vF}\int_1^{f(y,D)}\!\!dt \left(e^{\frac{t\Delta-2\mu}{2k_BT}}+1\right)^{-1}\! \frac{t^2-1}{[\vF^2y^2-D^2(t^2-1)]^{1/2}}. \label{eq49} \end{equation} Since we consider arbitrarily low $T$, we can use the condition $\Delta-2\mu\gg k_BT$. Under this condition the quantity $I_{00,0}^{(1)}$ in Eq.~(\ref{eq47}) takes an especially simple form \begin{equation} I_{00,0}^{(1)}\approx \frac{2k_BT}{\Delta}\,e^{-\frac{\Delta-2\mu}{2k_BT}}. \label{eq50} \end{equation} The quantity $I_{00,0}^{(2)}$ defined in Eq.~(\ref{eq47}) is calculated at low temperature in the Appendix. According to Eq.~(\ref{A3}), the asymptotic behavior of $I_{00,0}^{(2)}$ is given by \begin{equation} I_{00,0}^{(2)}\sim \frac{k_BT}{\vF}\frac{\Delta}{(\hbar \omega_c)^2} \deE. \label{eq51} \end{equation} Then, from Eqs.~(\ref{eq46}), (\ref{eq50}), and (\ref{eq51}) we can conclude that \begin{equation} \dT\tp_{00,0}\yt\sim \frac{\alpha k_BT}{\hbar \omega_c}\deE \left(C_1+\frac{C_2}{y}\right), \label{eq54} \end{equation} \noindent where $C_1\sim \vF^{-2}$ and $C_2\sim \vF^{-4}$ are constants. The integral with respect to $t$ in Eq.~(\ref{eq49}) for $\dT\tp_0$ can be estimated similarly to Eqs.~(\ref{A2}) and (\ref{A3}). For this purpose, using Eq.~(\ref{eq48}), we replace $t^2-1$ with the larger quantity $\vF^2 y^2/D^2$ and obtain \begin{equation} \dT\tp_{0}\yt\sim -\frac{\alpha k_BT}{\hbar \omega_c}C_3\deE, \label{eq55} \end{equation} \noindent where $C_3\sim \vF^{0}$.
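The low-temperature asymptote (\ref{eq50}) for $I_{00,0}^{(1)}$ can be verified numerically; the sketch below uses hypothetical parameter values with $\Delta>2\mu$ and $k_BT\ll\Delta-2\mu$ (energies in eV):

```python
# Numerical check of the low-temperature asymptote, Eq. (50), for the
# integral I^(1)_{00,0} of Eq. (47). Energies are in eV; the parameter values
# are hypothetical, chosen so that Delta > 2*mu and k_B*T << Delta - 2*mu.
import math

def I1(kT, Delta, mu, n=50000):
    """Trapezoidal approximation of
    int_1^infty dt [exp((t*Delta - 2*mu)/(2*kT)) + 1]^(-1)."""
    t_max = 1.0 + 60.0 * kT / Delta   # integrand decays on the scale 2*kT/Delta
    h = (t_max - 1.0) / n
    s = 0.0
    for i in range(n + 1):
        t = 1.0 + i * h
        f = 1.0 / (math.exp((t * Delta - 2.0 * mu) / (2.0 * kT)) + 1.0)
        s += 0.5 * f if i in (0, n) else f
    return s * h

Delta, mu, kT = 0.1, 0.02, 0.002
asymptote = (2.0 * kT / Delta) * math.exp(-(Delta - 2.0 * mu) / (2.0 * kT))
ratio = I1(kT, Delta, mu) / asymptote
```

The corrections to the asymptote are exponentially small in $(\Delta-2\mu)/(2k_BT)$, so the ratio is unity to very high accuracy already for these parameters.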
Substituting Eqs.~(\ref{eq6}), (\ref{eq42}), (\ref{eq54}), and (\ref{eq55}) in Eq.~(\ref{eq45}), one finds \begin{eqnarray} &&\dT\rM\oyt=\frac{\dT\tp_{00,0}\yt}{2y\left(\alpha y\frac{4}{3D}+1\right)^2} \approx \frac{\dT\tp_{00,0}\yt}{2y} \sim\frac{\alpha k_BT}{\hbar \omega_c}\deE \left(\frac{C_1}{y}+\frac{C_2}{y^2}\right), \nonumber \\ &&\dT\rE\oyt=-\frac{\dT\tp_{0}\yt}{2y^3\left(\alpha \vF^2\frac{4y}{3D}+1\right)^2} \approx -\frac{\dT\tp_{0}\yt}{2y^3} \sim\frac{\alpha k_BT}{\hbar \omega_c y^3}C_3\deE. \label{eq56} \end{eqnarray} {}From these equations, we obtain \begin{equation} \dT\rE\oyt\sim\vF^4\dT\rM\oyt, \label{eq57} \end{equation} \noindent i.e., similar to Eq.~(\ref{eq44}), the thermal correction to the TE reflection coefficient at the zero Matsubara frequency is negligibly small compared to the TM one. Now we substitute the first lines of Eqs.~(\ref{eq43}) and (\ref{eq56}) in the term of Eq.~(\ref{eq31}) with $l=0$ and obtain \begin{equation} \dbT\cF\approx\dbT\ocF_{\rm TM}(a,T)\sim -\frac{\alpha^2(k_BT)^2}{a^2\Delta} \deE\int_0^{\infty}\!\!\!dy\,e^{-y} \frac{C_1y+C_2}{1-\left(\frac{4\alpha y}{3D}\right)^2e^{-y}}. \label{eq58} \end{equation} Taking into consideration that the integral in this equation converges, the final result is \begin{equation} \dbT\cF\sim -\frac{\alpha^2(k_BT)^2}{a^2\Delta}\deE. \label{eq59} \end{equation} We now pass to a consideration of the correction $\dcT\ocF$, which is equal to the sum of all terms with $l\geqslant 1$ in Eq.~(\ref{eq31}).
In this case, from Eq.~(\ref{eq33}) with $x=\zeta_l$, using an approximate equality \begin{equation} \Psi\left(\frac{D}{p_l}\right) \approx\frac{8}{3}\,\frac{p_l}{D} \label{eq60} \end{equation} \noindent similar to Eq.~(\ref{eq42}), we find \begin{eqnarray} && \rM\ozy\approx \frac{\alpha y}{\alpha y+\frac{3}{4}D} \approx \frac{4\alpha y}{{3}D}, \label{eq61}\\ && \rE\ozy\approx -\frac{\alpha p_l^2}{\alpha p_l^2+\frac{3}{4}Dy} \approx -\frac{4\alpha p_l^2}{{3}Dy}\approx -\frac{4\alpha \vF^2 y}{{3}D}. \nonumber \end{eqnarray} \noindent Here we have used that $D\gg\alpha y$ for $y\sim 1$, which gives the dominant contribution to Eq.~(\ref{eq31}), and have set $p_l\approx\vF y$ at $\tau\to 0$. From Eq.~(\ref{eq61}) it is seen that, similar to Eq.~(\ref{eq44}), the relationship \begin{equation} \rE(\ri\zeta_l,y,0)\approx -\vF^2 \rM(\ri\zeta_l,y,0) \label{eq62} \end{equation} \noindent holds at any $\zeta_l$. Using Eq.~(\ref{eq26}), in the same approximation as in Eq.~(\ref{eq56}) one obtains \begin{eqnarray} &&\dT\rM\zy\approx \frac{y\dT\tp_{00,l}\yt}{2(y^2-\zeta_l^2)}, \nonumber \\ &&\dT\rE\zy \approx -\frac{\dT\tp_{l}\yt}{2y(y^2-\zeta_l^2)}. \label{eq63} \end{eqnarray} {}From Eqs.~(\ref{eq10}), (\ref{eq12}) and (\ref{eq14}) one can make sure that \begin{equation} \left.\dT\tp_{00,l}\yt\right|_{y=\zeta_l}= \left.\dT\tp_{l}\yt\right|_{y=\zeta_l}=0. \label{eq64} \end{equation} \noindent Because of this, the integrals with respect to $y$ in Eq.~(\ref{eq31}) are convergent at the lower integration limit for all $l\geqslant 1$. Since the dominant contribution in Eq.~(\ref{eq31}) is given by $y\sim 1$, in the limiting case $\tau\to 0$ one can expand the integrand in a Taylor series in powers of $\zeta_l=\tau l$. For an order-of-magnitude estimation of the asymptotic behavior at $T\to 0$, it suffices to consider the lowest expansion order.
In this way, from Eqs.~(\ref{eq31}), (\ref{eq54}) and (\ref{eq63}) we find \begin{widetext} \begin{eqnarray} && \dcT\ocF_{\rm TM}(a,T)\sim -\frac{k_BT}{a^2}\sum_{l=1}^{\infty} \int_{\zeta_l}^{\infty}\!\!\!\!ydye^{-y} \frac{\rM(0,y,0)}{1-r_{\rm TM}^2(0,y,0)e^{-y}}\frac{\dT\tp_{00,0}\yt}{y} \nonumber \\ &&~~ \sim -\frac{\alpha(k_BT)^2}{\hbar c a}\deE\sum_{l=1}^{\infty} \int_{\zeta_l}^{\infty}\!\!\!\!dye^{-y} \frac{\rM(0,y,0)}{1-r^2_{\rm TM}(0,y,0)e^{-y}}\left(C_1+\frac{C_2}{y}\right). \label{eq65} \end{eqnarray} \end{widetext} \noindent By introducing the variable $v=y/\zeta_l$ and using Eq.~(\ref{eq61}), it is seen that in the asymptotic limit $\tau\to 0$ the denominator in Eq.~(\ref{eq65}) can be replaced with unity and, thus, \begin{widetext} \begin{eqnarray} && \dcT\ocF_{\rm TM}(a,T)\sim -\frac{\alpha^2(k_BT)^2}{\hbar c a}\deE \sum_{l=1}^{\infty}\zeta_l^2 \int_{\zeta_l}^{\infty}\!\!\!\!vdve^{-\zeta_lv} \left(C_1+\frac{C_2}{\zeta_lv}\right) \label{eq66} \\ &&~~ = -\frac{\alpha^2(k_BT)^2}{\hbar c a}\deE\sum_{l=1}^{\infty} \left[C_1(1+\zeta_l)+C_2\right]e^{-\zeta_l} \sim -\frac{\alpha^2(k_BT)^2}{\hbar c a}\deE\frac{1}{\tau} \sim -\frac{\alpha^2k_BT}{a^2}\deE\, . \nonumber \end{eqnarray} \end{widetext} A similar estimation shows that the contribution of the TE mode to Eq.~(\ref{eq31}) is again negligibly small \begin{equation} \dcT\ocF_{\rm TE}(a,T)\sim \vF^2\dcT\ocF_{\rm TM}(a,T). \label{eq67} \end{equation} \noindent Because of this, the result is \begin{equation} \dcT\cF\sim \dcT\ocF_{\rm TM}(a,T)\sim -\frac{\alpha^2k_BT}{a^2}\deE. \label{eq68} \end{equation} Comparing Eqs.~(\ref{eq59}) and (\ref{eq68}), we notice that a summation over the nonzero Matsubara frequencies decreases by one the power of temperature in front of the main exponential factor. Note also that Eqs.~(\ref{eq39a}), (\ref{eq59}), and (\ref{eq68}) are obtained under the condition $\Delta>\hbar\omega_c$ and, thus, one cannot set $\Delta=0$ in them.
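The $1/\tau$ scaling of the Matsubara sum in Eq.~(\ref{eq66}) can be confirmed directly. The sketch below evaluates $\sum_{l\geqslant 1}\left[C_1(1+\zeta_l)+C_2\right]e^{-\zeta_l}$ with $\zeta_l=\tau l$ for purely illustrative order-one constants $C_1$, $C_2$ (their actual magnitudes scale as $\vF^{-2}$ and $\vF^{-4}$); for these values, $\tau$ times the sum tends to $2C_1+C_2$ as $\tau\to 0$.

```python
import math

def matsubara_sum(tau, C1=1.0, C2=1.0):
    """S(tau) = sum_{l>=1} [C1*(1 + tau*l) + C2] * exp(-tau*l),
    the sum appearing in the estimate of Eq. (66) with zeta_l = tau*l."""
    total, l = 0.0, 1
    while True:
        term = (C1 * (1.0 + tau * l) + C2) * math.exp(-tau * l)
        total += term
        if term < 1e-16 * max(total, 1.0):
            break
        l += 1
    return total

for tau in (0.1, 0.01, 0.001):
    # tau * S(tau) approaches 2*C1 + C2 = 3 as tau -> 0, i.e. S ~ 1/tau
    print(tau, tau * matsubara_sum(tau))
```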
These equations, however, are well applicable for graphene with $\mu=0$. Now we can find the dominant asymptotic behavior of the total thermal correction $\dT\ocF$ to the Casimir energy at zero temperature in the limit of low temperature. Taking into account that, in accordance with Eqs.~(\ref{eq21}) and (\ref{eq40}), $\dT\ocF$ is given by the sum of Eqs.~(\ref{eq39a}), (\ref{eq59}), and (\ref{eq68}), one concludes that under the condition $\Delta>2\mu$ its leading behavior is given by Eq.~(\ref{eq39a}), i.e., \begin{equation} \dT\cF\sim -\frac{\alpha^2(k_BT)^5}{\Delta^2(\hbar c)^2}, \label{eq69} \end{equation} \noindent and is determined by the TE contribution to the implicit temperature dependence. This result makes it possible to find the low-temperature behavior of the Casimir entropy per unit area of the graphene sheets defined as \begin{equation} S(a,T)=-\frac{\partial\cF}{\partial T}=-\frac{\partial\dT\cF}{\partial T}. \label{eq70} \end{equation} \noindent Using Eq.~(\ref{eq69}), one finds \begin{equation} S(a,T)\sim \frac{\alpha^2k_B^5T^4}{\Delta^2(\hbar c)^2}, \label{eq71} \end{equation} \noindent which vanishes with vanishing temperature in agreement with the third law of thermodynamics (the Nernst heat theorem) \cite{61,62}. This means that the Lifshitz theory using the response function of graphene with $\Delta>2\mu$ expressed in terms of the polarization tensor is thermodynamically consistent. To summarize the region of application of the obtained results, in this section we used the conditions \begin{equation} k_BT\ll\frac{\hbar v_F}{2a}\ll\frac{\hbar c}{2a}<\Delta, \quad k_BT\ll\Delta-2\mu \label{eq73a} \end{equation} \noindent and made the asymptotic expansions in three small parameters \begin{equation} \tau\equiv\frac{4\pi k_BTa}{\hbar c}\ll 1, \quad \frac{\hbar v_F}{2a\Delta}\ll 1, \quad e^{-\frac{\Delta-2\mu}{2k_BT}}\ll 1. \label{eq73b} \end{equation} \noindent The last parameter was used in finding the low-temperature behavior of $\deT{\cal F}$.
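The passage from Eq.~(\ref{eq69}) to Eq.~(\ref{eq71}) is just the thermodynamic relation (\ref{eq70}) applied to $\dT\cF\propto -T^5$; a minimal numerical sketch, with all dimensional prefactors set to unity:

```python
def entropy(F, T, h=1e-6):
    """S = -dF/dT by a central finite difference, as in Eq. (70)."""
    return -(F(T + h) - F(T - h)) / (2.0 * h)

# Leading low-temperature behavior of the thermal correction, Eq. (69),
# with the prefactor alpha^2 k_B^5 / (Delta^2 (hbar c)^2) set to 1.
F = lambda T: -T**5

S1, S2 = entropy(F, 0.2), entropy(F, 0.1)
print(S1 / S2)  # ratio 2^4 = 16, i.e. S ~ T^4, vanishing as T -> 0 (Nernst)
```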
It is possible, however, to dispense with this parameter (see the next section). \section{Low-temperature behavior of the Casimir free energy and entropy for graphene sheets with {\boldmath$\Delta=2\mu$}} As was stated in Sec.~II, Eqs.~(\ref{eq13}) and (\ref{eq14}) preserve their validity in the case $\Delta=2\mu$. Because of this, all the results for $\daT\ocF$ obtained in Sec.~III for the graphene sheets with $\Delta>2\mu$ remain valid in the case $\Delta=2\mu$. Specifically, the low-temperature behavior of $\daT\ocF$ is again determined by the TE mode and is given by Eq.~(\ref{eq39a}). The explicit temperature dependence, however, leads to radically different results. Although Eqs.~(\ref{eq40})--(\ref{eq49}) remain valid in the case $\Delta=2\mu$, the subsequent equations obtained under the condition $\Delta-2\mu\gg k_BT$ are not applicable. Thus, instead of Eq.~(\ref{eq50}), from the first line of Eq.~(\ref{eq47}) we obtain \begin{equation} I_{00,0}^{(1)}=\frac{2k_BT}{\Delta}\ln 2. \label{eq72} \end{equation} A more exact calculation of the integral $I_{00,0}^{(2)}$ defined in Eqs.~(\ref{eq47}) and (\ref{eq48}) in the case $\Delta=2\mu$ (see the Appendix), in accordance with Eq.~(\ref{A6}), results in \begin{equation} I_{00,0}^{(2)}\sim\frac{k_BT}{\vF}\frac{\Delta}{(\hbar\omega_c)^2}\ln 2. \label{eq73} \end{equation} As is seen from the comparison of Eqs.~(\ref{eq72}) and (\ref{eq73}) with Eqs.~(\ref{eq50}) and (\ref{eq51}), respectively, the values of $I_{00,0}^{(1)}$ and $I_{00,0}^{(2)}$ in the cases $\Delta>2\mu$ and $\Delta=2\mu$ differ only by the absence of the exponential factor and the appearance of the factor $\ln 2$ in the latter case. This allows one to conclude that, similar to the case $\Delta>2\mu$ considered in Sec.~IV, the dominant contribution to the thermal correction $\dbT\ocF$ is determined by the TM mode.
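The $\ln 2$ in Eq.~(\ref{eq72}) can be verified under the same illustrative assumption that $I_{00,0}^{(1)}$ is the Fermi-type integral $\int_1^{\infty}dt\,[e^{(t\Delta-2\mu)/(2k_BT)}+1]^{-1}$: at $\Delta=2\mu$ the exponent reduces to $\Delta(t-1)/(2k_BT)$, and the substitution $u=\Delta(t-1)/(2k_BT)$ gives $(2k_BT/\Delta)\int_0^{\infty}du\,(e^u+1)^{-1}=(2k_BT/\Delta)\ln 2$. A numerical confirmation:

```python
import math

def I1_at_delta_eq_2mu(delta, kT, n=200000):
    """int_1^inf dt / (exp(delta*(t-1)/(2*kT)) + 1): the Delta = 2*mu case,
    by the trapezoidal rule with an exponential-decay cutoff."""
    t_max = 1.0 + 100.0 * kT / delta
    h = (t_max - 1.0) / n
    total = 0.0
    for i in range(n + 1):
        t = 1.0 + i * h
        w = 0.5 if i in (0, n) else 1.0
        total += w / (math.exp(delta * (t - 1.0) / (2.0 * kT)) + 1.0)
    return total * h

delta, kT = 1.0, 0.01
numeric = I1_at_delta_eq_2mu(delta, kT)
exact = 2.0 * kT / delta * math.log(2.0)
print(numeric, exact)  # (2*kT/Delta)*ln 2, as in Eq. (72)
```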
An order-of-magnitude estimation of this contribution for the case $\Delta=2\mu$, in accordance with Eq.~(\ref{eq59}), is given by \begin{equation} \dbT\cF\sim -\frac{\alpha^2(k_BT)^2}{a^2\Delta}. \label{eq76} \end{equation} In a similar way, by repeating the derivation in Eqs.~(\ref{eq60})--(\ref{eq68}), one arrives at the conclusion that for $\Delta=2\mu$ the contribution $\dcT\ocF$ to the thermal correction at low temperature is estimated by Eq.~(\ref{eq68}) with the exponential factor replaced by unity \begin{equation} \dcT\cF\sim -\frac{\alpha^2k_BT}{a^2}. \label{eq77} \end{equation} {}From the comparison of Eq.~(\ref{eq39a}) for an implicit contribution to the thermal correction, which is valid also for the case $\Delta=2\mu$, with the explicit contributions (\ref{eq76}) and (\ref{eq77}), one concludes that in this case the low-temperature behavior of the total thermal correction is given by \begin{equation} \dT\cF\sim -\frac{\alpha^2k_BT}{a^2}, \label{eq78} \end{equation} \noindent which originates from the TM mode in the explicit temperature dependence. In the case $\Delta=2\mu$, Eqs.~(\ref{eq39a}) and (\ref{eq76})--(\ref{eq78}) are obtained under the first set of inequalities in Eq.~(\ref{eq73a}), i.e., without using the condition $k_BT\ll\Delta-2\mu$. They employ only the first two small parameters indicated in Eq.~(\ref{eq73b}) and are valid for graphene with $\Delta\neq 0$ and $\mu\neq 0$. The result (\ref{eq78}) leads to problems. The point is that, in accordance with Eq.~(\ref{eq70}), the respective Casimir entropy per unit area of the graphene sheets at low temperature behaves as \begin{equation} S(a,T)\sim \frac{\alpha^2k_B}{a^2}. \label{eq79} \end{equation} Thus, the Casimir entropy at zero temperature is a nonzero (positive) constant depending on the volume of the system, in violation of the Nernst heat theorem \cite{61,62}.
As discussed in Sec.~I, the same situation holds for metals with perfect crystal lattices described by the dielectric permittivity of the Drude model, which, as opposed to the polarization tensor of graphene, is not derived from the first principles of quantum field theory. It should be taken into consideration, however, that for a real graphene sheet the values of $\Delta$ and $\mu$ cannot be known precisely. Thus, from the practical standpoint, the equality $\Delta=2\mu$ can be considered as a singular point (see further discussion in Sec.~VII). What is really important is the behavior of the Casimir free energy and entropy at low temperature for graphene sheets with $\Delta<2\mu$. This question is answered in the next section. \section{Low-temperature behavior of the Casimir free energy and entropy for graphene sheets with {\boldmath$\Delta<2\mu$}} Here, we consider the last possibility, in which the chemical potential is relatively large, exceeding half of the energy gap. As in the two preceding sections, we begin with a consideration of the implicit contribution to the thermal correction given by Eq.~(\ref{eq29}), where the function $\Phi(x)$ is expressed via the reflection coefficients at zero temperature by Eq.~(\ref{eq28}). In order to find these reflection coefficients, we consider the polarization tensor (\ref{eq15}) and (\ref{eq16}) found in the case $\Delta<2\mu$, replace $\zeta_l$ with $x$ in Eqs.~(\ref{eq15}) and (\ref{eq16}) and expand the results up to the first power in $x$ under the condition $\sqrt{4\mu^2-\Delta^2}>\hbar\omega_c$, which is satisfied at not too small separations between the graphene sheets.
The result is \begin{equation} \tp_{00}(x,y,0,\Delta,\mu)=Q_0-Q_1\frac{x}{y}, \quad \tp(x,y,0,\Delta,\mu)=Q_2yx, \label{eq80} \end{equation} \noindent where the following notations are introduced \begin{equation} Q_0=\frac{4\alpha}{\vF^2}\,\frac{2\mu}{\hbar\omega_c}, \quad Q_1=\frac{16\alpha\mu^2}{\vF^3\hbar\omega_c\sqrt{4\mu^2-\Delta^2}}, \quad Q_2=\frac{4\alpha\sqrt{4\mu^2-\Delta^2}}{\vF\hbar\omega_c}. \label{eq81} \end{equation} \noindent It is easily seen that under these conditions $Q_0\gg 1$ holds. We consider first the TM contribution to the function $\Phi(x)$ in Eqs.~(\ref{eq28}) and (\ref{eq32}) and expand it up to the first power in small $x$ \begin{equation} \Phi_{\rm TM}(x)=\Phi_{\rm TM}(0)+x\Phi_{\rm TM}^{\prime}(0). \label{eq82} \end{equation} Substituting Eq.~(\ref{eq80}) in the first line of Eq.~(\ref{eq20}), where $\zeta$ is replaced with $x$, one obtains \begin{eqnarray} && \rM(x,y,0)=\frac{yQ_0-Q_1x}{ yQ_0-Q_1x+2(y^2-x^2)}, \nonumber \\ && \rM(0,y,0)=\frac{Q_0}{ Q_0+2y}. \label{eq83} \end{eqnarray} {}From Eq.~(\ref{eq28}) at $\lambda={\rm TM}$, using Eq.~(\ref{eq83}), it is easily seen that the quantity $\Phi_{\rm TM}(x)$ at $x=0$ is represented by a converging integral. Calculating the first derivative of $\Phi_{\rm TM}(x)$, one obtains \begin{equation} \Phi_{\rm TM}^{\prime}(x)=-x\ln(1-e^{-x}) -\int_x^{\infty}\!\!\!ydy\frac{2\rM(x,y,0)e^{-y}}{1-r_{\rm TM}^2(x,y,0)e^{-y}} \frac{\partial\rM(x,y,0)}{\partial x}. \label{eq84} \end{equation} By differentiating the first equality in Eq.~(\ref{eq83}), one finds \begin{equation} \left.\frac{\partial\rM(x,y,0)}{\partial x}\right|_{x=0}= -\frac{2Q_1}{(Q_0+2y)^2}. \label{eq85} \end{equation} \noindent Then, substituting Eq.~(\ref{eq85}) in Eq.~(\ref{eq84}), we have \begin{equation} \Phi_{\rm TM}^{\prime}(0)=4Q_1 \int_0^{\infty}\!\!\!dy\frac{y}{(Q_0+2y)^2} \frac{\rM(0,y,0)e^{-y}}{1-r_{\rm TM}^2(0,y,0)e^{-y}}.
\label{eq86} \end{equation} Taking into account that $Q_0\gg 1$ and that the main contribution to the integral is given by $y\sim 1$, one finds from the second equality in Eq.~(\ref{eq83}) that $\rM(0,y,0)\approx 1$. In such a manner, Eq.~(\ref{eq86}) reduces to \begin{equation} \Phi_{\rm TM}^{\prime}(0)\approx\frac{4Q_1}{Q_0^2} \int_0^{\infty}\!\!\frac{y\,dy}{e^y-1} =\frac{2\pi^2Q_1}{3Q_0^2}. \label{eq87} \end{equation} \noindent Substituting this equation in Eq.~(\ref{eq82}), one obtains \begin{equation} \Phi_{\rm TM}(\ri\tau t)-\Phi_{\rm TM}(-\ri\tau t)= \ri\frac{4\pi^2Q_1}{3Q_0^2}\tau T. \label{eq88} \end{equation} Now we consider the contribution of the TE mode in Eqs.~(\ref{eq28}) and (\ref{eq32}). In this case the reflection coefficient is obtained by substituting Eq.~(\ref{eq80}) in the second line of Eq.~(\ref{eq20}) \begin{equation} \rE(x,y,0)=-\frac{Q_2x}{Q_2x+2(y^2-x^2)}. \label{eq89} \end{equation} \noindent As is seen from this equation, $\rE(x,y,0)$ goes to zero with vanishing $x$. Using the first expansion term in powers of $\rE(x,y,0)$ in Eq.~(\ref{eq28}), we find \begin{equation} \Phi_{\rm TE}(x)\approx -\int_x^{\infty}\!\!\!ydyr_{\rm TE}^2(x,y,0)e^{-y}. \label{eq90} \end{equation} \noindent Substituting here Eq.~(\ref{eq89}), one obtains \begin{eqnarray} && \Phi_{\rm TE}(x)\approx -Q_2^2x^2\int_x^{\infty}\!\!\!dy \frac{y\,e^{-y}}{\left[Q_2x+2(y^2-x^2)\right]^2} \nonumber \\ && \approx -\frac{Q_2^2x^2}{4}\int_x^{\infty}\!\!\!dy \frac{e^{-y}}{y^3}= \frac{Q_2^2x^2}{8}\left[{\rm Ei}(-x)-\frac{e^{-x}(1-x)}{x^2}\right] \nonumber \\ &&~~ \approx -\frac{1}{8}Q_2^2\left[1-2x+x^2\ln x+O(x^2)\right]. \label{eq91} \end{eqnarray} \noindent {}From this equation, the difference of interest is given by \begin{equation} \Phi_{\rm TE}(\ri\tau t)-\Phi_{\rm TE}(-\ri\tau t)= \ri\frac{Q_2^2}{2}\tau t.
\label{eq92} \end{equation} Comparing the difference in Eq.~(\ref{eq88}) with that in Eq.~(\ref{eq92}), one finds that the latter is larger than the former by the factor \begin{equation} \frac{3Q_0^2Q_2^2}{8\pi^2Q_1}=\frac{24}{\pi^2} \left(\frac{\alpha\sqrt{4\mu^2-\Delta^2}}{\vF\hbar\omega_c}\right)^3\gg 1. \label{eq93} \end{equation} \noindent Thus, one can approximately put \begin{equation} \Phi(\ri\tau t)-\Phi(-\ri\tau t)\approx \Phi_{\rm TE}(\ri\tau t)-\Phi_{\rm TE}(-\ri\tau t). \label{eq94} \end{equation} Finally, substituting Eqs.~(\ref{eq92}) and (\ref{eq94}) in Eq.~(\ref{eq29}), one arrives at the result \begin{equation} \daT\cF\approx-\frac{k_BT}{16\pi a^2}Q_2^2\tau \int_0^{\infty}\!\!\frac{t\,dt}{e^{2\pi t}-1} =-\frac{4\alpha^2a(k_BT)^2(4\mu^2-\Delta^2)}{3\vF^2(\hbar c)^3}. \label{eq95} \end{equation} \noindent This result is obtained under the condition $\sqrt{4\mu^2-\Delta^2}>\hbar\omega_c$ and, thus, $\mu\neq 0$. However, $\Delta=0$ is allowed. Now we consider the explicit contributions to the thermal correction in the case $\Delta<2\mu$, starting with $\dbT\ocF$. We again use the condition $\sqrt{4\mu^2-\Delta^2}>\hbar\omega_c$. Under this condition, in accordance with Eq.~(\ref{eq80}), $\tp_{00,0}\yo=Q_0\neq 0$ and the reflection coefficient $\rM(0,y,0)$ is given by the second expression in Eq.~(\ref{eq83}) and, thus, is not equal to zero. Because of this, for calculating the TM contribution to $\dbT\ocF$ one can use the term with $l=0$ in Eq.~(\ref{eq31}). The TE contribution to $\dbT\ocF$ is a different matter. Here, in accordance with the second formula in Eq.~(\ref{eq80}), $\tp_0\yo= 0$ and, due to Eq.~(\ref{eq89}), $\rE(0,y,0)=0$. Because of this, Eq.~(\ref{eq31}) is not applicable in this case and one should calculate $\dbT\ocF_{\rm TE}$ using its definition as the term with $l=0$ in Eq.~(\ref{eq23}).
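Two standard Bose-type integrals fix the numerical constants in the derivation above: $\int_0^{\infty}y\,dy/(e^y-1)=\pi^2/6$ in Eq.~(\ref{eq87}), and its rescaled version $\int_0^{\infty}t\,dt/(e^{2\pi t}-1)=1/24$ in Eq.~(\ref{eq95}). A quick numerical confirmation:

```python
import math

def bose_integral(scale, n=400000, y_max=80.0):
    """int_0^inf y dy / (exp(scale*y) - 1) by the midpoint rule;
    the integrand extends smoothly to y = 0 with the value 1/scale."""
    h = y_max / n
    total = 0.0
    for i in range(n):
        y = (i + 0.5) * h
        total += y / (math.exp(scale * y) - 1.0)
    return total * h

print(bose_integral(1.0), math.pi**2 / 6.0)        # constant in Eq. (87)
print(bose_integral(2.0 * math.pi), 1.0 / 24.0)    # constant in Eq. (95)
```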
Taking into account that due to the equality $\rE(0,y,0)=0$ one has $\rE(0,y,T)=\dT\rE(0,y,T)$, Eq.~(\ref{eq23}) leads to \begin{eqnarray} && \dbT\ocF_{\rm TE}(a,T)=\frac{k_BT}{16\pi a^2}\int_0^{\infty}\!\!\!\! y\,dy \ln\left\{1-\left[\dT\rE(0,y,T)\right]^2e^{-y}\right\} \nonumber \\ &&~~ \approx -\frac{k_BT}{16\pi a^2}\int_0^{\infty}\!\!\!\! ydy\left[\dT\rE(0,y,T)\right]^2e^{-y}, \label{eq96} \end{eqnarray} \noindent where the last transformation is valid at sufficiently low $T$. The thermal correction to the TM reflection coefficient in Eq.~(\ref{eq31}), in accordance with Eqs.~(\ref{eq45}) and (\ref{eq80}) taken at $x=0$, is given by \begin{equation} \dT\rM\oyt=\frac{2y\dT\tp_{00,0}\yt}{(Q_0+2y)^2}. \label{eq97} \end{equation} For obtaining $\dT\rE$, Eq.~(\ref{eq45}) is not applicable, so it is found using Eq.~(\ref{eq4}) taken at $l=0$, taking into account the equalities $\tp_0=\dT\tp_0$ and $\rE\oyt=\dT\rE\oyt$ \begin{equation} \dT\rE\oyt=-\frac{\dT\tp_{0}\yt}{\dT\tp_{0}\yt+2y^3} \approx -\frac{\dT\tp_{0}\yt}{2y^3}. \label{eq98} \end{equation} \noindent In the last transformation we have taken into account that the dominant contribution to Eq.~(\ref{eq96}) is given by $y\sim 1$ and that $\dT\tp_0$ goes to zero with vanishing $T$. In the case $\Delta<2\mu$ now under consideration, the quantities $\dT\tp_{00,0}$ and $\dT\tp_0$, entering Eqs.~(\ref{eq97}) and (\ref{eq98}), can be found from Eqs.~(\ref{eq10}) and (\ref{eq16a}) \begin{widetext} \begin{eqnarray} && \dT\tp_{00,0}\yt=\frac{4\alpha D}{\vF^2}\left[ \int_1^{\infty}\!\!dt\left(e^{\frac{t\Delta-2\mu}{2k_BT}}+1\right)^{-1} \!\!X_{00,0}(t,y,D) -\int_1^{2\mu/\Delta}\!\!\!\!dtX_{00,0}(t,y,D) \right], \nonumber \\[-1mm] &&\label{eq99}\\[-1mm] && \dT\tp_{0}\yt= \frac{4\alpha D}{\vF^2}\left[ \int_1^{\infty}\!\!dt\left(e^{\frac{t\Delta-2\mu}{2k_BT}}+1\right)^{-1} \!\!X_{0}(t,y,D) -\int_1^{2\mu/\Delta}\!\!\!\!\!dtX_{0}(t,y,D) \right].
\nonumber \end{eqnarray} \end{widetext} \noindent Here, similar to Eqs.~(\ref{eq46}) and (\ref{eq47}), we have omitted the first contribution to Eq.~(\ref{eq11}) leading to an additional exponentially small factor. The quantities $X_{00,0}$ and $X_0$ in Eq.~(\ref{eq99}) are defined by Eq.~(\ref{eq12}) where one should put $l=0$ \begin{eqnarray} && X_{00,0}(t,y,D)=1+\frac{1}{\vF y} {\rm Re} \frac{D^2t^2-\vF^2y^2}{\sqrt{\vF^2y^2-D^2t^2+D^2}}, \nonumber \\ && X_{0}(t,y,D)={\vF y}D^2 {\rm Re}\frac{t^2-1}{\sqrt{\vF^2y^2-D^2t^2+D^2}}. \label{eq100} \end{eqnarray} \noindent Note that here the real part is nonzero only for $t\leqslant f(y,D)$, where $f(y,D)$ is defined in Eq.~(\ref{eq48}). It is easily seen that $f(y,D)<2\mu/\Delta$ [the upper integration limit in the second contributions in Eq.~(\ref{eq99})] if $y$ satisfies the inequality \begin{equation} y<\frac{\sqrt{4\mu^2-\Delta^2}}{\vF\hbar\omega_c}. \label{eq101} \end{equation} Under the condition $\sqrt{4\mu^2-\Delta^2}>\hbar\omega_c$, accepted above, this inequality is satisfied with a large safety margin over the entire range of $y$ that gives the major contribution to Eqs.~(\ref{eq31}) and (\ref{eq96}). Because of this, the upper integration limits of the integrals with respect to $t$ in Eq.~(\ref{eq99}), containing the real parts indicated in Eq.~(\ref{eq100}), should be replaced with $f(y,D)$.
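The inequality (\ref{eq101}) is simply the condition $f(y,D)<2\mu/\Delta$ rewritten using $D=\Delta/(\hbar\omega_c)$: squaring Eq.~(\ref{eq48}) gives $1+\vF^2y^2/D^2<4\mu^2/\Delta^2$, i.e., $y<\sqrt{4\mu^2-\Delta^2}/(\vF\hbar\omega_c)$. A sanity check of this equivalence, in units with $\hbar\omega_c=1$ and purely illustrative parameter values:

```python
import math

def f(y, D, vF):
    """f(y, D) from Eq. (48)."""
    return math.sqrt(1.0 + (vF * y / D) ** 2)

# Equivalence around Eq. (101):
#   f(y, D) < 2*mu/Delta  <=>  y < sqrt(4*mu^2 - Delta^2) / (vF * hbar*omega_c),
# with D = Delta/(hbar*omega_c); units with hbar*omega_c = 1 are assumed.
vF, delta, mu = 0.005, 3.0, 2.0   # illustrative values with Delta < 2*mu
D = delta                          # since hbar*omega_c = 1
y_star = math.sqrt(4.0 * mu**2 - delta**2) / vF

for y in (0.5 * y_star, 0.99 * y_star, 1.01 * y_star, 2.0 * y_star):
    print(y < y_star, f(y, D, vF) < 2.0 * mu / delta)  # both sides agree
```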
Taking into account also that $D>1$, i.e., $D\gg\vF y$, and $t^2-1<\vF^2y^2/D^2$ over the entire domain of integration, from Eqs.~(\ref{eq99}) and (\ref{eq100}) in the asymptotic limit $k_BT\ll 2\mu-\Delta$ one obtains \begin{eqnarray} && \dT\tp_{00,0}\yt=\frac{4\alpha D}{\vF^2} \times\left[ \int_1^{\infty}\!\!dt\left(e^{\frac{t\Delta-2\mu}{2k_BT}}+1\right)^{-1} -\int_1^{2\mu/\Delta}\!\!\!dt\right] +\frac{4\alpha D^3}{\vF^3y}Y\yt, \nonumber \\ && \dT\tp_{0}\yt=\frac{4\alpha\vF y^3}{D}Y\yt, \label{eq102} \end{eqnarray} \noindent where the following notation is introduced \begin{equation} Y\yt\equiv\int_1^{f(y,D)}\!\!dt \left[\left(e^{\frac{t\Delta-2\mu}{2k_BT}}+1\right)^{-1}-1\right] \frac{1}{\left[\vF^2y^2-D^2(t^2-1)\right]^{1/2}}. \label{eq103} \end{equation} The first contribution to $\dT\tp_{00,0}$ in Eq.~(\ref{eq102}) is easily calculated \begin{eqnarray} &&~ \frac{4\alpha D}{\vF^2}\left[ \int_1^{\infty}\!\!dt\left(e^{\frac{t\Delta-2\mu}{2k_BT}}+1\right)^{-1} -\int_1^{2\mu/\Delta}\!\!\!dt\right] =\frac{8\alpha}{\vF^2\hbar\omega_c}\left[k_BT \ln\frac{\left(1+e^{\frac{\Delta-2\mu}{2k_BT}}\right) \left(1+e^{\frac{\mu}{k_BT}}\right)}{1+e^{-\frac{\mu}{k_BT}}} -\mu\right] \nonumber \\ &&~~ \approx\frac{8\alpha}{\vF^2\hbar\omega_c}\meE. \label{eq104} \end{eqnarray} The low-temperature behavior of the integral $Y$ defined in Eq.~(\ref{eq103}) is found in the Appendix. According to Eq.~(\ref{A9}) one has \begin{equation} Y\yt\approx -\frac{\vF y}{D^2}\,\meE. \label{eq105} \end{equation} Substituting Eqs.~(\ref{eq104}) and (\ref{eq105}) in Eq.~(\ref{eq102}), one obtains \begin{eqnarray} && \dT\tp_{00,0}\yt\approx\frac{4\alpha}{\vF^2} \left(\frac{2k_BT}{\hbar c}-D\right)\meE \sim -\frac{\alpha\Delta}{\hbar\omega_c}\meE, \nonumber \\ && \dT\tp_{0}\yt\approx -\frac{4\alpha\vF^2y^4}{D^3}\meE. 
\label{eq108} \end{eqnarray} \noindent We note that according to Eq.~(\ref{eq96}) $\dbT\ocF_{\rm TE}$ is of the order of $(\dT\rE)^2$, i.e., $\sim(\dT\tp_0)^2\sim \exp[-2(2\mu-\Delta)/(2k_BT)]$ and, thus, contains an additional exponentially small factor. Because of this, we have \begin{equation} \dbT\cF\approx\dbT\ocF_{\rm TM}(a,T). \label{eq109} \end{equation} Substituting Eqs.~(\ref{eq83}), (\ref{eq97}), and the first equality in Eq.~(\ref{eq108}) in the TM term of Eq.~(\ref{eq31}) with $l=0$, one finally finds \begin{equation} \dbT\ocF_{\rm TM}(a,T)\sim\frac{k_BTQ_0\alpha\Delta}{a^2\hbar\omega_c} \,\meE \int_0^{\infty}\!\!\!\!y^2dy\frac{e^{-y}}{(Q_0+2y)^3-Q_0(Q_0+2y)e^{-y}}. \label{eq110} \end{equation} \noindent Taking into account Eq.~(\ref{eq109}), the convergence of the integral, which is of the order of $Q_0^{-3}$, and substituting the definition of $Q_0$ given in Eq.~(\ref{eq81}), the low-temperature behavior of $\dbT\ocF$ is, up to an order of magnitude, given by \begin{equation} \dbT\cF\sim \frac{k_BT\hbar c\Delta}{\alpha a^3\mu^2}\,\meE. \label{eq111} \end{equation} We recall that this asymptotic behavior is derived under the conditions $D>1$, i.e., $\Delta>\hbar\omega_c$, and $\sqrt{4\mu^2-\Delta^2}>\hbar\omega_c$, which are satisfied at sufficiently large separations between graphene sheets with nonzero $\Delta$ and $\mu$. It only remains to find the low-temperature behavior of the last contribution to the thermal correction $\dcT\ocF$. We note that for $l\geqslant 1$ both the quantities $\tp_{00,l}\yo\neq 0$ and $\tp_{l}\yo\neq 0$, so that $\dcT\ocF$ is given by the sum of all terms with $l\geqslant 1$ in Eq.~(\ref{eq31}). In doing so, it will suffice to preserve the dependence on $\tau$ ($\zeta_l=\tau l$) only in the lower integration limits of all integrals in Eq.~(\ref{eq31}) and substitute the integrands in the lowest perturbation order in $\tau$.
For the TM mode, this means that one should use in Eq.~(\ref{eq31}) the second line in Eq.~(\ref{eq83}), Eq.~(\ref{eq97}), and the first line in Eq.~(\ref{eq108}). For the TE mode, according to Eq.~(\ref{eq89}), $\rE(0,y,0)=0$. Because of this, $\rE\ozy$ should be taken in the first perturbation order in $\tau$ as given by Eq.~(\ref{eq89}), whereas the thermal correction to the TE reflection coefficient is given by Eq.~(\ref{eq98}) and by the second line in Eq.~(\ref{eq108}). As a result, for the contribution of the TM mode one obtains \begin{equation} \dcT\ocF_{\rm TM}(a,T)\sim \frac{k_BT\hbar c\Delta}{\alpha\mu^2a^3}\meE \sum_{l=1}^{\infty}\int_{\zeta_l}^{\infty}\!\!\frac{y^2dy}{e^{y}-1}, \label{eq112} \end{equation} \noindent where we have used that $y$ giving the major contribution to the integral satisfies the condition $y\ll Q_0$. For the sum of integrals in Eq.~(\ref{eq112}) we have \begin{eqnarray} && \sum_{l=1}^{\infty}\int_{\zeta_l}^{\infty}\!\!\frac{y^2dy}{e^{y}-1} =\sum_{l=1}^{\infty}\sum_{n=1}^{\infty}\frac{1}{n^3}\int_{n\zeta_l}^{\infty}\!\!\!\! dxx^2e^{-x} =\sum_{n=1}^{\infty}\left[\frac{2}{n^3}\frac{1}{e^{\tau n}-1}+ \frac{2\tau}{n^2}\frac{e^{\tau n}}{(e^{\tau n}-1)^2}+ \frac{\tau^2}{n}\frac{e^{\tau n}(1+e^{\tau n})}{(e^{\tau n}-1)^3}\right] \nonumber \\ && \sim \frac{1}{\tau}\sum_{n=1}^{\infty}\left[\frac{2}{n^4}+\frac{2}{n^4} +\frac{1}{n^4}\right] \sim \frac{1}{\tau}. \label{eq113} \end{eqnarray} Substituting this into Eq.~(\ref{eq112}), we arrive at \begin{equation} \dcT\ocF_{\rm TM}(a,T)\sim \frac{(\hbar c)^2\Delta}{\alpha\mu^2a^4}\meE . \label{eq114} \end{equation} The contribution of the TE mode is obtained by substituting Eqs.~(\ref{eq89}), (\ref{eq98}), and (\ref{eq108}) in Eq.~(\ref{eq31}) at low $T$ \begin{equation} \dcT\ocF_{\rm TE}(a,T)\sim \frac{\alpha k_BT}{a^2} \frac{Q_2}{D^3}\meE\tau \sum_{l=1}^{\infty}l\int_{\zeta_l}^{\infty}\!\!dy e^{-y} \sim \frac{\alpha^2\sqrt{4\mu^2-\Delta^2}(\hbar c)^3}{\Delta^3a^5}\,\meE.
\label{eq115} \end{equation} It is easily seen that the quantities in Eqs.~(\ref{eq114}) and (\ref{eq115}) can be of the same order of magnitude. Thus, for the total contribution $\dcT\ocF$ we obtain \begin{equation} \dcT\cF\sim \frac{(\hbar c)^2}{a^4}\left(\frac{\Delta}{\alpha\mu^2} +\frac{\alpha^2\hbar c\sqrt{4\mu^2-\Delta^2}}{a\Delta^3} \right)\meE . \label{eq116} \end{equation} \noindent This result is derived for $\mu>0$ and $\Delta>0$. {}From Eqs.~(\ref{eq95}), (\ref{eq111}), and (\ref{eq116}), one concludes that the main term in the low-temperature behavior of the Casimir free energy for graphene with $\Delta<2\mu$ is determined by the TE mode in the implicit contribution given by Eq.~(\ref{eq95}). Substituting Eq.~(\ref{eq95}) in Eq.~(\ref{eq70}) one arrives at the Casimir entropy at low temperature \begin{equation} S(a,T)\sim \frac{\alpha^2a(4\mu^2-\Delta^2)k_B^2T}{(\hbar c)^3}. \label{eq117} \end{equation} \noindent In the limit of vanishing temperature, the Casimir entropy (\ref{eq117}) goes to zero in agreement with the Nernst heat theorem. The results of this section were derived under the conditions \begin{equation} k_BT\ll\frac{\hbar v_F}{2a}\ll\frac{\hbar c}{2a}<\Delta, \quad k_BT\ll 2\mu-\Delta. \label{eq115a} \end{equation} \noindent Thus, although the first two expansion parameters in Eq.~(\ref{eq73b}) remain the same, the third one is replaced with \begin{equation} e^{-\frac{2\mu-\Delta}{2k_BT}}\ll 1. \label{eq115b} \end{equation} \noindent One more condition used in the derivation of expressions (\ref{eq80}) for the polarization tensor is \begin{equation} \frac{\hbar c}{2a}<\sqrt{4\mu^2-\Delta^2}. \label{eq115c} \end{equation} \noindent These application conditions are discussed in Sec.~VII. \section{CONCLUSIONS AND DISCUSSION} In this paper, we have found the low-temperature behavior of the Casimir free energy and entropy of two real graphene sheets possessing the nonzero energy gap and chemical potential. 
This problem is solved analytically in the framework of the Dirac model. The response of graphene to the electromagnetic field is described on the basis of first principles of thermal quantum field theory by means of the polarization tensor in (2+1)-dimensional space-time. The thermal correction to the Casimir energy of two parallel graphene sheets at zero temperature is presented as a sum of two contributions. The first of them, called implicit, contains the polarization tensor at zero temperature, and the dependence of this contribution on temperature is determined by a summation over the Matsubara frequencies. The temperature dependence of the second contribution, called explicit, is determined by the thermal correction to the polarization tensor. The low-temperature behaviors of both contributions were found for different relationships between the energy gap and chemical potential of graphene sheets, i.e., for $\Delta>2\mu$, $\Delta=2\mu$, and $\Delta<2\mu$, and turned out to be essentially different. According to the results of Sec.~IV, which we repeat here, presenting only the dimensional quantities, the low-temperature behavior of the Casimir free energy and entropy for graphene sheets with $\Delta>2\mu$ is eventually determined by the TE mode in an implicit contribution to the thermal correction \begin{equation} \dT\cF\sim -\frac{(k_BT)^5}{(\hbar c)^2\Delta^2},\quad S(a,T)\sim\frac{k_B^5T^4}{(\hbar c)^2\Delta^2}, \label{eq118} \end{equation} \noindent and it does not depend on the chemical potential.
In Sec.~V it is shown that for graphene sheets with $\Delta=2\mu$ the eventual low-temperature behavior of the Casimir free energy and entropy is determined by the TM mode in an explicit contribution to the thermal correction \begin{equation} \dT\cF\sim -\frac{k_BT}{a^2}, \quad S(a,T)\sim \frac{k_B}{a^2}. \label{eq119} \end{equation} \noindent Finally, as shown in Sec.~VI, for the case $\Delta<2\mu$ the low-temperature behavior of the Casimir free energy and entropy is governed by the TE mode in an implicit contribution to the thermal correction given by \begin{equation} \dT\cF\sim-\frac{a(4\mu^2-\Delta^2)(k_BT)^2}{(\hbar c)^3}, \quad S(a,T)\sim\frac{a(4\mu^2-\Delta^2)k_B^2T}{(\hbar c)^3}. \label{eq120} \end{equation} \noindent It is interesting to compare these results with the case of pristine graphene with $\Delta=\mu=0$ where \cite{53} \begin{equation} \dT\cF\sim \frac{(k_BT)^3}{(\hbar c)^2}\ln\frac{ak_BT}{\hbar c}, \quad S(a,T)\sim-k_B\frac{(k_BT)^2}{(\hbar c)^2}\ln\frac{ak_BT}{\hbar c}. \label{eq121} \end{equation} \noindent As is seen from the comparison of Eqs.~(\ref{eq118})--(\ref{eq120}) with Eq.~(\ref{eq121}), for real graphene sheets there is a nontrivial interplay between the values of $\Delta$ and $\mu$, which leads to different behaviors of the Casimir energy and entropy with vanishing temperature, especially in the case $\Delta<2\mu$ where the polarization tensor at $T=0$ depends on $\mu$. {}From Eqs.~(\ref{eq118}) and (\ref{eq120}) one concludes that the Casimir entropy is positive and vanishes with vanishing temperature, i.e., for graphene with $\Delta>2\mu$ and $\Delta<2\mu$ the Nernst heat theorem is satisfied and, thus, the Lifshitz theory of the Casimir interaction is consistent with the requirements of thermodynamics (the same holds for pristine graphene).
According to Eq.~(\ref{eq119}), this is, however, not so for graphene with $\Delta=2\mu\neq 0$, where the Casimir entropy at zero temperature is nonzero and its value depends on a parameter of the system (the volume). As discussed in Sec.~V, however, this anomaly is not relevant to any physical situation because for real graphene samples the exact equality $\Delta=2\mu$ is not realizable. We note that the real part of the electrical conductivity of graphene as a function of frequency also experiences a qualitative change when the energy gap $\Delta$ decreases from $\Delta>2\mu$ to $\Delta<2\mu$ \cite{37}. It should be noted that the asymptotic expressions (\ref{eq118}) and (\ref{eq120}) are not applicable to graphene sheets with too small values of $\Delta-2\mu$ and $2\mu-\Delta$, respectively. The point is that if the values of $\Delta$ and $2\mu$ are too close to each other, the exponentially small parameters in Eqs.~(\ref{eq73b}) and (\ref{eq115b}) lose their meaning and cannot be used. Taking into account that the polarization tensor is a continuous function of $\Delta$ at the point $\Delta=2\mu$, the possibility exists that an apparent discontinuity of the obtained asymptotic formulas at $\Delta=2\mu$ may be an artifact of the expansion in small parameters in the crossover region. For a comprehensive resolution of this question, it would be desirable to find more exact asymptotic expressions applicable for values of $2\mu$ arbitrarily close to $\Delta$ from the left and from the right. In the future it would also be interesting to investigate the case of two dissimilar graphene sheets with different values of the energy gap and chemical potential. Also of interest is the configuration of a graphene sheet interacting with an ideal metal plane (it is known that for two ideal metal planes the Casimir entropy satisfies the Nernst heat theorem \cite{88}) or with a plate made of conventional metallic or dielectric materials.
As noted in Sec.~I, the theoretical description based on the Lifshitz theory with the polarization tensor of graphene \cite{52} is in good agreement with the experiment measuring the Casimir interaction in a graphene system \cite{51}. Taking into consideration that the polarization tensor of graphene results in two spatially nonlocal, complex dielectric permittivities (the longitudinal one and the transverse one \cite{27}), it may be suggested that a more fundamental theoretical description of the dielectric response of metals admits a similar approach. Applied to metals, nonlocal dielectric permittivities of this kind could lead to almost the same results as the dissipative Drude model for the propagating waves on the mass shell, but deviate from them significantly for the evanescent fields off the mass shell (in contrast to the nonlocal dielectric functions describing the anomalous skin effect \cite{63}). In this manner, graphene might point the way toward a resolution of the Casimir puzzle, which has remained unresolved for 20 years. \section*{Acknowledgments} The work of G.L.K. and V.M.M. was partially supported by the Peter the Great Saint Petersburg Polytechnic University in the framework of the Program ``5--100--2020''. The work of V.M.M. was partially funded by the Russian Foundation for Basic Research, Grant No. 19-02-00453 A. His work was also partially supported by the Russian Government Program of Competitive Growth of Kazan Federal University.
\section{Introduction} Summarization is the task of generating a shorter text that contains the key information from a source text, and the task is a good measure of natural language understanding and generation. Broadly, there are two approaches to summarization: extractive and abstractive. Extractive approaches simply select and rearrange sentences from the source text to form the summary. Many neural models have been proposed for extractive summarization over the past years [11, 18]. The current state-of-the-art model for the extractive approach fine-tunes a simple variant of the popular language model BERT [12] for the extractive summarization task [10]. On the other hand, abstractive approaches generate novel text and are able to paraphrase sentences while forming the summary. This is a hard task even for humans, and it is hard to evaluate due to the subjectivity of what is considered a "ground truth" summary. Recently, many neural abstractive summarization models have been proposed that use either LSTM-based sequence-to-sequence attentional models or the Transformer as their backbone architectures [1, 3, 6, 9]. These models also integrate various techniques into their backbone architectures, such as coverage, copy mechanisms, and content selector modules, in order to enhance their performance. There is also some recent work on abstractive summarization based on reinforcement learning techniques that optimize objectives in addition to the standard maximum likelihood loss [1, 2]. Although current neural abstractive summarization models can achieve high ROUGE scores on popular benchmarks and are able to produce fluent summaries, they have two main shortcomings: (i) they do not respect the facts that are either included in the source article or known to humans as commonsense knowledge; (ii) they do not produce coherent summaries when the source article is long.
In this work, we propose a novel architecture that extends the Transformer encoder-decoder architecture to address these challenges. First, we incorporate entity-level knowledge from the Wikidata knowledge graph into the encoder-decoder architecture. Injecting structural world knowledge from Wikidata helps our abstractive summarization model be more fact-aware. Second, we utilize the ideas from the Transformer-XL language model in our encoder-decoder architecture. This helps our model produce coherent summaries even when the source article is long. \section{Proposed Method} \label{headings} \subsection{Transformer vs. Transformer-XL} Recently, Transformer architectures have been immensely successful in various natural language processing applications, including neural machine translation, question answering, neural summarization, and pretrained language modeling. However, Transformers have a fixed-length context, which degrades performance when encoding long source texts. In addition, these fixed-length context segments do not respect sentence boundaries, resulting in context fragmentation, which is a problem even for short sequences. Recently, Transformer-XL has offered an effective solution to this long-range dependency problem in the context of language modeling. It introduces the notion of recurrence into a self-attention-based model by reusing hidden states from previous segments, and it introduces relative positional encoding to make the recurrence scheme possible. Transformer-XL achieves state-of-the-art perplexity, learns dependencies 450\% longer than vanilla Transformers, and is up to 1,800+ times faster than vanilla Transformers at inference time on language modeling tasks. Inspired by the strong performance of the Transformer-XL language model on modeling long-range dependencies, we extend Transformer-XL to an encoder-decoder architecture based on the Transformer architecture.
In other words, we calculate the attention scores at every multi-head attention layer in our architecture shown in Figure 1 based on the Transformer-XL attention decomposition. We compare the attention decompositions of the vanilla Transformer and Transformer-XL. The equations below show the attention computation between query $q_i$ and key vector $k_j$ within the same segment. The matrix $U$ contains the absolute positional encodings, $E$ is the token embedding matrix, and $W_q$ and $W_k$ represent the query and key matrices. In the Transformer-XL attention formulation, $R_{i-j}$ is the relative positional encoding matrix without trainable parameters, and $u$, $v$, $W_{k, R}$, $W_{k,E}$ are all trainable parameters. \[A^{vanilla}_{i,j} = E^{T}_{x_i}W^{T}_qW_kE_{x_j}+E^{T}_{x_i}W^{T}_qW_kU_{j}+U_i^{T}W^{T}_qW_kE_{x_j}+U_i^{T}W^{T}_qW_kU_j \] \[A^{xl}_{i,j} = E^{T}_{x_i}W^{T}_qW_{k,E}E_{x_j}+E^{T}_{x_i}W^{T}_qW_{k,R}R_{i-j}+u^{T}W_{k,E}E_{x_j}+v^{T}W_{k,R}R_{i-j} \] Overall, Transformer-XL's architecture is shown below for a segment $\tau$ in the $n$-th transformer layer. $SG$ denotes stop-gradient, and $\circ$ denotes concatenation. We refer the readers to the original Transformer-XL paper [8] for further discussion of the new parameterization of the attention calculations and more details on the design decisions for the architecture. \\ \[\tilde{h}^{n-1}_{\tau} = [SG(h^{n-1}_{\tau-1}) \circ h^{n-1}_{\tau}]\]\[q^n_{\tau}, k^n_{\tau}, v^n_{\tau} = h^{n-1}_{\tau}{W^n_q}^T, \tilde{h}^{n-1}_{\tau}{W^n_{k,E}}^T, \tilde{h}^{n-1}_{\tau}{W^n_v}^T\] \[A^n_{\tau, i, j} = {q^n_{\tau,i}}^Tk^n_{\tau,j}+{q^n_{\tau,i}}^TW^n_{k,R}R_{i-j}+u^Tk^n_{\tau,j}+v^{T}W^n_{k,R}R_{i-j}\] \\ It is important to note that, as in the vanilla Transformer, we still have the fully connected feed-forward network layers after the multi-head attention layers, as well as residual connections around the sublayers followed by layer normalizations. These layers are omitted in Figure 1 for simplicity.
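To make the decomposition concrete, the following is a minimal single-head NumPy sketch of the Transformer-XL attention score computation with a cached memory segment. The sinusoidal form chosen for $R_{i-j}$, the dimensions, and the variable names are illustrative assumptions, not the paper's exact implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
d, seg, mem = 8, 4, 4                   # head dim, segment length, cached memory length

E = rng.normal(size=(seg, d))           # token embeddings of the current segment
h_prev = rng.normal(size=(mem, d))      # hidden states cached from the previous segment (SG: treated as constants)
W_q, W_kE, W_kR = (rng.normal(size=(d, d)) for _ in range(3))
u, v = rng.normal(size=d), rng.normal(size=d)

h_tilde = np.concatenate([h_prev, E], axis=0)   # [SG(h_{tau-1}) o h_tau]
q = E @ W_q.T                                   # queries come from the current segment only
k = h_tilde @ W_kE.T                            # content keys span memory + current segment

def rel_pos(dist):
    """Sinusoidal relative encoding R_{i-j}: fixed, no trainable parameters."""
    freqs = 1.0 / (10000 ** (np.arange(0, d, 2) / d))
    ang = dist * freqs
    return np.concatenate([np.sin(ang), np.cos(ang)])

# A_{i,j} = q_i^T k_j + q_i^T W_kR R_{i-j} + u^T k_j + v^T W_kR R_{i-j}
# (in a causal decoder, positions j > i + mem would additionally be masked before softmax)
A = np.empty((seg, seg + mem))
for i in range(seg):
    for j in range(seg + mem):
        R = rel_pos((i + mem) - j)      # query i sits at absolute position mem + i
        A[i, j] = q[i] @ k[j] + q[i] @ (W_kR @ R) + u @ k[j] + v @ (W_kR @ R)
```

Each query attends over both the cached memory and the current segment, which is exactly what allows information to flow across segment boundaries.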
Empirically, we observe much more coherent summaries with the Transformer-XL encoder-decoder architecture compared to the Transformer baseline. Figure 3 shows a comparison for an input source article sampled from the CNN/Daily Mail dataset. \subsection{Wikidata Knowledge Graph Entity Embeddings} Wikidata is a free and open multi-relational knowledge graph that serves as the central storage for the structured data of its many services, including Wikipedia. We sample a subset of Wikidata that has 5 million entities and 25 million relationship triples. We learn entity embeddings for these sampled entities through the popular multi-relational data modeling method TransE [14]. TransE is a simple yet very powerful method that represents relationships between fact triples as translations operating in the low-dimensional entity embedding space. Specifically, we minimize a margin-based ranking criterion over the entity and relationship sets using the $L_2$ norm as the dissimilarity measure, $d$, as shown in the equation below. $S$ is the set of relationship triplets $(h, l, t)$, where $h$ and $t$ are entities in the set of entities $E$, and $l$ represents a relationship in the set of relationships $\mathcal{L}$. We construct the corrupted relationship triplets, which form the "negative" set for the margin-based objective, by replacing either the head or the tail of a relationship triple with a random entity. The low-dimensional entity and relationship embeddings are optimized through stochastic gradient descent with the constraint that the $L_2$-norms of the entity embeddings are 1 (on the unit sphere), which is important in order to obtain meaningful embeddings.
\[\mathcal{L} = \sum_{(h, l, t) \in S}\sum_{(h', l, t') \in S'_{(h, l, t)}}[\gamma + d(h+l,t) - d(h'+l,t')]_{+}\] where $[x]_{+}$ denotes the positive part of $x$, $\gamma > 0$ is a margin hyperparameter, and \[S'_{(h, l, t)} = \{(h', l , t) | h' \in E \} \cup \{(h, l , t') | t' \in E \}\] \subsection{Our Model Architecture} Our overall model architecture is shown in Figure 1. We extend the encoder-decoder architecture such that the entity information can be effectively incorporated into the model. On the encoder side, we have a separate attention channel for the entities in parallel to the attention channel for the tokens. These two channels are followed by multi-head token self-attention and multi-head cross token-entity attention. On the decoder side, we have multi-head masked token self-attention, multi-head masked entity self-attention, and multi-head cross attention between encoder and decoder, respectively. Finally, we have another layer of multi-head token attention followed by a feed-forward layer and a softmax to output the tokens. Multi-head attention is computed based on the Transformer-XL decomposition as in Section 2.1. Entity Linker modules use an off-the-shelf entity extractor and disambiguate the extracted entities to the Wikidata knowledge graph. Extracted entities are initialized with the pretrained Wikidata knowledge graph entity embeddings learned through TransE, as discussed in Section 2.2. Entity Conversion Learner modules use a series of feed-forward layers with ReLU activations. These modules learn to map entities into the same subspace as the corresponding tokens in the text. \begin{figure}[htbp] \centering \includegraphics[scale=0.3]{architecture} \caption{Our model architecture. PE stands for positional encoding. Single encoder and decoder layers are shown in parentheses.
In multi-layer architectures, these layers in curly brackets are stacked.} \end{figure} \section{Experiments} \subsection{Dataset} We evaluate our models on the benchmark dataset for summarization, CNN/Daily Mail. The dataset contains online news articles (781 words on average) paired with multi-sentence summaries (56 words on average). We use the standard splits that include 287,226 training pairs, 13,368 validation pairs, and 11,490 test pairs. We do not anonymize the entities; instead, we operate directly on the original text. We truncate the articles to 400 tokens, and the summaries to 100 tokens at training time and 120 tokens at test time. During preprocessing, we do not lowercase the text, to allow higher-quality entity extraction in our entity linking module. \subsection{Quantitative Results} We evaluate our model and the baseline with the ROUGE metric, which compares the generated summary to the human-written ground truth summary and counts the overlap of 1-grams (ROUGE-1), 2-grams (ROUGE-2), and the longest common subsequence (ROUGE-L). We use the pyrouge [17] package to obtain our scores and report the F1 scores for all ROUGE types. Our baseline is the vanilla Transformer encoder-decoder architecture that is commonly used as the backbone architecture in abstractive summarization models. For both the baseline and our proposed model, we use 2 transformer layers and 4 heads and utilize beam search for decoding. We use 300 dimensions for both entity and token embeddings, BERTAdam as the optimizer, and a minimum sentence generation length of 60. After hyperparameter search, we set the learning rate to 0.00005, the dropout rate to 0.3, the beam width to 5, and the maximum sentence length to 90 during inference. We start entity extraction at the decoder after it produces 20 tokens. Our results on the CNN/Daily Mail dataset are shown in Table 1. Our model improves over the Transformer baseline by +0.45 ROUGE-1 points and +0.56 ROUGE-L points on the full test set.
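For intuition about the metric, ROUGE-1 F1 can be sketched as a clipped unigram-overlap F1. This is a simplified illustration only; the actual pyrouge implementation additionally performs stemming and other preprocessing:

```python
from collections import Counter

def rouge1_f1(candidate: str, reference: str) -> float:
    """Unigram-overlap F1 between a candidate summary and a reference summary."""
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum((cand & ref).values())   # clipped unigram matches (multiset intersection)
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)
```

For example, `rouge1_f1("the cat sat", "the cat sat on the mat")` has precision 1 and recall 0.5, giving F1 = 2/3; this word-overlap definition is also why purely extractive outputs tend to score well.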
In fact, we see better improvements when we test our model on the higher entity density slice of the test set as demonstrated in Table 2. Specifically, our model improves over the baseline by +0.85 ROUGE-1 points and +1.08 ROUGE-L points on the test set article-summary pairs in which there are more than 50 entities in the source article. Also, we include results in Table 1 in which we initialized the entity embeddings randomly to test the benefit of using Wikidata KG entity embeddings. Using random entity embeddings decreased the model performance while using Wikidata KG entity embeddings increased the model performance both for vanilla Transformer and for Transformer-XL backbone architectures. This supports our hypothesis that injecting structural world knowledge from external knowledge bases to abstractive summarization models improves model performance. \begin{table}[!h] \caption{Results on CNN/Daily Mail dataset. R used as an abbreviation for ROUGE.} \label{results-table} \centering \begin{tabular}{llll} \toprule \cmidrule(r){1-2} Model & R-1 & R-2 & R-L \\ \midrule Transformer Baseline & 33.351& 12.473 & 30.663 \\ Transformer-Entity w/ Random Entity Emb & 33.047 & 11.536 & 30.487 \\ Transformer-Entity w/ Wikidata KG Emb & 33.741 & 12.171 & 31.076 \\ \textbf{Transformer-XL-Entity w/ Wikidata KG Emb (Our Model)} & \textbf{33.804} & \textbf{12.509} & \textbf{31.225}\\ \bottomrule \end{tabular} \end{table} \begin{table}[!h] \caption{Results on CNN/Daily Mail dataset with high density entities. R used as an abbreviation for ROUGE. 
>50 ent denotes the slice of test data that has more than 50 entities in the source article.} \label{sample-table} \centering \begin{tabular}{llll} \toprule \cmidrule(r){1-2} Model & R-1 (>50 ent) & R-2 (>50 ent) & R-L (>50 ent)\\ \midrule Transformer Baseline & 33.423 & 12.46 & 30.97 \\ \textbf{Our model} & \textbf{34.273} & \textbf{13.018} & \textbf{32.048}\\ \bottomrule \end{tabular} \end{table} \subsection{Qualitative Results} We conduct qualitative analysis on our model's predicted summaries and include some samples here. In Figure 2, we compare the Transformer baseline output to the output of our model for a sampled input article from the CNN/Daily Mail corpus. The baseline model makes several factual errors according to our manual fact-checking: 1. Neither McClaren nor Paul Clement was 42 years old at the time when the article was published. 2. Neither Steve McClaren nor Paul Clement worked as a Manchester United boss. On the other hand, our model respects the facts through incorporating world knowledge from the Wikidata knowledge graph. Again based on our manual fact-checking, we find: 1. Paul Clement was indeed working at Real Madrid before he was appointed the manager of Derby County. 2. Although "England boss" is too broad, he did work at Chelsea from 2009 to 2011. In Figure 3, we compare the Transformer baseline output to the output of the Transformer-XL encoder-decoder model without entity integration in order to test the effect of the architecture on summary coherence. The baseline model produces an incoherent summary, while the Transformer-XL encoder-decoder model outputs a coherent, human-readable summary. \begin{figure}[!h] \centering \noindent\fbox{% \parbox{\textwidth}{% \textbf{Ground Truth Summary}\\ Steve McClaren is expected to take Newcastle job if Derby don't go up. Rams are currently battling for Championship promotion via the play-offs. Paul Clement is a leading candidate for job.
Derby will make formal contact with Real Madrid if McClaren leaves.\\ \textbf{Transformer Baseline Output}\\ Steve McClaren is a leading candidate to replace Steve McClaren. The 42-year-old has established a reputation as one of European football's leading coaches in recent years, working on mainly under Carlo Ancelotti. The former Manchester United boss is keen to secure promotion into the Premier League next season.\\ \textbf{Output of Our Model}\\ Paul Clement is a leading candidate to replace Steve McClaren at Derby County. The former England boss has established a reputation as one of Europe's leading football coaches in recent years. Clement is currently a Real Madrid coach. }% } \caption{Comparison of the transformer baseline output and the output of our proposed model. Ground truth summary is sampled from the CNN/Daily Mail summarization corpus. Baseline model makes factual errors, while our model respects the facts through incorporating entity-level knowledge from Wikidata knowledge graph.} \end{figure} \begin{figure}[!h] \centering \noindent\fbox{% \parbox{\textwidth}{% \textbf{Transformer Baseline Output}\\ Wayne Oliveira has scored four goals in seven games and Oliveira. Oliveira has recovered from training-ground ankle injury. Oliveira says he is "not happy he is injured but if it gives me an chance". Gomis has been ruled out for between three to four weeks after being injured. Oliveira believes Bafetimbi Gomis' form has made seven of his eight Swansea appearances. ...\\ \textbf{Transformer-XL Output}\\ The Portugal striker has been ruled out for between three to four weeks. Nelson Oliveira has been sidelined for four weeks with injury. He has scored four goals in seven matches and has recovered from a training-ground injury. The 23-year-old has made his Swansea debut in the 5-0 home defeat against Chelsea in January ... }% } \caption{Comparison of the transformer baseline output and the output of Transformer-XL encoder-decoder model output. 
The source article is sampled from the CNN/Daily Mail summarization corpus. The baseline model produces an incoherent summary, while the Transformer-XL encoder-decoder model outputs a coherent, human-readable summary.} \end{figure} \newpage \section{Discussion and Future Work} We present a novel end-to-end encoder-decoder architecture that effectively integrates entity-level knowledge from the Wikidata knowledge graph into the attention calculations and utilizes Transformer-XL ideas to encode longer-term dependencies. We show performance improvements over a Transformer baseline under the same resources (in terms of number of layers, number of heads, number of dimensions of hidden states, etc.) on the popular CNN/Daily Mail summarization dataset. We also conduct preliminary fact-checking and include examples for which our model is respectful of the facts while the baseline Transformer model is not. Similar to previous work in abstractive summarization, we find that the ROUGE metric is not representative of performance in terms of human readability, coherence, and factual correctness. ROUGE, by definition, rewards extractive strategies by evaluating based on word overlap between the ground truth summary and the model output summary. The metric is not flexible toward rephrasing, which limits the model's ability to output abstractive summaries. It is also important to note that the "ground truth" is subjective in the abstractive summarization setting, since there can be more than one correct abstractive summary for a source article. We believe finding metrics that are representative of the desired performance is an important research direction. Finally, we believe entity linking should be part of the end-to-end training instead of a separate pipeline at the beginning. It is possible that we lose valuable information both during entity extraction and during disambiguation to the chosen knowledge graph.
\section*{References} \medskip \small [1] Asli Celikyilmaz, Antoine Bosselut, Xiaodong He, and Yejin Choi. 2018. Deep communicating agents for abstractive summarization. In Proceedings of the NAACL Conference. [2] Romain Paulus, Caiming Xiong, and Richard Socher. 2018. A deep reinforced model for abstractive summarization. In Proceedings of the ICLR Conference. [3] Abigail See, Peter J. Liu, and Christopher D. Manning. 2017. Get to the point: Summarization with pointer-generator networks. In Proceedings of the ACL Conference. [4] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, pages 5998–6008. [5] Caglar Gulcehre, Sungjin Ahn, Ramesh Nallapati, Bowen Zhou, and Yoshua Bengio. 2016. Pointing the unknown words. In Proceedings of the ACL Conference. [6] Ramesh Nallapati, Bowen Zhou, Cicero dos Santos, Caglar Gulcehre, and Bing Xiang. 2016. Abstractive text summarization using sequence-to-sequence RNNs and beyond. In Computational Natural Language Learning. [7] Zhengyan Zhang, Xu Han, Zhiyuan Liu, Xin Jiang, Maosong Sun, and Qun Liu. 2019. ERNIE: Enhanced Language Representation with Informative Entities. In Proceedings of the ACL Conference. [8] Zihang Dai, Zhilin Yang, Yiming Yang, Jaime G. Carbonell, Quoc V. Le, and Ruslan Salakhutdinov. 2019. Transformer-XL: Attentive Language Models beyond a Fixed-Length Context. In Proceedings of the ACL Conference. [9] Sebastian Gehrmann, Yuntian Deng and Alexander M. Rush. 2018. Bottom-Up Abstractive Summarization. In Proceedings of the EMNLP Conference. [10] Yang Liu. 2019. Fine-tune BERT for Extractive Summarization. ArXiv abs/1903.10318 [11] Qingyu Zhou, Nan Yang, Furu Wei, Shaohan Huang, Ming Zhou, and Tiejun Zhao. 2018. Neural document summarization by jointly learning to score and select sentences. In Proceedings of the ACL Conference. 
[12] Jacob Devlin, Ming-Wei Chang, Kenton Lee and Kristina Toutanova. 2018. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. In Proceedings of the NAACL Conference. [13] Karl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching machines to read and comprehend. In Neural Information Processing Systems. [14] Antoine Bordes, Nicolas Usunier, Alberto García-Durán, Jason Weston and Oksana Yakhnenko. 2013. Translating Embeddings for Modeling Multi-relational Data. In Neural Information Processing Systems. [15] https://www.wikidata.org/ [16] Chin-Yew Lin. 2004. Rouge: A package for automatic evaluation of summaries. In Text summarization branches out: ACL workshop. [17] pypi.python.org/pypi/pyrouge/0.1.3 [18] Jianpeng Cheng and Mirella Lapata. 2016. Neural summarization by extracting sentences and words. In Proceedings of the ACL Conference. \end{document}
\section{Important theorems and lemmas} \label{chap:RMT} In the derivation of the large system analysis, we use well-known lemmas including the trace lemma~\cite[Lemma 2.6]{bai1998no},\cite[Theorem 3.4]{RMT} along with the rank-1 perturbation lemma~\cite[Lemma 2.6]{silverstein1995empirical},\cite[Theorem 3.9]{RMT}. The former shows the asymptotic convergence $\textbf{x}\herm \textbf{A} \textbf{x}-\frac{1}{N}\text{Tr}{\bf{A}}\rightarrow0$ when ${\bf x}\in \mathbb{C}^N$ has i.i.d. entries with zero mean and variance $\frac{1}{N}$, independent of $\mathbf{A}$. The latter states that the addition of a rank-1 matrix ${\bf xx}\herm$ to the random Gram matrix ${\bf{X}} {\bf X}\herm$ does not affect the trace term $\frac{1}{N}\text{Tr}({\bf X} {\bf X}\herm+{\bf I}_N)^{-1}$ in the large dimensional limit. The formal presentation of these lemmas is given in~\cite{bai1998no,silverstein1995empirical,RMT}. The other result from random matrix theory, which characterizes the so-called Stieltjes transform of the Gram matrix~\cite{RMT} of the propagation channel matrix, is given in the following theorem. \begin{theorem} \label{th:main theorem} \cite[Theorem 1]{wagner2012large} Consider a channel matrix $\mathbf{H} \in \mathbb{C}^{N \times K}$ with columns $ \mathbf{h}_k=\boldsymbol{\Theta}_k^{\frac{1}{2}} \mathbf{z}_k$, where the correlation matrices $\boldsymbol{\Theta}_k =\boldsymbol{\Theta}_k^{\frac{1}{2}}(\boldsymbol{\Theta}_k \herm )^{\frac{1}{2}} $ are subject to Assumption~\ref{as:1} and the vectors ${\vec z}_k \in \mathbb{C}^N$ have zero-mean i.i.d. entries of variance $\frac{1}{N}$ and eighth-order moment of order $\mathcal{O}(\frac{1}{N^4})$.
Then, for $\mathbf{H} \bf{C} \mathbf{H}^{\mathrm{\scriptsize H}}$ with $\bf{C}=\rm{diag}\{c_1,...,c_K\} $, where the $c_k,\,\forall k$, are finite deterministic values, we define \begin{equation}\label{theroma1} m_{k,i }(z,x)\triangleq \frac{1}{N}{\rm Tr}\biggr(\boldsymbol{\Theta}_k(\mathbf{H} {\bf{C}} \mathbf{H}^{\mathrm{\scriptsize H}}+x\boldsymbol{\Theta}_i-z\mathbf{I}_N)^{-1}\biggr) \end{equation} where for $z \in \mathbb{C} \backslash \mathbb{R}^+$ and a bounded positive variable $x$, when the dimensions $K$ and $N$ grow large with a fixed ratio $\frac{N}{K} < \infty$, it follows that $m_{k,i }(z,x)-\bar{m}_{k,i }(z,x)\xrightarrow{N\rightarrow \infty} 0$ almost surely, and the deterministic equivalent is given by \begin{equation}\label{Proof equivalent ST form} \begin{aligned} \bar{m}_{k,i}(z,x)=\frac{1}{N} \!{\rm Tr }\biggr(\boldsymbol{\Theta}_k\biggr(\frac{1}{N}\sum\limits_{j=1}^K \frac{c_j \boldsymbol{\Theta}_j}{1+c_j\bar{m}_{j,i}(z,x)}+x\boldsymbol{\Theta}_i-z\mathbf{I}_N\biggr)^{-1}\biggr). \end{aligned} \end{equation} \end{theorem} \begin{IEEEproof} The fundamental idea of the proof is based on the Bai and Silverstein technique~\cite{RMT}, where the deterministic equivalent of $m_{k,i }(z,x)$ is inferred by writing it in the form $\frac{1}{N}\text{Tr}({\bf D}^{-1})$, where $\bf D$ needs to be determined. This is performed by selecting ${\bf{D}}=(\mathbf{R}+x\boldsymbol{\Theta}_i-z\mathbf{I}_N)\boldsymbol{\Theta}_k^{-1}$ and then considering the difference $m_{k,i }(z,x)-\frac{1}{N}\text{Tr}({\bf D}^{-1})$. Then, utilizing random matrix theory results, the deterministic matrix $\bf R$ is determined such that this difference tends to zero almost surely. The formal proof of the theorem in a more generic configuration is given in~\cite{wagner2012large}.\end{IEEEproof} For clarity, we simplify the $m_{k,i}(z,x)$ and $\bar{m}_{k,i}(z,x)$ notation to reflect specific settings.
In particular, we drop the index $i$ and the variable $x$ in $\bar{ m}_{k,i}(z,x)$ in the cases with $x=0$, i.e., $\bar{m}_{k}(z)=\bar{m}_{k,i}(z,0)$. Also, when $z$ is equal to the noise variance, we simply refer to $\bar{m}_{k}(z)$ as $\bar{m}_{k}$. In the multicell scenario the measures carry a BS index as well; for example, $\bar{m}_{b,k}$ refers to the measure corresponding to the channel matrix at BS $b$. \section{Lagrangian Duality Analysis} \label{sec:duality analysis} In this appendix, we formulate the {Lagrange dual problem} of~\eqref{Opt_problem}. To do so, we first show that the {Lagrange dual problem} of~\eqref{Opt_problem} is the same as that of~\eqref{eq:prim problemSimple}. To this end, we first write the Lagrangian of~\eqref{Opt_problem} as \begin{equation} \begin{aligned}\nonumber L({\bf w}_{k},\lambda_k,\beta_{b,k},\epsilon_{b,k}) &=\sum_{k=1}^{K} \mu_{b_k}\left\|{\bf w}_{k} \right\|^{2}- \sum_{k=1}^{K} \frac{\lambda_k}{N} \Biggr(\frac{\left|{\bf h}_{b_k,k}\herm {\bf w}_{k}\right|^{2}}{\gamma_k}-\!\!\!\sum\limits_{i\in {\mathcal U}_{b_{k}}\setminus k} \left|{\bf h}_{b_k,k}\herm{\bf w}_{i}\right|^{2}-\!\! \sum\limits_{b\in{\mathcal B}\setminus b_{k}} \epsilon_{b,k} - \sigma^{2} \Biggr)\\ & + \sum\limits_{b}\sum\limits_{k\notin \mathcal{U}_{b}}\beta_{b,k} \Biggr(\sum\limits_{j\in \mathcal{U}_{b}} \left|{\bf h}_{b,k}\herm{\bf w}_{j}\right|^{2}-\epsilon_{b,k}\Biggr) \end{aligned} \end{equation} where $\{\lambda_k\}$ and $\{\beta_{b,k}\}$ are the Lagrange dual variables associated with the SINR and ICI constraints, respectively. From complementary slackness~\cite{Boyd-Vandenberghe-04}, we know that either $\beta_{b,k}$ or $\sum_{j\in \mathcal{U}_{b}} \left|{\bf h}_{b,k}\herm{\bf w}_{j}\right|^{2}-\epsilon_{b,k}$ has to be equal to zero.
If the variable $\beta_{b,k}$ were set to zero, the variable $\epsilon_{b,k}$ would become unconstrained, and thus could be chosen to make $\min_{{\bf w}_{k},\epsilon_{b,k},\lambda_k} \,\, L({\bf w}_{k},\lambda_k,\beta_{b,k},\epsilon_{b,k})=- \infty$. This suggests that $\beta_{b,k}, \forall b,k$, have non-zero values, and consequently complementary slackness implies that the equality $\epsilon_{b,k}=\sum_{j\in \mathcal{U}_{b}} \left|{\bf h}_{b,k}\herm{\bf w}_{j}\right|^{2}$ holds. By plugging this into the Lagrangian, we can follow the same approach as in~\cite{Dahrouj-Yu-10} to obtain the {Lagrange dual problem} of~\eqref{eq:prim problemSimple} and~\eqref{Opt_problem} as\footnote{{The Lagrangian duality between~\eqref{eq:prim problemSimple} and~\eqref{eq:dual uplink} holds when both~\eqref{eq:prim problemSimple} and~\eqref{eq:dual uplink} are feasible.}} \begin{equation}\label{eq:dual uplink} \begin{aligned} &\underset{{{\{{\bf v}_{k}\}}} , \{\lambda_{k}\}}{\min} \quad \sum_{b{\in}\mathcal{B}}\sum_{k{\in}\mathcal{U}_{b}} \frac{\lambda_{k}}{N}\sigma^2 \\ &\quad\text{s.t.}\quad \frac{\lambda_{k}|{{\vec v}}_{k}\herm{\vec h}_{b_{k},k}|^2} { \sum_{j\in \mathcal U \setminus k} \lambda_{j} |{{\vec v}}_{k} \herm{\vec h}_{b_{k},j}|^2+ \mu_{{b_k}}N\|{{\vec v}}_{k}\|^2 } \geq {\gamma_{k}}, \;\forall k \in \mathcal{U}, \end{aligned} \end{equation} where the Lagrange dual variables ${\lambda_k}/{N}$ can be thought of as the UE powers in the dual uplink power minimization problem~\cite{ Dahrouj-Yu-10}. The optimal receive beamforming vectors $\{{\vec v}_{k}\}$ are given as a set of minimum mean square error (MMSE) receivers $ \mathbf{v}_{k}=(\sum_{j\in \mathcal U\setminus k}{\lambda_{j}}\mathbf{ h}_{b_{k},j}\mathbf{ h}_{b_{k},j}\herm + \mu_{b_k}N\mathbf{I}_{N})^{-1}\mathbf{ h}_{b_k,k}$, and the optimal Lagrangian multipliers $\boldsymbol{\lambda}^*= [\lambda_{1}^*, \ldots, \lambda_{K}^*]\tran $ are obtained as in~\eqref{eq:lambda itr}.
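As a numerical illustration, the dual variables and MMSE receivers above can be computed by iterating the fixed-point update for $\lambda_k$ until convergence. The sketch below uses synthetic i.i.d. single-cell channels and illustrative sizes, not the multicell system model's correlation structure:

```python
import numpy as np

rng = np.random.default_rng(1)
N, K = 16, 4                       # antennas, users (illustrative sizes)
mu = 1.0                           # BS power weight (set to one, as in the analysis)
gamma = np.full(K, 2.0)            # target SINRs

# Synthetic channels h_k as columns, unit-variance complex entries
H = (rng.normal(size=(N, K)) + 1j * rng.normal(size=(N, K))) / np.sqrt(2)

# Fixed-point iteration for the duals:
#   gamma_k / lambda_k = (1/N) h_k^H ( sum_{j != k} (lambda_j/N) h_j h_j^H + I )^{-1} h_k
lam = np.ones(K)
for _ in range(300):
    new = np.empty(K)
    for k in range(K):
        A = np.eye(N, dtype=complex)
        for j in range(K):
            if j != k:
                A += (lam[j] / N) * np.outer(H[:, j], H[:, j].conj())
        new[k] = gamma[k] * N / np.real(H[:, k].conj() @ np.linalg.solve(A, H[:, k]))
    if np.max(np.abs(new - lam)) < 1e-12:
        lam = new
        break
    lam = new

# MMSE receive beamformers v_k = ( sum_{j != k} lambda_j h_j h_j^H + mu*N*I )^{-1} h_k
V = np.empty((N, K), dtype=complex)
for k in range(K):
    B = mu * N * np.eye(N, dtype=complex)
    for j in range(K):
        if j != k:
            B += lam[j] * np.outer(H[:, j], H[:, j].conj())
    V[:, k] = np.linalg.solve(B, H[:, k])
```

At the fixed point, each user's uplink SINR with these receivers meets its target $\gamma_k$ with equality, which is the expected behavior of the power minimization problem at optimality.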
The duality condition provides the downlink precoders as ${{\vec w}}_{k}=\sqrt{{{\delta_{k}}}/{N}} {\vec v}_{k}$ with the scaling factors $\{\delta_{k}\}$ given as $\boldsymbol{\delta} = \mathbf{G}^{-1} \mathbf{1}_{K}\sigma^2$, where $\boldsymbol{\delta} = [\delta_{1}, \ldots, \delta_{K}]\tran$ and the $(i,k)^{\text{th}}$ element of the so-called coupling matrix $\mathbf{G} \in \mathbb C^{K\times K}$ is given as in~\eqref{eq:G_matrix}. \section{Proof of Theorem \ref{th:up powers updw duality}} \label{sec:proof Theorem1} Without loss of generality, the noise power $\sigma^2$ and the BS power weights $\mu_{b},\,\forall b$, are assumed to be equal to one in the following analysis. In order to prove the theorem, we first provide an intuition by heuristically assuming the Lagrangian multipliers to be deterministic and independent of the channel vectors. We then rigorously prove the convergence of the optimal Lagrangian multipliers of a feasible problem to their deterministic equivalents via a contradiction argument. Denoting by $\boldsymbol{\lambda}^*$ the fixed point solution to~\eqref{eq:lambda itr} in a feasible optimization problem, the following equality holds \begin{align} \label{eq:lambda itr proofs} \frac{\gamma_{k}}{\lambda_{k}^*}= \frac{1}{N}{\mathbf{h}_{b_k,k}\herm\left(\sum\limits_{j\in \mathcal U\setminus k}\frac{\lambda_{j}^*}{N}\mathbf{ h}_{b_{k},j}\mathbf{ h}_{b_{k},j}\herm + \mathbf{I}_{N}\right)^{-1}\mathbf{h}_{b_k,k}} \end{align} where the superscript $^*$ stands for the optimal solution.
Assuming, erroneously, that $\boldsymbol{\lambda}^*$ is given and independent of the channel vectors, the trace lemma~\cite[Lemma 2.6]{bai1998no} along with the rank-1 perturbation lemma~\cite[Lemma 2.6]{silverstein1995empirical} applied to~\eqref{eq:lambda itr proofs} yields $\frac{\gamma_{k}}{\lambda_{k}^*}-\frac{1}{N} {\rm{Tr}}\big(\boldsymbol{\Theta}_{b_{k},k} \big(\sum_{j\in \mathcal U}\frac{\lambda_{j}^*}{N}\mathbf{ h}_{b_{k},j}\mathbf{ h}_{b_{k},j}\herm+\mathbf{I}_N \big)^{-1}\big) \rightarrow 0$, almost surely. This trace term is equivalent to $m_{b_k,k}^*$ in Theorem~\ref{th:main theorem}, which satisfies the almost sure convergence $m_{b_k,k}^*-\bar{m}_{b_k,k}^*\rightarrow 0$, where $\bar{m}_{b_k,k}^*$ is given as a solution of the system of equations \vspace{-0.1cm} \begin{align} \bar{m}_{b_k,k}^* = \frac{1}{N}{\rm {Tr}}\biggr({\bf \Theta}_{b_k,k}\bigg(\frac{1}{N}\sum_{j\in \mathcal U}\frac{ \lambda_{j}^* {\bf \Theta}_{b_k,j}}{1+\lambda_j^*\bar{m}_{b_k,j}^*} + {\bf I}_{N}\bigg)^{-1}\biggr). \end{align} From the above discussion, we may then expect the terms $\{{\lambda}_k^*\}$ to all be close to $\{\gamma_k/\bar{m}_{b_k,k}^*\}$ for $N$ and $K$ large enough. However, since the optimal Lagrangian multipliers depend on the channel vectors, we cannot rely on classical random matrix theory results to prove the asymptotic convergence of $\boldsymbol{\lambda}^*$ to the deterministic equivalents. Thus, we follow the approach introduced in~\cite{couillet2014large,LucaCouilletDebbahJournal2015} and prove the asymptotic convergence of $\boldsymbol{\lambda}^*$ via a contradiction argument. In particular, we set $\bar \lambda_k=\gamma_k/\bar{m}_{b_k,k}$ with $\bar{m}_{b_k,k}$ given as \begin{align} \label{eq:stform proof of th1} \bar{m}_{b_k,k} = \frac{1}{N}{\rm {Tr}}\biggr({\bf \Theta}_{b_k,k}\biggr(\frac{1}{N}\sum_{j\in \mathcal U}\frac{\bar \lambda_{j} {\bf \Theta}_{b_k,j}}{1+\bar\lambda_j\bar{m}_{b_k,j}} + {\bf I}_{N}\biggr)^{-1}\biggr).
\end{align} Then we show via a contradiction argument that the ratios $r_k=\frac{\bar \lambda_k}{\lambda_k^*}=\frac{ \gamma_k}{\bar{m}_{b_k,k}}\frac{1}{\lambda_k^*},\,\forall k\in \mathcal{U}$ converge asymptotically to one, which establishes the results of the theorem. To do so, we consider BS $b$ with the set of served UEs $\mathcal{U}_b=\{\rm{UE}_1,...,\rm{UE}_M\}$ where, given the ratios $r_k,\,\forall k\in \mathcal{U}_b$, equation~\eqref{eq:lambda itr proofs} can be rewritten as \begin{align} \label{eq:rewrite as ratio} \frac{r_k\gamma_{k}}{\bar\lambda_{k}}= \frac{1}{N}{{\bf z}_{b,k}\herm{\bf \Theta}_{b,k}^{1/2}\left(\sum\limits_{j\in \mathcal U\setminus k}\frac{\bar \lambda_{j}}{r_j}{\bf B}_j+ \mathbf{I}_{N}\right)^{-1}{\bf \Theta}_{b,k}^{1/2}{\bf z}_{b,k}}, \end{align} where ${\bf B}_j=\frac{1}{{N}}{\bf \Theta}_{b,j}^{1/2}{{\bf z}_{b,j}{\bf z}_{b,j}\herm}{\bf \Theta}_{b,j}^{1/2}$. Next, the UE indices in $\mathcal{U}_b$ are relabeled such that $0\leq r_1 \leq...\leq r_M$ holds, with $\{r_j\}$ assumed to be well defined and positive. Rewriting~\eqref{eq:rewrite as ratio} for UE $M\in\mathcal{U}_b$ and replacing all $r_j,\forall j\in \mathcal{U}$ in the summation with the largest ratio $r_M$, we get the following inequality, based on monotonicity arguments, \begin{align} \label{eq:inequalality0} \frac{r_M\gamma_{M}}{\bar\lambda_{M}}\leq \frac{1}{N} {{\bf{z}}_{b,M}\herm {\boldsymbol\Theta}_{b,M}^{1/2}\left(\sum\limits_{j\in \mathcal U\setminus M}\frac{\bar \lambda_{j}}{r_M}{\bf B}_j+ \mathbf{I}_{N}\right)^{-1}\!\!\!\!{\bf \Theta}_{b,M}^{1/2} {\bf{z}}_{b,M}}, \end{align} or equivalently \begin{align} \label{eq:inequality1} \frac{\gamma_{M}}{\bar\lambda_{M}}\leq \frac{1}{N}{{\bf{z}}_{b,M}^{\mathrm{\scriptsize H}} {\boldsymbol\Theta}_{b,M}^{1/2}\left(\sum\limits_{j\in \mathcal U\setminus M}{\bar \lambda_{j}} {\bf B}_j + {r_M}\mathbf{I}_{N}\right)^{-1}{\!\!\!\!\bf \Theta}_{b,M}^{1/2} {\bf{z}}_{b,M}}.
\end{align} Assume now that $r_M$ is infinitely often larger than $1+l$ for some positive value $l>0$. Restricting ourselves to such a subsequence, monotonicity arguments allow the inequality in~\eqref{eq:inequality1} to be written equivalently as \begin{align} \label{eq:inequality2} \!\!\!\frac{\gamma_{M}}{\bar\lambda_{M}}\leq\frac{1}{N}{{\bf{z}}_{b,M}\herm {\boldsymbol\Theta}_{b,M}^{1/2}\left(\sum\limits_{j\in \mathcal U\setminus M}{\bar \lambda_{j}} {\bf B}_j + {(1+l)}\mathbf{I}_{N}\right)^{-1}\!\!\!\!\!\!{\bf \Theta}_{b,M}^{1/2}{\bf{z}}_{b,M}}. \end{align} Denoting the right hand side of~\eqref{eq:inequality2} by $\iota _M$, we observe that $\iota _M$ no longer depends on $\boldsymbol{\lambda}^*$. Thus, we can apply trace lemma~\cite[Lemma 2.6]{bai1998no}, rank-1 perturbation lemma~\cite[Lemma 2.6]{silverstein1995empirical} and Theorem~\ref{th:main theorem} to the right hand side of the above inequality to get $ \iota _M-\bar{m}_{b_M,M}(-(1+l))\xrightarrow{N\rightarrow \infty} 0 $ with \begin{align} \bar{m}_{b_M,M}(z) = {\rm {Tr}}\biggl({\bf \Theta}_{b_M,M}\biggl(\sum_{j\in \mathcal U}\frac{\bar \lambda_{j} {\bf \Theta}_{b_M,j}}{1+\bar\lambda_j\bar{m}_{b_M,j}}\!-zN {\bf I}_{N}\biggr)^{-1}\biggr), \end{align} which along with~\eqref{eq:inequality2} results in $\frac{\gamma_{M}}{\bar\lambda_{M}}\leq\liminf_{N \to \infty} \bar{m}_{b_M,M}(-(1+l))$. On the other hand, we notice that $\bar{m}_{b_M,M}(-(1+l))$ at $l=0$ is equal to $\bar{m}_{b_M,M}$ in~\eqref{eq:stform proof of th1} with $\bar \lambda_M=\gamma_M/\bar{m}_{b_M,M}$. Since $\bar{m}_{b_M,M}(-(1+l))$ is a decreasing function of $l$, it can be proved~\cite{couillet2014large} that for any $l>0$ we have $\limsup_{N \to \infty} \bar{m}_{b_M,M}(-(1+l))<\frac{\gamma_{M}}{\bar\lambda_{M}}$. This, however, contradicts the former condition and thus invalidates the initial hypothesis that $r_M> 1+l$ infinitely often.
Therefore, we must admit that $r_M\leq 1+l$ for all large values of $N$ and $K$. Reversing all inequalities and using similar arguments yields $r_1\geq 1-l$ for all large values of $N$ and $K$. Putting all these results together yields $1-l\leq r_1,...,r_M\leq 1+l$, from which we may write ${\rm{max}}_{k\in \mathcal{U}_b}|r_k-1|\leq l$ for all large values of $N$ and $K$. Taking a countable sequence of $l$ going to zero, we eventually obtain ${\rm{max}}_{k\in \mathcal{U}_b}|r_k-1|\rightarrow0$. Noticing that $r_k=\frac{ \gamma_k}{\bar{m}_{b_k,k}\lambda_k^*}$, we get ${\rm{max}}_{k\in \mathcal{U}_b}|\lambda_k^*-\bar \lambda_k|\rightarrow 0$ almost surely with $\bar \lambda_k=\frac{ \gamma_k}{\bar{m}_{b_k,k}}$. Following the same steps for all other $b\in \mathcal{B}$ completes the proof. \section{Proof of Theorem \ref{th:down pw updwn duality}} \label{sec:proof Theorem2} Given the Lagrangian multipliers $\{\bar{{\lambda}}_k\}$ as deterministic values in a feasible problem from Theorem~\ref{th:up powers updw duality}, we derive the deterministic equivalents for the elements of the coupling matrix in~\eqref{eq:G_matrix} using standard random matrix theory tools. To do so, we rewrite the diagonal elements of the coupling matrix as $[\mathbf{G}]_{k,k}=\frac{1}{\gamma_{k}}{|\frac{1}{N}{\mathbf{h}_{b_k,k}\herm\boldsymbol{\Sigma}_{b_{k}}^{\backslash k}\,\mathbf{h}_{b_k,k}}|^2}$ where $\boldsymbol{\Sigma}_{b_{k}}^{\backslash k}=\big(\sum_{j\in \mathcal U\setminus k}\frac{\bar{\lambda}_{j}}{N}\mathbf{ h}_{b_{k},j}\mathbf{ h}_{b_{k},j}\herm +\mathbf{I}_N\big)^{-1}$ and the notation $()^{\backslash k}$ excludes the $k^{\text{th}}$ term from the summation.
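The deterministic equivalents $\bar\lambda_k=\gamma_k/\bar{m}_{b_k,k}$, with $\bar{m}_{b_k,k}$ given by~\eqref{eq:stform proof of th1}, can be computed jointly by a simple fixed-point iteration. A minimal sketch under a hypothetical single-BS simplification (all names and dimensions are illustrative):

```python
import numpy as np

def det_equivalents(Thetas, gamma, n_iter=500):
    """Jointly iterate the coupled deterministic system
      mbar_k = (1/N) Tr( Theta_k ( (1/N) sum_j lbar_j Theta_j / (1 + lbar_j mbar_j) + I )^{-1} )
    with lbar_k = gamma_k / mbar_k (single-BS simplification; Thetas is a
    list of K hypothetical N x N correlation matrices)."""
    K, N = len(Thetas), Thetas[0].shape[0]
    mbar = np.ones(K)
    for _ in range(n_iter):
        lbar = gamma / mbar
        A = np.eye(N) + sum(l * T / (1.0 + l * m)
                            for l, T, m in zip(lbar, Thetas, mbar)) / N
        Ainv = np.linalg.inv(A)
        mbar = np.array([np.trace(T @ Ainv) / N for T in Thetas])
    return gamma / mbar, mbar
```

Note that only statistics (the correlation matrices) enter the computation, which is what makes the decentralized use of these quantities possible later on.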
Given the growth rate in Assumption~\ref{as:0}, we apply trace lemma~\cite[Lemma 2.6]{bai1998no} and rank-1 perturbation lemma~\cite[Lemma 2.6]{silverstein1995empirical}, which gives $\frac{1}{N}{\mathbf{h}_{b_k,k}\herm\boldsymbol{\Sigma}_{b_{k}}^{\backslash k}\,\mathbf{h}_{b_k,k}}-\frac{1}{N} \text{Tr}(\boldsymbol{\Theta}_{b_{k},k} \boldsymbol{\Sigma}_{b_{k}})\rightarrow 0 $ almost surely. The resulting trace term is equal to $m_{b_k,k}$ defined in Theorem~\ref{th:main theorem} where, according to the theorem, we have $m_{b_k,k}-\bar{m}_{b_k,k}\rightarrow 0$ almost surely. This implies $[\mathbf{G}]_{k,k}-\frac{1}{\gamma_k}\bar{m}_{b_k,k}^2\rightarrow 0$ almost surely, which gives the diagonal elements of the coupling matrix as stated in the theorem. The non-diagonal elements of the coupling matrix $ [\mathbf{G}]_{k,i}=-\frac{1}{N^2}{\vec h}_{b_{i},k}\herm \boldsymbol{\Sigma}^{\backslash i}_{b_{i}} \,{ {\vec h}_{b_{i},i} {\vec h}_{b_{i},i}\herm }\boldsymbol{\Sigma}^{\backslash i}_{b_{i}} \, {\vec h}_{b_{i},k}$ can be rewritten using the matrix inversion lemma~\cite[Equation 2.2]{silverstein1995empirical} as \begin{equation}\label{eq:Gil l remov} [\mathbf{G}]_{k,i}=-\frac{1}{N^2}\frac{{\vec h}_{b_{i},k}\herm \boldsymbol{\Sigma}^{\backslash i,k}_{b_{i}} \, {\vec h}_{b_{i},i} {\vec h}_{b_{i},i}\herm \boldsymbol{\Sigma}^{\backslash i,k}_{b_{i}}\, {\vec h}_{b_{i},k}}{\big(1+\frac{\bar \lambda_k}{N}{\vec h}_{b_{i},k}\herm \boldsymbol{\Sigma}^{\backslash i,k}_{b_{i}} \, {\vec h}_{b_{i},k}\big)^2}, \end{equation} where $\boldsymbol{\Sigma}_{b_{k}}^{\backslash i, k}=\big(\sum_{j\in \mathcal U\setminus i, k}\frac{\bar{\lambda}_{j}}{N}\mathbf{ h}_{b_{k},j}\mathbf{ h}_{b_{k},j}\herm +\mathbf{I}_N\big)^{-1}$ with the notation $()^{\backslash i, k}$ excluding the $i^\text{th}$ and $k^{\text{th}}$ terms from the summation.
Now, we can apply trace lemma~\cite[Lemma 2.6]{bai1998no} and rank-1 perturbation lemma~\cite[Lemma 2.6]{silverstein1995empirical} to the denominator of~\eqref{eq:Gil l remov} to obtain $\frac{1}{N}{{\vec h}_{b_{i},k}\herm \boldsymbol{\Sigma}^{\backslash i,k}_{b_{i}}\, {\vec h}_{b_{i},k}}-{\frac{1}{N}\text{Tr} (\boldsymbol{\Theta}_{b_i,k} \boldsymbol{\Sigma}_{b_{i}} })\, \rightarrow 0$ almost surely. Therefore, as a result of Theorem~\ref{th:main theorem}, we get the almost sure convergence of the denominator as \begin{equation} \label{eq:denom G} \bigg(1+\frac{\bar \lambda_k}{N}{\vec h}_{b_{i},k}\herm \boldsymbol{\Sigma}^{\backslash i,k}_{b_{i}} \, {\vec h}_{b_{i},k}\bigg)^2-\left(1+\bar \lambda_k \bar m_{b_i,k}\right)^2\rightarrow 0. \end{equation} We proceed by applying trace lemma~\cite[Lemma 2.6]{bai1998no} to the numerator of~\eqref{eq:Gil l remov} that gives \begin{equation}\label{eq:Gli tr} \begin{aligned} \hspace{-1.5cm}\frac{1}{N^2}{\vec h}_{b_{i},k}\herm \boldsymbol{\Sigma}^{\backslash i,k}_{b_{i}}\, {{\vec h}_{b_{i},i} {\vec h}_{b_{i},i}\herm} \boldsymbol{\Sigma}^{\backslash i,k}_{b_{i}} \,{\vec h}_{b_{i},k} - \frac{1}{N^2} \text{Tr}({\boldsymbol{\Theta} }_{b_{i},k} \boldsymbol{\Sigma}^{\backslash i,k}_{b_{i}}\, {{\vec h}_{b_{i},i} {\vec h}_{b_{i},i}\herm} \boldsymbol{\Sigma}^{\backslash i,k}_{b_{i}})\rightarrow 0 \end{aligned} \end{equation} almost surely. Rearranging the terms inside the trace in~\eqref{eq:Gli tr} and reapplying trace lemma~\cite[Lemma 2.6]{bai1998no} yields \begin{equation}\label{eg:numerator} {\vec h}_{b_{i},i} \herm \boldsymbol{\Sigma}^{\backslash i,k}_{b_{i}} \frac{{\boldsymbol{\Theta} }_{b_{i},k}}{N^2} \boldsymbol{\Sigma}^{\backslash i,k}_{b_{i}} \, {\vec h}_{b_{i},i} - \text{Tr} \big(\boldsymbol{\Theta}_{b_{i},i} \boldsymbol{\Sigma}_{b_{i}}^{\backslash i,k} \frac{\boldsymbol{\Theta}_{b_{i},k}}{N^2} \boldsymbol{\Sigma}_{b_{i}}^{ \backslash i,k}\big) \rightarrow 0 \end{equation} almost surely. 
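The trace lemma invoked throughout admits a quick numerical sanity check: for $\mathbf{h}=\boldsymbol{\Theta}^{1/2}\mathbf{z}$ with i.i.d.\ standard entries in $\mathbf{z}$ and a matrix $\mathbf{A}$ independent of $\mathbf{z}$, the quadratic form $\frac{1}{N}\mathbf{h}\herm\mathbf{A}\mathbf{h}$ concentrates around $\frac{1}{N}\mathrm{Tr}(\boldsymbol{\Theta}\mathbf{A})$. A standalone sketch with $\boldsymbol{\Theta}=\mathbf{I}$ and illustrative sizes:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 1000
z = rng.standard_normal(N)                  # h = Theta^{1/2} z with Theta = I here
B = rng.standard_normal((N, N // 2))
A = np.linalg.inv(B @ B.T / N + np.eye(N))  # A is independent of z
quad = z @ A @ z / N                        # (1/N) h^H A h
tr = np.trace(A) / N                        # (1/N) Tr(Theta A)
gap = abs(quad - tr)                        # O(1/sqrt(N)) fluctuation
```

The deviation shrinks like $1/\sqrt{N}$, consistent with the almost sure convergences used in the proof.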
As a result of rank-1 perturbation lemma~\cite[Lemma 2.6]{silverstein1995empirical}, the $i^\text{th}$ and $k^\text{th}$ excluded terms in $\boldsymbol{\Sigma}_{b_{i}}^{ \backslash i,k}$ are asymptotically insignificant in the trace, and thus,~\eqref{eq:denom G}-\eqref{eg:numerator} give the almost sure convergence of~\eqref{eq:Gil l remov} as \begin{equation}\label{eq:Glk determins meanway} \hspace{-0.1 in}(-[\mathbf{G}]_{k,i})-\frac{\frac{1}{N^2} \text{Tr}( \boldsymbol{\Theta}_{b_{i},i} \boldsymbol{\Sigma}_{b_{i}} \, \boldsymbol{\Theta}_{b_{i},k} \boldsymbol{\Sigma}_{b_{i}})}{\left(1+\bar \lambda_k \bar m_{b_i,k}\right)^2}\rightarrow 0. \end{equation} From matrix identities~\cite{Horn-Johnson-90}, we know that $ {\partial \mathbf{Y}^{-1}}/{\partial x}=- \mathbf{Y}^{-1} ( {\partial \mathbf{Y}}/{\partial x} ) \mathbf{Y}^{-1} $ with $\bf Y$ being a matrix depending on variable $x$. Keeping this in mind, we refer to $m_{b_i,i,k}(z,x)$ in its general form defined in Theorem~\ref{th:main theorem} as $m_{b_i,i,k}(z,x)=\frac{1}{N} \text{Tr}\big(\boldsymbol{\Theta}_{b_{i},i}\big( \sum_{j\in \mathcal U}\frac{\bar{\lambda}_{j}}{N}\mathbf{ h}_{b_{i},j}\mathbf{ h}_{b_{i},j}\herm -z\mathbf{I}_N-x \boldsymbol{\Theta}_{b_{i},k} \big)^{-1} \big)$, which, in the special setting with $x=0$ and $z=-1$, reduces to $m_{b_i,i}$. Using the identity, the numerator of~\eqref{eq:Glk determins meanway} can be written as the derivative of $m_{b_i,i,k}(z,x)$ with respect to the auxiliary variable $x$ at point $(z=-1,x=0)$, i.e., \begin{equation} \label{eq:numer G} \frac{1}{N} \text{Tr} \big(\boldsymbol{\Theta}_{b_{i},i} \boldsymbol{\Sigma}_{b_{i}} \, \boldsymbol{\Theta}_{b_{i},k} \boldsymbol{\Sigma}_{b_{i}}\big) =\frac{\partial}{\partial x} m_{b_i,i,k}(z,x)|_{x=0,z=-1}.
\end{equation} Therefore, given the deterministic equivalents of the derivative terms $m_{b_i,i,k}'=\frac{\partial}{\partial x} m_{b_i,i,k}(z,x)|_{x=0,z=-1}$ in~\eqref{eq:numer G}, the deterministic equivalents for the non-diagonal elements of the coupling matrix will follow from~\eqref{eq:Glk determins meanway}. In doing so, we notice that Theorem~\ref{th:main theorem} ensures the almost sure convergence of $m_{b_i,i,k}(z,x)$ to its deterministic equivalent given by $\bar{m}_{b_i,i,k}(z,x)=\frac{1}{N}\text{Tr}(\boldsymbol{\Theta}_{b_i,i}{\mathbf{T}_{b_i,k}(z,x)})$ where \begin{equation}\label{eq:Tbkl} {\mathbf{T}_{b,k}(z,x)}={\bigg(\frac{1}{N} \sum_{j \in \mathcal{U}} \frac{\bar{\lambda}_j \boldsymbol{\Theta}_{b,j}}{1+ \bar{ \lambda}_j\bar{m}_{b,j,k}(z,x) } -x\boldsymbol{\Theta}_{b,k}-z \mathbf{I}_{N} \bigg)^{-1}}\!\!\!. \end{equation} Therefore, the deterministic equivalents for the derivative terms, hereafter denoted by $\bar{m}_{b_i,i,k}'$, can be evaluated by taking the derivative of~\eqref{eq:Tbkl} with respect to $x$ as \begin{equation}\label{eq:Tbkl prime} \mathbf{T}_{b,k}^{\prime} = \mathbf{T}_{b}\bigg(\frac{1}{N} \sum_{j \in \mathcal{U}} \frac{\bar{\lambda}_j^2 \boldsymbol{\Theta}_{b,j}\bar{m}_{b,j,k}^{\prime}}{(1+\bar{ \lambda}_j\bar{ m}_{b,j})^2} +\boldsymbol{\Theta}_{b,k} \bigg) \mathbf{T}_{b} \end{equation} where $\mathbf{T}_{b}=\mathbf{T}_{b,k}(-1,0)$, $\mathbf{T}_{b,k}^{\prime}=\frac{\partial}{\partial x}\mathbf{T}_{b,k}(-1,x)|_{x=0}$ and $\bar{ m}_{b,j}=\bar{ m}_{b,j,k}(-1,0)$.
Since $\bar{m}_{b_i,i,k}'=\frac{1}{N} \text{Tr} (\boldsymbol{\Theta}_{b_{i},i} \mathbf{T}_{b_i,k}^{\prime})$ with $\mathbf{T}_{b,k}^{\prime}$ given by~\eqref{eq:Tbkl prime}, we get a system of equations to evaluate $\bar{m}_{b_i,i,k}'$ as $[\bar{m}_{b,1,k}^{\prime},...,\bar{m}_{b,K,k}^{\prime}]=(\mathbf{I}_K-\mathbf{L}_{b})^{-1} {\vec u}_{b,k}$ with ${\vec u}_{b,k}$ and $\mathbf{L}_{b}$ defined as in~\eqref{eq:Proof en prime sys of equations3} and \eqref{eq:Proof en prime sys of equations4}, respectively. Given $\bar{m}_{b_i,i,k}'$, we get the deterministic equivalents for the non-diagonal elements of the coupling matrix from~\eqref{eq:Glk determins meanway} and~\eqref{eq:numer G} as $[\bar{\mathbf{G}}]_{k,i}=-{\frac{1}{N} \bar{m}_{b_i,i,k}'}/{\left(1+\bar \lambda_k \bar m_{b_i,k}\right)^2}$, which completes the proof. \section{Proof of Corollary \ref{cor:homo closeICI}} \label{subse:proof of corr ICI} We notice that the ICI from BS $b$ to UE $k$ in terms of downlink transmit powers is given by $\epsilon_{b,k}=-\sum_{j\in \mathcal U_{b}} p_{j} [{\mathbf{G}}]_{k,j}/\|{\bf v}_j\|^2$, which carries a normalization term compared to the formulation in~\eqref{3.1}. Assuming UE $k$ to belong to a group $g\in \mathcal{A}_b$ with $\mathcal{A}_b$ denoting the set of all groups of BS $b$, one can observe from~\eqref{eq:nonnormal G} that $[\bar{\mathbf{G}}]_{k,j},\,\forall j\not\in \mathcal{G}_g$ is zero (due to the inter-group orthogonality assumption) and we get \begin{equation}\label{eq:epsil group exact} \bar{\epsilon}_{b,k}=-\sum_{j\in \mathcal U_{b}\cap \mathcal{G}_g } \bar{p}_{j} [\bar{{\mathbf{G}}}]_{k,j}/\|\bar{{\bf v}}_j\|^2 \end{equation} where the elements of the coupling matrix $\{[\bar{{\mathbf{G}}}]_{k,j}\}$ are given in~\eqref{eq:nonnormal G} as a function of $\bar{m}_{b_j,j,k}'$ and $\bar{m}_{b_j,k}$.
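Collecting the deterministic equivalents derived so far, the coupling matrix and the downlink scaling factors can be assembled numerically. A sketch under a hypothetical single-cell flat indexing, where `mprime_over_N[k, i]` stands for $\frac{1}{N}\bar{m}_{b_i,i,k}'$ (illustrative, not the paper's per-BS notation):

```python
import numpy as np

def downlink_powers(mbar, mprime_over_N, lbar, gamma, sigma2=1.0):
    """Assemble Gbar with [G]_{k,k} = mbar_k^2 / gamma_k and
    [G]_{k,i} = -(1/N) mbar'_{i,k} / (1 + lbar_k mbar_k)^2 for i != k,
    then solve delta = Gbar^{-1} 1 sigma^2 as in the duality condition."""
    K = len(gamma)
    G = np.empty((K, K))
    for k in range(K):
        for i in range(K):
            if i == k:
                G[k, k] = mbar[k] ** 2 / gamma[k]
            else:
                G[k, i] = -mprime_over_N[k, i] / (1.0 + lbar[k] * mbar[k]) ** 2
    return np.linalg.solve(G, sigma2 * np.ones(K))
```

In the feasible regime the diagonal dominates, so the resulting scaling factors are positive.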
The term $\bar{m}_{b_j,j,k}'$ is the derivative of $\bar{m}_{b_j,j,k}(z,x)$ with respect to $x$ at point $x=0$ and $z=-1$, and $\bar{m}_{b_j,j,k}(z,x)$ is the Stieltjes transform in its general form as defined in Theorem~\ref{th:main theorem}. Given identical correlation properties for UEs within a group, we can introduce group-specific parameters $\bar{\eta}_{b,g}=\bar{m}_{b,j}/a_{b,j}^2,\,\forall j\in \mathcal{G}_g $ and $\bar{\eta}_{b,g,k}(z,x)=\bar{m}_{b,j,k}(z,x)/a_{b,j}^2,\,\forall j\in \mathcal{G}_g$ that allow the asymptotic expressions for the elements of the coupling matrix, and consequently the asymptotic ICI expressions, to be simplified. In particular, the derivative of $\bar{\eta}_{b,g,k}(z,x)$ with respect to $x$ at point $x=0$ and $z=-1$ can be evaluated similar to Appendix~\ref{sec:proof Theorem2} from \begin{equation} \label{eq:eta proof deriv1} \begin{aligned} \bar{\eta}_{b,g,k}^{\prime}= \,\,\,\,\frac{1}{N}{ \rm Tr} \bigg({\boldsymbol{\Theta}_{b,g} \mathbf{T}_{b,g}} \big( \sum_{j \in \mathcal{G}_g} \!\!\frac{(\bar{\lambda}_j a_{b,j}^2)^2\boldsymbol{\Theta}_{b,g} \bar{\eta}_{b,g,k}^{\prime}}{N(1+\bar{\lambda}_j a_{b,j}^2 \bar{\eta}_{b,g})^2} +a_{b,k}^2\boldsymbol{\Theta}_{b,g} \big) \mathbf{T}_{b,g}\bigg) \end{aligned} \end{equation} with \begin{equation} {\mathbf{T}_{b,g}}={\bigg(\frac{1}{N} \sum_{j \in \mathcal{G}_g} \frac{\bar{\lambda}_j a_{b,j}^2 \boldsymbol{\Theta}_{b,g}}{1+ \bar{ \lambda}_ja_{b,j}^2\bar{\eta}_{b,g} }+\mathbf{I}_{N} \bigg)^{-1}} \end{equation} where, similarly to~\eqref{eq:numer G}-\eqref{eq:Tbkl prime}, the unknown variable $\bar{\eta}_{b,g,k}'$ can be solved for as \begin{equation}\label{eq:eta prime 2idx} \bar{\eta}_{b,g,k}^{\prime}=\frac{a_{b,k}^2}{N}\frac{{\rm Tr} ((\boldsymbol{\Theta}_{b,g} \mathbf{T}_{b,g})^2)}{1-\rho_{b,g} \rm Tr ((\boldsymbol{\Theta}_{b,g} \mathbf{T}_{b,g})^2)} \end{equation} with $\rho_{b,g}=\frac{1}{N^2} \sum_{j \in \mathcal{G}_g} ({\bar{\lambda}_j a_{b,j}^2 })^2/{(1+\bar{\lambda}_j a_{b,j}^2
\bar{\eta}_{b,g})^2}$. Similarly, the normalization terms $\|{\bf v}_j\|^2=\frac{1}{N^2}{\vec h}_{b,j}\herm \boldsymbol{\Sigma}^{\backslash j}_{b} \boldsymbol{\Sigma}^{\backslash j}_{b} {\vec h}_{b,j},\forall j \in \mathcal U_{b}\cap \mathcal{G}_g$ converge almost surely as \begin{equation} \label{eq:norm V} \|{\bf v}_j\|^2-\frac{a_{b,j}^2}{N} \bar \zeta_{b,g}' \rightarrow 0 \end{equation} where the measure ${\zeta}_{b,g}(x)$ is given as ${\zeta}_{b,g}(x)=\frac{1}{N} \text{Tr} (\boldsymbol{\Theta}_{b,g} ( \sum_{j\in \mathcal U}\frac{\bar{\lambda}_{j}}{N}\mathbf{ h}_{b,j}\mathbf{ h}_{b,j}\herm +(1-x)\mathbf{I}_N)^{-1}) $, and, similar to~\eqref{eq:eta proof deriv1}-\eqref{eq:eta prime 2idx}, the deterministic equivalent for the derivative of ${\zeta}_{b,g}(x)$ with respect to $x$ at $x=0$ can be evaluated as \begin{equation}\label{eq:eta prime} \bar{\zeta}_{b,g}^{\prime}=\frac{{\rm Tr} \big(({\boldsymbol{\Theta}}_{b,g} {\mathbf{T}}_{b,g})^2\big)/N}{1-\rho_{b,g} {\rm Tr} ((\boldsymbol{\Theta}_{b,g} \mathbf{T}_{b,g})^2)}. \end{equation} Thus, the deterministic equivalent for ${\epsilon_{b,k}}$ can be evaluated, based on~\eqref{eq:epsil group exact} and~\eqref{eq:norm V}, as $\bar \epsilon_{b,k}=-\sum_{j\in \mathcal U_{b}\cap \mathcal{G}_g } \bar p_{j} N[\bar{\mathbf{G}}]_{k,j}/a_{b_j,j}^2 \bar \zeta_{b,g}'$. Recalling the non-diagonal elements of $\bar{\mathbf{G}}$ from~\eqref{eq:nonnormal G} and using $\bar{\eta}_{b,g,k}'=\bar{m}_{b,j,k}'/a_{b,j}^2,\forall j\in \mathcal{G}_g$, we get \begin{equation} \label{eq:ICI appen} \bar \epsilon_{b,k} = \frac{(\bar{\eta}_{b,g,k}^{\prime})/(\bar{\zeta}_{b,g}^{\prime})}{(1+\bar{\lambda}_k a_{b,k}^2 \bar \eta_{b,g})^2}\sum_{j\in \mathcal U_{b}\cap \mathcal{G}_g }\!\! \bar p_{j} , \quad\forall k\in \mathcal{G}_g.
\end{equation} Finally, replacing $\bar{\eta}_{b,g,k}^{\prime}$ and $\bar{\zeta}_{b,g}^{\prime}$ with their equivalents in~\eqref{eq:eta prime 2idx} and~\eqref{eq:eta prime}, respectively, and denoting $P_{b,g}=\sum_{j\in \mathcal U_{b}\cap \mathcal{G}_g } \bar p_{j}$, the interference term $\bar \epsilon_{b,k}$ can be written as in the corollary. Next, we evaluate the total power required at a given BS to serve UEs within a group in the asymptotic regime. In doing so, we start from the SINR constraints in~\eqref{Opt_problem}, which can be equivalently written for UE~$k$ as \begin{equation} \label{eq:norm SINR} \frac{p_k\left|{\bf h}_{b_k,k}\herm{\bf v}_{k}\right|^{2}/\|{\bf v}_k\|^2}{\epsilon_{b_k,k} + \sum_{b\in{\mathcal B}\setminus b_{k}} \epsilon_{b,k} + \sigma^{2}} \geq {\gamma_{k}} \end{equation} where $\epsilon_{b_k,k} $ denotes the intra-cell interference and $\epsilon_{b,k},\,\forall b\in{\mathcal B}\setminus b_{k} $ are the inter-cell interference terms. Denoting the SINR of the $k^\text{th}$ UE by $\Gamma_k$, we have $\Gamma_k-\bar{\Gamma}_k\rightarrow 0$ almost surely with \begin{equation} \bar{\Gamma}_k=\frac{ ({N a_{b_k,k}^2 \bar{\eta}_{b_k,g_k}^2}/{\bar{\zeta}_{b_k,g_k}^{\prime}} )\bar{p}_k }{\bar{\epsilon}_{b_k,k} + \sum_{b\in{\mathcal B}\setminus b_{k}} \bar{\epsilon}_{b,k} + \sigma^{2}} \end{equation} where the deterministic equivalents for the numerator and denominator of~\eqref{eq:norm SINR} follow directly from~\eqref{eq:nonnormal G} and~\eqref{eq:ICI appen}. Since the SINR constraints at the optimal point must be satisfied with equality, we set $\bar{\Gamma}_k=\gamma_k$ to evaluate $\bar{p}_k$ as \begin{equation} \label{eq:pk group} \bar{p}_k= \frac{\bar{\zeta}_{b_k,g_k}^{\prime}}{\bar{\eta}_{b_k,g_k}^2} \frac{\gamma_k}{N a_{b_k,k}^2 } ({\bar{\epsilon}_{b_k,k} + \sum\limits_{b\in{\mathcal B}\setminus b_{k}} \bar{\epsilon}_{b,k} + \sigma^{2}}). \end{equation} Now, consider BS $b'$ with a subset of UEs within group~$g'\in \mathcal{A}_{b'}$.
The transmit power imposed on BS $b'$ for serving UEs $k\in\mathcal{G}_{g'}\cap \mathcal{U}_{b'}$ is given by $\bar{P}_{b',g'}=\sum_{k\in \mathcal U_{b'}\cap \mathcal{G}_{g'}} \bar p_{k}$, with $\bar{p}_k$ given by~\eqref{eq:pk group}. Keeping this in mind and plugging~\eqref{eq:ICI appen} into~\eqref{eq:pk group}, we get a system of equations to evaluate $\bar{P}_{b',g'}$ as follows \begin{equation} \label{eq:sys eq group P} \begin{aligned} \bar{P}_{{{b'}},{g'}} \frac{\bar{\eta}_{{{b'}},{g'}}^2} {\bar{\zeta}_{{{b'}},{g'}}^{\prime}} = \frac{1}{N}\sum_{k\in \mathcal U_{{{b'}}}\cap \mathcal{G}_{g'}} \frac{\gamma_k}{ a_{{{b'}},k}^2 } \sigma^2+ \sum_{b\in \mathcal{B}}\sum_{g\in \mathcal{A}_b}\sum_{k\in \mathcal U_{{{b'}}}\cap \mathcal{G}_{g'}\cap\mathcal{G}_g} \!\!\!\!\!\!\!\!\!\!\! \frac{ {\rm Tr} \big((\boldsymbol{\Theta}_{b,g} \mathbf{T}_{b,g})^2\big)/{\rm Tr} (\boldsymbol{\Theta}_{b,g} \mathbf{T}_{b,g}^2)}{N(1+\frac{\gamma_ka_{b,k}^2 \bar \eta_{b,g}}{a_{{{b'}},k}^2 \bar \eta_{{{b'}},{g'}}})^2} \frac{\gamma_k{a_{b,k}^2} }{{a_{{{b'}},k}^2} }\bar{P}_{b,g} \\ \end{aligned} \end{equation} evaluated $\forall {{b'}}\in \mathcal{B},\forall {g'}\in\mathcal{A}_{b'}$. By rearranging the above equations in matrix form, the system of equations in the corollary is obtained. \section{Distributed Optimization} \label{sec:distributedOpt} {The large system analysis provided in Section~\ref{sec: large system analysis} gives the optimal power allocations in the asymptotic regime. Relying on these results, one can directly obtain asymptotically optimal receive and transmit beamforming vectors for UE $k$ as $\bar{\mathbf{v}}_{k}=(\sum_{j\in \mathcal U\setminus k}{\bar \lambda_{j}}{\mathbf{ h}}_{b_{k},j}{\mathbf{ h}}_{b_{k},j}\herm + \mu_{b_k}N\mathbf{I}_{N})^{-1}\mathbf{ h}_{b_k,k}$ and $\bar{{\vec w}}_{k}=\sqrt{\bar{\delta}_{k}/N} \bar{\vec v}_{k}$, respectively.
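Given $\{\bar\lambda_j\}$ and $\{\bar\delta_k\}$, the beamformer expressions above reduce to one linear solve per UE. A minimal sketch with illustrative names and real-valued channels:

```python
import numpy as np

def asymptotic_beamformers(H, lbar, dbar, muN):
    """vbar_k = ( sum_{j != k} lbar_j h_j h_j^H + mu_{b_k} N I )^{-1} h_k
    and wbar_k = sqrt(dbar_k / N) vbar_k, as given above.
    H: N x K matrix of local channels at the serving BS; muN[k] plays the
    role of mu_{b_k} * N (illustrative single-cell indexing)."""
    N, K = H.shape
    V = np.empty((N, K))
    for k in range(K):
        mask = np.arange(K) != k
        M = (H[:, mask] * lbar[mask]) @ H[:, mask].T + muN[k] * np.eye(N)
        V[:, k] = np.linalg.solve(M, H[:, k])
    W = V * np.sqrt(dbar / N)  # scale receive directions into transmit precoders
    return V, W
```

Only locally measured channels and the (statistics-based) scalars $\bar\lambda_j$, $\bar\delta_k$ enter this computation, which is the basis of the decentralized schemes below.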
The computation of asymptotic beamformers for UE $k$ needs only locally measured CSI at the serving BS $b_k$, i.e., ${\mathbf{ h}}_{b_{k},j},\,\forall j\in \mathcal{U}$, along with the asymptotic power allocations (i.e., $\{\bar \lambda_{j}\}$ and $\{\bar{\delta}_{k}\}$) whose computation needs only statistical information from neighboring BSs. However, the resulting beamforming vectors satisfy the SINR constraints only asymptotically~\cite{LucaCouilletDebbahJournal2015} but not for a finite value of $N$, as demonstrated later by numerical examples. In order to ensure the SINR constraints, we invoke the solution briefly introduced in Section~\ref{sec:distibuted Sol intro}, which is elaborated in more detail in the following. } \vspace{-0.3cm} \subsection{Distributed QoS Guaranteed Precoding} { The optimization problem formulation in~\eqref{Opt_problem} identifies the ICIs as the principal coupling parameters among BSs. Starting from the ICI formulation in~\eqref{3.1} and relying on the large system analysis in Section~\ref{sec: large system analysis}, we propose deterministic equivalents for these coupling parameters as \begin{align}\label{3.4} \bar \epsilon_{b,k}= -\sum_{j\in \mathcal U_{b}}\bar \delta_{j} [\bar{\mathbf{G}}]_{k,j} \end{align} with $\bar{\boldsymbol{\delta}} = \bar{ \mathbf{G}}^{-1} \mathbf{1}_{K}$, and the deterministic equivalents for the elements of the coupling matrix $\{[\bar{\mathbf{G}}]_{k,j}\}$ given as in Theorem~\ref{th:down pw updwn duality}. Observe that the computation of \eqref{3.4} requires only the channel correlation matrices $\{{\bf \Theta}_{b,k},\,\forall b,k\}$ to be shared among BSs. Thus, plugging the deterministic ICIs from~\eqref{3.4} into the optimization problem in~\eqref{Opt_problem}, the centralized problem can be decoupled into independent sub-problems at each BS. This solution is summarized in Algorithm~\ref{alg:ICI_approx}.
} \begin{algorithm} [H] \caption{Decentralized beamforming with approximate ICI values.} \label{alg:ICI_approx} \begin{algorithmic}[1] \LOOP \IF{Any change in the UEs' statistics or during the initial stage} \STATE Each BS sends the updated correlation matrices to the coupled BSs via backhaul. \STATE Update $\bar{\boldsymbol{\lambda}}$ and $\bar{\boldsymbol{\delta}}$ values locally based on Theorem~\ref{th:up powers updw duality} and~\ref{th:down pw updwn duality}. \STATE Update the approximate ICIs locally based on~\eqref{3.4}. \ENDIF \STATE Use the approximate ICIs as fixed $\epsilon_{b,k}$ in \eqref{Opt_problem}, and solve the sub-problems locally to get the downlink beamformers. \ENDLOOP \end{algorithmic} \end{algorithm} \vspace{-0.2cm} { Algorithm~\ref{alg:ICI_approx} allows the BSs to obtain the precoders locally relying only on shared statistics and locally available CSI. The resulting precoders satisfy the SINR constraints. The sub-problems at each BS can be solved using a convex optimization solver or fixed point iterations as shown in~\cite{Pennanen-Tolli-Latva-aho-TSP-13}. Note that the resulting problem with approximate ICIs is a restriction of the original problems~\eqref{eq:prim problemSimple} and~\eqref{Opt_problem}, and the infeasibility rate and total transmission power would increase, depending on the accuracy of the ICI approximations. However, since the deterministic equivalents of ICI terms provide good approximations for the optimal ICI values in the finite regime, the performance loss is small for a relatively moderate number of UEs and antennas. } \vspace{-0.3cm} \subsection{{ An Alternative Distributed Precoding Method With Reduced Backhaul Signaling}} Although each ${\bf \Theta}_{b,k}$ changes slowly in time compared to small-scale fading components, the exchange of such information among coupled BSs via backhaul links may not be practical when $N$ and $K$ are large. 
To overcome this issue, a heuristic solution is proposed that reduces the amount of shared information at the cost of a slightly higher transmit power (as shown in numerical results). We notice that $[\bar{\mathbf{G}}]_{k,j},\forall j\in \mathcal U_{b}$ are the coupling terms between UE $j,\forall j\in \mathcal U_{b}$ and UE $k$, and define the amount of interference leaking from the precoders of UE $j,\forall j\in \mathcal U_{b}$ to UE $k$. In particular, observe that the amount of interference from an interfering BS $b$ to a given UE $k$ in~\eqref{3.4} is given in terms of $[\bar{\mathbf{G}}]_{k,j},\forall j\in \mathcal U_{b}$ and $\delta_{j},\,\forall j\in \mathcal U_{b}$. On the other hand, for a given set of Lagrangian multipliers $\{\bar{{\lambda}}_k\}$, the coupling terms $[\bar{\mathbf{G}}]_{k,j},\,\forall j\in \mathcal U_{b}$ in~\eqref{eq:nonnormal G} depend only on the statistics locally available at the interfering BS~$b$. This observation motivates extracting an approximation of the interference that a BS $b$ causes to a non-served UE $k$ based on partial knowledge of non-local statistics (statistics available at other BSs) while utilizing the locally available statistics. In doing so, we make the large-scale attenuation (due to pathloss and fading) explicit and express the correlation matrices as $a^2_{b,k}{\bf \Theta}_{b,k}$ where $a^2_{b,k}$ accounts for the pathloss from BS $b$ to UE $k$. We assume that each BS $b$ is able to estimate (perfectly) the channel correlation matrices $a^2_{b,k}{\bf \Theta}_{b,k},\, \forall k$ while the correlation matrices $a^2_{b',k}{\bf \Theta}_{b',k}, \, \forall k$ from all other BSs $b' \neq b$ are not known locally at BS~$b$. Only the large-scale attenuation values $\{a^2_{b',k}\}$ are assumed to be available for the non-local channels.
The first assumption relies on the observation that correlation matrices remain constant for a sufficiently large number of reception phases to be accurately estimated at the BS~\cite{BjörnsonLuca2016}. The second one is motivated by the observation that most current standards require the UEs to periodically report received signal strength indication (RSSI) values to their serving BSs (usually using orthogonal uplink resources). Under the assumption that nearby BSs are also able to receive such RSSI measurements, a partial knowledge of non-local channel statistics can be obtained without any information exchange through backhaul~links\footnote{Alternatively, RSSI values can be exchanged among BSs over backhaul links.}. \begin{figure*}[tpb] \begin{center} \includegraphics[clip, trim=0cm 0cm 0cm 0cm, width=\textwidth]{Fig1.pdf} \end{center}\vspace{-0.3cm} \caption{ {An illustration of locally measured statistics and backhaul signaling in Algorithm 2.}} \label{fig:Heuristic algor} \end{figure*} Under the above assumptions, each BS can locally compute (through Theorems \ref{th:up powers updw duality} and \ref{th:down pw updwn duality}) approximations of the optimal powers along with the coupling parameters, which can be used in \eqref{3.4} to locally obtain an approximation for the interference that the BS causes to a non-served UE. The approximate ICI values are then sent to the respective serving BSs over the backhaul link to be plugged into the ICI constraints of the local optimization problems. {Fig.~\ref{fig:Heuristic algor} shows a two-cell example where a given BS $b$ locally measures the statistics of the local channel vectors, i.e., ${a}_{b,k}^2 {\bf \Theta}_{b,k}, k\in\{1,2\}$, and obtains the pathlosses of the non-local channels, i.e., ${a}_{b',k}^2, k\in\{1,2\}$, from the reported RSSI values. Finally, BS $b$ sends the approximation for the interference that it causes to UE2 over the backhaul link.
This solution is summarized in Algorithm \ref{alg:ICI_approx heuristic}.} \vspace{-0.2cm} \begin{algorithm} [h] \caption{Heuristic solution} \label{alg:ICI_approx heuristic} \begin{algorithmic}[1] \LOOP \IF{Any change in the UEs' statistics or during the initial stage} \STATE UEs broadcast the pathloss information to the nearby BSs using uplink resources. \STATE Each BS $b$ locally calculates approximations for $\delta_k$ and $[{\bf G}]_{i,j}$ values using Theorems~\ref{th:up powers updw duality} and~\ref{th:down pw updwn duality} where BS $b$ locally assumes ${a}_{b',k}^2 {\bf \Theta}_{b',k}={a}_{b',k}^2{\bf I}_{N} \ \forall k$ for all $b'\neq b$. \STATE ICI values $\bar{\epsilon}_{b,k}, \forall k\not\in \mathcal{U}_{b}$ are computed from \eqref{3.4} at each BS $b$. \STATE Each BS $b$ sends the ICI values $\bar{\epsilon}_{b,k}, \forall k\not\in \mathcal{U}_{b}$ to the corresponding serving BSs. \ENDIF \STATE BSs use the approximate ICIs as fixed $\epsilon_{b,k}$ in~\eqref{Opt_problem} and solve the sub-problems locally to get the downlink precoders. \ENDLOOP \end{algorithmic} \end{algorithm} \vspace{-0.5cm} \subsection{Backhaul Signaling and Complexity Analysis} { Table~\ref{tab:t1} presents the locally available CSI and the required information exchange over the backhaul links for a given BS~$b$ to obtain the beamformers with Algorithms 1 and 2. Algorithm~\ref{alg:ICI_approx heuristic} is a semi-static coordination method that, unlike the available decoupling methods requiring a continuous exchange of CSI messages~\cite{ Tolli-Pennanen-Komulainen-TWC10, Pennanen-Tolli-Latva-aho-spl-11, ChaoADMM2012, Pennanen-Tolli-Latva-aho-TSP-13}, relies only on local channel statistics and reported path gain values. This makes it more resilient to limited link capacity and latency.
Unlike Algorithm~\ref{alg:ICI_approx}, which needs exchanges of correlation matrices over the backhaul links, Algorithm~\ref{alg:ICI_approx heuristic} sends the approximate ICI values (scalars) on the backhaul links only when sufficient changes occur in the channel statistics. This reduces the exchange rate by a factor of almost $N^2/2$. In the numerical analysis of Section~\ref{sec:Simulation Results}, a small difference in the transmission powers of these algorithms is observed, which is due to the difference in the accuracy of the approximate ICI values.} {Concerning the complexity analysis, we notice that the proposed algorithms need to solve the sub-problems at BSs subject to the fixed ICIs given by~\eqref{3.4}. The solution to such sub-problems can be obtained using SOCP, semidefinite programming (SDP) and uplink-downlink duality~\cite{Pennanen-Tolli-Latva-aho-TSP-13}. This latter approach requires lower computational complexity compared to SOCP and SDP~\cite{Pennanen-Tolli-Latva-aho-TSP-13}. Specifically, at a given BS $b$, the Lagrangian multipliers associated with rate and ICI constraints are evaluated via a projected sub-gradient method and a simple fixed point iteration~\cite{Pennanen-Tolli-Latva-aho-TSP-13}. This involves a matrix inversion of size $N\times N$ with a complexity per iteration in the order of $\mathcal{O}(|\mathcal{U}_b|\times N^3)$ where $|\mathcal{U}_b|$ denotes the number of UEs served by BS $b$. Concerning the calculation of approximate ICIs, we notice that the ICI terms in~\eqref{3.4} are updated only when there are sufficient changes in channel matrix statistics, which vary at a much slower rate than the fading CSI.
The computation of approximate ICIs requires evaluation of $\{{\bar{\lambda}}_k\}$ and $\{{\bar{\delta}}_k\}$ values with a complexity of order $\mathcal{O}(K\times N^3)$ and $\mathcal{O}(K^3)$, respectively.} \begin{table}[] \centering \caption{Locally available knowledge and acquired information over backhaul at BS $b$} \label{tab:t1} { \begin{tabular}{l|l|l|} \cline{2-3} & \scriptsize{Local CSI at BS $b$} & \scriptsize{Acquired information from other BSs} \\ \hline \multicolumn{1}{|l|}{\!\!\!\scriptsize{Alg. 1}\!\!\!} & \!\!$\{{a}_{b,j}^2 {\bf \Theta}_{b,j}\}, \{{\bf{h}}_{b,j}\}, \forall j\in\mathcal{U}$\!\! & $\{{a}_{b',j}^2 {\bf \Theta}_{b',j}\}, \forall j\in\mathcal{U},\forall b'\neq b$\\ \hline \multicolumn{1}{|l|}{\!\!\!\scriptsize{Alg. 2}\!\!\!} & \!\!$\{{a}_{b,j} ^2{\bf \Theta}_{b,j}\}, \{{\bf{h}}_{b,j}\}, \forall j\in\mathcal{U}$\!\! & $\{\bar{\epsilon}_{b',j}\}, \forall j \in\mathcal{U}_{b},\forall b'\neq b$ \\ \hline \end{tabular}} \vspace{-0.5cm} \end{table} \section{Network with Partitioned UE Population} { So far, we have assumed distinct statistical properties for the UEs. However, as in many works in the literature (such as~\cite{UserPartionAdhikary,GroupKim2015}), one can consider a special scenario where the UEs are grouped on the basis of their statistical properties. In particular, each BS partitions the UE population into groups such that the eigenspaces of the correlation matrices in distinct groups are asymptotically orthogonal (referred to as the asymptotic orthogonality condition). The main idea of this section is to exploit the asymptotic orthogonality condition and the similarities in the statistical properties of nearly co-located UEs~\cite{UserPartionAdhikary} to obtain mathematically and computationally simpler approximations of the ICI terms.
The dependency of the approximate ICIs on group-specific correlation properties allows a further reduction in the backhaul exchange rate of the decentralized solutions in Section~\ref{sec:distributedOpt}. Moreover, the analysis motivates the development of the decentralization framework within a context similar to two-stage beamforming~\cite{UserPartionAdhikary} (discussed further in Section~\ref{sec:Conclusion}). {The UE-grouping idea is presented next by using the simple two-cell configuration in Fig.~\ref{fig:Groups}. The detailed multicell model is presented mathematically later on. Consider a BS, equipped with a linear array, which properly partitions the UE population into distinct groups such that the angles of arrival of the UEs' signals within distinct groups, at the given BS, are sufficiently separated. Assuming the one-ring channel model\footnote{In a typical cellular configuration with a tower-mounted BS and no significant local scattering, the propagation between the BS antennas and any given UE is expected to be characterized by the local scattering around the UE, resulting in the well-known one-ring model~\cite{shiu2000fading}. }, it is shown in~\cite{UserPartionAdhikary} that the correlation matrices of UEs in distinct groups have nearly orthogonal eigenspaces as $N\rightarrow \infty$. Following this line of thought, BS $b$ ($b'$) in Fig.~\ref{fig:Groups}, equipped with a linear array, resolves two smaller groups $g_1,g_2$ (equivalently $g_4,g_5$ for BS $b'$) and one bigger group $g_3$ (equivalently $g_6$ for BS $b'$) with non-overlapping supports of the AoA distributions. The beams in the figure represent the AoA spread of the UEs' signals at the BSs. Equivalently, the groups with dashed (dotted) contours correspond to the partition related to BS $b$ ($b'$).
In the following, the index $g$ is used to refer to the $g^{\text{th}}$ group, the set of UEs in group $g$ is denoted by $\mathcal{G}_g$ and the set of all groups at a given BS~$b$~is~denoted~by~$\mathcal{A}_b$.} \label{sec:grouped UEs} \begin{figure*}[tpb] \begin{center} \includegraphics[clip, trim=0cm 0cm 0cm 0cm, width=\textwidth]{Fig2.pdf} \end{center}\vspace{-0.3cm} \caption{{An illustration of UE-group formation at BSs based on the UEs' angular separation }} \label{fig:Groups} \end{figure*} {In the following, the aforementioned assumptions are presented mathematically for a generic multicell setting. We declare the large-scale attenuation $a^2_{b,k}$ explicitly and express the correlation matrices as $a^2_{b,k}{\bf \Theta}_{b,k}$. Also, the correlation matrices are assumed to have the eigenvalue decomposition $a^2_{b,k}\boldsymbol{\Theta}_{b,k}=a^2_{b,k}{\bf U}_{b,k}{\bf \Xi}_{b,k}{\bf U}_{b,k} \herm$. The diagonal $r_{b,k}\times r_{b,k}$ matrix ${\bf \Xi}_{b,k}$ holds the non-zero eigenvalues, and the corresponding eigenvectors are stacked in ${\bf U}_{b,k}\in \mathbb{C}^{N\times r_{b,k}}$. Under the grouping assumption, we declare the aforementioned asymptotic orthogonality condition as follows: ${\bf U}_{b,i}\herm {\bf U}_{b,j}= {\bf 0},\, \forall i\in \mathcal{G}_g, j\not\in \mathcal{G}_g,\forall g\in \mathcal{A}_b$ as $ N \rightarrow \infty$~\cite{UserPartionAdhikary}, which indicates the orthogonality of the eigenspaces of the correlation matrices for UEs in distinct groups in the asymptotic regime. Within this context, a mathematically attractive simplification is to assume a homogeneous model wherein the UEs in a group have identical correlation while experiencing different pathlosses, i.e., $a_{b,k}^2 \boldsymbol{\Theta}_{b,k}=a_{b,k}^2\boldsymbol{\Theta}_{b,g}, \,\forall k \in \mathcal{G}_g,\forall g\in \mathcal{A}_b$ where $a_{b,k}^2$ accounts for UE-specific pathloss values and $\boldsymbol{\Theta}_{b,g}$ dictates the group-specific correlation properties.
These conditions do not hold exactly in practice. However, as UEs in the same group are likely to be nearly co-located, one might closely approximate these conditions by proper UE scheduling (specifically when there is a large pool of UEs to be scheduled), which is, however, beyond the scope of this work. Given the aforementioned system model, one can write $\bar{m}_{b,i}$ in~\eqref{eq:ST Th1} for UEs in a group $g$ resolved at BS $b$ as $\bar{m}_{b,i}={a}_{b,i}^2\bar{\eta}_{b,g}$~with $\bar{\eta}_{b,g}$ being a group-specific measure evaluated as $\forall g \in \mathcal{A}_b, \forall b$ \begin{equation} \label{eq:eta multi cell} \bar{\eta}_{b,g} = \sum_{i=1}^{r_{g}} {\biggr( \sum_{j\in \mathcal{G}_g} \frac{{\gamma_{j}} }{ {a_{b_j,j}^2}\bar{\eta}_{b_j,g_{j}}/{a_{b,j}^2}+\gamma_{j}\bar{\eta}_{b,g}} + {N\mu_b}({[{\bf \Xi}_{b,g}]_{i,i}}) \biggr)^{-1}} \end{equation} where for a given UE $j$, we use $g_j$ to denote the group resolved at the serving BS $b_j$, which contains UE $j$ as a member. The formulation of the measure $\bar{\eta}_{b,g}$ follows directly from~\eqref{eq:ST Th1} by substituting $\boldsymbol{\Theta}_{b,k},\,\forall k \in \mathcal{G}_g$ with $a_{b,k}^2\boldsymbol{\Theta}_{b,g}, \,\forall k \in \mathcal{G}_g$. Note that $\boldsymbol{\Theta}_{b,g}={\bf U}_{b,g}{\bf \Xi}_{b,g}{\bf U}_{b,g} \herm$ with ${\bf U}_{b,g}\herm {\bf U}_{b,g'}= {\bf 0},\, \forall g,g'\in \mathcal{A}_b,\, g\neq g'\, {\text{ as}}\, N\rightarrow \infty$, and ${\bf U}_{b,g}\herm {\bf U}_{b,g}= {\bf I}_{r_{g}}$, which gives $\bar{\eta}_{b,g}$ as in~\eqref{eq:eta multi cell}. The interactions among the UE groups are regulated via the $\bar{\eta}_{b,g}$ values in~\eqref{eq:eta multi cell} such that the SINR constraints for all UEs within the groups are satisfied asymptotically. The corresponding asymptotically optimal uplink power for UE $j$ can be evaluated as $\bar{\lambda}_j/N=\gamma_j/(N a_{b_j,j}^2\bar{\eta}_{b_j,g_j})$.
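As a concrete illustration of the fixed point in~\eqref{eq:eta multi cell}, the following sketch (our own illustrative NumPy code, not part of the proposed algorithms) iterates the equation in the simplified case of a single BS and a single group, where the ratio ${a_{b_j,j}^2}\bar{\eta}_{b_j,g_{j}}/{a_{b,j}^2}$ reduces to $\bar{\eta}_{b,g}$; the plain sweep and the stopping rule are assumptions for illustration only.

```python
import numpy as np

def eta_group_single_cell(Xi, gamma, N, mu, iters=500, tol=1e-12):
    """Fixed point of the group-specific measure eta_bar, specialised to one
    BS and one group (so the cross-BS ratio collapses to eta itself).

    Xi    : nonzero eigenvalues of the group correlation matrix (length r_g)
    gamma : SINR targets of the UEs in the group
    N, mu : number of antennas and BS power weight
    """
    eta = 1.0
    for _ in range(iters):
        # inner sum over the UEs j in the group
        load = np.sum(gamma / (eta + gamma * eta))
        # outer sum over the r_g eigenvalue terms
        eta_new = np.sum(1.0 / (load + N * mu * Xi))
        if abs(eta_new - eta) < tol:
            return eta_new
        eta = eta_new
    return eta
```

For instance, with $r_g=N=4$, two UEs with unit SINR targets, unit eigenvalues and $\mu_b=1$, the iteration converges to $\bar{\eta}=(r_g-|\mathcal{G}_g|\gamma/(1+\gamma))/(N\mu_b)=0.75$.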
As a consequence of the considered system model, the optimal ICI terms, as the inter-cell coordination messages, can be characterized directly in terms of channel statistics and total transmit power per group as stated in the following corollary. \vspace{-0.2cm} \begin{corollary} \label{cor:homo closeICI} Consider the multicell scenario with grouped UEs experiencing homogeneous correlation properties $a_{b,k}^2\boldsymbol{\Theta}_{b,k}=a_{b,k}^2\boldsymbol{\Theta}_{b,g}, \,\forall k \in \mathcal{G}_g,\forall g\in \mathcal{A}_b$. {Then under Assumption \ref{as:1} and given the growth rate defined in Assumption~\ref{as:0}, the optimal ICI values $\epsilon_{b,k},\forall k \in \mathcal{G}_g\centernot\cap \mathcal{U}_b$ converge almost surely to the deterministic equivalents $\bar{\epsilon}_{b,k}$ with} \vspace{-0.2cm} \begin{equation} \label{eq:determ ICI homogen} \bar \epsilon_{b,k}= \varphi_{b,k} {a_{b,k}^2} \bar{P}_{b,g},\,\forall k \in \mathcal{G}_g\centernot\cap \mathcal{U}_b \end{equation} where \vspace{-0.3cm} \begin{equation} \label{eq:varphi in ici} \varphi_{b,k}= \frac{{\rm Tr} ((\boldsymbol{\Theta}_{b,g} \mathbf{T}_{b,g})^2)/{\rm Tr} ( \boldsymbol{\Theta}_{b,g} \mathbf{T}_{b,g}^2)}{(1+\gamma_k(a_{b,k}^2 \bar \eta_{b,g})/(a_{b_k,k}^2 \bar \eta_{b_k,g_k}))^2} \end{equation} where the term $\bar \eta_{b,g}$ is the group specific parameter given by~\eqref{eq:eta multi cell} or equivalently as $\bar \eta_{b,g}=\frac{1}{N}{\rm Tr}({\boldsymbol \Theta}_{b,g}{\mathbf T}_{b,g})$ with ${\bf{T}}_{b,g} = (\frac{1}{N}\sum_{j\in \mathcal{G}_g}({ \gamma_{j}{a_{b,j}^2\bf{\Theta}}_{b,g}})/({a_{b_j,j}^2 \bar \eta_{b_j,g_j}+ \gamma_j a_{b,j}^2 \bar \eta_{b,g}}) + \mu_{b}{\bf I}_N)^{-1}$. The asymptotic total transmit power at BS~$b$ required for serving UEs in $\mathcal{G}_g\cap \mathcal{U}_b$ is denoted by $\bar{P}_{b,g}$. 
Stacking all $\bar{P}_{b,g},\forall g\in\mathcal{A}_b, \forall b$ in a vector, we get the system of equations $[\bar{P}_{1,1},...,\bar{P}_{1,|\mathcal{A}_1|},...,\bar{P}_{|\mathcal{B}|,|\mathcal{A}_{|\mathcal{B}|}|}]=({\bf D}-{\bf L})^{-1}\bf u$ where the elements of the vector $\bf u$ and the matrix $\bf L$ are given as $[{\bf u}]_{(b,g)}=\frac{1}{N}\sum_{j\in \mathcal U_{{{b}}}\cap \mathcal{G}_g} {\gamma_j}/{ a_{{{b}},j}^2 }$ and $[{\bf L}]_{(b',g'),(b,g)}=\frac{1}{N}\sum_{j\in \mathcal U_{{b'}}\cap \mathcal{G}_g\cap \mathcal{G}_{g'}} \varphi_{b,j} {\gamma_j{a_{b,j}^2} }/{{a_{{b'},j}^2} }$, respectively. The tuple index $(b',g')$ points to the row/column element corresponding to group $g'$ at BS $b'$. The matrix $\bf D$ is a diagonal matrix where $[{\bf D}]_{(b,g),(b,g)}=\frac{\bar{\eta}_{{{b,g}}}^2} {\bar{\zeta}_{{{b,g}}}^{\prime}}$ with $ \bar{\zeta}_{b,g}^{\prime}=\frac{1}{N}{\rm Tr \big(\boldsymbol{\Theta}_{b,g} \mathbf{T}_{b,g}^2}/{(1-\rho_{b,g} \rm Tr ((\boldsymbol{\Theta}_{b,g} \mathbf{T}_{b,g})^2)\,)\big)} $ and $\rho_{b,g}=\frac{1}{N^2} \sum_{j \in \mathcal{G}_g} 1/{( \frac{a_{b_j,j}^2}{\gamma_j a_{b,j}^2}\bar{\eta}_{b_j,g_j} + \bar{\eta}_{b,g})^2}$. \end{corollary} \begin{IEEEproof} The proof is given in Appendix~\ref{subse:proof of corr ICI}. \end{IEEEproof} Looking at the derived expression in~\eqref{eq:determ ICI homogen}, we observe that the asymptotically optimal ICI $\bar{\epsilon}_{b,k}$ is directly related to the group aggregated transmit power at the interfering BS $b$, degraded by the pathloss $a_{b,k}^2$. The total transmit power is generally proportional to the ratio $K/N$~\cite{ICC17Asghari}. Therefore, the ICIs are expected to go to zero only when $N$ is much larger than $K$. The parameter $\varphi_{b,k}$ in~\eqref{eq:varphi in ici} indicates that UEs with higher SINR targets are generally assigned smaller ICIs.
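Once the group statistics are available, the per-group power balance of Corollary~\ref{cor:homo closeICI} amounts to a single small linear solve. The sketch below is our own illustration and assumes that ${\bf D}$, ${\bf L}$ and ${\bf u}$ have already been assembled from $\bar{\eta}$, $\varphi$, the SINR targets and the pathlosses; only the final solve and the resulting ICI map are shown.

```python
import numpy as np

def group_powers(D, L, u):
    """Per-group transmit powers P_bar from the stacked system (D - L) p = u."""
    return np.linalg.solve(D - L, u)

def group_ici(phi_bk, a2_bk, P_bar_bg):
    """Deterministic ICI of eq. (14): eps_bar_{b,k} = phi_{b,k} a_{b,k}^2 P_bar_{b,g}."""
    return phi_bk * a2_bk * P_bar_bg
```

A toy two-group system with ${\bf D}=\mathrm{diag}(2,2)$, symmetric coupling $0.5$ and unit load yields $\bar{P}=2/3$ per group, from which each out-of-cell ICI follows by a single multiplication.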
Also, the target SINRs are multiplied by the $(a_{b,k}^2 \bar \eta_{b,g})/(a_{b_k,k}^2 \bar \eta_{b_k,g_k})$ terms to reflect the position of the UEs with respect to the serving and the interfering BSs, as well as the priority of the BSs via the $\bar \eta_{b,g}$ terms. Assuming a properly partitioned UE population, one can directly utilize the ICI expression from Corollary~\ref{cor:homo closeICI} to obtain the approximate ICIs in Algorithm~\ref{alg:ICI_approx}. This brings two benefits. Firstly, the approximate ICIs can be attained based on group-specific correlation properties, which reduces the backhaul exchange rate. Secondly, the computational effort for ICI evaluation is decreased. Denoting the total number of groups formed at all BSs by $M$, we notice that the computation of asymptotic ICIs for UEs of a group in~\eqref{eq:determ ICI homogen} requires an $M\times M$ matrix inversion as compared to the $K\times K$ matrix inversion required in the generic formulation~\eqref{3.4}. Generally, we expect $M$ to be much smaller than $K$. The viability of this approach is studied numerically in Section~\ref{sec:Simulation Results}. \section{Introduction} High spatial utilization is a promising approach to achieve the significant spectral efficiency enhancements required for 5G cellular networks. In general, this is achieved by using a large number of antennas $N$ at the base stations (BSs) to serve a large number of user equipments (UEs) $K$ on the same frequency-time resources. The need for serving such a large number of UEs in multicellular environments underscores the importance of proper precoder design that takes into account intercell coordination and the subsequent challenges in such large networks.
In the context of massive multiple-input multiple-output (MIMO)~\cite{marzlaryngo16,BjoHoydSang17} under the assumption of i.i.d.~Rayleigh fading channels (i.e., no spatial correlation), as $N\to\infty$ with $K$ fixed, non-cooperative precoding schemes such as maximum ratio transmission~\cite{marzetta2010noncooperative}, single-cell~\cite{hoydis2013massive,Krishnan2014a} and multicell~\cite{Ngo2012b,EmilEURASIP17} minimum mean squared error (MMSE) schemes can entirely eliminate the multicell interference, and the performance is only limited by pilot contamination. As shown recently in~\cite{BjornsonHS17}, even pilot contamination is not a fundamental asymptotic limitation when a small amount of spatial channel correlation or large-scale fading variation over the array is considered. Nevertheless, when $N$ is not relatively large compared to $K$, cooperation among cells provides additional benefits in mitigating intercell interference (ICI). Coordinated multicell resource allocation is generally formulated as an optimization problem in which the desired network utility is maximized subject to some constraints. In this work, we consider a coordinated multicell multiuser MIMO system in which $L$ BSs, each equipped with $N$ antennas, jointly minimize the transmission power required to satisfy target rates for $K$ single-antenna UEs. {We recognize that ICI coordination in a dense network with a large number of UEs and antennas is very challenging due to the practical limitations of backhaul links.
Hence, we are particularly interested in developing a semi-static coordination scheme that allows the cooperating BSs to locally obtain near-optimal precoders with a minimal information exchange.} \subsection{Prior Work} Coordinated multicell minimum power beamforming has been widely investigated in the literature~\cite{SOCPshamai,Dahrouj-Yu-10,Tolli-Pennanen-Komulainen-TWC10, Pennanen-Tolli-Latva-aho-spl-11, ChaoADMM2012, Pennanen-Tolli-Latva-aho-TSP-13} and has recently received renewed interest in the context of green multicellular networks~\cite{GreenNet}. The optimal solution to this optimization problem can be computed by means of second-order cone programming (SOCP)~\cite{SOCPshamai} or by exploiting uplink-downlink duality~\cite{ Dahrouj-Yu-10}. However, this requires full channel state information (CSI) at all BSs, meaning that the locally measured instantaneous CSI needs to be exchanged among the BSs. To avoid the exchange of CSI among BSs, several decentralized solutions have been proposed in the literature~\cite{Tolli-Pennanen-Komulainen-TWC10, Pennanen-Tolli-Latva-aho-spl-11, ChaoADMM2012, Pennanen-Tolli-Latva-aho-TSP-13}. The underlying idea of all these methods is to reformulate the optimization problem such that the BSs are only coupled by real-valued ICI terms. In this way, the centralized problem can be decoupled by primal or dual decomposition approaches leading to a distributed algorithm, which needs the ICI values to be continuously exchanged among BSs (to follow the changes in the fading process). Despite the remarkable reduction in information exchange, when the system dimensions grow large (as envisioned in 5G networks) and consequently the amount of information to be exchanged increases, the limited capacity and high latency of backhaul links in practical networks may become a bottleneck. A possible way out of these issues is to rely on an asymptotic analysis in which $N$ and $K$ grow large with a non-trivial ratio $K/N$.
In these circumstances, tools from random matrix theory allow us to derive explicit expressions for (most) performance metrics such that they only depend on the channel statistics~\cite{RMT}. The asymptotic analysis for the closely related problem of regularized zero-forcing precoding is presented in~\cite{wagner2012large,zhang2013large}, and the power minimization problem in conjunction with sum-rate maximization is considered in~\cite{SHeY2015SumRtToPWR}. In the course of developing a large system analysis for the power minimization problem subject to UEs' rate constraints, one can begin with the Lagrangian duality formulation developed in~\cite{ Dahrouj-Yu-10} where the optimal power assignments are presented in terms of channel entries. In particular, the results of the large system analysis can be utilized to compute deterministic equivalents for the optimal powers. The authors in~\cite{sanguinetti2014optimal,LucaSanguinetti,zakhour2013min, lakshminarayana2015coordinated, LucaCouilletDebbahJournal2015} perform such an analysis under i.i.d. Rayleigh fading channels in single-cell~\cite{sanguinetti2014optimal,LucaSanguinetti} and multicell~\cite{zakhour2013min, lakshminarayana2015coordinated, LucaCouilletDebbahJournal2015} settings. The impact of spatial correlation on the asymptotic power assignment is studied in~\cite{Huang2012correlated} for a single-cell scenario with UEs experiencing identical correlation matrices. The deterministic equivalents are found to depend only on the long-term channel statistics and on the UEs' target rates. This enables cross-cell coordination based on slowly varying channel statistics and also provides insights into the structure of the optimal solution as a function of the underlying statistical properties. However, the major drawback of using the asymptotic power expressions in practical networks of finite size (with a finite number of antennas $N$) is that the rate constraints are not met, since they can be guaranteed only asymptotically.
\subsection{Contributions} {The main contribution of this paper is to introduce two novel semi-static coordination algorithms that allow BSs to obtain near-optimal QoS-guaranteed precoders locally, subject to relaxed coordination requirements. This is realized by reformulating the optimization problem such that the BSs are only coupled by the ICI values~\cite{Tolli-Pennanen-Komulainen-TWC10}. Then, by utilizing the Lagrangian duality analysis in~\cite{Dahrouj-Yu-10} and techniques from random matrix theory~\cite{RMT}, we derive deterministic equivalents for the optimal ICIs in terms of channel statistics. They are derived under a generic spatially correlated channel model. Such an analysis is instrumental in developing two distributed algorithms.} \begin{itemize} \item {Algorithm~\ref{alg:ICI_approx} incorporates the deterministic ICIs as approximations for the coordination messages in finite networks. This allows the BSs to obtain the precoders locally by exploiting the local CSI and exchanges of slowly varying channel statistics over the backhaul links. } \item { Algorithm~\ref{alg:ICI_approx heuristic} includes a heuristic simplification in the calculation of the approximate ICIs. This allows an alternative backhaul signaling that reduces the backhaul exchange rate requirement by a factor of almost $N^2/2$ compared to Algorithm~1. The performance loss of both algorithms with respect to the optimal solution is shown via numerical results to be small. } \end{itemize} {The large system analysis is also developed for a special scenario where UEs are assumed to be grouped on the basis of their statistical properties as in~\cite{UserPartionAdhikary,GroupKim2015}.
This allows us to derive the approximate ICIs in concise form, thereby revealing the structure of the coordination messages; that is, the optimal ICI values in terms of the underlying channel statistics.} {The analysis ultimately reveals the potential benefits of UE-grouping to further reduce the overhead and computational effort of the proposed decentralized solutions.} Parts of this paper have been published in the conference publications~\cite{Asgharimoghaddam-Tolli-Rajatheva-ICC2014,asgharimoghaddam2014decentralized,HAsghariTolliLucaDebbah2015CAMSAP}. The decentralized solution relying on deterministic ICI values is investigated for an i.i.d. Rayleigh fading model in~\cite{Asgharimoghaddam-Tolli-Rajatheva-ICC2014} and for the correlated scenario in~\cite{asgharimoghaddam2014decentralized, HAsghariTolliLucaDebbah2015CAMSAP}. Specifically, the large system analysis is sketched in~\cite{asgharimoghaddam2014decentralized, HAsghariTolliLucaDebbah2015CAMSAP} while the precise proofs along with derivation details are presented in the current work. In addition, the analysis is extended to the case where UEs are grouped on the basis of their statistical properties. In the numerical analysis, the exponential correlation model utilized in the conference counterparts is extended to a more general case where UEs experience various angles of arrival and angular spreads. A numerical study for the UE-grouping scenario is also provided. The remainder of this work is organized as follows.\footnote{The following notations are used throughout the manuscript. All boldface letters indicate vectors (lower case) or matrices (upper case). Superscripts $(\cdot)\tran$, $(\cdot)\herm$, $(\cdot)^{-1}$, $(\cdot)^{1/2}$ stand for transpose, Hermitian transpose, matrix inversion and positive semidefinite square root, respectively. We use $\mathbb{C}^{m \times n}$ and $\mathbb{R}^{m \times n}$ to denote the set of $m \times n$ complex and real valued matrices, respectively.
Furthermore, $\mathrm{diag}( \cdots)$ denotes the diagonal matrix with elements $(\cdots)$ on the main diagonal. The sets are indicated by calligraphic letters and $|\mathcal{A}|$ denotes the cardinality of the set $\mathcal{A}$. ${\mathrm{Tr}}({\bf A})$ denotes the trace of $\bf A$, and $\|\cdot\|$ represents the Euclidean norm. Finally, $[.]_{i,j}$ denotes the $(i,j)^{\text{th}}$ element of the matrix and $\mathcal{A}\backslash k $ excludes the index $k$ from the set.} In Section~\ref{sec:System model}, the network model and problem formulation are presented. Section~\ref{sec: large system analysis} deals with the large system analysis of the optimal power allocations. Section~\ref{sec:distributedOpt} makes use of the asymptotic analysis to derive two distributed solutions with different coordination overheads. In Section~\ref{sec:grouped UEs}, the analysis is extended to a network in which the UE population is partitioned in groups on the basis of statistical properties as in~\cite{UserPartionAdhikary,GroupKim2015}. Section~\ref{sec:Simulation Results} describes the simulation environment and illustrates numerical results. Conclusions are drawn in Section~\ref{sec:Conclusion} while all the proofs are presented in the Appendices. \section{{System Model And Problem Formulation}} \label{sec:System model} Consider the downlink of a multicell multiuser MIMO system composed of $L$ cells where each BS has $N$ antennas. A total number of $K$ single-antenna UEs is dropped in the coverage area. We assume that each UE is assigned to a single BS while being interfered by the other BSs. We denote the set of UEs served by BS $b$ as $\mathcal U_{b}$ and the index of the BS associated to UE $k$ as $b_{k}$. The set of all UEs is represented by $\mathcal U$ whereas $\mathcal B$ collects all BS indexes. 
{ Under this convention and assuming narrow-band transmission, we define ${\bf h}_{b,k} \in \mathbb{C}^{N}$ as the channel from BS $b$ to UE $k$ and ${\bf w}_{k} \in \mathbb {C}^{N}$ as the precoding vector of UE $k$ at the intended BS. Then, the received signal can be written as} \begin{align} y_{k} = {\bf h}_{b_{k},k}\herm{\bf w}_{k}s_{k} \!+\!\!\!\! \!\!\sum\limits_{i\in{\mathcal U}_{b_{k}}\setminus k}\!\! {\bf h}_{b_k,k}\herm{\bf w}_{i}s_{i} +\!\!\! \!\!\sum\limits_{b\in{\mathcal B}\setminus b_{k}} \!\sum\limits_{i\in {\mathcal U}_{b}}{\bf h}_{b,k}\herm{\bf w}_{i}s_{i} + n_{k} \end{align} where the first term is the desired received signal whereas the second and third ones represent intra-cell and inter-cell interference terms, respectively. The zero mean, unit variance data symbol intended to UE $k$ is denoted by $s_{k}$, and is assumed to be independent across UEs. Denoting the receiver noise by $n_{k}\sim \mathcal {CN}(0,\sigma^{2})$ and treating interference as noise, the SINR attained at UE $k$ is given by \begin{equation}\label{eq:SINR dl} {\Gamma}_{k}=\frac{\left|{\bf h}_{b_k,k}\herm{\bf w}_{k}\right|^{2}}{\sum\limits_{i\in {\mathcal U}_{b_{k}}\setminus k} \left|{\bf h}_{b_k,k}\herm{\bf w}_{i}\right|^{2} + \sum\limits_{b\in\mathcal{B}\backslash b_k, j\in \mathcal{U}_{b}} \left|{\bf h}_{b,k}\herm{\bf w}_{j}\right|^{2} + \sigma^{2}}. \end{equation} \subsection{Coordinated Beamforming} {In a coordinated network, the BSs design precoders jointly to satisfy a given set of SINRs for all UEs while minimizing the total transmit power. In order to reflect different power budgets at BSs, we consider the problem of minimizing the weighted total transmit power with the transmit power at BS $b$ weighted by a factor~$\mu_{b}$ as proposed in~\cite{Dahrouj-Yu-10}. 
This yields \begin{equation}\label{eq:prim problemSimple} \min_{\{{\bf w}_{k}\}} \quad \sum_{b{\in}\mathcal{B}}\sum_{k{\in}\mathcal{U}_{b}} \mu_b \|{\vec w}_{k}\|^2\quad {\text{s.t.}}\quad {\Gamma}_k\ge \gamma_{k}, \;\forall k \in \mathcal{U} \end{equation} where $\gamma_{k}$ denotes the UE's target SINR obtained from the corresponding target rate. The SINR target constraints in~\eqref{eq:prim problemSimple} may appear to be non-convex at first glance. However, non-convex constraints of this type can be transformed into second-order cone constraints~\cite{SOCPshamai}, which enables solving~\eqref{eq:prim problemSimple} via convex optimization. Denoting the ICI term from BS $b$ to UE $k$ as $\epsilon_{b,k}$, the optimization problem in~\eqref{eq:prim problemSimple} can be equivalently reformulated as~\cite{Tolli-Pennanen-Komulainen-TWC10} \begin{subequations} \label{Opt_problem} \begin{align} &\underset{{\vec w}_{k},\epsilon_{b,k}}{\min} \quad \sum_{b{\in}\mathcal{B}}\sum_{k{\in}\mathcal{U}_{b}} \mu_b \|{\vec w}_{k}\|^2 \\ &\,\,\,\,\text{s.t.} \frac{\left|{\bf h}_{b_k,k}\herm{\bf w}_{k}\right|^{2}}{\sum\limits_{i\in {\mathcal U}_{b_{k}}\setminus k} \left|{\bf h}_{b_k,k}\herm{\bf w}_{i}\right|^{2} + \sum\limits_{b\in{\mathcal B}\setminus b_{k}} \epsilon_{b,k} + \sigma^{2}} \geq {\gamma_{k}}, \, \forall k \in \mathcal{U}_{b} , \forall b \\ & \quad \sum_{j \in \mathcal{U}_{b}}|{\vec h}_{b,k}\herm{\vec w}_{j}|^2 \leq \epsilon_{b,k}, \; \forall k\not\in \mathcal{U}_{b}, \forall b. \label{Opt_problem_ICICons} \end{align} \end{subequations} As shown in~\cite{Tolli-Pennanen-Komulainen-TWC10}, \eqref{eq:prim problemSimple} and \eqref{Opt_problem} are equivalent at the optimal solution, where the ICI constraints in~\eqref{Opt_problem_ICICons} are satisfied with equality. The problem formulation in~\eqref{Opt_problem} recognizes the ICI constraints as the principal coupling parameters among the BSs, which enforces cross-cell coordination.
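The equivalence between \eqref{eq:SINR dl} and the reformulated constraints can also be verified numerically: when each $\epsilon_{b,k}$ is set to the actual aggregate interference received from BS $b$ (i.e., \eqref{Opt_problem_ICICons} holds with equality), the two SINR expressions coincide. The following NumPy sketch is our own illustration; the data layout (one channel matrix per BS, columns indexed by UE) is an assumption, not a convention of the paper.

```python
import numpy as np

def sinr_direct(H, W, assoc, sigma2):
    """SINR of eq. (2): desired power over intra-cell + inter-cell + noise.

    H[b] is the N x K matrix whose k-th column is h_{b,k};
    W[k] is UE k's precoder at its serving BS assoc[k].
    """
    K = len(W)
    out = np.zeros(K)
    for k in range(K):
        # received powers |h_{assoc[i],k}^H w_i|^2 from every precoder
        p = np.array([abs(np.vdot(H[assoc[i]][:, k], W[i])) ** 2
                      for i in range(K)])
        out[k] = p[k] / (p.sum() - p[k] + sigma2)
    return out

def sinr_with_ici(H, W, assoc, sigma2):
    """SINR of constraint (4b) with eps_{b,k} of (4c) held at equality."""
    K = len(W)
    out = np.zeros(K)
    for k in range(K):
        bk = assoc[k]
        intra = sum(abs(np.vdot(H[bk][:, k], W[i])) ** 2
                    for i in range(K) if assoc[i] == bk and i != k)
        # eps_{b,k}: total power leaked onto UE k by each interfering BS b
        eps = sum(abs(np.vdot(H[assoc[i]][:, k], W[i])) ** 2
                  for i in range(K) if assoc[i] != bk)
        sig = abs(np.vdot(H[bk][:, k], W[k])) ** 2
        out[k] = sig / (intra + eps + sigma2)
    return out
```

For any fixed channels and precoders, `sinr_direct` and `sinr_with_ici` return identical values, which is exactly the equality case of the ICI constraints.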
Therefore, one may need to solve the problem either centrally~\cite{SOCPshamai, BjornsonMinPower2014,Dahrouj-Yu-10}, which requires full CSI at all BSs, or distributively, which needs the ICI values to be continuously exchanged among BSs~\cite{ Tolli-Pennanen-Komulainen-TWC10, Pennanen-Tolli-Latva-aho-spl-11, ChaoADMM2012, Pennanen-Tolli-Latva-aho-TSP-13}. This is hard to achieve in practice due to the limitations of backhaul~links.} \subsection{Decentralized Solution Via Deterministic Equivalents} \label{sec:distibuted Sol intro} {We observe that using any fixed ICI term in~\eqref{Opt_problem} decouples the problems at the BSs. This leads to a suboptimal solution that, however, satisfies the SINR constraints (if feasible) subject to a higher total transmit power as compared to the optimal solution. Provided a set of good approximations for the optimal ICI terms $\{\epsilon_{b,k}\}$, the individual problems at the BSs can be decoupled subject to a small performance loss. { In order to derive such approximations, we need to formulate the {Lagrange dual problem} of~\eqref{Opt_problem} to unveil the structure of the optimal beamformers, and thus, express the ICIs as a function of the channel entries. The authors in~\cite{Dahrouj-Yu-10} show that, upon existence, the unique solution\footnote{{The arguments ${{\bf w}_j,\,\forall j}$ of the problem in~\eqref{eq:prim problemSimple} are defined up to a phase scaling, i.e., if ${\bf w}_{j}$ is optimal, then ${\bf w}_{j}e^{i\phi_j}$ is also optimal~\cite{SOCPshamai} where $\phi_j$ is an arbitrary phase rotation for UE $j$. The uniqueness of the solution follows as we can restrict ourselves to precoders ${{\bf w}_j,\,\forall j}$ such that the terms $\{{\bf h}_{b,k}^{\text{H}} {\bf w}_{j}\}$ in~\eqref{eq:SINR dl} have a non-negative real part and a zero imaginary part.}} to the problem in~\eqref{eq:prim problemSimple} can be obtained by using {Lagrangian duality} in convex optimization.
We prove that the Lagrange dual problem of~\eqref{Opt_problem} is the same as that of~\eqref{eq:prim problemSimple}, and thus, we can utilize the Lagrangian duality analysis in~\cite{Dahrouj-Yu-10}. To keep the flow of the work uninterrupted, the details of the Lagrangian duality analysis are presented in Appendix~\ref{sec:duality analysis}. As a result of this analysis, the ICI from BS $b$ to UE $k$ can be expressed as} \begin{align}\label{3.1} \epsilon_{b,k} = \sum_{j\in \mathcal U_{b}}\left|{\bf h}_{b,k}\herm{\bf w}_{j}\right|^{2} = \sum_{j\in \mathcal U_{b}}\delta_{j}\left|{\bf h}_{b,k}\herm{\bf v}_{j}\right|^{2} \end{align} where $\{{\vec v}_{k}\}$ denotes a set of minimum mean square error (MMSE) receivers, and $\{\delta_k\}$ are scaling factors relating the beamforming vectors to the MMSE receivers as ${{\vec w}}_{k}=\sqrt{{{\delta_{k}}}/{N}} {\vec v}_{k}$. In particular, we have $ \mathbf{v}_{k}=(\sum_{j\in \mathcal U\setminus k}{\lambda_{j}}\mathbf{ h}_{b_{k},j}\mathbf{ h}_{b_{k},j}\herm + \mu_{b_k}N\mathbf{I}_{N})^{-1}\mathbf{ h}_{b_k,k}$ with $\{\lambda_j\}$ being the Lagrange dual variables associated with the SINR constraints. The optimal Lagrangian multipliers gathered in $\boldsymbol{\lambda}^*= [\lambda_{1}^*, \ldots, \lambda_{K}^*]\tran $ are obtained as the unique fixed point solution of \begin{align} \label{eq:lambda itr} \lambda_{k} = \frac{\gamma_{k}}{ {\mathbf{h}_{b_k,k}\herm \left(\sum\limits_{j\in \mathcal U\setminus k}\lambda_{j}\mathbf{ h}_{b_{k},j}\mathbf{ h}_{b_{k},j}\herm + \mu_{b_k}N\mathbf{I}_{N}\right)^{-1}\!\!\!\!\!\!\mathbf{h}_{b_k,k}} } \,\,\forall k \in \mathcal{U}.
\end{align} The scaling factors $\{\delta_{k}\}$ can be obtained as the unique solution of the set of equations such that the SINR constraints in~\eqref{eq:prim problemSimple} are all satisfied, i.e., $\boldsymbol{\delta} = \mathbf{G}^{-1} \mathbf{1}_{K}\sigma^2$, where $\boldsymbol{\delta} = [\delta_{1}, \ldots, \delta_{K}]\tran$, and the $(i,k)^{\text{th}}$ element of the so-called coupling matrix $\mathbf{G} \in \mathbb C^{K\times K}$ is~\cite{ Dahrouj-Yu-10} \begin{align}\label{eq:G_matrix} \left[\mathbf{G}\right]_{k,i}= \begin{cases} \frac{1}{\gamma_{k}}{|\mathbf{h}_{b_k,k}\herm \mathbf{v}_{k}|^2}& \text{for} \,\,\, i=k \\ -{|\mathbf{h}_{b_{i},k}\herm \mathbf{v}_{i}|^2}& \text{for} \,\,\, i\ne k. \end{cases} \end{align}} The optimal ICIs in~\eqref{3.1} are expressed in terms of channel entries via parameters $\{\lambda_k\}$, $\{\delta_k\}$ and $\{[{\bf G}]_{i,j}\}$. This allows us to utilize techniques for deterministic equivalents~\cite{RMT}, as detailed in Section~\ref{sec: large system analysis}, to characterize the behavior of these parameters in terms of underlying channel statistics, and thus, propose proper approximations for the optimal ICIs. To this end, we first need to introduce a statistical model for channel vectors. \subsection{Channel Model} \label{sec:Ch model} {The channel from BS $b$ to UE $k$ is modeled as ${\bf h}_{b,k} = {\bf \Theta}_{b,k}^{1/2}{\bf z}_{b,k}$ where ${\bf z}_{b,k}\in \mathbb {C}^{N}$ represents small-scale fading and has i.i.d, zero-mean, unit-variance complex entries.} The matrix ${\bf \Theta}_{b,k}\in \mathbb {C}^{N\times N}$ accounts for the UE specific channel correlation at BS $b$. {The pathloss due to large scale fading is implicitly considered in the correlation matrix unless otherwise stated. In the latter case, pathloss values are explicitly declared by expressing the correlation matrix as $a^2_{b,k}{\bf \Theta}_{b,k}$ where $a^2_{b,k}$ accounts for pathloss from BS $b$ to UE $k$. 
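For illustration, the finite-size quantities $\{\lambda_k\}$, $\{\mathbf{v}_k\}$ and $\{\delta_k\}$ of \eqref{eq:lambda itr} and \eqref{eq:G_matrix} can be computed with a few lines of NumPy. This is our own sketch (plain Gauss-Seidel sweeps of the fixed point, with an assumed data layout), not the implementation used in the paper, and convergence safeguards are omitted for brevity.

```python
import numpy as np

def solve_duals_and_scalings(H, assoc, gamma, mu, sigma2=1.0, iters=200):
    """Fixed point for lambda (eq. 8), MMSE receivers v_k, and delta = G^{-1} 1 sigma^2.

    H[b]     : N x K matrix whose k-th column is h_{b,k}
    assoc[k] : serving BS index b_k of UE k
    gamma,mu : SINR targets and BS power weights
    """
    N, K = H[0].shape
    lam = np.ones(K)
    def reg_matrix(b, k):
        # sum_{j != k} lambda_j h_{b,j} h_{b,j}^H + mu_b N I
        Hb = H[b]
        return ((Hb * lam) @ Hb.conj().T
                - lam[k] * np.outer(Hb[:, k], Hb[:, k].conj())
                + mu[b] * N * np.eye(N))
    for _ in range(iters):            # sweep the fixed point of eq. (8)
        for k in range(K):
            b = assoc[k]
            hk = H[b][:, k]
            lam[k] = gamma[k] / np.real(hk.conj() @ np.linalg.solve(reg_matrix(b, k), hk))
    # MMSE receivers and coupling matrix G of eq. (9)
    V = np.zeros((N, K), dtype=complex)
    for k in range(K):
        b = assoc[k]
        V[:, k] = np.linalg.solve(reg_matrix(b, k), H[b][:, k])
    G = np.zeros((K, K))
    for k in range(K):
        for i in range(K):
            g = abs(np.vdot(H[assoc[i]][:, k], V[:, i])) ** 2
            G[k, i] = g / gamma[k] if i == k else -g
    delta = np.linalg.solve(G, sigma2 * np.ones(K))
    return lam, delta, V
```

In a toy single-cell case with two orthogonal unit channels, $\gamma_k=\mu_b=1$ and $N=2$, the fixed point gives $\lambda_k=2$ and the scaling solve gives $\delta_k=4$ for both UEs.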
}The correlated scenario is motivated by the lack of space for implementing large antenna arrays and by poor scattering environments~\cite{NgoMarzeta2011}, which must be considered for a realistic performance evaluation. Moreover, the generic correlated model takes into account the distinct angles of arrival and angular spreads of the UEs' signals for designing the precoder vectors. Also, it allows an arbitrary configuration for the antenna array, including geographically distributed arrays. \vspace{-0.5cm} \section{{Large System Analysis}} \label{sec: large system analysis} In the following, we exploit the theory of large random matrices~\cite{RMT} to compute the so-called deterministic equivalents of the optimal Lagrangian multipliers $\{\lambda_k^*\}$ given by~\eqref{eq:lambda itr} under the generic channel model presented in Section~\ref{sec:Ch model}. Plugging such deterministic equivalents into~\eqref{eq:G_matrix} allows characterization of the coupling matrix elements $\{[{\bf G}]_{i,j}\}$ in the asymptotic regime, which consequently gives the asymptotically optimal scaling factors $\{\delta_k\}$. In doing so, the following assumptions (widely used in the literature) are made to properly define the growth rate of the system dimensions: \begin{assumption}\label{as:0} As $N\to \infty$, $\!0 < \!\lim \inf \frac{K}{N} \le \lim \sup \frac{K}{N} < \infty$. \end{assumption} \begin{assumption} \label{as:1} The spectral norm of ${\bf \Theta}_{b,k}$ is uniformly bounded as $N\to\infty$, i.e., $ \! \lim \sup_{{N \to \infty}}$ $ \!\!\!\max_{\forall b,k} \{\left\|{\bf \Theta}_{b,k}\right\|\}\!\! < \infty.$ \end{assumption} \subsection{{Deterministic Equivalents For Lagrangian Multipliers}} \label{sec:uplink asympto} The derivation of the deterministic equivalents for the Lagrangian multipliers needs special handling because of their implicit formulation in~\eqref{eq:lambda itr}.
In particular, dependency of $\boldsymbol{\lambda}^*$ on channel vectors prevents using trace lemma~\cite[Theorem 3.4]{RMT} explicitly for the denominator of~\eqref{eq:lambda itr}. The work in~\cite{LucaCouilletDebbahJournal2015} tackles this problem under i.i.d Rayleigh fading channels relying on a method introduced originally in~\cite{couillet2014large} in a different context. By using the same approach, the following result is obtained. \begin{theorem}\label{th:up powers updw duality} Let Assumptions \ref{as:0} and \ref{as:1} hold. If \eqref{Opt_problem} is feasible and its optimal solution is $\boldsymbol{\lambda}^*$, we have ${\rm{max}}_{\,\,k\,\,} |\lambda_k^*-\bar{ \lambda}_k| \rightarrow 0$ almost surely where \begin{align} \label{eq:lambdaebn} \bar \lambda_{k} = \frac{\gamma_{k}}{\bar{m}_{b_{k},k}}\quad \forall k\in \mathcal{U} \end{align} and $\bar{m}_{b_k,k}$ is obtained as the unique non-negative solution of the following system of equations, evaluated for $b\in\mathcal{B}, i\in \mathcal{U}$ \vspace{-0.3cm} \begin{align} \label{eq:ST Th1} \bar{m}_{b,i} =\!\!{\rm{Tr}}\biggr({\bf \Theta}_{b,i}\biggr(\sum_{j\in \mathcal U}\frac{{\gamma_{j}}{\bf \Theta}_{b,j} }{{\bar{m}_{b_{j},j}}+\gamma_{j}{\bar{m}_{b,j}}} + \mu_b N {\bf I}_{N}\biggr)^{\scriptstyle-1}\biggr). \end{align} \end{theorem} \begin{IEEEproof} The proof is given in Appendix~\ref{sec:proof Theorem1}.\end{IEEEproof} We observe that $\bar{ \lambda}_k \bar{m}_{b_k,k}$ represents the deterministic equivalent of the received SINR at BS $b_k$ when the MMSE receiver is aligned toward UE $k$. The UEs interact through the quantities $\bar{m}_{b,k}, \forall b,k$ such that, at the optimum, the SINR constraints for all UEs are asymptotically satisfied. 
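Theorem~\ref{th:up powers updw duality} characterizes $\{\bar{m}_{b,k}\}$ only implicitly through the coupled system~\eqref{eq:ST Th1}; in practice, such systems are solved by simple fixed-point iteration from a positive initial point. A minimal numerical sketch follows, where the function name, the dimensions, and the iteration budget are our own illustrative choices rather than part of the paper:

```python
import numpy as np

def deterministic_multipliers(Theta, serving, gamma, mu, n_iter=200):
    """Fixed-point iteration for the system (eq. 8 of Theorem 1):
        m[b,i] = Tr( Theta[b,i] * ( sum_j gamma_j * Theta[b,j]
                 / (m[b_j,j] + gamma_j * m[b,j]) + mu_b * N * I )^{-1} ),
    followed by lambda_bar_k = gamma_k / m[b_k, k].

    Theta:   array (B, K, N, N) of correlation matrices
    serving: array (K,) mapping UE k to its serving BS b_k
    gamma:   array (K,) of SINR targets; mu: array (B,) of BS weights
    """
    B, K, N, _ = Theta.shape
    m = np.ones((B, K))
    for _ in range(n_iter):
        m_new = np.empty_like(m)
        for b in range(B):
            A = mu[b] * N * np.eye(N)
            for j in range(K):
                A = A + gamma[j] * Theta[b, j] / (m[serving[j], j] + gamma[j] * m[b, j])
            A_inv = np.linalg.inv(A)
            for i in range(K):
                m_new[b, i] = np.trace(Theta[b, i] @ A_inv).real
        m = m_new
    lam = gamma / m[serving, np.arange(K)]   # deterministic equivalents of lambda_k^*
    return m, lam
```

For well-behaved correlation matrices the iteration converges quickly; the asymptotic multipliers $\{\bar{\lambda}_k\}$ then follow from~\eqref{eq:lambdaebn} without any channel realizations.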
\vspace{-1cm} \subsection{{Deterministic Equivalents For Coupling Matrix Entries}} The deterministic equivalents for $\{\lambda_k^*\}$ in Theorem~\ref{th:up powers updw duality} are used to compute the asymptotically optimal receive beamforming vectors $\bar{\mathbf{v}}_{k}=(\sum_{j\in \mathcal U\setminus k}{\bar \lambda_{j}}{\mathbf{ h}}_{b_{k},j}{\mathbf{ h}}_{b_{k},j}\herm + \mu_{b_k}N\mathbf{I}_{N})^{-1}\mathbf{ h}_{b_k,k},\forall k$ in the dual uplink problem. By plugging $\{\bar{\bf{v}}_k\}$ into~\eqref{eq:G_matrix}, the following result is obtained. \begin{theorem}\label{th:down pw updwn duality} Let Assumptions \ref{as:0} and \ref{as:1} hold, and assume \eqref{Opt_problem} to be feasible. Then, given the set of $\bar{ {\lambda}}_k$ and $\bar{m}_{b,k}, \forall k \in \mathcal{U},b\in \mathcal{B}$ as in Theorem~\ref{th:up powers updw duality}, we have $\left[{ \mathbf{ G}}\right]_{k,i} - \left[\bar{ \mathbf{ G}}\right]_{k,i} \to 0$ almost surely with \begin{align} \label{eq:nonnormal G} \left[\bar{ \mathbf{ G}}\right]_{k,i}= \begin{cases} { \gamma_{k}}/{\bar \lambda_k^2}& \textrm{for} \,\,\, i=k \\ -\frac{1}{N}\frac{\bar{m}_{b_i,i,k}^{\prime}}{\left(1 + \bar {\lambda}_k \bar m_{b_i,k}\right)^{2}}& \textrm{for} \,\,\, i\ne k \end{cases} \end{align} where we have that $ [\bar{m}_{b,1,k}^{\prime},...,\bar{m}_{b,K,k}^{\prime}]=(\mathbf{I}_K-\mathbf{L}_{b})^{-1} {\vec u}_{b,k},\,\forall k\in\mathcal{U}$ and where \begin{equation}\label{eq:Proof en prime sys of equations3} \left[\mathbf{L}_{b}\right]_{i,j}= \frac{1}{N^2} \frac{ {\rm Tr}\left(\boldsymbol{\Theta}_{b,i} \mathbf{T}_{b}\boldsymbol{\Theta}_{b,j}\mathbf{T}_{b}\right) }{(1/\bar \lambda_j+ \bar m_{b,j})^2} \end{equation} and \begin{equation}\label{eq:Proof en prime sys of equations4} \begin{aligned} {\vec u}_{b,k}=\!\!\left[\frac{1}{N} {\rm Tr} \left(\boldsymbol{\Theta}_{b,1} \mathbf{T}_{b}\boldsymbol{\Theta}_{b,k} \mathbf{T}_{b}\right),\ldots,\frac{1}{N} {\rm Tr}(\boldsymbol{\Theta}_{b,K} 
\mathbf{T}_{b}\boldsymbol{\Theta}_{b,k} \mathbf{T}_{b})\right] \end{aligned} \end{equation} with ${\bf{T}}_{b}$ given by \begin{align}\label{6.16} {\bf{T}}_{b} = \left(\frac{1}{N}\sum\limits_{j\in \mathcal U}\frac{ \bar \lambda_j{\bf{\Theta}}_{b,j}}{1+ \bar \lambda_j\bar m_{b,j}} + \mu_{b}{\bf I}_N\right)^{-1}\!\!\!. \end{align} \end{theorem} \begin{IEEEproof} The proof is given in Appendix~\ref{sec:proof Theorem2}.\end{IEEEproof} The term $\bar{m}_{b,i,k}'$ is the derivative of $\bar{m}_{b,i,k}(x)= \frac{1}{N}{\rm {Tr}}\big({\bf \Theta}_{b,i}(\frac{1}{N}\sum_{j\in \mathcal U}\frac{{\gamma_{j}}{\bf \Theta}_{b,j} }{{\bar{m}_{b_{j},j}}+\gamma_{j}{\bar{m}_{b,j}}} -x {\bf \Theta}_{b,k} + \mu_b {\bf I}_{N})^{-1}\big)$ with respect to the auxiliary variable $x$ and then evaluated at point $x=0$. The term $\bar{m}_{b_i,i,k}'$ in~\eqref{eq:nonnormal G} determines the coupling between UE~$i$ served by BS $b_i$ and UE $k$, and consequently indicates the level of interference leaking in between these two UEs. The deterministic equivalents of entries $\{\left[\bar{ \mathbf{ G}}\right]_{k,i}\}$ can be used to compute the asymptotically optimal scaling factors as $\bar{\boldsymbol{\delta}} = \sigma^2 {\bar{\mathbf{G}}}^{-1} \mathbf{1}_{K}$, which depend only on the statistics of channel vectors. {Based on these results, we can now derive the deterministic equivalents of ICI terms $\{\epsilon_{b,k}\}$, and consequently present the coordination algorithms.} \input{decentralizedoptimization} \input{groupUE} \input{numerics} \section{Conclusions and Discussions} \label{sec:Conclusion} In this work, a decentralization framework was proposed for the power minimization problem in multicell MU-MIMO networks. The proposed decentralized solutions attain the QoS-guaranteed precoders locally subject to relaxed coordination requirements. This is particularly important in practice where the backhaul links suffer from imperfections, including limited capacity and latency. 
The analysis under the assumption of partitioned UE population allowed the ICIs, as the inter-cell coordination messages, to be characterized explicitly in terms of channel statistics. This, in particular, provided insight into the coordination mechanism. Also, it reduced the computational complexity of the approximate ICIs. This analysis can be further exploited to attain substantial complexity reduction in the precoding phase as well. This approach generally motivates per-group processing, which is reminiscent of two-stage beamforming~\cite{UserPartionAdhikary}. To this end, the UE population is first divided into multiple groups, each with approximately the same channel correlation matrix. Then, the BSs get approximations of the interference leakage between groups at a given BS and across cells using the results of Theorem~\ref{th:down pw updwn duality} and Corollary~\ref{cor:homo closeICI}. Declaring the group specific interference terms as constraints in the optimization problem, similar to the ICI constraints introduced in~\eqref{Opt_problem}, the BSs get the precoders for UEs in a group independently by exploring CSI within each group. The search space, in this case, is limited to the group's degrees of freedom, which is generally much smaller than $N$, and thus results in a significant complexity reduction. As a final remark, we notice that the weighting factors $\{\mu_{b}\}$ in~\eqref{eq:prim problemSimple} provide a mechanism to trade off the power consumption at different BSs~\cite{Dahrouj-Yu-10}. In addition, one can rely on the results of Theorems~\ref{th:up powers updw duality} and~\ref{th:down pw updwn duality} to get a good approximation for the total power consumption at BSs based on statistical information of channel vectors.
This can be utilized in a feasibility assessment to ensure BS-specific power limits where, as in~\cite{JointAdmisPwrCtrl,JointSchedPWr,PowrCTrlBook}, the infeasibility implies that some UEs should be dropped (admission control methods)~\cite{JointAdmisPwrCtrl} or rescheduled in orthogonal dimensions (scheduling methods)~\cite{JointSchedPWr}. \input{Appendix} \bibliographystyle{IEEEtran} \vspace{-0.8cm} \section{Numerical Analysis} \label{sec:Simulation Results} Monte Carlo simulations are now used to validate the performance of the proposed solutions. The performance metrics are averaged over 1000 independent UE drops and channel realizations. {We consider a network with $L$ cells and assume that the same number of $\bar{K}=\frac{K}{L}$ UEs is assigned to each cell. The UEs are distributed uniformly in the coverage area of the cells.} The pathloss function is modeled as~$a_{b,k}^2=({d_{0}}/{d_{b,k}})^{3}$ where $d_{b,k}$ represents the distance between BS $b$ and UE $k$, and $d_{0}=1$ m is the reference distance. The BSs are placed 1000 m apart from each other. The transmission bandwidth is $W = 10$ MHz, and the total noise power $\sigma^2=WN_0$ is $-104$ dBm. By assuming a diffuse 2-D field of isotropic scatterers around the receiver~\cite{JakesMicrowavebook}, the correlation matrix for an antenna element spacing of $\Delta$ is given by \begin{equation}\label{eq:corr model} \left[{\bf{\Theta}}_{b,k}\right]_{j,i}=\frac{a_{b,k}^2}{\varphi_{b,k}^{\max}-\varphi_{b,k}^{\min}}\int_{\varphi_{b,k}^{\min}}^{\varphi_{b,k}^{\max}} \! e^{i\frac{2\pi}{w}\Delta (j-i){\rm cos}(\varphi)} \, \mathrm{d}\varphi \end{equation} where waves arrive with an angular spread $\Delta \varphi$ from $\varphi_{\min}$ to $\varphi_{\max}$. The wavelength is denoted by $w$, and the antenna element spacing is fixed to half the wavelength, $\Delta=w/2$.
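For reference, the correlation model~\eqref{eq:corr model} can be evaluated by numerical quadrature. A short sketch, where the midpoint rule replaces the integral, the antenna spacing is specified in wavelengths ($0.5$ corresponding to $\Delta=w/2$), and the function name and quadrature resolution are our own choices:

```python
import numpy as np

def correlation_matrix(N, phi_min, phi_max, a2=1.0, spacing_wl=0.5, n_q=512):
    """Midpoint-rule evaluation of the one-ring correlation integral:
    [Theta]_{j,i} = a2/(phi_max-phi_min) *
                    int_{phi_min}^{phi_max} exp(i*2*pi*spacing_wl*(j-i)*cos(phi)) dphi.
    """
    dphi = (phi_max - phi_min) / n_q
    phi = phi_min + (np.arange(n_q) + 0.5) * dphi          # quadrature midpoints
    j, i = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
    # each quadrature node contributes a rank-one steering term v(phi) v(phi)^H,
    # so the resulting matrix is Hermitian positive semidefinite by construction
    steering = np.exp(1j * 2 * np.pi * spacing_wl * (j - i)[..., None] * np.cos(phi))
    return a2 * steering.mean(axis=-1)
```

Narrowing the interval $[\varphi_{\min},\varphi_{\max}]$ reproduces the rank-deficient correlation matrices discussed below for small angular spreads.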
\subsection{{Performance Evaluation Of The Distributed Precoding Methods}} Next, the analyses of Sections~\ref{sec: large system analysis} and~\ref{sec:distributedOpt} are validated numerically. The UEs are dropped in a network with $7$ cells. A fixed wide angular spread ($\Delta \varphi_{b,k} =\frac{\pi}{2}$) is assigned to UEs served by a given BS, which accounts for well-conditioned correlation matrices, while the channels to non-served UEs (which are far from the given BS) have an angular spread of ${\pi}/{6}$, yielding rank-deficient correlation matrices. The angle of arrival is determined by the angular position of the UEs with respect to the BSs. {Theorems~\ref{th:up powers updw duality} and~\ref{th:down pw updwn duality} present the asymptotically optimal power assignments in the downlink and dual uplink problems in terms of channel statistics, which is instrumental to get further insight into the structure of the optimal solution. As mentioned in Section~\ref{sec:distributedOpt}, one might serve UEs in the finite system regime by utilizing the asymptotic power terms in Theorems~\ref{th:up powers updw duality} and~\ref{th:down pw updwn duality}, i.e., $\{\bar{\lambda}_{k}\}$ and $\{\bar{\delta}_{k}\}$, and the corresponding receive/transmit beamforming vectors, i.e., $\bar{\mathbf{v}}_{k}=(\sum_{j\in \mathcal U\setminus k}{\bar \lambda_{j}}\mathbf{ h}_{b_{k},j}\mathbf{ h}_{b_{k},j}\herm + \mu_{b_k}N\mathbf{I}_{N})^{-1}\mathbf{ h}_{b_k,k}$ and $\bar{{\vec w}}_{k}=\sqrt{\bar{\delta}_{k}/N} \bar{\vec v}_{k}$, respectively. However, this approach only guarantees the target rates to be satisfied asymptotically. This is shown in Fig.~\ref{fig:ratevspowerUp} where the empirical CDF of the achievable rates is depicted for the downlink and dual uplink problems. The number of antennas is the same as the number of UEs, and the UEs are served with a target rate of 1 bit/s/Hz/UE using the asymptotically optimal beamformers, i.e., $\bar{{\vec w}}_{k}$ and $\bar{\vec v}_{k}$.
It can be seen that the achievable rates are mainly concentrated around the target rates. However, as shown in Fig.~\ref{fig:ratevspowerUp}-b, 30 percent of UEs attain a rate of less than 0.7 bit/s/Hz/UE in the downlink with $N = 14$ antennas. This percentage reduces to 12 percent when $N$ increases to almost 100. The empirical CDF shows that the deviation from the target rate decreases as the dimensions of the problem increase, and the rate constraints for all UEs are expected to be satisfied asymptotically.} \begin{figure*} \centering \begin{subfigure}{.51\textwidth} \centering \includegraphics[width=\columnwidth]{Fig3a.pdf} \caption{CDF of achievable uplink rates.} \end{subfigure}% \begin{subfigure}{.49\textwidth} \centering \includegraphics[width=\columnwidth]{Fig3b.pdf} \caption{CDF of achievable downlink rates.} \end{subfigure} \caption{Empirical CDF of achievable rates using asymptotic beamformers, $\frac{N}{K}=1$.} \label{fig:ratevspowerUp} \end{figure*} \vspace{0.3cm} As mentioned in Section~\ref{sec:distributedOpt}, we can explore the availability of local CSI while relying on deterministic equivalents of the ICI values to obtain the QoS guaranteed precoders. Both Algorithms~\ref{alg:ICI_approx} and~\ref{alg:ICI_approx heuristic} satisfy the SINR constraints while requiring minimal cooperation among BSs. Fig.~\ref{fig:perfcomp SNR1} presents the averaged total transmission power required for serving UEs with target rates fixed to 1 bit/s/Hz/UE. The total number of UEs in the network grows at the same rate as the number of antennas such that $\frac{N}{K}=1$ in Fig.~\ref{fig:perfcomp SNR1}-a and $\frac{N}{K}=2$ in Fig.~\ref{fig:perfcomp SNR1}-b. Thus, the spatial loading is fixed as the number of antennas is increased. The optimal total transmission power using~\eqref{eq:prim problemSimple} is the reference curve denoted as the centralized approach.
It can be seen in both Fig.~\ref{fig:perfcomp SNR1}-a and~\ref{fig:perfcomp SNR1}-b that Algorithm~\ref{alg:ICI_approx} satisfies the target rates subject to small performance degradation even for relatively small $N$ and $K$. This gap diminishes further as the number of antennas and UEs are increased. {Based on asymptotic ICI expressions, we observed that the interference caused by a BS to a non-served UE depends mainly on the local statistics. Utilizing this in Algorithm~2, the approximate ICIs are evaluated at the interfering BSs relying on local and partial non-local knowledge of channel statistics. The viability of this approach can be seen from the small difference in the transmission powers of Algorithms~\ref{alg:ICI_approx} and~\ref{alg:ICI_approx heuristic} in Fig.~\ref{fig:perfcomp SNR1}. } A heuristic case (included for comparison) labeled as 'i.i.d fully decentralized' is also depicted in Fig.~\ref{fig:perfcomp SNR1}, where the correlation properties are ignored and the approximated ICI values are derived relying only on pathloss information. Ignoring the correlation properties when designing the precoders generally results in a large performance degradation as depicted in Fig.~\ref{fig:perfcomp SNR1}. {Another observation is the smaller performance gap among all methods in Fig.~\ref{fig:perfcomp SNR1}-b with $\frac{N}{K}=2$, compared to Fig.~\ref{fig:perfcomp SNR1}-a with $\frac{N}{K}=1$, which is due to the increase in the number of degrees of freedom (d.o.f) per UE. In particular, for any given number of antennas, the number of UEs in Fig.~\ref{fig:perfcomp SNR1}-a is twice as large as that in Fig.~\ref{fig:perfcomp SNR1}-b. In general, the performance gap among various methods diminishes when the ratio of d.o.f per UE increases. As the ratio of d.o.f per UE goes to infinity the differences disappear. 
In particular, this is illustrated in~\cite{Asgharimoghaddam-Tolli-Rajatheva-ICC2014} (the conference counterpart of the current work) under i.i.d Rayleigh fading channels, where the transmission powers of all methods converge as $N$ grows large, given a fixed $K$.} The other sub-optimal solutions, including ICZF and ZF, design the precoders locally without cross cell coordination. In particular, ICZF sets the ICIs equal to zero while handling the local interference optimally. ZF attains the precoders by forcing all (intra-cell and inter-cell) interference terms to zero. These methods are infeasible for some UE drops in the case with $\frac{N}{K}=1$; thus a fair comparison is not possible and the corresponding curves are omitted in Fig.~\ref{fig:perfcomp SNR1}-a. This is in fact due to narrow angular spreads, which result in a lack of degrees of freedom for nulling the interference to UEs with overlapping angular spreads. In the case with $\frac{N}{K}=2$, ICZF and ZF attain the target rates subject to 4 dB higher transmission power as compared to Algorithms~\ref{alg:ICI_approx} and~\ref{alg:ICI_approx heuristic}. As a final remark, the gap in performance of ICZF indicates that a large portion of the optimal resource allocation gain comes from inter-cell coordination. \begin{figure*} \centering \begin{subfigure}{.5\textwidth} \centering \includegraphics[ width=1\columnwidth]{Fig4a.pdf} \caption{ The scenario with $N=K$.} \end{subfigure}% \begin{subfigure}{.5\textwidth} \centering \includegraphics[width=\columnwidth]{Fig4b.pdf} \caption{ The scenario with $N=2K$.} \end{subfigure} \caption{Transmit power vs. $\bar K$, target rate = 1 bits/s/Hz/UE.
} \label{fig:perfcomp SNR1} \end{figure*} \subsection{The Scenario With Partitioned UE Population } \begin{figure}[!tbp] \centering {\includegraphics[width=0.5\columnwidth]{Fig5.pdf}} \caption{Transmit power with grouped UEs versus $N$ when $\frac{N}{K}=1$ and the target rate is $1$ bits/s/Hz/UE.} \label{fig:GroupUEs Sim Res} \end{figure} {In the following, the analysis in Section~\ref{sec:grouped UEs} is validated numerically. We consider a two-cell configuration with the same number of UEs assigned to each cell. Additionally, the UEs within a cell are divided equally among three groups according to the arrangement presented in Fig.~\ref{fig:Groups}. We note that the system model considered in Section~\ref{sec:grouped UEs} is generic and can be applied to larger configurations. However, here we reused the two-cell example illustrated in Fig.~\ref{fig:Groups} to avoid redundancy and to keep the numerical results consistent with the explanations in Section~\ref{sec:grouped UEs}.} As in Corollary~\ref{cor:homo closeICI}, the UEs within each group are assumed to have an identical correlation matrix but with distinct user specific pathloss values. The angular spread for all groups is assumed to be equal to $\frac{\pi}{6}$, and the group-specific correlation matrices are evaluated using~\eqref{eq:corr model}. The groups have disjoint angular spreads, and hence, have orthogonal correlation eigenspaces when $N\rightarrow \infty$. However, in finite dimensions, the correlation matrices of distinct groups are not fully orthogonal. The reference curve in Fig.~\ref{fig:GroupUEs Sim Res} titled 'Algorithm 1' calculates approximate ICI values based on~\eqref{3.4}, where the non-orthogonality of the correlation matrices of distinct groups is considered in the ICI approximations. On the other hand, the curve titled 'Alg.1 grouped UEs' relies on Corollary~\ref{cor:homo closeICI} for deriving the approximate ICIs.
In the latter case, the UEs in distinct groups are assumed to have orthogonal correlation matrices. The precoders in both cases are derived as in Algorithm~\ref{alg:ICI_approx} while utilizing the corresponding ICI approximations. We observe a small difference in the transmission power of these two cases, which is due to the inter-group orthogonality condition being satisfied only asymptotically. The two curves converge to each other and approach the optimal solution as the dimensions are increased.
\section{Introduction} In-line holography relates to the original holographic scheme proposed by Gabor \cite{Gabor:1947,Gabor:1948,Gabor:1949}. It is of conceptually simple design, does not include optical elements between sample and detector and has been employed since its invention in numerous experiments using various types of waves, be it light, electrons or X-rays, to name just a few. Nowadays, holograms are recorded by digital detectors and are subject to numerical reconstruction, which constitutes the field of digital holography \cite{Schnars:2005}. A good overview of different types of holograms and the theory dedicated to their formation and reconstruction is given in the book by Kim \cite{Kim:2011}. All simulation and reconstruction routines applied in digital holography employ fast Fourier transforms (FFT). Most of the routines utilize a single Fourier transform, except the routine for plane waves based on the angular spectrum method \cite{Goodman:2004}, where two Fourier transforms are involved. In general, optimal reconstructions are achieved when two Fourier transforms are employed \cite{Molony:2010}. The reason is twofold. Firstly, when two Fourier transforms are involved in simulation or reconstruction, the object and its hologram are sampled with a similar number of pixels. For example, if the object occupies a quarter in the object plane, its hologram will also occupy approximately a quarter of the detector area, and vice versa. Secondly, when a single Fourier transform is employed, all of the following parameters are co-dependent and bound by one equation: the distance between sample and detector, the number of pixels, the wavelength, the object area size, and the detector area size. Thus, the correct reconstruction, provided all distances are given by the experimental arrangement, can be achieved only at a certain fixed number of pixels, which is highly inconvenient.
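This co-dependence can be made concrete: for a single-Fourier-transform Fresnel evaluation, the object-plane pixel size equals $\lambda z/(N\Delta)$ for detector pixel size $\Delta$, a standard sampling relation in digital holography. A short numerical illustration, with arbitrary example values:

```python
# Single-FT Fresnel sampling: the geometry and the pixel count are bound
# by one relation. All numbers below are illustrative example values.
wavelength = 532e-9   # wavelength, m
z = 0.05              # sample-detector distance, m
det_pixel = 4e-6      # detector pixel size, m
N = 1024              # number of pixels per dimension

# object-plane pixel size implied by the single-FT reconstruction
obj_pixel = wavelength * z / (N * det_pixel)

# Changing N rescales obj_pixel, i.e. the reconstruction scale depends on
# the number of pixels rather than on the physical arrangement alone.
```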
On the other hand, a calculation of wave propagation that employs two Fourier transforms makes it possible to avoid such dependency on the number of pixels. Here, we summarize simple methods for simulation and reconstruction of holograms with both plane and spherical waves. All of the algorithms described here employ two Fourier transforms. Some of the methods described here have been employed in previous studies \cite{Latychevskaia:2007,Latychevskaia:2009,Latychevskaia:2012} but have not been discussed in detail. \section{Hologram Formation and Reconstruction} By definition, in in-line holography, the reference wave and the object wave share the same optical axis. Typically, the experiment is realized as follows: a wave passes an object located in the object plane. Part of the wave is scattered by the object, thus creating the object wave $O$, and the unscattered part of the wave forms the reference wave $R$. The two waves interfere beyond the object and the interference pattern recorded at some distance is named the hologram. In Fig. \ref{Fig:1} two in-line holography schemes are displayed, utilizing plane and spherical waves, respectively. In-line holography with spherical waves is also called Gabor holography \cite{Gabor:1948,Gabor:1949}. \begin{figure}[htbp] \centerline{\includegraphics[width=12cm]{fig1.eps}} \caption{In-line holography schemes realized with (a) a plane wave and (b) a spherical wave.}\label{Fig:1} \end{figure} The incident wave distribution is described by $U_{\rm incident}(x,y)$, with $(x,y)$ being coordinates in the object plane; $\displaystyle k=\frac{2\pi}{\lambda}$ denotes the wavenumber, with $\lambda$ being the wavelength. An object is described by a transmission function \cite{Latychevskaia:2007,Latychevskaia:2009}: \begin{equation}\label{Eq:HF01} t(x,y) = \exp{(-a(x,y))}\exp{(i\phi(x,y))}, \end{equation} \noindent where $a(x,y)$ describes the absorption and $\phi(x,y)$ the phase distribution while the wave is scattered off the object. From Eq.
\ref{Eq:HF01}, it is obvious that the transmission function $t(x,y) = 1$ where there is either no object or where $a(x,y) = 0$ and $\phi(x,y) = 0$, implying that the distribution of the incident wave remains undisturbed. This observation allows the object transmission function to be rewritten as: \begin{equation}\label{Eq:HF02} t(x,y) = 1 + \tilde{t}(x,y), \end{equation} \noindent where $\tilde{t}(x,y)$ is a perturbation imposed onto the reference wave, though not necessarily a small term. Equation \ref{Eq:HF02} is just a mathematical representation that allows the contributions of the reference and object waves to be separated. The wavefront distribution beyond the object, the so-called exit wave, is then described by: \begin{equation}\label{Eq:HF03} U_{\rm exit\; wave}(x,y) = U_{\rm incident}(x,y) \cdot t(x,y) = U_{\rm incident}(x,y) + U_{\rm incident}(x,y) \cdot \tilde{t}(x,y), \end{equation} where the first term describes the reference and the second term describes the object wave. The propagation of the wave towards the detector is described by the Fresnel-Kirchhoff diffraction formula: \begin{equation}\label{Eq:HF04} U_{\rm detector}(X,Y) = -\frac{i}{\lambda} \int\int U_{\rm incident}(x,y) \cdot t(x,y)\frac{\exp{\left( ik \left|\vec{r} - \vec{R} \right| \right)}}{\left|\vec{r} - \vec{R} \right|}\; {\rm d} x {\rm d} y, \end{equation} \noindent where $\left|\vec{r}_{\rm P_0} - \vec{r}_{\rm P_1} \right| = \left|\vec{r} - \vec{R} \right|$ denotes the distance between a point in the object plane $\rm P_0$ and a point in the detector plane $\rm P_1$, as illustrated in Fig. \ref{Fig:1}. Here $\vec{r} = (x,y,z)$ and $\vec{R} = (X,Y,Z)$. The distribution of the two waves at a detector positioned in the plane $(X,Y)$ is described by $R(X,Y)$ and $O(X,Y)$, respectively.
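For concreteness, the transmission function of Eq. \ref{Eq:HF01} and the decomposition of Eq. \ref{Eq:HF02} are straightforward to set up numerically; the disc-shaped object and the values $a=0.3$, $\phi=0.5$ below are arbitrary illustrative choices:

```python
import numpy as np

def transmission(a, phi):
    """Object transmission function t = exp(-a) * exp(i*phi), Eq. (1)."""
    return np.exp(-a) * np.exp(1j * phi)

# toy object: an absorbing, phase-shifting disc on a 256 x 256 grid
N = 256
y, x = np.mgrid[-N // 2:N // 2, -N // 2:N // 2]
disc = (x**2 + y**2) < 20**2                 # object support, radius 20 pixels
t = transmission(0.3 * disc, 0.5 * disc)     # a = 0.3, phi = 0.5 rad inside the disc
t_tilde = t - 1                              # perturbation term of Eq. (2)
```

Outside the object support $t = 1$ exactly, so $\tilde{t}$ vanishes there, as required by Eq. \ref{Eq:HF02}.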
The transmission of the recorded hologram is therefore given by: \begin{equation}\label{Eq:HF05} H(X,Y) = \left| U_{\rm detector}(X,Y) \right|^2 = \left| R(X,Y)\right|^2 + \left| O(X,Y)\right|^2 + R^{*}(X,Y)O(X,Y) + R(X,Y)O^{*}(X,Y), \end{equation} \noindent where the first term is the constant background created by the reference wave alone, the second term is assumed to be small compared to the strong reference wave term, and the last two terms give rise to the interference pattern observed in the hologram. Before reconstruction, the hologram must be normalized by division with the background image: \begin{equation}\label{Eq:HF06} B(X,Y) = \left| R(X,Y)\right|^2. \end{equation} \noindent The background image is recorded under the exact same experimental conditions as the hologram, but without the object being present. The distribution of the normalized hologram \begin{equation}\label{Eq:HF07} H_0(X,Y) = \frac{H(X,Y)}{B(X,Y)} - 1 \approx \frac{R^{*}(X,Y)O(X,Y) + R(X,Y)O^{*}(X,Y)}{\left| R(X,Y)\right|^2} \end{equation} \noindent thus does not depend on such factors as the intensity of the incident or reference wave or the detector and camera sensitivity. After the normalization procedure, the hologram can be reconstructed by applying the routines that are described below. The subtraction of $1$ leaves only the interference term, which approaches $0$ wherever the object wave approaches $0$, for example at the edges of the hologram. Thus, the hologram $H_0(X, Y)$ has a smaller folding-fringe effect at its edges due to the Fourier transformation. In addition, an apodization cosine window filter is applied to the hologram to minimize effects due to the edges of the hologram caused by the digital Fourier transform (see Appendix A).
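A direct implementation of the normalization in Eq. \ref{Eq:HF07}, combined with a cosine apodization window, can be sketched as follows; a Hann window is used here as one common choice of cosine window, and Appendix A may prescribe a different profile:

```python
import numpy as np

def normalize_hologram(H, B):
    """Contrast hologram H0 = H/B - 1 (Eq. 7), apodized with a cosine
    (Hann) window to suppress edge artefacts of the subsequent digital
    Fourier transforms."""
    H0 = H / B - 1.0
    wy = np.hanning(H0.shape[0])
    wx = np.hanning(H0.shape[1])
    return H0 * np.outer(wy, wx)
```

With no object present the hologram equals the background, and the normalized hologram is identically zero.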
The reconstruction of a digital hologram consists of a multiplication of the hologram with the reference wave $R(X,Y)$ followed by back-propagation to the object plane based on the Fresnel-Kirchhoff diffraction integral: \begin{equation}\label{Eq:HF08} U(x,y) \approx \frac{i}{\lambda} \int\int R(X,Y) H_0(X,Y) \frac{\exp{\left( - ik \left|\vec{r} - \vec{R} \right| \right)}}{\left|\vec{r} - \vec{R} \right|}\; {\rm d} X {\rm d} Y. \end{equation} \noindent The wavefront reconstructed from $H_0(X, Y)$ corresponds to $\tilde{t}(x,y)$ and $1$ should be added to the reconstruction to obtain the transmission function $t(x,y)$ as follows from Eq. \ref{Eq:HF02}. Finally, Eq. \ref{Eq:HF01} is applied to extract absorption and phase distributions of the imaged object. \section{In-line Holography with Plane Waves} In this section we describe methods of simulating and reconstructing holograms created with plane waves, as illustrated in Fig. \ref{Fig:1}(a). A plane wave is described by a complex-valued distribution $\exp{\left( i (k_x x + k_y y + k_z z) \right)}$, where $(k_x, k_y, k_z)$ are the components of the wave vector. By selecting the optical axis along the propagation of the plane wave, we obtain $k_x = k_y = 0$, and by choosing the origin of the $z$-axis so that $z = 0$ at the object location, we obtain the incident wave: \begin{equation}\label{Eq:PW01} U_{\rm incident} (x,y) = 1. \end{equation} \noindent The exit wave behind the object given by Eq. \ref{Eq:HF03} equals: \begin{equation}\label{Eq:PW02} U_{\rm exit\; wave} (x,y) = t(x,y). \end{equation} \noindent The wave propagating from the object plane $(x, y)$ towards the detector plane $(X, Y)$ is described by the Fresnel-Kirchhoff diffraction formula, see Eq. 
\ref{Eq:HF04}: \begin{equation}\label{Eq:PW03} U_{\rm detector} (X,Y) = -\frac{i}{\lambda} \int\int t(x,y) \frac{\exp{\left( ik \left|\vec{r} - \vec{R} \right| \right)}}{\left|\vec{r} - \vec{R} \right|}\; \; {\rm d} x {\rm d} y, \end{equation} \noindent where \begin{equation}\label{Eq:PW04} \left|\vec{r} - \vec{R} \right| = \sqrt{\left( x - X \right)^2 + \left( y - Y \right)^2 + z^2}. \end{equation} \noindent The reconstruction of a digital hologram recorded with plane waves is given by Eq. \ref{Eq:HF08} where $R(X,Y) = 1$: \begin{equation}\label{Eq:PW05} U(x,y) \approx \frac{i}{\lambda} \int\int H_0(X,Y) \frac{\exp{\left( - ik \left|\vec{r} - \vec{R} \right| \right)}}{\left|\vec{r} - \vec{R} \right|}\; {\rm d} X {\rm d} Y. \end{equation} \subsection{Large $z$-distance, Fresnel Approximation} When the $z$ distance is sufficiently large so that the Fresnel approximation \begin{equation}\label{Eq:PWFA01} z^3 \gg \frac{\pi}{4\lambda} \left[ \left( x - X \right)^2 + \left( y - Y \right)^2 \right]_{\rm max}^2 \end{equation} \noindent is fulfilled, Eq. \ref{Eq:PW03} turns into: \begin{equation}\label{Eq:PWFA02} U_{\rm detector}(X,Y) = - \frac{i}{\lambda z} \int\int t(x,y) \exp{\left( \frac{i\pi}{\lambda z}\left(\left( x - X \right)^2 + \left( y - Y \right)^2 \right)\right)} \; {\rm d} x {\rm d} y, \end{equation} \noindent where the constant phase factor was neglected. 
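Numerically, Eq. \ref{Eq:PWFA02} is evaluated as a convolution using two fast Fourier transforms and the transfer function $S(u,v)=\exp\left(-i\pi\lambda z(u^2+v^2)\right)$, as derived in the remainder of this subsection. A minimal sketch, where the function names are our own and numpy's native FFT frequency ordering plays the role of the centred transforms of Appendix B:

```python
import numpy as np

def fresnel_propagate(field, wavelength, z, pixel):
    """Two-FFT Fresnel propagation over distance z: forward FFT, multiply
    by the transfer function S(u,v) = exp(-i*pi*lambda*z*(u^2+v^2)),
    inverse FFT. `pixel` is the sampling interval, which is identical in
    the object and hologram planes for this method."""
    N = field.shape[0]
    u = np.fft.fftfreq(N, d=pixel)          # Fourier-plane coordinates, step 1/(N*pixel)
    U, V = np.meshgrid(u, u, indexing="ij")
    S = np.exp(-1j * np.pi * wavelength * z * (U**2 + V**2))
    return np.fft.ifft2(np.fft.fft2(field) * S)

def simulate_hologram(t, wavelength, z, pixel):
    """Plane-wave hologram: intensity of the propagated exit wave."""
    return np.abs(fresnel_propagate(t, wavelength, z, pixel)) ** 2
```

Since $|S| = 1$, the propagation is unitary, and reconstruction amounts to calling `fresnel_propagate` with $-z$ (i.e., with the conjugate transfer function $S^{*}$), which undoes the forward propagation exactly.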
Equation \ref{Eq:PWFA02} can be re-written in the form of a convolution \begin{equation}\label{Eq:PWFA03} U_{\rm detector}(X,Y) = t(X,Y) \otimes s(X,Y) \end{equation} \noindent of the object transmission function $t(x,y)$ with the Fresnel function: \begin{equation}\label{Eq:PWFA04} s(x,y) = - \frac{i}{\lambda z} \exp{\left( \frac{i\pi}{\lambda z} \left( x^2 + y^2 \right) \right)}, \end{equation} \noindent whose Fourier transform $S(u,v)$ is given by: \begin{equation}\label{Eq:PWFA05} S(u,v) = - \frac{i}{\lambda z} \int\int \exp{\left( \frac{i\pi}{\lambda z} \left( x^2 + y^2 \right) \right)} \exp{\left( - 2 \pi i \left( x u + y v \right) \right)} \; {\rm d} x {\rm d} y = \exp{\left( - i \pi \lambda z \left( u^2 + v^2 \right) \right)}, \end{equation} \noindent where $(u, v)$ denote the Fourier domain coordinates. It is important to note that for calculating the convolution, instead of computing $s(x,y)$ in real space and taking its Fourier transform, as it is for example done in \cite{Schnars:2002}, it is better to directly calculate $S(u, v)$ using Eq. \ref{Eq:PWFA05}, as it allows for correct sampling. The coordinates in the object plane and in the Fourier plane are sampled as explained in Appendix B. The pixel size in the Fourier domain $\Delta_F$ is given by the digital Fourier transform equation, see Eq. \ref{Eq:B7}: \begin{equation}\label{Eq:PWFA06} \Delta_F = \frac{1}{N\Delta} = \frac{1}{S}, \end{equation} \noindent where $S \times S$ is the area size, $N$ denotes the number of pixels, and $\Delta$ is the pixel size in the hologram plane. In in-line holography with plane waves, the pixel size in the hologram plane $\Delta$ is equal to that in the object plane. \newpage \noindent The hologram simulation consists of the following steps: \begin{tabular}{lp{15cm}} (a) & Calculating the Fourier transform of $t(x,y)$. 
All digital Fourier transforms mentioned in this work are centered, see Appendix B.\\ (b) & Simulating $S(u,v) = \exp{\left( - i \pi \lambda z \left( u^2 + v^2 \right) \right)}$.\\ (c) & Multiplying the results of (a) and (b).\\ (d) & Calculating the inverse Fourier transform of (c).\\ (e) & Taking the square of the absolute value of the result (d). \end{tabular} \vspace{0.5cm} \noindent The reconstruction of a digital hologram recorded with plane waves is given by Eq. \ref{Eq:PW05} and can also be represented as a convolution: \begin{equation}\nonumber U(x,y) \approx \frac{i}{\lambda} \int\int H_0(X,Y) \frac{\exp{\left( - ik \left|\vec{r} - \vec{R} \right| \right)}}{\left|\vec{r} - \vec{R} \right|}\; {\rm d} X {\rm d} Y\approx \end{equation} \begin{equation}\nonumber \approx \frac{i}{\lambda z} \int\int H_0(X,Y) \exp{\left( - \frac{i\pi}{\lambda z}\left(\left( x - X \right)^2 + \left( y - Y \right)^2 \right)\right)}\; {\rm d} X {\rm d} Y = \end{equation} \begin{equation}\label{Eq:PWFA07} = H_0(x,y) \otimes s^{*}(x,y). \end{equation} \noindent The hologram reconstruction consists of the following steps: \begin{tabular}{lp{15cm}} (a) & Calculating the Fourier transform of $H_0(X,Y)$.\\ (b) & Simulating $S^{*}(u,v) = \exp{\left( i \pi \lambda z \left( u^2 + v^2 \right) \right)}$.\\ (c) & Multiplying the results of (a) and (b).\\ (d) & Calculating the inverse Fourier transform of (c). The result provides $\tilde{t}(x,y)$.
\end{tabular} \vspace{0.5cm} \noindent It can be shown that a convolution can also be computed via inverse Fourier transforms as: \begin{equation}\label{Eq:PWFA08} U(x,y) = {\rm FT} \bigg({\rm FT^{-1}}\Big(H_0(x,y)\Big) \cdot {\rm FT^{-1}} \Big(s^{*}(x,y)\Big) \bigg), \end{equation} \noindent where \begin{equation}\label{Eq:PWFA09} {\rm FT^{-1}} \Big(s^{*}(x,y)\Big) = \bigg({\rm FT} \Big(s(x,y)\Big)\bigg)^{*} = S^{*}(u,v), \end{equation} \noindent where $\rm FT$ and $\rm FT^{-1}$ are the Fourier transform and inverse Fourier transform, respectively. \vspace{0.5cm} \noindent Using this approach, the hologram reconstruction consists of the following steps: \begin{tabular}{lp{15cm}} (a) & Calculating the inverse Fourier transform of $H_0(X,Y)$.\\ (b) & Simulating $S^{*}(u,v) = \exp{\left( i \pi \lambda z \left( u^2 + v^2 \right) \right)}$.\\ (c) & Multiplying the results of (a) and (b).\\ (d) & Calculating the Fourier transform of (c). The result provides $\tilde{t}(x,y)$. \end{tabular} \vspace{0.5cm} \noindent At very large distances, the Fresnel condition is replaced by the even stronger Fraunhofer condition: \begin{equation}\label{Eq:PWFA10} z \gg \frac{\pi}{\lambda} \left[ \left( x - X \right)^2 + \left( y - Y \right)^2 \right]_{\rm max} \end{equation} \noindent and the wave scattered by the object, given by Eq. \ref{Eq:PWFA02}, becomes \begin{equation}\label{Eq:PWFA11} U_{\rm detector}(X,Y) = - \frac{i}{\lambda z} \exp{\left( \frac{i\pi}{\lambda z}\left( X^2 + Y^2 \right)\right)} \int\int t(x,y) \exp{\left( - \frac{2 \pi i}{\lambda z}\left(x X + y Y \right)\right)} \; {\rm d} x {\rm d} y \end{equation} \noindent which is just a Fourier transform of the object transmission function $t(x, y)$. The far-field Fraunhofer condition is realized in coherent diffractive imaging \cite{Miao:1999,Latychevskaia:2012}. \subsection{Angular Spectrum Method} The angular spectrum method was first described by J. A.
Ratcliffe \cite{Ratcliffe:1965}, and has been explained in detail by J.W. Goodman in his book \cite{Goodman:2004}. The angular spectrum method does not use any approximations. It is based on the notion that the propagation of a wavefront can be described by the propagation of its spectrum of plane waves. The components of the scattering vector \begin{equation}\label{Eq:PWASM01} \vec{k} = \frac{2\pi}{\lambda} \left( \cos{\varphi} \sin{\theta}, \sin{\varphi} \sin{\theta}, \cos{\theta}\right) \end{equation} \noindent are related to the Fourier domain coordinates $(u,v)$ as follows: \begin{equation}\nonumber \cos{\varphi} \sin{\theta} = \lambda u \end{equation} \begin{equation}\label{Eq:PWASM02} \sin{\varphi} \sin{\theta} = \lambda v \end{equation} \noindent whereby $\left( \lambda u, \lambda v\right)$ are the direction cosines of the vector $\vec{k}$, and therefore the following condition is fulfilled: \begin{equation}\label{Eq:PWASM03} \left(\lambda u\right)^2 + \left(\lambda v\right)^2 \le 1 \end{equation} \noindent The complex-valued exit wave $U_{\rm exit\;wave}(x,y) = t(x,y)$ is propagated to the detector plane by calculation of the following transformation \cite{Goodman:2004}: \begin{equation}\label{Eq:PWASM04} U_{\rm detector} (X,Y) = {\rm FT}^{-1} \left[ {\rm FT}\left( t(x,y) \right) \exp{\left( \frac{2\pi i z}{\lambda} \sqrt{1 - \left(\lambda u\right)^2 - \left(\lambda v\right)^2} \right)} \right], \end{equation} \noindent where $(u, v)$ denote the same Fourier domain coordinates as defined above. The reconstruction of the hologram is calculated by using the formula: \begin{equation}\label{Eq:PWASM05} U(x,y) = {\rm FT}^{-1}\left[ {\rm FT}\left( H_0(X,Y) \right)\exp{\left( -\frac{2\pi i z}{\lambda} \sqrt{1 - \left(\lambda u\right)^2 - \left(\lambda v\right)^2} \right)}\right].
\end{equation} \noindent The term $\displaystyle \exp{\left( \pm \frac{2\pi i z}{\lambda} \sqrt{1 - \left(\lambda u\right)^2 - \left(\lambda v\right)^2} \right)}$ has to be simulated, and it has non-zero values for the range of $\left( \lambda u, \lambda v \right)$ constrained by Eq. \ref{Eq:PWASM03}, which thus acts like a low-pass filter. Equation \ref{Eq:PWASM03} sets the limit for the maximal {\it possible} frequency in the Fourier domain $u_{\rm max}^{\rm max}$: \begin{equation}\label{Eq:PWASM06} \lambda u_{\rm max}^{\rm max} = 1. \end{equation} \noindent Taking into account Eq. \ref{Eq:PWASM02}, we obtain: $\lambda u_{\rm max}^{\rm max} = \sin{\theta}_{\rm max}^{\rm max} = 1$, where ${\theta}_{\rm max}^{\rm max}$ is the maximal possible angle of the scattered wave. The related resolution, given by the Abbe criterion \cite{Abbe:1881,Abbe:1882} for ${\theta}_{\rm max}^{\rm max}$ amounts to: \begin{equation}\label{Eq:PWASM07} {\rm Resolution\; lateral} = \frac{\lambda}{2 \sin{\theta}_{\rm max}^{\rm max}} = \frac{\lambda}{2}. \end{equation} \noindent Thus, the condition given by Eq. \ref{Eq:PWASM03} relates to the classical resolution limit. Therefore, as long as imaging is done within the classical resolution limit, the condition in Eq. \ref{Eq:PWASM03} is always fulfilled and the wavefront propagation can be calculated by applying the angular spectrum method. \vspace{0.5cm} \noindent The hologram is simulated as follows: \begin{tabular}{lp{15cm}} (a) & Calculating the Fourier transform of $t(x,y)$.\\ (b) & Simulating $\exp{\left( \frac{2\pi i z}{\lambda} \sqrt{1 - \left(\lambda u\right)^2 - \left(\lambda v\right)^2} \right)}$.\\ (c) & Multiplying the results of (a) and (b).\\ (d) & Calculating the inverse Fourier transform of (c).\\ (e) & Taking the square of the absolute value of the result (d). 
\end{tabular} \vspace{0.5cm} \noindent The hologram reconstruction consists of the following steps: \begin{tabular}{lp{15cm}} (a) & Calculating the Fourier transform of $H_0(X,Y)$.\\ (b) & Simulating $\exp{\left( - \frac{2\pi i z}{\lambda} \sqrt{1 - \left(\lambda u\right)^2 - \left(\lambda v\right)^2} \right)}$.\\ (c) & Multiplying the results of (a) and (b).\\ (d) & Calculating the inverse Fourier transform of (c). The result provides $\tilde{t}(x,y)$. \end{tabular} \subsection{Resolution in In-line Holography with Plane Waves} In general, the achievable lateral resolution in digital Gabor in-line holography is defined by \cite{Schnars:2005}: \begin{equation}\label{Eq:PWR01} {\rm R}_{\rm Holography} = \frac{\lambda d}{N \Delta} = \frac{\lambda d}{S}, \end{equation} \noindent where $d$ is the distance between the sample and the detector, and $S = N\Delta$ is the side length of the hologram. In practice, the resolution in in-line holography is limited by the visibility of the finest interference fringes which are formed by the interference between reference and object wave scattered at large diffraction angles. Experimentally, at least if electrons are used, the achievable resolution is often limited by the mechanical stability of the optical setup. Resolution can quantitatively be evaluated by inspecting the Fourier spectrum of a hologram \cite{Latychevskaia:2012}, similar to the resolution estimation in coherent diffractive imaging \cite{Shapiro:2005,Chapman:2006b}. If the highest observable frequency in the Fourier spectrum, $u_{\rm max}$, is detected at pixel $A$ from the center of the spectrum, its coordinate is given by: \begin{equation}\label{Eq:PWR02} u_{\rm max} = \Delta_F A. \end{equation} \noindent Using the relation $\sin{\theta_{\rm max}} = \lambda u_{\rm max}$, where $\theta_{\rm max}$ is the maximal detected scattering angle of the scattered wave, we obtain $\sin{\theta_{\rm max}} = \lambda \Delta_F A$.
With the classical Abbe resolution criterion given by Eq. \ref{Eq:PWASM07} we obtain: \begin{equation}\label{Eq:PWR03} {\rm Resolution\;lateral} = \frac{\lambda}{2\sin{\theta}_{\rm max}} = \frac{1}{2 u_{\rm max}} = \frac{1}{2 \Delta_F A} = \frac{S}{2 A}, \end{equation} \noindent whereby we substituted $\Delta_F$ with the expression given in Eq. \ref{Eq:PWFA06}. Thus, by estimating the position of the highest visible frequency $u_{\rm max}$ in the Fourier spectrum of a hologram, the lateral resolution intrinsic to the hologram can easily be evaluated by employing Eq. \ref{Eq:PWR03}. The axial resolution (in $z$-direction) can be defined as the depth of focus $\delta$. An ideal point scatterer imaged by a diffraction-limited system is represented as an Airy spot, with $80~\%$ of the intensity remaining in the main maximum at the defocus distance \cite{Bountry:1962,Meng:1995}: \begin{equation}\label{Eq:PWR04} \delta = \frac{2\lambda}{(\rm 2N.A.)^2}, \end{equation} \noindent where $\rm N.A.$ is the numerical aperture of the system. This provides an estimate for the axial resolution: \begin{equation}\label{Eq:PWR05} {\rm Resolution\; axial} = \frac{\lambda}{(\rm N.A.)^2}. \end{equation} \section{In-line Holography with Spherical Waves} In this section, we describe methods of simulating and reconstructing holograms created by spherical waves, as illustrated in Fig. \ref{Fig:1}(b). This type of hologram is also called a Fresnel or Gabor hologram. The incident wave in the object plane is given by: \begin{equation}\label{Eq:SW01} U_{\rm incident}(x,y) = \frac{\exp{\left( ikr \right)}}{r}, \end{equation} \noindent where $\vec{r} = \left(x, y, z\right)$ and $z$ is the distance between the source and the object plane, as indicated in Fig. \ref{Fig:1}. The exit wave beyond the object is given by Eq. \ref{Eq:HF03}: \begin{equation}\label{Eq:SW02} U_{\rm exit\; wave}(x,y) = U_{\rm incident} (x,y) \cdot t(x,y) = \frac{\exp{\left( ikr \right)}}{r} \cdot t(x,y).
\end{equation} \noindent The propagation of the wave towards the detector is described by the Fresnel-Kirchhoff diffraction formula, see Eq. \ref{Eq:HF04}: \begin{equation}\label{Eq:SW03} U_{\rm detector} (X,Y) = -\frac{i}{\lambda} \int\int \frac{\exp{\left( ikr \right)}}{r} \cdot t(x,y) \frac{\exp{\left( ik \left|\vec{r} - \vec{R} \right| \right)}}{\left|\vec{r} - \vec{R} \right|}\; {\rm d} x {\rm d} y, \end{equation} \noindent where $\vec{r} = (x, y, z)$ is a vector pointing from the source to a point in the object, $\vec{R} = (X, Y, Z)$ is a vector pointing from the source to a point on the detector, and $\left|\vec{r} - \vec{R} \right|$ is the distance between a point in the object plane and a point in the detector plane (see Fig. \ref{Fig:1}(b)). The reconstruction of a digital hologram recorded with spherical waves is given by Eq. \ref{Eq:HF08} where $R(X,Y) = \exp(ikR)/R$: \begin{equation}\label{Eq:SW04} U(x,y) \approx \frac{i}{\lambda} \int\int \frac{\exp{\left(ikR\right)}}{R} H_0(X,Y) \frac{\exp{\left( - ik \left|\vec{r} - \vec{R} \right| \right)}}{\left|\vec{r} - \vec{R} \right|}\; {\rm d} X {\rm d} Y. \end{equation} \subsection{Paraxial Approximation} In the paraxial approximation, the following approximations are valid: \begin{equation}\label{Eq:SWPA01} r \approx z + \frac{x^2 + y^2}{2z} \end{equation} \noindent and \begin{equation}\label{Eq:SWPA02} \left|\vec{r} - \vec{R} \right| \approx Z + \frac{\left(x - X \right)^2 + \left( y - Y \right)^2}{2Z}. \end{equation} \noindent They allow the following expansion of Eq. \ref{Eq:SW03}: \begin{equation}\nonumber U_{\rm detector}(X,Y) = -\frac{i}{\lambda Z z} \exp{\left( \frac{2\pi i}{\lambda} (Z + z)\right)} \int\int \exp{\left( \frac{i\pi}{\lambda z} (x^2 + y^2)\right)}\; t(x,y)\times \end{equation} \begin{equation}\label{Eq:SWPA03} \times\exp{\left( \frac{i\pi}{\lambda Z} \left(\left(x - X\right)^2 + \left(y - Y\right)^2\right)\right)} \; {\rm d} x {\rm d} y. 
\end{equation} \noindent By taking into account that $z \ll Z$, we rewrite: \begin{equation}\nonumber U_{\rm detector}(X,Y) = -\frac{i}{\lambda Z z} \exp{\left( \frac{2\pi i}{\lambda} (Z + z)\right)} \exp{\left( \frac{i\pi}{\lambda Z} \left(X^2 + Y^2\right)\right)} \int\int \exp{\left( \frac{i\pi}{\lambda z} (x^2 + y^2)\right)}\times \end{equation} \begin{equation}\label{Eq:SWPA04} \times t(x,y)\exp{\left( - \frac{2\pi i}{\lambda Z} \left(x X + y Y \right)\right)} \; {\rm d} x {\rm d} y. \end{equation} \noindent In his original work, Gabor \cite{Gabor:1949} arrived at a similar relation, where $t(x,y)$ and $U_{\rm detector}(X,Y)$ constitute a Fourier pair, and thus $U_{\rm detector}(X,Y)$ can be obtained from $t(x,y)$ by multiplying it with a spherical phase term and taking the Fourier transform of the result, as is evident from Eq. \ref{Eq:SWPA04}. However, such a single Fourier transform approach is not optimal when applied to digital holograms. To design a routine for wave propagation that employs two Fourier transforms, we rewrite Eq. \ref{Eq:SWPA04} in the form of a convolution \cite{Latychevskaia:2012} \begin{equation}\nonumber U_{\rm detector}(X,Y) \approx -\frac{i}{\lambda Z z} \exp{\left( \frac{2\pi i}{\lambda} (Z + z)\right)} \exp{\left( \frac{i\pi}{\lambda Z} \left(X^2 + Y^2\right)\right)} \int\int t(x,y) \times \end{equation} \begin{equation}\label{Eq:SWPA05} \times \exp{\left( \frac{i \pi}{\lambda z} \left(\left(x - X\frac{z}{Z}\right)^2 + \left(y - Y\frac{z}{Z}\right)^2 \right)\right)} \; {\rm d} x {\rm d} y \end{equation} \noindent of the transmission function with the Fresnel function $s(x,y)$, whereby the latter is given by Eq. \ref{Eq:PWFA04}. The hologram is then calculated as: \begin{equation}\label{Eq:SWPA06} H(X,Y) = \left| U_{\rm detector}(X,Y)\right|^2 = \left| t(X,Y) \otimes s(X,Y) \right|^2. \end{equation} \noindent The coordinates in the object plane and in the Fourier domain are sampled as explained in Appendix B.
The pixel size in the Fourier domain is given by the digital Fourier transform equation, see Eq. \ref{Eq:B7}: \begin{equation}\label{Eq:SWPA07} \Delta_F = \frac{1}{N \Delta_{\rm Object}} = \frac{1}{S_{\rm Object}}, \end{equation} \noindent where $\Delta_{\rm Object} = S_{\rm Object}/N$ is the pixel size in the object plane and $S_{\rm Object} \times S_{\rm Object}$ is the object area size. \vspace{0.5cm} \noindent Thus, a hologram is simulated by: \begin{tabular}{lp{15cm}} (a) & Calculating the Fourier transform of $t(x,y)$.\\ (b) & Simulating \newline $S(u,v) = \exp{\left( - i \pi \lambda z (u^2 + v^2) \right)}$.\\ (c) & Multiplying the results of (a) and (b).\\ (d) & Calculating the inverse Fourier transform of (c).\\ (e) & Taking the square of the absolute value of the result (d). \end{tabular} \noindent The size of the simulated hologram is equal to the size of the object area multiplied by the magnification factor: \begin{equation}\label{Eq:SWPA08} {M} = \frac{Z}{z}. \end{equation} \vspace{0.5cm} \noindent The hologram is reconstructed in the reciprocal order by: \begin{tabular}{lp{15cm}} (a) & Calculating the inverse Fourier transform of $H_0(X,Y)$.\\ (b) & Simulating \newline $S^{*}(u,v) = \exp{\left( i \pi \lambda z (u^2 + v^2) \right)}$.\\ (c) & Multiplying the results of (a) and (b).\\ (d) & Calculating the Fourier transform of (c). The result provides $\tilde{t}(x,y)$. \end{tabular} \noindent The size of the reconstructed object area is equal to the size of the hologram divided by the magnification factor $M$. \subsection{Non-paraxial Approximation} When the incident spherical wave extends over larger angles, the paraxial approximation is no longer valid and the field propagation based on the Fresnel-Kirchhoff diffraction formula Eq. \ref{Eq:SW03} must be calculated. An approach that allows the single Fourier transform integral to be transformed into the convolution integral was presented by \cite{Kreuzer:2002}. 
Below, we present an approach that uses propagation through the source plane \cite{Latychevskaia:2009}. \vspace{0.5cm} \noindent {\bf Simulation} \noindent To avoid difficulties with sampling, we design a two-step routine employing two Fourier transforms. In the first step, the wave is propagated from the object plane $\vec{r} = (x,y,z)$ to the source plane $\vec{r}_0 = (x_0,y_0,0)$. In the second step, the wave is propagated from the source plane to the detector plane \cite{Latychevskaia:2009,ZhangFucai:2004,Wang:2008}. In the first step, with the approximation $r_0 \ll r$, we expand: \begin{equation}\label{Eq:SWNP01} \left|\vec{r} - \vec{r}_0 \right| \approx r - \frac{\vec{r}\vec{r}_0}{r} + \frac{r_0^2}{2r} \end{equation} \noindent which, when substituted into the Fresnel-Kirchhoff diffraction formula Eq. \ref{Eq:SW04}, results in: \begin{equation}\nonumber U_0(x_0,y_0) = \frac{i}{\lambda} \int\int \frac{\exp{\left( ikr \right)}}{r} \cdot t(x,y) \frac{\exp{\left( - ik \left|\vec{r} - \vec{r}_0 \right| \right)}}{\left|\vec{r} - \vec{r}_0 \right|}\; {\rm d} x {\rm d} y \approx \end{equation} \begin{equation}\nonumber \approx \frac{i}{\lambda} \int\int \frac{\exp{\left( ikr \right)}}{r} \cdot t(x,y) \frac{\exp{\left( - ikr \right)}}{r} \exp{\left( ik\frac{\vec{r}\vec{r}_0}{r} \right)} \exp{\left( - ik\frac{r_0^2}{2r} \right)} \; {\rm d} x {\rm d} y = \end{equation} \begin{equation}\label{Eq:SWNP02} = \frac{i}{\lambda z^2} \exp{\left( -\frac{i\pi}{\lambda z} \left( x_0^2 + y_0^2 \right)\right)} \int\int t(x,y) \exp{\left( \frac{2\pi i}{\lambda z} (x_0 x + y_0 y) \right)}\; {\rm d} x {\rm d} y. \end{equation} \noindent Thus, the first step consists of an inverse Fourier transform of the object transmission function $t(x,y)$ multiplied with the spherical phase term $\displaystyle \exp{\left( -\frac{i\pi}{\lambda z} (x_0^2 + y_0^2) \right)}$. The sampling in the source plane is given by the digital Fourier transform, see Eq.
\ref{Eq:B7}: \begin{equation}\label{Eq:SWNP03} \Delta_0 = \frac{\lambda z}{N \Delta_{\rm Object}} = \frac{\lambda z}{S_{\rm Object}} . \end{equation} \noindent In the second step, the wavefront is propagated to the detector plane, which is described by the Fresnel-Kirchhoff diffraction formula: \begin{equation}\label{Eq:SWNP04} U_{\rm detector}(X,Y) = - \frac{i}{\lambda} \int\int U_0(x_0,y_0) \frac{\exp{\left( ik \left|\vec{r}_0 - \vec{R} \right| \right)}}{\left|\vec{r}_0 - \vec{R} \right|}\; {\rm d} x_0 {\rm d} y_0. \end{equation} \noindent Here, the approximation $r_0\ll R$ holds and the following expansion can be applied: \begin{equation}\label{Eq:SWNP05} \left|\vec{r}_0 - \vec{R} \right| \approx R - \frac{\vec{R}\vec{r}_0}{R} = R - \vec{\kappa}\vec{r}_0, \end{equation} \noindent where we introduced the emission vector (see Appendix C): \begin{equation}\label{Eq:SWNP06} \vec{\kappa} = \frac{\vec{R}}{R} = \left( \frac{X}{R}, \frac{Y}{R}, \frac{Z}{R}\right) = \left(\kappa_x, \kappa_y, \kappa_z\right), \end{equation} \begin{equation}\nonumber R = \sqrt{X^2 + Y^2 + Z^2}. \end{equation} \noindent We rewrite Eq. \ref{Eq:SWNP04}: \begin{equation}\nonumber U_{\rm detector}(\kappa_x,\kappa_y) = - \frac{i}{\lambda} \frac{\exp{\left(ikR\right)}}{R} \int\int U_0(x_0, y_0) \exp{\left( - ik \vec{\kappa}\vec{r}_0 \right)}\; {\rm d} x_0 {\rm d} y_0 = \end{equation} \begin{equation}\label{Eq:SWNP07} = - \frac{i}{\lambda} \frac{\exp{\left(ikR\right)}}{R} \int\int U_0(x_0, y_0) \exp{\left( - ik (x_0 \kappa_x + y_0 \kappa_y) \right)}\; {\rm d} x_0 {\rm d} y_0. \end{equation} \noindent Thus, the second step consists of just the Fourier transform of $U_0(x_0,y_0)$. The phase factors in front of the integral vanish when the square of the absolute value is calculated and the $1/R^2$ factor cancels out after normalization of the hologram by division with the background image: $B(X,Y) = 1/R^2$.
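The two propagation steps derived above, Eqs. \ref{Eq:SWNP02} and \ref{Eq:SWNP07}, can be sketched as follows. This illustrative Python fragment assumes the centered Fourier transform convention of Appendix B and returns the detector field sampled in $(\kappa_x, \kappa_y)$-coordinates; the transformation to $(X,Y)$-coordinates and the final normalization of the intensity are omitted:

```python
import numpy as np

def _ft(a):   # centered Fourier transform (Appendix B convention, assumed)
    return np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(a)))

def _ift(a):  # centered inverse Fourier transform
    return np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(a)))

def two_step_propagation(t, S_obj, wavelength, z):
    """Propagate the exit wave t(x,y) through the source plane.
    Step 1 (Eq. SWNP02): inverse FT of t times the spherical phase term,
    sampled with Delta_0 = lambda z / S_obj (Eq. SWNP03).
    Step 2 (Eq. SWNP07): FT of the source-plane field."""
    N = t.shape[0]
    delta0 = wavelength * z / S_obj
    x0 = (np.arange(N) - N // 2) * delta0
    X0, Y0 = np.meshgrid(x0, x0)
    U0 = _ift(t) * np.exp(-1j * np.pi / (wavelength * z) * (X0 ** 2 + Y0 ** 2))
    return _ft(U0)  # detector field in (kappa_x, kappa_y)-coordinates

# For t = 1 (no object) the result is a uniform reference wave.
```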
The remaining constant factor is given by $\displaystyle \frac{\Delta_0^2 \Delta_{\rm Object}^2}{\lambda^2 z^2} = \frac{1}{N^2}$. \vspace{0.5cm} \noindent The hologram is simulated by: \begin{tabular}{lp{15cm}} (a) & Calculating the inverse Fourier transform of $t(x,y)$.\\ (b) & Simulating $\displaystyle \exp{\left( - \frac{i\pi}{\lambda z} (x_0^2 + y_0^2) \right)}$.\\ (c) & Multiplying the results of (a) and (b).\\ (d) & Calculating the Fourier transform of (c).\\ (e) & Transformation from $(\kappa_x, \kappa_y)$-coordinates to $(X,Y)$-coordinates.\\ (f) & Taking the square of the absolute value of the result (e).\\ (g) & Multiplication with the factor $1/N^2$. \end{tabular} \vspace{0.5cm} \noindent {\bf Reconstruction} \noindent The numerical reconstruction of a digital hologram consists of a multiplication of the hologram with the reference wave $R(X,Y) = e^{ikR}/R$ followed by back-propagation to the object plane given by the Fresnel-Kirchhoff diffraction formula Eq. \ref{Eq:SW04}: \begin{equation}\label{Eq:SWNP09} U(x,y) \approx \frac{i}{\lambda} \frac{\exp{\left( ikR \right)}}{R}\int\int H_0(X,Y) \frac{\exp{\left( - ik \left|\vec{r} - \vec{R} \right| \right)}}{\left|\vec{r} - \vec{R} \right|}\; {\rm d} X {\rm d} Y. \end{equation} \noindent Here again, we split the reconstruction routine into two steps, which employ two Fourier transforms. In the first step, the wave is propagated from the detector plane $\vec{R} = (X,Y,Z)$ to the source plane $\vec{r}_0 = (x_0,y_0,0)$. In the second step, the wave is propagated from the source plane $\vec{r}_0 = (x_0,y_0,0)$ to the object plane $\vec{r} = (x,y,z)$. In the first step, the approximation $r_0\ll R$ is fulfilled and the expansion given by Eq. \ref{Eq:SWNP05} can be inserted into Eq.
\ref{Eq:SWNP09}: \begin{equation}\nonumber U_0(x_0,y_0) \approx \frac{i}{\lambda} \frac{\exp{\left( ikR \right)}}{R^2}\int\int H_0(X,Y) \exp{\left(- ikR\right)} \exp{\left( ik\vec{\kappa}\vec{r}_0 \right)}\; {\rm d} X {\rm d} Y = \end{equation} \begin{equation}\label{Eq:SWNP10} = \frac{i}{\lambda} \int\int H(X,Y) \exp{\left( ik\vec{\kappa}\vec{r}_0 \right)} J(\kappa_x,\kappa_y)\; {\rm d} \kappa_x {\rm d} \kappa_y, \end{equation} \noindent where we took into account that $\displaystyle H_0(X,Y) = \frac{H(X,Y)}{R^2}$ and introduced the Jacobian of the coordinate transformation: \begin{equation}\label{Eq:SWNP11} J(\kappa_x,\kappa_y) = \frac{Z^2}{\left(1-\kappa_x^2 -\kappa_y^2\right)^2}. \end{equation} \noindent We rewrite Eq. \ref{Eq:SWNP10} as: \begin{equation}\label{Eq:SWNP12} U_0(x_0,y_0) = \frac{i}{\lambda} \int\int H(\kappa_x,\kappa_y) \exp{\left( ik (x_0 \kappa_x + y_0 \kappa_y) \right)} J(\kappa_x,\kappa_y)\; {\rm d} \kappa_x {\rm d} \kappa_y \end{equation} \noindent which is simply the inverse Fourier-transform of the holographic image in $\left(\kappa_x,\kappa_y \right)$-coordinates. The transformation of the holographic image into $\left(\kappa_x,\kappa_y \right)$-coordinates is described in Appendix C. In the second step, the field is propagated from the source plane to the object plane, which is again calculated by the Fresnel-Kirchhoff diffraction formula: \begin{equation}\label{Eq:SWNP13} U(x,y) = - \frac{i}{\lambda} \int\int U_0(x_0,y_0)\frac{\exp{\left( ik \left|\vec{r} - \vec{r}_0 \right| \right)}}{\left|\vec{r} - \vec{r}_0 \right|}\; {\rm d} x_0 {\rm d} y_0. \end{equation} \noindent Using the expansion given by Eq. 
\ref{Eq:SWNP01}, we obtain \begin{equation}\label{Eq:SWNP14} U(x,y) \approx \frac{i}{\lambda r } \exp{\left(ikr\right)} \int\int U_0(x_0,y_0) \exp{\left( - \frac{2\pi i}{\lambda z} (x_0 x + y_0 y) \right)} \exp{\left( \frac{i\pi}{\lambda z} (x_0^2 + y_0^2) \right)} \; {\rm d} x_0 {\rm d} y_0, \end{equation} \noindent which is a multiplication of $U_0(x_0,y_0)$ with a complex spherical wave factor, followed by a Fourier transform of the result. The reconstructed exit wave includes the incident spherical wave. Thus, the result of Eq. \ref{Eq:SWNP14} must be divided by the incident wave to reveal the object transmission function: \begin{equation}\label{Eq:SWNP15} t(x,y) = r \exp{\left(-ikr\right)} U(x,y). \end{equation} \noindent The total integral transform involved in the second step is given by \begin{equation}\label{Eq:SWNP16} t(x,y) \approx \frac{i}{\lambda } \int\int U_0(x_0,y_0) \exp{\left( - \frac{2\pi i}{\lambda z} (x_0 x + y_0 y) \right)} \exp{\left( \frac{i\pi}{\lambda z} (x_0^2 + y_0^2) \right)} \; {\rm d} x_0 {\rm d} y_0. \end{equation} \noindent When analytical integration is replaced by numerical integration, the total pre-factor turns into: $\displaystyle \frac{\Delta_0^2\Delta_{\kappa}^2}{\lambda^2} = \frac{1}{N^2}$, where $\Delta_{\kappa}$ is the pixel size in $\kappa$-space and $\Delta_0$ is the pixel size in the source plane. $\Delta_0$ is derived from $\Delta_{\kappa}$ by using Eq. \ref{Eq:B7}: \begin{equation}\label{Eq:SWNP17} \Delta_0 = \frac{\lambda}{N \Delta_{\kappa}}. \end{equation} \noindent Thus, the hologram reconstruction includes the following steps: \begin{tabular}{lp{15cm}} (a) & Transforming the hologram image to $\kappa$-coordinates.\\ (b) & Calculating $J(\kappa_x, \kappa_y)$ using Eq.
\ref{Eq:SWNP11}.\\ (c) & Inverse Fourier transform of the product of (a) and (b).\\ (d) & Simulating $\displaystyle \exp{\left(\frac{i\pi}{\lambda z} (x_0^2 + y_0^2) \right)}$.\\ (e) & Multiplying the results of (c) and (d).\\ (f) & Calculating the Fourier transform of (e).\\ (g) & Multiplication with the factor $1/N^2$. The result provides $t(x,y)$. \end{tabular} \vspace{0.5cm} \noindent In the algorithms for in-line holography with spherical waves, the size of a pixel in the hologram plane is equal to the size of a pixel in the object plane multiplied by the magnification factor $M$ given by Eq. \ref{Eq:SWPA08}. \subsection{Resolution in In-line Holography with Spherical Waves} Arguments similar to those in the discussion of resolution in in-line holography with plane waves above also apply here. The practical resolution limit, intrinsic to an in-line hologram recorded with spherical waves, can be estimated from the highest frequency observed in its Fourier spectrum. The resolution formula, analogous to Eq. \ref{Eq:PWR03}, is given by: \begin{equation}\label{Eq:SWR01} {\rm Resolution\; lateral} = \frac{S}{2A\cdot M}, \end{equation} \noindent where $A$ is the pixel number at which the highest frequency in the Fourier domain is detected, $S$ is the size of the hologram, and $M$ denotes the magnification factor. \section{Relationship between Holograms Recorded with Plane and Spherical Waves} It is worth noting that the reconstruction algorithms presented here consist of similar steps regardless of the wavefront shape: inverse Fourier transform and multiplication with the spherical phase factor followed by a Fourier transform. The spherical wave factor is given by $S^{*}(u, v)$, see Eq. \ref{Eq:PWFA05}: \begin{equation}\label{Eq:PWSW01} S^{*}(u,v) = \exp{\left( i \pi \lambda z \left( u^2 + v^2 \right) \right)}.
\end{equation} \noindent in the case of plane waves and by $\displaystyle \exp{\left(\frac{i\pi}{\lambda z} (x_0^2 + y_0^2) \right)}$ in the case of spherical waves. When written in digital form, these two terms are: \begin{equation}\nonumber S^{*}(p, q) = \exp{\left( i \pi \lambda z \Delta_F^2 \left( p^2 + q^2 \right) \right)} \; {\rm and} \end{equation} \begin{equation}\label{Eq:PWSW02} \exp{\left( \frac{i \pi }{\lambda z} \Delta_0^2\left( p^2 + q^2 \right) \right)}, \qquad p, q = 1...N, \end{equation} \noindent where $p$ and $q$ are the pixel numbers. Substituting $\Delta_F$ and $\Delta_0$ from Eq. \ref{Eq:PWFA06} respectively Eq. \ref{Eq:SWNP17}, we obtain: \begin{equation}\nonumber S^{*}(p, q) = \exp{\left( \frac{i \pi \lambda z}{N^2 \Delta^2} \left(p^2 + q^2 \right) \right)} \; {\rm and} \end{equation} \begin{equation}\label{Eq:PWSW03} \exp{\left( \frac{i \pi \lambda}{z N^2 \Delta_{\kappa}^2} \left(p^2 + q^2\right) \right)}. \end{equation} \noindent Taking into account that $\displaystyle \Delta = \frac{S_{\rm plane}}{N}$ and $\displaystyle \Delta_{\kappa} = \frac{S_{\rm spherical}}{Z N}$, where $S_{\rm plane} \times S_{\rm plane}$ and $S_{\rm spherical} \times S_{\rm spherical}$ are the related sizes of the areas in the detector plane, we obtain from Eq. \ref{Eq:PWSW03}: \begin{equation}\nonumber S^{*}(p, q) = \exp{\left( \frac{i \pi \lambda z}{S_{\rm plane}^2} \left(p^2 + q^2 \right) \right)} \; {\rm and} \end{equation} \begin{equation}\label{Eq:PWSW04} \exp{\left( \frac{i \pi \lambda Z^2}{z S_{\rm spherical}^2} \left(p^2 + q^2 \right) \right)}. \end{equation} \noindent These two terms are equal when the following equation holds: \begin{equation}\label{Eq:PWSW05} \frac{\lambda z}{S_{\rm plane}^2} = \frac{\lambda Z^2}{z S_{\rm spherical}^2} = \alpha. 
\end{equation} \noindent The above equation implies that a hologram recorded with a spherical wave can be reconstructed as if it had been recorded with plane waves, or vice versa, provided that the following relation is fulfilled: \begin{equation}\label{Eq:PWSW06} {S_{\rm plane}} = \frac{z}{Z} {S_{\rm spherical}}. \end{equation} \noindent Examples of such reconstructions are shown in Fig. \ref{Fig:2}. \begin{figure}[htbp] \centerline{\includegraphics[width=12cm]{fig2.eps}} \caption{Optical hologram of a tungsten tip and its reconstruction. (a) Hologram recorded with $\lambda = 532$~nm laser light by the in-line Gabor scheme with the following parameters: source-to-detector distance $Z = 1060$~mm, hologram size ${S_{\rm spherical}}\times {S_{\rm spherical}} = 325 \times 325~\rm mm^2$, source-to-object distance $z = 1.4$~mm. The hologram exhibits a parameter $\alpha = 4.046\cdot 10^{-3}$. (b) Reconstructed object using the algorithm for spherical waves. The size of the reconstructed area amounts to $ 429 \times 429~\rm\mu m^2$. (c) Reconstructed object assuming a planar wavefront. The hologram size is set to ${S_{\rm plane}}\times { S_{\rm plane}} = 429 \times 429~\rm\mu m^2$ and the reconstruction is obtained at a hologram-to-object distance of $z = 1.4$~mm. Prior to the reconstruction, an apodization cosine-filter is applied to the edges of the normalized hologram to minimize digital Fourier transform artefacts that would otherwise arise due to a step-like intensity drop at the rim of the holographic record (see Appendix A).}\label{Fig:2} \end{figure} Moreover, Eq. \ref{Eq:PWSW05} implies that a uniquely defined factor $\alpha$ can be assigned to any hologram. Consequently, holograms recorded with variable wavelengths, screen sizes or source-detector distances can uniquely be reconstructed as long as $\alpha$ remains constant.
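For the hologram of Fig. \ref{Fig:2}, the parameter $\alpha$ and the equivalent plane-wave hologram size follow directly from Eq. \ref{Eq:PWSW05} and Eq. \ref{Eq:PWSW06}. A short numerical check in Python, using the values quoted in the figure caption:

```python
def alpha_plane(wavelength, z, S_plane):
    """alpha of a plane-wave hologram, Eq. PWSW05."""
    return wavelength * z / S_plane ** 2

def alpha_spherical(wavelength, z, Z, S_spherical):
    """alpha of a spherical-wave hologram, Eq. PWSW05."""
    return wavelength * Z ** 2 / (z * S_spherical ** 2)

# Parameters of the hologram in Fig. 2 (SI units).
lam, z, Z, S_sph = 532e-9, 1.4e-3, 1060e-3, 325e-3
a = alpha_spherical(lam, z, Z, S_sph)  # about 4.0e-3
S_plane = z / Z * S_sph                # about 429 microns, Eq. PWSW06
```

By construction, $\alpha$ computed from the equivalent plane-wave parameters agrees with the spherical-wave value.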
This approach, however, is only valid for a thin object which can be assumed to lie in one plane, or when the reconstruction at a certain plane within the object distribution is to be obtained. When reconstructing a truly three-dimensional object by obtaining a sequence of object distributions at different $z$ distances from the hologram, one must adjust the geometrical parameters at each reconstruction distance. For example, from Eq. \ref{Eq:PWSW06} it can be seen that the size of the reconstructed area scales with the distance $z$. \section{Optimal Parameters} During the reconstruction procedure, the inverse Fourier transform of a hologram is multiplied with the spherical phase term, which, for example in the case of plane waves, equals \begin{equation}\label{Eq:OP01} S^{*}(u,v) = \exp{\left(i\pi \lambda z (u^2 + v^2) \right)}. \end{equation} \noindent Such a spherical phase function is correctly simulated with $N \times N$ pixels when it can be reduced to \begin{equation}\label{Eq:OP02} \exp{\left(\frac{i\pi}{N} (m^2 + n^2) \right)} \qquad m, n = 1...N, \end{equation} \noindent where $m$ and $n$ are the pixel numbers. Taking into account the sampling $u = m \Delta_F$ and $v = n \Delta_F$, and substituting $\Delta_F$ from Eq. \ref{Eq:PWFA06}, we obtain the following condition for correct sampling (at Nyquist or higher frequency) of the spherical phase term: \begin{equation}\label{Eq:OP03} \frac{S^2}{\lambda z} \leq {N}. \end{equation} \noindent This condition allows optimal experimental parameters to be selected. \section{Conclusions} We have presented simple recipes for the numerical reconstruction of in-line holograms recorded with plane and spherical waves. These methods are wavelength-independent and can thus be applied to holograms recorded with any kind of radiation exhibiting wave nature. Moreover, both the absorbing and the phase-shifting properties of objects can be reconstructed.
We also demonstrated that any digital hologram can be assigned a unique parameter that determines its digital reconstruction. \vspace{0.5cm} \noindent {\bf Acknowledgments} \noindent Financial support by the Swiss National Science Foundation and the University of Zurich is gratefully acknowledged. \newpage
\section{Introduction} With the increasing popularity of GoPro \cite{gopro} and the introduction of Google Glass \cite{glass} the use of head worn egocentric cameras is on the rise. These cameras are typically operated in a hands-free, always-on manner, allowing the wearers to concentrate on their activities. While more and more egocentric videos are being recorded, watching such videos from start to end is difficult due to two aspects: (i) The videos tend to be long and boring; (ii) Camera shake induced by natural head motion further disturbs viewing. These aspects call for automated tools to enable faster access to the information in such videos. An exceptional tool for this purpose is the ``Hyperlapse" method recently proposed by \cite{hyperlapse}. While our work was inspired by \cite{hyperlapse}, we take a different, lighter, approach to address this problem. \begin{figure}[t] \centering \subfigure[]{\includegraphics[width=0.9\columnwidth]{figures/fig-schematic-ff1-uniform.pdf}} \\ \subfigure[]{\includegraphics[width=0.9\columnwidth]{figures/fig-schematic-ff1-ours.pdf}} \caption{Frame sampling for Fast Forward. A view from above on the camera path (the line) and the viewing directions of the frames (the arrows) as the camera wearer walks forward during a couple of seconds. (a) Uniform $5\times$ frames sampling, shown with solid arrows, gives output with significant changes in viewing directions. (b) Our frame sampling, represented as solid arrows, prefers forward looking frames at the cost of somewhat non uniform sampling.} \label{fig:ff-schematic} \end{figure} Fast forward is a natural choice for faster browsing of egocentric videos. The speed factor depends on the cognitive load a user is interested in taking. Na\"{\i}ve fast forward uses uniform sampling of frames, and the sampling density depends on the desired speed up factor. 
Adaptive fast forward approaches \cite{Petrovic:2005} try to adjust the speed in different segments of the input video so as to equalize the cognitive load. For example, sparser frame sampling giving higher speed up is possible in stationary scenes, and denser frame sampling giving lower speed ups is possible in dynamic scenes. In general, content aware techniques adjust the frame sampling rate based upon the importance of the content in the video. Typical importance measures include scene motion, scene complexity, and saliency. None of the aforementioned methods, however, can handle the challenges of egocentric videos, as we describe next. \begin{figure*}[ht] \centering \includegraphics[width=1\textwidth]{figures/fig-ff-naive-vs-ours-comparison-bike2.pdf} \caption{Representative frames from the fast forward results on `Bike2' sequence \cite{hyperlapse-dataset}. The camera wearer rides a bike and prepares to cross the road. \underline{Top row:} uniform sampling of the input sequence leads to a very shaky output as the camera wearer turns his head sharply to the left and right before crossing the road. \underline{Bottom row:} EgoSampling prefers forward looking frames and therefore samples the frames non-uniformly so as to remove the sharp head motions. The stabilization can be visually compared by focusing on the change in position of the building (circled yellow) appearing in the scene. The building does not even show up in two frames of the uniform sampling approach, indicating the extreme shake. Note that the fast forward sequence produced by EgoSampling can be post-processed by traditional video stabilization techniques to further improve the stabilization.} \label{fig:result_bike2} \end{figure*} Most egocentric videos suffer from substantial camera shake due to natural head motion of the wearer. 
We borrow the terminology of \cite{us} and note that when the camera wearer is ``stationary" (e.g., sitting or standing in place), head motions are less frequent and pose no challenge to traditional fast-forward and stabilization techniques. However, when the camera wearer is ``in transit" (e.g., walking, cycling, driving, etc.), existing fast forward techniques end up accentuating the shake in the video. We therefore focus on handling these cases, leaving the simpler cases of a stationary camera wearer for standard methods. We use the method of \cite{us} to identify with high probability portions of the video in which the camera wearer is not ``stationary", and operate only on these. Other methods, such as \cite{kitani,grauman-story}, can also be used to identify a stationary camera wearer. We propose to model frame sampling as an energy minimization problem. A video is represented as a directed acyclic graph whose nodes correspond to input video frames. The weight of an edge between nodes, e.g. between frame $t$ and frame $t+k$, represents a cost for the transition from $t$ to $t+k$. For fast forward, the cost represents how ``stable" the output video will be if frame $t$ is followed by frame $t+k$ in the output video. This can also be viewed as introducing a bias favoring a smoother camera path. The weight additionally indicates how suitable the skip $k$ is to the desired playback speed. In this formulation, the problem of generating a stable fast forwarded video becomes equivalent to that of finding a shortest path in a graph. We keep all edge weights non-negative and note that there are numerous, polynomial time, optimal inference algorithms available for finding a shortest path in such graphs. We show that sequences produced with our method are more stable and easier to watch compared to traditional fast forward methods.
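The shortest-path formulation above can be sketched in a few lines of Python; `transition_cost` is a placeholder for the edge weights defined later, and the source/sink handling is omitted (a minimal sketch under these assumptions, not the authors' implementation):

```python
import heapq

def fast_forward_path(n_frames, transition_cost, max_skip=100):
    """Pick output frames via a shortest path over the frame graph.

    Nodes are input frames; an edge (t, t+k), k <= max_skip, carries the
    cost of placing frame t+k right after frame t in the output.
    """
    INF = float("inf")
    dist = [INF] * n_frames
    prev = [-1] * n_frames
    dist[0] = 0.0
    heap = [(0.0, 0)]
    while heap:
        d, t = heapq.heappop(heap)
        if d > dist[t]:
            continue  # stale heap entry
        for k in range(1, max_skip + 1):
            j = t + k
            if j >= n_frames:
                break
            nd = d + transition_cost(t, j)
            if nd < dist[j]:
                dist[j] = nd
                prev[j] = t
                heapq.heappush(heap, (nd, j))
    # Recover the selected frames by backtracking from the last frame.
    path, t = [], n_frames - 1
    while t != -1:
        path.append(t)
        t = prev[t]
    return path[::-1]

# Toy cost: prefer skips close to a target speed-up of 10x.
frames = fast_forward_path(101, lambda i, j: abs((j - i) - 10), max_skip=20)
print(frames)  # uniform 10-frame skips minimize this toy cost
```

With non-negative edge weights, Dijkstra's algorithm finds the optimum in time polynomial in the number of frames, as noted above.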
An interesting phenomenon of a walking person is the shifting of body weight from one leg to the other, causing periodic head motion from left to right and back. Given an egocentric video taken by a walking person, sampling frames from the leftmost and rightmost head positions gives approximate stereo-pairs. This enables generation of a stereo video from a monocular input video. The contributions of this paper are: (i) A novel and lightweight approach for creating fast forward videos for egocentric videos. (ii) A method to create stereo sequences from monocular egocentric video. The rest of this paper is organized as follows. We survey related works in Section \ref{sec:related_work}. The proposed frame sampling method for fast forward and the problem formulation are presented in Sections \ref{sec:sampling_framework} and \ref{sec:problem_formulation}, respectively. In Section \ref{sec:stereo} we describe our method for creating perceptual stereo sequences. Experiments and user study results are given in Section \ref{sec:exp}. We conclude in Section \ref{sec:concl}. \section{Related Work} \label{sec:related_work} \paragraph*{Video Summarization:} Video Summarization methods sample the input video for salient events to create a concise output that captures the essence of the input video. This field has seen many new papers in recent years, but only a handful address the specific challenges of summarizing egocentric videos. In \cite{grauman-important-people,grauman-snap-points}, important keyframes are sampled from the input video to create a story-board summarizing the input video. In \cite{grauman-story}, subshots that are related to the same ``story" are sampled to produce a ``story-driven" summary. Such video summarization can be seen as an extreme adaptive fast forward, where some parts are completely removed while other parts are played at original speed.
These techniques are required to have some strategy for determining the importance or relevance of each video segment, as segments removed from the summary are not available for browsing. As long as automatic methods are not endowed with human intelligence, fast forward gives a person the ability to survey all parts of the video. \paragraph*{Video Stabilization:} There are two main approaches for video stabilization. One approach uses $3D$ methods to reconstruct a smooth camera path \cite{content_preserving_warps,vid3d_stab_depth}. Another approach avoids $3D$, and uses only $2D$ motion models followed by non-rigid warps \cite{youtube_stabilizer,subspace_vid_stab,BundledPaths2013,steadyflow,raanan}. A na\"{\i}ve fast forward approach would be to apply video stabilization algorithms before or after uniform frame sampling. As also noted by \cite{hyperlapse}, stabilizing egocentric video does not produce satisfying results. This can be attributed to the fact that uniform sampling, irrespective of whether done before or after the stabilization, is not able to remove outlier frames, e.g., frames in which the camera wearer briefly looks down at his shoe while walking. An alternative approach that was evaluated in \cite{hyperlapse}, termed ``coarse-to-fine stabilization", stabilizes the input video and then prunes some frames from the stabilized video. This process is repeated until the desired playback speed is achieved. Being a uniform sampling approach, this method does not avoid outlier frames. In addition, it introduces significant distortion to the output as a result of repeated application of a stabilization algorithm. EgoSampling differs from traditional fast forward as well as traditional video stabilization. We attempt to adjust frame sampling in order to produce a stable-as-possible fast forward sequence. Rather than stabilizing outlier frames, we prefer to skip them.
While traditional stabilization algorithms must make compromises (in terms of camera motion and crop window) in order to deal with every outlier frame, we have the benefit of choosing which frames to include in the output. Following our frame sampling, traditional video stabilization algorithms \cite{youtube_stabilizer,subspace_vid_stab,BundledPaths2013,steadyflow,raanan} can be applied to the output of EgoSampling to further stabilize the results. \paragraph*{Hyperlapse:} A recent work \cite{hyperlapse}, dedicated to egocentric videos, proposed to use a combination of $3D$ scene reconstruction and image based rendering techniques to produce a completely new video sequence, in which the camera path is perfectly smooth and broadly follows the original path. The results of Hyperlapse are impressive. However, the scene reconstruction and image based rendering methods are not guaranteed to work for many egocentric videos, and the computation costs involved are very high. Hyperlapse may therefore be less practical for day-long videos which need to be processed at home. Unlike Hyperlapse, EgoSampling uses only raw frames sampled from the original video. \section{Proposed Frame Sampling} \label{sec:sampling_framework} Most egocentric cameras are worn on the head or attached to eyeglasses. While this gives an ideal first person view, it also leads to significant shaking of the camera due to the wearer's head motion. Camera shake is stronger when the person is ``in transit" (e.g., walking, cycling, driving, etc.). In spite of the shaky original video, we would prefer consecutive output frames in the fast forward video to have similar viewing directions, almost as if they were captured by a camera moving forward on rails. In this paper we propose a frame sampling technique, which selectively picks frames with similar viewing directions, resulting in a stabilized fast forward egocentric video. See Fig.~\ref{fig:ff-schematic} for a schematic example.
\paragraph*{Head Motion Prior} As noted by \cite{us,kitani,grauman-important-people,ryoo}, the camera shake in an egocentric video, measured as optical flow between two consecutive frames, is far from being random. It contains enough information to recognize the camera wearer's activity. Another observation made in \cite{us} is that when ``in transit'', the mean (over time) of the instantaneous optical flow is always radially away from the Focus of Expansion (FOE). The interpretation is simple: when ``in transit'' (e.g., walking/cycling/driving etc.), our head might be moving instantaneously in all directions (left/right/up/down), but the physical transition between the different locations is done through the forward looking direction (i.e. we look forward and move forward). This motivates us to use a forward orientation sampling prior. When sampling frames for fast forward, we prefer frames looking in the direction in which the camera is translating. \paragraph*{Computation of Motion Direction (Epipole)} Given $N$ video frames, we would like to find the motion direction (Epipolar point) between all pairs of frames, $I_t$ and $I_{t+k}$, where $k \in [1,\tau]$, and $\tau$ is the maximum allowed frame skip. Under the assumption that the camera is always translating (when the camera wearer is ``in transit''), the displacement direction between $I_t$ and $I_{t+k}$ can be estimated from the fundamental matrix $F_{t,t+k}$ \cite{hartley_book}. Frame sampling will be biased towards selecting forward looking frames, where the epipole is closest to the center of the image. Recent V-SLAM approaches such as \cite{lsdslam,svo} provide camera ego-motion estimation and localization in real-time. However, these methods failed on our dataset after a few hundred frames. We decided to stick with robust $2D$ motion models.
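Given a fundamental matrix, the epipole is its (right) null vector, which can be extracted with an SVD. A minimal NumPy sketch, our illustration only; the toy $F$ below is built from a hypothetical known epipole rather than from image matches:

```python
import numpy as np

def epipole_from_F(F):
    """Right epipole e with F @ e = 0, i.e. the image of the other
    camera centre; its offset from the image centre measures how
    forward-looking the frame pair is."""
    _, _, Vt = np.linalg.svd(F)
    e = Vt[-1]          # null vector of F (F has rank 2)
    return e / e[2]     # normalize to inhomogeneous pixel coordinates

def skew(v):
    """Cross-product matrix [v]_x with [v]_x @ w = v x w."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

# Toy check: any F of the form [e']_x @ M has left epipole e',
# so e' is the right null vector of F.T.
e_true = np.array([320.0, 240.0, 1.0])   # hypothetical epipole (pixels)
M = np.random.default_rng(0).normal(size=(3, 3))
F = skew(e_true) @ M
e_est = epipole_from_F(F.T)
print(e_est)  # recovers e_true up to numerical error
```

In the actual pipeline the fundamental matrix comes from feature matching, and the FOE fallback described next is used whenever this estimation fails.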
\begin{comment} \begin{figure}[t] \centering \includegraphics[width=0.8\columnwidth]{figures/fig_fmat_decay.pdf} \caption{The decrease in successful computation of the Fundamental Matrix as the two frames become farther apart temporally. The x-axis is the number of skipped frames between the two frames used for computation.} \label{fig:fmat_decay} \end{figure} \end{comment} \paragraph*{Estimation of Motion Direction (FOE)} We found that the fundamental matrix computation can fail frequently when $k$ (temporal separation between the frame pair) grows larger. Whenever the fundamental matrix computation breaks, we estimate the direction of motion from the FOE of the optical flow. We do not compute the FOE from the instantaneous flow, but from integrated optical flow as suggested in \cite{us} and computed as follows: (i) We first compute the sparse optical flow between all consecutive frames from frame $t$ to frame $t+k$. Let the optical flow between frames $t$ and $t+1$ be denoted by $g_t(x,y)$. (ii) For each flow location $(x,y)$, we average all optical flow vectors at that location from all consecutive frames. $G(x,y) = \frac{1}{k} \sum_{i=t}^{t+k-1} g_i(x,y)$. The FOE is computed from $G$ according to \cite{technion-foe}, and is used as an estimate of the direction of motion. The temporal average of optical flow gives a more accurate FOE since the direction of translation is relatively constant, but the head rotation goes to all directions, back and forth. Averaging the optical flow will tend to cancel the rotational components, and leave the translational components. In this case the FOE is a good estimate for the direction of motion. For a deeper analysis of temporally integrated optical flow see ``Pixel Profiles'' in \cite{steadyflow}. \paragraph*{Optical Flow Computation} Most available algorithms for dense optical flow failed for our purposes, but the very sparse flow proposed in \cite{us} for egocentric videos worked relatively well. 
The fifty optical flow vectors were robust to compute, while still allowing us to locate the FOE quite accurately. \section{Problem Formulation and Inference} \label{sec:problem_formulation} \begin{figure}[t] \centering \includegraphics[width=1\linewidth]{figures/fig-first-order-schematic.pdf} \caption{We formulate the joint fast forward and video stabilization problem as finding a shortest path in a graph constructed as shown. There is a node corresponding to each frame. The edges between a pair of frames $(i,j)$ indicate the penalty for including a frame $j$ immediately after frame $i$ in the output (please refer to the text for details on the edge weights). The edges between source/sink and the graph nodes allow skipping frames from the start and end. The frames corresponding to nodes along the shortest path from source to sink are included in the output video.} \label{fig:first_order_graph} \end{figure} \begin{figure*}[t] \centering \includegraphics[width=0.8\linewidth]{figures/fig-ff-compare-bike02.pdf} \caption{Comparative results for fast forward from na\"{\i}ve uniform sampling (first row), EgoSampling using first order formulation (second row) and using second order formulation (third row). Note the stability in the sampled frames as seen from the tower visible far away (circled yellow). The first order formulation leads to a more stable fast forward output compared to na\"{\i}ve uniform sampling. The second order formulation produces even better results in terms of visual stability.} \label{fig:res_ff_comparison} \end{figure*} We model the joint fast forward and stabilization of egocentric video as an energy minimization problem. We represent the input video as a graph with a node corresponding to every frame in the video. There are weighted edges between every pair of graph nodes, $i$ and $j$, with weight proportional to our preference for including frame $j$ right after $i$ in the output video.
There are three components in this weight: \begin{enumerate} \item Shakiness Cost ($S_{i,j}$): This term prefers forward looking frames. The cost is proportional to the distance of the computed motion direction (Epipole or FOE) from the center of the image. \item Velocity Cost ($V_{i,j}$): This term controls the playback speed of the output video. The desired speed is given by the desired magnitude of the optical flow, $K_{flow}$, between two consecutive output frames. This optical flow is estimated as follows: (i) We first compute the sparse optical flow between all consecutive frames from frame $i$ to frame $j$. Let the optical flow between frames $t$ and $t+1$ be $g_t(x,y)$. (ii) For each flow location $(x,y)$, we sum all optical flow vectors at that location from all consecutive frames. $G(x,y) = \sum_{t=i}^{j-1} g_t(x,y)$. (iii) The flow between frames $i$ and $j$ is then estimated as the average magnitude of all the flow vectors $G(x,y)$. The closer the magnitude is to $K_{flow}$, the lower is the velocity cost. The velocity term samples periods with fast camera motion more densely than periods with slower motion, e.g. it will prefer to skip stationary periods, such as when waiting at a red light. The term additionally brings in the benefit of content aware fast forwarding. When the background is close to the wearer, the scene changes faster compared to when the background is far away. The velocity term reduces the playback speed when the background is close and increases it when the background is far away. \item Appearance Cost ($C_{i,j}$): This is the Earth Movers Distance (EMD) \cite{emd} between the color histograms of frames $i$ and $j$. The role of this term is to prevent large visual changes between frames. A quick rotation of the head or dominant moving objects in the scene can confuse the FOE or epipole computation. The term acts as an anchor in such cases, preventing the algorithm from skipping a large number of frames.
\end{enumerate} The overall weight of the edge between nodes (frames) $i$ and $j$ is given by: \begin{equation} \mathcal{W}_{i,j}=\alpha\cdot S_{i,j}+\beta\cdot V_{i,j}+\gamma\cdot C_{i,j}, \end{equation} where $\alpha$, $\beta$ and $\gamma$ represent the relative importance of the various costs in the overall edge weight. With the problem formulated as above, sampling frames for stable fast forward is done by finding a shortest path in the graph. We add two auxiliary nodes, a \emph{source} and a \emph{sink}, to the graph to allow skipping some frames at the start or end. We add zero-weight edges from the source node to the first $D_{start}$ frames and from the last $D_{end}$ nodes to the sink to allow such skips. We then use Dijkstra's algorithm \cite{dijkstra} to compute the shortest path between source and sink. The algorithm performs optimal inference in time polynomial in the number of nodes (frames). Fig.~\ref{fig:first_order_graph} shows a schematic illustration of the proposed formulation. We note that there are content aware fast forward and other general video summarization techniques which also measure the importance of a particular frame being included in the output video, e.g. based upon visible faces or other objects. In our implementation we have not used any bias for choosing a particular frame in the output video based upon such a relevance measure. However, such a measure could easily be incorporated. For example, if the penalty of including a frame, $i$, in the output video is $\delta_i$, the weights of all the incoming (or outgoing, but not both) edges to node $i$ may be increased by $\delta_i$. \subsection{Second Order Smoothness} \begin{figure}[t] \centering \includegraphics[width=1\linewidth]{figures/fig-higher-order-schematic.pdf} \caption{The graph formulation, as described in Fig.~\ref{fig:first_order_graph}, produces an output with an almost forward looking direction.
However, there may still be large changes in the epipole locations between two consecutive frame transitions, causing jitter in the output video. To overcome this we add a second order smoothness term based on triplets of output frames. Now the nodes correspond to pairs of frames, instead of single frames as in the first order formulation described earlier. There are edges between frame pairs $(i,j)$ and $(k,l)$, if $j=k$. The edge reflects the penalty for including the frame triplet $(i,k,l)$ in the output. Edges from source and sink to graph nodes (not shown in the figure) are added in the same way as in the first order formulation to allow skipping frames from the start and end.} \label{fig:second_order_graph} \end{figure} The formulation described in the previous section prefers to select forward looking frames, where the epipole is closest to the center of the image. With the proposed formulation, it may so happen that the epipoles of the selected frames are close to the image center but on opposite sides, leading to a jitter in the output video. In this section we introduce an additional cost element: the stability of the location of the epipole. We prefer to sample frames with minimal variation of the epipole location. To compute this cost, nodes now represent two frames, as can be seen in Fig.~\ref{fig:second_order_graph}. The weights on the edges depend on the change in epipole location between one image pair and the successive image pair. Consider three frames $I_{t_1}$, $I_{t_2}$ and $I_{t_3}$. Assume the epipole between $I_{t_i}$ and $I_{t_j}$ is at pixel $(x_{ij}, y_{ij})$. The second order cost of the triplet (graph edge) $(I_{t_1},I_{t_2},I_{t_3})$ is proportional to $\|(x_{23}-x_{12}, y_{23}-y_{12})\|$. This is the difference between the epipole location computed from frames $I_{t_1}$ and $I_{t_2}$, and the epipole location computed from frames $I_{t_2}$ and $I_{t_3}$.
This second order cost is added to the previously computed shakiness cost, which is proportional to the distance from the origin, $\|(x_{23}, y_{23})\|$. The graph with the second order smoothness term has all edge weights non-negative, and the running time to find the optimal shortest path is linear in the number of nodes and edges, i.e. $O(n\tau^2)$. In practice, with $\tau=100$, the optimal path was found in all examples in less than 30 seconds. Fig.~\ref{fig:res_ff_comparison} shows results obtained from both the first order and second order formulations. As noted for the first order formulation, in our implementation we do not use an importance measure for a particular frame being added to the output. To add such a measure, say for frame $i$, the weights of all incoming (or outgoing, but not both) edges to all nodes $(i,j)$ may be increased by $\delta_i$, where $\delta_i$ is the penalty for including frame $i$ in the output video. \section{Turning Egocentric Video to Stereo} \label{sec:stereo} \begin{figure}[t] \centering \includegraphics[width=0.8\columnwidth]{figures/fig-schematic-stereo1-all.pdf} \caption{Frame sampling for Stereo: A view from above for the camera path (the line) and the viewing directions of the frames (numbered arrows). The camera wearer walks forward for a couple of seconds. We pick the frames in which the wearer's head is in the rightmost position (frames 1,6,10) and leftmost position (frames 4,8,12) to form stereo pairs. Frame pairs (1,4), (6,8) and (10,12) form the output stereo video.} \label{fig:stereo-formulation} \end{figure} \begin{figure}[t] \centering \subfigure{\includegraphics[height=0.37\linewidth]{figures/fig-stereo-walking5-output.png}} \subfigure{\includegraphics[height=0.37\linewidth]{figures/fig-stereo-walking4-output-frame8.png}} \caption{Two stereo results obtained from our method. The output is shown as an anaglyph composite. Please use cyan and red anaglyph glasses and zoom to 800\% for best view.
Readers without anaglyph glasses may note the observed disparity evident from red separation at various pixels. There is higher disparity and larger red separation on the objects near to observer. Stereo video output for these examples are available in the project page\protect\footnotemark[1].} \label{fig:stereo_walking4} \end{figure} When walking, the head moves left and right as the body shifts its weight from the left leg to the right leg and back. Pictures taken during the shift of the head to the left and to the right can be used to generate stereo egocentric video. For this purpose we would like to generate two stabilized videos: The left video will sample frames taken when the head moved to the left, and the right video will sample frames taken when the head moved to the right. Fig.~\ref{fig:stereo-formulation} gives the schematic approach for generating stereo egocentric videos. For generating the stereo streams we need to determine the head location. We found the following to work well: (i) Average all optical flow vectors in each frame, and keep one scalar describing the average x-shift for that frame. (ii) Compute for each frame the accumulated x-shift of all preceding frames starting from the first frame. The curve of the accumulated x-shift is very similar to the camera path shown in Fig.~\ref{fig:stereo-formulation}. Frames near the left peaks are selected for the left video, and frames near the right peaks are selected for the right video. In perfect stereo pairs the displacement between the two images is a pure sideways translation. In our case we also have forward motion between the two views. The forward motion can disturb stereo perception for objects which are too close, but for objects farther away stereo output produced from the proposed scheme looks good. Fig.~\ref{fig:stereo_walking4} shows frames from a stereo video generated using proposed framework. 
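The left/right frame selection described above amounts to integrating the per-frame mean $x$-shift and picking its local extrema. A minimal sketch with synthetic head sway; in practice `mean_x_flow` would come from the sparse optical flow:

```python
import math

def stereo_frame_indices(mean_x_flow):
    """Select left/right stream frames from the per-frame mean x-shift.

    The accumulated x-shift approximates the sideways head sway; local
    minima feed the left stream, local maxima the right stream.
    """
    # (ii) accumulated x-shift of all preceding frames
    acc, s = [], 0.0
    for dx in mean_x_flow:
        s += dx
        acc.append(s)
    left, right = [], []
    for t in range(1, len(acc) - 1):
        if acc[t] < acc[t - 1] and acc[t] <= acc[t + 1]:
            left.append(t)       # leftmost head position
        elif acc[t] > acc[t - 1] and acc[t] >= acc[t + 1]:
            right.append(t)      # rightmost head position
    return left, right

# Synthetic sway: x-flow is the derivative of a sinusoid,
# one left-right cycle every 30 frames.
flow = [math.cos(2 * math.pi * t / 30) for t in range(90)]
left, right = stereo_frame_indices(flow)
print(left, right)  # one left and one right extremum per cycle
```

Pairing each right-stream frame with the following left-stream frame then yields the stereo pairs of Fig.~\ref{fig:stereo-formulation}.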
\section{Experiments} \label{sec:exp} In this section we give implementation details and show the results for fast forward as well as stereo. We use publicly available sequences \cite{hyperlapse-dataset, youtube-running1, youtube-driving2, ego_social} as well as our own videos (for the stereo only) for the demonstration. We used a modified (faster) implementation of \cite{us} for the LK \cite{lk} optical flow estimation. We use the code and calibration details given by \cite{hyperlapse} to correct for lens distortion in their sequences. Feature point extraction and fundamental matrix recovery is performed using VisualSFM \cite{visualsfm}, with GPU support. The rest of the implementation (FOE estimation, energy terms and shortest path etc.) is in Matlab. All the experiments have been conducted on a standard desktop PC. \subsection{Fast Forward} \renewcommand{\tabcolsep}{0.1cm} \begin{table}[t] \centering \begin{tabular}{lccccc} \toprule[1.5pt] \specialcell{\bf \small Name} & \specialcell{\bf \small Src} &\specialcell{\bf \small Resolution} & \specialcell{\bf \small Camera} & \specialcell{\bf \small Num\\ \bf \small Frames} & \specialcell{\bf \small Lens \\ \bf \small Correction} \\ \midrule \small Walking1 & \cite{hyperlapse-dataset} & \specialcell{\small $1280$x$960$} & \small Hero2 & \small $17249$ & \cmark \\ \small Walking2 & \cite{hyperlapse-dataset} &\specialcell{\small $1280$x$720$} & \small Hero & \small $6900$ & \\ \small Walking3 & \cite{us} & \specialcell{\small $1920$x$1080$} & \small Hero3 & \small $8001$ & \\ \small Driving & \cite{youtube-driving2} & \specialcell{\small $1280$x$720$ } & \small Hero2 & \small $10200$ & \\ % \small Bike1 & \cite{hyperlapse-dataset} & \specialcell{\small $1280$x$960$} & \small Hero3 & \small $10786$ & \cmark \\ \small Bike2 & \cite{hyperlapse-dataset} & \specialcell{\small $1280$x$960$} & \small Hero3 & \small $7049$ & \cmark \\ \small Bike3 & \cite{hyperlapse-dataset} & \specialcell{\small $1280$x$960$} & \small Hero3 & 
\small $23700$ & \cmark \\ \small Running & \cite{youtube-running1} & \specialcell{\small $1280$x$720$} & \small Hero3+ & \small $12900$ & \\ \bottomrule[1.5pt] \\ \end{tabular} \caption{Sequences used for the fast forward algorithm evaluation. All sequences were shot in 30fps, except `Running1' which is 24fps and `Walking3' which is 15fps.} \label{tb:ff_sequences} \end{table} We show results for EgoSampling on $8$ publicly available sequences. The details of the sequences are given in Table \ref{tb:ff_sequences}. For the $4$ sequences for which we have camera calibration information (marked with checks in the `Lens Correction' column), we estimated the motion direction based on epipolar geometry. We used the FOE estimation method as a fallback when we could not recover the fundamental matrix. For this set of experiments we fix the following weights: $\alpha=1000$, $\beta=200$ and $\gamma=3$. We further penalize the use of estimated FOE instead of the epipole with a constant factor $c=4$. In case camera calibration is not available, we used the FOE estimation method only and changed $\alpha=3$ and $\beta=10$. For all the experiments, we fixed $\tau=100$ (maximum allowed skip). We set the source and sink skip to $D_{start}=D_{end}=120$ to allow more flexibility. We set the desired speed up factor to $10\times$ by setting $K_{flow}$ to be $10$ times the average optical flow magnitude of the sequence. We show representative frames from the output for one such experiment in Fig.\ref{fig:res_ff_comparison}. Output videos from other experiments are given in the supplementary material\footnote[1]{http://www.vision.huji.ac.il/egosampling/}. \paragraph*{Running times} The advantage of the proposed approach is in its simplicity, robustness and efficiency. This makes it practical for long unstructured egocentric video. We present the coarse running time for the major steps in our algorithm below. 
The time is estimated on a standard desktop PC, based on the implementation details given above. Sparse optical flow estimation (as in \cite{us}) takes 150 milliseconds per frame. Estimating the F-Mat (including feature detection and matching) between frame $I_t$ and $I_{t+k}$ where $k\in[1,100]$ takes 450 milliseconds per input frame $I_t$. Calculating second-order costs takes 125 milliseconds per frame. This amounts to a total of 725 milliseconds of processing per input frame. Solving for the shortest path, which is done once per sequence, takes up to 30 seconds for the longest sequence in our dataset ($\approx 24K$ frames). In all, the running time is more than an order of magnitude faster than \cite{hyperlapse}. \paragraph*{User Study} We compare the results of EgoSampling, with first and second order smoothness formulations, with na\"{\i}ve fast forward with $10\times$ speedup, implemented by sampling the input video uniformly. For EgoSampling the speed is not directly controlled but is targeted for $10\times$ speedup by setting $K_{flow}$ to be $10$ times the average optical flow magnitude of the sequence. We conducted a user study to compare our results with the baseline methods. We sampled short clips (5-10 seconds each) from the output of the three methods at hand. We made sure the clips start and end at the same geographic location. We showed each of the 35 subjects several randomly chosen pairs of clips, before stabilization. We asked the subjects to state which of the clips is better in terms of stability and continuity. The majority ($75\%$) of the subjects preferred the output of EgoSampling with the first-order shakiness term over the na\"{\i}ve baseline. On top of that, $68\%$ preferred the output of EgoSampling using the second-order shakiness term over the output using the first-order shakiness term.
To evaluate the effect of video stabilization on the EgoSampling output, we tested three commercial video stabilization tools: (i) Adobe Warp Stabilizer, (ii) Deshaker\footnote[2]{http://www.guthspot.se/video/deshaker.htm}, and (iii) Youtube's video stabilizer. We have found that Youtube's stabilizer gives the best results on challenging fast forward videos\footnote[3]{We attribute this to the fact that Youtube's stabilizer does not depend upon long feature trajectories, which are scarce in sub-sampled videos such as ours.}. We stabilized the output clips using Youtube's stabilizer and asked our 35 subjects to repeat the process described above. Again, the subjects favored the output of EgoSampling. \paragraph*{Quantitative Evaluation} \renewcommand{\tabcolsep}{0.1cm} \begin{table} \centering \begin{tabular}{lcccc} \toprule[1.5pt] \specialcell{\bf \small Name} &\specialcell{\bf \small Input\\ \bf \small Frames} &\specialcell{\bf \small Output \\ \bf \small Frames} & \specialcell{\bf \small Median \\ \bf \small Skip} & \specialcell{\bf \small Improvement over \\ \bf \small Na\"{\i}ve $10\times$} \\ \midrule \small Walking1 & \small $17249$ & \small $931$ & \small $17$ & $283\%$ \\ \small Walking2 & \small $6900$ & \small $284$ & \small $13$ & $88\%$ \\ \small Walking3 & \small $8001$ & \small $956$ & \small $4$ & $56\%$ \\ \small Driving & \small $10200$ & \small $188$ & \small $48$ & $-7\%$ \\ %
\small Bike1 & \small $10786$ & \small $378$ & \small $13$ & $235\%$ \\ \small Bike2 & \small $7049$ & \small $343$ & \small $14$ & $126\%$ \\ \small Bike3 & \small $23700$ & \small $1255$ & \small $12$ & $66\%$ \\ \small Running & \small $12900$ & \small $1251$ & \small $8$ & $200\%$ \\ \bottomrule[1.5pt] \\ \end{tabular} \caption{Fast forward results with a desired speedup factor of $10$ using second-order smoothness. We evaluate the improvement as the degree of epipole smoothness in the output video (column $5$). Please refer to the text for details on how we quantify smoothness.
The proposed method gives a substantial improvement over na\"{\i}ve fast forward in all but one test sequence (`Driving'); see Fig.~\ref{fig:ff-failure-driving} for details. Note that one of the weaknesses of the proposed method is the lack of direct control over the speedup factor. Though the desired speedup factor is $10$, the actual frame skip (column $4$) can differ considerably from the target due to the conflicting constraints posed by stabilization.} \label{tb:ff} \end{table} We quantify the performance of EgoSampling using the following measures. We measure the deviation of the output from the desired speedup. We found that measuring the speedup as the ratio between the number of input and output frames is misleading, because one of the features of EgoSampling is taking large skips when the magnitude of the optical flow is low. We therefore measure the effective speedup as the median frame skip. An additional measure is the reduction in epipole jitter between consecutive output frames (or FOE jitter if the F-Matrix cannot be estimated). We differentiate the locations of the epipole temporally. The mean magnitude of the derivative gives the amount of jitter between consecutive frames in the output. We measure the jitter for our method as well as for na\"{\i}ve $10\times$ uniform sampling, and calculate the percentage improvement in jitter over the baseline. Table \ref{tb:ff} shows the quantitative results for frame skip and epipole smoothness; our algorithm reduces jitter substantially. We note that the standard way to quantify video stabilization algorithms is to measure crop and distortion ratios. However, since we jointly model fast forward and stabilization, such measures are not applicable. Alternatively, one could post-process the output video with a standard video stabilization algorithm and measure these factors; better scores would then indicate better input to stabilization, i.e.\ better output from the preceding sampling.
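The jitter measure described above can be stated compactly. A sketch, with hypothetical 2D epipole locations per output frame and one plausible definition of the percentage improvement (the exact formula used in the table is not spelled out in the text):

```python
# Illustrative implementation of the epipole-jitter measure (assumed details:
# 2D epipole coordinates per frame; improvement defined relative to our jitter).

def epipole_jitter(epipoles):
    """Mean magnitude of the temporal derivative of epipole locations."""
    diffs = [
        ((x2 - x1) ** 2 + (y2 - y1) ** 2) ** 0.5
        for (x1, y1), (x2, y2) in zip(epipoles, epipoles[1:])
    ]
    return sum(diffs) / len(diffs)

def improvement_over_baseline(ours, baseline):
    """Percentage reduction in jitter of the baseline relative to our output."""
    return 100.0 * (epipole_jitter(baseline) - epipole_jitter(ours)) / epipole_jitter(ours)
```

For example, a steadily drifting epipole scores low jitter, while one that oscillates across the frame scores high.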
However, most stabilization algorithms rely on feature trajectories and fail on resampled video with large view differences. The only algorithm that succeeded was Youtube's stabilizer, but it does not report these measures. \paragraph*{Limitations} \begin{figure}[t] \centering \includegraphics[width=0.4\columnwidth]{figures/fig-failure-case-driving-frame1_small.jpg} \includegraphics[width=0.4\columnwidth]{figures/fig-failure-case-driving-frame2_small.jpg} \caption{A failure case for the proposed method, showing two sample frames from the sequence. The frame-to-frame optical flow computed for this sequence is misleading: most of the field of view is either far away (effectively at infinity) or inside the car, and in both cases the flow is near zero. However, since the driver shakes his head every few seconds, the average optical flow magnitude is relatively high. The velocity term then causes us to skip many frames until the desired $K_{flow}$ is met, causing large frame skips in the output video. Restricting the maximum frame skip by setting $\tau$ to a small value leads to arbitrary sideways-looking frames being chosen, causing shake in the output video.} \label{fig:ff-failure-driving} \end{figure} One notable difference between EgoSampling and traditional fast forward methods is that the number of output frames is not fixed. To adjust the effective speedup, the user can tune the velocity term by setting different values of $K_{flow}$. It should be noted, however, that not all speedup factors are possible without compromising the stability of the output. For example, consider a camera that toggles between looking straight and looking to the left every $10$ frames. Clearly, any speedup factor that is not a multiple of $10$ will introduce shake into the output. The algorithm chooses an optimal speedup factor which balances the desired speedup against what can be achieved in practice on the specific input. Sequence `Driving' (Figure \ref{fig:ff-failure-driving}) presents an interesting failure case.
Another limitation of EgoSampling is handling long periods in which the camera wearer is static and the camera is therefore not translating. In these cases, both the fundamental matrix and the FOE estimations can become unstable, leading to wrong cost assignments (low penalty instead of high) on graph edges. The appearance and velocity terms are more robust and help reduce the number of outlier (shaky) frames in the output. \subsection{Stereo} \renewcommand{\tabcolsep}{0.1cm} \begin{table}[t] \centering \begin{tabular}{lcccc} \toprule[1.5pt] \specialcell{\bf \small Name} &\specialcell{\bf \small Resolution} &\specialcell{\bf \small Camera} & \specialcell{\bf \small Input\\ \bf \small Frames} &\specialcell{\bf \small Stereo Pairs \\ \bf \small Extracted} \\ \midrule \small Walking1 & $1280$x$960$ & \small Hero2 & $330$ & $20$ \\ \small Walking4 & $1920$x$1080$ & Hero3 & \small $2870$ & $116$ \\ \small Walking5 & $1920$x$1080$ & Hero3 & \small $1301$ & $45$ \\ \bottomrule[1.5pt] \\ \end{tabular} \caption{Sequences used for stereo evaluation. The sequence `Walking1' was shot by \cite{hyperlapse-dataset}. The other two were shot by us.} \label{tb:stereo_sequences} \end{table} \begin{comment} \begin{figure}[t] \centering \subfigure{\includegraphics[height=0.2\linewidth]{figures/fig-stereo-walking5-L-frame_00618_small.jpg}} \subfigure{\includegraphics[height=0.2\linewidth]{figures/fig-stereo-walking5-R-frame_00622_small.jpg}} \subfigure{\includegraphics[height=0.2\linewidth]{figures/fig-stereo-walking5-output.png}} \label{fig:stereo_walking5} \caption{Stereo Results: Successful case. The proposed approach can compute stereo video from really difficult egocentric video. The first two images show the input frame and third the anaglyph composition. Please use red-cyan anaglyph glasses with zoom level 800\% for best view of third frame.
In case glasses are not available, focus on the red shift as an indication of disparity} \end{figure} \end{comment} \begin{figure}[t] \centering \subfigure{\includegraphics[width=0.45\linewidth]{figures/fig-stereo-walking1-output.png}} \caption{Stereo failure case. The proposed framework is challenged by the presence of moving objects and by registration failures. Disparity is rendered incorrectly in the example shown because of registration failures. The image shows the anaglyph composition. Best viewed with red-cyan anaglyph glasses at a zoom level of 800\%.} \label{fig:stereo_walking1} \end{figure} Table \ref{tb:stereo_sequences} describes the sequences used in our experiments on generating stereo video from a monocular egocentric camera. We use the publicly available sequences of \cite{hyperlapse-dataset} as well as sequences we shot ourselves. Fig.~\ref{fig:stereo_walking4} shows some stereo frames generated by our algorithm. Registration failures and the presence of moving objects pose a significant challenge to the proposed stereo generation framework. Objects very close to the wearer also disturb the stereo perception. Fig.~\ref{fig:stereo_walking1} shows one such failure instance, where disparity has been computed incorrectly because of multiple registration failures. \section{Conclusion} \label{sec:concl} We propose a novel frame sampling technique to produce stable fast forward egocentric videos. Instead of the demanding $3$D reconstruction and rendering used by the best existing methods, we rely on a simple computation of the epipole or the FOE. The proposed framework is very efficient, which makes it practical for long egocentric videos. Because of its reliance on simple optical flow, the method can potentially handle difficult egocentric videos, where methods requiring $3$D reconstruction may not be reliable. We have also presented an approach that uses head motion for the generation of stereo pairs.
This turns a nuisance into a feature.\\ \noindent\textbf{Acknowledgement:} This research was supported by Intel ICRI-CI, by Israel Ministry of Science, and by Israel Science Foundation. {\small \bibliographystyle{ieee}
\section{Introduction} Recent efforts to produce atomic clocks in the optical domain have proven so successful that they are now surpassing the performance of the best frequency standards in the microwave domain~\cite{Rosenband2008,Boyd2007,Ludlow2008,Schneider2005}. However, one limitation to further improvements of optical clocks is the performance of ultra-stable lasers, which form the basis for the interrogation of the narrow resonance of an atomic clock~\cite{Quessada2003}. For this reason, much effort has been devoted to improving ultra-stable references in the optical domain. The best frequency references are generally based on the electromagnetic resonance of a mechanically stable macroscopic element. Cryogenic sapphire oscillators (CSO) have long provided the benchmark in the microwave domain by delivering signals with stabilities in the $10^{-16}$ range. They have also proven to be highly beneficial as the basis for ultra-stable reference frequency dissemination~\cite{Chambon2005,Mann2001}. With the use of an optical frequency comb, such microwave signals can be multiplied up to the optical domain, but avoiding significant degradation is extremely difficult. The alternative is to rely upon ultra-stable laser sources at optical wavelengths. In a seminal result, an optical signal with a fractional frequency stability of $3\times 10^{-16}$ at 1\,s was achieved by stabilizing lasers to two 24 cm long \FP\ cavities mounted on a large vibration isolation system~\cite{Young1999}. In this paper, we present a \FP-based ultra-stable source at infrared wavelengths, extended to the deep ultraviolet for use as an interrogation signal of the $^{1}S_{0}$--$^{3}P_{0}$ clock transition in mercury at 265.6\,nm~\cite{Petersen2008,Hachisu2008}. The \FP\ cavity in our case is designed to be highly immune to environmental perturbations such as temperature fluctuations and vibrations. 
The design also seeks to minimize the intrinsic performance limitations imposed by the phenomenon of thermal noise through the selection of fused silica for the mirror substrate in the \FP\ cavity. Although fused silica has already been shown to give rise to lower levels of thermal noise as compared with ultra-low expansion glass (ULE)~\cite{Numata2004,Notcutt2006}, this work represents the first attempt to exploit this in a fully-functional optical frequency reference. The resulting ultra-stable signal in the infrared has been subsequently transferred to the ultraviolet regime by two stages of frequency doubling, which should involve only a modest loss of fidelity of the signal~\cite{Liu2007}. The resulting system is comparatively cheap and robust and operates indefinitely, without the interruptions typically associated with high-maintenance cryogenic systems. In addition, the system is also linked to the LNE-SYRTE fountain primary frequency standards using an optical frequency comb. By comparison against the standard via this link, we demonstrate a noise level and a stability which are significantly lower than those in the best atomic fountains~\cite{Bize2005,Vian2005}. This system therefore provides the means to make absolute frequency measurements of the mercury optical lattice clock limited only by the microwave counterpart. \section{Ultra-stable Infrared Frequency Reference} The core component of this system is a \FP\ cavity comprising two high-finesse mirrors (one flat and one concave with a 500\,mm radius of curvature) optically-contacted to either end of a 100\,mm spacer made from ULE, which is very insensitive to environmental temperature fluctuations. Many previous designs have also used ULE for the mirror substrate, but this imposes a (flicker) thermal noise limit at the level of $\sim 3.7\times 10^{-17}$\,m/$\sqrt{\mathrm{Hz}}$ at 1\,Hz~\cite{Numata2004,Webster2008}. 
In contrast, we have chosen to use fused silica for the mirror substrate in order to exploit a lower intrinsic limit on performance due to thermal noise. The higher mechanical quality factor of fused silica leads to a reduction of this limit to less than $8.6\times 10^{-18}$\,m/$\sqrt{\mathrm{Hz}}$ at 1\,Hz. At this level, the main limitation comes from the thermal noise due to the mirror coatings. We estimate that with a 100\,mm spacer, this limitation amounts to a fractional frequency instability of about $3\times 10^{-16}$. The drawback of using fused silica, which has a comparatively high coefficient of thermal expansion of $5.5\times 10^{-7}$\,K$^{-1}$, is an increase of the cavity's sensitivity to ambient temperature fluctuations. Specifically, thermal expansion of the fused silica substrate leads to differential expansion at the interface between the mirror and spacer and subsequent deformation of the cavity geometry. Finite element modeling indicates that the temperature sensitivity resulting from this effect can be higher than $5\times 10^{-8}$\,K$^{-1}$. At this sensitivity, we must therefore ensure that the temperature fluctuations experienced by the cavity are less than 6\,nK/$\sqrt{\mathrm{Hz}}$ in order to reach the thermal noise limit. \begin{figure} \resizebox{0.5\textwidth}{!}{%
\includegraphics{cavityassembly.eps} } \caption{A sectioned view of the ultra-stable cavity assembly.} \label{fig:cavity} \end{figure} To fulfil this requirement, the cavity is housed inside two nested vacuum enclosures, as depicted in Fig.~\ref{fig:cavity}. The outer vacuum surrounds an inner vacuum chamber which is actively temperature-controlled via a 3-wire temperature measurement of four thermistors in series fed back to four Peltier elements, also in series. Within this stabilized chamber are two polished gold-coated aluminium shields, which provide additional passive isolation of temperature fluctuations.
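As a quick sanity check of the temperature budget above (the requirement follows from dividing the target fractional instability by the modeled sensitivity; values as quoted in the text):

```python
# Numerical check of the temperature-stability budget (values from the text).
target_fractional_instability = 3e-16   # thermal-noise-limited goal
temperature_sensitivity = 5e-8          # K^-1, from finite element modeling

required_temperature_stability = target_fractional_instability / temperature_sensitivity
print(required_temperature_stability)   # ~6e-9 K/sqrt(Hz), i.e. the 6 nK/sqrt(Hz) quoted
```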
The windows used on the inner enclosure are made from BK7 in order to transmit the laser beam while blocking much of the thermal radiation. Figure~\ref{fig:Temperatureresponse} shows that the impulse response measured between the inner vacuum chamber temperature and the cavity frequency is well modeled by a second-order low pass filter with a thermal time constant of about 4 days and an overall sensitivity of about $1.0\times 10^{-7}$\,K$^{-1}$, which is roughly consistent with the finite element modeling mentioned above. The modeling also predicts that the thermal expansion coefficient of the assembly of the ULE spacer and the fused silica mirrors has a zero-crossing at about $-23\,^\circ$C. In principle, the enclosure is designed to allow for operation in this regime, but so far this has not been necessary. By applying this model to the residual temperature fluctuations measured at the actively controlled outer shield, we can infer that the residual fluctuations at the cavity are below the target of 6\,nK/$\sqrt{\mathrm{Hz}}$ for timescales shorter than 1000\,s. \begin{figure} \resizebox{0.5\textwidth}{!}{%
\includegraphics{Temperatureresponse2.eps} } \caption{The response of the temperature on the outer shield (red line) and the frequency of the cavity resonance (squares) to a perturbation of the ambient temperature in the laboratory. The blue line is the frequency response inferred from the temperature data by modeling the two inner stages as a second-order low pass filter (thermal time constant $\sim$ 4 days, fractional frequency sensitivity $\sim 1.0\times 10^{-7}$\,K$^{-1}$).} \label{fig:Temperatureresponse} \end{figure} The other potential cause of fluctuations is the deformation induced by mechanical accelerations of the cavity. The design with respect to vibrational immunity of this system is inspired by much previously reported work, e.g.~\cite{Young1999,Webster2007,Taylor1995,Chen2006}.
In our case, we have chosen to mount the cavity in a vertical configuration such that vertical vibrations at a centrally located mounting point cause approximately equal and opposite strain in the top and bottom half of the cavity structure~\cite{Taylor1995}. Sensitivity to horizontal vibrations, however, is enhanced if there is any misalignment of the optic axis with respect to the mechanical axis. To mitigate this effect, we used finite element modeling (reported elsewhere~\cite{Millo2009b}) to choose an aspect ratio at which the bending of the cavity is balanced by the squeezing due to the Poisson effect, thus giving no tilt to first-order at either mirror under horizontal acceleration. By fixing the spacer length at 100\,mm, the model indicated that this balance is achieved with a spacer diameter of 110\,mm. Once assembled, the sensitivity of the cavity frequency to an imposed acceleration was measured to be less than $3.5\times 10^{-12}$\,/(m\,s$^{-2}$) in the vertical direction and $1.4\times 10^{-11}$\,/(m\,s$^{-2}$) in both horizontal directions~\cite{Millo2009b}. This is the lowest acceleration sensitivity measured for a \FP\ that does not require mechanical tuning. However, the ambient seismic vibrations in the laboratory can be as large as $10^{-5}$\,m\,s$^{-2}$/$\sqrt{\mathrm{Hz}}$ in the 10 to 100\,Hz range, which is still enough to produce perturbations above the thermal noise limit, so the complete assembly has been further mounted on a commercial passive isolation platform (Minus-K) surrounded by a custom made acoustic enclosure. The residual accelerations measured with a seismometer on this isolation platform were less than $1.8\times 10^{-6}$\,m\,s$^{-2}$/$\sqrt{\mathrm{Hz}}$ between 1\,Hz and 100\,Hz, which implies that the residual frequency noise due to vibrations is well below the thermal noise limit. 
The optical readout of the resulting mechanical stability is achieved by locking a commercially available Yb-doped fiber laser (Koheras AdjustiK) to a longitudinal mode of the \FP\ cavity. The laser has intrinsically low frequency noise (linewidth $\sim$ 3\,kHz), and provides radiation at 1062.6\,nm, which is conveniently four times the ultraviolet wavelength of the clock transition in mercury. The finesse of the cavity, as determined by ring-down measurements, is about 850\,000, implying a cavity bandwidth of $\sim$ 1.7\,kHz. Approximately 5\,$\mu$W of the light from the laser is sent to the cavity via a 156\,MHz acousto-optic modulator (AOM). A 57\,MHz resonant electro-optic phase modulator is used to generate sidebands on the optical carrier so as to detect the cavity resonance using the Pound--Drever--Hall locking scheme \cite{Drever1983}. The correction signal is fed back to a fast-tuning VCO (voltage controlled oscillator, model JC09-175LN) that drives the AOM yielding a lock bandwidth of around 500\,kHz. A second loop to achieve extra dynamic range and gain, particularly over longer timescales, is also implemented by feeding back to the piezo and temperature inputs of the laser. Figure \ref{fig:lasernoise} shows that, below a few hundred hertz, the in-loop error signal of the control system and the off-resonance detection noise floor are both less than the thermal noise limit. Comparisons of our system against other similar systems under development have indicated that the resulting laser linewidth is $\sim$200\,mHz and the stability is less than $10^{-15}$ at 1\,s. A more comprehensive account of these findings is reported elsewhere~\cite{Millo2009b}. 
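The quoted cavity bandwidth is consistent with the measured finesse and the free spectral range of the 100\,mm cavity; a quick numerical check with rounded constants:

```python
# Cavity linewidth from finesse: bandwidth = FSR / finesse, FSR = c / (2L).
c = 2.998e8          # speed of light, m/s (rounded)
L = 0.100            # cavity length, m
finesse = 850_000    # from ring-down measurements

fsr = c / (2 * L)            # free spectral range, ~1.5 GHz
bandwidth = fsr / finesse    # ~1.76 kHz, consistent with the ~1.7 kHz quoted
```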
\begin{figure} \resizebox{0.5\textwidth}{!}{%
\includegraphics{phasenoise2.eps} } \caption{Phase noise power spectral density of (a: black trace) the free-running fiber laser, (b: blue trace) the in-loop PDH error signal when the fiber laser is locked to the ultra-stable cavity, (c: red trace) the detection signal when unlocked and off-resonance, i.e., the detection noise floor, (d: gray trace) the DFB laser against the Yb-doped fiber laser when injection-locked, i.e., the in-loop error signal, and (e: cyan trace) the beat between the free-running DFB laser and the fiber laser. Here, the phase is measured at the optical carrier frequency of $2.8\times 10^{14}$\,Hz.} \label{fig:lasernoise} \end{figure} \section{Ultra-stable Ultraviolet Source} In order to extend this reference to the ultraviolet regime, we have implemented two stages of frequency doubling. Our specific motivation for doing this is to provide a probe source for interrogation of the $^{1}S_{0}$\,$\rightarrow$\,$^{3}P_{0}$ transition in mercury at 265.6\,nm, which will serve as the clock transition of a future neutral mercury lattice clock. In order to exploit the very narrow natural linewidth of this transition ($\sim$\,100\,mHz), it is necessary to probe with a highly stable laser source. Furthermore, the ultimate performance of an atomic clock based on this transition would also be limited by the stability of this source~\cite{Quessada2003}. For preliminary investigations, several milliwatts of power were required in order to reach a Rabi frequency of a few kHz over the full extent of the cold atom cloud released from a magneto-optical trap~\cite{Petersen2008}. In the long run, performing high resolution spectroscopy in the dipole lattice trap and running the clock requires much less power. The first step towards generating the 265.6\,nm radiation is to amplify the ultra-stable light at $1062.6$\,nm.
To achieve this, we injection-lock a distributed feedback (DFB) diode laser to a sample of the ultra-stable reference light (shifted by an AOM), yielding about 250\,mW of stable light in the infrared. The alternative of imposing an offset phase-lock loop between the ultra-stable signal and the DFB laser output would provide greater tunability, but sufficient locking bandwidth has proven unachievable due to the signal delay inherent in the semiconductor laser. The frequency noise between the injection-locked DFB laser and the seed fiber laser demonstrates that the integrity of the signal has been preserved through the injection locking process (see Fig.~\ref{fig:lasernoise}). We note that the measurement in Fig.~\ref{fig:lasernoise} is limited by the measurement system (electronic noise, uncompensated free space propagation of laser beams, etc.). The first doubling stage is implemented with a 20\,mm long periodically-poled MgO-doped stoichiometric LiTaO$_3$ crystal (PP-MgO:SLT) positioned within a bow-tie build-up cavity with a 92\,\% input coupler and a waist in the crystal of $36\,\mu$m. The curved mirrors have radii of curvature of 77.5\,mm and are separated by 94\,mm for a total round trip of about 552\,mm. The overall conversion efficiency was measured at 64\,\%, leading to 160\,mW at 531.2\,nm. The second doubling stage uses a similar bow-tie cavity, except with a 7\,mm long angle-tuned $90^{\circ}$-cut anti-reflection coated BBO crystal, a waist of $29\,\mu$m and a 98.4\,\% input coupler. In this case, the curved mirrors have radii of curvature of 100\,mm and are separated by 115\,mm for a total round trip of about 618\,mm. Up to 7\,mW of 265.6\,nm radiation are generated, with the conversion efficiency being limited by the loss in one of the build-up cavity mirrors. Both doubling cavities are locked by modulating the cavity length at 31.5\,kHz through a PZT-driven mirror, which is also used to apply the feedback loop corrections.
This method implies that a small phase modulation is imposed on the ultraviolet output together with a second-order amplitude modulation. To achieve the ultimate accuracy, such as would be required by high performance clock operation, the use of a different locking scheme (such as the H\"{a}nsch--Couillaud scheme~\cite{Haensch1980}) may be required to avoid this modulation. \section{Link to the Ultra-stable Microwave Primary Standard} To enable accurate measurements with this ultra-stable reference, a fraction of the ultra-stable light at 1062.6\,nm is sent through a fiber-link to make a beat against a tooth of a femtosecond optical frequency comb (FOFC) generated by a titanium sapphire (Ti:Sa) femtosecond laser. Phase fluctuations induced in the fiber-link by vibrations or temperature fluctuations are avoided by implementing a phase-stabilization system. At either end of the fiber, an AOM is included to impose a modulation on the phase (AOM1 and AOM2 in Fig.~\ref{fig:Freq_comb_lock}). A semi-reflecting glass plate at the other end of the fiber retro-reflects a fraction of the signal, which is then beaten against the unshifted input. The resulting beat-note is sensitive to phase fluctuations imposed by the fiber-link, and these can therefore be suppressed through feedback to AOM1. Via the repetition rate of the frequency comb, a comparison is possible against the LNE-SYRTE flywheel oscillator~\cite{Chambon2005}, which is monitored by several primary fountain frequency standards. This comparison is made by stabilizing the FOFC to the ultra-stable light as depicted in Fig.~\ref{fig:Freq_comb_lock}. \begin{figure} \resizebox{0.5\textwidth}{!}{%
\includegraphics{microwavecomparison.eps} } \caption{Schematic of the frequency comb measurement.
The repetition rate of the FOFC is locked to the ultra-stable laser and then monitored with respect to the LNE-SYRTE reference.} \label{fig:Freq_comb_lock} \end{figure} Most of the light from the Ti:Sa femtosecond laser is sent through a non-linear photonic crystal fiber, which broadens the spectrum in order that the well-known f--2f collinear self-referencing at 1064\,nm and 532\,nm~\cite{Diddams2000,Jones2000,Jiang2005} can be used to detect the carrier-envelope frequency offset $f_0$. Since the ultra-stable light wavelength is close to the one used for self-referencing, the infrared light at 1062.6\,nm remaining from the f--2f interferometer is used to detect a beat-note, $f_b$, between one of the comb lines and the ultra-stable light. We therefore have $f_b = N\times f_r + f_0 - \nu_L$, where $f_r$ is the repetition rate of the femtosecond laser (about 767\,MHz for our system) and $\nu_L$ is the frequency of the ultra-stable laser. Here, $N$ is a large integer, the index of the comb tooth involved in generating the beat-note. After detection, these two rf signals, $f_b$ and $f_0$, are mixed together to yield a mixing product of $f_c=f_b-f_0=N\times f_r - \nu_L$, which is then filtered and used for controlling the comb. This approach therefore suppresses the carrier-envelope offset from the measurement, removing the need to stabilize it. The control of the FOFC is achieved by acting on both the pump power and the cavity length of the Ti:Sa laser. The pump power is adjusted by diffracting a small fraction of the pump beam with an AOM, while the cavity length is adjusted with a PZT stack in one of the mirror mounts. Our system shows sufficient coupling between the pump power and $f_r$ to allow $f_c$ to be phase-locked to an rf synthesizer by acting on the pump power.
To do this, $f_c$ is frequency divided by $8$ and subsequently mixed with an rf synthesizer at 40\,MHz to generate a phase error signal, which is then fed with proportional and integral gain to the AOM power controller. This, in effect, stabilizes the comb in a phase coherent manner such that $f_c=N\times f_r - \nu_L=320$\,MHz with a bandwidth of up to $\sim 400$\,kHz. A second loop sends an integration of the AOM control signal to the femtosecond laser PZT in order to cope with larger or longer term fluctuations to the cavity length. The FOFC, thus locked to the optical reference, is compared to the LNE-SYRTE 1\,GHz reference signal~\cite{Chambon2005,Chambon2007}. This is done by detecting the 12th harmonic of $f_r$ and by mixing it firstly with a microwave signal at 9.2\,GHz synthesized from the 1\,GHz and secondly with a direct digital synthesizer (DDS) to produce the signal $f_m$ actually used for the measurement. The FOFC thereby acts as a pure frequency divider that generates an ultra-low noise microwave signal from the ultra-stable infrared light, which can then be compared against the LNE-SYRTE reference signal. To measure residual fluctuations, the DDS is initially set to bring $f_m$ to zero and slowly adjusted to suppress any residual drift of the ultra-stable cavity, in order to maintain quadrature as required to perform a phase noise power spectral density measurement. The phase noise measured (shown in Fig.~\ref{fig:phase}) is less than $-80$\,dB(rad$^2$\,Hz$^{-1}$) at a Fourier frequency of 1\,Hz, with a carrier frequency of 9.2\,GHz. At this level, the phase noise is mostly limited by the microwave signal (noise inherent in the 1\,GHz reference signal as delivered through a 300\,m fiber link, plus additive noise from the multiplication chain) and not by the FOFC or the ultra-stable laser. 
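The insensitivity of $f_c$ to the carrier-envelope offset noted above can be illustrated numerically. In the toy values below, the tooth index and laser frequency are made up (only $f_r$ and the 320\,MHz lock point are taken from the text, and $\nu_L$ is chosen by construction so that $f_c$ lands exactly on the lock point):

```python
# Toy illustration of carrier-envelope-offset cancellation in f_c = f_b - f0.
f_r = 767e6                 # repetition rate (approximate value from the text)
N = 367_667                 # hypothetical comb-tooth index (illustrative)
nu_L = N * f_r - 320e6      # toy optical frequency near 282 THz (1062.6 nm)

def f_beat(f0):
    """Beat between tooth N and the laser: f_b = N*f_r + f0 - nu_L."""
    return N * f_r + f0 - nu_L

# Mixing f_b with f0 removes the carrier-envelope offset, whatever its value:
f_c_a = f_beat(f0=20e6) - 20e6
f_c_b = f_beat(f0=75e6) - 75e6
assert f_c_a == f_c_b == 320e6   # f_c = N*f_r - nu_L, independent of f0
# Dividing f_c by 8 gives the 40 MHz compared against the rf synthesizer.
```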
\begin{figure} \resizebox{0.5\textwidth}{!}{%
\includegraphics{phasenoise.eps} } \caption{Phase noise of the ultra-stable signal against the LNE-SYRTE flywheel oscillator via the frequency comb comparison.} \label{fig:phase} \end{figure} In order to measure the long term stability of the ultra-stable signal, the DDS is set to put $f_m$ at 275\,kHz. A $55$\,MHz quartz oscillator is divided by $200$ and phase-locked to this signal with about $400$\,Hz of bandwidth, acting as a frequency-multiplied tracking oscillator. The $55$\,MHz output is counted with a digital phase recorder~\cite{Kramer2001}. From this measurement, we extract the fractional frequency instability between our ultra-stable laser and the LNE-SYRTE dead-time-free microwave reference. Repeated measurements of the ultra-stable laser source have shown highly predictable behavior, with the drift of the cavity against the primary standard measured at around $-50$\,mHz/s at 1062.6\,nm after a couple of months of continuous operation. Figure~\ref{fig:SRAV} shows, in terms of the square root Allan variance, the comparison of the ultra-stable signal against the flywheel signal using the optical lock method. We infer an upper limit on the performance of the ultra-stable reference, knowing that the comparison is limited by the microwave signal at short to medium timescales. The $\sim 1/\tau$ dependence below 30\,s is caused by the imperfect dissemination of the microwave reference between laboratories. Furthermore, we see that the level and shape of the curve at medium timescales is close to the estimated stability of the flywheel~\cite{Chambon2005}. We therefore infer that the fractional instability ranges from less than $6\times 10^{-15}$ at 1\,s to less than $1.5\times 10^{-15}$ above 10\,s. Further measurements of the ultra-stable laser against a similar system have shown that the stability of this laser is indeed less than $10^{-15}$ between 1 and 10 seconds~\cite{Millo2009b}.
\begin{figure} \resizebox{0.5\textwidth}{!}{% \includegraphics{USLvsMolly2.eps} } \caption{Allan deviation of the ultra-stable signal against the LNE-SYRTE flywheel oscillator via the frequency comb comparison.} \label{fig:SRAV} \end{figure} \section{Conclusion} We have constructed an ultra-stable reference in the near infrared with a novel \FP\ design. It has been extended by two stages of frequency doubling to provide one of the most stable frequency references in the deep ultraviolet regime reported to date. This source is referenced back to primary standards at LNE-SYRTE, allowing for the execution of precise and accurate frequency measurements. The potential of this source is exemplified by recent spectroscopy measurements made on the clock transition of mercury~\cite{Petersen2008}, and it is further intended to form a critical part of a future mercury lattice clock. This facility meets all the requirements for the interrogation system required to implement an optical lattice clock based on neutral mercury. \section{Acknowledgements} The authors would like to acknowledge support from SYRTE technical services. SYRTE is Unit\'{e} Mixte de Recherche du CNRS (UMR CNRS 8630). SYRTE is associated with Universit\'{e} Pierre et Marie Curie. This work is partly funded by the cold atom network IFRAF and received partial support from CNES.
\section{Introduction} Streamflow prediction is one of the key tasks for effective water resource management. A number of physics based models have been developed over the decades to model different aspects of the hydrological cycle using physical equations. A major drawback of these models is that they require extensive effort to calibrate for any given geography of interest \cite{Zheng2012,Shen2012}. Moreover, in some cases, we do not have a complete understanding of the underlying physics, which impacts their ability to predict the physical quantities (fluxes) of interest. In recent years, deep learning techniques have shown tremendous success in a number of computer vision and natural language processing applications. These techniques are increasingly becoming popular in earth science applications, including hydrology \cite{Nearing2020,Shen2018,Boyraz2018,Fan2020,Hu2020,Ni2020,Kratzert2018,Kratzert2019,Yang2020,Feng2019,Hu2018,Fu2020,Shen2017}. However, a vast majority of existing research in hydrology relies on off-the-shelf deep learning solutions to model streamflow using weather inputs. While these solutions show promise, their efficacy is limited by their dependence on large amounts of training data. Furthermore, the assumptions made by these techniques are more suited to computer vision and natural language processing applications, which limits their performance in hydrology applications. In this paper, we propose a new physics guided machine learning framework that aims to address the aforementioned issues. Specifically, the proposed framework incorporates physical principles in the network architecture and introduces modifications to existing components such as LSTMs to incorporate assumptions that are applicable to hydrological processes. The hydrological cycle has strong temporal structure, and thus time aware deep learning techniques such as RNNs can be used to model different output variables using weather inputs. 
However, the mapping from weather inputs to variables of interest is very complex, and hence a traditional deep learning approach would require large amounts of data to train. For example, streamflow is connected to weather inputs through a number of inter-connected processes, as shown in Figure \ref{fig1}. Moreover, the hydrological system has states that act as a memory of the system. These states play a significant role in the response of different processes to weather inputs. For example, for a given amount of rainfall, the amount of surface run-off will depend on how much water is already present in the soil. In other words, if the soil is very wet before rainfall occurs, it will lead to more surface run-off compared to the scenario when the soil is dry. Similarly, for any given temperature distribution over the day, the amount of water available through snow melt will depend on how much Snowpack is already present. Hence, a framework that captures these relationships has the potential to perform better than directly mapping weather inputs to streamflow. In this paper, we propose a hierarchical deep learning architecture that explicitly models intermediate states and fluxes to incorporate the physical relationships between different hydrological processes. The efficacy of the proposed framework is evaluated using a 200 year simulation dataset created from the SWAT model. Our preliminary analysis shows the promise of the proposed architecture and provides insights for future directions. \begin{figure}[t] \centering \includegraphics[width=0.9\columnwidth]{Physics.png} \caption{A graphical abstraction of the hydrological cycle.} \label{fig1} \end{figure} \section{Methodology} Given a timeseries of weather inputs, our goal is to predict streamflow for each timestep. The most intuitive architecture would be an RNN with LSTM cells (or other variants) in a many-to-many prediction setup. In other words, a single LSTM is used to directly map weather inputs to streamflow. 
We propose to add intermediate tasks that model fluxes (e.g., Evapotranspiration, surface run-off) and memory states (e.g., Soil Water and Snowpack), which are then fed as input (together with weather inputs) to the final task of streamflow prediction. Figure \ref{fig2} shows the proposed hierarchical deep learning architecture. Each of the intermediate tasks uses its own LSTM to model the corresponding intermediate flux/state. This enables the architecture to model variables that change at different temporal scales using different LSTMs. Another benefit of this formulation is that we can impose physical constraints across these different tasks. For example, a mass conservation budget constraint can be used to ensure that the outputs from different tasks/modules adhere to the water budget equation. Furthermore, memory states (Soil Water and Snowpack) behave very differently from the traditional notion of memory in natural language processing applications (e.g., the gender of the subject). In the case of Soil Water, it does not reset but gradually accumulates and dissipates. For example, Figure \ref{SWProfile} shows the variation of Soil Water over a period of 10 years in the simulation dataset (described later). \begin{figure}[ht] \centering \includegraphics[width=0.9\columnwidth]{SW.png} \caption{Variation of Soil Water over a 10 year period in the simulation dataset.} \label{SWProfile} \end{figure} Similarly, while Snowpack accumulates and dissipates during winter, it resets during summer. Hence, new innovations will be required to capture these states more effectively. In this paper, we present the results and analysis of an initial version of this framework in which we do not introduce the physics based loss function, and only focus on Soil Water and Snowpack as intermediate variables to aid the estimation of streamflow from weather inputs. Furthermore, the intermediate tasks were learned separately instead of being learned simultaneously with the main task of streamflow prediction. 
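The hierarchical wiring described above can be sketched in PyTorch. This is a minimal illustration, not the authors' implementation: the class and layer names are assumptions, and only the overall structure (separate LSTMs for Soil Water and Snowpack whose predictions are concatenated with the weather inputs and fed to a final streamflow LSTM) follows the text.

```python
import torch
import torch.nn as nn

class HierarchicalStreamflowModel(nn.Module):
    """Sketch of the hierarchical architecture: separate LSTMs first predict
    the memory states (Soil Water, Snowpack) from weather inputs; their
    predictions are concatenated with the weather inputs and fed to a final
    streamflow LSTM. Names and sizes are illustrative assumptions."""

    def __init__(self, n_weather=6, hidden=28):
        super().__init__()
        self.soil_lstm = nn.LSTM(n_weather, hidden, batch_first=True)
        self.snow_lstm = nn.LSTM(n_weather, hidden, batch_first=True)
        self.flow_lstm = nn.LSTM(n_weather + 2, hidden, batch_first=True)
        self.soil_head = nn.Linear(hidden, 1)
        self.snow_head = nn.Linear(hidden, 1)
        self.flow_head = nn.Linear(hidden, 1)

    def forward(self, weather):                          # (batch, T, n_weather)
        soil = self.soil_head(self.soil_lstm(weather)[0])  # (batch, T, 1)
        snow = self.snow_head(self.snow_lstm(weather)[0])  # (batch, T, 1)
        x = torch.cat([weather, soil, snow], dim=-1)
        flow = self.flow_head(self.flow_lstm(x)[0])
        return flow, soil, snow

model = HierarchicalStreamflowModel()
flow, soil, snow = model(torch.randn(4, 180, 6))         # 180-day sequences
```

Per-timestep outputs for the intermediate tasks make it straightforward to later add a water-budget penalty tying the task outputs together, as discussed above.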
For modeling Soil Water, we introduced a variation to improve the prediction performance. Specifically, instead of using just weather variables during training, we also provided the value of Soil Water on the starting day of the sample. This initial constant value (replicated for the other timesteps in the sequence to maintain input dimensions) avoids a cold start of the hidden and cell states and thus improves the temporal modeling of Soil Water changes. During the prediction phase, we use the predicted value from the previous sample as the initial value for the next sample. Note that this is one of the ways in which physical concepts can be used to aid machine learning algorithms. As part of future work, we aim to develop a new LSTM architecture that is more suitable for modeling these memory states in physical systems. \begin{figure}[t] \centering \includegraphics[width=0.9\columnwidth]{Framework2.png} \caption{Physics guided deep learning architecture for estimating streamflow.} \label{fig2} \end{figure} \section{Dataset} In this paper, we demonstrate the utility of the proposed architecture using a simulation dataset generated by the SWAT model. Specifically, we created 200 years of simulation from SWAT, which takes 6 weather variables as input (precipitation, minimum day temperature, maximum day temperature, solar radiation, relative humidity, and wind speed) and generates different fluxes and states as output. The model was set up for a watershed in Southwest Minnesota, as shown in Figure \ref{fig3}. The weather variables were generated for this region using the weather generator module which is part of the SWAT model. The main goal of the paper is to show the utility of the proposed architecture in emulating the SWAT model. The evaluation of the proposed framework using real-world streamflow data will be pursued in future work. 
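The initial-value conditioning for Soil Water described in the Methodology above can be sketched as a simple input transformation; `predict_sw` in the comment is a hypothetical model call, used only to show how samples are chained at prediction time.

```python
import numpy as np

def add_initial_state(weather_seq, sw_init):
    """Append the Soil Water value at the start of the sample as an extra
    input channel, replicated across all timesteps (as described above, this
    avoids a cold start of the LSTM hidden and cell states).
    weather_seq: (T, n_features) array; sw_init: scalar."""
    T = weather_seq.shape[0]
    init_col = np.full((T, 1), float(sw_init))
    return np.concatenate([weather_seq, init_col], axis=1)

# During prediction, samples are chained: the Soil Water prediction at the end
# of one sample seeds the next (predict_sw is a hypothetical model call):
#   x0 = add_initial_state(seq0, observed_sw)
#   x1 = add_initial_state(seq1, predict_sw(x0)[-1])
seq = add_initial_state(np.zeros((180, 6)), sw_init=42.0)
```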
\begin{figure}[ht] \centering \includegraphics[width=0.9\columnwidth]{Dataset.jpg} \caption{The geographical location of the watershed.} \label{fig3} \end{figure} \section{Experimental Setup} To evaluate the framework, we use the first 120 years of the simulation data for training and test on the last 80 years. To ensure robust evaluation, we also evaluate the different algorithms on the first 80 years while training on the last 120 years. All three LSTMs in our framework (one each for Snowpack, Soil Water and Streamflow) were chosen to have 180 days as sequence length and 28 hidden features. The learning rate was chosen to be 0.001. These hyper-parameters were chosen based on their performance. We calculate two different error metrics to evaluate the proposed framework, namely RMSE (Root Mean Squared Error) and NSE (Nash Sutcliffe Efficiency). NSE is a widely used metric in the hydrology community. It is defined as follows: \begin{equation} NSE = 1 - \frac{\sum_{t = 1}^{T} ( Q_t - \hat{Q}_t )^2}{\sum_{t = 1}^{T} ( Q_t - \bar{Q} )^2} \end{equation} where $Q_t$ is the reference streamflow at timestep $t$, $\hat{Q}_t$ is the predicted streamflow, and $\bar{Q}$ is the mean of the reference streamflow. We also compare the LSTM architecture with a CNN based architecture that has been shown to perform well for the streamflow monitoring task \cite{10.3389/frwa.2020.00028} (henceforth referred to as TCNN). \section{Results} Table \ref{table1} shows the RMSE and NSE values for the different model configurations. First, the model that uses the CNN architecture (TCNN) performs worse than the model with the traditional LSTM architecture (LSTM-No Physics), which suggests that for this dataset, LSTM is better able to capture the temporal dependencies needed to predict streamflow. Among the different LSTM architectures, the configuration where no physics is used (i.e. 
a single LSTM is used to directly learn the mapping between weather inputs and streamflow) gives the lowest performance (RMSE = 0.78 and NSE = 0.57), whereas the configuration that uses both the Snowpack and Soil Water modules (which are fed as input to the streamflow module) performs much better (RMSE = 0.45 and NSE = 0.76). Hence, explicitly modeling the intermediate states reduces the complexity of mapping weather inputs to streamflow directly, and thus improves performance. Figure \ref{fig4} shows the timeseries for one of the years in the test data (year 128). As we can see, the proposed architecture was able to improve the performance on peak values and also reduced the spurious low streamflow values. \begin{table}[ht] \caption{Impact of modeling intermediate states as part of the architecture. SW stands for Soil Water and SN stands for Snowpack.}\smallskip \centering \smallskip\begin{tabular}{l|c|c} & \multicolumn{2}{c}{RMSE}\\ & Train & Test \\ & (first 120 years) & (last 80 years) \\ \hline TCNN & 1.03 (0.20) & 1.14 (0.08)\\ LSTM-No Physics & 0.56 (0.01) & 0.72 (0.03)\\ LSTM-SW & 0.4 (0.02) & 0.54 (0.02)\\ LSTM-SW and SN & 0.34 (0.01) & 0.5 (0.01)\\ \hline \\ & \multicolumn{2}{c}{RMSE}\\ & Train & Test \\ & (last 120 years) & (first 80 years) \\ \hline TCNN & 0.90 (0.16) & 1.02 (0.07)\\ LSTM-No Physics & 0.56 (0.03) & 0.7 (0.02)\\ LSTM-SW & 0.4 (0.01) & 0.54 (0.03)\\ LSTM-SW and SN & 0.34 (0.01) & 0.49 (0.01)\\ \hline \\ & \multicolumn{2}{c}{NSE} \\ & Train & Test\\ & (first 120 years) & (last 80 years) \\ \hline TCNN & 0.48 (0.09) & 0.45 (0.06)\\ LSTM-No Physics & 0.66 (0.02) & 0.62 (0.02) \\ LSTM-SW & 0.75 (0.02) & 0.71 (0.02) \\ LSTM-SW and SN & 0.76 (0.02) & 0.73 (0.02) \\ \hline \\ & \multicolumn{2}{c}{NSE} \\ & Train & Test\\ & (last 120 years) & (first 80 years) \\ \hline TCNN & 0.56 (0.06) & 0.45 (0.04)\\ LSTM-No Physics & 0.68 (0.01) & 0.6 (0.01) \\ LSTM-SW & 0.75 (0.01) & 0.7 (0.01) \\ LSTM-SW and SN & 0.79 (0.01) & 0.73 (0.02) \\ \end{tabular} 
\label{table1} \end{table} \begin{figure}[t] \centering \includegraphics[width=0.95\columnwidth]{results_timeseries2.png} \caption{An illustrative example of the prediction performance of different architecture configurations. The black diamonds represent streamflow values simulated by SWAT, which are used as the reference in our experiments to emulate SWAT. The green circles represent predictions from a traditional LSTM based RNN architecture. The red line represents predictions from our initial version of the proposed architecture that models both Snowpack and Soil Water as intermediate tasks.} \label{fig4} \end{figure} \subsection{Physical Interpretation of LSTM Features} In order to gain physical insights, we compared Snowpack and Soil Water with hidden features from an LSTM that was used to directly learn the mapping between weather inputs and streamflow. In other words, this architecture did not have any knowledge about Snowpack and Soil Water. To this end, we calculated the correlation between the timeseries of different hidden features and Soil Water (and Snowpack). The hidden features that showed very high correlation were selected for visualization. As we can see from Figure \ref{fig5} and Figure \ref{fig6}, the LSTM automatically learns features that correspond to the memory states, which suggests that even a traditional LSTM architecture has the ability to capture these physical states. However, the agreement between the hidden features and the physical states is not very high, which suggests that much more training data would be required to model these states accurately without any explicit modeling. Another possible reason could be that a traditional LSTM might not be suitable for capturing these physical states. For example, in the case of Snowpack, the decay pattern of the hidden feature is much more gradual than that of Snowpack. Hence, new variations of LSTM might be required to effectively capture these physical states. 
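The feature-matching procedure described above (correlating each hidden-feature timeseries with a physical state and selecting the best match) can be sketched as follows. The data here are synthetic with a planted correlated feature, purely for illustration; they are not the paper's simulation outputs.

```python
import numpy as np

def best_matching_feature(hidden, state):
    """Return the index and absolute Pearson correlation of the hidden
    feature most correlated with a physical state timeseries.
    hidden: (T, n_hidden) LSTM hidden-state output; state: (T,) array."""
    corrs = [abs(np.corrcoef(hidden[:, j], state)[0, 1])
             for j in range(hidden.shape[1])]
    j = int(np.argmax(corrs))
    return j, corrs[j]

# Toy check with a planted feature (illustrative data, not the paper's):
T, H = 365, 28
rng = np.random.default_rng(1)
state = np.cumsum(rng.normal(size=T))              # slowly varying memory state
hidden = rng.normal(size=(T, H))
hidden[:, 24] = 0.9 * state + rng.normal(size=T)   # plant a correlated feature
j, r = best_matching_feature(hidden, state)
```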
\begin{figure}[ht] \centering \includegraphics[width=0.95\columnwidth,height=0.4\columnwidth]{sw_lstm.png} \caption{Comparison of Soil Water and hidden feature 24 for year 150 in our simulation dataset. The red line represents the Soil Water value (right Y-axis) and the blue line represents the hidden feature (left Y-axis).} \label{fig5} \end{figure} \begin{figure}[ht] \centering \includegraphics[width=0.95\columnwidth,height=0.4\columnwidth]{sno_lstm.png} \caption{Comparison of Snowpack and hidden feature 4 for year 150 in our simulation dataset. The red line represents the Snowpack value (right Y-axis) and the blue line represents the hidden feature (left Y-axis).} \label{fig6} \end{figure} \section{Summary and Future Work} In this paper, we presented a physics guided deep learning architecture to improve performance on the task of predicting streamflow from weather variables. The key idea is to model intermediate states and fluxes of the hydrological cycle explicitly in the model architecture. The hierarchical approach allows different processes that change at different temporal scales to be learned using different LSTMs, and thus improves their modeling. The results on the simulation data using a preliminary version of the proposed architecture demonstrate the utility of the hierarchical approach. The visual analysis of hidden features from a basic architecture suggests that even a purely data driven architecture automatically tries to learn the memory states, which confirms their importance for improving performance. For future work, we will evaluate different versions of the proposed architecture. For example, in the current version, the individual tasks were trained separately. We aim to train all tasks simultaneously so that errors in the higher level task can inform the training of the lower level tasks. 
In order to introduce a mass conservation based loss function, other relevant tasks would be added such that all the components of the equation are available during training. Finally, the efficacy of the proposed framework will be tested on real world streamflow observations. \section{Acknowledgments} This work is funded by NSF HDR grants 1934721 and 1934548. Access to computing facilities was provided by the Minnesota Supercomputing Institute.
\section{Introduction} Complex Hessian equations are important examples of fully non-linear second order PDEs on complex manifolds. They interpolate between (linear) complex Poisson equations ($m = 1$) and (non-linear) complex Monge-Amp\`ere equations ($m = n$). They appear in many geometric problems, including the $J$-flow \cite{SW} and quaternionic geometry \cite{AV}, and have attracted the attention of many researchers in recent years, as we will mention below. \subsection{Statement of the problem} Let $\Omega \Subset \mathbb{C}^n$ be a bounded domain and $1 \leq m \leq n$ a fixed integer. We consider the following general Dirichlet problem for the complex $m$-Hessian equation: \smallskip \smallskip {\it The Dirichlet problem:} Let $g \in \mathcal{C}^{0} (\partial \Omega)$ be a continuous function (the boundary data) and $\mu$ be a positive Borel measure on $\Omega$ (the right hand side). The problem is to find a necessary and sufficient condition on $\mu$ such that the following problem admits a solution: \begin{equation}\label{eq:DirPb} \left\{\begin{array}{lcl} U \in \mathcal{SH}_m (\Omega) \cap \mathcal {C}^{0} ({\Omega}) \\ (dd^c U)^m \wedge \beta^{n - m} = \mu &\hbox{on}\ \Omega \, \, \, \, \, \, (\dag)\\ U_{\mid \partial \Omega} = g & \hbox{on}\ \partial \Omega \, \, \, \, (\dag \dag) \end{array}\right. \end{equation} The equation $(\dag)$ must be understood in the sense of currents on $\Omega$, as will be explained in section $2$. The equality $(\dag \dag)$ means that $\lim_{z \to \zeta} U (z) = g (\zeta)$ for any point $\zeta \in \partial \Omega$. 
Recall that for a real function $u \in \mathcal{C}^2 (\Omega)$ and each integer $1 \leq k \leq n$, we denote by $\sigma_k (u)$ the continuous function defined at each point $z \in \Omega$ as the $k$-th symmetric polynomial of the eigenvalues $\lambda (z) := (\lambda_1 (z), \cdots \lambda_n(z))$ of the complex Hessian matrix $ \left(\frac{\partial^2 u }{\partial z_j \partial \bar{z}_k} (z)\right)$ of $u$ i.e. $$ \sigma_k (u) (z) := \sum_{1 \leq j_1 < \cdots < j_k \leq n} \lambda_{j_1} (z) \cdots \lambda_{j_k} (z), \, \, \, \, z \in \Omega. $$ We say that a real function $u \in \mathcal{C}^2 (\Omega)$ is $m$-subharmonic on $\Omega$ if for any $1 \leq k \leq m$, we have $\sigma_k (u) \geq 0$ pointwise on $\Omega$. For $m= 1$, $\sigma_1 (u) = (1 \slash 4) \Delta u$ and for $m = n$, $\sigma_n (u) = \mathrm{det} \left(\frac{\partial^2 u }{\partial z_j \partial \bar{z}_k} (z)\right).$ Therefore $1$-subharmonic means subharmonic and $n$-subharmonic means plurisubharmonic. As observed by Z. B\l ocki (\cite{Bl05}), it is possible to define a general notion of $m$-subharmonic functions using the theory of $m$-positive currents (see section 2). Moreover it is possible to define the $k$-Hessian measure $(dd^c u)^k \wedge \beta^{n - k}$ when $1 \leq k \leq m$ for any (locally) bounded $m$-subharmonic function $u$ on $\Omega$ (see section 2). When $\mu = 0$, the Dirichlet problem (\ref{eq:DirPb}) can be solved using the Perron method as for the complex Monge-Amp\`ere equation (see \cite{Bl05}, \cite{Ch16a}). When $g = 0$ and $\mu$ is a positive Borel measure on $\Omega$, the Dirichlet problem is much more difficult. A necessary condition for the existence of a solution to (\ref{eq:DirPb}) is the existence of a subsolution. Therefore a particular case of the Dirichlet problem (\ref{eq:DirPb}) we are interested in can be formulated as follows. \smallskip {\it The H\"older continuous subsolution problem :} Let $\mu$ be a positive Borel measure on $\Omega$. 
Assume that there exists a function $\varphi \in \mathcal{SH}_m (\Omega) \cap \mathcal{C}^{\alpha} (\bar{\Omega})$ satisfying the following condition: \begin{equation} \label{eq:subsolution} \mu \leq (dd^c \varphi)^m \wedge \beta^{n - m}, \, \, \mathrm{on} \, \, \, \, \Omega, \, \, \, \mathrm{and} \, \, \varphi_{\mid \partial \Omega} = 0. \end{equation} 1. Does the Dirichlet problem (\ref{eq:DirPb}) admit a H\"older continuous solution $U_{\mu,g}$ for any boundary data $g$ which is H\"older continuous on $\partial{\Omega}$? 2. In this case, is it possible to estimate precisely the H\"older exponent of the solution $U_{\mu,g}$ in terms of the H\"older exponents of $\varphi$ and $g$? \smallskip Our goal in this paper is to answer the first question on the existence of a H\"older continuous solution and to give an explicit lower bound on the H\"older exponent of the solution in terms of the H\"older exponent of the subsolution when the measure $\mu$ has finite total mass. \subsection{Known results} There have been many articles on the subject. We will only mention those that are relevant to our study and closely related to our work. The terminology used below will be defined in the next section. Assume that $\Omega$ is a smooth strongly $m$-pseudoconvex domain. When the boundary data $g$ is smooth and the right hand side $\mu = f \lambda_{2n}$ is a measure with a smooth positive density $f > 0$, S.Y. Li proved in \cite{Li04} that the problem has a unique smooth solution. Later, Z. B\l ocki introduced the notion of weak solution and solved the Dirichlet problem for the homogeneous Hessian equation in the unit ball in $\mathbb{C}^n$ (\cite{Bl05}). When the density $0 \leq f \in L^p (\Omega)$ with $p > n \slash m$, Dinew and Ko\l odziej proved the existence of a continuous solution (\cite{DK14}). 
Assuming moreover that $g$ is H\"older continuous on $\bar \Omega$, Ngoc Cuong Nguyen proved the H\"older continuity of the solution under an additional assumption on the density $f$ (\cite{N14}). The general case was considered in \cite{BKPZ16} and \cite{Ch16}. \smallskip On the other hand, S. Ko\l odziej \cite{Kol05} proved that the Dirichlet problem has a bounded plurisubharmonic solution if (and only if) it has a bounded subsolution with zero boundary values. This is known as the bounded subsolution theorem for plurisubharmonic functions. The same result was proved for the Hessian equation by Ngoc Cuong Nguyen in \cite{N13}. The H\"older continuous subsolution problem stated above has attracted a lot of attention in recent years and was formulated in \cite{DGZ16} for the complex Monge-Amp\`ere equation. \smallskip It has been solved for the complex Monge-Amp\`ere equation by Ngoc Cuong Nguyen in \cite{N18a,N18b}. Recently, S. Ko\l odziej and Ngoc Cuong Nguyen solved the H\"older subsolution problem for the Hessian equation under the restrictive assumption that the measure $\mu$ is compactly supported in $\Omega$ (see \cite{KN18}, \cite{KN19}). \subsection{Main new results} In this paper we solve the H\"older continuous subsolution problem for Hessian equations when $\mu$ is any positive Borel measure with finite mass on $\Omega$. Our first main result gives a new comparison inequality which will be applied to positive Borel measures without restriction on their support. \smallskip \smallskip {\bf Theorem A}.{ \it Let $\Omega \Subset \mathbb{C}^n$ be a bounded strongly $m$-pseudoconvex domain. Let $\varphi\in \mathcal{SH}_m(\Omega)\cap \mathcal{C}^{\alpha}(\overline\Omega)$ with $0 < \alpha \leq 1$ such that $\varphi = 0$ on $\partial \Omega$. 
Then for any $0 < r < m \slash (n-m)$, there exists a constant $A>0$ such that for every compact $K\subset\Omega$, $$ \int_K (dd^c\varphi)^m\wedge\beta^{n-m} \leq A \, \left(\left[\mathrm{Cap}_m (K,\Omega)\right]^{1 + \epsilon} + \left[\mathrm{Cap}_m (K,\Omega)\right]^{1 + m\epsilon}\right), $$ where $\epsilon := \frac{\alpha r}{(2-\alpha) m + \alpha} > 0$. } \smallskip \smallskip The capacity $\mathrm{Cap}_m (K,\Omega)$ will be defined in the next section. The constant $A$ in the theorem is explicit (see formula (\ref{eq:finalConst})). Observe that the most relevant case in the application of this inequality is when $\mathrm{Cap}_m (K,\Omega) \leq 1$; in this case the relevant exponent is $\tau = 1 + \frac{\alpha r}{(2-\alpha) m + \alpha}$. Theorem A substantially improves a recent result of \cite{KN19}, which proves an estimate of this kind only when the compact set $K \subset \Omega'$ is contained in a fixed open set $\Omega' \Subset \Omega$, i.e. when $K$ stays away from the boundary of $\Omega$. When $m=n$ a better estimate was obtained in \cite{N18a} using the exponential integrability of plurisubharmonic functions, which fails when $m < n$. \smallskip As a consequence of Theorem A, we deduce the following result, which solves the H\"older continuous subsolution problem. \smallskip \smallskip {\bf Theorem B}. { \it Let $\Omega \Subset \mathbb{C}^n$ be a bounded strongly $m$-pseudoconvex domain and $\mu$ a positive Borel measure on $\Omega$ with finite mass. Assume that there exists $\varphi\in \mathcal{E}^0_m(\Omega)\cap\mathcal{C}^{\alpha}(\overline\Omega)$ with $0 < \alpha \leq 1$ such that \begin{equation} \label{eq:subsol} \mu \leq (dd^c\varphi)^m\wedge\beta^{n-m}, \, \, \, \mathrm{weakly \, \, on} \, \, \Omega, \, \, \mathrm{and} \, \, \, \varphi\mid_{\partial \Omega} \equiv 0. 
\end{equation} Then for any boundary datum $g$ H\"older continuous on $\partial \Omega$, the Dirichlet problem (\ref{eq:DirPb}) admits a unique solution $U = U_{g,\mu}$ which is H\"older continuous on $\bar{\Omega}$. More precisely, 1) if $g \in \mathcal C^{1,1} (\partial \Omega)$, then $U \in \mathcal{C}^{\alpha'}(\overline\Omega)$ for any $0 < \alpha' < 2 \gamma (m,n,\alpha) \frac{\alpha^m}{2^{m}}$, where \begin{equation} \label{eq:gamma'} \gamma (m,n,\alpha) := \frac{ m \alpha}{ m (m + 1) \alpha + (n-m) [(2 - \alpha)m + \alpha]}, \end{equation} 2) if $g \in \mathcal C^{2 \alpha} (\partial \Omega)$, then $U \in \mathcal{C}^{\alpha''}(\overline\Omega)$ for any $0 < \alpha'' < \gamma' (m,n,\alpha) \frac{\alpha^m}{2^{m}}$, where $$ \gamma' (m,n,\alpha) := \frac{\alpha}{ m (m + 1) \alpha + (n-m) [(2 - \alpha)m + \alpha]}\cdot $$ } \smallskip \smallskip Recall that, by definition, when $\alpha = 1 \slash 2$, $g \in \mathcal C^1 (\partial \Omega)$ means that $g$ is Lipschitz, and when $1 \slash 2 < \alpha \leq 1$ and $2 \alpha = 1 + \theta$ with $0 < \theta \leq 1$, $g \in \mathcal{C}^{2 \alpha} (\partial \Omega)$ means that $g \in \mathcal{C}^1 (\partial \Omega)$ and $\nabla g$ is H\"older continuous of exponent $\theta$ on $\partial \Omega$. \smallskip Let us give a rough idea of the proofs of these results. \smallskip {\it Idea of the proof of Theorem A:} The general idea of the proof is inspired by \cite{KN19}. However, since our measure is neither compactly supported nor of finite mass, we need to control the behaviour of the $m$-Hessian measure of $\varphi$ close to the boundary. This will be done in several steps in sections 3 and 4. - The first step is to estimate the mass of the $m$-Hessian measure $\sigma_m (\varphi)$ of a H\"older continuous $m$-subharmonic function $\varphi$ in terms of its regularization $\varphi_\delta$ on any compact set in $\Omega_\delta$. 
This requires considering the $m$-subharmonic envelope of $\varphi_\delta$ on $\Omega$ and providing a precise control on its $m$-Hessian measure (see Theorem \ref{thm:obstacle}). - The second step is to estimate the mass of $\sigma_m (\varphi)$ on a compact set close to the boundary in terms of its Hausdorff distance to the boundary (see Lemma \ref{lem:ComparisonIneq}). \smallskip \smallskip {\it Idea of the proof of Theorem B:} The proof will be in two steps. - The first step relies on a standard method which goes back to \cite{EGZ09} (see also \cite{GKZ08}) in the case of the complex Monge-Amp\`ere equation. This method consists in proving a semi-stability inequality estimating $ \sup_{\Omega} (v-u)_+ $ in terms of $\Vert (v-u)_+\Vert_{L^{1} (\Omega,\mu)}$, where $u$ is the bounded $m$-subharmonic solution to the Dirichlet problem (\ref{eq:DirPb}) and $v$ is any bounded $m$-subharmonic function with the same boundary values as $u$, under the assumption that the measure $\mu$ is dominated by the $m$-Hessian capacity with an exponent $\tau > 1$ (see Definition \ref{def:cap-domination}). - The second step uses an idea which goes back to \cite{DDGKPZ15} in the setting of compact K\"ahler manifolds (see also \cite{GZ17}). It has also been used in the local setting in \cite{N18a} and \cite{KN19}. It consists in estimating the $L^1 (\mu)$-norm of $v - u$ in terms of the $L^1 (\lambda_{2 n})$-norm of $(v-u)$, where $u$ is the bounded solution to the Dirichlet problem and $v$ is a bounded $m$-subharmonic function on $\Omega$ close to the regularization $ u_\delta$ of $u$. This step requires that the measure $\mu$ is well dominated by the $m$-Hessian capacity, which is precisely the content of our Theorem A. Then using the Poisson-Jensen formula as in \cite{GKZ08}, we see that the $L^1$-norm of $(u_\delta-u)$ is $O (\delta)$ (see Lemma \ref{lem:Poisson-Jensen}) and Lemma \ref{lem:sup-mean} allows us to finish the proof. 
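As a concrete illustration of the pointwise criterion from Section 1.1 (for $u \in \mathcal{C}^2$, $m$-subharmonicity means $\sigma_k (u) \geq 0$ pointwise for $1 \leq k \leq m$), the following small numerical sketch, illustrative only and not part of the arguments of this paper, checks the criterion on the classical example $u(z) = |z_1|^2 - |z_2|^2$ in $\mathbb{C}^2$, whose complex Hessian is $\mathrm{diag}(1,-1)$:

```python
import numpy as np
from itertools import combinations

def sigma_k(eigs, k):
    """k-th elementary symmetric polynomial of the eigenvalues."""
    return sum(np.prod(c) for c in combinations(eigs, k))

def is_m_subharmonic_at(hessian, m):
    """Check sigma_k >= 0 for k = 1..m at a point, given the complex
    Hessian matrix there (the pointwise criterion for C^2 functions)."""
    eigs = np.linalg.eigvalsh(hessian)
    return all(sigma_k(eigs, k) >= 0 for k in range(1, m + 1))

# u(z) = |z_1|^2 - |z_2|^2 has complex Hessian diag(1, -1):
# sigma_1 = 0 >= 0 but sigma_2 = -1 < 0, so u is 1-subharmonic
# (harmonic, in fact) but not 2-subharmonic (not plurisubharmonic).
H = np.diag([1.0, -1.0])
print(is_m_subharmonic_at(H, 1), is_m_subharmonic_at(H, 2))
```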
\section{Preliminary results} In this section, we recall the basic properties of $m$-subharmonic functions and some results we will use throughout the paper. \subsection{Hessian potentials} For a hermitian $n \times n$ matrix $a = (a_{j,\bar k})$ with complex coefficients, we denote by $\lambda_1, \cdots, \lambda_n$ the eigenvalues of the matrix $a$. For any $1 \leq k \leq n$ we define the $k$-th trace of $a$ by the formula $$ s_k (a) := \sum_{1 \leq j_1 < \cdots < j_k \leq n} \lambda_{j_1} \cdots \lambda_{j_k}, $$ which is the $k$-th elementary symmetric polynomial of the eigenvalues $(\lambda_1, \cdots, \lambda_n)$ of $a$. Let $\mathbb{C}^n_{(1,1)} $ be the space of real $(1, 1)$-forms on $\mathbb{C}^n$ with constant coefficients, and define the cone of $m$-positive $(1,1)$-forms on $\mathbb{C}^n$ by \begin{equation}\label{eq:mpositive} \Theta_m := \{\theta \in \mathbb{C}^n_{(1,1)} \, ; \, \theta \wedge \beta^{n - 1} \geq 0, \cdots, \theta^m \wedge \beta^{n - m} \geq 0\}. \end{equation} \begin{definition} \label{def:mpositive} 1) A smooth $(1,1)$-form $\theta$ on $\Omega$ is said to be $m$-positive on $\Omega$ if for any $z \in \Omega$, $\theta (z) \in \Theta_m$. 2) A function $u:\Omega \rightarrow \mathbb{R}\cup\{-\infty\}$ is said to be $m$-subharmonic on $\Omega$ if it is subharmonic on $\Omega$ (not identically $-\infty$ on any component) and for any collection of smooth $m$-positive $(1,1)$-forms $\theta_1,\dots,\theta_{m-1}$ on $\Omega$, the following inequality $$ dd^c u\wedge \theta_1\wedge \cdots \wedge \theta_{m-1} \wedge \beta^{n-m}\geq 0, $$ holds in the sense of currents on $\Omega$. \end{definition} We denote by $\mathcal{SH}_m (\Omega) $ the positive convex cone of $m$-subharmonic functions on $\Omega$. We give below the most basic properties of $m$-subharmonic functions that will be used in the sequel. \begin{proposition}\label{prop:basic} \noindent 1. 
If $u\in \mathcal{C}^2(\Omega)$, then $u$ is $m$-subharmonic on $\Omega$ if and only if $(dd^c u)^k\wedge \beta^{n-k}\geq0$ pointwise on $\Omega$ for $k=1, \cdots, m$. \noindent 2. $\mathcal{PSH}(\Omega)=\mathcal{SH}_n(\Omega)\subsetneq \mathcal{SH}_{n-1}(\Omega)\subsetneq \cdots \subsetneq \mathcal{SH}_1(\Omega)=\mathcal{SH}(\Omega) $. \noindent 3. $\mathcal{SH}_m(\Omega) \subset L^1_{loc} (\Omega)$ is a positive convex cone. \noindent 4. If $u$ is $m$-subharmonic on $\Omega$ and $f: I \rightarrow\mathbb{R}$ is a convex increasing function on an interval $I$ containing the image of $u$, then $f\circ u$ is $m$-subharmonic on $\Omega$. \noindent 5. The limit of a decreasing sequence of functions in $\mathcal{SH}_m(\Omega)$ is $m$-subharmonic on $\Omega$ when it is not identically $- \infty$ on any component. \noindent 6. Let $u \in \mathcal{SH}_m(\Omega)$ and $v \in \mathcal{SH}_m(\Omega') $, where $\Omega'\subset\mathbb{C}^n$ is an open set such that $\Omega \cap \Omega' \neq \emptyset$. If $u\geq v$ on $\Omega \cap \partial\Omega'$, then the function $$ z \mapsto w(z):=\left\{\begin{array}{lcl} \max(u(z),v(z)) &\hbox{ if}\ z \in \Omega \cap \Omega'\\ u(z) &\hbox{if}\ z \in\Omega\setminus\Omega'\\ \end{array}\right. $$ is $m$-subharmonic on $\Omega$. \end{proposition} Another ingredient which will be important is the regularization process. Let $\chi$ be a fixed smooth positive radial function with compact support in the unit ball $\mathbb{B} \subset \mathbb{C}^n$ such that $\int_{\mathbb{C}^n}\chi (\zeta)d\lambda_{2 n}(\zeta)=1$. For any $ 0 < \delta < \delta_0 := \mathrm{diam} (\Omega)$, we set $\chi_{\delta}(\zeta)=\frac{1}{\delta^{2n}}\chi (\frac{\zeta}{\delta})$ and $\Omega_{\delta}=\{z \in\Omega \, ; \, \mathrm{dist} (z,\partial\Omega)>\delta\}$.
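To illustrate Definition \ref{def:mpositive} and the strictness of the inclusions in point 2 of Proposition \ref{prop:basic}, here is a simple quadratic sketch in $\mathbb{C}^2$; the function and the constants below are chosen only for illustration:

```latex
% Take n = 2 and the smooth quadratic function
u (z) := \vert z_1 \vert^2 - \frac{1}{2} \vert z_2 \vert^2 ,
\qquad z = (z_1, z_2) \in \mathbb{C}^2 .
% The eigenvalues of dd^c u with respect to \beta are (1, -1/2), so that
s_1 = 1 - \frac{1}{2} = \frac{1}{2} \geq 0 ,
\qquad
s_2 = 1 \cdot \Big( - \frac{1}{2} \Big) = - \frac{1}{2} < 0 .
% By point 1 of Proposition \ref{prop:basic}, dd^c u \wedge \beta \geq 0
% pointwise while (dd^c u)^2 < 0, hence
% u \in \mathcal{SH}_1 (\mathbb{C}^2) \setminus \mathcal{SH}_2 (\mathbb{C}^2).
```

In particular $u$ is subharmonic but not plurisubharmonic, which shows that the inclusion $\mathcal{SH}_2 (\mathbb{C}^2) \subset \mathcal{SH}_1 (\mathbb{C}^2)$ is strict.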
Let $u \in \mathcal{SH}_m (\Omega) \subset L^1_{loc} (\Omega)$ and define its standard $\delta$-regularization by the formula \begin{equation} \label{eq:reg} { u}_{\delta} (z) := \int_{\Omega} u (z - \zeta) \chi_{\delta} (\zeta) d \lambda_{2n} (\zeta), \quad z \in \Omega_{\delta}. \end{equation} Then it is easy to see that $ {u}_{\delta}$ is $m$-subharmonic and smooth on $\Omega_{\delta}$ and decreases to $u$ on $\Omega$ as $\delta $ decreases to $0$. The following lemma was proved in \cite{GKZ08} (see also \cite{Ze20}). \begin{lemma} \label{lem:Poisson-Jensen} Let $u \in \mathcal{SH}_m (\Omega) \cap L^{1} (\Omega)$. Then for $0 < \delta < \delta_0$, its $\delta$-regularization extends to $\mathbb{C}^n$ by the formula \begin{equation} \label{eq:regularization} { u}_{\delta} (z) := \int_{\Omega} u (\zeta) \chi_{\delta} (z - \zeta) d \lambda_{2n} (\zeta), \quad z \in \mathbb{C}^n, \end{equation} and has the following properties: \noindent 1) $ { u}_{\delta}$ is a smooth function on $\mathbb{C}^n$ which is $m$-subharmonic on $\Omega_\delta$, and $({u}_\delta)$ decreases to $u$ on $\Omega$ as $\delta $ decreases to $0$; \noindent 2) for any $0 < \delta < \delta_0$, we have \begin{equation} \label{eq:PJ1} \int_{\Omega_\delta} ( { u}_{\delta} (z) - u(z)) d\lambda_{2 n}(z) \leq a_n \delta^2 \int_{\Omega_\delta} dd^c u \wedge \beta^{n - 1}, \end{equation} where $a_n > 0$ is a uniform constant independent of $u$ and $\delta$; \noindent 3) there exists a constant $b_n > 0$ such that if $u \leq 0$ on $\Omega$, \begin{equation} \label{eq:PJ2} \int_{\Omega_\delta} (u_\delta - u) d \lambda_{2n} \leq b_n \delta \Vert u\Vert_1, \end{equation} where $\Vert u\Vert_1 := \int_\Omega \vert u\vert d \lambda_{2 n}$. \end{lemma} \begin{proof} The first property is trivial and the second is proved in \cite{GKZ08}. The third property follows easily from the second.
Indeed, since the defining function $\rho$ of $\Omega$ is smooth and $\vert \nabla \rho \vert > 0$ on $\partial \Omega$, there exists a uniform constant $ c_1 > 0$ such that $- \rho (z) \geq c_1 \, \text{dist} (z, \partial \Omega)$ (see \cite{Ze20} for more details). Then by the integration by parts inequality (\ref{eq:testinequality}), it follows that \begin{eqnarray*} \int_{\Omega_\delta} dd^c u \wedge \beta^{n-1} & \leq & c_2 \delta^{-1} \int_{\Omega} (- \rho) dd^c u \wedge \beta^{n-1} \\ &\leq & c_3 \delta^{-1} \int_{\Omega} (-u) \beta^n, \nonumber \end{eqnarray*} where $ c_2, c_3 > 0$ are uniform constants. The inequality (\ref{eq:PJ2}) follows using (\ref{eq:PJ1}). \end{proof} An estimate like (\ref{eq:PJ2}) was first obtained in \cite{BKPZ16} (see also \cite{KN19} and \cite{Ze20}). Let us introduce the notion of strong $m$-pseudoconvexity that will be used in the sequel. \begin{definition} We say that the open set $\Omega$ is strongly $m$-pseudoconvex if $\Omega$ admits a defining function $\rho$ which is smooth and strictly $m$-subharmonic in a neighbourhood of $\bar \Omega$ and satisfies $\vert \nabla \rho\vert > 0$ on $\partial \Omega = \{\rho = 0\}$. In this case we can choose $\rho $ so that \begin{equation} \label{eq:stronpconvexity} (dd^c \rho)^k \wedge \beta^{n - k} \geq \beta^n \, \, \mathrm{for} \, \, 1 \leq k \leq m, \end{equation} pointwise on $\Omega$. \end{definition} The following lemma is analogous to a lemma proved in \cite{GKZ08} using mean values rather than convolutions. \begin{lemma} \label{lem:sup-mean} Let $\Omega \Subset \mathbb{C}^n$ be a bounded domain and $u \in \mathcal{SH} (\Omega) \cap L^{\infty} ({\bar \Omega})$. Assume that $u$ is H\"older continuous near $\partial\Omega$ with exponent $\alpha \in ]0,1[$.
Then the following properties are equivalent: $(i)$ $\exists c_1 >0$, ${ u}_\delta := u \star \chi_\delta \leq u + c_1 \delta^{\alpha}$ in $\Omega_\delta$; $(ii)$ $\exists c_2 >0$, $\sup_{\bar B(z,\delta)} u \leq u (z) + c_2 \delta^{\alpha}$ for $z \in \Omega_\delta$; $(iii)$ $\exists c_3 > 0$, $\vert u(z) - u(z')\vert \leq c_3 \vert z-z'\vert^\alpha$ for $z, z' \in \Omega$. \end{lemma} A similar lemma has recently been proved in the compact Hermitian manifold setting in \cite{LPT20}. A slight modification of the proof of \cite{GKZ08}, together with an observation from \cite{LPT20}, also works in our context, as explained in \cite{Ze20}. \begin{remark} \label{rem:HolderBoundary} Recall that $u$ is H\"older continuous near $\partial\Omega$ with exponent $\alpha \in ]0,1]$ if there exist $\delta_1 > 0$ small enough and a constant $\kappa > 0$ such that for any $\zeta \in \partial \Omega$ and any $0 < \delta < \delta_1$, $$ \sup_{z \in \Omega (\zeta,\delta)} \vert u (z) - u (\zeta) \vert \leq \kappa \delta^\alpha, \, \, \, \hbox{where} \, \, \, \, \Omega(\zeta,\delta) := \Omega \cap B (\zeta,\delta). $$ Assume that there exist two functions $v , w $ defined and H\"older continuous with exponent $\alpha $ on a neighbourhood $ U$ of $\partial \Omega$ in $\bar \Omega$ such that $v \leq u \leq w$ on $ U$ and $v = u = w$ on $\partial \Omega$. Then $u$ is H\"older continuous with exponent $\alpha $ near $\partial \Omega$. \end{remark} \subsection{Complex Hessian operators} Following \cite{Bl05}, we can define the Hessian operators acting on (locally) bounded $m$-subharmonic functions as follows. Given $u_1, \cdots, u_k \in \mathcal{SH}_m (\Omega) \cap L^{\infty} (\Omega)$ ($1 \leq k \leq m$), one can define inductively the following positive $(n-m+k,n-m+k)$-current on $\Omega$ $$ dd^c u_1 \wedge \cdots \wedge dd^c u_k \wedge \beta^{n - m} := dd^c (u_1 dd^c u_2 \wedge \cdots \wedge dd^c u_k \wedge \beta^{n - m}).
$$ In particular, if $u \in \mathcal{SH}_m (\Omega) \cap L^{\infty}_{loc} (\Omega)$, the positive current $(dd^c u)^m \wedge \beta^{n-m}$ can be identified with a positive Borel measure on $\Omega$, the so-called $m$-Hessian measure of $u$, denoted by $$ \sigma_m (u) := (dd^c u)^m \wedge \beta^{n-m}. $$ Observe that when $m= 1$, $\sigma_1 (u) = dd^c u \wedge \beta^{n-1}$ is the Riesz measure of $u$ (up to a positive constant), while $\sigma_n (u) = (dd^c u)^n$ is the complex Monge-Amp\`ere measure of $u$. It is then possible to extend Bedford-Taylor theory to this context. In particular, the Chern-Levine-Nirenberg inequalities hold, and the Hessian operators are continuous under local uniform convergence and pointwise a.e. monotone convergence on $\Omega$ of sequences of functions in $\mathcal{SH}_m (\Omega) \cap L^{\infty}_{loc} (\Omega)$ (see \cite{Bl05}, \cite{Lu12}). We define $\mathcal{E}_m^0 (\Omega) $ to be the positive convex cone of negative functions $\phi \in \mathcal{SH}^-_m (\Omega) \cap L^{\infty} (\Omega)$ with zero boundary values such that $$ \int_{\Omega} (dd^c \phi)^m \wedge \beta^{n - m} < + \infty. $$ These are the ``test functions'' of $m$-Hessian potential theory: the integration by parts formula is valid for these functions. More generally, it follows from \cite{Lu12,Lu15} that the following property holds: if $\phi \in \mathcal{E}_m^0 (\Omega) $ and $u , v \in \mathcal{SH}_m (\Omega) \cap L^{\infty} (\Omega)$ with $u \leq 0$, then for $0 \leq k \leq m - 1$, \begin{equation} \label{eq:testinequality} \int_\Omega (-\phi) dd^c u \wedge (dd^c v)^k \wedge \beta^{n - k-1} \leq \int_\Omega (-u) dd^c \phi \wedge (dd^c v)^k \wedge \beta^{n - k-1}. \end{equation} An important tool in the corresponding potential theory is the comparison principle. \begin{proposition} \label{prop:Comparison Principle} Assume that $u,v\in \mathcal{SH}_m(\Omega)\cap L^{\infty}(\Omega)$ and for any $\zeta \in \partial \Omega$, $\liminf_{z \rightarrow \zeta }(u(z)- v(z))\geq 0$.
Then $$ \int_{\{u<v\}}(dd^c v)^m\wedge\beta^{n-m} \leq \int_{\{u<v\}}(dd^c u)^m\wedge\beta^{n-m}. $$ Consequently, if $(dd^cu)^m\wedge\beta^{n-m}\leq(dd^cv)^m\wedge\beta^{n-m}$ weakly on $\Omega$, then $u \geq v$ on $\Omega$. \end{proposition} It follows from the comparison principle that if the Dirichlet problem (\ref{eq:DirPb}) admits a solution, then it is unique. The following result will also be needed. \begin{corollary} \label{coro:Comparison Principle} Let $\Omega \Subset \mathbb{C}^n$ be a bounded strongly $m$-pseudoconvex domain. Assume that $u,v\in \mathcal{SH}_m(\Omega)\cap L^{\infty}(\Omega)$ satisfy $u \leq v$ on $\Omega$ and for any $\zeta \in \partial \Omega$, $\lim_{z \rightarrow \zeta }(u(z)- v(z))= 0$. Then for any $\psi \in \mathcal{SH}_m (\Omega) \cap L^{\infty} (\Omega)$ and any $1 \leq k \leq m-1$, $$ \int_{\Omega} dd^c v \wedge (dd^c \psi)^{k} \wedge\beta^{n-k-1} \leq \int_{\Omega} dd^c u \wedge (dd^c \psi)^k \wedge\beta^{n-k-1}. $$ \end{corollary} \begin{proof} Fix $\varepsilon > 0$. By the hypothesis, there exists a compact subset $K \Subset \Omega$ such that $ u \geq v - \varepsilon$ on $\Omega \setminus K$. Then $v_\varepsilon := \max \{u,v-\varepsilon\} \in \mathcal{SH}_m (\Omega) \cap L^{\infty} (\Omega)$ and $v_\varepsilon = u$ on $\Omega \setminus K$. We claim that this implies that \begin{equation} \label{eq:masspreservation} \int_{\Omega} dd^c v_\varepsilon \wedge (dd^c \psi)^{k} \wedge\beta^{n-k-1} = \int_{\Omega} dd^c u \wedge (dd^c \psi)^k \wedge\beta^{n-k-1}. \end{equation} Indeed, we have in the sense of currents $$ dd^c v_\varepsilon \wedge (dd^c \psi)^{k} \wedge\beta^{n-k-1} - dd^c u \wedge (dd^c \psi)^k \wedge\beta^{n-k-1} = dd^c T, $$ where $T := (v_\varepsilon - u) (dd^c \psi)^k \wedge\beta^{n-k-1}$. Since $T$ is a current of order $0$ with compact support in $\Omega$, it follows that $\int_\Omega dd^c T = 0$, which proves (\ref{eq:masspreservation}).
Now observe that $v_\varepsilon$ increases to $v$ as $\varepsilon$ decreases to $0$. By the monotone continuity of the Hessian operators, it follows that $$ dd^c v_\varepsilon \wedge (dd^c \psi)^{k} \wedge\beta^{n-k-1} \to dd^c v \wedge (dd^c \psi)^{k} \wedge\beta^{n-k-1} $$ weakly on $\Omega$ as $\varepsilon \to 0$. Therefore, using (\ref{eq:masspreservation}), we conclude that \begin{eqnarray*} \int_{\Omega} dd^c v \wedge (dd^c \psi)^{k} \wedge\beta^{n-k-1} &\leq &\liminf_{\varepsilon \to 0} \int_{\Omega} dd^c v_\varepsilon \wedge (dd^c \psi)^{k} \wedge\beta^{n-k-1} \\ & =& \int_{\Omega} dd^c u \wedge (dd^c \psi)^{k} \wedge\beta^{n-k-1}. \end{eqnarray*} \end{proof} Let us recall the following estimates due to Cegrell (\cite{Ceg04}) for the complex Monge-Amp\`ere operators, extended by Charabati to complex Hessian operators (\cite{Ch16}). \begin{lemma} \label{lem:Cegrell} Let $u, v, w \in\mathcal{E}_m^0(\Omega)$. Then for any $1 \leq k \leq m - 1$, $$ \begin{array}{lcl} \int_{\Omega}dd^cu\wedge(dd^cv)^k\wedge(dd^cw)^{m-k-1}\wedge\beta^{n-m} \leq H_m (u)^{\frac{1}{m}} \, H_m (v)^{\frac{k}{m}} \, H_m (w)^{\frac{m-k-1}{m}}, \end{array} $$ where $H_m (u) := \int_{\Omega}(dd^c u)^m \wedge \beta^{n-m}$. In particular, if $\Omega$ is strongly $m$-pseudoconvex, then $$ \int_{\Omega}dd^c u \wedge (dd^c w)^k \wedge\beta^{n-k -1} \leq c_{m,n} \left(H_m (u)\right)^{\frac{1}{m}} \left(H_m (w)\right)^{\frac{k}{m}}, $$ and $$ \int_{\Omega}dd^c u \wedge \beta^{n-1} \leq c_{m,n} \left(H_m (u)\right)^{\frac{1}{m}}, $$ where $c_{m,n} > 0$ is a uniform constant. \end{lemma} We will need the following generalization of the last part of Lemma \ref{lem:Cegrell} to functions whose boundary values do not vanish identically. \begin{lemma} \label{lem:Cegrell2} Assume that $g \in C^{1,1} (\partial \Omega)$.
Then there exists a constant $M' = M' (g,\Omega) > 0$ such that for any $0 \leq k \leq m - 1$, any $v \in \mathcal{SH}_m (\Omega)$ with $v\mid_{\partial \Omega} \equiv g$ and any $\psi \in \mathcal{E}^0_m (\Omega)$, we have \begin{equation} \label{eq:claim} \int_\Omega dd^c v \wedge (dd^c \psi)^{k} \wedge \beta^{n-k - 1} \leq \,\left( c_{m,n} \, H_m (v)^{1 \slash m} + M'\right) \, H_m (\psi)^{k \slash m} , \end{equation} where $ c_{m,n} > 0$ is the same constant as in the previous lemma. \end{lemma} \begin{proof} Fix $0 \leq k \leq m-1$ and set $$ I_k(v,\psi) := \int_\Omega dd^c v \wedge (dd^c \psi)^{k} \wedge \beta^{n-k-1}. $$ If $g \equiv 0$, then $v \equiv 0$ on $\partial \Omega$, and the statement with $M' = 0$ follows from Lemma \ref{lem:Cegrell}. \smallskip Now assume that $g \in C^{1,1} (\partial \Omega)$. There exists $G \in C^{1,1} (\bar{\Omega})$ such that $G = g$ on $\partial \Omega$. By the choice of $\rho $ we can find a large constant $L > 0$ such that $w := L \rho + G$ is $m$-subharmonic on $\Omega$ and, for $1 \leq k \leq m$, $(dd^c w)^k \wedge \beta^{n-k} \leq L'_m \beta^n$ pointwise almost everywhere on $\Omega$ for some uniform constant $L'_m > 0$. By the bounded subsolution theorem (\cite{N13}), there exists $v_0 \in \mathcal{E}^0_m (\Omega)$ a solution to the equation $(dd^c v_0)^m \wedge \beta^{n-m} = (dd^c v)^m \wedge \beta^{n-m}$ with boundary values $v_0 \equiv 0$. The function $\tilde v := v_0 + w$ is $m$-subharmonic and bounded on $\Omega$ and $ \tilde v = g = v$ on $\partial \Omega$. By the comparison principle, $\tilde v \leq v$ on $\Omega$. Moreover, by Corollary \ref{coro:Comparison Principle} we have $$ I_k (v,\psi) \leq I_k (\tilde v,\psi). $$ It suffices to estimate $ I_k (\tilde v,\psi) $ by a uniform constant.
We have $$ I_k (\tilde v,\psi) = I_k (v_0, \psi) + I_k (w,\psi). $$ Since $v_0 \mid_{\partial \Omega} \equiv 0$, it follows from the previous case that \begin{eqnarray*} I_k (v_0,\psi) & \leq & c_{m,n} \left(\int_\Omega (dd^c v_0)^m \wedge \beta^{n-m}\right)^{1 \slash m} \left(\int_\Omega (dd^c \psi)^m \wedge \beta^{n-m}\right)^{k \slash m} \\ &\leq & c_{m,n} H_m (v)^{1 \slash m} H_m (\psi)^{k\slash m}. \end{eqnarray*} It remains to estimate $ I_k (w,\psi)$. Since $w \in C^{1,1} (\bar{\Omega})$, we have $dd^c w \leq M_3 \beta$ pointwise almost everywhere on $\Omega$ for some constant $M_3 > 0$; hence, by Lemma \ref{lem:Cegrell}, \begin{eqnarray*} \int_\Omega dd^c w \wedge (dd^c \psi)^k \wedge \beta^{n-k-1} &\leq & M' \int_\Omega (dd^c \psi) ^k \wedge \beta^{n-k} \\ & \leq & M' H_m (\psi)^{k\slash m}, \end{eqnarray*} where $M' = M'(g) > 0$ depends on the uniform bound of $dd^c G$. This proves the inequality of the lemma. \end{proof} \subsection{The bounded subsolution theorem} Let $\Omega \Subset \mathbb{C}^n$ be a bounded strongly $m$-pseudoconvex domain. Assume there exists $v \in \mathcal{SH}_m (\Omega)\cap L^{\infty} (\Omega)$ such that \begin{equation} \label{eq:boundedsubsol} \mu \leq (dd^c v)^m \wedge \beta^{n -m} \, \, \mathrm{on} \, \, \Omega \, \, \, \hbox{and} \, \, \, v|_{\partial \Omega} \equiv 0. \end{equation} Ngoc Cuong Nguyen proved that under this condition, the Dirichlet problem (\ref{eq:DirPb}) admits a unique bounded $m$-subharmonic solution (see \cite{N13}). \begin{theorem} \label{thm:boundedsubsolution} (\cite{N13}). Let $\Omega \Subset \mathbb{C}^n$ be a bounded strongly $m$-pseudoconvex domain and $\mu$ a positive Borel measure on $\Omega$ satisfying the condition (\ref{eq:boundedsubsol}).
Then for any $g \in \mathcal C^0 (\partial \Omega)$, there exists a unique $U = U_{g,\mu} \in \mathcal{SH}_m (\Omega)\cap L^{\infty} (\Omega)$ such that $(dd^c U)^m \wedge \beta^{n -m} = \mu$ on $\Omega$ and $U|_{\partial \Omega} \equiv g.$ \end{theorem} \subsection{The viscosity comparison principle} In order to prove Theorem A, we will need an important intermediate result (Theorem \ref{thm:obstacle}). Its proof uses the viscosity comparison principle, which was established for complex Hessian equations by H.C. Lu (\cite{Lu13}) in the spirit of the earlier work by P. Eyssidieux, V. Guedj and the second author on complex Monge-Amp\`ere equations (\cite{EGZ11}). To state this comparison principle we need some definitions. Let $\Omega \Subset \mathbb{C}^n$ be a bounded domain and $F : \Omega \times \mathbb{R} \longrightarrow \mathbb{R}$ a continuous function {\it non-decreasing} in the last variable. \begin{definition} Let $u: \Omega\rightarrow \mathbb{R}\cup\{-\infty\}$ be a function and $q$ be a $\mathcal C^2$ function in a neighborhood of $z_0\in \Omega.$ We say that $q$ touches $u$ from above (resp. below) at $z_0$ if $q(z_0)=u(z_0)$ and $q(z)\geq u(z)$ (resp. $q(z)\leq u(z)$) for every $z$ in a neighborhood of $z_0.$ \end{definition} \begin{definition}\label{def: viscosity subsolution} An upper semicontinuous function $u: \Omega\rightarrow \mathbb{R}$ is a viscosity subsolution to the equation \begin{equation}\label{eq: heq 1} (dd^c u)^m\wedge \beta^{n-m} = F(z,u)\beta^n, \end{equation} if for any $z_0\in \Omega$ and any $\mathcal C^2$ function $q$ which touches $u$ from above at $z_0$, we have $$ \sigma_m(q) \geq F(\cdot,q (z_0))\beta^n, \ \text{at} \ z = z_0. $$ We will also say that $\sigma_m(u)\geq F(\cdot,u)\beta^n$ in the viscosity sense at $z_0$, and that $q$ is an upper test function for $u$ at $z_0$.
\end{definition} \begin{definition}\label{def: viscosity supersolution} A lower semicontinuous function $v : \Omega \rightarrow \mathbb{R}$ is a viscosity supersolution to (\ref{eq: heq 1}) if for any $z_0\in \Omega$ and any $\mathcal C^2$ function $q$ which touches $v$ from below at $z_0$, $$ [(dd^c q)^m\wedge \beta^{n-m}]_+\leq F(z,q)\beta^n, \ \text{at} \ z = z_0. $$ Here $[\alpha^m\wedge\beta^{n-m}]_+$ is defined to be $\alpha^m\wedge\beta^{n-m} $ if $\alpha$ is $m$-positive and $0$ otherwise. We will also say that $\sigma_m(v)_+ \leq F(\cdot,v)\beta^n$ in the viscosity sense at $z_0$, and that $q$ is a lower test function for $v$ at $z_0$. \end{definition} \begin{remark} If $v \in \mathcal C^2 (\Omega)$, then $\sigma_m(v)\geq F(z,v)\beta^n$ (resp. $[\sigma_m(v)]_+\leq F(z,v)\beta^n$) holds on $\Omega$ in the viscosity sense if and only if it holds in the usual sense. \end{remark} \begin{definition} A continuous function $u: \Omega \rightarrow \mathbb{R}$ is a viscosity solution to (\ref{eq: heq 1}) if it is both a subsolution and a supersolution. \end{definition} The first important result in this theory compares viscosity and potential subsolutions. \begin{proposition}[\cite{Lu13}] \label{prop: viscosity vs potential general case} Let $u$ be a bounded upper semi-continuous function in $\Omega.$ Then the inequality \begin{equation}\label{eq: viscosity vs potential 2} \sigma_m (u)\geq F(\cdot,u)\beta^n \end{equation} holds in the viscosity sense on $\Omega$ if and only if $u$ is $m$-subharmonic and (\ref{eq: viscosity vs potential 2}) holds in the potential sense on $\Omega$. \end{proposition} Now we can state the viscosity comparison principle. \begin{theorem} [\cite{Lu13}]\label{thm: viscosity comparison principle} Let $u : \Omega \longrightarrow \mathbb{R}$ be a bounded viscosity subsolution and $v : \Omega \longrightarrow \mathbb{R}$ be a viscosity supersolution of the equation $$ \sigma_m(u)=F(\cdot,u)\beta^n $$ on $\Omega$.
If $u \leq v$ on $\partial \Omega$, then $u\leq v$ on $\Omega.$ \end{theorem} For more details on this theory, we refer to \cite{Lu13} and \cite{EGZ11} in the complex case and to \cite{CIL92} for the real case. \subsection{Weak stability estimates} An important tool in dealing with our problems is the notion of capacity. This was introduced by Bedford and Taylor in their pioneering work on the complex Monge-Amp\`ere operator (see \cite{BT82}). Let us recall the corresponding notion of capacity we will use here (see \cite{Lu12}, \cite{SA13}). Let $\Omega \Subset \mathbb{C}^n$ be a strongly $m$-pseudoconvex domain. The $m$-Hessian capacity is defined as follows. For any compact set $K \subset \Omega$, $$ \mathrm{Cap}_m(K,\Omega) := \sup \left\{\int_K (dd^c u)^m \wedge \beta^{n - m} \, ; \, u \in \mathcal{SH}_m (\Omega) , \, - 1 \leq u \leq 0\right\}. $$ We can extend this capacity as an outer capacity on $\Omega$. Given a set $S \subset \Omega$, we define the inner capacity of $S$ by the formula $$ \mathrm{Cap}_m(S,\Omega) := \sup \{\mathrm{Cap}_m(K,\Omega) \, ; \, K \subset S \, \, \hbox{compact}\}. $$ The outer capacity of $S$ is defined by the formula $$ \mathrm{Cap}^*_m(S,\Omega) := \inf \{\mathrm{Cap}_m(U,\Omega) \, ; \, U \supset S \, \, \hbox{open}\}. $$ It is possible to show that $\mathrm{Cap}^*_m(\cdot,\Omega)$ is a Choquet capacity; hence any Borel set $ B \subset \Omega$ is capacitable, and for any compact set $K \subset \Omega$, \begin{equation} \label{eq:cap} \text{Cap}_m(K,\Omega)=\int_{\Omega}(dd^c u_K^*)^m\wedge\beta^{n-m}, \end{equation} where $u_K$ is the relative equilibrium potential of $(K,\Omega)$ defined by the formula $$ u_K:=\sup\{u\in \mathcal{SH}_m(\Omega) \, ; \, u \, \leq \, -{\bf 1}_K \, \mathrm{on } \, \, \Omega\}, $$ and $u_K^*$ is its upper semi-continuous regularization on $\Omega$ (see \cite{Lu12}).
It is well known that $u_K^*$ is $m$-subharmonic on $\Omega$, $- 1 \leq u_K^* \leq 0$, $u_K^*= - 1$ quasi-everywhere (with respect to $\text{Cap}_m$) on $K$, and $u_K^* \to 0$ as $z \to \partial \Omega$ (see \cite{Lu12}). We will use the following definition. \begin{definition} \label{def:cap-domination} Let $\mu$ be a positive Borel measure on $\Omega$ and let $A,\tau > 0$ be positive numbers. We say that $\mu$ is dominated by the $m$-Hessian capacity with parameters $(A,\tau)$ if for any compact subset $K \subset \Omega$ with $\mathrm{Cap}_m (K,\Omega) \leq 1$, \begin{equation} \label{eq:capdomination} \mu (K) \leq A \, \mathrm{Cap}_m (K,\Omega)^{\tau}. \end{equation} \end{definition} Observe that, by capacitability, this inequality is then satisfied for any Borel set $K \subset \Omega$. Let us mention that S. Ko\l odziej was the first to relate the domination of the measure $\mu$ by the Monge-Amp\`ere capacity to the regularity of the solution to complex Monge-Amp\`ere equations (see \cite{Kol96}). Using his idea, Eyssidieux, Guedj and the second author were able to establish in \cite{EGZ09} a weak stability $L^1$-$L^{\infty}$ estimate for bounded solutions to the Dirichlet problem for the complex Monge-Amp\`ere equation. This result is the main tool in deriving estimates on the modulus of continuity of solutions to the complex Monge-Amp\`ere and Hessian equations. The following examples are due to Dinew and Ko\l odziej (see \cite{DK14}). \begin{example} 1. Dinew and Ko\l odziej proved in \cite{DK14} that the volume measure $\lambda_{2 n}$ is dominated by the $m$-Hessian capacity. Namely, for any $1 < r < \frac{m}{n - m}$, there exists a constant $N (r) > 0$ such that for any compact subset $K \subset \Omega$, \begin{equation} \label{eq:DK} \lambda_{2 n} (K) \leq N (r) \mathrm{Cap}_m (K,\Omega)^{1+ r}. \end{equation} Observe that this estimate is sharp in terms of the exponent when $m < n$.
This can be seen by taking $\Omega = \mathbb{B}$ the unit ball and $K := \bar{\mathbb{B}}_s \subset \mathbb{B}$ the closed ball of radius $s \in ]0,1[$, since $\mathrm{Cap}_m(\bar{\mathbb{B}}_s ,\mathbb{B}) \approx s^{2 (n-m)}$ as $s \to 0$ (see \cite{Lu12}). When $m=n$ we know that the domination is much more precise (see \cite{ACKPZ09}). 2. Let $ 0 \leq f \in L^p (\Omega)$ with $p > n \slash m$. Then $ \frac{n (p-1)}{p (n - m)} > 1$. By the H\"older inequality and inequality (\ref{eq:DK}), we obtain: for any $1 < \tau < \frac{n (p-1)}{p (n - m)}$ there exists a constant $M (\tau) > 0$ such that for any compact set $K \subset \Omega$, \begin{equation} \label{eq:DK2} \int_K f d \lambda_{2 n} \leq M (\tau) \Vert f\Vert_p \mathrm{Cap}_m (K,\Omega)^{\tau}. \end{equation} \end{example} Theorem A will provide us with many new examples. The condition (\ref{eq:capdomination}) plays an important role in the following stability result, which will be a crucial point in the proof of our theorems (see \cite{EGZ09, GKZ08, Ch16}). \begin{proposition} \label{prop:stability} Let $\mu$ be a positive Borel measure on $\Omega$ dominated by the $m$-Hessian capacity with parameters $(A, \tau)$ such that $\tau > 1$. Then for any $u, v \in \mathcal {SH}_m (\Omega) \cap L^{\infty} (\Omega)$ such that $(dd^c u)^m \wedge \beta^{n - m} \leq \mu$ on $\Omega$ and $\liminf_{\partial \Omega} (u - v) \geq 0$, we have \begin{equation} \label{eq:stability} \sup_{\Omega} (v - u)_+ \leq 2 \Vert (v-u)_+\Vert_{1,\mu}^{1 \slash (m+1)} + C \|(v-u)_+\|^\gamma_{1,\mu}, \end{equation} where $\Vert (v-u)_+ \Vert_{1,\mu} := \int_{\Omega} (v-u)_+ d \mu$ and \begin{equation} \label{eq:stbilityconstant} C := 1 + \frac{2^{\tau} A^{\frac{1}{m}}}{1-2^{1 -\tau}}, \quad \gamma = \gamma (\tau,m):= \frac{\tau - 1}{\tau (m + 1) - m }\cdot \end{equation} \end{proposition} Observe that the most relevant case in applications is when $\Vert (v-u)_+ \Vert_{1,\mu}$ is small, so the relevant exponent is $\gamma < 1 \slash (m +1)$.
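Before turning to the proof, let us sketch what the exponent $\gamma$ of (\ref{eq:stbilityconstant}) gives on a concrete instance of Example 2 above; the numerical values $n = 2$, $m = 1$, $p = 4$ below are chosen only for illustration:

```latex
% For n = 2, m = 1 and \mu = f \lambda_{2n} with 0 \leq f \in L^4 (\Omega):
\frac{n (p - 1)}{p (n - m)} = \frac{2 \cdot 3}{4 \cdot 1} = \frac{3}{2} ,
% so any 1 < \tau < 3/2 is admissible in (\ref{eq:DK2}); taking \tau = 5/4,
\gamma (\tau, m) = \frac{\tau - 1}{\tau (m + 1) - m}
= \frac{1 \slash 4}{5 \slash 2 - 1} = \frac{1}{6}
< \frac{1}{m + 1} = \frac{1}{2} .
```

Qualitatively, as $p$ increases the admissible range for $\tau$ widens and the stability exponent $\gamma$ improves accordingly.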
\begin{proof} The proof uses an idea which goes back to Ko\l odziej (\cite{Kol96}), with some simplifications due to Eyssidieux, Guedj and the second author (see \cite{EGZ09, GKZ08}). It relies on the following estimate: for any $t > 0$ and $s > 0$, \begin{equation}\label{eq:Cap-MA} t^m \mathrm{Cap}_m(\{u< v -s-t\},\Omega)\leq \int_{\{u<v -s\}}(dd^cu)^m\wedge\beta^{n-m}. \end{equation} Indeed, let $t > 0, s > 0$ be fixed and let $ w \in \mathcal{SH}_m(\Omega)$ be given such that $-1 \leq w \leq 0$. Then $$\{u-v<-s-t\}\subset\{u-v< t w -s\} \subset\{u-v<-s\} \Subset\Omega.$$ It follows that \begin{eqnarray*} t^m \int_{\{u-v<-s-t\}}(dd^c w)^m\wedge\beta^{n-m}&\leq&\int_{\{u< v - s-t\}}(dd^c(v+tw))^m\wedge\beta^{n-m}\\ &\leq& \int_{\{u< v+ tw - s\}}(dd^c(v+tw))^m\wedge\beta^{n-m}. \end{eqnarray*} On the other hand, the comparison principle yields \begin{eqnarray*} \int_{\{u< v+ tw - s\}}(dd^c(v+tw))^m\wedge\beta^{n-m} &\leq&\int_{\{u < v +t w - s\}}(dd^c u)^m\wedge\beta^{n-m}\\ &\leq&\int_{\{u< v -s\}}(dd^cu)^m\wedge\beta^{n-m}. \end{eqnarray*} The last two inequalities imply (\ref{eq:Cap-MA}). Applying inequality (\ref{eq:Cap-MA}) with the parameters $(s \slash 2, s\slash 2)$ instead of $(t,s)$, and taking into account that $(dd^c u)^m \wedge \beta^{n-m} \leq \mu$, we obtain \begin{eqnarray} \nonumber \mathrm{Cap}_m(\{u< v - s \},\Omega) & \leq & 2^m s^{-m} \int_{\{u<v - s\slash 2\}}(dd^cu)^m\wedge\beta^{n-m} \\ & \leq & 2^{m + 1} s^{-m - 1} \int_\Omega (v-u)_+ d \mu. \end{eqnarray} Set $s_0 := 2 \Vert (v-u)_+\Vert_{1,\mu}^{1 \slash (m+1)}$. Then for any $s \geq s_0$, \begin{equation} \label{eq:smallcap} \mathrm{Cap}_m(\{u< v - s \},\Omega)\leq 1. \end{equation} Fix $\varepsilon > 0$ and $s \geq 0$.
Then, applying inequality (\ref{eq:Cap-MA}) with $s_0 + s + \varepsilon$ instead of $s$ and taking into account the fact that $(dd^c u)^m \wedge \beta^{n - m} \leq \mu$ weakly on $\Omega$, we get \begin{equation} \label{eq:cap-mu} t^m \mathrm{Cap}_m(\{u< v - s_0 - \varepsilon - s -t\},\Omega)\leq \int_{\{u<v -s_0 - \varepsilon - s\}} d \mu. \end{equation} Set $ f (s) = f_\varepsilon (s):=\mathrm{Cap}_m(\{u-v<-s - s_0 -\varepsilon\},\Omega)^{\frac{1}{m}}$. By (\ref{eq:smallcap}), we have $f (s) \leq 1$. Hence, since $\mu$ is dominated by capacity, it follows that for any $t > 0$ and $s > 0$, $$ t f (s + t) \leq A^{\frac{1}{m}} f (s)^{1 + a}, \, \, \hbox{where} \, \, a := \tau - 1 > 0. $$ It follows from \cite[Lemma 2.4]{EGZ09} that $f (s) = 0$ for any $s\geq S_\infty$, where $$ S_\infty:=\frac{2 A^{\frac{1}{m}} }{1-2^{-a}} [f (0)]^{a}. $$ Thus $v-u\leq s_0 + \varepsilon + S_\infty$ quasi-everywhere on $\Omega$, and then the inequality holds everywhere on $\Omega$, i.e. $$ \max (v-u)_+ \leq s_ 0 + \varepsilon + \frac{2 A^{\frac{1}{m}}}{1-2^{-a}} \mathrm{Cap}_m(\{v - u >\varepsilon\},\Omega)^{a}. $$ Applying (\ref{eq:Cap-MA}) with $t=\varepsilon$ and $s = 0$, we obtain $$ \mathrm{Cap}_m(\{v - u > \varepsilon\},\Omega)\leq 2 \varepsilon^{-m-1}\|(v-u)_+\|_{1,\mu}. $$ As a consequence of the previous estimate, we obtain $$ \sup_\Omega(v-u)\leq 2 \Vert (v-u)_+\Vert_{1,\mu}^{1 \slash (m+1)} + \varepsilon+ C' \varepsilon^{-a (m+1)}\|(v-u)_+\|^{a}_{1,\mu}, $$ where $C' := \frac{2^{a + 1} A^{\frac{1}{m}}}{1-2^{-a}}$. Set $\varepsilon :=\|(v-u)_+\|^\gamma_{1,\mu}$, with $\gamma:=\frac{a}{1 +a(m+1)} = \frac{\tau - 1}{(\tau - 1) (m + 1) + 1}.$ Then $$ \sup_{\Omega}(v-u)_+\leq 2 \Vert (v-u)_+\Vert_{1,\mu}^{1 \slash (m+1)} + C \|(v-u)_+\|^\gamma_{1,\mu}, $$ where $C := C' + 1 = 1 + \frac{2^{a + 1} A^{\frac{1}{m}}}{1-2^{-a}} = 1 + \frac{2^{\tau} A^{\frac{1}{m}}}{1-2^{1 -\tau}}$.
\end{proof} \section{Subharmonic envelopes and obstacle problems} Here we prove some results that will be used in the proof of Theorem A. Since they are of independent interest, we state them in the most general form and give complete proofs. \subsection{Subharmonic envelopes} Let $\Omega \Subset \mathbb{C}^n$ be a bounded domain, let $h : \Omega \longrightarrow \mathbb{R}$ be a non-positive bounded Borel function, and define the corresponding projection: \begin{equation} \label{eq:subextension} \tilde h = P_{m,\Omega} (h) := \left(\sup \{v \in \mathcal {SH}_m (\Omega) ; \, v \leq h \, \text{in} \, \, \Omega\}\right)^*. \end{equation} Observe that we do not need to take the upper semi-continuous regularization if $h$ is upper semi-continuous on $\Omega$. On the other hand, we can easily see that $$ P_{m,\Omega} (h) := \sup \{v \in \mathcal {SH}_m (\Omega) ; \, v \leq h \, \text{quasi everywhere on} \, \, \Omega\}, $$ where $v \leq h$ quasi everywhere on $\Omega$ means that the exceptional set where $v > h$ has zero $\mathrm{Cap}_m$-capacity. This is a classical construction in Potential Theory; it was considered in Complex Analysis first by H. Bremermann in \cite{Brem59} and J.B. Walsh in \cite{Wal69}, and also by J. Siciak in \cite{Sic81}. Later it was studied by Bedford and Taylor when solving the Dirichlet problem for the complex Monge-Amp\`ere equation (\cite{BT76}, \cite{BT82}). In the setting of compact K\"ahler manifolds it has been considered by R. Berman and J.-P. Demailly in \cite{BD12} and later in \cite{Ber19}. It has also been considered recently in \cite{GLZ19} in connection with the supersolution problem for complex Monge-Amp\`ere equations, where a precise estimate of its complex Monge-Amp\`ere measure was given. We will extend these last results to Hessian equations. \begin{lemma} \label{lem:projection} Let $\Omega \Subset \mathbb{C}^n$ be a bounded strongly $m$-pseudoconvex domain and $h$ a bounded lower semi-continuous function on $\Omega$.
Then the function $\tilde h := P_{m,\Omega} (h)$ satisfies the following properties: $(i)$ $\tilde h \in \mathcal {SH}_m (\Omega) \cap L^{\infty} (\Omega)$, and $\tilde h \leq h$ a.e. on $\Omega$; $(ii)$ if $h$ is continuous on $\bar \Omega$, then $\tilde h$ is continuous on $\bar{\Omega}$ and satisfies \begin{equation} \label{eq:boundaryvaluesh} \lim_{\Omega \ni z \to \zeta} \tilde h (z) = h (\zeta), \, \, \zeta \in \partial \Omega; \end{equation} $(iii)$ $ \int_\Omega (\tilde h - h) (dd^c \tilde h)^m \wedge \beta^{n - m} = 0$. \end{lemma} \begin{proof} Observe that $\min_{\bar \Omega} h \leq \tilde h \leq \max_{\bar \Omega} h$ on $\Omega$. 1. Property $(i)$ follows from the general theory (see \cite{Lu12}). 2. Property $(ii)$ can be proved using the perturbation method due to J.B. Walsh (see \cite{Wal69}). Let us recall the argument for completeness. We first prove that $\tilde h$ satisfies (\ref{eq:boundaryvaluesh}), meaning that it has boundary values equal to $h$; it then extends as a function on $\bar \Omega$ which is continuous at each point of $\partial \Omega$. Indeed, fix $\varepsilon > 0$ and let $h'$ be a $C^2$ approximating function on $\bar \Omega$ such that $h - \varepsilon \leq h' \leq h $ on $\bar \Omega$. Let $\rho$ be the strongly $m$-subharmonic defining function for $\Omega$. Then there exists a constant $A > 0$ such that $u := A \rho + h'$ is $m$-subharmonic on $\Omega$ and $u \leq h' \leq h$ on $\bar \Omega$. Then, by definition of the envelope, we have $u \leq \tilde h \leq h$ on $\bar \Omega$. Therefore, for any $\zeta \in \partial \Omega$, \begin{eqnarray*} h (\zeta) - \varepsilon \leq h' (\zeta) & = & \lim_{\Omega \ni z \to \zeta } u (z) \\ & \leq & \liminf_{\Omega \ni z \to \zeta } \tilde h (z) \leq \limsup_{\Omega \ni z \to \zeta } \tilde h (z) \leq h (\zeta). \end{eqnarray*} Since $\varepsilon > 0$ is arbitrary, we obtain the identity (\ref{eq:boundaryvaluesh}).
We can then extend $\tilde h$ to $\bar \Omega$ by setting $\tilde h (\zeta) = h (\zeta)$ for $\zeta \in \partial \Omega$. To prove the continuity of $\tilde h$ on $\bar \Omega$, we use the perturbation argument of J.B. Walsh. Fix $\delta > 0$ small enough, $a \in \mathbb{C}^n$ such that $\vert a \vert \leq \delta$ and set $\Omega_a := (- a) + \Omega$. We define the modulus of continuity of $\tilde{h}$ near the boundary as follows: $$ \tilde{\kappa}_{\tilde h} (\delta) := \sup \{ \vert \tilde h(z) - \tilde{h} (\zeta)\vert \, ; \, z \in \Omega, \zeta \in \partial \Omega, \vert z - \zeta\vert \leq \delta \}. $$ Since $\tilde h = h$ on $\partial \Omega$, $h$ is uniformly continuous on $\partial \Omega$ and $\tilde h$ satisfies (\ref{eq:boundaryvaluesh}), we see that $\lim_{\delta \to 0^+} \tilde{\kappa}_{\tilde h} (\delta) = 0$. By definition of $\tilde{\kappa}_{\tilde h}$, for any $z \in \Omega \cap \partial \Omega_a$, we have $$\tilde{ h} (z + a) \leq \tilde h (z) + \tilde{\kappa}_{\tilde{h}} (\delta) \leq \tilde h (z) + \tilde{\kappa}_{\tilde{h}} (\delta) + \kappa_h (\delta), $$ where $ \kappa_h (\delta)$ is the modulus of continuity of $h$ on $\bar{\Omega}$. Therefore by the gluing principle, the function defined by $$ v (z):=\left\{\begin{array}{lcl} \max \{ \tilde h (z) , \tilde h (z + a) - \tilde{\kappa}_{\tilde{h}} (\delta) - \kappa_h (\delta)\} \, &\hbox{ if}\ z \in\Omega \cap \Omega_a \\ \tilde h (z) &\hbox{if}\ z \in\Omega\setminus\Omega_a\\ \end{array}\right. $$ is $m$-subharmonic on $\Omega$ and satisfies $v \leq h$ on $\bar \Omega$. Therefore $v \leq \tilde h$ on $\bar \Omega$, and then $$ \tilde h (z + a) - \tilde{\kappa}_{\tilde{h}} (\delta) - \kappa_h (\delta) \leq \tilde h (z), $$ for any $z \in \Omega \cap \Omega_a$ with $\vert a\vert \leq \delta$. This proves that $\tilde{h}$ is uniformly continuous on $\bar{\Omega}$.
\end{proof} \begin{remark} The proof above does not give any information on the modulus of continuity of $\tilde h$ in terms of the modulus of continuity of $h$. In other words, we do not know if $\tilde{\kappa}_{\tilde h}$ is comparable to $\kappa_h$. However, if $h$ is $C^2$-smooth on $\bar \Omega$, the function $u := A \rho + h$, considered in the proof above with $h'=h$, is $m$-subharmonic on $\Omega$, Lipschitz on $\bar{\Omega}$ and satisfies $u \leq \tilde{h} \leq h$ on $\bar{\Omega}$. This implies that $\tilde{\kappa}_{\tilde{h}} (\delta) \leq \kappa_h (\delta) + \kappa_u (\delta) \leq C \kappa_h (\delta)$, where $C > 0$ is a uniform constant. Therefore the modulus of continuity of $\tilde h$ satisfies the inequality $\kappa_{\tilde h}(\delta) \leq C' \kappa_h(\delta)$, where $C' > 0$ is an absolute constant. This information is not needed here, but it is worth mentioning that this is an interesting open problem related to the regularity of solutions to obstacle problems. We will come back to this in a subsequent work. \end{remark} \subsection{An obstacle problem} \begin{theorem} \label{thm:obstacle} Let $h \in \mathcal C^2 ({\bar{\Omega}})$. Then $\tilde h := P_{m,\Omega} (h) \in \mathcal{SH}_m (\Omega) \cap C^0 (\bar \Omega)$ and its $m$-Hessian measure satisfies the following inequality: \begin{equation} \label{eq:BerIneq1} (dd^c \tilde h)^m \wedge \beta^{n - m} \leq {\bf 1}_{\{ \tilde h = h\}} \sigma_m^+ (h), \end{equation} in the sense of currents on $\Omega$. \end{theorem} Here for a function $h \in \mathcal C^2 ({\bar{\Omega}})$, we set $$ \sigma_m^+ (h) := {\bf 1}_G \, \sigma_m (h), $$ pointwise on $\Omega$, where $G$ is the set of points $z \in \Omega$ such that $dd^c h (z) \in \Theta_m$, i.e. the $(1,1)$-form $dd^c h (z)$ is $m$-positive (see Definition \ref{def:mpositive}). \begin{proof} To prove (\ref{eq:BerIneq1}), we proceed as in \cite{GLZ19}, using an idea which goes back to R. Berman \cite{Ber19}.
Thanks to the property $(ii)$ of Lemma \ref{lem:projection}, it is enough to prove that \begin{equation} \label{eq:BerIneq2} (dd^c \tilde h)^m \wedge \beta^{n - m} \leq \sigma_m^+ (h), \end{equation} in the sense of currents on $\Omega$. We proceed in two steps: 1) Assume first that $\Omega$ is smooth strongly $m$-pseudoconvex and $h \in \mathcal{C}^2 (\bar \Omega)$, and consider the following Dirichlet problem for the complex $m$-Hessian equation depending on the parameter $j \in \mathbb{N}$, \begin{equation} \label{eq:BerEqu} (dd^c u)^m \wedge \beta^{n - m} = e^{j (u-h)} \sigma_m^+ (h), \, \, u = h \, \, \mathrm{on} \, \, \, {\partial \Omega}. \end{equation} By \cite{Lu13}, for each $j \in \mathbb{N}$, there exists a unique continuous solution $u_j \in \mathcal {SH}_m (\Omega) \cap\mathcal C^0 (\bar \Omega)$ to this problem (see also \cite{Ch16}). Our goal is to prove that the sequence $(u_j)_{j\in \mathbb{N}}$ increases to $ \tilde h$ uniformly on $\bar{\Omega}$. We argue as in \cite{GLZ19} with obvious modifications. Recall that $h$ is $C^2$ on $\bar{\Omega}$; then by definition $h$ is a viscosity supersolution to the Dirichlet problem (\ref{eq:BerEqu}). Moreover, by Proposition \ref{prop: viscosity vs potential general case}, $u_j$ is a viscosity subsolution to the Dirichlet problem (\ref{eq:BerEqu}). By the viscosity comparison principle, Theorem \ref{thm: viscosity comparison principle}, we conclude that $u_j \leq h$ in $ \Omega$ since $u_j = h$ on $\partial \Omega$. Therefore the pluripotential comparison principle, Proposition \ref{prop:Comparison Principle}, implies that $(u_j)$ is an increasing sequence, since for $u \leq h$ the right hand side of (\ref{eq:BerEqu}) decreases in $j$. On the other hand, by Theorem \ref{thm:boundedsubsolution} there exists a bounded $m$-subharmonic function $\psi$ on $\Omega$ which is a solution to the complex Hessian equation $$ \sigma_m (\psi) = e^{\psi - h} \sigma_m^+ (h) $$ with $\psi = h$ on $\partial \Omega$.
Moreover for any $j \in \mathbb{N}^*$, one can check that the function defined by the formula $$ \psi_j := (1- 1 \slash j) \tilde h + (1 \slash j) ( \psi - m \log j) $$ is a (pluripotential) subsolution to the equation (\ref{eq:BerEqu}). Indeed, since $\tilde h \leq h$ on $\Omega$, we have $j (\psi_j - h) = (j-1) (\tilde h - h) + (\psi - h) - m \log j \leq \psi - h - m \log j$, while $(dd^c \psi_j)^m \wedge \beta^{n-m} \geq j^{-m} (dd^c \psi)^m \wedge \beta^{n-m} = j^{-m} e^{\psi - h} \sigma_m^+ (h)$. Hence by Proposition \ref{prop:Comparison Principle} we have $\psi_j \leq u_j$ on $\Omega$. Summarizing, we have proved that for any $j \in \mathbb{N}^*$, $\psi_j \leq u_j \leq \tilde h $ on $\Omega$. Therefore $ 0 \leq \tilde h - u_j \leq \tilde h - \psi_j = (1 \slash j) ( \tilde h - \psi + m \log j) $ on $\Omega$ for any $j \in \mathbb{N}^*$. This proves that $u_j$ converges to $\tilde h$ uniformly on $\Omega$. Then since $u_j \leq h$ on $\Omega$, taking the limit as $j \to + \infty$ in (\ref{eq:BerEqu}) we obtain inequality (\ref{eq:BerIneq2}) by the continuity of the Hessian operators for uniform convergence (see \cite{Lu12}). 2) For the general case of a bounded $m$-hyperconvex domain, we approximate $\Omega$ by an increasing sequence $(\Omega_j)_{j \in \mathbb{N}}$ of smooth strongly $m$-pseudoconvex domains such that for any $j \in \mathbb{N}$, $\Omega_{j} \subset \Omega_{j + 1}$ and $\Omega = \cup_{j \in \mathbb{N}} \Omega_j$. Then it is easy to see that the sequence $(P_{m,\Omega_j} h)$ decreases to $P_{m,\Omega} h$ on $\Omega$ (see \cite{GLZ19}). Thus the result follows from the previous case by the continuity of the Hessian operator for monotone sequences. \end{proof} It is worth mentioning that these envelopes have been considered by several authors in the context of compact K\"ahler manifolds. When $h$ is $C^2$ it was proved recently that $P (h)$ is $C^{1,1}$ (see \cite{ChZh17}, \cite{T18}, \cite{Ber19}) and equality holds in (\ref{eq:BerIneq1}), which means that $P(h)$ is a solution to an obstacle problem (see \cite{BD12}). We can address a similar question. \smallskip {\it Question:} Is it true that $\tilde h$ is $C^{1,1}$ locally on $\Omega$ when $h$ is $C^2$ on $\bar \Omega$ ?
Is there equality in (\ref{eq:BerIneq1}) ? \begin{corollary} \label{cor:smoothing} Let $\Omega \Subset \mathbb{C}^n$ be a strongly $m$-pseudoconvex domain and let $u \in \mathcal {SH}_m (\Omega)$ be a negative $m$-subharmonic function. Then there exists a decreasing sequence $(u_j)$ of continuous $m$-subharmonic functions on $\Omega$ with boundary values $0$ which converges pointwise to $u$ on $\Omega$. \end{corollary} \begin{proof} We can assume that $u$ is bounded on $\Omega$ and extend it as an upper semi-continuous function on $\bar \Omega$. Let $(h_j)_{j \in \mathbb{N}}$ be a decreasing sequence of smooth functions in a neighbourhood of $\bar \Omega$ which converges to $u$ in $\bar \Omega$. For each $j \in \mathbb{N}$, consider the $m$-subharmonic envelope $v_j := P_{m,\Omega} (h_j)$ on $\Omega$ and set $u_j := \max \{v_j, j \rho\}$ on $\Omega$, where $\rho$ is a continuous $m$-subharmonic defining function for $\Omega$. Then by Lemma \ref{lem:projection}, $(u_j)$ is a decreasing sequence of continuous $m$-subharmonic functions on $\Omega$ which converges to $u$ on $\Omega$. \end{proof} Applying the smoothing method of Richberg, it is possible to construct a decreasing sequence of smooth $m$-subharmonic functions on $\Omega$ which converges to $u$ in $ \Omega$ (see \cite{P14}). \section{Hessian measures of H\"older continuous potentials} In this section we prove two important results which will be used in the proofs of the main theorems stated in the introduction. \subsection{Hessian mass estimates near the boundary} Here we prove a comparison inequality which seems to be new even in the case of a complex Monge-Amp\`ere measure. \begin{lemma} \label{lem:ComparisonIneq} Let $\Omega \Subset \mathbb{C}^n$ be a bounded strongly $m$-pseudoconvex domain and $\varphi \in \mathcal{SH}_m(\Omega)\cap \mathcal{C}^{\alpha} (\bar{\Omega})$ ($0 < \alpha \leq 1$) with $\varphi \equiv 0$ on $\partial \Omega$.
Then for any Borel set $K \subset \Omega$, we have $$ \int_K (dd^c\varphi)^m\wedge\beta^{n-m} \leq L^m \left[\delta_K (\partial \Omega)\right]^{ m \alpha} \, \mathrm{Cap}_m (K,\Omega), $$ where $$ \delta_K (\partial \Omega) := \sup_{z \in K} \mathrm{dist} (z ; \partial \Omega) $$ and $L > 0$ is the H\"older norm of $\varphi$. \end{lemma} The constant $\delta_K (\partial \Omega)$ plays the role of a Hausdorff distance of $K$ to the boundary, in the sense that $\delta_K (\partial \Omega) \leq \varepsilon $ means that $K$ is contained in the $\varepsilon$-neighbourhood of $\partial \Omega$. The relevant point here is that the estimate takes care of the behaviour at the boundary. It shows in particular that if the volume of the compact set is fixed, the capacity tends to $+ \infty$ when the compact set approaches the boundary at a rate controlled by the Hausdorff distance of the compact set to the boundary. \begin{proof} By inner regularity, we can assume that $K \subset \Omega$ is compact. Since $\varphi$ is H\"older continuous on $\bar \Omega$, we have $\varphi (\zeta) - \varphi (z) \leq L \vert \zeta - z\vert^{\alpha}$ for any $\zeta \in \partial \Omega$ and any $z \in \Omega$. Since $\varphi = 0$ on $\partial \Omega$, it follows that for any $z \in K$, $$ - \varphi (z) \leq L \left[\mathrm{dist} (z,\partial \Omega)\right]^{\alpha} \leq L \left[\delta_K (\partial \Omega)\right]^{\alpha} =:a. $$ Therefore the function $v := a^{-1} \varphi$ belongs to $\mathcal{SH}_m (\Omega)$, satisfies $ v \leq 0$ on $\Omega$ and $v \geq - 1$ on $K$. Fix $\varepsilon >0$ and let $u_K$ be the relative extremal $m$-subharmonic function of $(K,\Omega)$. Then $ K \subset \{ (1 + \varepsilon) u_K^* < v\} \cup \{u_K < u_K^*\}$.
Since the set $\{u_K < u_K^*\}$ has zero $m$-capacity (see \cite{Lu12}), it follows from the comparison principle that for any $\varepsilon > 0$, \begin{eqnarray*} \int_K (dd^c v)^m \wedge \beta^{n - m} &\leq &\int_{\{ (1 + \varepsilon) u_K^* < v\}} (dd^c v)^m \wedge \beta^{n - m} \\ &\leq & (1+ \varepsilon)^m \int_{\{ (1 + \varepsilon) u_K^* < v\}} (dd^c u_K^*)^m \wedge \beta^{n - m} \\ & \leq & (1+ \varepsilon)^m \text{Cap}_m (K,\Omega). \end{eqnarray*} The last inequality follows from (\ref{eq:cap}). Since $(dd^c \varphi)^m \wedge \beta^{n-m} = a^m (dd^c v)^m \wedge \beta^{n-m}$ with $a = L \left[\delta_K (\partial \Omega)\right]^{\alpha}$, the estimate of the Lemma follows by letting $\varepsilon \to 0$. \end{proof} \subsection{H\"older continuity of Hessian measures} In order to prove the H\"older continuous subsolution theorem, we need an additional argument following an idea which goes back to \cite{DDGKPZ15} and was used in a systematic way in \cite{N18a} (see also \cite{KN19}). Given a continuous function $g \in \mathcal{C}^0 (\partial \Omega)$ and a real number $R > 0$, we denote by $\mathcal{E}^g_m(\Omega,R)$ the convex set of bounded $m$-subharmonic functions $v $ on $\Omega$ such that $v = g$ on $\partial \Omega$, normalized by the mass condition $\int_{\Omega} (dd^c v)^m \wedge \beta^{n -m} \leq R$. In order to prove Theorem B, we will need the following lemma. \begin{lemma}\label{lem:ModC} Let $\varphi\in \mathcal{E}^0_m (\Omega)\cap \mathcal{C}^{\alpha} (\overline\Omega)$, with $ 0 < \alpha \leq 1$. Then there exists a constant $C_k = C (k,m,\varphi, \Omega) >0$ such that for any $g \in \mathcal{C}^0 (\partial \Omega)$ and any $u,v\in \mathcal{E}^g_m (\Omega,R)$, we have for $1 \leq k \leq m$ \begin{equation} \label{eq:MocEst2} \int_{\Omega}|u-v| (dd^c\varphi)^k\wedge\beta^{n-k}\leq C_k \, R \, \left[\Vert u-v \Vert_1\right]^{\tilde \alpha_k }, \end{equation} where $\tilde \alpha_k := (\alpha\slash 2)^k \slash m$, provided that $\Vert u-v \Vert_1 \leq 1$.
Moreover if $g \in C^{1,1} (\partial \Omega)$, for any $1 \leq k\leq m$ there exists a constant $C'_k = C' (k,m, \varphi,g,\Omega) > 0$ such that for every $u,v\in \mathcal{E}^g_m (\Omega,R)$, we have \begin{equation} \label{eq:MocEst1} \int_{\Omega}|u-v|(dd^c\varphi)^k\wedge\beta^{n-k}\leq C'_k R \left[\Vert u-v \Vert_1\right]^{\alpha_k}, \end{equation} where $\alpha_k := (\frac{\alpha}{ 2})^k$, provided that $\Vert u-v \Vert_1 \leq 1$. \end{lemma} \begin{proof} Recall the following notation for the complex Hessian measure of $\varphi$: $$ \sigma_k{(\varphi)} := (dd^c\varphi)^k\wedge\beta^{n-k}, \ \ \ 1 \leq k\leq m. $$ Observe that for any $\varepsilon > 0,$ $ u_\varepsilon := \max \{u - \varepsilon, v \}$ is a bounded $m$-subharmonic function with boundary values $g$ and $u_\varepsilon = v$ near the boundary $\partial \Omega$. By the comparison principle, this implies that $u_\varepsilon \in \mathcal{E}^g_m(\Omega,R)$. Therefore, replacing $u$ by $u_\varepsilon$, we can assume that $u \geq v$ on $\Omega$ and $u = v$ near the boundary $\partial \Omega$. The required estimates will follow from this case since $\vert u - v \vert = ( \max\{u,v\} - u) + (\max\{u,v\} - v)$. On the other hand, by approximation on the support $S$ of $u-v$, which is compact, we can assume that $u$ and $ v $ are smooth on a neighbourhood of $S$. We first extend $\varphi$ as a H\"older continuous function on $\mathbb{C}^n$. Indeed, recall that for any $z, \zeta \in \bar{\Omega}$, we have $\varphi (z) \leq \varphi (\zeta) + \kappa \vert z - \zeta\vert^\alpha $. Then it is easy to see that the function \begin{equation} \label{eq:Holderextension} \bar{\varphi} (z) := \sup \{\varphi (\zeta) - \kappa \vert z - \zeta\vert^\alpha ; \zeta \in \bar{\Omega}\}, \, \, z \in \mathbb{C}^n, \end{equation} is H\"older continuous of order $\alpha$ on $\mathbb{C}^n$ and $ \bar{\varphi} = \varphi$ on $\bar{\Omega}$.
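For completeness, let us sketch the elementary verification of these two properties of $\bar{\varphi}$. For any $z, z' \in \mathbb{C}^n$ and any $\zeta \in \bar{\Omega}$, the subadditivity of $t \mapsto t^{\alpha}$ gives $\vert z - \zeta\vert^{\alpha} \leq \vert z' - \zeta\vert^{\alpha} + \vert z - z'\vert^{\alpha}$, hence $$ \varphi (\zeta) - \kappa \vert z' - \zeta\vert^{\alpha} \leq \varphi (\zeta) - \kappa \vert z - \zeta\vert^{\alpha} + \kappa \vert z - z'\vert^{\alpha} \leq \bar{\varphi} (z) + \kappa \vert z - z'\vert^{\alpha}. $$ Taking the supremum over $\zeta \in \bar{\Omega}$ yields $\bar{\varphi} (z') \leq \bar{\varphi} (z) + \kappa \vert z - z'\vert^{\alpha}$, and exchanging the roles of $z$ and $z'$ gives the H\"older estimate with the same constant $\kappa$. Moreover, for $z \in \bar{\Omega}$, the choice $\zeta = z$ gives $\bar{\varphi} (z) \geq \varphi (z)$, while the H\"older bound $\varphi (\zeta) - \kappa \vert z - \zeta\vert^{\alpha} \leq \varphi (z)$, valid for all $\zeta \in \bar{\Omega}$, gives the reverse inequality, so that $\bar{\varphi} = \varphi$ on $\bar{\Omega}$.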
For simplicity, we will denote this extension by $\varphi$. We approximate $\varphi$ by convolution and denote by $\varphi_{\delta}$ ($0 < \delta < \delta_0$) the smooth approximants of $\varphi $ on $\mathbb{C}^n$, defined for $z \in \mathbb{C}^n$ by the formula \begin{equation} \label{eq:regularization2} \varphi_\delta (z) := \int_{\mathbb{C}^n} \varphi (\zeta) \chi_\delta(z - \zeta) d \lambda_{2n} (\zeta), \end{equation} where $(\chi_\delta)_{\delta}$ is a smooth radial kernel approximating the Dirac unit mass at the origin. Then by Lemma \ref{lem:Poisson-Jensen}, for $0 < \delta < \delta_0$, $\varphi_\delta \in \mathcal{SH}_m (\Omega_{\delta})\cap\mathcal{C}^{\infty}(\mathbb C^n)$. To prove the required estimates, we will argue by induction on $0 \leq k \leq m$. Fix $0 \leq k \leq m-1$ and $\delta > 0$ and write $$ \int_{\Omega}(u-v) (dd^c\varphi)^{k+1} \wedge\beta^{n-k-1} = A (\delta) + B (\delta), $$ where $$ A (\delta):=\int_{\Omega}(u-v)dd^c \varphi_{\delta}\wedge(dd^c\varphi)^k\wedge\beta^{n-k-1}, $$ and \begin{eqnarray*} B (\delta) &:=& \int_{\Omega}(u-v) dd^c(\varphi - \varphi_{\delta} - \kappa \delta^\alpha)\wedge(dd^c\varphi)^k\wedge\beta^{n-k-1}, \end{eqnarray*} where we recall that $ \varphi - \kappa \delta^\alpha \leq \varphi_\delta \leq \varphi + \kappa \delta^\alpha$ on $\bar{\Omega}$ by hypothesis. The first term $A (\delta)$ is estimated as follows. Observe that we have $dd^c \varphi_\delta \leq M_1 \kappa \delta^{\alpha -2} \beta$ pointwise on $\Omega$, where $M_1 > 0$ is a uniform bound on the second derivatives of $\chi$. Then since $u \geq v$ we deduce that \begin{equation} \label{eq:estimationA} \vert A (\delta) \vert \leq M_1 \kappa \delta^{\alpha -2} \int_{\Omega} (u-v) (dd^c \varphi)^k \wedge \beta^{n-k}. \end{equation} We now estimate the second term $B (\delta)$. Since $ u - v = 0$ near the boundary $\partial \Omega$ i.e. 
on $\Omega \setminus \Omega'$, where $\Omega' \Subset \Omega$ is an open set, we can integrate by parts to get the following formula $$ B (\delta) = \int_{\Omega'} (\varphi_{\delta} - \varphi + \kappa \delta^\alpha) dd^c (v-u) \wedge(dd^c\varphi)^k\wedge\beta^{n-k-1}, $$ and then since $0 \leq \varphi_{\delta}-\varphi + \kappa \delta^\alpha \leq 2 \kappa \delta^{\alpha}$ on $\Omega$, it follows that $$ \vert B (\delta)\vert \leq 2 \kappa \delta^{\alpha} \int_{\Omega'} dd^c v \wedge(dd^c\varphi)^k\wedge\beta^{n-k-1}. $$ Therefore, we get \begin{equation} \label{eq:estimationII} |B (\delta)| \leq 2 \kappa \delta^{\alpha} \, I'_k(v,\varphi), \end{equation} where $I'_k (v,\varphi) := \int_{\Omega'} dd^c v \wedge(dd^c\varphi)^k\wedge\beta^{n-k-1}$. The problem is to estimate the terms $I'_k (v,\varphi) $ by a uniform constant which does not depend on $\Omega' \Subset \Omega$. We can use the obvious inequality $$ \int_{\Omega'} dd^c v \wedge(dd^c\varphi)^k\wedge\beta^{n-k-1} \leq \int_{\Omega} dd^c v \wedge(dd^c\varphi)^k\wedge\beta^{n-k-1}, $$ and conclude once we know that $ I_k (v,\varphi) := \int_{\Omega} dd^c v \wedge(dd^c\varphi)^k\wedge\beta^{n-k-1}$ is finite, which is the case thanks to Lemma \ref{lem:Cegrell2}. \smallskip Therefore by (\ref{eq:claim}) and (\ref{eq:estimationII}), we get \begin{equation} \label{eq:estimation3} |B (\delta)| \leq \kappa \, d(m,n) \, R \, \delta^{\alpha}. \end{equation} Combining the inequalities (\ref{eq:estimationA}) and (\ref{eq:estimation3}), we obtain for $0 < \delta < \delta_0$, \begin{equation} \label{eq:funestimate} \int_{\Omega}(u-v) \sigma_{k +1}(\varphi)\leq M_1 \frac{\kappa \delta^\alpha}{\delta^2} \int_{\Omega} (u-v) \sigma_k (\varphi) + d(m,n) \, \kappa \, R \delta^{\alpha}. \end{equation} We first prove the second estimate (\ref{eq:MocEst1}), arguing by induction on $k$ for $0 \leq k \leq m$. When $k=0$, the inequality is obviously satisfied with $C_0 = 1$ and $\alpha_0 = 1$.
Assume that the inequality holds for some integer $0 \leq k\leq m-1$, i.e. \begin{equation}\label{eq:Hyprec} \int_{\Omega}( u-v) \sigma_k{(\varphi)}\leq C_{k} \left[\Vert u-v \Vert_1\right]^{\alpha_k}. \end{equation} We will show that there exists $C_{k+1} > 0$ such that $$ \int_{\Omega}(u-v) \sigma_{k+1}(\varphi)\leq C_{k+1} \left[\Vert u-v \Vert_1\right]^{\alpha_{k+1}}. $$ Indeed, (\ref{eq:funestimate}) and (\ref{eq:Hyprec}) yield $$ \int_{\Omega}(u-v) \sigma_{k+1}(\varphi)\leq M_1 C_k \kappa \frac{\delta^{\alpha}}{\delta^2} [\|u-v\|_1]^{\alpha_k} + d(m,n) \kappa R \delta^{\alpha}. $$ We want to optimize the last estimate. Since $ \|u-v\|_1 \leq 1$, we can take $\delta= \delta_0 [\|u-v\|_1]^{\alpha_k\slash 2} < \delta_0$ in the last inequality to obtain \begin{eqnarray} \label{eq:lastestimate} \int_{\Omega}(u-v) \sigma_{k+1}(\varphi) &\leq & (M_1 C_k + d (m,n))\, \kappa \left(\|u-v\|_1^{\alpha_k \slash 2}\right)^{\alpha } \nonumber \\ & \leq & C_{k + 1} R [\|u-v\|_1]^{\alpha_{k+ 1}}, \end{eqnarray} where $\alpha_{k + 1} := \alpha_k (\alpha \slash 2)$. This proves the second estimate (\ref{eq:MocEst1}). \smallskip We now proceed to the proof of the first estimate (\ref{eq:MocEst2}). As we saw before, the main issue is to estimate the integrals $ I'_k (v,\varphi)$ uniformly, which is not possible in general. We will rather consider the following integrals, which behave much better: $$ J_{k} (u,v,\varphi) := \int_{\Omega} (u-v)^m (dd^c \varphi)^{k} \wedge \beta^{n -k }. $$ By the H\"older inequality, we have \begin{equation} \label{eq:Hineq2} \int_\Omega (u-v) (dd^c \varphi)^k \wedge \beta^{n-k} \leq \vert \Omega\vert^{ (m-1) \slash m}\left(J_k (u,v,\varphi)\right)^{1 \slash m}, \end{equation} where $\vert \Omega\vert$ is the volume of $\Omega$. It is then enough to estimate $J_k (u,v,\varphi)$.
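For the reader's convenience, let us record the exponent bookkeeping behind the choice $\delta = \delta_0 [\|u-v\|_1]^{\alpha_k \slash 2}$ made in the optimization step above. With this choice, $$ \delta^{\alpha - 2} \left[\Vert u-v \Vert_1\right]^{\alpha_k} = \delta_0^{\alpha - 2} \left[\Vert u-v \Vert_1\right]^{\alpha_k (\alpha - 2) \slash 2 \, + \, \alpha_k} = \delta_0^{\alpha - 2} \left[\Vert u-v \Vert_1\right]^{\alpha_k \alpha \slash 2}, $$ while $\delta^{\alpha} = \delta_0^{\alpha} \left[\Vert u-v \Vert_1\right]^{\alpha_k \alpha \slash 2}$. Both terms are therefore of order $\left[\Vert u-v \Vert_1\right]^{\alpha_{k+1}}$ with $\alpha_{k+1} = \alpha_k (\alpha \slash 2) = (\alpha \slash 2)^{k+1}$, which explains the exponent appearing in (\ref{eq:lastestimate}); the same bookkeeping will be used again below.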
We will proceed by induction on $k$ ($0 \leq k \leq m$) to prove the following estimate \begin{equation} \label{eq:Jk} J_k (u,v) \leq C_k R (1 + \Vert u - v\Vert_\infty^{m - 1}) \, \|u-v\|_1^{\alpha_k}. \end{equation} If $k = 0$, we have $J_0 (u,v) \leq \Vert u - v\Vert_{\infty}^{m - 1} \Vert u - v\Vert_1$, so the inequality is satisfied with $\alpha_0 = 1$ and $C_0 = 1$. Assume the estimate (\ref{eq:Jk}) is proved for some integer $0 \leq k \leq m-1$. To prove it for the integer $k + 1$, we write as before $$ J_{k+1} (u,v) = A' (\delta) + B' (\delta), $$ where $$ A' (\delta):=\int_{\Omega}(u-v)^m dd^c \varphi_{\delta}\wedge(dd^c\varphi)^{k} \wedge\beta^{n-k-1}, $$ and \begin{eqnarray*} B' (\delta) &:=& \int_{\Omega}(u-v)^m dd^c(\varphi - \varphi_{\delta} - \kappa \delta^\alpha)\wedge(dd^c\varphi)^{k} \wedge\beta^{n-k-1}. \end{eqnarray*} The first term $A' (\delta)$ is estimated as before: \begin{equation} \label{eq:estimationA'} |A' (\delta)|\leq M_1 \frac{\kappa \delta^{\alpha}}{\delta^2} J_k (u,v) \leq M_1 \, C_k \, \kappa R \, \delta^{\alpha - 2} (1 + \Vert u - v\Vert_\infty^{m - 1}) \left[\Vert u-v \Vert_1\right]^{\alpha_k}. \end{equation} We next estimate the second term $B' (\delta)$. Since $ u - v = 0$ near the boundary, we can integrate by parts to get the following formula \begin{equation} \label{eq:estimationB'0} B' (\delta) = \int_{\Omega} (\varphi_{\delta} - \varphi + \kappa \delta^\alpha) \left(- dd^c[ (u-v)^m]\right) \wedge (dd^c\varphi)^k\wedge\beta^{n-k-1}. \end{equation} If $m = 1$, then $k = 0$ and $$- dd^c (u-v) \wedge \beta^{n-1} \leq dd^c v \wedge \beta^{n-1}, $$ in the weak sense on $\Omega$. Since $0 \leq \varphi_{\delta} - \varphi + \kappa \delta^\alpha \leq 2 \kappa \delta^\alpha$, it follows from (\ref{eq:estimationB'0}) that \begin{equation} \label{eq:estimateB'1} \vert B' (\delta) \vert \leq 2 \kappa \delta^\alpha \int_\Omega dd^c v \wedge\beta^{n-1} \leq 2 R \, \kappa \, \delta^\alpha.
\end{equation} If $ m \geq 2$, a simple computation shows that \begin{eqnarray*} - dd^c [(u-v)^m] & =& - m (u-v)^{m -1} dd^c (u - v) \\ &-& m (m-1) (u-v)^{m - 2} d(u-v) \wedge d^c(u-v) \\ &\leq & - m (u-v)^{m -1} dd^c (u- v). \end{eqnarray*} Since $ dd^c u \wedge (dd^c\varphi)^k\wedge\beta^{n-k-1} \geq 0$ weakly on $\Omega$, it follows that $$ - dd^c [(u-v)^m] \wedge (dd^c\varphi)^k\wedge\beta^{n-k-1} \leq m (u-v)^{m -1} dd^c v\wedge (dd^c\varphi)^k\wedge\beta^{n-k-1}, $$ weakly on $\Omega$. Hence, since $\varphi_{\delta} - \varphi + \kappa \delta^\alpha \geq 0$ and $0 \leq \varphi_{\delta}-\varphi + \kappa \delta^\alpha \leq 2 \kappa \delta^{\alpha}$ on $\Omega$, it follows from (\ref{eq:estimationB'0}) that \begin{equation} \label{eq:estimation2} |B' (\delta)| \leq 2 m \, \kappa \, \delta^{\alpha} \int_\Omega (u-v)^{m -1} dd^c v \wedge (dd^c\varphi)^k\wedge\beta^{n-k-1}. \end{equation} Recall that $\beta = dd^c \psi_0$, where $\psi_0 (z) := \vert z\vert^2 -r_0^2$ and $r_0 > 0$ is chosen so that $\psi_0 \leq 0$ on $\Omega$. Then the inequality (\ref{eq:estimation2}) implies that \begin{equation} \label{eq:estimationB'} |B' (\delta)| \leq 2 m \kappa \delta^{\alpha} \int_\Omega (u-v)^{m -1} dd^c v \wedge (dd^c\varphi)^k \wedge (dd^c \psi_0)^{m - k-1} \wedge\beta^{n-m}. \end{equation} Since $v, \varphi, \psi_0 \leq 0$ on $\Omega$, repeating the integration by parts $(m-1)$ times, we get \begin{equation} \label{eq:estimationC'} |B' (\delta)| \leq 2 m! \, \kappa \, \delta^{\alpha} \Vert \varphi\Vert_{\infty}^k \Vert \psi_0\Vert_{\infty}^{m - k-1} \int_\Omega (dd^c v)^m \wedge\beta^{n-m}.
\end{equation} Combining the inequalities (\ref{eq:estimationA'}), (\ref{eq:estimationB'}) and (\ref{eq:estimationC'}), we obtain for $0 < \delta < \delta_0$, $$ \int_{\Omega}(u-v)^m \sigma_{k +1}(\varphi)\leq M_1 \kappa \delta^{\alpha- 2} J_k (u,v)+ d'(m,n) R \delta^{\alpha}, $$ where $d'(m,n) = d'(m,n,\varphi,g) > 0$ is a uniform constant. Applying the induction hypothesis (\ref{eq:Jk}), we get $$ \int_{\Omega}(u-v)^m \sigma_{k +1}(\varphi)\leq M_1 C_k \kappa \delta^{\alpha- 2} (1 + \Vert u - v \Vert_{\infty}^{m -1}) R \, \|u-v\|_1^{\alpha_k}+ d'(m,n) R \delta^{\alpha}. $$ We want to optimize the last estimate. Since $ \|u-v\|_1 \leq 1$, we can take $\delta= \delta_0 [\|u-v\|_1]^{\alpha_k\slash 2} < \delta_0$ in the last inequality to obtain \begin{eqnarray*} \int_{\Omega}(u-v)^m \sigma_{k+1}(\varphi) &\leq & (M_1 C_k \Vert u - v\Vert_{\infty}^{m -1} + d' (m,n))\, \kappa \, R \left(\|u-v\|_1^{\alpha_k \slash 2}\right)^{\alpha } \\ & \leq & C_{k + 1} (1 + \Vert u - v\Vert_{\infty}^{m -1} ) R [\|u-v\|_1]^{\alpha_{k+ 1}}, \end{eqnarray*} where $\alpha_{k + 1} := \alpha_k (\alpha \slash 2)$ and $C_{k+1} := M_1 C_k + d'(m,n)$. This proves the estimate (\ref{eq:Jk}) for $k+1$. Taking into account the inequality (\ref{eq:Hineq2}), we obtain the estimate of the lemma with appropriate constants. This finishes the proof of the first part of the lemma. \end{proof} \smallskip We do not know whether the lemma remains true when the total mass of the Hessian measure $\sigma_m (\varphi) $ on $\Omega$ is infinite. \section{Proofs of the main results} In this section we give the proofs of Theorem A and Theorem B stated in the introduction, using the previous results. \subsection{Proof of Theorem A} For the proof of Theorem A, we will use the same idea as in \cite{KN19}. However, since our measure does not have compact support, we need to use the control on the behaviour of the mass of the $m$-Hessian measure of the subsolution close to the boundary, given by Lemma \ref{lem:ComparisonIneq}.
\begin{proof} We extend $\varphi$ as a H\"older continuous function on the whole of $\mathbb{C}^n$ with the same exponent and denote the extension again by $\varphi$ (see (\ref{eq:Holderextension})). Denote by $\varphi_{\delta}$ ($0 < \delta < \delta_0$) the smooth approximants of $\varphi $ defined by the formula (\ref{eq:regularization2}), so that $\varphi_\delta \in \mathcal{SH}_m (\Omega_{\delta})\cap\mathcal{C}^{\infty}(\mathbb{C}^n)$. We consider the $m$-subharmonic envelope of $\varphi_\delta$ on $\Omega$ defined by the formula $$ \psi_\delta := \sup \{\psi \in \mathcal{SH}_m (\Omega) ; \psi \leq \varphi_\delta \, \, \, \hbox{on} \, \, \, \Omega \}. $$ It follows from Lemma \ref{lem:projection} that $\psi_\delta \in \mathcal{SH}_m (\Omega)$ and $\psi_\delta \leq \varphi_\delta $ on $\Omega$. Fix $0 < \delta < \delta_0$ and a compact set $K \subset \Omega_\delta$ and consider the set $$ E :=\{3 \kappa \delta^\alpha u_K^*+ \psi_\delta<\varphi- 2 \kappa \delta^{\alpha}\} \subset \Omega. $$ Since $\varphi $ is H\"older continuous on $\bar \Omega$, we have $\varphi - \kappa \delta^\alpha\leq \varphi_{\delta} \leq \varphi + \kappa \delta^{\alpha}$ on $\Omega$ and then $\varphi - \kappa \delta^\alpha \leq \psi_\delta \leq \varphi_{\delta} \leq \varphi + \kappa \delta^\alpha$ on $\Omega$. Therefore $u_K^* < - 1 \slash 3$ on $E$, while $u_K^* (z) \to 0$ as $z \to \partial \Omega$, and then $E \Subset \Omega$. By the comparison principle, we conclude that \begin{eqnarray} \label{eq:fundmentalestimate} \int_{E}(dd^c\varphi)^m\wedge\beta^{n-m} & \leq & \int_{E}(dd^c(3 \kappa \delta^\alpha u_K^* + \psi_{\delta}))^m\wedge\beta^{n-m} \nonumber \\ & \leq & 3 \kappa L \delta^\alpha \int_{E}(dd^c(u_K^* + \psi_{\delta}))^m\wedge\beta^{n-m} \\ &+&\int_{E}(dd^c\psi_{\delta})^m\wedge\beta^{n-m}, \nonumber \end{eqnarray} where $L := \max_{0 \leq j \leq m - 1} (3 \kappa \delta_0^\alpha)^j$.
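Let us briefly indicate, for completeness, how the second inequality in (\ref{eq:fundmentalestimate}) is obtained. Setting $t := 3 \kappa \delta^{\alpha}$ and expanding the $m$-fold wedge product, we get $$ (dd^c (t u_K^* + \psi_\delta))^m \wedge \beta^{n-m} = \sum_{j=0}^{m} \binom{m}{j} t^{j} \, (dd^c u_K^*)^{j} \wedge (dd^c \psi_\delta)^{m-j} \wedge \beta^{n-m}. $$ The term with $j = 0$ is $(dd^c \psi_\delta)^m \wedge \beta^{n-m}$, while for $j \geq 1$ we have $t^{j} = t \cdot t^{j-1} \leq 3 \kappa \delta^{\alpha} (3 \kappa \delta_0^{\alpha})^{j-1} \leq 3 \kappa \delta^{\alpha} L$; since all the terms are positive, the sum over $j \geq 1$ is dominated by $3 \kappa \delta^{\alpha} L \, (dd^c (u_K^* + \psi_\delta))^m \wedge \beta^{n-m}$, which yields (\ref{eq:fundmentalestimate}).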
Observe that $-1 + \varphi - \kappa \delta^\alpha \leq u_K^* + \psi_\delta \leq \varphi + \kappa \delta^\alpha$ on $\Omega$, hence $\vert u_K^* + \psi_\delta\vert \leq \sup_{\Omega} \vert \varphi\vert + 1 + \kappa \, \delta_0^\alpha =: M_0$ on $\Omega$. Therefore from inequality (\ref{eq:fundmentalestimate}), it follows that \begin{equation} \label{eq:finalestimate1} \int_{E}(dd^c\varphi)^m\wedge\beta^{n-m} \leq 3 \kappa \delta^\alpha L M_0^m \text{Cap}_m (E,\Omega) + \int_{E}(dd^c\psi_{\delta})^m\wedge\beta^{n-m}. \end{equation} Since $\varphi$ is H\"older continuous on $\bar \Omega$, we have \begin{equation}\label{eq:1} dd^c\varphi_{\delta}\leq\frac{M_1 \kappa \delta^\alpha}{\delta^{2}} \beta, \, \, \, \mathrm{on} \, \, \, \Omega, \end{equation} where $M_1 > 0$ is a uniform constant depending only on $\Omega$. Hence by Theorem \ref{thm:obstacle}, we have \begin{equation}\label{eq:1bis} (dd^c\psi_{\delta})^m\wedge\beta^{n-m} \leq \sigma_m^+ (\varphi_{\delta}) \leq\frac{M_1^m \kappa^m \delta^{m \alpha}}{\delta^{2 m}} \beta^n, \end{equation} in the sense of currents on $ \Omega$. Therefore \begin{eqnarray*} \int_{E}(dd^c\psi_{\delta})^m\wedge\beta^{n-m} &\leq & M_1^{m} \kappa^m \delta^{ m (\alpha -2)} \lambda_{2 n} (E). \end{eqnarray*} From this estimate and the inequalities (\ref{eq:finalestimate1}) and (\ref{eq:1}), we deduce that \begin{equation} \label{eq:finalestimate2} \int_{E}(dd^c\varphi)^m\wedge\beta^{n-m} \leq 3 \kappa \delta^{\alpha} L M_0^m \text{Cap}_m (E,\Omega)+ M_1^{m} \kappa^m \delta^{(\alpha -2)m} \lambda_{2 n} (E). \end{equation} By the volume-capacity comparison inequality (\ref{eq:DK}), it follows that for any fixed $1 < r < \frac{m}{n - m}$, there exists a constant $N (r) > 0$ such that \begin{equation} \label{eq:volumeestimate} \lambda_{2 n}(E) \leq N (r) [\text{Cap}_m(E,\Omega)]^{1 + r}.
\end{equation} Since $ E \subset\{u_K^* < - 1 \slash 3\}$, by the comparison principle we deduce the following inequality \begin{equation}\label{eq:2} \text{Cap}_m (E,\Omega) \leq 3^{m} \text{Cap}_m (K,\Omega). \end{equation} Since $K \setminus \{u_K < u_K^*\} \subset E$ and $K \cap \{u_K < u_K^*\}$ has zero capacity, it follows that $\int_K (dd^c\varphi)^m\wedge\beta^{n-m} \leq \int_E (dd^c\varphi)^m\wedge\beta^{n-m}$. Therefore, if we set $c_m (\cdot) := \text{Cap}_m (\cdot,\Omega)$, we finally deduce from (\ref{eq:finalestimate2}), (\ref{eq:volumeestimate}) and (\ref{eq:2}) that for a fixed $0 < \delta < \delta_0$ and any compact set $K \subset \Omega_\delta$, we have \begin{equation} \int_K (dd^c\varphi)^m\wedge\beta^{n-m} \leq C_0\kappa \delta^{\alpha} c_m (K) + C_1 \kappa^m \delta^{(\alpha -2) m } [c_m(K)]^{1+ r}, \end{equation} where $C_0 := 3^{m + 1} L M_0^m$ and $C_1 := M_1^{m} 3^{m r} N (r)$. By inner regularity of the capacity, we deduce that the previous estimate holds for any Borel subset $B \subset \Omega_\delta$, i.e. \begin{equation} \label{eq:estimate1} \int_B (dd^c\varphi)^m\wedge\beta^{n-m} \leq C_0 \kappa \delta^{\alpha} c_m (B) + C_1 \kappa^{m} \delta^{(\alpha -2) m} [c_m(B)]^{1+r}. \end{equation} Let $K \subset \Omega$ be any fixed compact set and $0 < \delta < \delta_0$. Then $$ \int_K (dd^c\varphi)^m\wedge\beta^{n-m} = \int_{K \cap \Omega_\delta} (dd^c\varphi)^m\wedge\beta^{n-m} + \int_{K \setminus \Omega_\delta} (dd^c\varphi)^m\wedge\beta^{n-m}. $$ We estimate each term separately. By (\ref{eq:estimate1}) the first term is estimated easily: $$ \int_{K \cap \Omega_\delta} (dd^c\varphi)^m\wedge\beta^{n-m} \leq C_0 \kappa \delta^{\alpha} c_m (K) + C_1 \kappa^{m}\delta^{(\alpha - 2) m} [c_m(K)]^{1+ r}. $$ To estimate the second term we apply Lemma \ref{lem:ComparisonIneq} to the Borel set $ B := K \setminus \Omega_\delta$.
Since $\delta_B (\partial \Omega) \leq \delta$ we get $$ \int_{K \setminus \Omega_\delta} (dd^c\varphi)^m\wedge\beta^{n-m} \leq \kappa^m \delta^{m \alpha} c_m (K). $$ Therefore we obtain the following estimate. For any $0 < \delta <\delta_0$ and any compact set $K \subset \Omega$, we have \begin{equation} \label{eq:fundIneq} \int_{K}(dd^c\varphi)^m\wedge\beta^{n-m} \leq C_0 \kappa \delta^{\alpha} c_m (K) + C_1 \kappa^m \delta^{(\alpha -2) m } [c_m(K)]^{1+ r} + \kappa^m \delta^{m \alpha} c_m (K). \end{equation} We want to optimize the right hand side of (\ref{eq:fundIneq}) by taking $\delta := [c_m (K)]^{\frac{r}{(2-\alpha) m + \alpha}}$. Observe that if $ \delta_K(\partial \Omega) \leq [c_m (K)]^{\frac{r}{(2-\alpha) m + \alpha}}$, then by Lemma \ref{lem:ComparisonIneq} we get \begin{eqnarray} \label{eq:finalIneq1} \int_{K}(dd^c\varphi)^m\wedge\beta^{n-m} \leq \kappa^m [c_m (K)]^{1 + \frac{m \alpha r}{(2-\alpha) m + \alpha}}. \end{eqnarray} Now assume that $ [c_m (K)]^{\frac{r}{(2-\alpha) m + \alpha}} < \delta_K (\partial \Omega) \leq \delta_0$. Then we can take $\delta := [c_m (K)]^{\frac{r}{(2-\alpha) m + \alpha}}$ in inequality (\ref{eq:fundIneq}) and get \begin{equation}\label{eq:finalIneq2} \int_{K}(dd^c\varphi)^m\wedge\beta^{n-m} \leq (C_0 \kappa + C_1 \kappa^m + \kappa^m) \, [c_m (K)]^{1 + \frac{\alpha r}{(2-\alpha) m + \alpha}}. \end{equation} Combining inequalities (\ref{eq:finalIneq1} ) and (\ref{eq:finalIneq2}), we obtain the estimate of the theorem with the constant $A$ given by the following formula: \begin{equation} \label{eq:finalConst} A := C_0 \kappa + C_1 \kappa^m + \kappa^m. \end{equation} \end{proof} \subsection{Proof of Theorem B} Now we are ready to prove Theorem B from the introduction using Theorem A and Lemma \ref{lem:ModC}. 
\begin{proof} According to Theorem \ref{thm:boundedsubsolution}, we know that there is a unique function $u\in \mathcal{SH}_m(\Omega)\cap L^{\infty}(\overline{\Omega}) $ such that $$ (dd^c u)^m\wedge\beta^{n-m}=\mu, $$ in the weak sense on $\Omega$ and $ u=g $ on $\partial\Omega$. We want to prove that $u$ is H\"older continuous up to the boundary. For $0 < \delta < \delta_0$, denote as before by $u_{\delta}(z)$ the $\delta$-regularization of $u$. Recall that $u_\delta$ is smooth on $\mathbb{C}^n$ and $m$-subharmonic on $\Omega_\delta$. The first step is to use the H\"older continuity of the boundary datum $g$ to construct global barriers, to show that $u$ is H\"older continuous near the boundary, and to deduce bounded $m$-subharmonic global approximants $\tilde{u}_{\delta}$ close to ${u}_{\delta}$ on $\Omega_\delta$. By \cite{Ch16} there exists a continuous maximal $m$-subharmonic function $w \in \mathcal{SH}_m (\Omega) \cap \mathcal{C}^{\alpha} (\bar \Omega)$ such that $w = g$ on $\partial \Omega$. Then $v : = w + \varphi \in \mathcal{SH}_m (\Omega) \cap \mathcal{C}^{\alpha} (\bar \Omega)$ is a subsolution to the Dirichlet problem (\ref{eq:DirPb}) such that $v = g$ on $\partial \Omega$. Hence $v \leq u \leq w$. This proves that $u$ is H\"older continuous near the boundary $\partial \Omega$ (see Remark \ref{rem:HolderBoundary}). Therefore to prove that $u$ is H\"older continuous on $\bar{\Omega}$ with exponent $\theta \in ]0,1[$, it suffices by Lemma \ref{lem:sup-mean} to prove that there exists a constant $C > 0$ such that for $0 < \delta < \delta_0$, \begin{equation} \label{eq:Holdercontinous} \sup_{\Omega_\delta} (u_\delta - u) \leq C \delta^{\theta}. \end{equation} We claim that there exists a constant $\kappa > 0$ such that for $z\in\partial\Omega_{\delta}$, we have $ u (z) \geq {u}_{\delta}(z) - \kappa \delta^{\alpha} $. Indeed fix $z\in\partial\Omega_{\delta}$. Then there exists $\zeta \in \partial \Omega$ such that $\vert z -\zeta \vert = \delta$.
Since $v \leq u \leq w$ on $\Omega$ and they are equal on $\partial \Omega$, it follows that \begin{eqnarray*} u_{\delta} (z) & \leq & w_\delta (z) \leq w (z) + \kappa_w \delta^\alpha \\ & \leq & w (\zeta) + 2 \kappa_w \delta^\alpha = v (\zeta) + 2 \kappa_w \delta^\alpha \\ & \leq & v (z) +( \kappa_v + 2 \kappa_w) \delta^\alpha \\ & \leq & u (z) + \kappa \delta^\alpha, \end{eqnarray*} where $\kappa := \kappa_v + 2 \kappa_w$ and $ \kappa_v$ (resp. $\kappa_w$) is the H\"older constant of $v$ (resp. $w$). This proves our claim. Therefore the following function $$ \tilde{u}_{\delta}:= \left\{ \begin{array}{lcl} \max\{{u}_{\delta} - \kappa \delta^{\alpha},u \} &\hbox{on} & \Omega_{\delta}, \\ u &\hbox{on} & \Omega\setminus\Omega_{\delta} \end{array}\right. $$ is $m$-subharmonic and bounded on $\Omega$ and satisfies $0 \leq \tilde{u}_{\delta}(z) - u (z) =(u_\delta (z) - u (z) - \kappa \delta^{\alpha})_+ \leq u_\delta (z) - u (z)$ for $z\in \Omega_{\delta}$ and $\tilde{u}_{\delta}(z) - u (z) = 0$ on $\Omega \setminus \Omega_\delta$. Moreover, since $ \tilde{u}_{\delta} \geq u$ on $ \Omega$ and $\tilde{u}_{\delta} = u$ on $\Omega \setminus \Omega_\delta$, Corollary \ref{coro:Comparison Principle} implies that for any $0 < \delta < \delta_0$, we have $$ \int_{\Omega}(dd^c\tilde{u}_{\delta})^m\wedge\beta^{n-m}\leq\int_{\Omega}(dd^c u)^m\wedge\beta^{n-m} \leq \mu (\Omega) < + \infty. $$ The second step is to apply stability estimates. Since $\tilde u_\delta = u$ on $\Omega \setminus \Omega_\delta$, Proposition \ref{prop:stability} implies that for $$ 0 <\gamma < \gamma (m,n,\alpha):= \frac{m \alpha}{ m (m + 1) \alpha + (n-m) [(2 - \alpha)m + \alpha]}, $$ there exists a constant $D_\gamma > 0$ such that any $0 < \delta < \delta_0$, \begin{equation} \label{eq2} \sup_{\Omega}(\tilde{u}_{\delta}-u) \leq D_{\gamma} \left(\int_{\Omega}(\tilde{u}_{\delta}-u) d\mu\right)^{\gamma}. 
\end{equation} On the other hand, since $\mu\leq (dd^c\varphi)^m\wedge\beta^{n-m}$ on $\Omega$, it follows from Theorem A that we can apply Lemma \ref{lem:ModC}, provided we ensure that $\Vert \tilde{u}_{\delta}-u \Vert_{1} := \int_{\Omega}(\tilde{u}_{\delta}-u) d\lambda_{2n} \leq 1$ for $\delta > 0$ small enough. Indeed by the estimate (\ref{eq:PJ2}), there exists a uniform constant $b_n > 0$ such that \begin{equation*} \int_{\Omega_\delta}({u}_{\delta}-u) d \lambda_{2 n} \leq b_n \, \text{osc}_{\Omega} u \leq 1, \end{equation*} for $0 < \delta < \delta_1$, where $\delta_1 > 0$ is small enough. To prove (\ref{eq:Holdercontinous}), we consider the two cases separately. \smallskip 1) Assume first that $g \in \mathcal C^{1,1} (\partial \Omega)$. By the inequality (\ref{eq:PJ1}) we have for $0 < \delta < \delta_0$, \begin{equation} \label{eq:PJ} \int_{\Omega_\delta}({u}_{\delta}-u) d \lambda_{2 n} \leq b_n \delta^2\Vert \Delta u\Vert_{\Omega_\delta}. \end{equation} By the inequality (\ref{eq:claim}) of Lemma \ref{lem:Cegrell2}, we have for $0 < \delta < \delta_0$, \begin{eqnarray} \label{eq:PJ0} \|\Delta u\|_{\Omega_\delta} \leq \int_\Omega dd^c u \wedge \beta^{n - 1} &\leq & d(m,n) \left(M' + \mu (\Omega)^{1 \slash m}\right) . \end{eqnarray} On the other hand, applying the inequality (\ref{eq:MocEst1}) of Lemma \ref{lem:ModC}, we get for $0 < \delta < \delta_1$, \begin{eqnarray} \label{eq:stab} \int_{\Omega}(\tilde{u}_{\delta}-u)d\mu & \leq & C_m \left(\int_{\Omega}(\tilde{u}_{\delta}-u)(z)d\lambda_{2 n} (z)\right)^{\alpha_m} \\ &\leq & C_m\left(\int_{\Omega_{\delta}}({u}_{\delta}(z)-u(z))d\lambda_{2 n} (z)\right)^{\alpha_m}, \nonumber \end{eqnarray} where the last inequality follows from the fact that $0 \leq \tilde{u}_{\delta}-u \leq {u}_{\delta}-u$ on $\Omega_{\delta}$ and $\tilde{u}_{\delta}= u$ on $\Omega \setminus \Omega_\delta$.
Therefore taking into account the inequalities (\ref{eq:PJ}), (\ref{eq:PJ0}) and (\ref{eq:stab}), we get for $0 < \delta < \delta_1$, \begin{equation} \label{eq:estimatemu} \int_{\Omega}(\tilde{u}_{\delta}-u)d\mu \leq C_m' \delta^{2 \alpha_m}. \end{equation} Finally from (\ref{eq2}) and (\ref{eq:estimatemu}) we conclude that for $0 < \delta < \delta_1$, \begin{equation} \label{eq:finalestimate} \begin{array}{lcl} \sup_{\Omega_{\delta}}( {u}_{\delta}-u)&\leq&\sup_{\Omega}(\tilde{u}_{\delta}-u)+ \kappa \delta^{\alpha} \\ &\leq & {C''_m} \delta^{2\gamma\alpha_m} + \kappa \delta^{\alpha}. \end{array} \end{equation} This proves the required estimate (\ref{eq:Holdercontinous}) with $\theta = 2 \gamma \alpha_m < \alpha$. \smallskip 2) In the general case the estimate (\ref{eq:PJ0}) on $\|\Delta u\|_{\Omega_\delta}$ is no longer valid. However by (\ref{eq:PJ}) and (\ref{eq:PJ2}), we get for $0 < \delta < \delta_1$, \begin{equation} \label{eq:6} \int_{\Omega_\delta}({u}_{\delta}-u)d\lambda_{2n} \leq {\tilde C}'_m \delta \leq 1. \end{equation} Therefore taking into account (\ref{eq2}), (\ref{eq:6}) and the inequality (\ref{eq:MocEst2}) of Lemma \ref{lem:ModC}, we get for $0 < \delta < \delta_1$, $$ \sup_{\Omega_{\delta}} ({u}_{\delta}-u) \leq \tilde C_m \, \delta^{\gamma\alpha_m \slash m} + \kappa \delta^{\alpha}, $$ since $\Vert \tilde u_{\delta} - u\Vert_{\infty} \leq \text{osc}_{\Omega} u$. This proves the required estimate (\ref{eq:Holdercontinous}) with $\theta = \gamma\alpha_m \slash m < \alpha$. \end{proof} \smallskip \begin{remark} If we assume that $g \in C^{\theta} (\partial \Omega)$, it is easy to see that the solution in Theorem B is H\"older continuous with any exponent $0 < \alpha' < \min \{\theta \slash 2, \gamma \tilde \alpha_m\}$. \end{remark} \smallskip \noindent{\bf Warning:} This paper presents a corrected version of the one published recently in \cite{BZ20}.
Indeed the proof of \cite[Theorem B]{BZ20} was not complete because of an error in \cite[Lemma 4.2]{BZ20}. An erratum has been submitted to the journal and will hopefully appear soon. \smallskip \noindent{\bf Acknowledgements:} The authors are indebted to Hoang Chinh Lu for his very careful reading of the first version of this paper and for valuable comments that helped to correct some errors and to improve the presentation of the paper. They also would like to thank Vincent Guedj for interesting discussions and Ngoc Cuong Nguyen for useful exchanges about his earlier work on this subject. This project started when the first author was visiting the Institute of Mathematics of Toulouse (IMT) during the spring of $2017$ and $2018$. She would like to thank IMT for the invitation and for providing excellent research conditions.
\section{Discussion} Most social psychology data are confined within the bounds of language, culture, ethnicity, and even environment. Social media sites like Facebook and Twitter, however, are useful venues for behavioral data mining, where people of all languages, ethnicities, and countries freely speak their minds with little concern for the consequences. Social big data is therefore well suited to observing and measuring human behavior in a natural setting, complementing data from social psychology experiments. We have analyzed spontaneous moral conversations on Twitter, where people wrote about myriad topics such as abortion, homosexuality, immigration, religion, and immorality in general. By quantifying moral loadings from tweets, we observed that while the five moral foundations are mutually related, Care is the most dominant foundation and Purity is the most distinctive foundation---at least in online conversations about immorality. These findings need to be further tested with larger corpora from various social media sites. With the MF dictionary, a simple word-counting approach is often used to measure moral charge in texts, but the LSA-based approach used here has some advantages. It captures the meanings of moral words as vectors, making a wide variety of vector-semantics methods available; for example, topic modeling is a promising application of NLP for gaining new insights into moral psychology. Here, we fixed the size of the word-context matrix according to our purpose, but estimating the appropriate size is another important issue that future studies need to address. We are aware that translated versions of the MF dictionary are necessary for cross-cultural comparisons of moral conversations. This is also an important future direction.
Although several issues remain, the current study demonstrates a new possibility for NLP and social big data in moral psychology by quantifying moral diversity in everyday conversations. \section{Introduction} Social media communication is a regular part of our daily life, and a large amount of linguistic information is posted and digitally recorded every day. Our digital behavioral traces allow us to quantify human behavior in a natural setting, complementing experimental data. The availability of social big data for social science research is now widely acknowledged \cite{Lazer:2009ci,Golder:2014jm}, and computational social science provides new insights into human nature in the digital era \cite{Sasahara:2013eu,Takeichi:2015ia,Sasahara2016}. In this paper, we study moral behavior using social big data from a popular microblogging platform, Twitter. One of the most influential theories of moral behavior is Moral Foundations Theory (hereafter, MFT), proposed by Haidt and Joseph \cite{Haidt2012}. MFT explains variations in moral behavior on the basis of the following innate moral foundations: \begin{itemize} \item Care: disliking the pain of others and feeling the need to protect the vulnerable; \item Fairness: doing the right thing or justice based on shared rules; \item Ingroup: being loyal to social groups, including family and nation; \item Authority: respecting and obeying tradition and legitimate authority; \item Purity: feeling an antipathy for disgusting things and contamination. \end{itemize} According to Haidt et al.~\cite{Haidt2008}, Care and Fairness, which focus on individuals' rights and freedom, are grouped together and referred to as `individualizing foundations,' whereas Ingroup, Authority, and Purity are mainly centered on binding people together for the welfare of the community or group and are called `binding foundations.' MFT is especially successful in explaining political ideology and cross-cultural differences in moral behavior.
For example, previous studies have shown that political progressives or `liberals' stress only Care and Fairness in moral judgment, whereas political conservatives stress all moral foundations equally \cite{Haidt2012}. Another study has demonstrated that people in collectivist societies are more sensitive to violations of the community-related moral foundations, whereas people in Western societies are more likely to discriminate between care-related violations and convention-related violations \cite{Haidt2012}. While moral foundations have been confirmed experimentally, more recent studies have used natural language processing (NLP) techniques and social big data \cite{Sagi2014,Sagi:2014un,Dehghani2016}: they measured moral loadings from tweets about a particular real-world event (the 2013 government shutdown in the US), showing that Purity is related to social distance. The findings were then validated experimentally. These studies have provided a new method and empirical evidence for MFT, yet we know little about the roles of and relationships between moral foundations in everyday moral situations. To address these issues, we quantified moral foundations across various topics in everyday conversations on Twitter. More specifically, we examined the relationships among the five moral foundations by measuring moral loadings from a large number of posted messages, or tweets, related to moral-violating situations. Moreover, we analyzed tweets about moral topics such as abortion, homosexuality, immigration, and religion for insights into the roles of moral foundations across various topics. \section*{Acknowledgment} This work was supported by JSPS KAKENHI grant no.~15H03446. We thank M. Karasawa for discussions.
\bibliographystyle{abbrv} \section{Method} \subsection{Data collection} We collected tweets related to moral concerns---abortion (`abortion'), homosexuality (`homosexuality' OR `homosexual'), immigration (`immigration' OR `immigrant'), religion (`religion' OR `religious'), and immorality (`immorality' OR `immoral')---posted between March 1 and April 24, 2016, using the Twitter Search API (https://dev.twitter.com/docs/api/). The search queries, written in brackets, were selected to observe daily conversations on moral topics. Only English tweets were used for our text analysis. The sizes of the datasets are 1,516,119 tweets for abortion, 456,674 for homosexuality, 2,102,886 for immigration, 4,628,102 for religion, and 217,975 for immorality. The topic datasets (i.e., tweets with `abortion', `homosexuality', `immigration', or `religion') were used only to identify keywords and context words associated with these topics; all other analyses were performed on the immorality dataset (i.e., tweets with `immorality'). \subsection{Data pre-processing} Each tweet consists of multiple fields such as text, retweeted status, language, and other meta-data. We extracted the `text' field, and the `retweeted\_status.text' field if the tweet had been re-tweeted. To clean the texts, we removed stop words (e.g., `a', `an', `the') as defined in the NLTK library (http://www.nltk.org), URLs, screen names (e.g., @BarackObama), special characters (e.g., ! and \$), numbers, leading and trailing white-spaces, and short words of length less than three (e.g., th, gf) from tweets. After that, the hashtag symbol was removed from tweets (e.g., `\#harm' was replaced with `harm'). Furthermore, the words used in the queries for the Twitter Search API were removed, because we are interested in popular words that characterize moral topics other than those used in the queries.
For example, in the case of the immorality dataset, `immoral' and `immorality' were removed and thus not considered as keywords or context words in subsequent analyses. Further, we removed duplicate tweets from our dataset. Finally, we tokenized tweet texts by splitting on white-spaces. Consider the tweet ``@CharlesMBlow 50\% marginal taxrates aren't immoral. Letting the majority of public school kids live in poverty is. https://t.co/grEgeu9lPj''; following the steps stated above, we are left with `marginal', `taxrates', `letting', `majority', `public', `school', `kids', `live', `poverty.' This pre-processing was applied to the topic datasets and the immorality dataset, after which we refer to them as the topic corpora and the immorality corpus, respectively. \subsection{Quantification of moral foundations} To quantify the moral foundations from the Twitter corpora, we used a method proposed by Dehghani et al.~\cite{Dehghani2016} with several modifications. Simply speaking, the method is based on latent semantic analysis (LSA)~\cite{Deerwester1990, Landauer1997}: using a bag-of-words model, a corpus is represented by a word-context matrix, and by reducing its dimensionality we obtain low-dimensional word vectors in which words with similar meanings are represented by similar vectors. With this method, we can measure moral foundations in everyday tweets by comparing the words in the moral foundations dictionary (described below) with the words in the Twitter corpora. The details are described below. \subsubsection{Moral foundations dictionary} The moral foundations dictionary (hereafter, the MF dictionary) was created by Graham and Haidt~\cite{Graham2009}. It is available online at http://moralfoundations.org. The MF dictionary lists the words and word stems associated with the five moral foundations---Care (called `Harm' in the MF dictionary), Fairness, Ingroup, Authority, and Purity---along with general words associated with morality and immorality.
These are further divided into two categories, `virtue' and `vice.' Virtue words are foundation-supporting words (e.g., safe* and shield for Care virtue), whereas vice words are foundation-violating words (e.g., kill and ravage for Care vice). To limit our analysis to moral-violating situations, we used the 149 vice words in the MF dictionary. \subsubsection{Selection of keywords and context words} To construct the word-context matrix used in the succeeding analysis, we selected keywords and context words based on tf-idf scores from the topic corpora and the immorality corpus (Fig.~\ref{fig:selection}). From each of the Twitter corpora, we created a word-tweet matrix $X$, in which each row denotes a word $w$ and each column denotes a tweet $t$, and the element $X_{ij}$ denotes the frequency of word $w_i$ in tweet $t_j$. Then, this matrix was converted to a tf-idf weighted matrix $Y$ by the following equation: \begin{equation} \label{eq:tfidf} tf{\mathchar`-}idf(w_i,t_j) = tf(w_i ,t_j)(\log(M+1) - \log(df(w_i))), \end{equation} where $tf(w_i , t_j)$ is the frequency of word $w_i$ in tweet $t_j$, $M$ is the total number of tweets, and $df(w_i)$ is the number of tweets containing word $w_i$. The overlap score of a word $w_i$ in $Y$ is computed as a measure of word importance~\cite{Manning:2008:IIR:1394399}: \begin{equation} \label{eq:score} Score(w_i) = \sum_{j=1}^{M} tf{\mathchar`-}idf(w_i, t_j). \end{equation} The words were ranked in decreasing order of score according to (\ref{eq:score}), and the top $N_1$ words were selected as keywords and the top $N_2$ words as context words for the word-context matrix used in the subsequent analysis. Following the settings of Dehghani et al.~\cite{Dehghani2016}, we set $N_1=2000$ and $N_2=20000$.
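As a concrete illustration, the tf-idf weighting of Eq.~(\ref{eq:tfidf}) and the overlap score of Eq.~(\ref{eq:score}) can be sketched in a few lines of Python; the toy tweets below are hypothetical stand-ins for the pre-processed corpora, and the small $N$ stands in for the paper's $N_1 = 2000$ and $N_2 = 20000$.

```python
import math

# Toy corpus of pre-processed tweets (hypothetical tokens, not the real data).
tweets = [
    ["war", "kills", "innocent", "people"],
    ["war", "unjust", "immoral"],
    ["sin", "disgust", "god"],
]

M = len(tweets)
vocab = sorted({w for t in tweets for w in t})
df = {w: sum(w in t for t in tweets) for w in vocab}  # document frequency

def tf_idf(word, tweet):
    # Eq. (tfidf): tf-idf(w, t) = tf(w, t) * (log(M + 1) - log(df(w)))
    return tweet.count(word) * (math.log(M + 1) - math.log(df[word]))

# Eq. (score): overlap score of a word = sum of its tf-idf values over all tweets.
score = {w: sum(tf_idf(w, t) for t in tweets) for w in vocab}

# Keep the top-N words by score (N_1 keywords, N_2 context words in the paper).
N = 5
top_words = sorted(vocab, key=score.get, reverse=True)[:N]
```

In the actual pipeline this ranking is computed once per corpus, yielding the keyword and context-word lists used to build the word-context matrix.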
\begin{figure}[!t] \centering \includegraphics[width=\textwidth]{SelectWords} \caption{Selection of keywords and context words} \label{fig:selection} \end{figure} \subsubsection{Word-context matrix and singular value decomposition} Using keywords as rows and context words as columns, we created a word-context matrix $C$, in which the element $C_{ij}$ represents the number of co-occurrences of keyword $w_i$ and context word $w_j$. In contrast to Dehghani et al.~\cite{Dehghani2016}, we further converted it to a positive pointwise mutual information (PPMI) based matrix so as not to assign higher weights to popular general words that are irrelevant to moral issues \cite{Jurafsky2014}: \begin{equation} \label{eq:PPMI} PPMI(v_i, v_j) = \max(\log_2 (P(v_i, v_j) / P(v_i)P(v_j)) ,0), \end{equation} where $v_i$ and $v_j$ are the words in $C$, $P(v_i)$ is the occurrence probability of word $v_i$, and $P(v_i, v_j)$ is the joint occurrence probability of $v_i$ and $v_j$. Using the PPMI-based word-context matrix $C$, we applied singular value decomposition (SVD) to obtain a lower-dimensional representation of the keywords: $C = U\Sigma V^*$, where $U$ is the left singular matrix, $\Sigma$ is a diagonal matrix of singular values, and $V^*$ is the right singular matrix. The top $k$ dimensions (in our case $k = 100$) of the $U$ matrix are retained, and each row of this reduced matrix represents a word in $k$ dimensions. The matrices $\Sigma$ and $V^*$ are discarded. In this way, SVD converts the high-dimensional, sparse word-context matrix into a lower-dimensional, real-valued matrix that represents the semantic relationships between words~\cite{Jurafsky2014}. \subsubsection{Construction of context vectors} The resulting word vector space is linear. Therefore, it is possible to approximate a text by adding the corresponding word vectors~\cite{Dehghani2016}. In doing this, we constructed tweet context vectors, topic context vectors, and moral foundation (MF) context vectors.
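A minimal NumPy sketch of the PPMI conversion (Eq.~(\ref{eq:PPMI})) and the truncated SVD is given below; the $3 \times 3$ co-occurrence counts are hypothetical, whereas the paper's matrix is $2000 \times 20000$ with $k = 100$.

```python
import numpy as np

# Hypothetical co-occurrence counts C (keywords x context words).
C = np.array([[4.0, 0.0, 1.0],
              [2.0, 1.0, 0.0],
              [0.0, 3.0, 2.0]])

total = C.sum()
p_ij = C / total                        # joint probabilities P(v_i, v_j)
p_i = p_ij.sum(axis=1, keepdims=True)   # keyword marginals P(v_i)
p_j = p_ij.sum(axis=0, keepdims=True)   # context-word marginals P(v_j)

# Eq. (PPMI): PPMI = max(log2(P(v_i, v_j) / (P(v_i) P(v_j))), 0)
with np.errstate(divide="ignore"):
    pmi = np.log2(p_ij / (p_i * p_j))
ppmi = np.where((p_ij > 0) & (pmi > 0), pmi, 0.0)

# Truncated SVD: C = U Sigma V*; keep the top-k columns of U as word vectors.
U, S, Vt = np.linalg.svd(ppmi, full_matrices=False)
k = 2                                   # the paper retains k = 100
word_vectors = U[:, :k]                 # one k-dimensional row per keyword
```

Each row of `word_vectors` is the low-dimensional embedding of one keyword, from which the context vectors below are built.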
\begin{figure}[hb] \centering \includegraphics[width=3in]{WordVectorAddition} \caption{Example of the construction of a tweet context vector} \label{fig:CV} \end{figure} Fig.~\ref{fig:CV} shows an example of how to construct a tweet context vector. Given the tweet `Sin is disgusting to god,' the tweet context vector is obtained by adding the vectors of the corresponding words `sin', `disgust', and `god'. Note that the stop words `is' and `to' were removed during pre-processing. Similarly, we constructed MF context vectors by adding the vectors corresponding to the MF dictionary's words present in the immorality corpus, and topic context vectors by adding the vectors corresponding to topic-specific words. As mentioned earlier, we used the topic corpora to select keywords describing a specific topic (e.g., the topic `religion' may be described by various words such as `God', `church', `religious', `Islam', `Christianity', `Hindu'). For comparison, we created topic context vectors using 10 keywords and 100 keywords, respectively. \subsubsection{Measurement of moral loadings} \begin{figure}[!t] \centering \includegraphics[width=\textwidth]{MoralLoadings} \caption{Measurement of moral loadings for topics} \label{fig:moralloadings} \end{figure} With the three kinds of context vectors described above, we can quantify moral foundations from texts by measuring moral loadings \cite{Dehghani2016}, defined as similarity to the MF context vectors. For example, the moral loading of a tweet is measured by the cosine similarity between the tweet context vector and an MF context vector. A cosine value of 1 signifies synonymy of expressions, while a cosine value of 0 indicates that they are semantically unrelated.
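The two steps above---context vectors by word-vector addition, and moral loadings by cosine similarity---can be sketched as follows; the 3-d vectors are hypothetical stand-ins for rows of the reduced matrix $U$, and the toy MF vectors sum only two words each rather than the full dictionary.

```python
import numpy as np

# Hypothetical 3-d word vectors standing in for rows of the reduced U matrix.
vec = {
    "sin":     np.array([0.1, 0.8, 0.2]),
    "disgust": np.array([0.0, 0.9, 0.1]),
    "god":     np.array([0.2, 0.7, 0.3]),
    "kill":    np.array([0.9, 0.1, 0.0]),
}

def context_vector(words):
    # A context vector is the sum of its constituent word vectors.
    return np.sum([vec[w] for w in words], axis=0)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Tweet "Sin is disgusting to god." -> tokens left after pre-processing.
tweet_cv = context_vector(["sin", "disgust", "god"])

# Toy MF context vectors (the real ones sum all dictionary words in the corpus).
purity_cv = context_vector(["sin", "disgust"])
care_cv = vec["kill"]

# Moral loading = cosine similarity between tweet and MF context vectors.
loading = {"Purity": cosine(tweet_cv, purity_cv), "Care": cosine(tweet_cv, care_cv)}
```

On these toy vectors the tweet loads far more strongly on Purity than on Care, mirroring the qualitative behavior described in the text.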
In this manner, if the tweet context vectors (TV) are represented by $\langle TV_{1}, TV_2, TV_3,...,TV_M \rangle$ and the MF context vectors (MV) by $\langle MV_{\rm Care}, MV_{\rm Fairness}, MV_{\rm Ingroup}, MV_{\rm Authority}, MV_{\rm Purity}\rangle$, then the moral loadings are summarized in an $M \times 5$ matrix whose elements represent the similarity between a tweet and each moral foundation. Similarly, we can compute moral loadings for topics (i.e., abortion, homosexuality, immigration, religion), which results in a 4 (number of topics) $\times$ 5 (number of moral foundations) matrix representing the similarity between topics and moral foundations. This procedure is illustrated in Fig.~\ref{fig:moralloadings}. \section{Results} \subsection{Immorality in everyday tweets} We analyzed the immorality corpus in terms of moral-violating situations and found that 81.1\% of the MF dictionary's vice words were present in the corpus. Fig.~\ref{fig:wordcloud} shows examples of the MF dictionary's vice words present in the corpus, in which the size of a word is proportional to its occurrence frequency and colors represent different moral foundations. For example, words such as `war', `killing', `violen*', and `attack' are the most frequently occurring words from the Care foundation, and words such as `sin', `sick', `disgust', `dirt', and `adulter*' were present in the Purity foundation. Note that MF dictionary words such as `spurn', `favoritism', `jilt*', `obstruct', and `blemish' are not shown in this figure because either they did not pass the keyword selection described before or they were not present in our corpus. It is worth noticing that `illegal' is the most frequent word across the five foundations. This suggests that issues of legality are among the most salient immoral concerns in everyday moral situations. Table \ref{moralLoadings} shows examples of the tweets with the highest moral loadings for each moral foundation. Tweet \#1 has the highest similarity with the MF context vector for Care.
This could be because of the presence of the word `kills' in the tweet. Tweet \#4 is an interesting example, where the highest correlation of 0.553 is obtained with Authority, the second highest of 0.348 with Ingroup, and the third highest of 0.346 with Fairness. If we look at the MF dictionary, the word `treasonous' belongs to both Ingroup and Authority, and the word `illegal' belongs to Authority. Thus, this tweet has a higher correlation with Authority than with Ingroup. This tweet also has a high correlation with Fairness even though it does not include any word from the Fairness category in the MF dictionary. This is because the tweet contains words such as `patriotic' (0.33) and `government' (0.23) that are highly correlated with Fairness in terms of cosine similarity. This is an example case where a standard word-counting approach~\cite{Graham2009,Clifford2013} may fail, because the MF dictionary includes only a limited number of words (e.g., 149 words in the vice category), and other everyday moral words are not considered. In contrast, the matrix $U$ resulting from the moral loading method provides an extended version of the MF dictionary, which we will detail later. Hence, it may be applicable to a wider class of texts. \begin{figure}[!th] \centering \includegraphics[width=3.5in]{MoralWordCloud} \caption{Word cloud of the MF dictionary's vice words present in the immorality corpus} \label{fig:wordcloud} \end{figure} \begin{table*} \caption{Tweets showing the highest moral loadings with each foundation} \label{moralLoadings} \centering \begin{tabularx}{\linewidth}{|c|X|c|c|c|c|c|}\hline \# & Tweet & Care & Fairness & Ingroup & Authority & Purity \\\hline 1 & @harikriss it should it's immoral and kills innocent people & \textbf{0.772} & -0.016 & 0.199 & 0.063 & 0.113 \\\hline 2 & @goat777face @ColinUlster96 @dkm49321 Yeah, not being to discriminate is so unbelievably immoral and unjust.
& 0.184 & \textbf{0.712} & 0.177 & 0.311 & 0.006 \\\hline 3 & @DanWosHere @malcolmtyson @JulianBurnside Foul immoral traitors will go to any lengths to defend the enemy they've allied with. & 0.225 & 0.189 & \textbf{0.544} & 0.336 & 0.186 \\\hline 4 & @Scribbles646 @Snowden So, we agree arming Al Qaeda is treasonous. Exposing the illegal/immoral actions of a government is patriotic. & 0.218 & 0.346 & 0.348 & \textbf{0.553} & 0.007 \\\hline 5 & \#PresstitutesDay Sick of these indecent, perverted, shameless, wicked, sinful, immoral, lewd, self-indulgent anti nationals. & 0.134 & 0.249 & 0.238 & 0.044 & \textbf{0.684} \\\hline \end{tabularx} \end{table*} To examine which moral foundations are actually used, as well as how often they are used in Twitter conversations, we calculated moral loadings for each tweet and classified each tweet into the moral foundation with the maximum moral loading value, i.e., the tweet was assigned the dominant moral foundation with which it had the highest similarity. For example, tweet \#1 in Table \ref{moralLoadings} was assigned to Care. We then calculated the total number of tweets in each moral foundation. This result is summarized in Table \ref{numberTweets}. In the immorality corpus, we observed that the maximum number of tweets (21135) belonged to Care, and the minimum number of tweets (4932) belonged to Authority. This suggests that Care is the most dominant foundation that people are knowingly or unknowingly concerned with when discussing immorality, and that people pay less attention to Authority violations in everyday moral situations.
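The classification step just described---assigning each tweet to the foundation with the maximum moral loading---can be sketched as follows, using loading rows taken from Table \ref{moralLoadings}:

```python
# Moral-loading rows (tweet x five foundations) taken from Table 1.
foundations = ["Care", "Fairness", "Ingroup", "Authority", "Purity"]
loadings = [
    [0.772, -0.016, 0.199, 0.063, 0.113],  # tweet #1
    [0.184, 0.712, 0.177, 0.311, 0.006],   # tweet #2
    [0.218, 0.346, 0.348, 0.553, 0.007],   # tweet #4
    [0.134, 0.249, 0.238, 0.044, 0.684],   # tweet #5
]

def dominant_foundation(row):
    # Assign the foundation with the highest moral loading.
    return foundations[max(range(len(row)), key=row.__getitem__)]

# Count tweets per dominant foundation (as in Table 2).
counts = {}
for row in loadings:
    f = dominant_foundation(row)
    counts[f] = counts.get(f, 0) + 1
```

Applied to the full immorality corpus, this argmax rule produces the per-foundation tweet counts reported in Table \ref{numberTweets}.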
\begin{table} \caption{Number of tweets present in each moral foundation} \label{numberTweets} \centering \begin{tabular}{|c|c|}\hline Foundation & Number of tweets\\\hline Care & 21135\\\hline Fairness & 15731\\\hline Ingroup & 6665\\\hline Authority & 4932 \\\hline Purity & 14587 \\\hline \end{tabular} \end{table} Table \ref{cosMF} shows the relationships between the five moral foundations in the context of moral violating situations. Here, the cosine similarity should be close to zero if the two foundations are independent or orthogonal to each other. We do observe a high correlation between Ingroup and Authority (0.598), which seems to be natural because both belong to the binding foundations~\cite{Haidt2008}. We expected the correlation between Care and Fairness to be higher than correlation between Care and Ingroup because both are individualizing moral foundations. However, this was not observed. Table \ref{cosMF} also shows that Purity is one of the foundations that does not have compelling correlations with either individualizing or binding foundations. \begin{table} \caption{Cosine similarities between MF context vectors} \label{cosMF} \centering \begin{tabular}{|c|c|c|c|c|c|}\hline Foundation & Care & Fairness & Ingroup & Authority & Purity \\ \hline Care & - & 0.113 & \textbf{0.394} & 0.133 & 0.229 \\ \hline Fairness & - & - & 0.147 & 0.223 & 0.14 \\ \hline Ingroup & - & - & - & \textbf{0.598} & 0.239 \\ \hline Authority & - & - & - & - & 0.081 \\ \hline Purity & - & - & - & - & - \\ \hline \end{tabular} \end{table} \subsection{Extended MF dictionary} Using the immorality corpus, we extended the MF dictionary by adding the semantically related words using the resulting word-context matrix. That is, we again calculated cosine similarities between five MF context vectors and word vectors and then listed the top 100 most similar words for each foundation. In this way, the extended MF dictionary had 500 total words. 
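The dictionary-extension step can be sketched as a nearest-neighbor search: rank word vectors by cosine similarity to each MF context vector and keep the top matches. The vectors below are hypothetical, and the top 2 stands in for the paper's top 100 per foundation.

```python
import numpy as np

# Hypothetical embeddings: one MF context vector and candidate word vectors.
purity_cv = np.array([0.1, 0.9, 0.2])
words = {
    "lust":  np.array([0.2, 0.8, 0.1]),
    "devil": np.array([0.1, 0.7, 0.3]),
    "tax":   np.array([0.9, 0.0, 0.1]),
}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Rank all corpus words by similarity to the foundation vector.
ranked = sorted(words, key=lambda w: cosine(purity_cv, words[w]), reverse=True)
extended_purity = ranked[:2]  # top-N most similar words join the foundation
```

Repeating this for all five MF context vectors with the top 100 neighbors each yields the 500-word extended dictionary described above.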
We compared the original and extended dictionaries and found several similarities and dissimilarities. While most words are commonly present in the same foundation category (e.g., `kill', `kills', `attack*' are involved in Care), others appear in different foundation categories (e.g., `insurgent' is in Authority in the original dictionary but appears in Care in the extended dictionary). Some words appear in more than one foundation in the extended dictionary (e.g., `terroris*', which is present in Ingroup in the original dictionary, appears in both Care and Ingroup). Importantly, words not present in the original dictionary have been added to the extended dictionary (e.g., `bombing', `isis', `torture', `murder', `nazis', `stereotypes' in Care, `racist', `elitists', `racisim' in Ingroup, and `lust', `flesh', `devil' in Purity). Furthermore, some morally neutral words are present, e.g., `Kashmir' (a region in India that in recent years has seen political and social instability leading to violent clashes and loss of life and property) appears in Care. The presence of words such as `homosexual' and `lesbians' along with other words of Purity (e.g., `disgust', `sinful', `sin') reflects the attitude of people towards these topics, similar to `abortion' in Care. In this way, the extended MF dictionary reflects actual word usage in everyday conversations. To further examine how these foundations relate to each other, we mapped the extended dictionary's word vectors along with the moral foundation vectors onto a 2D plane using principal component analysis (PCA)~\cite{Jolliffe2002}. Fig.~\ref{fig:PCAplot} shows that, except for Purity, the words belonging to the other four foundations overlap with each other along the PC1 axis. Thus, the PC1 axis differentiates Purity from the non-Purity foundations, which can be explained in terms of `person-based attributes' vs.\ `situation-based attributes'~\cite{Chakroff:2015if}.
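The 2D map in Fig.~\ref{fig:PCAplot} is a standard PCA projection onto the first two principal components. A minimal SVD-based sketch of such a projection (an illustration, not the authors' implementation):

```python
import numpy as np

def pca_2d(X):
    # Project the row vectors of X onto the first two principal
    # components (PC1 and PC2), as used for the 2D word map.
    Xc = X - X.mean(axis=0)                      # center the data
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:2].T                         # (n_points, 2) coordinates
```

Applying this to the stacked word vectors and the five MF context vectors gives the coordinates plotted along PC1 and PC2.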
The other four foundations are differentiated along the PC2 axis, although with some overlaps. Care and Fairness are placed at the extreme ends of the PC2 axis, indicating that these two foundations are dissimilar to each other, in line with the result of the previous section. This indicates that Care and Fairness are both individualizing tendencies but function in different manners. Overall, the two axes differentiate Purity, Fairness, and Care well from each other, but there is a significant overlap between Authority and Ingroup. Ingroup also overlaps with Care, and Authority overlaps with Fairness to some extent. These results imply that the five moral foundations are not mutually exclusive---at least in everyday conversations on immorality. \begin{figure}[th] \centering \includegraphics[width=4in]{PCA} \caption{PCA plot of the extended MF dictionary's words along with five moral context vectors} \label{fig:PCAplot} \end{figure} \subsection{Roles of moral foundations in various topics} To investigate the roles of moral foundations in different topics, we measured the moral loadings between the topics and the moral foundations. Table~\ref{topic10} shows the result when the top 10 keywords according to score (eq. (\ref{eq:score})) were used for the topic context vectors. We see that the topics `abortion’ and `religion’ are correlated with Care, whereas the topics `immigration’ and `homosexuality’ have the highest similarities with Ingroup and Purity, respectively. The similarity of homosexuality with Purity can be explained by the finding of Pizarro et al.~\cite{Pizarro2011} that purity violations evoke feelings of disgust; disgust is also positively correlated with negative attitudes towards homosexuals~\cite{Terrizzi2010,Smith2011}. Graham et al.~\cite{Graham2012} mentioned that immigrants are a trigger for Purity violations, but our results show that immigration correlates most highly with Ingroup violations.
The differences between the moral concerns of Republicans and Democrats about `abortion’ were discussed by Sagi et al.~\cite{Sagi2014}. Their results showed that Democrats were mostly concerned with Fairness, whereas Republicans were concerned with the Purity aspects of abortion. In our case, abortion has the highest correlation with Care. When we used the top 100 keywords according to score (eq. (\ref{eq:score})) for constructing topic context vectors from the topic corpora, all topics were maximally correlated with Care (Table~\ref{topic100}). This result suggests that although the most popular keywords (e.g., the top 10 words) are related to the corresponding topics, many words ranked below the top 10 were related to Care. This is partly supported by the fact that the largest number of tweets in our dataset was related to Care, as shown in Table~\ref{numberTweets}. \begin{table}[!th] \caption{Moral loadings for topics (based on top 10 keywords)} \label{topic10} \centering \begin{tabular}{c} \includegraphics[width=\linewidth, clip=true, trim=1.75cm 25cm 6.5cm 1.85cm]{Table4} \end{tabular} \end{table} \begin{table}[ht] \caption{Moral loadings for topics (based on top 100 keywords)} \label{topic100} \centering \begin{tabular}{c} \includegraphics[width=\linewidth, clip=true, trim=1.75cm 25cm 6.6cm 1.85cm]{Table5} \end{tabular} \end{table}
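The moral loadings in Tables~\ref{topic10} and \ref{topic100} compare a topic context vector with each MF context vector. A minimal sketch, assuming for illustration that the topic context vector is the mean of its top-ranked keyword vectors (the aggregation used in the paper may differ):

```python
import numpy as np

def topic_moral_loadings(keyword_vecs, foundation_vecs):
    # Build a topic context vector from the vectors of the topic's
    # top-ranked keywords (here: their mean, an assumption for
    # illustration) and compute its moral loading (cosine similarity)
    # against each foundation context vector.
    topic_vec = np.mean(keyword_vecs, axis=0)
    def cos(u, v):
        return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))
    return {name: cos(topic_vec, vec)
            for name, vec in foundation_vecs.items()}
```

Changing the keyword cutoff from the top 10 to the top 100 changes `keyword_vecs`, and hence the topic vector, which is how the shift toward Care in Table~\ref{topic100} arises.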
\section{Introduction}\label{s.Introduction} \subsection{Main result} In this paper we study the existence of non-trivial wandering domains in nonhyperbolic dynamics. Here a \emph{non-trivial wandering domain} for a given map $f$ on a manifold $M$ means a non-empty connected open set $D\subset M$ which satisfies the following conditions: \begin{itemize} \item $f^{i}(D)\cap f^{j}(D)= \emptyset$ for every $i, j\geq 0$ with $i\neq j$; \item the union of the $\omega$-limit sets of points in $D$ for $f$, denoted by $\omega(D, f)$, is not equal to a single periodic orbit. \end{itemize} See \cite[p.\ 36]{dMvS93} for the original definition in the one-dimensional case. A wandering domain $D$ is called \emph{contracting} if the diameter of $f^{n}(D)$ converges to zero as $n\to \infty$. In the early 20th century, Bohl \cite{Bo16} and Denjoy \cite{D32} constructed examples of $C^1$ diffeomorphisms of the circle which have contracting wandering domains in which the union of the $\omega$-limit sets of points is a Cantor set. Following these results, similar phenomena were observed in higher-dimensional examples, see \cite{Kn81, Ha89, Mc93, BGLT94, NS96, KM10}. On the other hand, the absence of wandering domains is the key to the classification of one-dimensional unimodal and multimodal maps in the real analytic category, developed in \cite{dMvS89, Ly89,BlLy89, dMvS93, vSV04}; see the survey of van\ Strien \cite{vS10}. For wandering domains of rational maps of the Riemann sphere, see \cite{Su, MdMvS92}. Moreover, Berry and Mestel \cite{BM91} showed that any $C^{1}$ Lorenz map without gaps does not admit a wandering domain, but the corresponding assertion for the contracting case with gaps has not been shown yet, see \cite{GC12}.
The existence of non-trivial wandering domains in nonhyperbolic dynamics was first studied by Colli and Vargas \cite{CV01} for a two-dimensional example built from an affine thick horseshoe with $C^{2}$-persistent homoclinic tangencies. Their conjecture was recently proved by the first and third authors \cite[Theorem A]{KS-ax}: any two-dimensional diffeomorphism in any Newhouse open set is contained in the $C^{r}$ ($2\leq r<\infty$) closure of diffeomorphisms having contracting non-trivial wandering domains for which the union of the $\omega$-limit sets of points contains a basic set. This result moreover implies an affirmative answer, in the $C^{r}$ ($2\leq r<\infty$) category, to one of the open problems of van Strien \cite{vS10} concerning the existence of wandering domains for the H\'enon family, see \cite[Corollary B]{KS-ax}. Compare this with the negative answer in the $C^{\omega}$ category given in \cite{Ou}. There is another well-studied nonhyperbolic phenomenon different from a homoclinic tangency, called a heterodimensional cycle. We say that a diffeomorphism has a \emph{heterodimensional cycle} associated with saddle periodic points if there are saddle periodic points $P$ and $Q$ for the diffeomorphism such that $$ W^{u}(P)\cap W^{s}(Q)\neq \emptyset,\ W^{u}(Q)\cap W^{s}(P)\neq \emptyset,\ \mathrm{u}\text{-}\mathrm{ind}(P)\neq \mathrm{u}\text{-}\mathrm{ind}(Q),$$ where $\mathrm{u}\text{-}\mathrm{ind}(\cdot)$ is the dimension of the unstable bundle, called the \emph{unstable index}, of the corresponding periodic point. Thus a natural question is the following: \begin{question} Let $f$ be a diffeomorphism having a heterodimensional cycle. Is the diffeomorphism $f$ contained in the $C^{r}$ closure of diffeomorphisms having contracting non-trivial wandering domains?
\end{question} The next theorem is one of the main results of the present paper; it gives an affirmative answer to the question in the $C^1$ category. \begin{theorem}\label{thm1} Let $f$ be a diffeomorphism on a three-dimensional manifold which has a heterodimensional cycle associated with two saddle periodic points at which both the derivatives for $f$ have non-real eigenvalues. Then there exists a diffeomorphism $g$ arbitrarily $C^{1}$-close to $f$ such that $g$ has a contracting non-trivial wandering domain $D$ and $\omega(D, g)$ is a nonhyperbolic transitive Cantor set without periodic points. \end{theorem} Note that, in \cite{CV01, KS-ax}, to construct non-trivial wandering domains near a homoclinic tangency of a 2-dimensional diffeomorphism $F$, the authors added a countable series of perturbations to $F$ supported on open sets which are respectively contained in mutually disjoint gaps in the complement of persistent tangencies. In this paper, by contrast, to prove Theorem \ref{thm1} we adopt a quite different procedure, as follows: \begin{description} \item[step 1] $C^1$-approximate the diffeomorphism $f$ given in Theorem \ref{thm1} by $C^r$, $r\geq 2$, diffeomorphisms with so-called ``non-transverse equidimensional cycles''; \item[step 2] $C^{r}$-approximate the diffeomorphisms with the non-transverse equidimensional cycles by other diffeomorphisms having so-called ``generalized homoclinic tangencies''; \item[step 3] owing to one of Tatjer's results \cite{Tj01}, the generalized homoclinic tangencies lead to Bogdanov-Takens bifurcations, which create invariant circles; \item[step 4] finally, by a $C^1$ Denjoy-like construction along the invariant circle, one obtains a diffeomorphism $g$ satisfying the conditions of Theorem \ref{thm1}. \end{description} In the next subsection, we give the essential ingredients needed to carry out these steps.
\subsection{Outline of the paper} Denote by $\mathscr{A}$ the set of all diffeomorphisms which satisfy the assumption of Theorem \ref{thm1}, and by $\mathscr{Z}$ the set of all diffeomorphisms which satisfy its conclusion. Then Theorem \ref{thm1} can be rephrased as follows: \begin{corollary}\label{co1} $\mathscr{A}$ is contained in the $C^{1}$ closure of $\mathscr{Z}$. \end{corollary} To show Theorem \ref{thm1}, we prepare auxiliary classes $\mathscr{B}$, $\mathscr{C}$, $\mathscr{D}$ of diffeomorphisms on $M$ satisfying the following inclusion relations: $$ \mathscr{A}\subset \overline{(\mathscr{B})}_{C^{1}}, \quad \mathscr{B}\subset \overline{(\mathscr{C})}_{C^{r}}\subset \overline{(\mathscr{D})}_{C^{r}}, \quad \mathscr{D}\subset \overline{(\mathscr{Z})}_{C^{1}}, $$ where $\overline{( \cdot )}_{C^{1}}$ and $\overline{( \cdot )}_{C^{r}}$ respectively stand for the $C^{1}$ and $C^{r}$, $r\geq 2$, closures of the corresponding sets. We give the definition of each class in the following sections; here we briefly explain what role each class plays in the proof of Theorem \ref{thm1}. Section \ref{sec2} contains step 1, where we show that any element of $\mathscr{A}$ leads to a heterodimensional cycle containing an ``intrinsic tangency''. This is a non-transverse intersection between some invariant manifold and some leaf of an invariant foliation which is contained in some transverse heterodimensional intersections, see Lemma \ref{lem:p5c}. Moreover, the intrinsic tangency yields the class $\mathscr{B}$ of $C^r$ diffeomorphisms each element of which has a ``non-transverse equidimensional cycle'', see Proposition \ref{prop2.1}. Section \ref{sec3} corresponds to steps 2 and 3. There we show that $\mathscr{B}$ is contained in the $C^{r}$ closure of the class $\mathscr{C}$ of $C^r$ diffeomorphisms each element of which has a ``generalized homoclinic tangency'' as presented by Tatjer \cite{Tj01}.
See Proposition \ref{thm3.1}. In short, a generalized homoclinic tangency is a certain type of non-transverse codimension-two intersection which satisfies the Tatjer condition; the Tatjer condition consists of the geometric properties given in (\ref{C1})--(\ref{C3}) of section \ref{sec3}. In fact, the pair of Propositions \ref{prop2.1} and \ref{thm3.1} is the key to this paper, and it directly implies another main result: \begin{theorem} Every element of $\mathscr{A}$ can be arbitrarily $C^{1}$-approximated by $C^r$ diffeomorphisms having a generalized homoclinic tangency. \end{theorem} Note that Tatjer presents several types of generalized homoclinic tangencies, which provide various phenomena according to the type, e.g., several types of limit return (H\'enon-like) maps of renormalizations, the birth of attracting or saddle-type invariant circles via Bogdanov-Takens bifurcations, the existence of strange attractors, and the Newhouse phenomenon. See \cite[Theorem 1]{Tj01}. By virtue of one of these, we can find the class $\mathscr{D}$ of diffeomorphisms which have attracting invariant circles created by the Bogdanov-Takens bifurcation. Section \ref{sec4} contains step 4, where we finally perform a Denjoy-like construction in a normal tubular neighborhood of the attracting invariant circle of any diffeomorphism in $\mathscr{D}$ to detect non-trivial wandering domains. \smallskip In closing, we note that all approximations in this paper can be done in the $C^r$ category with any integer $r \geq 2$, except for Lemma 2.2 in step 1 and Proposition 4.2 in step 4. \section{Intrinsic tangencies of cycles}\label{sec2} Let $M$ be a three-dimensional Riemannian manifold and let $N_{1}$ and $N_{2}$ be submanifolds of $M$. We say that a point $\boldsymbol{x}\in N_{1}\cap N_{2}$ is a \emph{transverse intersection} if it satisfies $T_{\boldsymbol{x}}M=T_{\boldsymbol{x}}N_{1} + T_{\boldsymbol{x}}N_{2}$.
Denote by $N_{1} \pitchfork N_{2}$ the set of all transverse intersections of $N_{1}$ and $N_{2}$. On the other hand, a point $\boldsymbol{y}\in N_{1}\cap N_{2}$ satisfying $T_{\boldsymbol{y}}M\neq T_{\boldsymbol{y}}N_{1} + T_{\boldsymbol{y}}N_{2}$ is called a \emph{tangency}. In this section we consider the set $\mathscr{B}$ of all $C^{r}$, $r\geq2$, diffeomorphisms on $M$ for which any element has a \emph{non-transverse equidimensional cycle}, that is, for any $f\in\mathscr{B}$, there are saddle periodic points $P$ and $P^{\prime}$ for $f$ satisfying the following conditions: \begin{enumerate} \renewcommand{\theenumi}{B\arabic{enumi}} \renewcommand{\labelenumi}{(\theenumi)} \item \label{B1} $\mathrm{u}\text{-}\mathrm{ind}(P)=\mathrm{u}\text{-}\mathrm{ind}(P^{\prime})= 2$, and the unstable eigenvalues of $P$ are non-real, while the unstable eigenvalues of $P^{\prime}$ are real; \item \label{B2} $P$ and $P^{\prime}$ are \emph{homoclinically related} to each other, i.e., $W^{s}(P^{\prime})\pitchfork W^{u}(P)\neq \emptyset$ and $W^{u}(P^{\prime})\pitchfork W^{s}(P)\neq \emptyset$; \item \label{B3} there is a quadratic tangency between $W^{s}(P^{\prime})$ and $ W^{u}(P)$. \end{enumerate} Here the tangency $\boldsymbol{y}\in W^{s}(P^{\prime})\cap W^{u}(P)$ is said to be \emph{quadratic} (or a \emph{contact of order} $1$) if there exist an arc $\ell\subset W^{s}(P^{\prime})$, a regular surface $S\subset W^{u}(P)$, and some $C^2$ change of coordinates on an open neighborhood $U(\boldsymbol{y})$ of $\boldsymbol{y}$ such that (i) $\boldsymbol{y}= (0,0,0)$, $S=\{(x,y,z)\in U(\boldsymbol{y})\,;\, z = 0\}$; (ii) $\ell $ has a regular parametrization $\ell(t)=(x(t),y(t),z(t))$ with $\ell(0)=(0,0,0)$; (iii) $z^\prime(0)=0$ and $z^{\prime\prime}(0)\neq0$. The main result of this section is the following proposition. \begin{proposition}\label{prop2.1} $\mathscr{A}$ is contained in the $C^{1}$ closure of $\mathscr{B}$. 
\end{proposition} Since our setting is in dimension three, the following lemma can be immediately obtained from the result of Bonatti and D\'{\i}az \cite[Theorem 2.1]{BD08}. \begin{lemma}\label{lem2.2} Let $f$ be any element of $\mathscr{A}$ which has a heterodimensional cycle associated with $P$ and $Q$ at which both the derivatives have non-real eigenvalues. Then every $C^{1}$ neighborhood of $f$ contains a diffeomorphism $f_1$ with a heterodimensional cycle having real central eigenvalues. Moreover, this cycle for $f_1$ can be taken to be associated with saddles $P^{\prime}_{f_1}$ and $Q^{\prime}_{f_1}$ which are homoclinically related to the continuations $P_{f_1}$ of $P$ and $Q_{f_1}$ of $Q$, respectively. \hfill$\square$ \end{lemma} \noindent Here the heterodimensional cycle for $f_1 $ is said to have \emph{real central eigenvalues} if there are an expanding real eigenvalue of $Df_1^{\mathrm{Per}(f_1)}(P_{f_1})$ and a contracting real eigenvalue of $Df_1^{\mathrm{Per}(f_1)}(Q_{f_1})$ whose multiplicities are equal to $1$. Note that this lemma implies that $\mathrm{u}\text{-}\mathrm{ind}(P)=\mathrm{u}\text{-}\mathrm{ind}(P_{f_1})$ and $\mathrm{u}\text{-}\mathrm{ind}(Q)=\mathrm{u}\text{-}\mathrm{ind}(Q_{f_1})$. See Figure \ref{fg1} for the configuration of each ingredient in Lemma \ref{lem2.2}. \begin{remark}\label{rmk2.3} \normalfont The heterodimensional cycle for $f_1$ in Lemma \ref{lem2.2} is \emph{simple} in the sense that the local dynamics in small neighborhoods of $P_{f_1}$ and of $Q_{f_1}$ are linear while the transitions between the neighborhoods are affine and preserve a partially hyperbolic splitting. See \cite[Definition 3.4]{BD08} for the precise description. Moreover the cycle contains a transverse intersection associated with the saddles $P^{\prime}_{f_1}$ and $Q^{\prime}_{f_1}$, see \cite[\S 5]{BD08}. \end{remark} We next introduce the concept of an intrinsic tangency.
Let $f$ be a diffeomorphism on a three-dimensional manifold $M$ having saddle periodic points $P$ and $Q$ with $\mathrm{u}\text{-}\mathrm{ind}(P)=\mathrm{u}\text{-}\mathrm{ind}(Q) +1$. Suppose that all eigenvalues of the derivative $Df^{\mathrm{Per}(Q)}(Q)$ are real. We say that $W^{u}(P)$ and $W^{s}(Q)$ have an \emph{intrinsic tangency} if there is a leaf $\ell^{ss}$ of the $C^{1}$ strong stable foliation $\mathcal{F}^{ss}(Q)$ in $W^{s}(Q)$ such that $\ell^{ss}$ and $W^{u}(P)$ have a tangency. See Figure \ref{fg0}-(a). Note that the intrinsic tangency is not necessarily contained in a heterodimensional tangency between $W^{u}(P)$ and $W^{s}(Q)$ as shown in Figure \ref{fg0}-(b). See \cite{DR92, KS12} for its precise definition. Indeed, it is not difficult to give examples where intrinsic tangencies are contained in $W^{u}(P)\pitchfork W^{s}(Q)$, \emph{e.g.}, circular transverse heterodimensional intersections in \cite{DR92, KS12} contain at least two such intrinsic tangencies. \begin{figure}[hbt] \centering \scalebox{0.8}{\includegraphics[clip]{fg0}} \caption{Intrinsic tangencies in transverse intersection and heterodimensional tangency} \label{fg0} \end{figure} Let $f$ be any element of $\mathscr{A}$ which has a heterodimensional cycle associated with saddle periodic points $P$ and $Q$ at which both the derivatives have non-real eigenvalues. One may suppose that $\mathrm{u}\text{-}\mathrm{ind}(P)> \mathrm{u}\text{-}\mathrm{ind}(Q)$, $W^{u}(P)\pitchfork W^{s}(Q)\neq \emptyset$ and $W^{u}(Q)\cap W^{s}(P)$ contains a quasi-transverse intersection. 
By Lemma \ref{lem2.2} and Remark \ref{rmk2.3}, one obtains a $C^{1}$ diffeomorphism $f_{1}$ arbitrarily $C^{1}$-close to $f$ which satisfies the following properties: \begin{enumerate} \renewcommand{\theenumi}{A\arabic{enumi}} \renewcommand{\labelenumi}{(\theenumi)} \item\label{1f1} $f_{1}$ has saddle periodic points $P^{\prime}_{f_{1}}$ and $Q^{\prime}_{f_{1}}$ with the following conditions: \begin{enumerate}[(i)] \item the eigenvalues of the derivatives of $f_{1}$ at $P^{\prime}_{f_{1}}$ and $Q^{\prime}_{f_{1}}$ are distinct real numbers; \item $P^{\prime}_{f_{1}}$ is homoclinically related to the continuation $P_{f_{1}}$ of $P$, while $Q^{\prime}_{f_{1}}$ is homoclinically related to the continuation $Q_{f_{1}}$ of $Q$ ; \item $W^u(P_{f_1}) \cap W^s(Q_{f_1})$ contains a transverse intersection; \end{enumerate} \item\label{2f1} $f_{1}$ has a heterodimensional cycle associated with $P^{\prime}_{f_{1}}$ and $Q^{\prime}_{f_{1}}$, i.e., \begin{enumerate}[(i)] \item $W^{u}(P^{\prime}_{f_{1}})\cap W^{s}(Q^{\prime}_{f_{1}})$ contains a transverse intersection; \item $W^{s}(P^{\prime}_{f_{1}})\cap W^{u}(Q^{\prime}_{f_{1}})$ contains a quasi-transverse intersection $X^{\prime}$. \end{enumerate} \end{enumerate} Here $X^{\prime}$ is called a \emph{quasi-transverse intersection} if it satisfies $T_{X^{\prime}}W^{s}(P^{\prime}_{f_{1}})+T_{X^{\prime}}W^{u}(Q^{\prime}_{f_{1}}) =T_{X^{\prime}}W^{s}(P^{\prime}_{f_{1}})\oplus T_{X^{\prime}}W^{u}(Q^{\prime}_{f_{1}})$. See Figure \ref{fg1}. 
\begin{figure}[hbt] \centering \scalebox{0.85}{\includegraphics[clip]{fg1-1}} \caption{Heterodimensional and equidimensional cycles for $f_{1}$ } \label{fg1} \end{figure} \begin{lemma}\label{lem:p5c} Arbitrarily $C^{1}$-close to $f_{1}$ satisfying (\ref{1f1}) and (\ref{2f1}), there is a $C^{r}$ diffeomorphism $f_{2}$ satisfying (\ref{1f1}) and (\ref{2f1}) such that $W^u(P_{f_2})\pitchfork W_{\mathrm{loc}}^s(Q^{\prime}_{f_2})$ contains an intrinsic tangency, where $P_{f_2}$ and $Q^{\prime}_{f_2}$ are the continuations of $P_{f_1}$ and $Q^{\prime}_{f_1}$, respectively. \end{lemma} \noindent Together with Lemma \ref{lem2.2} and Remark \ref{rmk2.3}, it immediately implies the next result. \begin{corollary}\label{coro:p5c} The intrinsic tangency is obtained by an arbitrarily small $C^1$-perturbation of any $f\in \mathscr{A}$. \hfill$\square$ \end{corollary} \begin{proof}[Proof of Lemma \ref{lem:p5c}] We here recall that the above $f\in \mathscr{A}$ has the saddle periodic point $Q$ such that $Df^{\mathrm{Per}(Q)}(Q)$ has a pair of non-real contracting eigenvalues. Hence, there exist a small neighborhood $V$ of $Q_{f_1}$, a local chart $(x,y,z)$ in $V$ and real constants $a_s, a_u, \vartheta \in \mathbb R$ with $0<\vert a_s\vert <1 <\vert a_u\vert$ such that \begin{itemize} \item $\overline{V}\cap W^{u}_{\mathrm{loc}}(P_{f_{1}})=\emptyset$ and $\overline{V}\cap \mathcal{O}_{f_{1}}(X^{\prime})=\emptyset$, where $\overline{V}$ is the closure of $V$ and $\mathcal{O}_{f_{1}}(\cdot)$ is the orbit of the corresponding point for $f_{1}$; \item $Q_{f_1} =(0,0,0)$ and \begin{equation}\label{eq:p4curve3} f_1^{\mathrm{Per}(Q_{f_1})}(x,y,z) =\left( (x,y)\; {}^t\!A_s,\ a_uz \right), \ A_s=a_s\left( \begin{array}{cc} \cos 2\pi \vartheta & -\sin 2\pi \vartheta\\ \sin 2\pi \vartheta & \cos 2\pi \vartheta \end{array} \right) \end{equation} for any $(x,y,z) \in V$. Moreover, after a small local perturbation if necessary, we may assume that the above $\vartheta$ is irrational. 
\end{itemize} On the one hand, by (\ref{1f1})-(iii), there exist an unstable disk $$D^{u}\subset W^{u}_{\mathrm{loc}}(P_{f_{1}})\cap V^{c}$$ and a positive integer $n_{0}$ such that the set of transverse intersections $f_{1}^{n_{0}}(D^{u})\pitchfork W^{s}_{\mathrm{loc}}(Q_{f_1})$ contains an arc $\ell^{u}_{n_{0}}$. See Figure \ref{fg1}. On the other hand, by (\ref{1f1})-(ii) and the Inclination Lemma, it follows that $W^{s}(Q^{\prime}_{f_{1}})$ contains two-dimensional disks whose backward images converge to $W_{\mathrm{loc}}^{s}(Q_{f_{1}})$ in the $C^{1}$ topology. Hence, for any $\epsilon>0$, there exist an integer $m_{0}\geq 0$ and a stable disk $D^{s}\subset W^{s}_{\mathrm{loc}}(Q^{\prime}_{f_{1}})$ containing a point of $W^{u}(Q_{f_{1}})\pitchfork W^{s}_{\mathrm{loc}}(Q^{\prime}_{f_{1}})$ such that, for any integer $m\geq 0$, $$ d_{C^1}\left(D^{s}_{m},\ W_{\mathrm{loc}}^{s}(Q_{f_{1}})\right)<\epsilon $$ where $d_{C^1}(\cdot ,\cdot)$ is the $C^1$ distance between corresponding submanifolds, and $D^{s}_{m}$ is a component of $f_{1}^{-m\mathrm{Per}(Q_{f_{1}})-m_{0}}(D^{s})\cap V$. Note that, by Remark \ref{rmk2.3} together with (\ref{1f1})-(i), one has the strong stable foliation $\mathcal{F}^{ss}(Q^{\prime}_{f_{1}})$ of $Q^{\prime}_{f_{1}}$ whose leaves are of codimension one in $W^{s}(Q^{\prime}_{f_{1}})$. Moreover, observe that, by the irrational rotation of (\ref{eq:p4curve3}) with $\vartheta\not\in \mathbb{Q}$, the images of $\ell^{u}_{n_0}$ under the forward iterations of $f_1^{\mathrm{Per}(Q_{f_1})}$ rotate and converge to $Q_{f_1}$ in $W^{s}_{\mathrm{loc}}(Q_{f_1})$, see Figure \ref{fg1-2}-(a).
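The rotation asserted here can be made explicit from the linear form (\ref{eq:p4curve3}); the following is a direct computation of the $n$-th return:

```latex
\[
  \left(f_1^{\mathrm{Per}(Q_{f_1})}\right)^{n}(x,y,z)
  =\left( (x,y)\;{}^t\!\bigl(A_s^{\,n}\bigr),\ a_u^{\,n}z \right),
  \qquad
  A_s^{\,n}=a_s^{\,n}\left(
  \begin{array}{cc}
    \cos 2\pi n\vartheta & -\sin 2\pi n\vartheta\\
    \sin 2\pi n\vartheta & \cos 2\pi n\vartheta
  \end{array}\right).
\]
```

Thus each return contracts the $(x,y)$-plane by the factor $|a_s|^{n}$ while rotating it by the angle $2\pi n\vartheta$; since $\vartheta$ is irrational, the angles $n\vartheta \bmod 1$ are dense in $[0,1)$, so the images of $\ell^{u}_{n_{0}}$ sweep through all directions as they converge to $Q_{f_1}$.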
Hence, there are an integer $m_{1}\geq 0$ and an arc $\ell^{ss}\subset D^{s}\cap \mathcal{F}^{ss}_{\mathrm{loc}}(Q^{\prime}_{f_{1}})$ such that \begin{itemize} \item $\ell^{u}_{n_{0}}\cap \ell^{ss}_{m_{1}}\neq \emptyset$ where $\ell^{ss}_{m_{1}}:=\pi^{s}\left(f_{1}^{-m_{1}\mathrm{Per}(Q_{f_{1}})-m_{0}}(\ell^{ss})\right)$ for the canonical projection $\pi^{s}:V\to W^{s}_{\mathrm{loc}}(Q_{f_1})$ with $\pi^{s}(x, y, z)=(x, y, 0)$; \item there is a point ${\bm p}_{0}\in \ell^{u}_{n_{0}}\cap \ell^{ss}_{m_{1}}$ such that $$ \angle\left(T_{{\bm p}_{0}}\ell^{u}_{n_{0}},\ T_{{\bm p}_{0}} \ell^{ss}_{m_{1}}\right)<\epsilon, $$ where $\angle(\cdot,\cdot)$ stands for the angle between the corresponding subspaces in $T_{{\bm p}_{0}} W^{s}_{\mathrm{loc}}(Q_{f_{1}})$. \end{itemize} \begin{figure}[hbt] \centering \scalebox{0.85}{\includegraphics[clip]{fg1-2}} \caption{Images of $\ell^{u}_{n_{0}}$ and $\ell^{ss}_{m_{1}}$} \label{fg1-2} \end{figure} Therefore, one can obtain a diffeomorphism $f_2$ with a tangency $Z_{1}$ near ${\bm p}_{0}$ between $\ell^{u}_{n_{0}}$ and $\ell^{ss}_{m_{1}}$, after perturbing $f_{1}^{n_{0}}$ slightly, if necessary, in a small neighborhood of $f_{1}^{-n_{0}}({\bm p}_{0})$ in $D^{u}$. It follows from the above conditions that such a $C^r$ perturbation, deforming $\ell^{u}_{n_{0}}$ as shown in Figure \ref{fg1-2}-(b), can be defined as the composition of an appropriate bump function and an isometry. Thus we have the intrinsic tangency $$Z_{0}:=f_{2}^{m_{1}\mathrm{Per}(Q_{f_{2}})+m_{0}}(Z_{1})$$ between $W^u(P_{f_2})$ and $W^{s}_{\mathrm{loc}}(Q_{f_{2}}^{\prime})$. This concludes the proof of Lemma \ref{lem:p5c}.
\end{proof} \begin{proof}[Proof of Proposition \ref{prop2.1}] Arbitrarily $C^1$ close to $f\in \mathscr{A}$, by Lemma \ref{lem2.2}, one has a diffeomorphism $f_1$ with a heterodimensional cycle associated with saddles $P^{\prime}_{f_1}$ and $Q^{\prime}_{f_1}$ with distinct real eigenvalues which are homoclinically related to the continuations $P_{f_1}$ and $Q_{f_1}$, respectively. Moreover, it follows from Lemma \ref{lem:p5c} and Corollary \ref{coro:p5c} that, arbitrarily $C^1$ near $f_1$, one obtains a $C^r$ diffeomorphism $f_2$ satisfying (\ref{1f1}) and (\ref{2f1}) which has an intrinsic tangency $Z_0$ between $W^u(P_{f_2})$ and a leaf $\ell^{ss}$ of the strong stable foliation $\mathcal{F}^{ss}(Q^{\prime}_{f_{2}})$. Note that $\ell^{ss}$ is almost parallel to $W_{\mathrm{loc}}^{ss}(Q^{\prime}_{f_{2}})$ in the linearizing coordinates in a neighborhood of $Q^{\prime}_{f_{2}}$. See Figure \ref{fg2}-(a). Recall that, by (\ref{2f1}), $W^{s}_{\mathrm{loc}}(P^{\prime}_{f_{2}})\cap W^{u}(Q^{\prime}_{f_{2}})$ contains the quasi-transverse intersection $X^{\prime}$ as shown in Figure \ref{fg1}. Hence one has a positive integer $k_{0}$ such that the point $X^{\prime}_0=f_{2}^{k_{0}}(X^{\prime})$ is contained in $W^{u}_{\mathrm{loc}}(Q_{f_{2}}^{\prime})$. Note that since $\overline{V}\cap \mathcal{O}_{f_{2}}(X^{\prime})=\emptyset$, where $V$ is the neighborhood of $Q_{f_1}$ chosen above, we have $X^{\prime}_0\not \in \overline{V}$. We now consider a small segment $L^{s}\subset W^{s}(P^{\prime}_{f_{2}})$ with $X^{\prime}_0\in L^{s}$ and $L^{s}\cap \overline{V}=\emptyset$. Observe that, after perturbing $f_{2}$ slightly in a small neighborhood of $L^{s}$ if necessary, it follows from the Inclination Lemma that, for any large integer $m>0$, the backward image $f_{2}^{-m\mathrm{Per}(Q^{\prime}_{f_2})}(L^{s})$ contains a segment which is sufficiently $C^{1}$-close to $W_{\mathrm{loc}}^{ss}(Q^{\prime}_{f_{2}})$. Denote $f_{2}^{-m\mathrm{Per}(Q^{\prime}_{f_2})}(X^{\prime}_{0})$ by $X^{\prime}_{m}$.
See Figure \ref{fg2}-(b). \begin{figure}[hbt] \centering \scalebox{0.85}{\includegraphics[clip]{fg2}} \caption{The appearance of an intrinsic tangency} \label{fg2} \end{figure} We now consider a one-parameter family of $C^{r}$ diffeomorphisms $\{ f_{2, t}\}_{t\in \mathbb{R}}$ with $f_{2,0}=f_{2}$ which \emph{unfolds generically} the quasi-transverse intersection at $X^{\prime}_{m}$, as in \cite{DR92, KNS10}, that is, there are a $C^{1}$-curve $\{X(t)\}_{t\in \mathbb{R}}$ and a $C^{1}$ map $\rho : \mathbb{R}\to \mathbb{R}^{+}$ with $X(t)\in f_{2, t}^{k_{0}+m}(W_{\mathrm{loc}}^{u}(P_{f_{2, t}}))$ for every $t$ and $\rho(0)>0$ such that: \begin{itemize} \item $X(0)=X^{\prime}_{m}$ and $\mathrm{dist}\left(X(t), W^{u}_{\mathrm{loc}}(Q^{\prime}_{f_{2, t}}) \right)=|t| \rho(t)$; \item $T_{X^{\prime}_{m}} W^{s}(P^{\prime}_{f_{2, t}}) \oplus T_{X^{\prime}_{m}} W^{u}(Q^{\prime}_{f_{2, t}}) \oplus N = T_{X^{\prime}_{m}} M$, where $N$ is the one-dimensional space spanned by $\frac{d X}{dt}(0)$. \end{itemize} Observe that there is a real number $t_{0}$ arbitrarily near $0$ such that $f_{2, t_{0}}^{-m\mathrm{Per}(Q^{\prime}_{f_{2, t_0}})}(L^s)$ and $W^{u} (P_{f_{2, t_{0}}})$ have a quadratic tangency. See Figure \ref{fg2}-(c). Let us denote $f_{2, t_{0}}$ by $g$. It follows that $g$ satisfies condition (\ref{B3}). Moreover, (\ref{1f1})-(ii) implies that (\ref{B2}) holds for $g$, and since the amount of all perturbations can be taken arbitrarily small, (\ref{B1}) holds for $g$ as well. In conclusion, $g$ is contained in $\mathscr{B}$. This ends the proof of Proposition \ref{prop2.1}. \end{proof} \noindent The above proof contains the following remark: \begin{remark} Every $C^r$ diffeomorphism satisfying (\ref{1f1}) and (\ref{2f1}) can be $C^r$ approximated by diffeomorphisms with a non-transverse equidimensional cycle satisfying the conditions (\ref{B1}), (\ref{B2}) and (\ref{B3}).
\end{remark} \section{Homoclinic tangencies with the Tatjer condition}\label{sec3} We first recall the notion of a generalized homoclinic tangency satisfying the conditions presented in \cite{Tj01}, which plays an important role in the proof of Theorem \ref{thm1}. The purpose of this section is to show Proposition \ref{thm3.1}, where we claim that diffeomorphisms satisfying the Tatjer condition are \emph{not} so special in a neighborhood of $\mathscr{B}$. Let $f$ be a diffeomorphism on a three-dimensional Riemannian manifold $M$ which has a homoclinic tangency of a saddle periodic point $P^{\prime}$. We note that one of the requirements in Tatjer's conditions is that the central-stable bundle $E^{cs} = E^s \oplus E^c$ at $P^\prime$ must be extended along the stable manifold $W^s(P^\prime)$ of $P^\prime$. See the explanation just below (\ref{C2}). To guarantee this, as well as the quadratic condition in (\ref{C1}), we have to assume that the regularity of $f$ is at least $C^2$. Suppose the derivative of $f$ at $P^\prime$ has real eigenvalues $\lambda_{s}, \lambda_{cu}$ and $\lambda_{u}$ satisfying $|\lambda_{s}|<1<|\lambda_{cu}|<|\lambda_{u}|$. Assume that there are $C^{1}$ linearizing coordinates $(x, y, z)$ for $f$ on a neighborhood $U^{\prime}$ of $P^{\prime}$ such that $$P^{\prime}=(0,0,0),\quad f(x,y,z)=(\lambda_{cu} x,\ \lambda_{u} y,\ \lambda_{s}z)$$ for any $(x, y, z)\in U^{\prime}$. In $U^{\prime}$, the local unstable and stable manifolds of $P^{\prime}$ are given respectively as $$ W_{\mathrm{loc}}^{u}(P^{\prime})=\left\{(x, y, 0)\ ;\ |x|, |y|<\delta\right\},\ W_{\mathrm{loc}}^{s}(P^{\prime})=\left\{(0, 0, z)\ ;\ |z|<\delta\right\} $$ for some $\delta>0$.
Moreover one has the local strong unstable $C^{1}$ foliation $\mathcal{F}^{uu}_{\mathrm{loc}}(P^{\prime})$ in $W_{\mathrm{loc}}^{u}(P^{\prime})$ such that, for any point $\bar{\boldsymbol{x}}=(\bar{x}, \bar{y}, 0)\in W_{\mathrm{loc}}^{u}(P^{\prime})$, the leaf $\ell^{uu}(\bar{\boldsymbol{x}})$ of $\mathcal{F}^{uu}_{\mathrm{loc}}(P^{\prime})$ containing $\bar{\boldsymbol{x}}$ is given as $$ \ell^{uu}(\bar{\boldsymbol{x}})= \left\{(\bar{x}, y, 0)\ ;\ |y|<\delta \right\}\!. $$ We say that a homoclinic tangency satisfies the \emph{Tatjer condition} (which corresponds to the type I of case B in \cite[Theorem 1]{Tj01}) if the following (\ref{C1})--(\ref{C3}) hold. \begin{enumerate} \renewcommand{\theenumi}{C\arabic{enumi}} \renewcommand{\labelenumi}{(\theenumi)} \item \label{C1} $W^{u}(P^{\prime})$ and $W^{s}(P^{\prime})$ have a quadratic tangency at $\boldsymbol{x}_{0}$ which does not belong to the strong unstable manifold $W^{uu}(P^{\prime})$ of $P^{\prime}$; \item \label{C2} $W^{s}(P^{\prime})$ is tangent to the leaf $\ell^{uu}(\boldsymbol{x}_{0})$ of $\mathcal{F}_{\mathrm{loc}}^{uu}(P^{\prime})$ at $\boldsymbol{x}_{0}$. \end{enumerate} For the tangency $\boldsymbol{x}_{0}$, we here consider the forward image $\bar{\boldsymbol{x}}_{0}=f^{n_{0}}(\boldsymbol{x}_{0})$ for a large $n_{0}\geq 0$ satisfying $\bar{\boldsymbol{x}}_{0}\in W^{s}_{\mathrm{loc}}(P^{\prime})$. In addition, we consider a plane $S(\bar{\boldsymbol{x}}_{0})$ containing $\bar{\boldsymbol{x}}_{0}$ such that $T_{\bar{\boldsymbol{x}}_{0}} S(\bar{\boldsymbol{x}}_{0})$ is generated by $(\frac{\partial}{\partial x})_{\bar{\boldsymbol{x}}_{0}}$, $(\frac{\partial}{\partial z})_{\bar{\boldsymbol{x}}_{0}}\in T_{\bar{\boldsymbol{x}}_{0}} M$. Note that by the chosen linearizing coordinates on a neighborhood of $P^\prime$, the plane $S(\bar{\boldsymbol{x}}_{0})$ in $T_{\bar{\boldsymbol{x}}_{0}} M$ corresponds to the central-stable bundle at this tangent point. See Figure \ref{fg3}. 
The last condition is the following: \begin{enumerate} \setcounter{enumi}{2} \renewcommand{\theenumi}{C\arabic{enumi}} \renewcommand{\labelenumi}{(\theenumi)} \item \label{C3} $S(\bar{\boldsymbol{x}}_{0})$ and $W^{u}(P^{\prime})$ are transverse at $\bar{\boldsymbol{x}}_{0}$. \end{enumerate} \begin{figure}[hbt] \centering \scalebox{0.86}{\includegraphics[clip]{fg3}} \caption{The Tatjer condition} \label{fg3} \end{figure} Denote by $\mathscr{C}$ the set of all $C^{r}$ diffeomorphisms on $M$ which have homoclinic tangencies satisfying the Tatjer condition. This set $\mathscr{C}$ is contained in the class of diffeomorphisms having ``generalized homoclinic tangencies'' defined in \cite{Tj01}. \smallskip We here claim that homoclinic tangencies satisfying the Tatjer condition are \emph{not} so rare in our context. Let $\ell_{1}$ and $\ell_{2}$ be submanifolds of $M$. For an intersection $\boldsymbol{x}\in \ell_{1}\cap \ell_{2}$, define $$c_{\boldsymbol{x}}( \ell_{1}, \ell_{2})= \dim M-\bigl(\dim T_{\boldsymbol{x}}\ell_{1}+\dim T_{\boldsymbol{x}}\ell_{2}-\dim (T_{\boldsymbol{x}}\ell_{1}\cap T_{\boldsymbol{x}}\ell_{2})\bigr), $$ which is called the \emph{codimension} at the intersection $\boldsymbol{x}$ associated with $\ell_{1}$ and $\ell_{2}$. See \cite{BR-ax}. The condition (\ref{C3}) implies that the codimension at the intersection $\bar{\boldsymbol{x}}_{0}$ associated with $S(\bar{\boldsymbol{x}}_{0})$ and $W^{u}(P^{\prime})$ is zero. By contrast, (\ref{C1}) and (\ref{C2}) require intersections of codimension one and two, respectively: $$c_{\boldsymbol{x}_{0}}( W^{u}(P^{\prime}), W^{s}(P^{\prime}))=1, \quad c_{\boldsymbol{x}_{0}}( W^{s}(P^{\prime}), \ell^{uu}(\boldsymbol{x}_{0}))=2.$$ Thus, the homoclinic tangency with the Tatjer condition seems to be very special, as mentioned in \cite{Tj01}. However, it can be realized by $C^{r}$-small perturbations of any element of $\mathscr{B}$. 
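Since $\dim(T_{\boldsymbol{x}}\ell_{1}+T_{\boldsymbol{x}}\ell_{2}) = \dim T_{\boldsymbol{x}}\ell_{1}+\dim T_{\boldsymbol{x}}\ell_{2}-\dim(T_{\boldsymbol{x}}\ell_{1}\cap T_{\boldsymbol{x}}\ell_{2})$, the codimension equals $\dim M$ minus the dimension of the sum of the two tangent spaces, which is easy to evaluate numerically. The following sketch (with illustrative subspaces of $\mathbb{R}^{3}$ only, not tied to any specific diffeomorphism) reproduces the three values $0$, $1$ and $2$ above.

```python
import numpy as np

def codim(T1, T2, dim_M=3):
    """Codimension at an intersection: dim M - dim(T1 + T2).

    T1, T2 are matrices whose columns span the two tangent spaces;
    dim(T1 + T2) is the rank of the stacked columns, so this agrees
    with the formula dim M - (dim T1 + dim T2 - dim(T1 ∩ T2)).
    """
    return dim_M - np.linalg.matrix_rank(np.hstack([T1, T2]))

e1, e2, e3 = np.eye(3)[:, [0]], np.eye(3)[:, [1]], np.eye(3)[:, [2]]

# (C3)-type situation: two planes meeting transversally -> codimension 0
print(codim(np.hstack([e1, e3]), np.hstack([e1, e2])))  # 0
# (C1)-type situation: a surface tangent to a curve -> codimension 1
print(codim(np.hstack([e1, e2]), e1))                   # 1
# (C2)-type situation: two curves tangent to each other -> codimension 2
print(codim(e1, e1))                                    # 2
```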
That is, \begin{proposition}\label{thm3.1} $\mathscr{B}$ is contained in the $C^{r}$, $r\geq 2$, closure of $\mathscr{C}$. \end{proposition} \begin{proof} Let us consider $f\in \mathscr{B}$ having saddle periodic points $P_{f}$ and $P^{\prime}_{f}$ and satisfying (\ref{B1})--(\ref{B3}). Since the unstable eigenvalues at $P_{f}$ are non-real, perturbing $f$ slightly if necessary in a small neighborhood of $P_{f}$ without breaking (\ref{B1})--(\ref{B3}), we may assume from the beginning that there exist a local chart $(\bar{x}, \bar{y}, \bar{z})$ in a small neighborhood $U$ of $P_{f}$ and real constants $ b_{s}, b_{u}, \vartheta$ such that \begin{enumerate} \item $0<|b_{s}|<1<|b_{u}|$, and $\vartheta\in [0, 1]$ is an irrational number; \item $P_{f}=(0,0,0)$ and for any $(\bar{x}, \bar{y}, \bar{z})\in U$, \begin{equation} \label{eq3.1.1} f^{\mathrm{Per}(P_{f})}(\bar{x}, \bar{y}, \bar{z})= \left( (\bar{x}, \bar{y})\; {}^t\!B_{u},\ b_{s} \bar{z} \right), \ B_{u}= b_{u}\left(\begin{array}{cc} \cos 2\pi\vartheta & -\sin 2\pi\vartheta \\ \sin 2\pi\vartheta & \cos 2\pi\vartheta \\ \end{array}\right). \end{equation} \end{enumerate} Let $\bar{\boldsymbol{x}}_{0}=(0, 0, \bar{z}_{0})$ be a point in $W^{u}(P_{f}^{\prime})\pitchfork W^{s}_{\mathrm{loc}}(P_{f})$ given by (\ref{B2}) as shown in Figure \ref{fg4}-(a). Without loss of generality we may suppose that $\bar{\boldsymbol{x}}_{0}\not\in W^{uu} (P_{f}^{\prime})$. Define $$\bar{\boldsymbol{x}}_{n}:=f^{n\mathrm{Per}(P_{f})} (\bar{\boldsymbol{x}}_{0})$$ for any integer $n>0$, see Figure \ref{fg4}-(b). 
\begin{figure}[hbt] \centering \scalebox{0.86}{\includegraphics[clip]{fg4}} \caption{Non-transverse equidimensional cycle with rotation} \label{fg4} \end{figure} By the Inclination Lemma, for a large $n$, there is a two-dimensional disk $D_{n}^{u}(\bar{\boldsymbol{x}}_{0})\subset W^{u}(P_{f}^{\prime})$ containing $\bar{\boldsymbol{x}}_{0}$ such that $f^{n\mathrm{Per}(P_{f})} (D_{n}^{u}(\bar{\boldsymbol{x}}_{0}) )$ converges to $W^{u}_{\mathrm{loc}}(P_{f})$ in the $C^{r}$ topology as $n\to \infty$. Write $$\hat{D}_{n}^{u}:= f^{n\mathrm{Per}(P_{f})} (D_{n}^{u}(\bar{\boldsymbol{x}}_{0}) )$$ for each $n>0$. Note that there is a segment $\ell^{uu}_{n}(\bar{\boldsymbol{x}}_{0})$ contained in the leaf through $\bar{\boldsymbol{x}}_{0}$ of the strong unstable foliation $\mathcal{F}^{uu}(P^{\prime}_{f})$, which is carried to $\hat{D}_{n}^{u}$ by $f^{n \mathrm{Per}(P_{f})}$. For each $n>0$, define $$\hat{\ell}_{n}^{uu}:= f^{n \mathrm{Per}(P_{f})}(\ell^{uu}_{n}(\bar{\boldsymbol{x}}_{0})).$$ See Figure \ref{fg4}-(b). Let us take the following steps: \smallskip \noindent \textbf{Step 1:} Let $\boldsymbol{v}_n^{uu}$ be a unit vector tangent to $\hat{\ell}_n^{uu}$ at $\bar{\boldsymbol{x}}_n$. Since the sequence $\{\bar{\boldsymbol{x}}_n\}$ converges to $P_f$ in $M$ as $n\to \infty$, we have a subsequence $\{\boldsymbol{v}_{n_i}^{uu}\}$ of $\{\boldsymbol{v}_n^{uu}\}$ converging to a unit vector $\boldsymbol{v}_\infty\in T_{P_f} W^u_{\mathrm{loc}}(P_f) $ as $n_i\to \infty$ in the tangent bundle $TM$. This implies that, for any $\varepsilon>0$, there is an integer $n_{0}:=n_{i_{0}}>0$ such that \begin{equation}\label{eq3.1.2} \left|P_{f}- \bar{\boldsymbol{x}}_{n_{0}}\right|<\varepsilon/2,\quad \mathrm{dist}_{TM}(\boldsymbol{v}_\infty, \boldsymbol{v}_{n_0}^{uu})<\varepsilon/2. \end{equation} This finishes the first step. 
\hspace*{ \fill}$\blacksquare$ \smallskip Next, we consider a quadratic tangency $\bar{\boldsymbol{y}}_{0}$ between $W^{s}(P_{f}^{\prime})$ and $ W^{u}_{\mathrm{loc}}(P_{f})$ which is given by the condition (\ref{B3}). For every integer $m\geq 0$, we write $$\bar{\boldsymbol{y}}_{-m}:=f^{-m \mathrm{Per}(P_{f})}(\bar{\boldsymbol{y}}_{0}).$$ See Figure \ref{fg4}-(b). \smallskip \noindent \textbf{Step 2:} Let $L^{s}_{-m}$ be a subarc of $W^{s}(P_{f}^{\prime})$ passing through $\bar{\boldsymbol{y}}_{-m}$ for any integer $m\geq 0$. Consider a unit vector $\boldsymbol{w}_{-m}^s$ tangent to $L^{s}_{-m}$ at $\bar{\boldsymbol{y}}_{-m}$. Since $|b_u|>1$ in (\ref{eq3.1.1}), $\{\bar{\boldsymbol{y}}_{-m}\}$ converges to $P_f$ in $M$ as $m\to \infty$. Since moreover $\vartheta$ is irrational, there exists a subsequence $\{m_i\}$ of $\{m\}$ such that $\{\boldsymbol{w}_{-m_i}^s\}$ converges to $\boldsymbol{v}_\infty$ in $TM$. This implies that, for any $\varepsilon>0$, there is an integer $m_{0}=m_{i_{0}}>0$ such that \begin{equation}\label{eq3.1.3} \left|P_{f}- \bar{\boldsymbol{y}}_{-m_{0}}\right|<\varepsilon/2, \quad \mathrm{dist}_{TM}(\boldsymbol{v}_\infty, \boldsymbol{w}_{-m_0}^s)<\varepsilon/2. \end{equation} This ends the second step. \hspace*{ \fill}$\blacksquare$ In the third and final step, we combine the two steps above. \noindent \textbf{Step 3:} From (\ref{eq3.1.2}) and (\ref{eq3.1.3}), we have \begin{equation}\label{eq3.1.4} \left|\bar{\boldsymbol{x}}_{n_{0}}- \bar{\boldsymbol{y}}_{-m_{0}}\right|<\varepsilon, \quad \mathrm{dist}_{TM}(\boldsymbol{v}_{n_0}^{uu}, \boldsymbol{w}_{-m_0}^s)<\varepsilon. \end{equation} See Figure \ref{fg5}-(a). This concludes the third step. 
\hspace*{ \fill}$\blacksquare$ \smallskip \begin{figure}[hbt] \centering \scalebox{0.88}{\includegraphics[clip]{fg5}} \caption{The advent of tangency of codimension two } \label{fg5} \end{figure} Now we add a small perturbation to $f$ on a neighborhood of $\bar{\boldsymbol{y}}_{0}$ to obtain a $C^r$ diffeomorphism $f_{1}$ which is $C^{r}$-close to $f$ and such that the continuation $L^{s}_{-m_{0}}(f_{1})$ is obtained from $L^{s}_{-m_{0}}$ by a ``shifting down'' operation along the $\bar{z}$-axis and has a quadratic tangency $\bar{\boldsymbol{z}}_{0}$ with $\hat{D}_{n_{0}}^{u}$. See Figure \ref{fg5}-(b). Let $\tilde{\ell}_{n_0}^{uu}$ be a curve in $\hat D_{n_0}^u$ passing through $\bar{\boldsymbol{z}}_{0}$ and such that $f_1^{-n_0 \mathrm{Per}(P_{f_1})}(\tilde{\ell}_{n_0}^{uu})$ is contained in one of leaves of $\mathcal{F}^{uu}(P_{f_1}')$. In general, $T_{\bar{\boldsymbol{z}}_{0}} L^{s}_{-m_{0}}(f_{1})$ does not coincide with $T_{\bar{\boldsymbol{z}}_{0}} \tilde{\ell}^{uu}_{n_{0}}$. However, the condition (\ref{eq3.1.4}) implies that these spaces are sufficiently close to each other. Hence, by adding a perturbation again to $f_{1}$ on the neighborhood of $\bar{\boldsymbol{y}}_{0}$, we obtain a $C^r$ diffeomorphism $f_{2}$ which is $C^{r}$-close to $f_{1}$ and such that the continuation $L^{s}_{-m_{0}}(f_{2})$ of $L^{s}_{-m_{0}}(f_{1})$ has a quadratic tangency with $\hat{D}_{n_0}^u$ at $\bar{\boldsymbol{z}}_{0}$ and satisfies $$T_{\bar{\boldsymbol{z}}_{0}} L^{s}_{-m_{0}}(f_{2})=T_{\bar{\boldsymbol{z}}_{0}} \tilde{\ell}^{uu}_{n_{0}},$$ see Figure \ref{fg5}-(c). More precisely, $L^{s}_{-m_{0}}(f_{2})$ is obtained from $L^{s}_{-m_{0}}(f_{1})$ by a small rotation around the axis meeting $\hat D_{n_0}^u$ orthogonally at $\bar{\boldsymbol{z}}_{0}$. 
At last we have obtained the $C^{r}$ diffeomorphism $g:=f_{2}$ which is $C^{r}$-near $f$ and such that \begin{itemize} \item $W^{s}(P^{\prime}_{g})$ and $\hat{D}_{n_{0}}^{u}$ have a quadratic tangency at $\bar{\boldsymbol{z}}_{0}$; \item moreover, at the point $\bar{\boldsymbol{z}}_{0}$, $W^{s}(P_{g}^{\prime})$ and $\tilde{\ell}^{uu}_{n_{0}}$ have a tangency with $$ c_{\bar{\boldsymbol{z}}_{0}}(W^{s}(P_{g}^{\prime}), \tilde{\ell}^{uu}_{n_{0}})=2. $$ \end{itemize} This implies the condition (\ref{C2}). Note that, since $\bar{\boldsymbol{x}}_{0}\not\in W^{uu} (P_{f}^{\prime})$, the quadratic tangency $\bar{\boldsymbol{z}}_{0}$ does not belong to $W^{uu} (P_{f}^{\prime})$. This implies that (\ref{C1}) holds for $g$. Moreover, since (\ref{C3}) is a transversality condition, it still holds after arbitrarily small perturbations of $f$. Consequently, $g$ belongs to $\mathscr{C}$. This completes the proof of Proposition \ref{thm3.1}. \end{proof} To close this section, we present the following lemma, which is indispensable for our discussion in the final section. Since this is just an extract from the main result of \cite{Tj01}, we omit the proof. \begin{lemma}[see the items 1--3 in {\cite[Theorem 1]{Tj01}}]\label{lem3.2} Let $f$ be a $C^{r}$ $(r\geq 2)$ diffeomorphism on a three-dimensional manifold $M$ which has a homoclinic tangency of a saddle periodic point $P^{\prime}$ with real eigenvalues $\lambda_{1}, \lambda_{2}, \lambda_{3}$ satisfying $|\lambda_{1}|<1<|\lambda_{2}|<|\lambda_{3}|$. In addition, suppose that the homoclinic tangency satisfies the Tatjer condition. Then there are a two-parameter family $\{f_{a, b}\}_{a,b\in \mathbb{R}}$ with $f_{0,0}=f$ and a sequence $\{(a_{n}, b_{n})\}_{n\in\mathbb{N}}$ of parameter values with $(a_{n}, b_{n})\to (0,0)$ as $n\to \infty$ such that, for any sufficiently large $n$, $f_{a_{n}, b_{n}}$ has an $n$-periodic smooth attracting invariant circle. 
\hfill$\square$ \end{lemma} \noindent We remark that the attracting invariant circles in Lemma \ref{lem3.2} are generated by the Hopf (also known as the Neimark-Sacker) bifurcation for three-dimensional diffeomorphisms, which is part of the codimension-two Bogdanov-Takens bifurcation, see Broer et al.\ \cite{BRS96}, and also \cite[\S 2.4]{Tj01}. \begin{add}[about strange attractors/infinitely many sinks] \normalfont By using other results in \cite[Theorem 1]{Tj01}, one can show that any element of $\mathscr{C}$ can be approximated by diffeomorphisms having strange attractors or infinitely many sinks. However, by \cite[Theorem C]{KS12} together with \cite[Corollary B]{KNS10}, both phenomena can be directly derived from the existence of intrinsic tangencies in the proof of Proposition \ref{prop2.1} without detouring through the construction of generalized homoclinic tangencies. That is, any element of $\mathscr{A}$ can be approximated by diffeomorphisms having these phenomena. \hfill$\square$ \end{add} \section{Constructions of wandering domains}\label{sec4} Let $\mathscr{D}$ be the set of all $C^{r}$ diffeomorphisms on a three-dimensional manifold $M$ having smooth attracting invariant circles which are given by the Hopf bifurcation as in the conclusion of Lemma \ref{lem3.2}. Here the regularity $r$ should be at least $5$ so that the bifurcation can be expressed by the normal form (\ref{nf}) below, see \cite[\S 7.5]{Rb}. More specifically, for any $f\in \mathscr{D}$, there exists a one-parameter family $\{f_{\mu}\}_{\mu\in\mathbb{R}}$ of $C^{r}$ diffeomorphisms such that $f_{0}$ is arbitrarily $C^{5}$-close to $f$ and $f_{\mu}$ undergoes the generic Hopf bifurcation at $\mu=0$ at an $n$-periodic point $\boldsymbol{p}$ for some integer $n\geq 1$, which creates an attracting invariant circle. 
In fact, one has polar coordinates $(r, \theta)$ and a real coordinate $t$ in a small neighborhood of $\boldsymbol{p}$ such that $\boldsymbol{p}=(0,0,0)$ and \begin{equation}\label{nf} f_{\mu}^{n}(r, \theta, t)= \left( (1+\mu)r-a_{\mu} r^{3}+O_{\mu}(r^{4}),\ \theta+\beta_{\mu}+O_{\mu}(r^{2}),\ \gamma t \right) \end{equation} where $O_{\mu}(r^{k})$ is a smooth function of order $r^{k}$ with $k\in\mathbb{N}$ near $(r,\mu)=(0,0)$ which depends on $\mu$ smoothly, $a_{\mu}$, $\beta_{\mu}$ are real constants depending on $\mu$ smoothly with $a_{0}> 0$, and $\gamma$ is a real constant with $0<|\gamma|<1$. Notice that $f_{\mu}^{n}$ restricted to the $r\theta$-space has the same form as the normal form of the original two-dimensional Hopf bifurcation, see \cite[\S 7.5]{Rb} or \cite[\S\S 4.6--4.7]{Kuz} for more details. Observe that, for $\mu>0$, this form has a saddle periodic point at $\boldsymbol{p}=(0,0,0)$ and has an attracting invariant circle surrounding $\boldsymbol{p}$ of radius $(\mu a_{\mu}^{-1})^{1/2} +O(\mu)$. Using the notations $\mathscr{C}$ and $\mathscr{D}$, one can rewrite Lemma \ref{lem3.2} as follows: \begin{corollary}\label{coro3.3} $\mathscr{C}$ is contained in the $C^{1}$ closure of $\mathscr{D}$. \end{corollary} The next result is the final step for the proof of Theorem \ref{thm1}. \begin{proposition}\label{prop4.1} Let $f$ be a diffeomorphism contained in $\mathscr{D}$. There is a diffeomorphism $g$ arbitrarily $C^{1}$-close to $f$ such that $g$ has a contracting non-trivial wandering domain $D$ for which $\omega(D, g)$ is a transitive nonhyperbolic Cantor set without periodic points. \end{proposition} \begin{proof} For $f\in \mathscr{D}$, one has a $C^r$ diffeomorphism $f_{\mu}$ arbitrarily $C^{1}$-close to $f$ such that $f_{\mu}^{n}$ given by (\ref{nf}) has an attracting invariant circle $S$ with $f_{\mu}^{i}(S)\cap S=\emptyset$ for any $i=1,\cdots, n-1$. By perturbing $f_\mu$ slightly if necessary, we may assume that $\beta_{\mu}/(2\pi)\not\in\mathbb{Q}$. 
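As an aside (not needed for the argument), the attracting circle of the truncated normal form (\ref{nf}) can be observed numerically: for the map $(r, \theta, t)\mapsto((1+\mu)r-a r^{3},\ \theta+\beta,\ \gamma t)$ the radius of a nearby orbit converges to $(\mu a^{-1})^{1/2}$ and the $t$-coordinate contracts to $0$. The constants in the following sketch are sample values chosen purely for illustration.

```python
import math

# Sample constants for illustration only: mu small and positive,
# a_mu > 0, 0 < |gamma| < 1, beta arbitrary.
mu, a, beta, gamma = 0.05, 1.0, 1.2345, 0.5
r_star = math.sqrt(mu / a)  # radius of the invariant circle, up to O(mu)

def F(r, theta, t):
    # truncated normal form of the Hopf bifurcation
    return (1 + mu) * r - a * r**3, (theta + beta) % (2 * math.pi), gamma * t

r, theta, t = 0.3, 0.0, 0.4
for _ in range(300):
    r, theta, t = F(r, theta, t)

# the orbit is attracted to the circle r = r_star in the plane t = 0
print(abs(r - r_star) < 1e-9, abs(t) < 1e-9)  # True True
```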
Moreover, since $\mu$ is close to $0$, we have a diffeomorphism $\tilde{f}$ $C^{1}$-near $f_{\mu}$ such that \begin{equation}\label{rt} \tilde{f}^{n}(r, \theta, t)= \left( (1+\mu)r-a_{\mu} r^{3},\ \theta+\beta_{\mu},\ \gamma t \right) \end{equation} in the neighborhood of $\boldsymbol{p}$ of radius $2(\mu a_{\mu}^{-1})^{1/2}$. It follows that there is an attracting invariant circle $\tilde S$ and the restriction of $\tilde{f}^{n}$ to $\tilde S$ is an irrational rigid rotation. Hence, by the same construction as that of Denjoy's counter-example, see \cite{D32} and \cite{Her79}, we have a $C^{1}$ diffeomorphism $g$ arbitrarily $C^{1}$-close to $\tilde{f}$ and a sequence $\{ \ell_{i} \}_{i\geq 0}$ of open arcs which are contained in a circle $S_{g}$ sufficiently $C^{1}$-close to $\tilde S$ and satisfy the following conditions: \begin{itemize} \item $S_{g}$ is an attracting invariant circle for $g^{n}$, and $g^{n}\vert_{S_{g}}$ is a rigid irrational rotation; \item for any $i, j\geq 0$ with $i\neq j$, \begin{equation}\label{wi} g^{n}(\ell_{i})=\ell_{i+1},\quad \ell_{i}\cap \ell_{j}=\emptyset; \end{equation} \item $\omega(\ell_{0}, g^{n})$ is a topologically transitive Cantor set on $S_{g}$ without periodic points, where $g^{n}$ has zero Lyapunov exponent. \end{itemize} \begin{figure}[hbt] \centering \scalebox{0.8}{\includegraphics[clip]{fg6}} \caption{Denjoy-like construction} \label{fg6} \end{figure} We here consider a normal tubular neighborhood of each arc $\ell_{i}$, see Figure \ref{fg6}, which is defined as $$D_{i}:=\bigcup_{x\in \ell_{i}} \Delta_{i}(x),$$ where $\Delta _i(x)$ is the open disk of radius $\delta$ centered at $x\in \ell _i$ which lies in a plane normal to $\ell _i$ for each $i\geq 0$, and $\delta >0$ is a small number independent of $x$ and $i$. By the form of (\ref{nf}), the restrictions of $f_\mu^n(r,\theta,t)$ to the first and third entries are contracting maps. 
It follows from this fact together with the wandering condition (\ref{wi}) that $$ g^{n}(D_{i})\subset D_{i+1},\quad D_i\cap D_j=\emptyset $$ for every $i, j \geq 0$ with $i\neq j$. Consequently, the open set $D_{0}$ is a contracting non-trivial wandering domain for $g^{n}$. Since $g^{i}(S_{g})\cap S_{g}=\emptyset$ for $i=1,\cdots, n-1$, $D_{0}$ is a contracting non-trivial wandering domain also for $g$. \end{proof} We are at last ready to show the main result of this paper. \begin{proof}[Proof of Theorem \ref{thm1}] By Propositions \ref{prop2.1}, \ref{thm3.1}, and Corollary \ref{coro3.3}, $$ \mathscr{A}\subset \overline{(\mathscr{B})}_{C^{1}}, \quad \mathscr{B}\subset \overline{(\mathscr{C})}_{C^{r}}\subset \overline{(\mathscr{D})}_{C^{r}}. $$ Hence, for any $f\in \mathscr{A}$, there is an $\hat{f}\in \mathscr{D}$ arbitrarily $C^{1}$-close to $f$. By Proposition \ref{prop4.1}, we obtain a diffeomorphism $g$ arbitrarily $C^{1}$-close to $\hat{f}$ which has a contracting non-trivial wandering domain. This implies that $g$ is an element of $\mathscr{Z}$. This completes the proof. \end{proof} \section*{Acknowledgements.} This paper was partially supported by JSPS KAKENHI Grant Numbers 25400112 and 26400093. The authors thank the Kyoto Dynamical Seminar for its hospitality. The authors also thank Alexandre Rodrigues and the anonymous referees for their helpful comments and suggestions.
1011.3447
\section{Introduction} \subsection{Representations of $\operatorname{Gal}(\overline{\mathbf{Q}}/\mathbf{Q})$} \label{rpgqgl} The starting point for this survey is that one can attach representations of the group $\operatorname{Gal}(\overline{\mathbf{Q}}/\mathbf{Q})$ to some objects which occur in arithmetic geometry, for example elliptic curves and modular forms. Suppose for instance that $A$ is an elliptic curve defined over $\mathbf{Q}$ and choose a prime number $p$. The group $\operatorname{Gal}(\overline{\mathbf{Q}}/\mathbf{Q})$ acts on the $p^n$-th torsion points $A[p^n](\overline{\mathbf{Q}})$ of $A$ and this gives rise to the Tate module of $A$. Tensoring the Tate module with $\mathbf{Q}_p$, we get a $2$-dimensional $\mathbf{Q}_p$-vector space $V_pA$ which is the $p$-adic representation of $\operatorname{Gal}(\overline{\mathbf{Q}}/\mathbf{Q})$ attached to $A$. Let $\ell$ be a prime number and choose an embedding $\iota_\ell : \overline{\mathbf{Q}} \to \overline{\mathbf{Q}}_\ell$. This gives rise to a map $\operatorname{Gal}(\overline{\mathbf{Q}}_\ell/\mathbf{Q}_\ell) \to \operatorname{Gal}(\overline{\mathbf{Q}}/\mathbf{Q})$ which is injective and whose image is the decomposition group $D_\ell$ of a place above $\ell$ (a different choice of $\iota_\ell$ gives rise to another subgroup of $\operatorname{Gal}(\overline{\mathbf{Q}}/\mathbf{Q})$ which is conjugate to $D_\ell$). The group $D_\ell$ contains the inertia subgroup $I_\ell$ and the quotient $D_\ell / I_\ell$ is isomorphic to $\operatorname{Gal}(\overline{\mathbf{F}}_\ell/\mathbf{F}_\ell)$. The group $\operatorname{Gal}(\overline{\mathbf{F}}_\ell/\mathbf{F}_\ell)$ is isomorphic to $\widehat{\mathbf{Z}}$ and is topologically generated by the Frobenius map $\operatorname{Frob}_\ell = [ z \mapsto z^\ell]$. We then have the following theorem which says that the representation $V_pA$ is also ``attached to $A$'' in a deeper way. 
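As a concrete aside before stating it, the quantity $\operatorname{Card}(A(\mathbf{F}_\ell))$ can be computed by brute force for small $\ell$. The sketch below uses the sample curve $y^2 = x^3 + x + 1$ (chosen purely for illustration) and checks Hasse's bound $|a_\ell| \leq 2\sqrt{\ell}$ for a few primes of good reduction.

```python
import math

def count_points(A, B, l):
    """Number of points of y^2 = x^3 + A*x + B over F_l (l an odd prime
    of good reduction), counting the point at infinity."""
    sq = {}  # multiplicity of each square mod l
    for y in range(l):
        sq[y * y % l] = sq.get(y * y % l, 0) + 1
    n = 1  # the point at infinity
    for x in range(l):
        n += sq.get((x**3 + A * x + B) % l, 0)
    return n

for l in [5, 7, 11, 13]:
    a_l = l + 1 - count_points(1, 1, l)
    assert abs(a_l) <= 2 * math.sqrt(l)  # Hasse's bound
    print(l, a_l)
```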
\begin{theo}\label{grellc} If $\ell \nmid p \cdot \mathrm{Disc}(A)$, then the restriction of $V_p A$ to $I_\ell$ is trivial and $\det(X-\operatorname{Frob}_\ell \mid V_pA) = X^2 - a_\ell X + \ell$, where $a_\ell = \ell+1-\operatorname{Card}(A(\mathbf{F}_\ell))$. \end{theo} As $\ell$ runs through a set of primes of density $1$, the groups $D_\ell$ and their conjugates form a dense subset of $\operatorname{Gal}(\overline{\mathbf{Q}}/\mathbf{Q})$ by Chebotarev's theorem and therefore theorem \ref{grellc} determines the semisimplification of $V_pA$. If $\ell \neq p$ but $\ell \mid \mathrm{Disc}(A)$, then we also have a description of $V_p A \mid_{D_\ell}$ which now depends on the geometry of $A \bmod{\ell}$. A much deeper problem is the description of the restriction of $V_p A$ to $D_p$ and this is one of the goals of Fontaine's theory, which we discuss in this survey. Let us recall that one can also attach $p$-adic representations of $\operatorname{Gal}(\overline{\mathbf{Q}}/\mathbf{Q})$ to modular forms as follows. Let $f=\sum_{n \geq 0} a_n q^n$ be a normalized modular eigenform of weight $k$, level $N$ and character $\varepsilon$, and let $E$ be the field generated over $\mathbf{Q}_p$ by the images of the $a_n$ in $\overline{\mathbf{Q}}_p$ under some embedding. The field $E$ is a finite extension of $\mathbf{Q}_p$ and we have the following theorem, which combines results of Deligne, Eichler-Shimura and Igusa (see theorem 6.1 of Deligne-Serre \cite{DS74}). \begin{theo}\label{dsfm} There exists a semisimple $2$-dimensional $E$-linear representation $V_p f$ of $\operatorname{Gal}(\overline{\mathbf{Q}}/\mathbf{Q})$ such that for every prime number $\ell \nmid pN$, the restriction of $V_p f$ to $I_\ell$ is trivial and $\det(X-\operatorname{Frob}_\ell \mid V_p f) = X^2 - a_\ell X + \varepsilon(\ell)\ell^{k-1}$. 
\end{theo} If $f$ is of weight $2$, then it corresponds to an elliptic curve $A$ and the representation $V_p f$ is the same as the representation $V_p A$ defined above. \subsection{Trianguline representations and $(\phi,\Gamma)$-modules} \label{fontint} Let $E$ be a finite extension of $\mathbf{Q}_p$ which is the field of coefficients of the representations we consider. The goal of Fontaine's theory is to study the $E$-linear representations of $\Gal(\Qpbar/\Qp)$. These may arise as the restriction to $D_p$ of representations of $\operatorname{Gal}(\overline{\mathbf{Q}}/\mathbf{Q})$ as above, but they are also interesting considered on their own. A $p$-adic representation of $\Gal(\Qpbar/\Qp)$ is then a finite dimensional $E$-vector space $V$, along with a continuous $E$-linear action of $\Gal(\Qpbar/\Qp)$. Fontaine's approach has been to construct some ``rings of periods'', for example $\bcris$, $\bst$ and $\bdr$, and to use these rings to define and study crystalline, semistable and de Rham representations (see \S\ref{fonsec} for reminders about this). These constructions allow one to give a complete description of the restriction to $D_p$ of the representations $V_p A$ and $V_p f$ of \S\ref{rpgqgl} (see \S \ref{ptsec}). Another construction of Fontaine's which is crucial in this survey is the theory of $(\phi,\Gamma)$-modules. There are three variants of this theory, and we now describe (and will describe again in more detail in \S\S \ref{pgsec}--\ref{kedsec}) the theory of $(\phi,\Gamma)$-modules over the Robba ring. Let $\mathcal{R}$ be the Robba ring, that is the ring of power series $f(X)= \sum_{n \in \mathbf{Z}} a_n X^n$ where $a_n \in E$ and for which there exists $\rho(f)$ such that $f(X)$ converges on the $p$-adic annulus $\rho(f) \leq |X|_p < 1$. 
This ring is endowed with a Frobenius $\phi$ given by $(\phi f)(X)=f((1+X)^p-1)$ and with an action of $\mathbf{Z}_p^\times$ (now called $\Gamma$) given by $([a] f)(X)=f((1+X)^a-1)$ if $a \in \mathbf{Z}_p^\times$. A $(\phi,\Gamma)$-module is a free $\mathcal{R}$-module of finite rank $d$, endowed with a semilinear Frobenius $\phi$ such that $\operatorname{Mat}(\phi)$ (the matrix of $\phi$ in some basis) belongs to $\operatorname{GL}_d(\mathcal{R})$, and with a commuting semilinear continuous action of $\Gamma$. The main result relating $(\phi,\Gamma)$-modules and $p$-adic Galois representations is the following (it combines theorems of Fontaine, Fontaine-Wintenberger, Cherbonnier-Colmez and Kedlaya). Let $\mathcal{O}_{\mathcal{E}}^\dag$ be the set of $f(X) \in \mathcal{R}$ with $|a_n|_p \leq 1$ for all $n \in \mathbf{Z}$. We say that a $(\phi,\Gamma)$-module is \'etale if there exists a basis in which $\operatorname{Mat}(\phi) \in \operatorname{GL}_d(\mathcal{O}_{\mathcal{E}}^\dag)$. The ring $\btdagrig$ below denotes one of Fontaine's rings of periods. \begin{theo}\label{pgint} If $\mathrm{D}$ is an \'etale $(\phi,\Gamma)$-module, then $V(\mathrm{D})=(\btdagrig \otimes_{\mathcal{R}} \mathrm{D})^{\phi=1}$ is a $p$-adic representation of $\Gal(\Qpbar/\Qp)$, and the resulting functor $\mathrm{D} \mapsto V(\mathrm{D})$ gives rise to an equivalence of categories: \{\'etale $(\phi,\Gamma)$-modules\} $\to$ \{$p$-adic representations\}. \end{theo} We denote by $V \mapsto \mathrm{D}(V)$ the inverse functor. The category of \'etale $(\phi,\Gamma)$-modules is a full subcategory of the larger category of all $(\phi,\Gamma)$-modules. In particular, if $V$ is an irreducible $p$-adic representation, then $\mathrm{D}(V)$ is irreducible in the category of \'etale $(\phi,\Gamma)$-modules but it can be reducible in the larger category of all $(\phi,\Gamma)$-modules. 
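The formulas for $\phi$ and the action of $\Gamma$ are concrete enough to be checked symbolically. The following sketch (using sympy, with sample exponents $p=5$ and $a=3$ standing for a prime and a unit) verifies that $\phi$ commutes with the action of $\Gamma$ on a power series, and that $\phi(t)=pt$ and $[a](t)=at$ for $t=\log(1+X)$.

```python
import sympy as sp

X = sp.symbols('X', positive=True)
p, a = 5, 3  # sample values: p a prime, a standing for a unit in Z_p^x

phi = lambda f: f.subs(X, (1 + X)**p - 1)   # (phi f)(X) = f((1+X)^p - 1)
gam = lambda f: f.subs(X, (1 + X)**a - 1)   # ([a] f)(X) = f((1+X)^a - 1)

# phi and the action of Gamma commute: both composites send X to (1+X)^(pa) - 1
f = X**2 + 3 * X
assert sp.expand(phi(gam(f)) - gam(phi(f))) == 0

# phi(t) = p*t and [a](t) = a*t for t = log(1+X)
t = sp.log(1 + X)
assert sp.simplify(sp.expand_log(phi(t)) - p * t) == 0
assert sp.simplify(sp.expand_log(gam(t)) - a * t) == 0
```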
\begin{defi}\label{dti} If $V$ is a $p$-adic representation of $\Gal(\Qpbar/\Qp)$, then we say that $V$ is trianguline if $\mathrm{D}(V)$ is a successive extension of $(\phi,\Gamma)$-modules of rank $1$ (after possibly enlarging $E$). \end{defi} This definition was first given by Colmez in his construction of the ``unitary principal series of $\operatorname{GL}_2(\mathbf{Q}_p)$'', which is an important building block of the $p$-adic local Langlands correspondence for $\operatorname{GL}_2(\mathbf{Q}_p)$ (see \S \ref{llsec}). Some important examples of trianguline representations are (1) the semistable representations of $\Gal(\Qpbar/\Qp)$ and (2) the restriction to $\Gal(\Qpbar/\Qp)$ of the representations of $\operatorname{Gal}(\overline{\mathbf{Q}}/\mathbf{Q})$ attached to finite slope overconvergent modular forms. This survey has three chapters. In the first one, we give a more detailed description of the definition and properties of $(\phi,\Gamma)$-modules, including Kedlaya's theory of Frobenius slopes. In the second one, we give some examples of trianguline representations by relating the theory of $(\phi,\Gamma)$-modules to $p$-adic Hodge theory, and then we give Colmez' construction of a parameter space for all $2$-dimensional trianguline representations. In the last chapter, we explain how trianguline representations occur in the $p$-adic local Langlands correspondence, in the theory of overconvergent modular forms and in the study of Selmer groups. \subsection{Notations and conventions} \label{nc} The field $E$ is a finite extension of $\mathbf{Q}_p$ with ring of integers $\mathcal{O}_E$ whose maximal ideal is $\mathfrak{m}_E$ and whose residue field is $k_E$. All the characters, representations and group actions in this survey are assumed to be continuous (note that a character $\delta : \mathbf{Q}_p^\times \to E^\times$ is necessarily continuous by exercise 6 of \S 4.2 of \cite{S94}). 
When we say that an $E$-linear object is irreducible, we mean that it is absolutely irreducible, meaning that it remains irreducible when we extend scalars from $E$ to a finite extension. The cyclotomic character $\chi_{\operatorname{cycl}}$ gives an isomorphism $\chi_{\operatorname{cycl}} : \operatorname{Gal}(\mathbf{Q}_p(\mu_{p^\infty}) / \mathbf{Q}_p) \to \mathbf{Z}_p^\times$. The maximal abelian extension of $\mathbf{Q}_p$ is $\mathbf{Q}_p^{\operatorname{ab}} = \mathbf{Q}_p^{\operatorname{nr}} \cdot \mathbf{Q}_p(\mu_{p^\infty})$, and every element of $\operatorname{Gal}(\mathbf{Q}_p^{\operatorname{ab}}/\mathbf{Q}_p)$ can be written as $\operatorname{Frob}_p^n \cdot g$ where $\operatorname{Frob}_p$ is the lift of $[z \mapsto z^p]$, $n \in \widehat{\mathbf{Z}}$, and $g \in \operatorname{Gal}(\mathbf{Q}_p^{\operatorname{ab}} / \mathbf{Q}_p^{\operatorname{nr}})$. If $\delta : \mathbf{Q}_p^\times \to \mathcal{O}_E^\times$ is a unitary character, then by local class field theory $\delta$ gives rise to a character (still denoted by $\delta$) of $\Gal(\Qpbar/\Qp)$ which is determined by the formula $\delta(\operatorname{Frob}_p^n \cdot g) = \delta(p)^{-n} \cdot\delta(\chi_{\operatorname{cycl}}(g))$ if $n \in \mathbf{Z}$. In other words, we normalize class field theory so that $p$ corresponds to the geometric Frobenius $\operatorname{Frob}_p^{-1}$. \section{Galois representations and $(\phi,\Gamma)$-modules} \label{pgch} In this chapter, we explain the theory of $(\phi,\Gamma)$-modules and its relation to $p$-adic representations. This allows us to define trianguline representations. \subsection{The Robba ring and $(\phi,\Gamma)$-modules}\label{pgsec} The \emph{Robba ring} $\mathcal{R}$ is the ring of power series $f(X) = \sum_{n \in \mathbf{Z}} a_n X^n$ where $a_n \in E$ such that $f(X)$ converges on an annulus of the form $\rho(f) \leq |X|_p < 1$. For example, the power series $t = \log(1+X)$ belongs to the Robba ring (and here one can take $\rho(t)=0$). 
The Robba ring is endowed with a Frobenius map $\phi$ given by $(\phi f)(X) = f((1+X)^p-1)$. Let $\Gamma$ be another notation for $\mathbf{Z}_p^\times$ with the isomorphism $\mathbf{Z}_p^\times \to \Gamma$ denoted by $a \mapsto [a]$. The Robba ring is endowed with an action of $\Gamma$ given by $([a]f)(X) = f((1+X)^a-1)$ and this action commutes with $\phi$. For example, we have $\phi(t)=pt$ and $[a](t) = at$. \begin{defi}\label{dfpgr} A \emph{$(\phi,\Gamma)$-module over $\mathcal{R}$} is a free $\mathcal{R}$-module of finite rank $d$, endowed with a semilinear Frobenius $\phi$ such that $\operatorname{Mat}(\phi)$ (the matrix of $\phi$ in some basis) belongs to $\operatorname{GL}_d(\mathcal{R})$ and a semilinear action of $\Gamma$ which commutes with $\phi$. \end{defi} There is then an obvious notion of morphism of $(\phi,\Gamma)$-modules, and this gives rise to the category of $(\phi,\Gamma)$-modules. This category is not abelian, since the quotient of two $(\phi,\Gamma)$-modules is not necessarily free (consider for instance the inclusion $t \cdot \mathcal{R} \hookrightarrow \mathcal{R}$). If $\delta : \mathbf{Q}_p^\times \to E^\times$ is a character, then we define $\mathcal{R}(\delta)$ as the $(\phi,\Gamma)$-module of rank $1$ having $e_\delta$ as a basis where $\phi(e_\delta)=\delta(p)e_\delta$ and $[a](e_\delta) = \delta(a)e_\delta$. The following theorem is proposition 3.1 of \cite{PC08}. \begin{theo}\label{pgr1} Every $(\phi,\Gamma)$-module of rank $1$ over $\mathcal{R}$ is isomorphic to $\mathcal{R}(\delta)$ for a well-defined character $\delta : \mathbf{Q}_p^\times \to E^\times$. \end{theo} \subsection{\'Etale $(\phi,\Gamma)$-modules and Galois representations}\label{rosec} The ring $\mathcal{E}^{\dagger}$ is the subring of $\mathcal{R}$ consisting of those power series $f(X) = \sum_{n \in \mathbf{Z}} a_n X^n$ for which the sequence $\{a_n\}_{n \in \mathbf{Z}}$ is bounded. 
The subring of $\mathcal{E}^\dag$ consisting of those $f(X) = \sum_{n \in \mathbf{Z}} a_n X^n$ for which $|a_n|_p \leq 1$ is denoted by $\mathcal{O}_{\mathcal{E}}^\dagger$. This is a henselian local ring with residue field $k_E \dpar{X}$. \begin{defi}\label{dfet} We say that a $(\phi,\Gamma)$-module over $\mathcal{R}$ is \emph{\'etale} if it has a basis in which $\operatorname{Mat}(\phi) \in \operatorname{GL}_d(\mathcal{O}_{\mathcal{E}}^\dag)$. \end{defi} In \S 2.3 of \cite{LB02}, a ring $\btdagrig$ is constructed which has the following properties: it is endowed with a Frobenius $\phi$ and a commuting action of $\Gal(\Qpbar/\Qp)$, and it contains the Robba ring $\mathcal{R}$. This inclusion is compatible with $\phi$ and with the action of $\Gamma$ on $\mathcal{R}$, in the sense that if $y \in \mathcal{R}$ and $g \in \Gal(\Qpbar/\Qp)$, then $g(y) = [\chi_{\operatorname{cycl}}(g)](y)$. One can think of $\btdagrig$ as some sort of ``algebraic closure'' of $\mathcal{R}$. If $\mathrm{D}$ is a $(\phi,\Gamma)$-module over $\mathcal{R}$, then $V(\mathrm{D}) = (\btdagrig \otimes_{\mathcal{R}} \mathrm{D})^{\phi=1}$ is an $E$-vector space, endowed with the action of $\operatorname{Gal}(\overline{\mathbf{Q}}_p/\mathbf{Q}_p)$ given by $g(x \otimes e) = g(x) \otimes [\chi_{\operatorname{cycl}}(g)](e)$. This $E$-vector space can be finite or infinite-dimensional in general, but we have the following theorem which combines results of Fontaine (theorem 3.4.3 of \cite{F90}), Cherbonnier-Colmez (corollary III.5.2 of \cite{CC98}) and Kedlaya (theorem 6.3.3 of \cite{KK05}). 
\begin{theo}\label{pgrep} If $\mathrm{D}$ is an \'etale $(\phi,\Gamma)$-module of rank $d$ over $\mathcal{R}$, then $V(\mathrm{D})$ is an $E$-linear representation of dimension $d$ of $\operatorname{Gal}(\overline{\mathbf{Q}}_p/\mathbf{Q}_p)$, and the resulting functor, from the category of \'etale $(\phi,\Gamma)$-modules over $\mathcal{R}$ to the category of $E$-linear representations of $\operatorname{Gal}(\overline{\mathbf{Q}}_p/\mathbf{Q}_p)$, is an equivalence of categories. \end{theo} We denote by $V \mapsto \mathrm{D}(V)$ the inverse functor, which to a $p$-adic representation attaches the corresponding \'etale $(\phi,\Gamma)$-module over $\mathcal{R}$. For example, the $(\phi,\Gamma)$-module $\mathcal{R}(\delta)$ is \'etale if and only if $\operatorname{val}_p(\delta(p))=0$. In this case, the representation $V(\mathcal{R}(\delta))$ is the character of $\Gal(\Qpbar/\Qp)$ corresponding to $\delta$ by local class field theory, as recalled in \S\ref{nc}. \subsection{Trianguline representations}\label{trsec} We can now give the definition of trianguline representations (see \S 0.4 of \cite{PC08}). \begin{defi}\label{dtr} If $V$ is a $p$-adic representation of $\Gal(\Qpbar/\Qp)$, then \begin{enumerate} \item we say that $V$ is \emph{split trianguline} if the $(\phi,\Gamma)$-module $\mathrm{D}(V)$ is a successive extension of $(\phi,\Gamma)$-modules of rank $1$; \item we say that $V$ is \emph{trianguline} if there exists a finite extension $F$ of $E$ such that $F \otimes_E V$ is split trianguline. \end{enumerate} \end{defi} In other words, a $p$-adic representation $V$ is split trianguline if and only if $\mathrm{D}(V)$ has a basis in which the matrices of $\phi$ and of the elements of $\Gamma$ are all upper-triangular. On the level of $(\phi,\Gamma)$-modules, the possible extension of scalars from $E$ to $F$ consists in extending scalars from the Robba ring with coefficients in $E$ to the Robba ring with coefficients in $F$. 
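Concretely, in dimension $2$ the upper-triangular condition can be spelled out in matrix form. The following sketch is ours; here $\alpha$ and $\beta_a$ denote arbitrary elements of $\mathcal{R}$:

```latex
% A rank 2 (phi,Gamma)-module D is an extension of R(delta_2) by R(delta_1)
% exactly when there is a basis (e_1, e_2) of D in which
\[
\operatorname{Mat}(\phi) =
\begin{pmatrix} \delta_1(p) & \alpha \\ 0 & \delta_2(p) \end{pmatrix},
\qquad
\operatorname{Mat}([a]) =
\begin{pmatrix} \delta_1(a) & \beta_a \\ 0 & \delta_2(a) \end{pmatrix}
\quad (a \in \mathbf{Z}_p^\times),
\]
% so that e_1 spans a copy of R(delta_1) inside D, and the image of e_2
% is a basis of the quotient R(delta_2).
```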
For example, we'll see later on that semistable representations are always trianguline, and they are split trianguline if and only if $E$ contains the eigenvalues of $\phi$ on $\mathrm{D}_{\operatorname{st}}(V)$. It is important to understand that a representation $V$ may well be trianguline without $V$ itself being an extension of representations of dimension $1$. Indeed, the definition is that $\mathrm{D}(V)$ is a successive extension of $(\phi,\Gamma)$-modules of rank $1$, but these $(\phi,\Gamma)$-modules are generally not \'etale and therefore do not correspond to subquotients of $V$. Note also that a $(\phi,\Gamma)$-module may be written as a successive extension of $(\phi,\Gamma)$-modules of rank $1$ in several different ways. In the rest of this survey, we'll see several examples of trianguline representations, but we give here the two main classes: \begin{enumerate} \item the representations of $\Gal(\Qpbar/\Qp)$ which become semistable when restricted to $\operatorname{Gal}(\overline{\mathbf{Q}}_p/\mathbf{Q}_p(\zeta_{p^n}))$ for some $n \geq 0$; \item the representations of $\Gal(\Qpbar/\Qp)$ which arise from overconvergent modular eigenforms of finite slope. \end{enumerate} In \cite{LBDP}, some explicit families of $2$-dimensional representations are constructed and the trianguline ones are determined. \subsection{Slopes of $(\phi,\Gamma)$-modules}\label{kedsec} We now recall Kedlaya's theory of slopes for $\phi$-modules over the ring $\mathcal{R}$ (i.e.\ free $\mathcal{R}$-modules of finite rank $d$ with a semilinear $\phi$ such that $\operatorname{Mat}(\phi) \in \operatorname{GL}_d(\mathcal{R})$). If $s = a/h \in \mathbf{Q}$ is written in lowest terms, then we say that a $\phi$-module over $\mathcal{R}$ is \emph{pure of slope $s$} if it is of rank $\geq 1$ and has a basis in which $\operatorname{Mat}(p^{-a} \phi^h) \in \operatorname{GL}_d(\mathcal{O}_{\mathcal{E}}^\dag)$ (being \'etale is therefore equivalent to being pure of slope zero). 
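In rank $1$ the purity condition can be checked by hand; the computation below (ours) determines the slope of $\mathcal{R}(\delta)$ directly from the definition:

```latex
% Slope of R(delta): let s = val_p(delta(p)) and write s = a/h in lowest
% terms. Since phi is E-linear, phi^h(e_delta) = delta(p)^h e_delta, so
\[
(p^{-a}\phi^h)(e_\delta) \;=\; p^{-a}\,\delta(p)^h\, e_\delta,
\qquad
\operatorname{val}_p\!\bigl(p^{-a}\,\delta(p)^h\bigr) \;=\; h \cdot s - a \;=\; 0,
\]
% so Mat(p^{-a} phi^h) is a unit of O_E, hence lies in GL_1(O_E^dagger),
% and R(delta) is pure of slope val_p(delta(p)).
```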
A $\phi$-module over $\mathcal{R}$ which is pure of a certain slope is said to be \emph{isoclinic}. For example, the $(\phi,\Gamma)$-module $\mathcal{R}(\delta)$ is pure of slope $\operatorname{val}_p(\delta(p))$. The main result of the theory of slopes is theorem 6.10 of \cite{KK04}. \begin{theo}\label{kedsl} If $\mathrm{D}$ is a $\phi$-module over $\mathcal{R}$, then there exists a unique filtration $\{0\} = \mathrm{D}_0 \subset \mathrm{D}_1 \subset \cdots \subset \mathrm{D}_\ell = \mathrm{D}$ of $\mathrm{D}$ by sub-$\phi$-modules such that: \begin{enumerate} \item for all $i \geq 1$, $\mathrm{D}_i / \mathrm{D}_{i-1}$ is an isoclinic $\phi$-module; \item if $s_i$ is the slope of $\mathrm{D}_i / \mathrm{D}_{i-1}$, then $s_1 < s_2 < \cdots < s_\ell$. \end{enumerate} \end{theo} If $\mathrm{D}$ is a $(\phi,\Gamma)$-module, then each of the $\mathrm{D}_i$ is stable under the action of $\Gamma$ since the filtration is unique, and hence each $\mathrm{D}_i$ is itself a $(\phi,\Gamma)$-module. A delicate but crucial point of the theory of slopes is that a $\phi$-module over $\mathcal{R}$ which is pure of slope $s$ has no subobject of slope $<s$ by theorem \ref{kedsl}, but it may well have subobjects of slope $> s$. This helps to explain the definition of trianguline representations: an \'etale $(\phi,\Gamma)$-module over $\mathcal{R}$ may be irreducible in the category of \'etale $(\phi,\Gamma)$-modules but it can still admit some nontrivial subobjects in the larger category of all $(\phi,\Gamma)$-modules. Theorem \ref{kedsl} also helps to understand theorem \ref{pgrep}. If $\mathrm{D}$ is a $(\phi,\Gamma)$-module, then $V(\mathrm{D}) = (\btdagrig \otimes_{\mathcal{R}} \mathrm{D})^{\phi=1}$ is constructed by solving $\phi$-equations determined by the matrix of $\phi$ on $\mathrm{D}$. 
If the slopes of $\mathrm{D}$ are $>0$, then these equations have no nonzero solutions, while if the slopes of $\mathrm{D}$ are $<0$, then the space of solutions is infinite dimensional (see theorem A of \cite{LB09} for more precise results). The condition that $\mathrm{D}$ is \'etale is exactly the right one for $V(\mathrm{D})$ to be a finite dimensional $E$-vector space of the correct dimension. \section{Examples of trianguline representations} \label{exch} In this chapter, we explain how to relate $(\phi,\Gamma)$-modules and $p$-adic Hodge theory, which allows us to give important examples of trianguline representations. After that, we explain how to compute extensions of $(\phi,\Gamma)$-modules and Colmez' resulting construction of all $2$-dimensional trianguline representations. \subsection{Fontaine's rings of periods}\label{fonsec} The purpose of Fontaine's theory is to sort through $p$-adic representations, and to classify the interesting ones by using objects from linear algebra. Recall that Fontaine has constructed in \cite{FP} a number of rings, for example $\bcris$, $\bst$ and $\bdr$. The construction of these rings is quite complicated but they have a number of properties, some of which we now recall and which suffice for this survey. All of them are $\mathbf{Q}_p$-algebras endowed with an action of $\Gal(\Qpbar/\Qp)$ and some extra structures, which are all compatible with the action of $\Gal(\Qpbar/\Qp)$. The ring $\bst$ has a Frobenius $\phi$ and a monodromy operator $N$ which satisfy the relation $N \phi = p \phi N$, and the ring $\bcris$ is then $\bst^{N=0}$. The ring $\bdr$ is actually a field, and is endowed with a filtration. The ring $\bcris$ contains $\widehat{\mathbf{Q}}_p^{\operatorname{nr}}$ and the choice of $\log_p(p)$ gives rise to an injective map $\overline{\mathbf{Q}}_p \otimes_{\mathbf{Q}_p^{\operatorname{nr}}} \bst \to \bdr$.
If $V$ is a $p$-adic representation of $\Gal(\Qpbar/\Qp)$ and $\ast \in \{$cris, st, dR$\}$, then we set $\mathrm{D}_\ast(V) = (\mathbf{B}_\ast \otimes_{\mathbf{Q}_p} V)^{\Gal(\Qpbar/\Qp)}$. The space $\mathrm{D}_\ast(V)$ is then an $E$-vector space of dimension $\leq \dim_E(V)$ and we say that $V$ is \emph{crystalline} or \emph{semistable} or \emph{de Rham} if we have equality of dimensions with $\ast$ being cris, st or dR. The $E$-vector space $\mathrm{D}_{\operatorname{dR}}(V)$ is then endowed with an $E$-linear filtration, the space $\mathrm{D}_{\operatorname{st}}(V) \subset \mathrm{D}_{\operatorname{dR}}(V)$ is a filtered $(\phi,N)$-module (that is, a finite dimensional $E$-vector space with an $E$-linear map $\phi$, an $E$-linear map $N$ satisfying the relation $N \phi = p \phi N$, and a filtration by $E$-vector subspaces, which is not assumed to be stable under either $\phi$ or $N$) and $\mathrm{D}_{\operatorname{cris}}(V)=\mathrm{D}_{\operatorname{st}}(V)^{N=0}$ is a filtered $\phi$-module. If $D$ is a filtered $(\phi,N)$-module, then we define the Newton number $t_N(D)$ as the $p$-adic valuation of $\phi$ on $\det(D)$ and the Hodge number $t_H(D)$ as the unique integer $h$ such that $\operatorname{Fil}^h(\det(D))=\det(D)$ and $\operatorname{Fil}^{h+1}(\det(D))=\{0\}$. We say that $D$ is \emph{admissible} if $t_H(D)=t_N(D)$ and if $t_H(D') \leq t_N(D')$ for every $(\phi,N)$-stable subspace $D'$ of $D$. The following theorem combines results of Fontaine (\S 5.4 of \cite{FST}) and the Colmez-Fontaine theorem (theorem A of \cite{CF00}). \begin{theo}\label{cf} If $V$ is a semistable representation of $\Gal(\Qpbar/\Qp)$, then $\mathrm{D}_{\operatorname{st}}(V)$ is an admissible filtered $(\phi,N)$-module, and the functor $V \mapsto \mathrm{D}_{\operatorname{st}}(V)$ gives an equivalence of categories: \{semistable representations\} $\to$ \{admissible filtered $(\phi,N)$-modules\}. 
\end{theo} All of these constructions also work for representations of $\Gal(\Qpbar/K)$ if $K$ is a finite extension of $\mathbf{Q}_p$. In particular, we say that a $p$-adic representation of $\Gal(\Qpbar/\Qp)$ is \emph{potentially semistable} if its restriction to $\Gal(\Qpbar/K)$ is semistable for some finite extension $K$ of $\mathbf{Q}_p$. Potentially semistable representations of $\Gal(\Qpbar/\Qp)$ are always de Rham. \subsection{$p$-adic Hodge theory}\label{ptsec} If $X$ is a proper and smooth scheme over $\mathbf{Q}_p$, then the \'etale cohomology groups $\mathrm{H}^i_{\operatorname{\acute{e}t}}(X_{\overline{\mathbf{Q}}_p},\mathbf{Q}_p)$ are $p$-adic representations of $\Gal(\Qpbar/\Qp)$. If $X$ has good reduction at $p$, then we can consider its crystalline cohomology groups which have the structure of filtered $\phi$-modules, and if $X$ has bad semistable reduction at $p$, then one can replace the crystalline cohomology groups with a generalization: the log-crystalline cohomology groups, which have the structure of filtered $(\phi,N)$-modules. We then have the following theorem of Tsuji (theorem 0.2 of \cite{TT99}), which is the former conjecture $C_{\operatorname{st}}$ of Fontaine-Jannsen (see \S 6.2 of \cite{FST}). \begin{theo}\label{tsu} If $X$ is a proper scheme over $\mathbf{Z}_p$ with semistable reduction, then $\mathrm{H}^i_{\mathrm{\operatorname{\acute{e}t}}}(X_{\overline{\mathbf{Q}}_p},\mathbf{Q}_p)$ is a semistable representation of $\Gal(\Qpbar/\Qp)$, and there is a natural isomorphism of filtered $(\phi,N)$-modules: $\mathrm{D}_{\operatorname{st}}(\mathrm{H}^i_{\mathrm{\operatorname{\acute{e}t}}}(X_{\overline{\mathbf{Q}}_p},\mathbf{Q}_p)) = \mathrm{H}^i_{\mathrm{log}\text{-}\operatorname{cris}}(X)$. \end{theo} If $f$ is a modular eigenform, then one can attach to it a $p$-adic representation $V_p f$ as recalled in theorem \ref{dsfm}. 
The representation $V_p f$ is always potentially semistable and a result of Saito (the main theorem of \cite{TS97}) completely describes the restriction of $V_p f$ to $D_p$. If $p \nmid N$, then Saito's theorem was previously proved by Scholl (see theorem 1.2.4 of \cite{AS90}). In this case, $V_p f$ is crystalline and $\mathrm{D}_{\operatorname{cris}}((V_p f)^*) = D_{k,a_p}$ where $k$ is the weight of $f$, $a_p$ is the eigenvalue of the Hecke operator $T_p$, and $D_{k,a_p} = E e_1 \oplus E e_2$ with \[ \operatorname{Mat}(\phi) = \pmat{0 & -1 \\ \varepsilon(p)p^{k-1} & a_p} \text{ and } \operatorname{Fil}^i D_{k,a_p} = \begin{cases} D_{k,a_p} & \text{ if $i \leq 0$,} \\ E e_1 & \text{ if $1 \leq i \leq k-1$,} \\ \{0\} & \text{ if $i \geq k$.} \end{cases} \] The following is known as the Fontaine-Mazur conjecture (conjecture 1 of \cite{FM95}). \begin{conj}\label{fontmaz} If $V$ is an irreducible $p$-adic representation of $\operatorname{Gal}(\overline{\mathbf{Q}}/\mathbf{Q})$, whose restriction to $I_\ell$ is trivial for all $\ell$ except a finite number, and whose restriction to $D_p$ is potentially semistable, then $V$ is a subquotient of an \'etale cohomology group of some algebraic variety over $\mathbf{Q}$. \end{conj} If in addition $\dim(V)=2$ and $V$ is odd, then we actually expect $V$ to be the representation attached to a modular eigenform, and we have the following precise conjecture (conjecture 3c of \cite{FM95}). The \emph{Hodge-Tate weights} of a de Rham representation $V$ are the opposites of the jumps of the filtration on $\mathrm{D}_{\operatorname{dR}}(V)$. 
\begin{conj}\label{fm} If $V$ is an irreducible $2$-dimensional $p$-adic representation of $\operatorname{Gal}(\overline{\mathbf{Q}}/\mathbf{Q})$, whose restriction to $I_\ell$ is trivial for all $\ell$ except a finite number, and whose restriction to $D_p$ is potentially semistable with distinct Hodge-Tate weights, then $V$ is a twist of the Galois representation attached to a cuspidal eigenform with weight $k \geq 2$. \end{conj} Let us write $\overline{V}$ for the reduction modulo $\mathfrak{m}_E$ of $V$. \begin{theo}\label{emkis} Conjecture \ref{fm} is true, if we suppose that $\overline{V}$ satisfies some technical hypotheses. \end{theo} This theorem has been proved independently by Kisin (it is the main theorem of \cite{MK09}) and by Emerton (theorem 1.2.4 of \cite{ME10}). The ``technical hypotheses'' of Kisin are the following ($\chi_{\operatorname{cycl}}$ is now the reduction mod $p$ of the cyclotomic character, and $*$ denotes a cocycle which may be equal to $0$). \begin{enumerate} \item $p \neq 2$ and $\overline{V}$ is odd, \item $\overline{V} \mid_{\operatorname{Gal}(\overline{\mathbf{Q}}/\mathbf{Q}(\zeta_p))}$ is irreducible, \item $\overline{V} \mid_{\operatorname{Gal}(\overline{\mathbf{Q}}_p/\mathbf{Q}_p)}$ is not of the form $\smat{\eta \chi_{\operatorname{cycl}} & * \\ 0 & \eta}$ for any character $\eta$. \end{enumerate} The ``technical hypotheses'' of Emerton are (1) and (2) and \begin{itemize} \item[3'.] $\overline{V} \mid_{\operatorname{Gal}(\overline{\mathbf{Q}}_p/\mathbf{Q}_p)}$ is not of the form $\smat{\eta & * \\ 0 & \eta \chi_{\operatorname{cycl}}}$ nor of the form $\smat{\eta & * \\ 0 & \eta}$ for any character $\eta$. \end{itemize} \subsection{Crystalline and semistable $(\phi,\Gamma)$-modules}\label{bersec} In \S \ref{fonsec}, we recalled the definition of $\mathrm{D}_{\operatorname{cris}}(V)$ and $\mathrm{D}_{\operatorname{st}}(V)$ for a $p$-adic representation $V$. We now explain how to extend this definition to $(\phi,\Gamma)$-modules. 
Recall that we denote by $t$ the element $\log(1+X) \in \mathcal{R}$. \begin{defi}\label{dfdc} If $\mathrm{D}$ is a $(\phi,\Gamma)$-module, let $\mathrm{D}_{\operatorname{cris}}(\mathrm{D}) = (\mathcal{R}[1/t] \otimes_{\mathcal{R}} \mathrm{D})^{\Gamma}$. \end{defi} In order to define $\mathrm{D}_{\operatorname{st}}(\mathrm{D})$, we add a variable to $\mathcal{R}$ as follows. The power series $\log(\phi(X)/X^p)$ and $\log(\gamma(X)/X)$ (for $\gamma \in \Gamma$) both converge in $\mathcal{R}$. Let $\log(X)$ be a variable which we adjoin to $\mathcal{R}$, with the Frobenius and the action of $\Gamma$ extending to $\mathcal{R}[\log(X)]$ by $\phi(\log(X)) = p \log(X) + \log(\phi(X)/X^p)$ and $\gamma(\log(X)) = \log(X) + \log(\gamma(X)/X)$. We also define a monodromy map $N$ on $\mathcal{R}[\log(X)]$ by $N=-p/(p-1) \cdot d/d \log(X)$. \begin{defi}\label{dfds} If $\mathrm{D}$ is a $(\phi,\Gamma)$-module, let $\mathrm{D}_{\operatorname{st}}(\mathrm{D}) = (\mathcal{R}[\log(X),1/t] \otimes_{\mathcal{R}} \mathrm{D})^{\Gamma}$. \end{defi} Definitions \ref{dfdc} and \ref{dfds} make sense for any $(\phi,\Gamma)$-module. We say that $\mathrm{D}$ is \emph{crystalline} or \emph{semistable} if $\mathrm{D}_{\operatorname{cris}}(\mathrm{D})$ or $\mathrm{D}_{\operatorname{st}}(\mathrm{D})$ is an $E$-vector space of dimension $\operatorname{rk}(\mathrm{D})$. The space $\mathrm{D}_{\operatorname{st}}(\mathrm{D})$ is then a $(\phi,N)$-module and $\mathrm{D}_{\operatorname{cris}}(\mathrm{D}) = \mathrm{D}_{\operatorname{st}}(\mathrm{D})^{N=0}$. One can also define a filtration on these two spaces by using the filtration of $\mathcal{R}$ given by ``the order of vanishing at $\zeta_{p^n}-1$ for $n \gg 0$'' so that $\mathrm{D}_{\operatorname{st}}(\mathrm{D})$ becomes a filtered $(\phi,N)$-module (which in general is not admissible). The following result is theorem 0.2 of \cite{LB02}. 
\begin{theo}\label{recip} If $V$ is a $p$-adic representation of $\Gal(\Qpbar/\Qp)$, and if $\mathrm{D}(V)$ is the attached $(\phi,\Gamma)$-module, then $\mathrm{D}_{\operatorname{cris}}(V)=\mathrm{D}_{\operatorname{cris}}(\mathrm{D}(V))$ and $\mathrm{D}_{\operatorname{st}}(V)=\mathrm{D}_{\operatorname{st}}(\mathrm{D}(V))$. \end{theo} The proof of this requires a number of delicate computations in several of Fontaine's rings of periods. Recall that $\btdagrig$ is the ring used in \S \ref{rosec} in order to attach $p$-adic representations to $(\phi,\Gamma)$-modules. One can show that the ring $\bcris$ of Fontaine admits a subring $\btrig$ such that \begin{enumerate} \item for any $p$-adic representation $V$, the inclusion $(\btrig[1/t] \otimes_{\mathbf{Q}_p} V)^{\Gal(\Qpbar/\Qp)} \subset \mathrm{D}_{\operatorname{cris}}(V)$ is an isomorphism; \item there is a natural inclusion $\btrig \subset \btdagrig$. \end{enumerate} These facts allow one to go from the usual $p$-adic periods to the theory of $(\phi,\Gamma)$-modules, and then to prove theorem \ref{recip}. The spaces $\mathrm{D}_{\operatorname{st}}(V)$ and $\mathrm{D}_{\operatorname{st}}(\mathrm{D}(V))$ are then equal as subspaces of $\btdagrig[1/t] \otimes_{\mathbf{Q}_p} V$. It is also possible to define $\mathrm{D}_{\operatorname{dR}}(\mathrm{D})$ as well as de Rham $(\phi,\Gamma)$-modules in the same way, and to prove an analogue of theorem \ref{recip}, but this is slightly more complicated and we do not give the recipe here. If $V$ is a semistable representation, and if $M$ is a $(\phi,N)$-stable subspace of $\mathrm{D}_{\operatorname{st}}(V)$, then it is easy to see that $( \mathcal{R}[\log(X),1/t] \otimes_E M )^{N=0} \cap \mathrm{D}(V)$ is a sub $(\phi,\Gamma)$-module of $\mathrm{D}(V)$ of rank $\dim(M)$. Using this observation and theorem \ref{recip}, we get the following result. \begin{theo}\label{sstrig} Semistable representations of $\Gal(\Qpbar/\Qp)$ are trianguline. 
\end{theo} We see that the $(\phi,\Gamma)$-module of a semistable representation may then admit several different triangulations, corresponding to flags of $\mathrm{D}_{\operatorname{st}}(V)$ stable under $\phi$ and $N$. Another consequence of theorem \ref{recip}, which is proved in the same way, is the following useful result (proposition 4.3 of \cite{PC08}). \begin{theo}\label{dcnz} If $V$ is a $p$-adic representation of dimension $2$, then $V$ is trianguline if and only if there exists a character $\eta$ of $\Gal(\Qpbar/\Qp)$ such that $\mathrm{D}_{\operatorname{cris}}(V(\eta)) \neq 0$. \end{theo} \subsection{Weights of trianguline representations}\label{wtsec} Recall that $p$-adic representations of $\Gal(\Qpbar/\Qp)$ have weights: Sen's theory (\S 2.2 of \cite{SS80}) allows us to attach to $V$ a polynomial $P(X) \in E[X]$ of degree $\dim(V)$, whose roots are the \emph{generalized Hodge-Tate weights of $V$} (warning: in \cite{BC09} as in other places, the opposite sign is chosen for the weights. For us, the cyclotomic character has weight $+1$). For example if $V$ is de Rham, then these weights are the opposites of the jumps of the filtration on $\mathrm{D}_{\operatorname{dR}}(V)$, and are then the classical Hodge-Tate weights of $V$. If $V$ is a trianguline representation and if $\{0\} = \mathrm{D}_0 \subset \mathrm{D}_1 \subset \cdots \subset \mathrm{D}_d = \mathrm{D}(V)$ is a triangulation of $V$, then each $\mathrm{D}_i / \mathrm{D}_{i-1}$ is of rank $1$ and hence of the form $\mathcal{R}(\delta_i)$ by theorem \ref{pgr1}. If $\delta : \mathbf{Q}_p^\times \to E^\times$ is a character, then $w(\delta)=\log_p \delta(u)/\log_p u$ does not depend on the choice of $u \in 1+p\mathbf{Z}_p \setminus \{1\}$ and is called the \emph{weight} of $\delta$. \begin{theo}\label{wtri} If $V$ is a trianguline representation and $\delta_1$, \dots, $\delta_d$ are as above, then $w(\delta_1)$, \dots, $w(\delta_d)$ are the generalized Hodge-Tate weights of $V$. 
\end{theo} The following theorem (proposition 2.3.4 of \cite{BC09}) can be seen as a generalization of Perrin-Riou's theorem 1.5 of \cite{PR94} that ``ordinary representations are semistable''. \begin{theo}\label{trdr} Let $V$ be a trianguline representation. If $V$ admits a triangulation with characters $\delta_1$, \dots, $\delta_d$ such that $w(\delta_1)$, \dots, $w(\delta_d)$ are integers and $w(\delta_1) > \cdots > w(\delta_d)$, then $V$ is de Rham. \end{theo} \subsection{Cohomology of $(\phi,\Gamma)$-modules}\label{liusec} Since trianguline representations are successive extensions of $(\phi,\Gamma)$-modules of rank $1$, an important part of the study of these representations is the determination of the extension groups of $(\phi,\Gamma)$-modules. Let $\mathrm{D}$ be a $(\phi,\Gamma)$-module and let $\gamma$ be a topological generator of $\Gamma$ (the group $\mathbf{Z}_p^\times$ is topologically cyclic if $p\neq 2$; if $p=2$, then the definitions have to be slightly modified). Let $C(\phi,\gamma)$ be the complex (first considered by Herr in \cite{LH98}) \[ 0 \to \mathrm{D} \xrightarrow{z \mapsto ((\gamma-1)z,(\phi-1)z)} \mathrm{D} \oplus \mathrm{D} \xrightarrow{(x,y) \mapsto (\phi-1)x - (\gamma-1)y} \mathrm{D} \to 0. \] The $E$-vector spaces $\mathrm{H}^i(C(\phi,\gamma))$ do not depend on the choice of $\gamma$ and we define the cohomology groups of $\mathrm{D}$ to be $\mathrm{H}^i(\mathrm{D}) = \mathrm{H}^i(C(\phi,\gamma))$. Note that by construction $\mathrm{H}^i(\mathrm{D})=0$ if $i \geq 3$. The following theorem (theorems 1.1 and 1.2 and \S 3.1 of \cite{RL08}) summarizes several properties of the groups $\mathrm{H}^i(\mathrm{D})$; we write $h^i(\mathrm{D})$ for $\dim_E \mathrm{H}^i(\mathrm{D})$. 
\begin{theo}\label{pgcoh} If $\mathrm{D}$ is a $(\phi,\Gamma)$-module, then: \begin{enumerate} \item the $\mathrm{H}^i(\mathrm{D})$ are finite dimensional $E$-vector spaces and $h^0(\mathrm{D})-h^1(\mathrm{D})+h^2(\mathrm{D})=-\operatorname{rk}(\mathrm{D})$; \item $\mathrm{H}^0(\mathrm{D})=\mathrm{D}^{\Gamma=1,\phi=1}$ and $\mathrm{H}^1(\mathrm{D})=\operatorname{Ext}^1(\mathcal{R},\mathrm{D})$; \item if $V$ is a $p$-adic representation, then $\mathrm{H}^i(\mathrm{D}(V)) \simeq \mathrm{H}^i(\Gal(\Qpbar/\Qp),V)$; \end{enumerate} \end{theo} Combining (3) and (1), we recover Tate's Euler characteristic formula. In the special case when $\mathrm{D}$ is of rank $1$, Colmez has computed explicitly $\mathrm{H}^1(\mathrm{D})$. We have the following result (theorem 0.2 of \cite{PC08}) which we use in \S\ref{exsec}. Let $x : \mathbf{Q}_p^\times \to E^\times$ be the map $z \mapsto z$ and let $|\cdot|_p : \mathbf{Q}_p^\times \to E^\times$ be the map $z \mapsto p^{-\operatorname{val}_p(z)}$. \begin{theo}\label{extpgr} If $\delta_1$ and $\delta_2 : \mathbf{Q}_p^\times \to E^\times$ are two characters, then $\operatorname{Ext}^1(\mathcal{R}(\delta_2), \mathcal{R}(\delta_1))$ is a $1$-dimensional $E$-vector space, unless $\delta_1 \delta_2^{-1}$ is either of the form $x^{-i}$ with $i \geq 0$ or $|x|_px^i$ with $i \geq 1$, in which case $\operatorname{Ext}^1(\mathcal{R}(\delta_2), \mathcal{R}(\delta_1))$ is of dimension $2$. \end{theo} In the first case, there is therefore one nonsplit extension $0 \to \mathcal{R}(\delta_1) \to \mathrm{D} \to \mathcal{R}(\delta_2) \to 0$ while in the second case, the set of such extensions is parameterized by $\mathbf{P}^1(E)$. The parameter for such an extension is called the \emph{$\mathcal{L}$-invariant} and turns out to be a generalization of the usual $\mathcal{L}$-invariants (see \cite{PCLI} for a discussion of $\mathcal{L}$-invariants of modular forms). 
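It is instructive to combine theorems \ref{pgcoh} and \ref{extpgr} in rank $1$; the bookkeeping below is ours. Since $\operatorname{Ext}^1(\mathcal{R}(\delta_2), \mathcal{R}(\delta_1)) = \operatorname{Ext}^1(\mathcal{R}, \mathcal{R}(\delta))$ with $\delta = \delta_1\delta_2^{-1}$, the Euler characteristic formula controls the dimensions:

```latex
% Rank 1 bookkeeping: Ext^1(R(delta_2), R(delta_1)) = H^1(R(delta)) with
% delta = delta_1 delta_2^{-1}, and h^0 - h^1 + h^2 = -1 gives
\[
h^1(\mathcal{R}(\delta)) \;=\; 1 + h^0(\mathcal{R}(\delta)) + h^2(\mathcal{R}(\delta)).
\]
% Generically h^0 = h^2 = 0 and h^1 = 1. If delta = x^{-i} with i >= 0,
% then t^i e_delta is fixed by phi and by Gamma (since phi(t) = pt and
% [a]t = at), so h^0 = 1 and h^1 = 2; the case delta = |x|_p x^i with
% i >= 1 is dual to it, giving h^2 = 1 and again h^1 = 2.
```

This recovers the dichotomy of theorem \ref{extpgr} between the $1$-dimensional and $2$-dimensional cases.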
\subsection{Trianguline representations of dimension $2$}\label{exsec} We now explain how we can use the results of the preceding paragraph in order to construct a parameter space for all irreducible trianguline representations of dimension $2$. If $\delta : \mathbf{Q}_p^\times \to E^\times$ is a character, then we set $u(\delta)=\operatorname{val}_p(\delta(p))$, so that $u(\delta)$ is the slope of $\mathcal{R}(\delta)$ as in \S\ref{kedsec}. Recall that $w(\delta)$ is the weight of $\delta$, defined in \S \ref{wtsec}. If $V$ is a trianguline representation of dimension $2$, then $\mathrm{D}(V)$ is an extension of two $(\phi,\Gamma)$-modules of rank $1$, so that we have an exact sequence $0 \to \mathcal{R}(\delta_1) \to \mathrm{D}(V) \to \mathcal{R}(\delta_2) \to 0$. The fact that $\mathrm{D}(V)$ is \'etale implies that $u(\delta_1) + u(\delta_2)=0$, and (because of theorem \ref{kedsl}) $u(\delta_1) \geq 0$. If $u(\delta_1)=u(\delta_2)=0$, then $\mathcal{R}(\delta_1)$ and $\mathcal{R}(\delta_2)$ are \'etale, and $V$ itself is an extension of two representations. Let $\mathcal{S}$ be the set $\{ (\delta_1,\delta_2,\mathcal{L}) \}$ where $\delta_1$ and $\delta_2$ are characters $\mathbf{Q}_p^\times \to E^\times$, and $\mathcal{L} = \infty$ if $\delta_1 \delta_2^{-1}$ is neither of the form $x^{-i}$ with $i \geq 0$ nor of the form $|x|_p x^i$ with $i \geq 1$, and $\mathcal{L} \in \mathbf{P}^1(E)$ otherwise. Theorem \ref{extpgr} above allows us to construct for every $s \in \mathcal{S}$ a nontrivial extension $\mathrm{D}(s)$ of $\mathcal{R}(\delta_2)$ by $\mathcal{R}(\delta_1)$, and vice versa. If $s \in \mathcal{S}$, then we set $w(s)=w(\delta_1)-w(\delta_2)$. We define $\mathcal{S}_*$ as the set of $s \in \mathcal{S}$ such that $u(\delta_1)+u(\delta_2)=0$ and $u(\delta_1)>0$, and we then set $u(s)=u(\delta_1)$ if $s \in \mathcal{S}_*$. We define the ``crystalline'', ``semistable'' and ``nongeometric'' parameter spaces as follows. 
\begin{enumerate} \item $\mathcal{S}^{\operatorname{cris}}_* = \{ s \in \mathcal{S}_*$ such that $w(s) \in \mathbf{Z}_{\geq 1}$ and $u(s)<w(s)$ and $\mathcal{L}=\infty \}$; \item $\mathcal{S}^{\operatorname{st}}_* = \{ s \in \mathcal{S}_*$ such that $w(s) \in \mathbf{Z}_{\geq 1}$ and $u(s)<w(s)$ and $\mathcal{L} \neq \infty \}$; \item $\mathcal{S}^{\operatorname{ng}}_* = \{ s \in \mathcal{S}_*$ such that $w(s) \notin \mathbf{Z}_{\geq 1} \}$. \end{enumerate} Let $\mathcal{S}_{\operatorname{irr}} = \mathcal{S}^{\operatorname{cris}}_* \sqcup \mathcal{S}^{\operatorname{st}}_* \sqcup \mathcal{S}^{\operatorname{ng}}_*$. \begin{theo}\label{dsirr} If $s \in \mathcal{S}_{\operatorname{irr}}$, then $\mathrm{D}(s)$ is \'etale, and the attached representation $V(s)$ is trianguline and irreducible. Every $2$-dimensional irreducible trianguline representation is of the form $V(s)$ for some $s \in \mathcal{S}_{\operatorname{irr}}$ (after possibly extending scalars), and we have $V(s)=V(s')$ if and only if $s \in \mathcal{S}^{\operatorname{cris}}_*$ and $s'=(x^{w(s)} \delta_2, x^{-w(s)} \delta_1,\infty)$. \end{theo} In particular, if $s \in \mathcal{S} \setminus \mathcal{S}_{\operatorname{irr}}$, then either $\mathrm{D}(s)$ is \'etale but $V(s)$ is reducible, or $\mathrm{D}(s)$ is not even \'etale (this happens for example if $w(s) \in \mathbf{Z}_{\geq 1}$ and $u(s)>w(s)$). These cases are examined in \S 3 of \cite{PC08}. If $s \in \mathcal{S}^{\operatorname{cris}}_*$, then the representation $V(s)$ becomes crystalline over an abelian extension of $\mathbf{Q}_p$ after possibly twisting by a character. If $s \in \mathcal{S}^{\operatorname{st}}_*$, then the representation $V(s)$ becomes semistable (non crystalline) over an abelian extension of $\mathbf{Q}_p$ after possibly twisting by a character. If $s \in \mathcal{S}^{\operatorname{ng}}_*$, then $V(s)$ is not a twist of a de Rham representation. 
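As a concrete membership check (an example of ours), fix an integer $k \geq 2$ and $y \in E^\times$, and write $\mu_y$ for the unramified character $z \mapsto y^{\operatorname{val}_p(z)}$ and $x_0$ for the character $z \mapsto z|z|_p$, so that $x_0(p)=1$:

```latex
% A membership check for S^cris_*: assume 0 < val_p(y) < k-1 and set
\[
s = (\delta_1, \delta_2, \infty), \qquad
\delta_1 = \mu_y, \quad \delta_2 = \mu_{1/y}\, x_0^{1-k}.
\]
% Then u(delta_1) = val_p(y) > 0 and u(delta_2) = -val_p(y), so that
% u(delta_1) + u(delta_2) = 0; moreover w(delta_1) = 0 and w(delta_2) = 1-k,
% hence w(s) = k-1 lies in Z_{>=1} and u(s) = val_p(y) < w(s),
% so s belongs to S^cris_*.
```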
In the cases where $V(s)$ is crystalline or semistable, Colmez has explicitly determined in \S 4.5 and \S 4.6 of \cite{PC08} the filtered $\phi$- and $(\phi,N)$-modules $\mathrm{D}_{\operatorname{cris}}(V(s))$ and $\mathrm{D}_{\operatorname{st}}(V(s))$ in terms of $s$. Let us give as an example the description of the parameter $s$ corresponding to the representation $V_p f$ arising from a modular eigenform of level $N$ prime to $p$, weight $k \geq 2$ and character $\varepsilon$. Let $a_p \in \mathcal{O}_E$ be the eigenvalue of the Hecke operator $T_p$. We assume that $a_p \in \mathfrak{m}_E$ so that $V_p f$ restricted to $D_p$ is irreducible. If $y \in E^\times$, let $\mu_y : \mathbf{Q}_p^\times \to E^\times$ be the character defined by $\mu_y(z)=y^{\operatorname{val}_p(z)}$. Let $x_0 : \mathbf{Q}_p^\times \to E^\times$ be the character defined by $x_0(z) = z|z|_p$, so that $x_0(p)=1$ and $x_0(z)=z$ if $z \in \mathbf{Z}_p^\times$. The result below then follows from the computations of \S 4.5 of \cite{PC08}. \begin{theo}\label{tricry} We have $(V_p f)^* = V(\mu_y,\mu_{\varepsilon(p)/y} x_0^{1-k},\infty)$ where $y \in \mathfrak{m}_E$ is such that $a_p = y + \varepsilon(p) p^{k-1}/y$. \end{theo} The equation for $y$ has (in general) two solutions, giving two different parameters $s$ and $s'$ for the same representation. This corresponds to the phenomenon described at the end of theorem \ref{dsirr}. The constructions of this paragraph have been generalized to $2$-dimensional trianguline representations of $\Gal(\Qpbar/K)$ by Nakamura in \cite{KN09}, for $K$ a finite extension of $\mathbf{Q}_p$. \section{Arithmetic applications} \label{aach} In this chapter, we explain the role that trianguline representations play in the $p$-adic local Langlands correspondence and then in the theory of overconvergent modular forms. 
\subsection{The $p$-adic local Langlands correspondence}\label{llsec} We only give a cursory description of the $p$-adic local Langlands correspondence, and refer to the Bourbaki seminar \cite{LBNB} for a detailed survey and adequate references. The $p$-adic local Langlands correspondence for $\operatorname{GL}_2(\mathbf{Q}_p)$ is a bijection, between certain $2$-dimensional $p$-adic representations of $\Gal(\Qpbar/\Qp)$, and certain representations of $\operatorname{GL}_2(\mathbf{Q}_p)$. The first examples of this correspondence were constructed by Breuil, for semistable and crystalline representations of $\Gal(\Qpbar/\Qp)$. These examples inspired Colmez to use $(\phi,\Gamma)$-modules in order to give a ``functorial'' construction of these examples, and he realized that the natural condition to impose on the $p$-adic representations which he was considering was that the attached $(\phi,\Gamma)$-module be an extension of two $(\phi,\Gamma)$-modules of rank $1$. This is what led him to define trianguline representations. In the notations of \S\ref{exsec}, if $s \in \mathcal{S}_{\operatorname{irr}}$, then the representation of $\operatorname{GL}_2(\mathbf{Q}_p)$ corresponding to $V(s)$ by the $p$-adic local Langlands correspondence is a $p$-adic unitary Banach space representation $\Pi(s)$ of $\operatorname{GL}_2(\mathbf{Q}_p)$ constructed as follows. Let $\log_{\mathcal{L}}$ be the logarithm normalized by $\log_{\mathcal{L}}(p)=\mathcal{L}$ (if $\mathcal{L}=\infty$, we set $\log_\infty=\operatorname{val}_p$) and if $s \in \mathcal{S}$, let $\delta_s$ be the character $(x|x|_p)^{-1} \delta_1 \delta_2^{-1}$. Note that if $s \in \mathcal{S}_{\operatorname{irr}}$ then we can have $\mathcal{L} \neq \infty$ only if $\delta_s$ is of the form $x^i$ with $i \geq 0$. We can define the notion of a class $\mathcal{C}^u$ function for $u \in \mathbf{R}_{\geq 0}$ generalizing the usual case $u \in \mathbf{Z}_{\geq 0}$. 
We denote by $\operatorname{B}(s)$ the space of functions $f : \mathbf{Q}_p \to E$ which are of class $\mathcal{C}^{u(s)}$ and such that $x \mapsto \delta_s(x) f(1/x)$ extends at $0$ to a function of class $\mathcal{C}^{u(s)}$. The space $\operatorname{B}(s)$ is then endowed with an action of $\operatorname{GL}_2(\mathbf{Q}_p)$ given by the formula: \[ \left[ \pmat{a & b \\ c & d} \cdot f \right] (y) = (x|x|_p\delta_1^{-1})(ad-bc) \cdot \delta_s(cy+d) \cdot f\left( \frac{ay+b}{cy+d}\right). \] The space $M(s)$ is defined by \begin{enumerate} \item if $\delta_s$ is not of the form $x^i$ with $i \geq 0$, then $M(s)$ is the space generated by $1$ and by the functions $y \mapsto \delta_s(y-a)$ with $a \in \mathbf{Q}_p$; \item if $\delta_s$ is of the form $x^i$ with $i \geq 0$, then $M(s)$ is the intersection of $\operatorname{B}(s)$ with the space generated by the functions $y \mapsto \delta_s(y-a)$ and $y \mapsto \delta_s(y-a)\log_{\mathcal{L}}(y-a)$ with $a \in \mathbf{Q}_p$. \end{enumerate} We finally set $\Pi(s)=\operatorname{B}(s)/\widehat{M}(s)$ where $\widehat{M}(s)$ is the closure of $M(s)$ inside $\operatorname{B}(s)$. \begin{theo}\label{coltri} The unitary Banach space representation $\Pi(s)$ of $\operatorname{GL}_2(\mathbf{Q}_p)$ is nonzero, topologically irreducible and admissible in the sense of Schneider-Teitelbaum. \end{theo} These representations $\Pi(s)$ are called the ``unitary principal series'' and the above theorem is theorem 0.4 of \cite{PC10}. Colmez then proceeds in \cite{PCGL} to attach to any $2$-dimensional $p$-adic representation of $\Gal(\Qpbar/\Qp)$ a representation of $\operatorname{GL}_2(\mathbf{Q}_p)$, and he proves that they have the required properties by using the fact that this is true for trianguline representations, that his construction is suitably continuous, and that trianguline representations are Zariski dense in the deformation space of all $2$-dimensional $p$-adic representations. 
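As a sanity check on the displayed action formula, one can verify numerically that it defines a (right) action: the cocycle $j(g,y)=cy+d$ satisfies $j(g_2g_1,y)=j(g_2,g_1\cdot y)\,j(g_1,y)$, so $g_1\cdot(g_2\cdot f)=(g_2g_1)\cdot f$. A toy sketch (Python, over $\mathbf{Q}$ instead of $\mathbf{Q}_p$, with stand-in characters $\delta_s(z)=z^2$ and $\chi(z)=z^{-1}$ in place of $(x|x|_p\delta_1^{-1})$; all names are ours):

```python
from fractions import Fraction as Fr

def act(g, f, delta_s, chi):
    """[g . f](y) = chi(det g) * delta_s(c*y + d) * f((a*y + b)/(c*y + d)),
    mirroring the displayed action formula; g = (a, b, c, d)."""
    a, b, c, d = g
    det = a * d - b * c
    return lambda y: chi(det) * delta_s(c * y + d) * f((a * y + b) / (c * y + d))

def matmul(g, h):
    """2x2 matrix product, flattened as (a, b, c, d)."""
    a1, b1, c1, d1 = g
    a2, b2, c2, d2 = h
    return (a1 * a2 + b1 * c2, a1 * b2 + b1 * d2,
            c1 * a2 + d1 * c2, c1 * b2 + d1 * d2)
```

With exact rational arithmetic the identity $g_1\cdot(g_2\cdot f)=(g_2g_1)\cdot f$ holds on the nose at any point where no denominator vanishes.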
\subsection{Families of Galois representations}\label{locsec} In this paragraph, we recall the existence of certain rigid analytic spaces which parameterize some families of $p$-adic Galois representations. Recall that the rigid analytic space attached to $\mathbf{Q}_p \otimes_{\mathbf{Z}_p} \mathbf{Z}_p\dcroc{X}$ is the $p$-adic open unit disk and that more generally, the rigid analytic space attached to $E \otimes_{\mathcal{O}_E} \mathcal{O}_E \dcroc{X_1,\hdots,X_n}$ is the $n$-dimensional ball over $E$. We start with a simple example; the group $1+p\mathbf{Z}_p$ is topologically generated by the element $1+p$ (unless $p=2$; the constructions of this paragraph can easily be adapted to work when $p=2$). This implies that a character on $1+p \mathbf{Z}_p$ is determined by its value at $1+p$. Consider the ring $R=\mathbf{Z}_p\dcroc{X}$ and the character $\eta_R : 1+p \mathbf{Z}_p \to R^\times$ given by $\eta_R(1+p) = 1+X$. Any character $\eta : 1+p \mathbf{Z}_p \to 1+\mathfrak{m}_E$ is obtained from $\eta_R$ by the formula $\eta(g) = f \circ \eta_R(g)$, where $f : R \to \mathcal{O}_E$ is given by $f(X) = \eta(1+p)-1$. There is therefore a bijection between the $E$-valued points of the space attached to $\mathbf{Q}_p \otimes_{\mathbf{Z}_p} \mathbf{Z}_p\dcroc{X}$ and the set of characters $\eta : 1+p \mathbf{Z}_p \to 1+\mathfrak{m}_E$. The ring $R$ is an example of a \emph{universal deformation ring}, and the rigid analytic space attached to $R[1/p]$ (the $p$-adic open unit disk) parameterizes the family of all $\overline{\mathbf{Q}}_p$-valued characters of $1+p\mathbf{Z}_p$. Suppose now that $\overline{\eta} : \mathbf{Z}_p^\times \to k_E^\times$ is a character, and let $E_0$ be the smallest extension of $\mathbf{Q}_p$ whose residue field is $k_E$ (that is, $E_0 = E \cap \mathbf{Q}_p^{\operatorname{nr}}$). 
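As an aside, the universal-character bijection above can be made completely concrete on the subgroup generated by $1+p$: the universal character sends $1+p$ to $1+X$, so specializing along $f(X)=\eta(1+p)-1$ gives $\eta((1+p)^n)=\eta(1+p)^n$. A minimal sketch (Python, working modulo $p^{\text{prec}}$ as a stand-in for elements of $1+\mathfrak{m}_E$; the function name is ours):

```python
def specialize_character(eta_of_1p, n, p, prec):
    """Evaluate eta((1+p)**n) modulo p**prec for the character eta of
    1 + p*Z_p with eta(1+p) = eta_of_1p.

    The universal character eta_R sends 1+p to 1+X, so
    eta_R((1+p)**n) = (1+X)**n; composing with f(X) = eta(1+p) - 1
    gives eta((1+p)**n) = eta(1+p)**n, computed here modulo p**prec."""
    assert (eta_of_1p - 1) % p == 0, "eta(1+p) must lie in 1 + p*Z_p"
    return pow(eta_of_1p, n, p ** prec)
```

For $p=5$ and $\eta(1+p)=11$, the values $\eta((1+p)^n)$ are simply $11^n \bmod 5^{\text{prec}}$, and multiplicativity holds by construction.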
The natural parameter space for characters $\eta : \mathbf{Z}_p^\times \to \overline{\mathbf{Z}}_p^\times$ whose reduction modulo $\mathfrak{m}_{\overline{\mathbf{Z}}_p}$ is $\overline{\eta}$ is, as above, the rigid analytic space attached to $E_0 \otimes_{\mathcal{O}_{E_0}} \mathcal{O}_{E_0} \dcroc{X}$. We denote this space by $\mathscr{X}_{\overline{\eta}}$. There is likewise a parameter space $\mathscr{X}^u_{\overline{\delta}}$ for characters $\delta : \mathbf{Q}_p^\times \to \overline{\mathbf{Q}}_p^\times$ which have a fixed slope $u$ and such that $\overline{\delta(p)/p^u} \in k_E^\times$ and $\overline{\delta} \mid_{\mathbf{Z}_p^\times} \in k_E^\times$ are fixed, and this parameter space is the rigid analytic space attached to $E_0 \otimes_{\mathcal{O}_{E_0}} \mathcal{O}_{E_0} \dcroc{X_1,X_2}$. Denote by $\delta(x)$ the character corresponding to a point $x \in \mathscr{X}^u_{\overline{\delta}}$. Colmez proves in \S 5.1 of \cite{PC08} that the representations $V(s)$ live in analytic families of trianguline representations, and his construction has been completed by Chenevier (see \S 3 of \cite{GC10}). \begin{theo}\label{trifam} If $(\delta_1,\delta_2,\infty) \in \mathcal{S}_{\operatorname{irr}}$, then there exists a neighborhood $\mathscr{U}$ of $(\delta_1,\delta_2) \in \mathscr{X}^{u_1}_{\overline{\delta}_1} \times \mathscr{X}^{u_2}_{\overline{\delta}_2}$ and a free $\mathcal{O}_{\mathscr{U}}$-module $V$ of rank $2$ with an action of $\Gal(\Qpbar/\Qp)$ such that $V(u)=V(\delta_1(u),\delta_2(u),\infty)$ if $u \in \mathscr{U}$. 
\end{theo} Recall that Mazur generalized the construction of $\mathscr{X}_{\overline{\eta}}$ in \cite{BM89}, where he proved that for certain groups $G$ and representations $\overline{\rho} : G \to \operatorname{GL}_d(\overline{\mathbf{F}}_p)$, there exists a parameter space $\mathscr{X}_{\overline{\rho}}$ for the set of all isomorphism classes of representations $\rho : G \to \operatorname{GL}_d(\overline{\mathbf{Z}}_p)$ having reduction modulo $\mathfrak{m}_{\overline{\mathbf{Z}}_p}$ isomorphic to $\overline{\rho}$. This applies for example if $\operatorname{End}(\overline{\rho})=\overline{\mathbf{F}}_p$ and if either $G=\operatorname{Gal}(\mathbf{Q}_S/\mathbf{Q})$ is the Galois group of the maximal extension $\mathbf{Q}_S$ of $\mathbf{Q}$ which is unramified outside of a finite set of places $S$, or if $G=\Gal(\Qpbar/\Qp)$. If $G=\Gal(\Qpbar/\Qp)$ and $d=2$, then for most representations $\overline{\rho} : G \to \operatorname{GL}_d(k_E)$, the rigid analytic space $\mathscr{X}_{\overline{\rho}}$ is the one attached to $E_0 \otimes_{\mathcal{O}_{E_0}} \mathcal{O}_{E_0} \dcroc{X_1,X_2,X_3,X_4,X_5}$. Theorem \ref{trifam} then implies that inside the $5$-dimensional space $\mathscr{X}_{\overline{\rho}}$, there is a countable number of $4$-dimensional subspaces corresponding to trianguline representations. The ``trianguline locus'' of $\mathscr{X}_{\overline{\rho}}$ is Zariski dense (it is however a ``thin subset'' of $\mathscr{X}_{\overline{\rho}}$ in the terminology of \S 4 of \cite{BC10}). This can be compared with the following result (theorems B and C of \cite{BC08}). \begin{theo}\label{clsht} If $b \geq a$, then the locus of $\mathscr{X}_{\overline{\rho}}$ corresponding to crystalline (or semistable or de Rham or Hodge-Tate) representations, with Hodge-Tate weights in the range $[a,b]$, is a closed analytic subspace of $\mathscr{X}_{\overline{\rho}}$. 
\end{theo} \subsection{Overconvergent modular forms}\label{eksec} Overconvergent modular forms are objects defined by Coleman in \cite{RC96}, which are $p$-adic generalizations of classical modular forms (for a survey about overconvergent modular forms, see the Bourbaki seminar \cite{ME09}). An ``overconvergent modular form of finite slope'' has a $q$-expansion, which is a $p$-adic limit of $q$-expansions of classical modular eigenforms. One can attach Galois representations to these objects, by taking the limit of the Galois representations attached to the eigenforms in the converging sequence. In this paragraph, we directly define some $p$-adic representations of $\operatorname{Gal}(\overline{\mathbf{Q}}/\mathbf{Q})$ by a $p$-adic interpolation process, and merely recall that these representations are the ones which are attached to ``overconvergent modular eigenforms of finite slope''. Let $N \geq 1$ be an integer prime to $p$ and let $S$ be the set of primes dividing $pN$ and $\infty$. Fix some $2$-dimensional $\overline{\mathbf{F}}_p$-representation $\overline{\rho}$ of $\operatorname{Gal}(\mathbf{Q}_S/\mathbf{Q})$. Let $\mathscr{X}^S_{\overline{\rho}}$ be the rigid analytic space attached to the universal deformation space for $\overline{\rho}$, so that every $x \in \mathscr{X}^S_{\overline{\rho}}(E)$ corresponds to an $E$-linear representation $V_x$ of $\operatorname{Gal}(\overline{\mathbf{Q}}/\mathbf{Q})$, which is unramified outside of $S$, and whose reduction modulo $\mathfrak{m}_E$ is isomorphic to $\overline{\rho}$. For most representations $\overline{\rho}$, $\mathscr{X}^S_{\overline{\rho}}$ is a $3$-dimensional rigid analytic ball by results of Weston (see theorem 1 of \cite{TW04}). 
Let $\mathcal{C}_{\mathrm{cl}}$ be the set of points $(x,\lambda) \in \mathscr{X}^S_{\overline{\rho}} \times \mathbf{G}_\mathrm{m}$ such that $V_x$ is the representation attached to a modular eigenform $f$ on $\Gamma_1(N)$ and $\lambda$ is a root of $X^2-a_p X+\varepsilon(p)p^{k-1}$ (where $k$ is the weight of $f$ and $T_p(f)=a_p f$). Let $\mathcal{C}$ be the Zariski closure of $\mathcal{C}_{\mathrm{cl}}$ inside $\mathscr{X}^S_{\overline{\rho}} \times \mathbf{G}_\mathrm{m}$. By \S 1.5 of \cite{CM98}, we have the following result. \begin{theo}\label{cmec} The variety $\mathcal{C}$ is a rigid analytic curve. \end{theo} Coleman and Mazur then show in \cite{CM98} that the Galois representations $V_x$ corresponding to points $(x,\lambda) \in \mathcal{C}(E)$ are the ones which are attached to the ``level $N$ overconvergent modular eigenforms of finite slope'' defined by Coleman. The curve $\mathcal{C}$ is called the \emph{eigencurve} (note that the construction of the eigencurve is given in \cite{CM98} assuming that $N=1$ and that $p>2$. The general case is treated in \cite{KB07}). The projection of $\mathcal{C}$ on $\mathscr{X}^S_{\overline{\rho}}$ is then a complicated space (for instance, it has infinitely many double points) which is the ``infinite fern'' of \cite{BM97} and \cite{GM98}, see \S 2.5 of \cite{ME09}. The following result (a consequence of theorem 6.3 of \cite{MK03} combined with theorem \ref{dcnz}) describes the restriction to $\Gal(\Qpbar/\Qp)$ of the representations of $\operatorname{Gal}(\overline{\mathbf{Q}}/\mathbf{Q})$ which are constructed in this way. \begin{theo}\label{kmtr} If $f$ is an overconvergent modular eigenform of finite slope of level $N$ (i.e.\ if $(V_p f,\lambda) \in \mathcal{C}(E)$ by the above remark), then $V_p f$ is a trianguline representation. 
\end{theo} The idea is that this theorem is true if $f$ is a classical modular eigenform, by using Saito's theorem, and Kisin deduces that $V_p f$ satisfies the hypothesis of theorem \ref{dcnz} from the classical case by a $p$-adic interpolation argument. We then have the following result, Emerton's generalization of the Fontaine-Mazur conjecture for modular forms. \begin{theo}\label{fmtri} If $V$ is an irreducible $2$-dimensional $p$-adic representation of $\operatorname{Gal}(\overline{\mathbf{Q}}/\mathbf{Q})$, such that \begin{enumerate} \item the restriction of $V$ to $I_\ell$ is trivial for all $\ell$ except a finite number, \item the restriction of $V$ to $D_p$ is trianguline, \item $\overline{V}$ satisfies hypotheses (1), (2) and (3') of \S \ref{ptsec}, \end{enumerate} then $V$ is a twist of the Galois representation attached to an overconvergent cuspidal eigenform of finite slope. \end{theo} We now describe the parameter $s \in \mathcal{S}$ such that $(V_p f)^* = V(s)$, just as we did for classical modular forms at the end of \S \ref{exsec}. Let $f$ be a finite slope overconvergent modular eigenform of level $N$ and character $\varepsilon$. Let $k=w(\det(V_p f))+1 \in E$ (so that if $f$ is classical, then $k$ is the weight of $f$), let $\lambda \in E$ be such that $(V_pf,\lambda) \in \mathcal{C}$, and let $\mu_\lambda : \mathbf{Q}_p^\times \to E^\times$ be the character $z \mapsto \lambda^{\operatorname{val}_p(z)}$. The following result is proposition 5.2 of \cite{GC08} (it is a direct consequence of theorem 6.3 of \cite{MK03} and theorem 0.8 of \cite{PC08}). Note that if $k \in \mathbf{Z}_{\geq 1}$ and either $\operatorname{val}_p(\lambda) =0$ or $\operatorname{val}_p(\lambda) =k-1$, then $V_p f$ is reducible in an obvious way. 
\begin{theo}\label{gcoe} We have $(V_p f)^* = V(\delta_1,\det(V_p f)^{-1} \cdot \delta_1^{-1},\mathcal{L}_f)$ where \begin{enumerate} \item if $k \in \mathbf{Z}_{\geq 1}$ and $0 < \operatorname{val}_p(\lambda)<k-1$, then $\delta_1=\mu_\lambda$ and if $\mathcal{L}_f \neq \infty$, then $V_pf$ is semistable and $\mathcal{L}_f$ is the $\mathcal{L}$-invariant of $f$; \item if $k \in \mathbf{Z}_{\geq 1}$ and $\operatorname{val}_p(\lambda)>k-1$, then $\delta_1=x^{1-k}\mu_\lambda$ and $\mathcal{L}_f = \infty$; \item if $k \notin \mathbf{Z}_{\geq 1}$, then $\delta_1=\mu_\lambda$ and $\mathcal{L}_f = \infty$. \end{enumerate} \end{theo} Note that case (1) corresponds to $\mathcal{S}_*^{\operatorname{cris}} \sqcup \mathcal{S}_*^{\operatorname{st}}$ while cases (2) and (3) correspond to $\mathcal{S}_*^{\operatorname{ng}}$. Coleman's ``small slope criterion'' for the classicality of overconvergent modular eigenforms (\S 6 of \cite{RC96}) can then be interpreted as follows in terms of Galois representations: if $k \geq 1$ and $0 < \operatorname{val}_p(\lambda)<k-1$, then the representation $V_p f$ is potentially semistable at $p$, and therefore the overconvergent modular form $f$ is classical, as predicted by the Fontaine-Mazur conjecture (theorem \ref{emkis}). We finish this paragraph by discussing the weight-characters of overconvergent cuspidal eigenforms of finite slope. Let $\mathscr{W}$ be the \emph{weight space}, that is the parameter space for characters of $\mathbf{Z}_p^\times$. The space $\mathscr{W}$ is the union of the $p-1$ balls $\mathscr{X}_{\overline{\chi}_{\operatorname{cycl}}^i}$ where $0 \leq i \leq p-2$ (unless $p=2$; $\mathscr{W}$ is then the union of two balls). If $V$ is a $p$-adic representation of $\Gal(\Qpbar/\Qp)$, then by class field theory $x_0^{-1} \cdot \det(V)$ gives a character of $\mathbf{Q}_p^\times$ whose restriction to $\mathbf{Z}_p^\times$ is the \emph{weight-character} $\kappa_V$ of $V$. 
This gives rise to a map $\kappa : \mathscr{X}^S_{\overline{\rho}} \to \mathscr{W}$ and by composition to a map $\mathcal{C} \to \mathscr{W}$ which satisfies the following property by \S 1.5 of \cite{CM98}. \begin{theo}\label{egwt} The map $\mathcal{C} \to \mathscr{W}$ is, locally in the domain, finite and flat. \end{theo} We now explain that if $N=1$ and $p \in \{2,3,5,7\}$, then one can give a ``local'' realization of the eigencurve. A point $(\kappa,\lambda) \in \mathscr{W} \times \mathbf{G}_\mathrm{m}$ is said to be \emph{special} if $\kappa = x_0^k$ for some $k \geq 2$ and $\lambda^2=p^{k-2}$. Let $\mathscr{W} \widetilde{\times} \mathbf{G}_\mathrm{m}$ be the blow-up of $\mathscr{W} \times \mathbf{G}_\mathrm{m}$ at the special points (so that $\mathscr{W} \widetilde{\times} \mathbf{G}_\mathrm{m}$ can be seen as a subspace of the space $\mathcal{S}$ of \S\ref{exsec}). Consider the map $\mathcal{C} \to \mathscr{W} \widetilde{\times} \mathbf{G}_\mathrm{m}$ given by $(V_x,\lambda) \mapsto (\kappa_x,\lambda,\mathcal{L}_x)$ at the special points and by $(V_x,\lambda) \mapsto (\kappa_x,\lambda)$ elsewhere. The following theorem is the main result of \cite{GC08}. \begin{theo}\label{egcol} The map $\mathcal{C} \to \mathscr{W} \widetilde{\times} \mathbf{G}_\mathrm{m}$ is a rigid analytic map, and if $N=1$ and $p \in \{2,3,5,7\}$, then it is a closed immersion. \end{theo} Some important ideas underlying the proof of this theorem are Colmez' theorem 0.5 of \cite{PCLI} expressing the $\mathcal{L}$-invariant as the derivative of the $U_p$-eigenvalue, the fact that if $p \in \{2,3,5,7\}$ and $S=\{p,\infty\}$, then an odd $2$-dimensional $p$-adic representation of $\operatorname{Gal}(\mathbf{Q}_S/\mathbf{Q})$ is determined by its restriction to $\Gal(\Qpbar/\Qp)$ (proposition 1.8 of \cite{GC08}), and a local study of families of trianguline representations. 
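To close this paragraph, the case analysis of theorem \ref{gcoe}, together with the reducibility remark preceding it and Coleman's small slope criterion, can be summarized by a toy decision procedure (Python; encoding a non-integral weight as \texttt{None} is our own convention):

```python
def gcoe_case(k, val_lambda):
    """Which case of the trichotomy applies: k is the weight if it lies in
    Z_{>=1} (else None), val_lambda = val_p(lambda).  Returns 1, 2, 3,
    or 'reducible' on the boundary val_lambda in {0, k-1}."""
    if k is None:                 # k not in Z_{>=1}: delta_1 = mu_lambda, L = infinity
        return 3
    if val_lambda in (0, k - 1):  # V_p f reducible in an obvious way
        return 'reducible'
    if 0 < val_lambda < k - 1:    # crystalline/semistable locus
        return 1
    return 2                      # val_lambda > k - 1: delta_1 = x^{1-k} mu_lambda
```

Case (1) is exactly Coleman's small slope range $0<\operatorname{val}_p(\lambda)<k-1$, in which $f$ is classical.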
\subsection{Trianguline representations and Selmer groups}\label{bcsec} Since the $(\phi,\Gamma)$-module attached to a trianguline representation $V$ has a particularly simple structure, one can use this structure to study the cohomology groups attached to $V$, in particular the Selmer group and its variants. Some of the techniques which are available in the ordinary case for that study (such as \cite{RG89}) can be extended to the case of trianguline representations. For example, it is possible to give a generalized definition of the usual $\mathcal{L}$-invariant (see Benois' \cite{DB09}, where an $\mathcal{L}$-invariant is constructed for representations which are not necessarily $2$-dimensional), and to study the Selmer groups corresponding to families of trianguline representations, such as those carried by the eigencurve or more general eigenvarieties (as in the book \cite{BC09} by Bella\"{\i}che and Chenevier, and in Pottharst's \cite{JP08} and \cite{JP10}). In this way, it is possible to prove some new cases of the Bloch-Kato conjectures, by establishing some ``lower semicontinuity'' results about the rank of the Selmer groups (see \cite{BC09} and Bella\"{\i}che's \cite{JB10}). The systematic study of families of trianguline representations, in connection with the theory of families of automorphic forms, is an increasingly important topic about which we say nothing more here, because it is rapidly progressing and deserves a survey of its own. \noindent \textbf{Acknowledgements}: I had several very helpful conversations with Ga\"etan Chenevier. The two referees' detailed comments allowed me to correct many (hopefully, all) inaccuracies.
\section{Introduction} Recent years have witnessed a great deal of interest in the possible existence of mass dimension two condensates in gauge theories, see for example \cite{Gubarev:2000eu,Gubarev:2000nz,Verschelde:2001ia,Dudal:2002pq,Dudal:2003vv,Vercauteren:2007gx,Dudal:2005na,Boucaud:2001st,Furui:2005he,Gubarev:2005it,Browne:2003uv,Andreev:2006vy,RuizArriola:2006gq,Chernodub:2008kf} and references therein for approaches based on phenomenology, operator product expansion, lattice simulations, an effective potential and the string perspective. There is special interest in the operator \begin{equation} \label{mingaugeorbit} A_{\min}^2 = \min_{U\in SU(N)} \mathcal V^{-1} \int d^4x (A_\mu^U)^2 \;, \end{equation} which is gauge invariant due to the minimization along the gauge orbit. It should be mentioned that obtaining the global minimum is delicate due to the problem of gauge (Gribov) ambiguities \cite{Gribov:1977wm}. As is well known, local gauge invariant dimension two operators do not exist in Yang-Mills gauge theories. The nonlocality of \eqref{mingaugeorbit} is best seen when it is expressed as \cite{Lavelle:1995ty}\footnote{We will always work in Euclidean spacetime.} \begin{equation} \label{akwadraatlang} A_{\min}^{2} =\int d^{4}x\left[ A_{\mu }^{a}\left( \delta _{\mu \nu }-\frac{\partial _{\mu }\partial _{\nu }}{\partial ^{2}}\right) A_{\nu}^{a} - gf^{abc}\left( \frac{\partial _{\nu}}{\partial ^{2}}\partial A^{a}\right) \left( \frac{1}{\partial^{2}}\partial {A}^{b}\right) A_{\nu }^{c}\right] + \mathcal O(A^{4}) \;. \end{equation} The relevance of the condensate $\langle A_\mu^2\rangle_\mathrm{min}$ was discussed in \cite{Gubarev:2000eu,Gubarev:2000nz}, where it was shown that it can serve as a measure for the monopole condensation in the case of compact QED. All efforts so far have concentrated on the Landau gauge $\partial_\mu A_\mu = 0$. 
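At the quadratic level, the expansion \eqref{akwadraatlang} says that $A^2_{\min}$ only retains the transverse part of each gluon mode, via the projector $\delta_{\mu\nu} - \partial_\mu\partial_\nu/\partial^2$. A toy momentum-space sketch (Python, one color component, real-valued modes; not code from any of the cited works):

```python
def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

def a2min_quadratic(modes):
    """Quadratic part of A^2_min: project each momentum mode A(k) onto
    its transverse part A_T = (delta - k k^T / k^2) A and sum |A_T|^2,
    so pure-gauge (longitudinal) modes drop out.
    `modes` is a list of (k, A) pairs of 4-vectors."""
    total = 0.0
    for k, A in modes:
        k2 = dot(k, k)
        if k2 > 0:
            coeff = dot(k, A) / k2
            A_T = [a - coeff * ki for a, ki in zip(A, k)]
        else:
            A_T = list(A)
        total += dot(A_T, A_T)
    return total
```

In the Landau gauge $\partial_\mu A_\mu = 0$ the field is already transverse, which is one way to see why $A^2_{\min}$ reduces to the local operator $A_\mu^2$ there.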
The preference for this particular gauge fixing is obvious since the nonlocal expression \eqref{akwadraatlang} reduces to the local operator $A^2_\text{min} = A_\mu^2$. In the case of a local operator, the Operator Product Expansion (OPE) becomes applicable, and consequently a measurement of the soft (infrared) part $\langle A_\mu^2\rangle_\text{OPE}$ becomes possible. Such an approach was followed in e.g. \cite{Boucaud:2001st} by analyzing the appearance of $1/q^2$ power corrections in (gauge variant) quantities like the gluon propagator or the strong coupling constant, defined in a particular way, from lattice simulations. Let us mention that attention was already paid to the condensate $\langle A_\mu^2\rangle$ in the OPE context three decades ago \cite{Lavelle:1988eg}. Recently, Chernodub and Ilgenfritz \cite{Chernodub:2008kf} have considered the asymmetry in the dimension two condensate. They performed lattice simulations, computing the expectation value of the electric-magnetic asymmetry in the Landau gauge, which they defined as \begin{equation}\label{ci} \Delta_{A^2} = \langle g^2 A_0^2 \rangle - \frac1{d-1} \sum_{i=1}^{d-1} \langle g^2 A_i^2 \rangle\, . \end{equation} At zero temperature, this quantity must, of course, be zero due to Lorentz invariance\footnote{We shall deliberately use the term Lorentz invariance, though we shall be working in Euclidean space throughout this paper.}. Nor can it diverge, since the divergences at finite $T$ are the same as at $T=0$; hence the asymmetry is, in principle, finite and can be computed without renormalization at all temperatures. Their results are depicted in \figurename\ \ref{maximplot}. At high temperatures, general thermodynamic arguments predict a polynomial behavior $\propto T^2$, and this is also what the authors of \cite{Chernodub:2008kf} found\footnote{A perturbative computation gives a positive proportionality constant, contrary to what is erroneously found in \cite{Chernodub:2008kf} (see \cite{maxim}). 
The lattice computations for $T<6\;T_c$ find a negative proportionality constant, so one would expect the true high-temperature behavior to set in only at yet higher temperatures.}. For the low-temperature behavior, however, one would expect an exponential fall-off with the lowest glueball mass in the exponent, $\Delta_{A^2}\sim e^{-m_{\text{gl}}/T}$. Instead, they found an exponential with a mass $m$ significantly smaller than $m_\text{gl}$. \begin{figure}\begin{center} \includegraphics[width=6cm]{maxim.eps} \caption{Lattice results for the electric-magnetic asymmetry $\Delta_{A^2}$ as a function of the temperature for SU(2) pure gauge theory, as found in \cite{Chernodub:2008kf}. \label{maximplot}} \end{center}\end{figure} \section{$\langle A_\mu^2\rangle$ and $\Delta_{A^2}$ in the LCO formalism} In order to get more insight into the behavior of the asymmetry, we have investigated it using the formalism presented in \cite{Verschelde:2001ia}. A meaningful effective potential for the condensation of the \emph{Local Composite Operator} (LCO) $A_\mu^2$ was constructed by means of the LCO method. This is a nontrivial task due to the compositeness of the considered operator. We consider pure Euclidean SU(N) Yang--Mills theories with action \begin{equation} S_\text{YM+gf} = \int d^4x \left(\frac14 (F_{\mu\nu}^a)^2 + b^a\partial_\mu A_\mu^a + \bar c^a\partial_\mu\mathcal D_\mu^{ab}c^b\right) \;, \end{equation} where the last two terms implement the Landau gauge condition. We couple the operator $A_\mu^2$ to the Yang--Mills action by means of a source $J$: \begin{equation} S_J = S_\text{YM} + \int d^4x \left(\frac12J(A_\mu^a)^2 - \frac12\zeta J^2\right) \;. \end{equation} The last term, quadratic in the source $J$, is necessary to kill the divergences in vacuum correlators like $\langle A^2(x)A^2(y)\rangle$ for $x\to y$, or equivalently in the generating functional $W[J]$, defined as \begin{equation} e^{-W[J]} = \int [\text{fields}] e^{-S_J} \;. 
\end{equation} The presence of the LCO parameter $\zeta$ ensures a homogeneous renormalization group equation for $W[J]$. Its arbitrariness can be overcome by making it a function $\zeta(g^2)$ of the strong coupling constant $g^2$, allowing one to fix $\zeta(g^2)$ order by order in perturbation theory in accordance with the renormalization group equation. In order to access the electric-magnetic asymmetry, a second source $K_{\mu\nu}$ is coupled to the traceless part of $A_\mu^a A_\nu^a$. This second operator will not mix with $A_\mu^2$ itself, which allows control over the renormalization group of these two operators. Again a term quadratic in the new source must be added, introducing a second parameter $\omega(g^2)$ which can again be fixed order by order in accordance with the renormalization group equation. We have proven the all-order perturbative renormalizability of this extension of the formalism using the algebraic method based on the Ward identities \cite{Dudal:2009tq}. In order to recover an energy interpretation, the terms $\propto J^2$ and $K_{\mu\nu}^2$ can be removed by employing a Hubbard--Stratonovich transformation, amounting to inserting the following unities into the path integral: \begin{equation} \label{unities} 1 = \int [d \sigma] e^{-\frac{1}{2\zeta} \int d^d x \left( \frac{\sigma}{g} + \frac{1}{2} A_\mu^2 - \zeta J \right)^2 } = \int [d \varphi_{\mu\nu}] e^{- \frac{1}{2\omega} \int d^d x \left( \frac{\varphi_{\mu\nu}}{g} + \frac{1}{2} A_\mu A_\nu - \omega K_{\mu\nu} \right)^2}\;, \end{equation} with $\varphi_{\mu\nu}$ a traceless field, leading to the action \begin{equation} S = S_\text{YM} + \int d^dx\left[\frac{1}{2\zeta} \frac{\sigma^2}{g^2} + \frac{1}{2\zeta g}\sigma A_\mu^2 + \frac{1}{8\zeta} (A_\mu^2)^2 + \frac{1}{2\omega}\frac{\varphi_{\mu\nu}^2}{g^2} + \frac{1}{2\omega g}\varphi_{\mu\nu} A_\mu A_\nu + \frac{1}{8\omega} (A^a_\mu A^a_\nu)^2\right] \;. 
\end{equation} Starting from this, it is possible to compute the effective potential $V(\sigma,\varphi_{\mu\nu})$, given the correspondences \begin{equation} \langle\sigma\rangle = -\frac g2\langle A_\mu^2\rangle \;, \quad \langle\varphi_{\mu\nu}\rangle = -\frac g2 \left\langle A_\mu A_\nu-\frac{\delta_{\mu\nu}}d A_\lambda^2\right\rangle \;. \end{equation} Now we determine the values of $\zeta$ and $\omega$ from the renormalization group equations for the sources $J$ and $K_{\mu\nu}$. For this, some anomalous dimensions and renormalization factors have to be computed up to one loop order higher than the order we are ultimately interested in. We have done this using the {\sc Mincer} algorithm. The final result is up to one-loop order: \begin{equation} \zeta = \dfrac{N^2 -1}{16 \pi^2} \left[\dfrac{9}{13} \dfrac{16 \pi^2}{ g^2 N} + \dfrac{161}{52} \right] \;, \qquad \omega = \dfrac{N^2 -1}{16 \pi^2} \left[\dfrac{1}{4} \dfrac{16 \pi^2}{ g^2 N} + \dfrac{73}{1044} \right] \;. \end{equation} \section{Computation and minimization of the action} The effective potential $V(\sigma,\varphi_{\mu\nu})$ can now be computed using standard techniques. We have taken the background fields $\sigma$ and $\varphi_{\mu\nu}$ to have space-time independent vacuum expectation values and $\varphi_{\mu\nu}$ to be the traceless diagonal matrix $\operatorname{diag}(A,-\frac1{d-1}A,\ldots,-\frac1{d-1}A)$ \cite{Vercauteren:2010rk}. \subsection{Low temperatures} \begin{figure}\begin{center} \includegraphics[width=5cm]{akwadraatass.eps} \caption{The $\langle g^2A_\mu^2\rangle$ condensate (full line) and the asymmetry $\Delta_{A^2}$ (dashed line) as functions of the temperature, in units of $\Lambda_{\overline{\text{MS}}}$. \label{eerstefiguur}} \end{center}\end{figure} Computing the effective action up to one-loop order at zero temperature yields only the minimum found in \cite{Verschelde:2001ia}, as expected. 
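For orientation, the one-loop expressions for $\zeta$ and $\omega$ quoted above are easily evaluated numerically; a sketch (Python; the function name is ours):

```python
import math

def lco_parameters(g2N, N):
    """One-loop LCO parameters zeta and omega, as quoted above,
    as functions of g2N = g^2 N and the number of colors N."""
    pref = (N ** 2 - 1) / (16 * math.pi ** 2)
    zeta = pref * ((9 / 13) * 16 * math.pi ** 2 / g2N + 161 / 52)
    omega = pref * ((1 / 4) * 16 * math.pi ** 2 / g2N + 73 / 1044)
    return zeta, omega
```

Both parameters are dominated by their tree-level $1/g^2$ pieces at weak coupling, with $\zeta g^2 N \to \tfrac{9}{13}(N^2-1)$.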
For finite but still not too high temperatures, the potential can be minimized numerically. The result is depicted in \figurename\ \ref{eerstefiguur}. We see that the asymmetry rises at low temperatures, which agrees qualitatively with the findings of \cite{Chernodub:2008kf}. The low-temperature expansion of $\Delta_{A^2}$ reads \begin{equation} \Delta_{A^2} = (N^2-1) \frac{g^2\pi^2}{30} \left(1-\frac{85}{1044}\frac{g^2N}{(4\pi)^2}\right) \frac{T^4}{m^2} \;, \end{equation} and there is no correction to $\langle A_\mu^2\rangle$ at this order. Note that we find a polynomial behavior $\propto T^4/m^2$ instead of an exponential one. This does not agree with the lattice results, but in \cite{Chernodub:2008kf} the lowest temperatures reached were $T = 0.4\;T_c$, where our expansion is no longer valid. Nor does it agree with the thermodynamic argument that, in a theory with a mass gap, one would expect exponential scaling; note, however, that the asymmetry does not acquire a gauge-invariant meaning by going to the Landau gauge, so this argument need not apply. \subsection{High temperatures} At temperatures higher than $0.67\;\Lambda_{\overline{\text{MS}}}$, the minimum disappears. This signals a phase transition to the perturbative vacuum. In order to access this regime, it is possible to expand the effective potential for high temperatures, which yields \begin{equation} \langle g^2A_\mu^2 \rangle = g^2(N^2-1) \frac{T^2}4 \ , \qquad \Delta_{A^2} = g^2(N^2-1) \frac{T^2}{12} \;. \end{equation} This coincides with the perturbative result. \begin{figure}\begin{center} \includegraphics[width=4cm]{debye1.eps} \hspace{3cm} \includegraphics[width=4cm]{debye2.eps} \caption{\textbf{Left:} The diagrams giving the Debye mass in HTL resummation. The ghost loop is not necessary \cite{braatenpisarski}. \textbf{Right:} More diagrams that need to be resummed, coming from the $\sigma$ and $\varphi_{\mu\nu}$ parts of the LCO Lagrangian. 
The dotted line is the $\sigma$ or $\varphi_{\mu\nu}$ propagator, and the labeled vertex depicts the additional vertices introduced by the formalism. \label{debye}} \end{center}\end{figure} In order to compute higher-order corrections to this, it is necessary to perform a Hard Thermal Loop (HTL) resummation \cite{andersenstrickland}, as nonresummed perturbation theory leads to a tachyonic mass in our case\footnote{The condensate and the mass generated by it differ by a negative multiplicative constant.}. In ordinary pure Yang--Mills theory at this order, HTL amounts to giving the timelike gluon a Debye mass $m_D^2 = \tfrac N3g^2T^2$, which effectively resums the hard (high momentum) contributions of the diagrams left in \figurename\ \ref{debye}. In our formalism, however, there are four additional vertices giving rise to four extra diagrams that need to be resummed, shown at the right in \figurename\ \ref{debye}. Computing these additional diagrams, we find that they exactly cancel the lowest-order contribution from the condensate. Solving in the large-$T$ limit yields \begin{equation} \langle g^2A_\mu^2\rangle = g^2(N^2-1)\left(\frac{T^2}4 - \frac{m_DT}{4\pi} + \cdots\right) \ , \qquad \Delta_{A^2} = g^2(N^2-1) \left(\frac{T^2}{12} - \frac{m_DT}{36\pi} + \cdots \right) \;, \end{equation} which is exactly what one would expect from perturbation theory. \section{Conclusions} The temperature dependence of the nonperturbative dimension two condensate and its electric-magnetic asymmetry has been studied analytically. At low temperatures, we find qualitative agreement with the lattice results. At high temperatures, the perturbative vacuum is recovered. We wish to thank M.~N.~Chernodub for encouraging this research.
\section{Introduction} \label{sec:intro} Traditional retrieval systems have long relied on bag-of-words representations to search within unstructured text collections. This has led to mature architectures that are known to support highly efficient search. The compact inverted indexes enable fast top-$k$ retrieval strategies, while also exhibiting interpretable behavior, where retrieval scores can be directly attributed to the contributions of individual terms. Despite these attractive qualities, recent progress in Information Retrieval (IR) has firmly demonstrated that pre-trained language models can considerably boost retrieval quality over classical approaches. This progress has not come without downsides, as it is less clear how to control the computational cost and how to ensure the interpretability of these neural models. This has sparked an unprecedented tension in IR between achieving the best retrieval quality, maintaining low computational costs, and prioritizing interpretable modeling. 
\begin{figure}[t] \begin{FlushLeft} \faSearch \hspace{0.1cm} \textbf{does doxycycline contain sulfa} \end{FlushLeft} \begin{FlushLeft} \begin{adjustwidth}{0.5cm}{} \textit{BERT tokenized (9 subword-tokens): 'does', 'do', '\#\#xy', \\ '\#\#cy', '\#\#cl', '\#\#ine', 'contain', 'sul', '\#\#fa'} \end{adjustwidth} \end{FlushLeft} \vspace{0.1cm} \vspace{0.1cm} \begin{FlushLeft} \textbf{ColBERTer BOW$^2$} \textit{(30 saved vectors from 84 subword-tokens)}: \vspace{0.1cm} \pill{\vphantom{Hy}photosensitivity} \pillMatch{\vphantom{Hy}doxycycline}\pillMatchScore{\footnotesize \vphantom{Hy}12.9} \pillMatch{\vphantom{Hy}sulfa}\pillMatchScore{\footnotesize \vphantom{Hy}14.2} \pill{\vphantom{Hy}sunburned} \pill{\vphantom{Hy}rash} \pill{\vphantom{Hy}clothing} \pill{\vphantom{Hy}sunlight} \pill{\vphantom{Hy}allergic} \pill{\vphantom{Hy}compound} \pill{\vphantom{Hy}drugs} \pillMatch{\vphantom{Hy}containing}\pillMatchScore{\footnotesize \vphantom{Hy}6.6} \pill{\vphantom{Hy}take} \pill{\vphantom{Hy}safely} \pill{\vphantom{Hy}wear} \pill{\vphantom{Hy}.} \pill{\vphantom{Hy}is} \pillMatch{\vphantom{Hy}no}\pillMatchScore{\footnotesize \vphantom{Hy}4.7} \pill{\vphantom{Hy}exposed} ... \vspace{-0.1cm} \small{\textbf{Fulltext:} No doxycycline is not a sulfa containing compound, so you may take it safely if you are allergic to sulfa drugs. You should be aware, however, that doxycycline may cause photosensitivity, so you should wear appropriate clothing, or you may get easily sunburned or develop a rash if you are exposed to sunlight.} \end{FlushLeft} \vspace{-0.2cm} \caption{Example of ColBERTer's BOW$^2$ (Bag Of Whole-Words): ColBERTer stores and matches unique whole-word representations. The words in BOW$^2$ are ordered by implicitly learned query-independent term importance. 
Matched words are highlighted in blue with whole-word scores displayed in a user-friendly way next to them.} \label{fig:hero_example} \vspace{-0.4cm} \end{figure} For practical applications, ranking model architectures are confined to strict cost constraints, primarily query latency and index space footprint. While a larger disk space consumption might not be a critical cost factor, keeping many pre-computed representations in memory---as often needed for low query latency---does increase hardware costs significantly. For multi-vector models like ColBERT~\cite{khattab2020colbert}, space consumption is determined by the product of three factors: 1) the number of vectors per document; 2) the number of dimensions per vector; 3) the number of bytes per dimension. A motivating observation of this work is that reducing any of these three factors by a certain ratio directly shrinks the storage requirement by that ratio, yet this does not necessarily translate into a proportional loss of effectiveness. Well-studied \textit{low-hanging fruit} for good tradeoffs includes reducing the number of dimensions (either inside the model or outside with PCA) and reducing the number of bytes with quantization~\cite{ma2021simple,ji2019efficient,lewis2021boosted,gao2021coil}. The remaining factor, the number of vectors per document, offers many development opportunities determined by the model architecture and the retrieval approach employed. In addition to efficiency considerations, with the accelerating adoption and impact of machine learning models, there are indications that future regulatory environments will require deployed models to provide transparent and reliably interpretable output to their users.\footnote{Such as a recent 2021 proposal by the EU Commission on AI regulation, see: \\ \url{https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:52021PC0206} (Art. 
13)} This crucial need for interpretability is especially pronounced in IR, where ranking models are expected to be fair and transparent \cite{castello2019fairranking}. Despite this, we observe that the two largest classes of neural models at the moment---namely, cross-encoders and single-vector bi-encoders---rely on opaque aggregation functions which do not transparently expose the contributions of the terms of a query or passage to the score. This paper presents a novel end-to-end retrieval and ranking model called \textbf{ColBERTer}. ColBERTer extends the popular ColBERT model with effective \underline{\textbf{e}}nhanced \underline{\textbf{r}}eduction approaches. These enhanced reductions increase the level of interpretability and greatly reduce storage and latency costs, while at the same time maintaining retrieval quality. Our first main architectural contribution is fusing a single-vector retrieval and multi-vector refinement model into one with explicit multi-task training. \citet{mackenzie2021wacky} showed that fitting subword models into established retrieval systems negatively impacts the performance of previously developed efficient query evaluation techniques, such as early exiting. Therefore, we introduce neural Bag of Whole-Words (BOW$^2$) representations for increasing interpretability and reducing the number of stored vectors in the ranking process. A BOW$^2$ representation aggregates all subword token representations belonging to a unique whole word. To further reduce the number of stored vectors, we learn to prune the BOW$^2$ representations with simplified contextualized stopwords (CS) \cite{Hofstaetter2020_cikm}. To reduce the dimensionality of the token vectors all the way to one, we also employ an Exact Matching (EM) component, which matches only the vector representations of whole words that also appear exactly in the query. 
We refer to this model variant as \textbf{Uni-ColBERTer} (following the nomenclature of \citet{lin2021few}). In Figure \ref{fig:hero_example} we show ColBERTer's BOW$^2$ representation and how we can display whole-word scores to the user in a keyword view. As we aggregate all subwords to whole-words, we can confidently display the whole-word scores of this complex medical-domain query to show ColBERTer's interpretability capabilities, without cherry-picking an example that only contains words that are fully part of BERT's vocabulary. The ColBERTer encoding architecture enables many different indexing and retrieval scenarios. Building on recent works \cite{gao2021coil,lin2021few}, which already proposed and analyzed some dense and sparse retrieval modes, we provide a holistic categorization and ablation study of five possible usage scenarios of ColBERTer encoded sequences: sparse token retrieval, dense single vector retrieval, as well as refining either one of the retrieval sources and a full hybrid mode. Specifically, we study for our ColBERTer model: \begin{itemize} \item[\textbf{RQ1}] Which aggregation and training regime works best for the combined retrieval and refinement capabilities of ColBERTer? \end{itemize} We find that multi-task learning with two weighted loss functions for retrieval and refinement, together with a learned aggregation of the retrieval and refinement scores, consistently outperforms fixed score aggregation. Furthermore, we investigate the joint training of score aggregation, BOW$^2$, and contextualized stopwords with a weighted multi-task loss. We find that tuning the different weights does improve the tradeoff between removed vectors and retrieval quality; overall, however, the results are robust to small changes in the hyperparameter space. 
Following our holistic definition of dense and sparse combinations, and to provide guidance on how to employ ColBERTer, we study various deployment scenarios and answer: \begin{itemize} \item[\textbf{RQ2}] What is the best indexing and refinement strategy for ColBERTer? \end{itemize} Interestingly, we find that a full hybrid retrieval deployment is not practically necessary, as it only yields very modest, statistically insignificant gains compared to either a sparse or a dense index with passage refinement by the other component. Using a dense index results in higher recall than a sparse index; however, after the initial retrieval results are refined, the effect on the top $10$ results becomes negligible -- especially on TREC-DL. This novel result could reduce deployment complexity, as only one index is required. Practitioners could choose to keep a sparse index, if they have already made significant investments in one, or choose only a dense approximate nearest neighbor index for more predictable query latency. Both sparse and dense encodings of ColBERTer can be optimized with common indexing improvements. With our hyperparameters fixed, we aim to understand the quality effect of reducing storage factors along two axes of ColBERTer: \begin{itemize} \item[\textbf{RQ3}] How do different configurations of dimensionality and vector count affect the retrieval quality of ColBERTer? \end{itemize} We study the effect of BOW$^2$, CS, and EM reductions on different token vector dimensionalities ($32$, $16$, $8$, and $1$). We find that, as expected, retrieval quality drops with each dimension reduction; however, the delta is small. Furthermore, we observe that BOW$^2$ and CS reductions result -- on every dimension setting -- in a Pareto improvement over simply reducing the number of dimensions. 
While we want to emphasize that it becomes increasingly hard to contrast neural retrieval architectures -- due to the diversity surrounding training procedures -- and to make conclusive statements about ``SOTA'' -- due to evaluation uncertainty -- we still compare ColBERTer to related approaches: \begin{itemize} \item[\textbf{RQ4}] How does the fully optimized ColBERTer system compare to other end-to-end retrieval approaches? \end{itemize} We find that ColBERTer does improve effectiveness compared to related approaches, especially in cases with a low storage footprint. Our Uni-ColBERTer variant in particular outperforms previous single-dimension token encoding approaches, while at the same time offering improved transparency and making it easier to showcase model scores with mappings to whole words. In order to evaluate the generalizability of ColBERTer, we study its results on seven high-quality and diverse retrieval collections from different domains using a meta-analysis \cite{soboroff2018meta} that allows us to robustly observe whether statistically significant gains are achieved over multiple collections. We investigate: \begin{itemize} \item[\textbf{RQ5}] How robust is ColBERTer when applied to out of domain collections? \end{itemize} We find that ColBERTer with token embeddings of dimension $32$ and Uni-ColBERTer with $1$ dimension both show overall significantly higher retrieval effectiveness than BM25, with not a single collection worse than BM25. Compared to a TAS-Balanced trained dense retriever \cite{Hofstaetter2021_tasb_dense_retrieval}, ColBERTer is not statistically significantly worse on any single collection. While we observe an overall positive effect, it is not statistically significant within a $95\%$ confidence interval. This robust analysis aims not to overestimate the benefits of ColBERTer, while at the same time giving us more confidence in the results. 
We publish our code, trained models, and documentation at: \\ \textit{github.com/sebastian-hofstaetter/colberter} \section{Background} In this section we motivate storing unique whole-word representations instead of all sub-word tokens with statistics of our collections; we review the single-vector $\bertdot$ and multi-vector $\colbert$ architectures; and give an overview of other related approaches. \subsection{Tokenization} We use, as most other neural ranking approaches, a BERT \cite{devlin2018bert} variant to contextualize sequences. Therefore, we are locked into the specific tokenization the language model uses. The BERT tokenization process first splits full text on all whitespace and punctuation characters, leaving whole words intact. In the second step it uses the WordPiece algorithm \cite{schuster2012japanese} to split words into sub-word tokens of the reduced vocabulary. In Table \ref{tab:word_stats} we showcase word count statistics based on the final sub-word tokens as well as the whole-words from the first step. We count all units as well as unique sets of lowercased and stemmed words. We can clearly observe that aggregating unique+stemmed whole-words retains only 36 \% to 59 \% of the original sub-word units produced by BERT. Related multi-vector methods, such as the ColBERT or (Uni)COIL models, save all BERT tokens (leftmost column), while the BOW$^2$ aggregation we propose in \sref{sec:bow2} saves only stemmed unique whole-words (rightmost column). \begin{table}[t] \centering \caption{Average token count statistics per passage for our used collections using different tokenization and aggregation approaches. \textit{\% Ret. 
refers to the percent of retained vectors between all BERT tokens and unique+stemmed whole words.} } \label{tab:word_stats} \setlength\tabcolsep{2.0pt} \vspace{-0.3cm} \begin{tabular}{l!{\color{gray}\vrule}rr!{\color{lightgray}\vrule}rrr!{\color{lightgray}\vrule}r} \toprule \multirow{2}{*}{\textbf{Collection}} & \multicolumn{2}{c!{\color{lightgray}\vrule}}{\textbf{BERT Tokens}}& \multicolumn{3}{c!{\color{lightgray}\vrule}}{\textbf{Whole-Words}}&\textbf{\%}\\ & All & Unique & All & Unique & U.+Stem & \textbf{Ret.} \\ \midrule \arrayrulecolor{lightgray} \textbf{MS MARCO} & 76.9 & 50.2 & 68.6 & 44.1 & 43.2 & \textbf{56 \%}\\ \midrule \textbf{TREC Covid} & 237.7 & 119.5 & 199.5 & 98.7 & 94.3 & \textbf{40 \%}\\ \textbf{TripClick} & 380.2 & 173.1 & 324.1 & 144.6 & 137.6 & \textbf{36 \%}\\ \textbf{NFCorpus} & 348.0 & 164.4 & 296.6 & 137.4 & 131.2 & \textbf{37 \%}\\ \textbf{DBPedia Entity} & 72.8 & 49.1 & 61.0 & 41.0 & 40.5 & \textbf{55 \%}\\ \textbf{Antique} & 53.1 & 36.4 & 48.3 & 32.1 & 31.5 & \textbf{59 \%}\\ \textbf{TREC Podcast} & 421.2 & 183.1 & 409.4 & 173.2 & 166.6 & \textbf{40 \%}\\ \textbf{TREC Robust 04} & 379.9 & 191.9 & 363.5 & 172.1 & 165.0 & \textbf{43 \%}\\ \arrayrulecolor{black} \bottomrule \end{tabular} \end{table} \subsection{BERT\texorpdfstring{$_\textbf{DOT}$}{DOT} Architecture} The BERT$_\text{DOT}$ model matches a single vector of the query with a single vector of a passage. The vectors are the output of independent $\bert$ computations as follows: \begin{equation} \begin{aligned} \tilde{q} &= \bert([\text{CLS};{q}_{1:m};\text{SEP}])_\text{CLS} * W_d \\ \tilde{p} &= \bert([\text{CLS};{p}_{1:n};\text{SEP}])_\text{CLS} * W_d \end{aligned} \end{equation} where we pool a single vector by selecting the output corresponding to the CLS token and optionally compress the vector dimensionality with a single shared linear layer $W_d$. This setup allows us to pre-compute every contextualized passage representation $\tilde{p}$. 
Then, either the model or a nearest neighbor index computes the final scores as the dot product $\cdot$ of $\tilde{q}$ and $\tilde{p}$: \begin{equation} \begin{aligned} \bertdot({q}_{1:m},{p}_{1:n}) & = \tilde{q} \cdot \tilde{p} \end{aligned} \end{equation} While the model architecture itself does not allow for many modifications, the training regime is a widely studied way of improving the results of the single vector retrieval platform \cite{xiong2020approximate,luan2020sparse,lu2020twinbert}. \subsection{ColBERT Architecture} \label{sec:colbert} The $\colbert$ model \cite{khattab2020colbert} delays the interactions between query and document to after the BERT computation. $\colbert$ contextualizes every query and document subword representation by feeding the subword-tokenized query ${q}_{1:m}$ and passage ${p}_{1:n}$ sequences through a BERT model: \begin{equation} \begin{aligned} \tilde{q}_{1:m+r+2} &= \bert([\text{CLS};{q}_{1:m};\operatorname{rep}(\text{MASK},r);\text{SEP}]) * W_d \\ \tilde{p}_{1:n+2} &= \bert([\text{CLS};{p}_{1:n};\text{SEP}]) * W_d \label{eq:colbert_1} \end{aligned} \end{equation} where the $\operatorname{rep}(\text{MASK},r)$ method repeats the MASK token a number of times, set by the hyperparameter $r$. \citet{khattab2020colbert} introduced this query augmentation method to increase the computational capacity of the BERT model for short queries. The dimensions of all representation vectors are reduced by a single shared linear layer $W_d$. The interactions in the $\colbert$ model are aggregated with a max-pooling per query term and sum of query-term scores as follows: \begin{equation} \begin{aligned} \colbert({q}_{1:m},{p}_{1:n}) = \sum_{i=1}^{m+r+2} \max_{j=1..n+2} \tilde{q}_{i}^T \cdot \tilde{p}_{j} \label{eq:colbert_2} \end{aligned} \end{equation} The max-sum operator scans the matrix of all term-by-term interactions, which is a technique inspired by earlier works on kernel-pooling \cite{Xiong2017,Hofstaetter2020_ecai}. 
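To make the contrast between the two scoring regimes concrete, the following is a minimal NumPy sketch of the single-vector dot-product score and of ColBERT's max-sum operator. Toy random matrices stand in for the BERT outputs, and all names and sizes are our own illustrative choices, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for contextualized BERT outputs:
# pooled CLS vectors for the single-vector model ...
q_cls, p_cls = rng.normal(size=128), rng.normal(size=128)
# ... and per-token matrices for ColBERT (query-side and passage-side vectors).
Q = rng.normal(size=(6, 32))   # e.g. m+r+2 = 6 query-side vectors
P = rng.normal(size=(20, 32))  # e.g. n+2 = 20 passage-side vectors

def bert_dot_score(q, p):
    """Single-vector relevance: one dot product per query-passage pair."""
    return float(q @ p)

def colbert_score(Q, P):
    """Max-sum late interaction: for every query vector take the best-matching
    passage vector (max over passage terms), then sum these per-term maxima."""
    interactions = Q @ P.T               # term-by-term dot products, shape (m, n)
    return float(interactions.max(axis=1).sum())
```

The `interactions` matrix here is exactly the term-by-term matrix that the max-sum operator scans, which is also what makes the per-term score contributions inspectable.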
The term-by-term interaction matrix creates transparency in the scoring, as it allows one to inspect the source of different scoring parts, while being mapped to human-readable word units \cite{Hofstaetter2020_ecir}. However, the usefulness of this feature is reduced by the use of special tokens, especially by the query expansion with MASK tokens, as it is non-trivial to explain reliably to the users what each MASK token stands for. \begin{figure*}[t] \includegraphics[clip,trim={0.2cm 0.2cm 0.2cm 0.2cm}, width=0.98\textwidth]{figures/colberter_architecture.pdf} \centering \vspace{-0.3cm} \caption{The ColBERTer encoding architecture, followed by the query-time workflow. The passage representations (both the single CLS and token vectors) are pre-computed during indexing time. The enhanced reductions with the 2-way dimension reduction, the unique BOW$^2$ aggregation, contextualized stopwords and token dimensionality reduction are applied symmetrically to passages and queries (except for the stopword removal).} \label{fig:colberter_architecture} \vspace{-0.1cm} \end{figure*} \subsection{Related Work} \paragraph{\textbf{Vector Reduction}} Our dynamic approach to reduce the number of vectors needed to represent a passage differs from previous works that focus on fixed numbers of vectors across all passages: \citet{lassance2021study} prune ColBERT representations to either 50 or 10 vectors for MSMARCO by sorting tokens either by Inverse Document Frequency (IDF) or the last-layer attention scores of BERT. \citet{zhou2021multi} extend ColBERT with temporal pooling, by sliding a window over the passage representations to create a representation vector every window-size steps, with a fixed target count of representation vectors. \citet{luan-etal-2021-sparsemebert} represent each passage with a fixed number of contextualized embeddings of the CLS token and the first $m$ tokens of the passage, and score the relevance of the passage with the maximum score of the embeddings. 
\citet{humeau2020polyencoders} compute a fixed number of vectors per query, and aggregate them by softmax attention against document vectors. \citet{lee2020learning} learn phrase (multi-word) representations for QA collections. This approach also reduces the vector count compared to indexing all tokens; however, it fully depends on the availability of exact answer spans in the passages and is therefore not universally applicable in the IR setting. \citet{tonelloto2021embeddingpruning} prune the embeddings of the query terms; however, they do not reduce the number of embeddings for the documents in the index. In summary, the clearest difference between our contributions and related vector reduction techniques is: 1) we reduce a dynamic number of vectors per passage; 2) we keep a mapping between human-readable tokens and vectors, allowing scoring information to be used in the user interface; 3) we learn the full pruning process end-to-end without term-based supervision. \paragraph{\textbf{Vector Compression}} \citet{ma2021simple} study various learning and post-processing methods to reduce dimensionality of dense retrieval vectors. In contrast to our study, they find that learned dimensionality reduction performs poorly. Also for single-vector retrieval, \citet{zhan2021jointly} optimize product quantization as part of the training. Recently, \citet{santhanam2021colbertv2} study residual compression of all saved token vectors as part of the ColBERT end-to-end retrieval setting. There are concurrent efforts revisiting lexical matching with learned sparse representations \cite{gao2021coil,formal2021splade,lin2021few}, which exploit the efficiency of exact lexical matches. In contrast to our work, they focus on reducing the number of dimensions of the learned embeddings without reducing the number of stored tokens. 
Many of these approaches can be considered complementary to our proposed methods, and future work should evaluate how well these methods compose to achieve even larger compression rates. \section{\texorpdfstring{C\lowercase{ol}BERT\lowercase{er}}{ColBERTer}: Enhanced Reduction} ColBERT with enhanced reduction, or \textbf{ColBERTer}, combines the encoding architectures of $\bertdot$ and $\colbert$, while drastically reducing the token storage and latency requirements along the effectiveness Pareto frontier. Our enhancements maintain model transparency, creating a concrete mapping between scoring sources and human-readable whole words. We provide an overview of the encoding and retrieval workflow in Figure~\ref{fig:colberter_architecture}. ColBERTer independently encodes the query and the document using a transformer encoder like BERT, producing token-level representations similar to ColBERT: \begin{equation} \begin{aligned} \tilde{q}_{1:m+2} &= \bert([\text{CLS};{q}_{1:m};\text{SEP}]) \\ \tilde{p}_{1:n+2} &= \bert([\text{CLS};{p}_{1:n};\text{SEP}]) \label{eq:colberter_1} \end{aligned} \end{equation} To maximize transparency, we do not apply the query augmentation mechanism of \citet{khattab2020colbert} (see~\sref{sec:colbert}), which appends MASK tokens to the query with the goal of implicit -- and thus potentially opaque -- query expansion. \subsection{2-Way Dimension Reduction} \label{sec:dim-red} Given the transformer encoder output, ColBERTer uses linear layers to reduce the dimensionality of the output vectors in two ways: 1) we use the linear layer $W_{CLS}$ to control the dimension of the first \texttt{CLS}-token representation (e.g. $128$ dimensions): \begin{equation} \begin{aligned} {q}_{CLS} &= \tilde{q}_{1} * W_{CLS} \\ {p}_{CLS} &= \tilde{p}_{1} * W_{CLS} \label{eq:colberter_dim_red_cls} \end{aligned} \end{equation} and 2) the layer $W_t$ projects the representations of the remaining tokens down to the token embedding dimension (usually smaller, e.g. 
$32$): \begin{equation} \begin{aligned} \dot{q}_{1:m} &= \tilde{q}_{2:m+1} * W_t \\ \dot{p}_{1:n} &= \tilde{p}_{2:n+1} * W_t \label{eq:colberter_dim_red_token} \end{aligned} \end{equation} This 2-way reduction combined with our novel training workflow (\sref{sec:training_proc}) serves to reduce our space footprint compared to ColBERT and at the same time provides more expressive encodings than a single-vector $\bertdot$ model. Furthermore, it enables a multitude of potential dense and sparse retrieval workflows (\sref{sec:retrieval_workflows}). \subsection{\texorpdfstring{BOW$^\textbf{2}$}{BOW2}: Bag of Unique Whole-Words} \label{sec:bow2} Given the token representations ($\dot{q}_{1:m}$ and $\dot{p}_{1:n}$), ColBERTer applies its key novel transformation, BOW$^2$, to the sequence of vectors. Whereas ColBERT and COIL maintain one vector for each BERT token, including tokens corresponding to sub-words in the BERT vocabulary, we create a single representation for each unique whole word. This serves to further reduce the storage overhead of our model by reducing the number of tokens, while preserving an explicit mapping of score parts to human-understandable words. During tokenization we build a mapping between each sub-word token and corresponding unique whole word (as defined by a simple split on punctuation and whitespace characters). The words can also be transformed through classical IR techniques such as stemming. Then, inside the model we aggregate whole-word representations for each whole word $w$ in passage $p$ by computing the mean of the embeddings of $w$'s constituent sub-words $\dot{p}_i$. 
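As an illustration, this sub-word to whole-word mean aggregation can be sketched as follows. This is a minimal sketch assuming the WordPiece convention that a token starting with \#\# continues the previous word; lowercasing stands in for stemming, and all function names are our own:

```python
import numpy as np

def bow2_aggregate(subword_tokens, subword_vecs, stem=lambda w: w.lower()):
    """Mean-pool sub-word vectors into one vector per unique (stemmed) whole word."""
    # Reconstruct whole words: a '##' token continues the previous word.
    words, owner = [], []           # whole words, and each sub-word's word index
    for tok in subword_tokens:
        if tok.startswith("##") and words:
            words[-1] += tok[2:]
        else:
            words.append(tok)
        owner.append(len(words) - 1)
    # Group sub-word vectors by unique stemmed whole word, then mean-pool.
    groups = {}
    for i, o in enumerate(owner):
        groups.setdefault(stem(words[o]), []).append(subword_vecs[i])
    return {w: np.mean(v, axis=0) for w, v in groups.items()}
```

For the query of Figure \ref{fig:hero_example}, the nine sub-word tokens collapse into the four unique whole words \textit{does}, \textit{doxycycline}, \textit{contain}, and \textit{sulfa}, each carrying the mean of its constituent sub-word vectors.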
We get the set of unique whole-word representations of the passage $p$: \begin{equation} \begin{aligned} \hat{p}_{1:\hat{n}} &= \bigg\{ \frac{1}{|\dot{p}_i \in w|} \sum_{\dot{p}_i \in w} \dot{p}_i \ \bigg| \ \forall \ w \in \operatorname{BOW^2}(p) \bigg\} \label{eq:colberter_bow_aggreagtion} \end{aligned} \end{equation} We apply the same procedure symmetrically to the query vectors $\dot{q}_{1:m}$ from Equation~\eqref{eq:colberter_dim_red_token} to produce $\hat{q}_{1:\hat{m}}$. The resulting sets are still dynamic in length, which now depends on the number of whole words ($\hat{n}$ and $\hat{m}$ for passage and query sequences respectively). We refer to the new sets as \textit{bags of words}, as each word is saved only once and the order of the vectors no longer matters, because the language-model contextualization has already happened. \subsection{Simplified Contextualized Stopwords} To further reduce the number of passage tokens to store, we adopt a simplified version of \citet{Hofstaetter2020_cikm}'s contextualized stopwords (CS), which was first introduced for the TK-Sparse model. CS learns a \textit{removal gate} of tokens solely based on their context-dependent vector representations. We simplify the original implementation of CS and adapt the removal process to fit into the encoding phase of the ColBERTer model. Every whole-word passage vector $\hat{p}_j$ is transformed by a linear layer (with weight matrix $W_{s}$ and bias vector $b_{s}$), followed by a ReLU activation, to compute a single-dimensional stopword removal gate $r_j$: \begin{equation} r_{j} = \operatorname{ReLU}(\hat{p}_{j} W_s + b_s) \label{eq:stopword_gate} \end{equation} The original implementation \cite{Hofstaetter2020_cikm} masks scores after TK's kernel-activation, meaning the non-zero gates have to be saved as well, which increases the system's complexity. In contrast, we directly apply the gate to the representation vectors. 
In particular, we drop every representation where the gate $r_j=0$, and otherwise scale the magnitude of the remaining representations using their gate scores: \begin{equation} \hat{p}_j = \hat{p}_j * r_{j} \label{eq:stopword_gate_application} \end{equation} This fully differentiable approach allows us to learn the stopword gate during training and remove all nullified vectors at indexing time, as they do not contribute to document scores. Applying the stopword gate directly to the representation vector leads to much more stable training than the authors of TK-Sparse reported -- we do not need to adapt the training procedure with special mechanisms to keep the model from collapsing. Following \citet{Hofstaetter2020_cikm}, we train the removal gate with a regularization loss, which forces the stopword removal gate to become active during training, as described in \sref{sec:training_proc}. \subsection{Matching \& Score Aggregation} \label{sec:matching} After we complete the independent encoding of query and passage sequences, we need to match and score them. ColBERTer creates two scores, one for the CLS vector and one for the token vectors. 
The CLS score is a dot product of the two CLS vectors: \begin{equation} \begin{aligned} s_{CLS} = {q}_{CLS} \cdot {p}_{CLS} \label{eq:colberter_cls_match} \end{aligned} \end{equation} The token score follows the scoring regime of ColBERT, with a match matrix of word-by-word dot products, max-pooling over the document word dimension, followed by a sum over all query words: \begin{equation} \begin{aligned} s_{token} = \sum_{j=1}^{\hat{m}} \max_{i=1..\hat{n}} \hat{q}_{j}^T \cdot \hat{p}_{i} \label{eq:colberter_token_match} \end{aligned} \end{equation} The final score of a query-passage pair is computed with a learned aggregation of the two score components: \begin{equation} \begin{aligned} s_{ColBERTer} = \sigma(\gamma) * s_{CLS} + (1-\sigma(\gamma)) * s_{token} \label{eq:learned_score_agg} \end{aligned} \end{equation} where $\sigma$ is the sigmoid function, and $\gamma$ is a trainable scalar parameter. For the purpose of ablation studies $\sigma(\gamma)$ can be set to a fixed number, such as $0.5$. At first glance the learned weighting factor seems superfluous, as the upstream linear layers could already learn to change the magnitudes of the two components. However, we show in \sref{sec:source_of_effect} that the explicit weighting is crucial for the correct functioning of both components. \subsection{\textbf{Uni-ColBERTer}: Extreme Reduction with Lexical Matching} While ColBERTer is already able to considerably reduce the dimensionality of the token representations, we found in pilot studies that for an embedding dimension of $8$ or lower the full match matrix is detrimental to effectiveness. \citet{lin2021few} showed that a token score model can be effectively reduced to one dimension in UniCOIL. This effectively reduces the token representations to scalar \textit{weights}, necessitating an alternative mechanism to match query tokens with ``similar'' document tokens. 
To achieve the same reduction, we need to apply additional techniques to our ColBERTer architecture to create Uni-ColBERTer with single-dimensional whole-word vectors. While we then use the same number of bytes per vector, our vector reduction techniques make Uni-ColBERTer 2.5 times smaller than UniCOIL (on MSMARCO). To reduce the token encoding to 1 dimension we apply a second linear layer after the contextualized stopword component: \begin{equation} \begin{aligned} \doublehat{q}_{1:\hat{m}} &= \hat{q}_{1:\hat{m}} * W_u \\ \doublehat{p}_{1:\hat{n}} &= \hat{p}_{1:\hat{n}} * W_u \label{eq:colberter_dim_red_token_two} \end{aligned} \end{equation} Furthermore, we need to apply a lexical match bias, following COIL's approach to only match identical words with each other. This, however, brings an interesting engineering challenge: We do not build a global vocabulary with ids of whole-words during either training or inference. This is because doing so would be complex: Modern GPUs are so fast that to be able to saturate them we need multiple CPU processes (4-10 depending on the system) that prepare the input with tokenization, data transformation, and subsequent tensor batching of sequences. To keep track of a global vocabulary, these CPU processes would need to synchronize with a read-write dictionary on every token. This is simply not feasible in Python multiprocessing while keeping the necessary speed to fully use even a single GPU. To overcome this problem, while still inducing an exact match bias, we propose approximate lexical interactions, by creating an n-bit hash $H$ from every whole-word without accounting for potential collisions and applying a mask of equal hashes to the match matrix. 
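The hashing idea can be sketched in a few lines. This is a minimal sketch, not the paper's implementation: scalar NumPy weights stand in for the single-dimensional token representations, we truncate sha256 to 32 bits as in our MSMARCO setting, and we assume unmatched query terms simply contribute zero to the score:

```python
import hashlib
import numpy as np

def word_hash(word: str, bits: int = 32) -> int:
    """First `bits` bits of sha256 as an approximate global whole-word id."""
    digest = hashlib.sha256(word.encode("utf-8")).digest()
    return int.from_bytes(digest, "big") >> (256 - bits)

def lexical_masked_score(q_words, q_weights, p_words, p_weights):
    """Max-sum scoring restricted to word pairs with equal hashes."""
    qh = np.array([word_hash(w) for w in q_words])
    ph = np.array([word_hash(w) for w in p_words])
    match = qh[:, None] == ph[None, :]           # approximate exact-match mask
    scores = np.outer(q_weights, p_weights)      # scalar-weight interactions
    scores = np.where(match, scores, -np.inf)    # mask out non-identical words
    per_query = scores.max(axis=1)               # best match per query word
    return float(per_query[np.isfinite(per_query)].sum())
```

Masking the full interaction matrix, as done here, keeps the operation expressible on batched tensors instead of requiring per-word gather/scatter logic.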
Depending on the number of bits kept, this introduces different numbers of collisions.\footnote{In the case of MSMARCO we found the first 32 bits of sha256 to produce very few collisions (303 collisions out of 1.6 million hashes).} Depending on the collection size, one can adjust the number of bits to save from the hash. With the hashed global id of whole words we can adjust the match matrix of whole-words for low-dimensional token models as follows: \begin{equation} \begin{aligned} s_{token} = \sum_{j=1}^{\hat{m}} \max_{\substack{i=1..\hat{n} \\ H(w_i) = H(w_j)}} \doublehat{q}_{j}^T \cdot \doublehat{p}_{i} \label{eq:colberter_approx_lexical_match} \end{aligned} \end{equation} In practice, we implement this procedure by masking the full match matrix, so that the operation works on batched tensors. Besides allowing us to reduce the token dimensionality to one, the lexical matching component of Uni-ColBERTer enables the sparse indexing of tokens in an inverted index, following UniCOIL. \section{Model Lifecycle} In this section we describe how we train our ColBERTer architecture and how we can deploy the trained model into a retrieval system. \subsection{Training Workflow} \label{sec:training_proc} We train our ColBERTer model with triples of one query and two passages, where one passage is more relevant than the other. To incorporate the degree of relevance provided by a teacher model, we use the Margin-MSE loss \cite{hofstaetter2020_crossarchitecture_kd}, formalized as follows: \begin{equation} \begin{aligned} \mathcal{L}_{MarginMSE}(M_s) = \operatorname{MSE}(&M_s^{+} - M_s^{-},\\ &M_t^{+} - M_t^{-}) \end{aligned} \end{equation} where a teacher model $M_t$ provides the teacher signal for our student model $M_s$ (in our case, ColBERTer's output parts). 
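The Margin-MSE loss above can be written out as a short batch-averaged NumPy sketch (names are ours; in training this would of course operate on differentiable tensors rather than NumPy arrays):

```python
import numpy as np

def margin_mse(s_pos, s_neg, t_pos, t_neg):
    """Margin-MSE: squared difference between the student's margin (s+ - s-)
    and the teacher's margin (t+ - t-), averaged over the batch."""
    s_margin = np.asarray(s_pos, dtype=float) - np.asarray(s_neg, dtype=float)
    t_margin = np.asarray(t_pos, dtype=float) - np.asarray(t_neg, dtype=float)
    return float(np.mean((s_margin - t_margin) ** 2))
```

Note that only the score \textit{margins} are matched, not the absolute score values, which leaves the student free to operate on its own score scale.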
From the outside ColBERTer looks and acts like a single model; however, it is in essence a multi-task model: aggregating sequences into a single vector, representing individual words, and actively removing uninformative words. Therefore, we need to train these three components in a balanced form, with a combined loss function as follows: \begin{equation} \mathcal{L} = \alpha_b * \mathcal{L}_b + \alpha_{CLS} * \mathcal{L}_{CLS} + \alpha_{CS} * \mathcal{L}_{CS} \end{equation} where the $\alpha$'s are hyperparameters governing the weighting of the individual losses, which we explain in the following. The combined loss for both sub-scores $\mathcal{L}_b$ uses MarginMSE supervision on the final score: \begin{equation} \mathcal{L}_b = \mathcal{L}_{MarginMSE}(s_{ColBERTer}) \end{equation} In pilot studies, and as shown in \sref{sec:source_of_effect}, we observed that training ColBERTer only with a combined loss strongly reduces the effectiveness of the CLS vector alone. To overcome this issue and be able to use single vector retrieval we define $\mathcal{L}_{CLS}$ as: \begin{equation} \mathcal{L}_{CLS} = \mathcal{L}_{MarginMSE}(s_{CLS}) \end{equation} Finally, to actually force the model to learn sparsity in the removal gate vector $r$ of the contextualized stopword component, we follow \citet{Hofstaetter2020_cikm} and add an $\mathcal{L}_{CS}$ loss of the L1-norm of the positive \& negative $r$: \begin{equation} \mathcal{L}_{CS} = ||r^{+}||_1 + ||r^{-}||_1 \end{equation} By minimizing this loss, we introduce tension into the training process, as the sparsity loss needs to move as many entries as possible to zero or close to zero, while the token loss as part of $\mathcal{L}_b$ needs non-zero entries to be able to determine relevance matches.
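The multi-task loss above can be sketched as follows; this is a simplified scalar version with function names of our choosing, and the $\alpha$ defaults only illustrate one plausible configuration:

```python
import numpy as np

def margin_mse(s_pos, s_neg, t_pos, t_neg):
    """Margin-MSE: match the student's score margin to the teacher's margin."""
    return float(np.mean((np.asarray(s_pos) - np.asarray(s_neg)
                          - (np.asarray(t_pos) - np.asarray(t_neg))) ** 2))

def colberter_loss(s, s_cls, r_pos, r_neg, t,
                   alpha_b=1.0, alpha_cls=0.5, alpha_cs=1.0):
    """Weighted sum of the final-score loss, the CLS-only loss, and the
    L1 sparsity loss on the stopword removal gates r of both passages."""
    l_b = margin_mse(s["pos"], s["neg"], t["pos"], t["neg"])
    l_cls = margin_mse(s_cls["pos"], s_cls["neg"], t["pos"], t["neg"])
    l_cs = float(np.abs(r_pos).sum() + np.abs(r_neg).sum())
    return alpha_b * l_b + alpha_cls * l_cls + alpha_cs * l_cs
```

The tension described above is visible here: pushing gate entries of $r$ toward zero lowers the sparsity term but removes the non-zero entries the token score in $\mathcal{L}_b$ relies on.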
To reduce volatility during training we train the enhanced reduction components one after another: We start with a ColBERT checkpoint, followed by the 2-way dimensionality reduction, BOW$^2$ and CS, and finally for Uni-ColBERTer we apply another round of dimensionality reduction. \begin{figure}[t] \includegraphics[clip,trim={0.2cm 0.2cm 0.2cm 0.7cm}, width=0.45\textwidth]{figures/colberter_query_workflow.pdf} \centering \vspace{-0.3cm} \caption{The potential retrieval and refine workflows of ColBERTer at query time. Broadly categorized by: full hybrid (\ding{202}), single index, then refine with the other (\ding{203} + \ding{204}), or only one index for ablation purposes (\ding{205} + \ding{206}).} \label{fig:colberter_workflow} \vspace{-0.1cm} \end{figure} \subsection{Indexing and Query Workflow} \label{sec:retrieval_workflows} Once we have trained our ColBERTer model, we need to decide how to deploy it into a wider retrieval workflow. ColBERTer's passage encoding can be fully pre-computed in an offline setting, which allows for low-latency query-time retrieval. Previous works, such as COIL \cite{gao2021coil} or ColBERT \cite{khattab2020colbert}, have already independently established many of the potential workflows. In addition to related approaches, we aim to give a holistic overview of all possible usage scenarios, including ablation studies to select the best method with the lowest possible complexity. We give a schematic overview of ColBERTer's retrieval workflows in Figure \ref{fig:colberter_workflow}. We assume that all passages have been encoded and stored accessibly by their id, as shown in Figure \ref{fig:colberter_architecture}. Each of the two storage categories can be transformed into an index structure for fast retrieval: the CLS index uses an (approximate) nearest neighbor index, while the BOW$^2$ index could use either a dense nearest neighbor index or a classic inverted index (with activated exact matching component).
Figure \ref{fig:colberter_workflow} \ding{202} shows how we can index both scoring components of ColBERTer and then use the id-based storages to fill in missing scores for passages retrieved by only one of the indices. A similar workflow has already been explored by \citet{lin2021densifying} and \citet{gao2021coil}. Figure \ref{fig:colberter_workflow} \ding{203} \& \ding{204} utilize only one retrieval index, and fill in the missing scores from the complementary id-based storage. This approach works vice-versa for dense or sparse indices, and represents a clear reduction in complexity and additional index storage, at the potential cost of lower recall. This approach is very much akin to a two-stage retrieve and re-rank pipeline that has been studied extensively \cite{hofstaetter2020_crossarchitecture_kd,Hofstaetter2021_tasb_dense_retrieval,lin2020distilling}, but mostly with separate models for separate stages (which requires more indexing and encoding resources than our single ColBERTer model). Figure \ref{fig:colberter_workflow} \ding{205} \& \ding{206} represent ablation studies that rely on only one or the other index while disregarding the other scoring part. Using different workflows may have large implications in terms of complexity, storage requirements, and effectiveness; therefore we always indicate the type of query workflow used (with the numbers given in Figure \ref{fig:colberter_workflow}) in our results section and provide a detailed ablation study in \sref{sec:source_of_effect}.
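The single-index workflows \ding{203} and \ding{204} can be sketched with plain dictionaries; the fixed \texttt{agg\_weight} below is a stand-in for ColBERTer's learned score aggregation, and all names are ours:

```python
def refine(candidates, other_score_storage, agg_weight=0.5):
    """Workflow sketch: candidates come from one index with a partial score;
    the complementary score part is fetched from the id-based storage and
    the two parts are merged into the final ranking."""
    refined = {}
    for pid, partial in candidates.items():
        other = other_score_storage[pid]
        refined[pid] = agg_weight * partial + (1 - agg_weight) * other
    # Highest combined score first.
    return sorted(refined.items(), key=lambda item: item[1], reverse=True)
```

Swapping which score comes from an index and which from id-based storage turns workflow \ding{203} into \ding{204} without changing this merge step.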
\vspace{-0.2cm} \subsection{Passage Collection \& Query Sets} For training and in-domain evaluation we use the MSMARCO-Passage (V1) collection \cite{msmarco16} with the sparsely-judged MSMARCO-DEV query set of 6,980 queries (used in the leaderboard) as well as the densely-judged set of 97 queries combining TREC-DL '19 \cite{trec2019overview} and '20 \cite{trec2020overview}. For TREC graded relevance (0 = non-relevant to 3 = perfect), we use the recommended binarization point of 2 for the recall metric. For out-of-domain experiments we refer to the ir\_datasets catalogue \cite{macavaney:sigir2021-irds} for collection-specific information, as we utilized the standardized test sets for the collections. \vspace{-0.2cm} \subsection{Parameter Settings} As a basis for our model instances we use a 6-layer DistilBERT \cite{sanh2019distilbert} encoder as the initialization starting point. For our CLS vector we followed guidance by \citet{ma2021simple} to utilize 128 dimensions, as it provides enough capacity for retrieval. For token vectors, we study and present multiple parameter configurations between 32 and 1 dimension. We initialize models with a final token output smaller than 32 with the checkpoint of the 32-dimensional model. The BOW$^2$ and CS components do not need any parameterization beyond a Porter stemmer to aggregate unique words; they only need to be configured in terms of the training-loss influence $\alpha$'s. We thoroughly studied the robustness of the model to various configurations, as presented in \sref{sec:source_of_effect}. \section{Results} \label{sec:results} We now address the individual research questions we introduced earlier. First, we study the source of ColBERTer's effectiveness and under which conditions its components work; then we compare our results to related approaches; and finally we investigate the robustness of ColBERTer on out-of-domain collections.
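A toy sketch of the stemmer-based unique-word grouping used for BOW$^2$ follows; the two-line stemmer is a crude stand-in for the Porter stemmer, and mean-pooling is assumed here purely for illustration:

```python
import numpy as np

def toy_stem(word: str) -> str:
    # Crude stand-in for a Porter stemmer: lowercase and strip a plural 's'.
    w = word.lower()
    return w[:-1] if w.endswith("s") and len(w) > 3 else w

def bow2_aggregate(words, vecs):
    """Group all occurrences that share a stem and pool their vectors,
    keeping one vector per unique (stemmed) whole word."""
    groups = {}
    for word, vec in zip(words, vecs):
        groups.setdefault(toy_stem(word), []).append(vec)
    return {stem: np.mean(np.stack(vs), axis=0) for stem, vs in groups.items()}
```

The point of the sketch is that the component is parameter-free: only the surrounding training-loss weights $\alpha$ need to be chosen.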
\subsection{Source of Effectiveness} \label{sec:source_of_effect} \begin{table}[t] \centering \caption{Analysis of different score aggregation and training methods for ColBERTer (2-way dim reduction only; CLS dim: 128, token dim: 32; Workflow \ding{203}) in terms of retrieval effectiveness. We compare refining full-retrieval results from ColBERTer's CLS vector (Own) and a TAS-Balanced retriever (TAS) with different multi-task loss weights $\alpha_b$ and $\alpha_{CLS}$. \textit{Highest Own in bold, lowest underlined.} } \label{tab:agg_loss_ablation} \setlength\tabcolsep{3.2pt} \vspace{-0.3cm} \begin{tabular}{ccc!{\color{gray}\vrule}ll!{\color{lightgray}\vrule}ll!{\color{lightgray}\vrule}ll!{\color{lightgray}\vrule}ll} \toprule &\multicolumn{2}{c!{\color{lightgray}\vrule}}{\multirow{2}{*}{\textbf{Train Loss}}} & \multicolumn{4}{c!{\color{lightgray}\vrule}}{\textbf{TREC-DL'19+20}}& \multicolumn{4}{c}{\textbf{MSMARCO DEV}}\\ &&& \multicolumn{2}{c!{\color{lightgray}\vrule}}{nDCG@10} & \multicolumn{2}{c!{\color{lightgray}\vrule}}{R@1K} & \multicolumn{2}{c!{\color{lightgray}\vrule}}{MRR@10} & \multicolumn{2}{c}{R@1K} \\ &$\alpha_b$&$\alpha_{CLS}$& \textit{Own} & \textit{TAS} & \textit{Own} & \textit{TAS} & \textit{Own} & \textit{TAS} & \textit{Own} & \textit{TAS} \\ \midrule \arrayrulecolor{lightgray} \multicolumn{6}{l}{\textbf{Fixed Score Aggregation}} \\ \textcolor{gray}{1} & 1 & 0 & \underline{.684} & .740 & \underline{.565} & .861 & \underline{.336} & .386 & \underline{.773} & .978 \\ \midrule \multicolumn{6}{l}{\textbf{Learned Score Aggregation}} \\ \textcolor{gray}{2} & 1 & 0.1 & .726 & .728 & .783 & .861 & .384 & .386 & .952 & .978 \\ \textcolor{gray}{3} & 1 & 0.2 & .728 & .731 & .794 & .861 & .384 & .385 & .957 & .978 \\ \textcolor{gray}{4} & 1 & 0.5 & \textbf{.734} & .734 & \textbf{.807} & .861 & \textbf{.386} & .386 & .961 & .978 \\ \textcolor{gray}{5} & 1 & 1.0 & .730 & .730 & .806 & .861 & .381 & .381 & \textbf{.962} & .978 \\ \arrayrulecolor{black} 
\bottomrule \end{tabular} \vspace{-.5cm} \end{table} The ColBERTer model is essentially a complex multi-task architecture, even though these learning tasks work together to form an eventual end-to-end retrieval model. As the complexity of the architecture and training procedure grows, so must our understanding of the conditions in which the model works and where it fails. For our first investigation we aim to understand the interdependence between the CLS retrieval and token refinement capabilities. The related COIL architecture \cite{gao2021coil} aggregates the scores of its two representation types in a sum without explicit weighting and feeds the sum through a single loss function. COIL uses both representation types (namely, CLS and token representations) as indices; therefore, it is not necessary for any of the components to work standalone. In the ColBERTer architecture, we want to support full retrieval capabilities of the CLS vector as candidate generator. If it fails, the quality of the refinement process does not matter anymore. Therefore, we first study this interdependence. \begin{table}[t] \centering \caption{Analysis of the bag of whole-words (BOW$^2$) and contextualized stopword training of ColBERTer (CLS dim: 128, token dim: 32; Workflow \ding{203}) using different multi-task loss parameters. } \label{tab:stopword_ablation} \setlength\tabcolsep{2.0pt} \vspace{-0.3cm} \begin{tabular}{cccc!{\color{gray}\vrule}rr!{\color{lightgray}\vrule}rr!{\color{lightgray}\vrule}rr} \toprule &\multicolumn{3}{c!{\color{lightgray}\vrule}}{\textbf{Train Loss}} & \multicolumn{2}{c!{\color{lightgray}\vrule}}{\textbf{BOW$^2$ Vectors}} & \multicolumn{2}{c!{\color{lightgray}\vrule}}{\textbf{DL'19+20}}& \multicolumn{2}{c}{\textbf{DEV}}\\ &$\alpha_b$&$\alpha_{CLS}$&$\alpha_{CS}$&\# Saved & \% Stop.
& \footnotesize{nDCG@10} & \footnotesize{R@1K} & \footnotesize{MRR@10} & \footnotesize{R@1K} \\ \midrule \arrayrulecolor{lightgray} \multicolumn{6}{l}{\textbf{BOW$^2$ only}} \\ \textcolor{gray}{1} & 1 & 0.5 & 0 & 43.2 & 0 \% & .731 & .815 & .387 & .963 \\ \textcolor{gray}{2} & 1 & 0.1 & 0 & 43.2 & 0 \% & .736 & .806 & .387 & .960 \\ \midrule \multicolumn{8}{l}{\textbf{BOW$^2$ + Contextualized Stopwords}} \\ \textcolor{gray}{3} & 1 & 0.5 & 1 & 29.1 & 33 \% & .731 & .811 & .382 & .965 \\ \textcolor{gray}{4} & 1 & 0.1 & 1 & 27.8 & 36 \% & .729 & .802 & .385 & .960 \\ \textcolor{gray}{5} & 1 & 0.1 & 0.75 & 30.9 & 29 \% & .730 & .805 & .387 & .961 \\ \textcolor{gray}{6} & 1 & 0.1 & 0.5 & 36.7 & 15 \% & .725 & .806 & .387 & .962 \\ \arrayrulecolor{black} \bottomrule \end{tabular} \vspace{-.5cm} \end{table} To isolate the CLS retrieval performance for workflow \ding{203} (dense CLS retrieval, followed by BOW$^2$ storage refinement) we compare different training and aggregation strategies with ColBERTer's CLS retrieval vs. re-ranking the candidate set retrieved by a standalone TAS-Balanced retriever in Table \ref{tab:agg_loss_ablation}. Using COIL's aggregation and training approach (by fixing $\sigma(\gamma) = 0.5$ in Eq. \ref{eq:learned_score_agg} and setting $\alpha_{CLS} = 0$) we observe in line 1 that the CLS retrieval component fails substantially, compared to utilizing TAS-B. We postulate that this happens because the token refinement component is more capable at determining relevance and therefore dominates the gradient updates, which minimizes the standalone capabilities of CLS retrieval. Now, with our proposed multi-task and learned score aggregation (lines 2-5) we observe much better CLS retrieval performance. While it still lags a bit behind TAS-B in recall, these deficiencies do not manifest themselves in the top-10 results after refining with the token scores, in both TREC-DL and MSMARCO DEV.
We selected the best performing setting in line 4 for our subsequent experiments. The next addition in our multi-task framework is the learned removal of stopwords. For this we add a third loss function $\mathcal{L}_{CS}$ that directly contradicts the objective of the main $\mathcal{L}_b$ loss. In Table \ref{tab:stopword_ablation} we show the tradeoff between retained BOW$^2$ vectors and effectiveness. In lines 1 \& 2 we see ColBERTer without the stopword components; here 43 unique BOW$^2$ vectors are saved for MSMARCO (compared to 77 for all subword tokens). In lines 3 to 6 we study different loss weighting combinations with CS. While the ratio of removed stopwords is rather sensitive to the selected parameters, the effectiveness values remain largely constant for lines 4 to 6. Based on the MRR value of the DEV set (with the smallest effectiveness change, but still 29 \% removed vectors) we select configuration 5 going forward, although we stress that our approach would also work well with the other settings, and cherry-picking parameters is not needed. This setting reduces the number of vectors, and therefore the storage requirement, by a factor of $2.5$ compared to ColBERT, while keeping the same top-10 effectiveness (comparing Table \ref{tab:stopword_ablation} line 5 vs. Table \ref{tab:agg_loss_ablation} line 1, TAS-B re-ranked). A curious path for future work based on the results of Table \ref{tab:stopword_ablation} would be to use a conservative loss setting (such as line 6) that does not force many of the word removal gates to become zero (so as not to take away capacity from the loss surface for the ranking tasks), followed by the removal of words whose gate values are non-zero but still below a small threshold during inference.
Following the ablation of training possibilities, we now turn towards the possible usage scenarios, as laid out in \sref{sec:retrieval_workflows}. For this study we use ColBERTer with exact matching at 8 and 1 dimensions (Uni-ColBERTer) for the BOW$^2$ vectors, as they are more likely to be used in an inverted index. The inverted index lookup is performed by our hashed id, with potential, but highly unlikely, collisions. Then we followed the approach proposed by COIL \cite{gao2021coil} and UniCOIL \cite{lin2021few} to compute dot products for all entries of a posting list for all exact matches between the query and the inverted index, followed by a summation per document and subsequent sorting to obtain a ranked list. In Table \ref{tab:query_modes} we show the results of our study grouped by the type of indexing and retrieval. For all indexing schemes, we use the same trained models. We start with an ablation of only one of the two scoring parts in lines 1-4. Unsurprisingly, using only one of the scoring parts of ColBERTer results in an effectiveness reduction. What is surprising, though, is the magnitude of the effectiveness drop of the inverted-index-only workflow \ding{206} compared to both using only CLS retrieval (workflow \ding{205}) and refining the results with CLS scores (workflow \ding{202}). Continuing with the single-retrieval-then-refinement section in lines 5-8, we see that once we combine both scoring parts, the underlying indexing approach matters very little for top-10 effectiveness (comparing lines 5 \& 7, as well as lines 6 \& 8); only the reduced recall of the BOW$^2$ indexing is carried over. This is a great result for the robustness of our system: it shows that it can be deployed in a variety of approaches, and practitioners are not locked into a specific retrieval approach. For example, if one has made large investments in an inverted index system, they could build on these investments with Uni-ColBERTer.
\begin{table}[t] \centering \caption{Analysis of the retrieval quality for different query-time retrieval and refinement workflows of ColBERTer with vector dimension of 8 or 1 (Uni-ColBERTer). \textit{nDCG and MRR at cutoff 10.} } \label{tab:query_modes} \setlength\tabcolsep{2.5pt} \vspace{-0.3cm} \begin{tabular}{ccl!{\color{lightgray}\vrule}l!{\color{lightgray}\vrule}rr!{\color{lightgray}\vrule}rr} \toprule &\multicolumn{2}{l!{\color{lightgray}\vrule}}{\multirow{2}{*}{\textbf{Workflow}}} & \multirow{2}{*}{\textbf{Model}} & \multicolumn{2}{c!{\color{lightgray}\vrule}}{\textbf{DL'19+20}}& \multicolumn{2}{c}{\textbf{DEV}}\\ && & &\footnotesize{nDCG} & \footnotesize{R@1K} & \footnotesize{MRR} & \footnotesize{R@1K} \\ \midrule \arrayrulecolor{lightgray} \multicolumn{6}{l}{\textbf{Retrieval Only Ablation}} \\ \textcolor{gray}{1} & \multirow{2}{*}{\ding{206}} & \multirow{2}{*}{BOW$^2$ only} & ColBERTer (Dim8) & .323 & .780 & .131 & .895 \\ \textcolor{gray}{2} & & & Uni-ColBERTer & .280 & .758 & .122 & .880 \\ \midrule \textcolor{gray}{3} & \multirow{2}{*}{\ding{205}} & \multirow{2}{*}{CLS only} & ColBERTer (Dim8) & .669 & .795 & .326 & .958 \\ \textcolor{gray}{4} & & & Uni-ColBERTer & .674 & .789 & .328 & .958 \\ \midrule \multicolumn{8}{l}{\textbf{Single Retrieval > Refinement}} \\ \textcolor{gray}{5} & \multirow{2}{*}{\ding{203}} & \multirow{2}{*}{BOW$^2$ > CLS} & ColBERTer (Dim8) & .730 & .780 & .373 & .895 \\ \textcolor{gray}{6} & & & Uni-ColBERTer & .724 & .673 & .369 & .880 \\ \midrule \textcolor{gray}{7} & \multirow{2}{*}{\ding{204}} & \multirow{2}{*}{CLS > BOW$^2$} & ColBERTer (Dim8) & .733 & .795 & .375 & .958 \\ \textcolor{gray}{8} & & & Uni-ColBERTer & .727 & .789 & .373 & .958 \\ \midrule \multicolumn{8}{l}{\textbf{Hybrid Retrieval \& Refinement}} \\ \textcolor{gray}{9} & \multirow{2}{*}{\ding{202}} & \multirow{2}{*}{Merge (\ding{203}+\ding{204})} & ColBERTer (Dim8) & .734 & .873 & .376 & .981 \\ \textcolor{gray}{10} & & & Uni-ColBERTer & .728 & .865 & .374 & 
.979 \\ \arrayrulecolor{black} \bottomrule \end{tabular} \end{table} \begin{figure}[t] \centering \includegraphics[width=0.49\textwidth,clip, trim=0.0cm 0.0cm 0.0cm 0.0cm]{figures/pareto_figure.pdf} \vspace{-0.7cm} \caption{Tradeoff between storage requirements and effectiveness on MSMARCO Dev. \textit{Note the log scale of the y-axis.}} \label{fig:pareto-figure} \vspace{-0.5cm} \end{figure} Finally, we investigate a hybrid indexing workflow \ding{202}, where both index types generate candidates and all candidates are refined with the complementary scoring part. We observe that the recall does increase compared to only one index; however, these improvements do not manifest themselves in the top-10 effectiveness. Here, the results are very close to the simpler workflows \ding{203} \& \ding{204}. Therefore, \textit{to keep it simple} we continue to use workflow \ding{203} and would suggest it as the primary way of using ColBERTer, if no previous investments make workflow \ding{204} more attractive. A general observation in the neural IR community is that more capacity in the number of vector dimensions usually leads to better results, albeit with diminishing returns. To see how our enhanced reductions fit into this assumption, we study them directly. Our main concern regarding the enhanced reductions of the number of vectors introduced in ColBERTer is whether they actually improve the effectiveness-cost tradeoff, or whether it would be better to keep the number of vectors the same and simply reduce the number of dimensions. In the latter case our work would be nullified. In Figure \ref{fig:pareto-figure} we show the tradeoff between storage requirements and effectiveness of our model configurations and closely related baselines. First, we observe that the results of the single-vector TAS-B \cite{Hofstaetter2021_tasb_dense_retrieval} and the multi-vector staged pipeline of TAS-B + ColBERT (ours) form a corridor in which our ColBERTer results are expected to reside.
Conforming with this expectation, all ColBERTer results are between the two in terms of effectiveness. In Figure \ref{fig:pareto-figure} we display three ColBERTer reduction configurations, each at 32, 16, 8, and 1 (Uni-ColBERTer) token vector dimensions. Inside each configuration, we observe the usual pattern that more capacity improves effectiveness at the cost of increased storage requirements. Between configurations, we observe that removing half the vectors is more efficient while providing equal or even slightly improved effectiveness. Thus, using our enhanced reductions improves the Pareto frontier, compared to just reducing the dimensionality. In the case of Uni-ColBERTer, there is no way of further reducing the dimensionality, so every removed vector enables previously unattainable efficiency gains. Our most efficient Uni-ColBERTer with all (BOW$^2$ and CS) reductions enabled reaches parity with the plaintext size it indexes. This includes the dense index, which at 128 dimensions roughly takes up two thirds of the total space. \begin{table*}[t] \centering \caption{Comparing ColBERTer's retrieval effectiveness to related approaches grouped by storage requirements. The storage factor refers to the ratio of index size to the plaintext size of 3.05 GB.
\textit{* indicates an estimation by us.} } \label{tab:related_work_comp} \setlength\tabcolsep{2.4pt} \vspace{-0.3cm} \begin{tabular}{cll!{\color{lightgray}\vrule}ll!{\color{lightgray}\vrule}r!{\color{lightgray}\vrule}c!{\color{lightgray}\vrule}ll!{\color{lightgray}\vrule}ll!{\color{lightgray}\vrule}ll} % % \toprule &\multicolumn{2}{c!{\color{lightgray}\vrule}}{\textbf{Model}} & \multicolumn{2}{c!{\color{lightgray}\vrule}}{\textbf{Storage}} & \textbf{Query} & \textbf{Interpret.} & \multicolumn{2}{c!{\color{lightgray}\vrule}}{\textbf{TREC-DL'19}}& \multicolumn{2}{c!{\color{lightgray}\vrule}}{\textbf{TREC-DL'20}}& \multicolumn{2}{c}{\textbf{DEV}}\\ && & Total & Factor & \textbf{Latency} & \textbf{Ranking} & \footnotesize{nDCG@10} & \footnotesize{R@1K} & \footnotesize{nDCG@10} & \footnotesize{R@1K} & \footnotesize{MRR@10} & \footnotesize{R@1K} \\ \midrule \arrayrulecolor{lightgray} \multicolumn{12}{l}{\textbf{Low Storage Systems (max. 2x Factor)}} \\ \textcolor{gray}{1} & \cite{mackenzie2021wacky} & BM25 (PISA) & 0.7 GB & $\times$ 0.2 & 8 ms & \cmark & .501 & .739 & .475 & .806 & .194 & .868 \\ \midrule \textcolor{gray}{2} & \cite{zhan2021jointly} & JPQ & 0.8 GB & $\times$ 0.3 & 90 ms & \xmark & .677 & -- & -- & -- & .341 & -- \\ \textcolor{gray}{3} & \cite{lin2021few} & UniCOIL-Tok & N/A & N/A & N/A & \cmark & -- & -- & -- & -- & .315 & -- \\ \textcolor{gray}{4} & \cite{mackenzie2021wacky} & UniCOIL-Tok (+docT5query) & 1.4 GB & $\times$ 0.5 & 37 ms & \cmark & -- & -- & -- & -- & .352 & -- \\ \textcolor{gray}{5} & \cite{formal2021splade,mackenzie2021wacky} & SPLADEv2 (PISA) & 4.3 GB & $\times$ 1.4 & 220 ms & \xmark & .729 & -- & -- & -- & .369 & \textbf{.979} \\ \textcolor{gray}{6} & \cite{lin2021densifying} & DSR-SPLADE + Dense-CLS (Dim 128) & 5 GB & $\times$ 1.6 & 32 ms & \xmark & .709 & -- & .673 & -- & .344 & -- \\ \midrule \textcolor{gray}{7} & & Uni-ColBERTer (Dim 1) & 3.3 GB & $\times$ 1.1 & 55 ms & \cmark & .727 & .761 & .726 & .812 & .373 & .958 \\ 
\textcolor{gray}{8} & & ColBERTer w. EM (Dim 8) & 5.8 GB & $\times$ 1.9 & 55 ms & \cmark & \textbf{.732} & \textbf{.764} & \textbf{.734} & \textbf{.819} & \textbf{.375} & .958 \\ \arrayrulecolor{black} \midrule \arrayrulecolor{lightgray} \multicolumn{12}{l}{\textbf{Higher Storage Systems}} \\ \textcolor{gray}{9} & \cite{gao2021coil} & COIL (Dim 128, 8) & 12.5 GB* & $\times$ 4.1* & 21 ms & \cmark & .694 & -- & -- & -- & .347 & .956 \\ \textcolor{gray}{10} & \cite{gao2021coil} & COIL (Dim 768, 32) & 54.7 GB* & $\times$ 17.9 & 41 ms & \cmark & .704 & -- & -- & -- & .355 & \textbf{.963} \\ \textcolor{gray}{11} & \cite{lin2021densifying} & DSR-SPLADE + Dense-CLS (Dim 256) & 11 GB & $\times$ 3.6 & 34 ms & \xmark & .711 & -- & .678 & -- & .348 & -- \\ \textcolor{gray}{12} & \cite{lin2021few,lin2020distilling} & TCT-ColBERTv2 + UniCOIL (+dT5q) & 14.4 GB* & $\times$ 4.7* & 110 ms & \cmark & -- & -- & -- & -- & .378 & -- \\ \midrule \textcolor{gray}{13} & & ColBERTer (Dim 16) & 9.9 GB & $\times$ 3.2 & 51 ms & \cmark & .726 & .782 & .719 & .829 & .383 & .961 \\ \textcolor{gray}{14} & & ColBERTer (Dim 32) & 18.8 GB & $\times$ 6.2 & 51 ms & \cmark & \textbf{.727} & \textbf{.781} & \textbf{.733} & \textbf{.825} & \textbf{.387} & .961 \\ \arrayrulecolor{black} \bottomrule \vspace{-.7cm} \end{tabular} \end{table*} \subsection{Comparing to Related Work} We want to emphasize that it becomes increasingly hard to contrast neural retrieval models and make conclusive statements about ``SOTA'' (state-of-the-art). This is because there are numerous factors at play for the effectiveness, including training data sampling, distillation, generational training, and even hardware setups. At the same time, it is highly important to compare systems not only by their effectiveness but to factor in their efficiency as well, to avoid misleading claims.
We believe it is important to show that we do not observe substantial differences in effectiveness compared to other systems of similar efficiency and that small deviations in effectiveness should not strongly impact our overall assessment -- even if those small differences come out in our favor. With that in mind, we compare ColBERTer to related approaches. In Table \ref{tab:related_work_comp} we group models by our main efficiency focus: the storage requirements, measured as the factor of the indexed plaintext size. \paragraph{\textbf{Low Storage Systems}} We find that ColBERTer does improve on the existing Pareto frontier compared to other approaches, especially for cases with a low storage footprint. Our Uni-ColBERTer (line 7) especially outperforms previous single-dimension token encoding approaches, while at the same time offering improved transparency and making it easier to showcase model scores with mappings to whole words. However, we also observe that we could further improve the dense retrieval component with a technique similar to JPQ \cite{zhan2021jointly} (line 2) to further reduce our storage footprint. \paragraph{\textbf{Higher Storage Systems}} At first glance, we see that while 32 dimensions per token may not sound like much, the resulting increase of the total storage requirement is staggering. ColBERTer outperforms similarly sized architectures as well; however, a fair comparison becomes more difficult than for the low storage systems, as the absolute size differences become much larger. Another curious observation is that larger ColBERTer models (lines 13 \& 14) seem to be slightly faster than our smaller instances (lines 7 \& 8).
We believe this is because we utilize non-optimized Python code to look up the top-1000 token storage memory locations per query, which takes 10 ms for ColBERTer without exact matching and 15 ms for ColBERTer with exact matching, as there we need to access two locations per passage (one for the values and one for the ids). There is definitely potential for strong optimization of this aspect in future production implementations. \subsection{Out-of-Domain Robustness} In this section we evaluate the zero-shot performance of our ColBERTer architecture when it is applied to retrieval collections from domains outside the training data. Our main aim is to present an analysis grounded in robust evaluation \cite{voorhees2001philosophy,zovel1998reliability} that does not fall for common problematic shortcuts in IR evaluation, such as ignoring effect sizes \cite{fuhr2018commonmistakes, webber2008stat_power}, relying on too shallowly pooled collections \cite{arabzadeh2021shallow,webber2009poolbias,lu2016effect}, not accounting for pool bias in old collections \cite{buckley2004retrieval, Tetsuya2007dashJ, sakai2008poolbias}, and aggregating metrics over different collections which are not comparable \cite{soboroff2018meta}. We first describe our evaluation methodology and then discuss our results presented in Figure \ref{fig:ood-robustness-effect-size}. \paragraph{\textbf{Methodology}} We selected seven datasets from the \texttt{ir\_datasets} catalogue \cite{macavaney:sigir2021-irds}: bio-medical (TREC Covid \cite{Wang2020Cord19,Voorhees2020TrecCovid}, TripClick \cite{Rekabsaz2021TripClick}, NFCorpus \cite{Boteva2016Nfcorpus}), entity-centric (DBPedia Entity \cite{Hasibi2017DBpediaEntityVA}), informal language (Antique \cite{hashemi2020antique}, TREC Podcast \cite{jones2021trec}), and news cables (TREC Robust 04 \cite{Voorhees2004Robust}).
The datasets are not based on web collections, have at least $50$ queries, and, importantly, contain judgements from both relevant and non-relevant categories. Three datasets are also part of the BEIR \cite{thakur2021beir} catalogue. We choose not to use other datasets from BEIR, as they do not contain non-relevant judgements, which makes it impossible to conduct pooling bias corrections. We follow \citet{Tetsuya2007dashJ} to correct our metric measurements for pool bias by measuring effectiveness only on judged passages (which means removing all retrieved passages that are not judged and then re-assigning the ranks of the remaining ones). This is in contrast with the default assumption that non-judged passages are not relevant (which of course favors methods that have been part of the pooling process). Additionally, we follow \citet{soboroff2018meta} to utilize an effect-size analysis that is popular in medicine and the social sciences. \citet{soboroff2018meta} proposed to use this effect size as a meta-analysis tool to be able to compare statistical significance across different retrieval collections. In this work we combine the evaluation approaches of \citet{Tetsuya2007dashJ} and \citet{soboroff2018meta} for the first time to gain strong confidence in our results and analysis.
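The pool-bias correction can be sketched as a generic "judged-only" condensation in the spirit of this setup; reciprocal rank serves here as a stand-in for the actual metrics, and all names are ours:

```python
def condense_to_judged(ranked_ids, judged_ids):
    """Drop all retrieved passages without a judgement and
    re-assign the ranks of the remaining ones."""
    judged = set(judged_ids)
    return [pid for pid in ranked_ids if pid in judged]

def reciprocal_rank(ranked_ids, relevant_ids):
    """Reciprocal rank of the first relevant passage, 0 if none is found."""
    for rank, pid in enumerate(ranked_ids, start=1):
        if pid in relevant_ids:
            return 1.0 / rank
    return 0.0
```

Computing the metric on the condensed list avoids penalizing a system for retrieving passages that were simply never judged in the original pool.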
\begin{figure*}[t] \centering \begin{subfigure}[t]{0.4\textwidth} \centering \includegraphics[height=5.3cm,clip, trim=0.2cm 0.2cm 0.2cm 0.2cm]{figures/effect_size_smd-judged-only-bm25-vs-colberter-dim1-bow-cs.pdf} \end{subfigure}% \begin{subfigure}[t]{0.3\textwidth} \centering \includegraphics[height=5.3cm,clip, trim=5.1cm 0.2cm 0.2cm 0.2cm]{figures/effect_size_smd-judged-only-bm25-vs-colberter-dim32-bowonly.pdf} \end{subfigure}% \begin{subfigure}[t]{0.3\textwidth} \centering \includegraphics[height=5.3cm,clip, trim=5.1cm 0.2cm 0.2cm 0.2cm]{figures/effect_size_smd-judged-only-tasb-vs-colberter-dim32-bowonly.pdf} \end{subfigure}% \caption{Effect-size based evaluation of ColBERTer's zero-shot out-of-domain robustness. We compare three pairings between the control and the treatment retrieval method. The comparison is dependent on the effect size of each collection, and the mean nDCG@10 differences are standardized with the effect size. The confidence intervals are plotted as intervals around the standardized mean difference \ding{117}. The summary effect of the treatment is computed with the Random-Effect (RE) model; here we see an overall significant improvement for ColBERTer (Dim1 and Dim32) over BM25.} \label{fig:ood-robustness-effect-size} \end{figure*} We take the standardized mean difference (SMD) in nDCG@10 score between a baseline model and our model as the effect. Besides the variability within different collections, we assume, as does \citet{soboroff2018meta}, between-collection heterogeneity. Following \citet{soboroff2018meta}, we use a random-effects model to estimate the summary effect of our model and each individual effect's contribution, i.e., weight. We use the DerSimonian and Laird estimate \cite{dersimonian2015meta} to obtain the between-collection variance. We illustrate the outcome of our meta-analysis as forest plots. Diamonds \ding{117} show the effect in each collection and, in turn, in summary.
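The random-effects summary can be sketched as a generic DerSimonian-Laird estimator; this is our illustrative implementation, not the evaluation code used for the figures, and the 1.96 factor yields the 95\% confidence interval:

```python
import numpy as np

def dl_random_effects(effects, variances):
    """Random-effects summary effect with the DerSimonian-Laird
    estimate of the between-collection variance tau^2."""
    y = np.asarray(effects, dtype=float)
    v = np.asarray(variances, dtype=float)
    w = 1.0 / v                                   # inverse-variance weights
    y_fixed = np.sum(w * y) / np.sum(w)           # fixed-effect mean
    q = np.sum(w * (y - y_fixed) ** 2)            # Cochran's Q heterogeneity
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(y) - 1)) / c)       # between-collection variance
    w_star = 1.0 / (v + tau2)                     # random-effects weights
    summary = np.sum(w_star * y) / np.sum(w_star)
    se = np.sqrt(1.0 / np.sum(w_star))
    return summary, summary - 1.96 * se, summary + 1.96 * se
```

When the per-collection effects agree exactly, $\tau^2$ collapses to zero and the summary reduces to the fixed-effect inverse-variance mean.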
Each effect is accompanied by its $95\%$ confidence interval -- the grey line. The dotted vertical line marks the \textit{null effect}, i.e., zero SMD in nDCG@10 score between our model and the compared baseline. A confidence interval crossing the null effect line indicates that the corresponding effect is statistically not significant; in all other cases, it contains the actual effect of our model $95\%$ of the time. As a baseline, we use BM25 as implemented by Pyserini \cite{lin2021pyserini}. We apply our models, trained on MSMARCO, end-to-end in a zero-shot fashion with our default settings for retrieval. We compare a ColBERTer version with $32$ token dimensions, as well as Uni-ColBERTer with a single token dimension and an exact matching prior. \paragraph{\textbf{Discussion}} We illustrate the effect of using Uni-ColBERTer instead of BM25 within the different collections and the corresponding summary effect in Figure \ref{fig:ood-robustness-effect-size}a. Compared to the retrospective approach of hypothesis testing with p-values, confidence intervals are predictive \cite{soboroff2018meta}. Considering the TripClick collection, for example, we expect the effect to be between .09 and .25 95\% of the time, indicating that we can detect the effect size of .17 SMD at the given confidence level and underlining the significant effectiveness gains of Uni-ColBERTer over BM25. Only on TREC Robust 04 does the confidence interval of the small improvement cross the null-effect line, i.e., the difference is not significant. Overall, judging by the summary effect in Figure \ref{fig:ood-robustness-effect-size}a, we expect that choosing Uni-ColBERTer over BM25 consistently and significantly improves effectiveness. Similarly, considering Figure \ref{fig:ood-robustness-effect-size}b, we expect ColBERTer (Dim32) to consistently and significantly outperform BM25.
However, comparing the summary effects in Figure \ref{fig:ood-robustness-effect-size}a and Figure \ref{fig:ood-robustness-effect-size}b, we expect Uni-ColBERTer and ColBERTer (Dim32) to behave similarly when run against BM25, suggesting the use of the more efficient model. We also compare our model to the more effective neural dense retriever TAS-B \cite{Hofstaetter2021_tasb_dense_retrieval}, which has been shown to work well in out-of-domain settings \cite{thakur2021beir}. We report the effect of using ColBERTer (Dim32) vs. TAS-B in Figure \ref{fig:ood-robustness-effect-size}c. Here, we see a much less clear picture than in the other two cases. Most collections overlap inside the 95\% CI, including the summary effect model, which leads us to conclude that these models are equally effective. Only the Antique collection is significantly improved by ColBERTer. The TREC Covid collection is a curious case -- looking at absolute numbers, one would easily assume a substantial improvement; however, because it evaluates only 50 queries, the confidence interval is very wide. Finally, what does this mean for a deployment decision of ColBERTer vs. TAS-B? We need to consider other aspects, such as transparency. We would argue that ColBERTer increases transparency over TAS-B, as laid out in this paper, and at the same time it does not show significantly worse results on a single collection; therefore, we would select it. \section{Conclusion} In this paper, we proposed ColBERTer, an efficient and effective retrieval model that improves the storage efficiency, the retrieval complexity, and the interpretability of the ColBERT architecture along the effectiveness Pareto frontier. To this end, ColBERTer learns whole-word representations that exclude contextualized stopwords, yielding 2.5$\times$ fewer vectors than ColBERT while supporting user-friendly query--document scoring patterns at the level of whole words.
ColBERTer also uses a multi-task, multi-stage training objective---as well as an optional lexical matching component---that together enable it to aggressively reduce the vector dimension to 1. Extensive empirical evaluation shows that ColBERTer is highly effective on MS MARCO and TREC-DL and highly robust out of domain, while demonstrating highly-competitive storage efficiency with prior dense and sparse models. \section{Out-of-Domain Raw Results}
\section{Introduction} According to the physical theory, if an incompressible viscous Newtonian fluid occupies the whole space $\mathbb{R}^{n}$ in the absence of external forces, then the velocity $U(\tau,x)$ and kinematic pressure $\varPi(\tau,x)$ of the fluid at time $\tau>0$ and position $x\in\mathbb{R}^{n}$ satisfy the Navier-Stokes equations \begin{equation*} \left\{\begin{array}{l} \partial_{\tau}U-\nu\Delta U+(U\cdot\nabla)U+\nabla\varPi=0, \\ \nabla\cdot U=0, \end{array}\right. \end{equation*} where the coefficient $\nu>0$ is the kinematic viscosity of the fluid. By considering the rescaled quantities \begin{equation*} t=\nu\tau, \quad u(t,x)=\nu^{-1}U(\nu^{-1}t,x), \quad \varpi(t,x)=\nu^{-2}\varPi(\nu^{-1}t,x), \end{equation*} whose physical dimensions are powers of length alone, we may rewrite the Navier-Stokes equations in the standardised form \begin{equation}\label{intro-navier-stokes} \left\{\begin{array}{l} \partial_{t}u-\Delta u+(u\cdot\nabla)u+\nabla\varpi=0, \\ \nabla\cdot u=0. \end{array}\right. \end{equation} At the formal level, if $(u,\varpi)$ satisfy \eqref{intro-navier-stokes} then $\varpi$ may be recovered from $u$ by the formula \begin{equation}\label{pressure} \varpi = {(-\Delta)}^{-1}\nabla^{2}:(u\otimes u). \end{equation} Writing $\Lambda$ to denote the physical dimension of length, the quantities $x,t,u,\varpi$ have physical dimensions \begin{equation*} [x] = \Lambda, \quad [t] = \Lambda^{2}, \quad [u] = \Lambda^{-1}, \quad [\varpi] = \Lambda^{-2}; \end{equation*} this is related to the fact that the standardised Navier-Stokes equations are preserved under the rescaling \begin{equation}\label{navier-stokes-scaling} x_{\lambda}=\lambda x, \quad t_{\lambda} = \lambda^{2}t, \quad u_{\lambda}(t_{\lambda},x_{\lambda}) = \lambda^{-1}u(t,x), \quad \varpi_{\lambda}(t_{\lambda},x_{\lambda})=\lambda^{-2}\varpi(t,x). 
\end{equation} \begin{definition}\label{regsoldef} For $T\in(0,\infty]$, a function $u:(0,T)\times\mathbb{R}^{n}\rightarrow\mathbb{R}^{n}$ is said to be a {\em regular solution} to the standardised Navier-Stokes equations on $(0,T)$ if \begin{enumerate}[label=(\roman*)] \item $u$ is smooth on $(0,T)\times\mathbb{R}^{n}$, with every derivative belonging to $C((0,T);L^{2}(\mathbb{R}^{n}))$; \item $(u,\varpi)$ satisfy \eqref{intro-navier-stokes}, with $\varpi$ being given by \eqref{pressure}. \end{enumerate} \end{definition} \begin{definition} For $T\in(0,\infty)$, a regular solution $u$ on $(0,T)$ is said to {\em blow up} at (rescaled) time $T$ if $u$ doesn't extend to a regular solution on $(0,T')$ for any $T'>T$. \end{definition} In spatial dimension $n\geq3$, it remains unknown whether there exist regular solutions which blow up. Since Leray's seminal paper \cite{leray1934}, prospective blowing-up solutions have been studied using homogeneous norms: \begin{definition} A norm ${\|\cdot\|}_{X}$ (defined on a subspace $\mathcal{X}\subseteq\mathcal{S}'(\mathbb{R}^{n})$ which is closed under dilations of $\mathbb{R}^{n}$) is said to be {\em homogeneous of degree $\alpha=\alpha(X)$} if, under the rescaling $x_{\lambda}=\lambda x$ and $f_{\lambda}(x_{\lambda})=f(x)$, we have ${\|f_{\lambda}\|}_{X}\approx_{X}\lambda^{\alpha}{\|f\|}_{X}$ for all $\lambda\in(0,\infty)$ and $f\in\mathcal{X}$. \end{definition} If ${\|\cdot\|}_{X}$ is homogeneous of degree $\alpha$, then under the scaling \eqref{navier-stokes-scaling} of the Navier-Stokes equations we have ${\|u_{\lambda}(t_{\lambda})\|}_{X}\approx_{X}\lambda^{\alpha-1}{\|u(t)\|}_{X}$, so the quantity ${\|u\|}_{X}$ has physical dimension $\Lambda^{\alpha-1}$. In the context of the Navier-Stokes equations, the homogeneous norm ${\|\cdot\|}_{X}$ is said to be {\em subcritical} if $\alpha(X)<1$, {\em critical} if $\alpha(X)=1$, and {\em supercritical} if $\alpha(X)>1$. 
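For example (a standard computation, included here for concreteness): under the rescaling $x_{\lambda}=\lambda x$ and $f_{\lambda}(x_{\lambda})=f(x)$, a change of variables gives
\begin{equation*}
{\|f_{\lambda}\|}_{L^{p}(\mathbb{R}^{n})}^{p} = \int_{\mathbb{R}^{n}}{\left|f(\lambda^{-1}x_{\lambda})\right|}^{p}\,\mathrm{d}x_{\lambda} = \lambda^{n}\int_{\mathbb{R}^{n}}{|f(x)|}^{p}\,\mathrm{d}x = \lambda^{n}{\|f\|}_{L^{p}(\mathbb{R}^{n})}^{p},
\end{equation*}
so ${\|\cdot\|}_{L^{p}(\mathbb{R}^{n})}$ is homogeneous of degree $\frac{n}{p}$; in particular, ${\|\cdot\|}_{L^{n}(\mathbb{R}^{n})}$ is critical.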
If ${\|\cdot\|}_{X}$ is subcritical, then the blowup estimate \begin{equation}\label{abstract-blowup} u\text{ blows up at time }T\quad\Rightarrow\quad{\|u(t)\|}_{X}\gtrsim_{X}{(T-t)}^{-\left(1-\alpha(X)\right)/2} \end{equation} makes dimensional sense. We investigate \eqref{abstract-blowup} in the context of the homogeneous Besov norms \begin{equation*} {\|f\|}_{\dot{B}_{p,q}^{s}(\mathbb{R}^{n})} := {\left\|j\mapsto2^{js}{\left\|\mathcal{F}^{-1}\varphi(2^{-j}\xi)\mathcal{F}f\right\|}_{L^{p}(\mathbb{R}^{n})}\right\|}_{l^{q}(\mathbb{Z})} \quad \text{for }s\in\mathbb{R},\,p,q\in[1,\infty], \end{equation*} where $\mathcal{F}$ is the Fourier transform and $\varphi$ is a cutoff function satisfying certain properties.\footnote{We will give more detailed definitions in section \ref{besov-spaces}. Choosing a different function $\varphi$ yields an equivalent norm.} Amongst other things, for any $p, p_j,q,q_j \in [1,\infty]$ and $\delta,s \in \mathbb{R}$ the Besov norms satisfy \begin{equation}\label{intro-besov-embedding} {\|f\|}_{\dot{B}_{p_{1},q_{1}}^{\frac{n}{p_{1}}+\delta}(\mathbb{R}^{n})} \gtrsim_{n} {\|f\|}_{\dot{B}_{p_{2},q_{2}}^{\frac{n}{p_{2}}+\delta}(\mathbb{R}^{n})} \quad \text{if }p_{1}\leq p_{2},\,q_{1}\leq q_{2}, \end{equation} \begin{equation}\label{intro-lebesgue} {\|f\|}_{\dot{B}_{p,1}^{0}(\mathbb{R}^{n})} \geq {\|f\|}_{L^{p}(\mathbb{R}^{n})} \gtrsim_{\varphi} {\|f\|}_{\dot{B}_{p,\infty}^{0}(\mathbb{R}^{n})}, \end{equation} \begin{equation}\label{intro-sobolev} {\|f\|}_{\dot{B}_{2,2}^{s}(\mathbb{R}^{n})} \approx_{\varphi} {\|f\|}_{\dot{H}^{s}(\mathbb{R}^{n})}, \end{equation} and ${\|\cdot\|}_{\dot{B}_{p,q}^{s}(\mathbb{R}^{n})}$ is homogeneous of degree $\alpha(\dot{B}_{p,q}^{s}(\mathbb{R}^{n}))=\frac{n}{p}-s$. In the context of Navier-Stokes we define \begin{equation*} s_{p} = s_{p}(n) := -1+\frac{n}{p}, \end{equation*} so the norm ${\|\cdot\|}_{\dot{B}_{p,q}^{s_{p}+\epsilon}(\mathbb{R}^{n})}$ is critical for $\epsilon=0$, and subcritical for $\epsilon>0$. 
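As a dimensional check of \eqref{abstract-blowup} (a heuristic, not part of any proof): if ${\|\cdot\|}_{X}$ is homogeneous of degree $\alpha$, then ${\|u(t)\|}_{X}$ has physical dimension $\Lambda^{\alpha-1}$ while $[T-t]=\Lambda^{2}$, so
\begin{equation*}
\left[{(T-t)}^{-\left(1-\alpha\right)/2}\right] = \Lambda^{-(1-\alpha)} = \Lambda^{\alpha-1} = \left[{\|u(t)\|}_{X}\right],
\end{equation*}
and both sides of \eqref{abstract-blowup} indeed carry the same dimension. In particular, for $X=\dot{B}_{p,q}^{s_{p}+\epsilon}(\mathbb{R}^{n})$ we have $\alpha=\frac{n}{p}-s_{p}-\epsilon=1-\epsilon$, so $(1-\alpha)/2=\epsilon/2$, which matches the exponent $\epsilon/2$ appearing in \eqref{intro-easy-besov}.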
It is known that if a regular solution $u$ blows up at a finite time $T>0$, then\footnote{These estimates follow from the local theory and regularity properties of mild solutions with initial data $f$, where the existence time is bounded below by a constant multiple of ${\|f\|}_{\dot{B}_{\infty,\infty}^{-1+\epsilon}(\mathbb{R}^{n})}^{-2/\epsilon}\vee{\|f\|}_{L^{\infty}(\mathbb{R}^{n})}^{-2}$. Estimate \eqref{intro-leray} comes from Leray \cite{leray1934}, while the local theory for initial data in Besov spaces is discussed in Lemari\'{e}-Rieusset's book \cite{lemarie2016}. We aim to give a more detailed account of the regularity properties of such solutions in an upcoming paper.} \begin{equation}\label{intro-easy-besov} {\|u(t)\|}_{\dot{B}_{\infty,\infty}^{-1+\epsilon}(\mathbb{R}^{n})} \gtrsim_{\varphi,\epsilon} {(T-t)}^{-\epsilon/2} \quad \text{for all }t\in(0,T),\,\epsilon\in(0,1), \end{equation} \begin{equation}\label{intro-leray} {\|u(t)\|}_{L^{\infty}(\mathbb{R}^{n})} \gtrsim_{n} {(T-t)}^{-1/2} \quad \text{for all }t\in(0,T). \end{equation} By virtue of \eqref{intro-besov-embedding} and \eqref{intro-lebesgue}, the left-hand side of \eqref{intro-easy-besov} may be replaced by ${\|u(t)\|}_{\dot{B}_{p,q}^{s_{p}+\epsilon}(\mathbb{R}^{n})}$ for any $p,q\in[1,\infty]$, while the left-hand side of \eqref{intro-leray} may be replaced by ${\|u(t)\|}_{\dot{B}_{p,1}^{s_{p}+1}(\mathbb{R}^{n})}$ for any $p\in[1,\infty]$. Adapting the energy methods of \cite{mccormick2016, robinson2014, robinson2012}, we will prove the following blowup estimates in the case $\epsilon\in[1,2]$: \begin{theorem}\label{main-theorem} Let $n\geq3$ and $T\in(0,\infty)$. If $u$ is a regular solution (see Definition \ref{regsoldef}) to the standardised Navier-Stokes equations on $(0,T)$ which satisfies\footnote{Note that this is a natural assumption for a solution blowing up at a finite time $T$ in view of \eqref{intro-easy-besov} with $\epsilon =\tfrac 12$. 
In fact, to prove the parts of \eqref{eps-less-2} - \eqref{eps-is-2} with $\epsilon \neq 1$, one may replace this assumption with $\lim_{t\nearrow T}{\|u(t)\|}_{L^\infty(\mathbb{R}^{n})}=\infty$ (coming from \eqref{intro-leray}) by replacing \eqref{interpbesonehalf} in the proof with the estimate $$ {\|u(t)\|}_{L^\infty(\mathbb{R}^n)} \leq {\|u(t)\|}_{\dot{B}_{\infty,1}^{0}(\mathbb{R}^n)} \lesssim_{n,\epsilon}{\|u(t)\|}_{\dot{B}_{\infty,\infty}^{-n/2}(\mathbb{R}^n)}^{\lambda}{\|u(t)\|}_{\dot{B}_{\infty,\infty}^{-1+\epsilon}(\mathbb{R}^n)}^{1-\lambda} \quad \text{with} \quad \lambda=\frac{\epsilon-1}{\epsilon-1+\frac{n}{2}}\, . $$ } $\lim_{t\nearrow T}{\|u(t)\|}_{\dot{B}_{\infty,\infty}^{-1/2}(\mathbb{R}^{n})}=\infty$, then \begin{equation}\label{eps-less-2} {\|u(t)\|}_{\dot{B}_{p,q}^{s_{p}+\epsilon}(\mathbb{R}^{n})} \gtrsim_{\varphi,\epsilon,(p\vee q\vee2)} {(T-t)}^{-\epsilon/2} \quad \text{for all }t\in(0,T),\,\epsilon\in[1,2),\,p,q\in\left[1,\frac{n}{2-\epsilon}\right) \end{equation} and \begin{equation}\label{eps-is-2} {\|u(t)\|}_{\dot{B}_{p,1}^{s_{p}+2}(\mathbb{R}^{n})} \gtrsim_{\varphi,(p\vee2)} {(T-t)}^{-1} \quad \text{for all }t\in(0,T),\,p\in[1,\infty). \end{equation} \end{theorem} Under the additional restrictions that $p,q\in[1,2]$ and $n=3$, the blowup estimate \eqref{eps-less-2} is implied by the blowup estimate for $\dot{H}^{s_{2}+\epsilon}(\mathbb{R}^{3})$, which was proved in the case $\epsilon\in(1,2)$ by Robinson, Sadowski and Silva \cite{robinson2012}, and in the case $\epsilon=1$ by McCormick et al.\ \cite{mccormick2016}. Under the additional restrictions that $p\in[1,2]$ and $n=3$, the blowup estimate \eqref{eps-is-2} is implied by the blowup estimate for $\dot{B}_{2,1}^{5/2}(\mathbb{R}^{3})$, which was proved by McCormick et al.\ \cite{mccormick2016}. The rest of this paper is organised as follows. In section \ref{besov-spaces} we recall some standard properties of Besov spaces, using \cite{bahouri2011} as our main reference. 
In section \ref{commutator-estimates} we prove some commutator estimates, adapting the ideas of \cite[Lemma 2.100]{bahouri2011}. In section \ref{navier-stokes-blowup-rates} we prove Theorem \ref{main-theorem}. We will henceforth use the abbreviations $L^{p}=L^{p}(\mathbb{R}^{n})$, $\dot{H}^{s}=\dot{H}^{s}(\mathbb{R}^{n})$, $\dot{B}_{p,q}^{s}=\dot{B}_{p,q}^{s}(\mathbb{R}^{n})$ and $l^{q}=l^{q}(\mathbb{Z})$. \section{Besov spaces}\label{besov-spaces} \begin{lemma}\label{bahouri-prop-2.10} (\cite{bahouri2011}, Proposition 2.10). Let $\mathcal{C}$ be the annulus $B(0,8/3)\setminus\overline{B}(0,3/4)$. Then the set $\widetilde{\mathcal{C}}=B(0,2/3)+\mathcal{C}$ is an annulus, and there exist radial functions $\chi\in\mathcal{D}(B(0,4/3))$ and $\varphi\in\mathcal{D}(\mathcal{C})$, taking values in $[0,1]$, such that \begin{equation*} \left\{\begin{array}{ll} \chi(\xi)+\sum_{j\geq0}\varphi(2^{-j}\xi)=1 & \forall\,\xi\in\mathbb{R}^{n}, \\ \sum_{j\in\mathbb{Z}}\varphi(2^{-j}\xi)=1 & \forall\,\xi\in\mathbb{R}^{n}\setminus\{0\}, \\ |j-j'|\geq2\Rightarrow\supp\varphi(2^{-j}\cdot)\cap\supp\varphi(2^{-j'}\cdot)=\emptyset, \\ j\geq1\Rightarrow\supp\chi\cap\supp\varphi(2^{-j}\cdot)=\emptyset, \\ |j-j'|\geq5\Rightarrow2^{j'}\widetilde{\mathcal{C}}\cap2^{j}\mathcal{C}=\emptyset, \\ 1/2\leq\chi^{2}(\xi)+\sum_{j\geq0}\varphi^{2}(2^{-j}\xi)\leq1 & \forall\,\xi\in\mathbb{R}^{n}, \\ 1/2\leq\sum_{j\in\mathbb{Z}}\varphi^{2}(2^{-j}\xi)\leq1 & \forall\,\xi\in\mathbb{R}^{n}\setminus\{0\}. \\ \end{array}\right. \end{equation*} \end{lemma} We fix $\chi,\varphi$ satisfying Lemma \ref{bahouri-prop-2.10}. For $j\in\mathbb{Z}$ and $u\in\mathcal{S}'$, we define\footnote{We adopt the convention that $\mathcal{F}f(\xi) = \int_{\mathbb{R}^{n}}e^{-\mathrm{i}\xi\cdot x}f(x)\,\mathrm{d}x$ for $f\in\mathcal{S}$. 
We recall that the Fourier transform of a compactly supported distribution is a smooth function.} \begin{equation*} \dot{S}_{j}u := \chi(2^{-j}D)u = \mathcal{F}^{-1}\chi(2^{-j}\xi)\mathcal{F}u, \end{equation*} \begin{equation*} \dot{\Delta}_{j}u := \varphi(2^{-j}D)u = \mathcal{F}^{-1}\varphi(2^{-j}\xi)\mathcal{F}u. \end{equation*} \begin{lemma}\label{truncation-lemma} For any $j,j'\in\mathbb{Z}$ and $u,v\in\mathcal{S}'$, we have \begin{equation*} |j-j'|\geq 2\Rightarrow \dot{\Delta}_{j}\dot{\Delta}_{j'}u=0, \qquad |j-j'|\geq5\Rightarrow\dot{\Delta}_{j}\left(\dot{S}_{j'-1}u\,\dot{\Delta}_{j'}v\right)=0. \end{equation*} \end{lemma} \begin{proof} This is a consequence of Lemma \ref{bahouri-prop-2.10}. In particular, the implication $|j-j'|\geq 2\Rightarrow \dot{\Delta}_{j}\dot{\Delta}_{j'}u=0$ follows from the implication $|j-j'|\geq2\Rightarrow\supp\varphi(2^{-j}\cdot)\cap\supp\varphi(2^{-j'}\cdot)=\emptyset$, while the implication $|j-j'|\geq5\Rightarrow\dot{\Delta}_{j}\left(\dot{S}_{j'-1}u\,\dot{\Delta}_{j'}v\right)=0$ follows\footnote{Note that $\dot{S}_{j'-1}u$ is spectrally supported on $2^{j'}B(0,2/3)$, while $\dot{\Delta}_{j'}v$ is spectrally supported on $2^{j'}\mathcal{C}$, so by properties of convolution we have that $\dot{S}_{j'-1}u\,\dot{\Delta}_{j'}v$ is spectrally supported on $2^{j'}\widetilde{\mathcal{C}}$.} from the implication $|j-j'|\geq5\Rightarrow2^{j'}\widetilde{\mathcal{C}}\cap2^{j}\mathcal{C}=\emptyset$. \end{proof} We recall the following useful properties: \begin{lemma}\label{useful-inequalities} (\cite{bahouri2011}, Lemmas 2.1-2.2, Remark 2.11). Let $\rho$ be a smooth function on $\mathbb{R}^{n}\setminus\{0\}$ which is positive homogeneous of degree $\lambda\in\mathbb{R}$. 
Then for all $j\in\mathbb{Z}$, $u\in\mathcal{S}'$, $t\in(0,\infty)$ and $1\leq p\leq q\leq\infty$ we have \begin{equation}\label{useful-inequality-1} {\|\dot{S}_{j}u\|}_{L^{p}}\vee{\|\dot{\Delta}_{j}u\|}_{L^{p}} \lesssim_{\varphi} {\|u\|}_{L^{p}}, \end{equation} \begin{equation} {\|\rho(D)\dot{\Delta}_{j}u\|}_{L^{q}} \lesssim_{\rho} 2^{j\lambda}2^{j\left(\frac{n}{p}-\frac{n}{q}\right)}{\|\dot{\Delta}_{j}u\|}_{L^{p}}, \end{equation} \begin{equation} {\|\dot{\Delta}_{j}u\|}_{L^{p}} \lesssim_{n} 2^{-j}{\|\nabla\dot{\Delta}_{j}u\|}_{L^{p}}. \end{equation} \end{lemma} One can give meaning to the decomposition $u=\sum_{j\in\mathbb{Z}}\dot{\Delta}_{j}u$ in view of the following lemma: \begin{lemma}\label{littlewood-paley-decomposition} (\cite{bahouri2011}, Propositions 2.12-2.14) If $u\in\mathcal{S}'$, then $\dot{S}_{j}u\overset{j\rightarrow\infty}{\rightarrow}u$ in $\mathcal{S}'$. Define\footnote{For example, if $\mathcal{F}u$ is locally integrable near $\xi=0$, then $u\in\mathcal{S}_{h}'$. We remark that the condition $u\in\mathcal{S}_{h}'$ is independent of our choice of $\varphi$.} \begin{equation*} \mathcal{S}_{h}' := \left\{u\in\mathcal{S}'\text{ }:\text{ }{\|\dot{S}_{j}u\|}_{L^{\infty}}\overset{j\rightarrow-\infty}{\rightarrow}0\right\}, \end{equation*} so if $u\in\mathcal{S}_{h}'$ then $u=\sum_{j\in\mathbb{Z}}\dot{\Delta}_{j}u$ in $\mathcal{S}'$. 
\end{lemma} For $s\in\mathbb{R}$ and $p,q\in[1,\infty]$, we define the Besov seminorm\footnote{Choosing a different function $\varphi$ yields an equivalent seminorm \cite[Remark 2.17]{bahouri2011}.} \begin{equation*} {\|u\|}_{\dot{B}_{p,q}^{s}} := {\left\|j\mapsto2^{js}{\|\dot{\Delta}_{j}u\|}_{L^{p}}\right\|}_{l^{q}} \quad \text{for }u\in\mathcal{S}' \end{equation*} and the Besov space \begin{equation*} \dot{B}_{p,q}^{s} := \left\{u\in\mathcal{S}_{h}'\text{ : }{\|u\|}_{\dot{B}_{p,q}^{s}}<\infty\right\}, \end{equation*} so that $\left(\dot{B}_{p,q}^{s},{\|\cdot\|}_{\dot{B}_{p,q}^{s}}\right)$ is a normed space \cite[Proposition 2.16]{bahouri2011}. Lemma \ref{useful-inequalities} and Lemma \ref{littlewood-paley-decomposition} yield the inequalities \begin{equation}\label{besov-embedding} {\|u\|}_{\dot{B}_{p_{2},q_{2}}^{\frac{n}{p_{2}}+\epsilon}} \lesssim_{n} {\|u\|}_{\dot{B}_{p_{1},q_{1}}^{\frac{n}{p_{1}}+\epsilon}} \quad \text{for }p_{1}\leq p_{2},\,q_{1}\leq\,q_{2},\,\epsilon\in\mathbb{R},\,u\in\mathcal{S}', \end{equation} \begin{equation}\label{rough-lp} {\|u\|}_{\dot{B}_{p,\infty}^{0}} \lesssim_{\varphi} {\|u\|}_{L^{p}} \quad \text{for }u\in\mathcal{S}' \end{equation} and \begin{equation}\label{smooth-lp} {\|u\|}_{L^{p}} \leq {\|u\|}_{\dot{B}_{p,1}^{0}} \quad \text{for }u\in\mathcal{S}_{h}'. 
\end{equation} We also have the interpolation inequalities \begin{equation}\label{interpolation-holder} {\|u\|}_{\dot{B}_{\frac{p_{1}p_{2}}{\lambda p_{2}+(1-\lambda)p_{1}},\frac{q_{1}q_{2}}{\lambda q_{2}+(1-\lambda)q_{1}}}^{\lambda s_{1}+(1-\lambda)s_{2}}} \leq {\|u\|}_{\dot{B}_{p_{1},q_{1}}^{s_{1}}}^{\lambda}{\|u\|}_{\dot{B}_{p_{2},q_{2}}^{s_{2}}}^{1-\lambda} \quad \text{for }\lambda\in(0,1),\,u\in\mathcal{S}', \end{equation} \begin{equation}\label{interpolation-geometric} {\|u\|}_{\dot{B}_{p,1}^{\lambda s_{1}+(1-\lambda)s_{2}}} \lesssim \frac{1}{\lambda(1-\lambda)(s_{2}-s_{1})}{\|u\|}_{\dot{B}_{p,\infty}^{s_{1}}}^{\lambda}{\|u\|}_{\dot{B}_{p,\infty}^{s_{2}}}^{1-\lambda} \quad \text{for }\lambda\in(0,1),\,s_{1}<s_{2},\,u\in\mathcal{S}', \end{equation} where \eqref{interpolation-holder} comes from H\"{o}lder's inequality, while \eqref{interpolation-geometric} comes from writing $\sum_{j\in\mathbb{Z}}=\sum_{j\leq j_{0}}+\sum_{j>j_{0}}$ with $2^{j_{0}(s_{2}-s_{1})}{\|u\|}_{\dot{B}_{p,\infty}^{s_{1}}}\approx{\|u\|}_{\dot{B}_{p,\infty}^{s_{2}}}$ and applying geometric series. We now recall the following convergence lemma: \begin{lemma}\label{convergence-lemma} (\cite{bahouri2011}, Lemma 2.23). Let $\mathcal{C}'$ be an annulus and ${(u_{j})}_{j\in\mathbb{Z}}$ be a sequence of functions such that $\supp\mathcal{F}u_{j}\subseteq2^{j}\mathcal{C}'$ and ${\left\|j\mapsto2^{js}{\|u_{j}\|}_{L^{p}}\right\|}_{l^{q}}<\infty$. If the series $\sum_{j\in\mathbb{Z}}u_{j}$ converges in $\mathcal{S}'$ to some $u\in\mathcal{S}'$, then \begin{equation*}\label{convergence-inequality} {\|u\|}_{\dot{B}_{p,q}^{s}} \lesssim_{\varphi} C_{\mathcal{C}'}^{1+|s|}{\left\|j\mapsto2^{js}{\|u_{j}\|}_{L^{p}}\right\|}_{l^{q}}. \end{equation*} Note: If $(s,p,q)$ satisfy the condition \begin{equation}\label{negative-scaling} s<\frac{n}{p}, \quad \text{or} \quad s=\frac{n}{p}\text{ and }q=1, \end{equation} then the hypothesis of convergence is satisfied, and $u\in\mathcal{S}_{h}'$. 
\end{lemma} A useful consequence of Lemma \ref{convergence-lemma} is that if $u\in\mathcal{S}'$ satisfies ${\|u\|}_{\dot{B}_{p,q}^{s}}<\infty$ for some $(s,p,q)$ satisfying \eqref{negative-scaling}, then $u\in\mathcal{S}_{h}'$. If $u\in\dot{B}_{p_{1},1}^{0}$ and $v\in\dot{B}_{p_{2},1}^{0}$ with $\frac{1}{p_{1}}+\frac{1}{p_{2}}\leq1$, then the series $uv=\sum_{(j,j')\in\mathbb{Z}^{2}}\dot{\Delta}_{j}u\,\dot{\Delta}_{j'}v$ converges absolutely in $L^{\frac{p_{1}p_{2}}{p_{1}+p_{2}}}$, which justifies the Bony decomposition \begin{equation*} uv = \dot{T}_{u}v+\dot{T}_{v}u+\dot{R}(u,v), \end{equation*} \begin{equation*} \dot{T}_{u}v = \sum_{j\in\mathbb{Z}}\dot{S}_{j-1}u\,\dot{\Delta}_{j}v, \end{equation*} \begin{equation*} \dot{R}(u,v) = \sum_{j\in\mathbb{Z}}\sum_{|\nu|\leq1}\dot{\Delta}_{j}u\,\dot{\Delta}_{j-\nu}v. \end{equation*} We will require the following estimates for the operators $\dot{T}$ and $\dot{R}$: \begin{lemma}\label{paraproduct} (\cite{bahouri2011}, Theorem 2.47). Suppose that $s=s_{1}+s_{2}$, $p=\frac{p_{1}p_{2}}{p_{1}+p_{2}}$ and $q=\frac{q_{1}q_{2}}{q_{1}+q_{2}}$. Let $u,v\in\mathcal{S}'$, and assume that the series $\sum_{j\in\mathbb{Z}}\dot{S}_{j-1}u\,\dot{\Delta}_{j}v$ converges in $\mathcal{S}'$ to some $\dot{T}_{u}v\in\mathcal{S}'$. Then \begin{equation}\label{bony-estimate-1} {\|\dot{T}_{u}v\|}_{\dot{B}_{p,q}^{s}} \lesssim_{\varphi} C_{n}^{1+|s|}{\|u\|}_{L^{p_{1}}}{\|v\|}_{\dot{B}_{p_{2},q}^{s}}, \end{equation} \begin{equation}\label{bony-estimate-2} {\|\dot{T}_{u}v\|}_{\dot{B}_{p,q}^{s}} \lesssim_{\varphi} \frac{C_{n}^{1+|s|}}{-s_{1}}{\|u\|}_{\dot{B}_{p_{1},q_{1}}^{s_{1}}}{\|v\|}_{\dot{B}_{p_{2},q_{2}}^{s_{2}}} \quad \text{if } s_{1}<0. \end{equation} Note: If $(s,p,q)$ satisfy \eqref{negative-scaling}, and the right hand side of either \eqref{bony-estimate-1} or \eqref{bony-estimate-2} is finite, then the hypothesis of convergence is satisfied, and $\dot{T}_{u}v\in\mathcal{S}_{h}'$. 
\end{lemma} \begin{lemma}\label{remainder} (\cite{bahouri2011}, Theorem 2.52). Suppose that $s=s_{1}+s_{2}$, $p=\frac{p_{1}p_{2}}{p_{1}+p_{2}}$ and $q=\frac{q_{1}q_{2}}{q_{1}+q_{2}}$. Let $u,v\in\mathcal{S}'$, and assume that the series $\sum_{j\in\mathbb{Z}}\sum_{|\nu|\leq1}\dot{\Delta}_{j}u\,\dot{\Delta}_{j-\nu}v$ converges in $\mathcal{S}'$ to some $\dot{R}(u,v)\in\mathcal{S}'$. Then \begin{equation}\label{bony-estimate-3} {\|\dot{R}(u,v)\|}_{\dot{B}_{p,q}^{s}} \lesssim_{\varphi} \frac{C_{n}^{1+|s|}}{s}{\|u\|}_{\dot{B}_{p_{1},q_{1}}^{s_{1}}}{\|v\|}_{\dot{B}_{p_{2},q_{2}}^{s_{2}}} \quad \text{if } s>0, \end{equation} \begin{equation}\label{bony-estimate-4} {\|\dot{R}(u,v)\|}_{\dot{B}_{p,\infty}^{s}} \lesssim_{\varphi} C_{n}^{1+|s|}{\|u\|}_{\dot{B}_{p_{1},q_{1}}^{s_{1}}}{\|v\|}_{\dot{B}_{p_{2},q_{2}}^{s_{2}}} \quad \text{if }q=1\text{ and } s\geq0. \end{equation} Note: If $(s,p,q)$ satisfy \eqref{negative-scaling} and the right hand side of \eqref{bony-estimate-3} is finite, or if $(s,p,\infty)$ satisfy \eqref{negative-scaling} and the right hand side of \eqref{bony-estimate-4} is finite, then the hypothesis of convergence is satisfied, and $\dot{R}(u,v)\in\mathcal{S}_{h}'$. \end{lemma} \section{Commutator estimates}\label{commutator-estimates} In this section, we will adapt the proof of \cite[Lemma 2.100]{bahouri2011} to prove the commutator estimates in the following proposition, which will be crucial to the proof of Theorem \ref{main-theorem}: \begin{proposition}\label{commutator-prop} For $v,f\in\cap_{r\in[0,\infty)}\dot{H}^{r}$ and $j\in\mathbb{Z}$, define\footnote{We apply the summation convention to the index $k$.} \begin{equation*} R_{j}=[v\cdot\nabla,\dot{\Delta}_{j}]f=[v_{k},\dot{\Delta}_{j}]\nabla_{k}f \end{equation*} where $[\,\cdot\,,\,\cdot\,]$ denotes the commutator $[A,B]=AB-BA$, and suppose that $s=s_{1}+s_{2}$, $p=\frac{p_{1}p_{2}}{p_{1}+p_{2}}$ and $q=\frac{q_{1}q_{2}}{q_{1}+q_{2}}$ ($s_j\in \mathbb{R}$ and $p_j,q_j \in [1,\infty]$). 
Then we have the decomposition $R_{j}=\sum_{i=1}^{6}R_{j}^{i}$ with\footnote{Note that $R_{j}^{6}=0$ whenever $\nabla\cdot v=0$.} \begin{equation*} R_{j}^{1} = [\dot{T}_{v_{k}},\dot{\Delta}_{j}]\nabla_{k}f, \quad R_{j}^{2} = \dot{T}_{\nabla_{k}\dot{\Delta}_{j}f}v_{k}, \quad R_{j}^{3} = -\dot{\Delta}_{j}\dot{T}_{\nabla_{k}f}v_{k}, \end{equation*} \begin{equation*} R_{j}^{4} = \dot{R}(v_{k},\nabla_{k}\dot{\Delta}_{j}f), \quad R_{j}^{5} = -\nabla_{k}\dot{\Delta}_{j}\dot{R}(v_{k},f), \quad R_{j}^{6} = \dot{\Delta}_{j}\dot{R}(\nabla_{k}v_{k},f) \end{equation*} which satisfy the estimates \begin{equation*} \begin{aligned} {\left\|j\mapsto2^{js}{\|R_{j}^{1}\|}_{L^{p}}\right\|}_{l^{q}} &\lesssim {\|\nabla v\|}_{L^{p_{1}}}{\|f\|}_{\dot{B}_{p_{2},q}^{s}}, \\ {\left\|j\mapsto2^{js}{\|R_{j}^{1}\|}_{L^{p}}\right\|}_{l^{q}} &\lesssim {\|\nabla v\|}_{\dot{B}_{p_{1},q_{1}}^{s_{1}}}{\|f\|}_{\dot{B}_{p_{2},q_{2}}^{s_{2}}} \quad \text{if }s_{1}<0, \\ {\left\|j\mapsto2^{js}{\|R_{j}^{2}\|}_{L^{p}}\right\|}_{l^{q}} &\lesssim {\|\nabla v\|}_{\dot{B}_{p_{1},q_{1}}^{s_{1}}}{\|f\|}_{\dot{B}_{p_{2},q_{2}}^{s_{2}}} \quad \text{if }s_{1}>-1, \\ {\left\|j\mapsto2^{js}{\|R_{j}^{3}\|}_{L^{p}}\right\|}_{l^{q}} &\lesssim {\|\nabla v\|}_{\dot{B}_{p_{1},q_{1}}^{s_{1}}}{\|f\|}_{\dot{B}_{p_{2},q_{2}}^{s_{2}}} \quad \text{if }s_{2}<1, \\ {\left\|j\mapsto2^{js}{\|R_{j}^{4}\|}_{L^{p}}\right\|}_{l^{q}} &\lesssim {\|\nabla v\|}_{\dot{B}_{p_{1},q_{1}}^{s_{1}}}{\|f\|}_{\dot{B}_{p_{2},q_{2}}^{s_{2}}}, \\ {\left\|j\mapsto2^{js}{\|R_{j}^{5}\|}_{L^{p}}\right\|}_{l^{q}} &\lesssim {\|\nabla v\|}_{\dot{B}_{p_{1},q_{1}}^{s_{1}}}{\|f\|}_{\dot{B}_{p_{2},q_{2}}^{s_{2}}} \quad \text{if }s>-1, \\ {\left\|j\mapsto2^{js}{\|R_{j}^{6}\|}_{L^{p}}\right\|}_{l^{q}} &\lesssim {\|\nabla v\|}_{\dot{B}_{p_{1},q_{1}}^{s_{1}}}{\|f\|}_{\dot{B}_{p_{2},q_{2}}^{s_{2}}} \quad \text{if }s>0, \end{aligned} \end{equation*} where the implied constants depend on $\varphi,s_{1},s_{2},p_{1},p_{2}$. 
\end{proposition} \begin{remark} Write $A_{1},A_{2},A_{3},A_{5},A_{6}$ to denote the constraints \begin{equation*} A_{1}\text{ : }s_{1}\leq0, \quad A_{2}\text{ : }s_{1}\geq-1, \quad A_{3}\text{ : }s_{2}\leq1, \quad A_{5}\text{ : }s\geq-1, \quad A_{6}\text{ : }s\geq0. \end{equation*} For $i=1,2,3,5,6$, a simple modification of our arguments yields the estimates \begin{equation*} {\left\|j\mapsto2^{js}{\|R_{j}^{i}\|}_{L^{p}}\right\|}_{l^{\infty}} \lesssim {\|\nabla v\|}_{\dot{B}_{p_{1},q_{1}}^{s_{1}}}{\|f\|}_{\dot{B}_{p_{2},q_{2}}^{s_{2}}} \quad \text{if }q=1\text{ and }A_{i}\text{ holds}, \end{equation*} but we will not need these estimates when proving Theorem \ref{main-theorem}. \end{remark} As in \cite[Lemma 2.100]{bahouri2011}, to prove Proposition \ref{commutator-prop} we will rely on the following lemma: \begin{lemma}\label{commutator-lemma} (\cite{bahouri2011}, Lemma 2.97). Let $\theta\in C^{1}(\mathbb{R}^{n})$ be such that $\int_{\mathbb{R}^{n}}(1+|\xi|)|\mathcal{F}\theta(\xi)|\,\mathrm{d}\xi<\infty.$ Then for any $a\in C^{1}(\mathbb{R}^{n})$ with $\nabla a\in L^{p}(\mathbb{R}^{n})$, any $b\in L^{q}(\mathbb{R}^{n})$, and any $\lambda\in(0,\infty)$, we have \begin{equation*} {\|[\theta(\lambda^{-1}D),a]b\|}_{L^{\frac{pq}{p+q}}(\mathbb{R}^{n})} \lesssim_{\theta} \lambda^{-1}{\|\nabla a\|}_{L^{p}(\mathbb{R}^{n})}{\|b\|}_{L^{q}(\mathbb{R}^{n})}. \end{equation*} \end{lemma} \begin{remark}\label{commutator-remark} If we take $\theta=\varphi$ and $\lambda=2^{j}$, then Lemma \ref{commutator-lemma} yields the estimate \begin{equation*} {\|[\dot{\Delta}_{j},a]b\|}_{L^{\frac{pq}{p+q}}(\mathbb{R}^{n})} \lesssim_{\varphi} 2^{-j}{\|\nabla a\|}_{L^{p}(\mathbb{R}^{n})}{\|b\|}_{L^{q}(\mathbb{R}^{n})}. 
\end{equation*} \end{remark} \begin{proof}[Proof of Proposition \ref{commutator-prop}] The decomposition $R_{j}=\sum_{i=1}^{6}R_{j}^{i}$ comes from applying the Bony decomposition; the very strong regularity assumption $v,f\in\cap_{r\in[0,\infty)}\dot{H}^{r}$ is more than sufficient to address any convergence issues that may arise. In the following computations, we write ${(c_{j})}_{j\in\mathbb{Z}}$ to denote a sequence satisfying ${\|(c_{j})\|}_{l^{q}}\leq1$, and the constants implied by the notation $\lesssim$ depend on $\varphi,s_{1},s_{2},p_{1},p_{2}$. {\bf Bounds for $\boldsymbol{2^{js}{\|R_{j}^{1}\|}_{L^{p}}}$.} By Lemma \ref{truncation-lemma} we have \begin{equation*} R_{j}^{1} = \sum_{|j-j'|\leq4}[\dot{S}_{j'-1}v_{k},\dot{\Delta}_{j}]\nabla_{k}\dot{\Delta}_{j'}f, \end{equation*} so by Remark \ref{commutator-remark} and Lemma \ref{useful-inequalities} we have \begin{equation}\label{R1-estimate} 2^{js}{\|R_{j}^{1}\|}_{L^{p}} \lesssim \sum_{|j-j'|\leq4}2^{js}2^{j'-j}{\|\nabla\dot{S}_{j'-1}v\|}_{L^{p_{1}}}{\|\dot{\Delta}_{j'}f\|}_{L^{p_{2}}}. \end{equation} By \eqref{useful-inequality-1}, we deduce that \begin{equation*} 2^{js}{\|R_{j}^{1}\|}_{L^{p}} \lesssim \sum_{|j-j'|\leq4}2^{js}2^{j'-j}{\|\nabla v\|}_{L^{p_{1}}}{\|\dot{\Delta}_{j'}f\|}_{L^{p_{2}}} \lesssim c_{j}{\|\nabla v\|}_{L^{p_{1}}}{\|f\|}_{\dot{B}_{p_{2},q}^{s}}. 
\end{equation*} On the other hand, if $s_{1}<0$ then \eqref{R1-estimate} implies that \begin{equation*} \begin{aligned} 2^{js}{\|R_{j}^{1}\|}_{L^{p}} &\lesssim \sum_{\substack{|j-j'|\leq4 \\ j''\leq j'-2}}2^{js}2^{j'-j}{\|\nabla\dot{\Delta}_{j''}v\|}_{L^{p_{1}}}{\|\dot{\Delta}_{j'}f\|}_{L^{p_{2}}} \\ &\lesssim \sum_{\substack{|j-j'|\leq4 \\ j''\leq j'-2}} 2^{(j-j'')s_{1}}2^{j''s_{1}}{\|\nabla\dot{\Delta}_{j''}v\|}_{L^{p_{1}}}2^{j's_{2}}{\|\dot{\Delta}_{j'}f\|}_{L^{p_{2}}} \\ &\lesssim c_{j}{\|\nabla v\|}_{\dot{B}_{p_{1},q_{1}}^{s_{1}}}{\|f\|}_{\dot{B}_{p_{2},q_{2}}^{s_{2}}}, \end{aligned} \end{equation*} where we used the inequality ${\|(\alpha*\beta)\gamma\|}_{l^{q}}\leq{\|\alpha*\beta\|}_{l^{q_{1}}}{\|\gamma\|}_{l^{q_{2}}}\leq{\|\alpha\|}_{l^{1}}{\|\beta\|}_{l^{q_{1}}}{\|\gamma\|}_{l^{q_{2}}}$ in the last line. {\bf Bounds for $\boldsymbol{2^{js}{\|R_{j}^{2}\|}_{L^{p}}}$.} By Lemma \ref{truncation-lemma} we have \begin{equation*} R_{j}^{2} = \sum_{j'\geq j+1}\dot{S}_{j'-1}\nabla_{k}\dot{\Delta}_{j}f\,\dot{\Delta}_{j'}v_{k}. \end{equation*} so by Lemma \ref{useful-inequalities} we have \begin{equation*} 2^{js}{\|R_{j}^{2}\|}_{L^{p}} \lesssim \sum_{j'\geq j+1}2^{js}2^{j-j'}{\|\nabla\dot{\Delta}_{j'}v\|}_{L^{p_{1}}}{\|\dot{\Delta}_{j}f\|}_{L^{p_{2}}}. \end{equation*} If $s_{1}>-1$, then we deduce that \begin{equation*} \begin{aligned} 2^{js}{\|R_{j}^{2}\|}_{L^{p}} &\lesssim \sum_{j'\geq j+1}2^{(j-j')(s_{1}+1)}2^{j's_{1}}{\|\nabla\dot{\Delta}_{j'}v\|}_{L^{p_{1}}}2^{js_{2}}{\|\dot{\Delta}_{j}f\|}_{L^{p_{2}}} \\ &\lesssim c_{j}{\|\nabla v\|}_{\dot{B}_{p_{1},q_{1}}^{s_{1}}}{\|f\|}_{\dot{B}_{p_{2},q_{2}}^{s_{2}}}, \end{aligned} \end{equation*} where we used the inequality ${\|(\alpha*\beta)\gamma\|}_{l^{q}}\leq{\|\alpha*\beta\|}_{l^{q_{1}}}{\|\gamma\|}_{l^{q_{2}}}\leq{\|\alpha\|}_{l^{1}}{\|\beta\|}_{l^{q_{1}}}{\|\gamma\|}_{l^{q_{2}}}$ in the last line. 
{\bf Bounds for $\boldsymbol{2^{js}{\|R_{j}^{3}\|}_{L^{p}}}$.} By Lemma \ref{truncation-lemma} we have \begin{equation*} \begin{aligned} R_{j}^{3} &= -\sum_{|j-j'|\leq4}\dot{\Delta}_{j}\left(\dot{S}_{j'-1}\nabla_{k}f\,\dot{\Delta}_{j'}v_{k}\right) \\ &= -\sum_{\substack{|j-j'|\leq4 \\ j''\leq j'-2}}\dot{\Delta}_{j}\left(\dot{\Delta}_{j''}\nabla_{k}f\,\dot{\Delta}_{j'}v_{k}\right), \end{aligned} \end{equation*} so by Lemma \ref{useful-inequalities} we have \begin{equation*} 2^{js}{\|R_{j}^{3}\|}_{L^{p}} \lesssim \sum_{\substack{|j-j'|\leq4 \\ j''\leq j'-2}}2^{js}2^{j''-j'}{\|\nabla\dot{\Delta}_{j'}v\|}_{L^{p_{1}}}{\|\dot{\Delta}_{j''}f\|}_{L^{p_{2}}}. \end{equation*} If $s_{2}<1$, then we deduce that \begin{equation*} \begin{aligned} 2^{js}{\|R_{j}^{3}\|}_{L^{p}} &\lesssim \sum_{\substack{|j-j'|\leq4 \\ j''\leq j'-2}}2^{j's_{1}}{\|\nabla\dot{\Delta}_{j'}v\|}_{L^{p_{1}}}2^{(j'-j'')(s_{2}-1)}2^{j''s_{2}}{\|\dot{\Delta}_{j''}f\|}_{L^{p_{2}}} \\ &\lesssim c_{j}{\|\nabla v\|}_{\dot{B}_{p_{1},q_{1}}^{s_{1}}}{\|f\|}_{\dot{B}_{p_{2},q_{2}}^{s_{2}}}, \end{aligned} \end{equation*} where we used the inequality ${\|\alpha(\beta*\gamma)\|}_{l^{q}}\leq{\|\alpha\|}_{l^{q_{1}}}{\|\beta*\gamma\|}_{l^{q_{2}}}\leq{\|\alpha\|}_{l^{q_{1}}}{\|\beta\|}_{l^{1}}{\|\gamma\|}_{l^{q_{2}}}$ in the last line. {\bf Bounds for $\boldsymbol{2^{js}{\|R_{j}^{4}\|}_{L^{p}}}$.} Defining $\widetilde{\Delta}_{j'}=\sum_{|\nu|\leq1}\dot{\Delta}_{j'-\nu}$, by Lemma \ref{truncation-lemma} we have \begin{equation*} R_{j}^{4} = \sum_{|j-j'|\leq2}\dot{\Delta}_{j'}v_{k}\,\nabla_{k}\dot{\Delta}_{j}\widetilde{\Delta}_{j'}f, \end{equation*} so by Lemma \ref{useful-inequalities} and the inequality ${\|\alpha\beta\|}_{l^{q}}\leq{\|\alpha\|}_{l^{q_{1}}}{\|\beta\|}_{l^{q_{2}}}$ we have \begin{equation*} 2^{js}{\|R_{j}^{4}\|}_{L^{p}} \lesssim c_{j}{\|\nabla v\|}_{\dot{B}_{p_{1},q_{1}}^{s_{1}}}{\|f\|}_{\dot{B}_{p_{2},q_{2}}^{s_{2}}}. 
\end{equation*} {\bf Bounds for $\boldsymbol{2^{js}{\|R_{j}^{5}\|}_{L^{p}}}$ and $\boldsymbol{2^{js}{\|R_{j}^{6}\|}_{L^{p}}}$.} By Lemma \ref{useful-inequalities} and \eqref{bony-estimate-3}, we have \begin{equation*} \begin{aligned} {\left\|j\mapsto2^{js}{\|R_{j}^{5}\|}_{L^{p}}\right\|}_{l^{q}} &\lesssim {\|\nabla v\|}_{\dot{B}_{p_{1},q_{1}}^{s_{1}}}{\|f\|}_{\dot{B}_{p_{2},q_{2}}^{s_{2}}} \quad \text{if }s>-1, \\ {\left\|j\mapsto2^{js}{\|R_{j}^{6}\|}_{L^{p}}\right\|}_{l^{q}} &\lesssim {\|\nabla v\|}_{\dot{B}_{p_{1},q_{1}}^{s_{1}}}{\|f\|}_{\dot{B}_{p_{2},q_{2}}^{s_{2}}} \quad \text{if }s>0. \end{aligned} \end{equation*} \end{proof} \section{Proof of blowup rates}\label{navier-stokes-blowup-rates} We now give the proof. \begin{proof}[Proof of Theorem \ref{main-theorem}] Note first that the regularity assumptions on $u$ are strong enough to justify the calculations used in this proof. \\\\ Fix $\epsilon\in[1,2]$ and $p,q\in[1,\frac{n}{2-\epsilon})$. By virtue of the inequality \eqref{besov-embedding}, it suffices to prove the estimate \begin{equation}\label{u-blowup} {\|u(t)\|}_{\dot{B}_{r,\widetilde{r}}^{s_{r}+\epsilon}} \gtrsim_{\varphi,\epsilon,r} {(T-t)}^{-\epsilon/2} \end{equation} for any fixed $r\in[p\vee q\vee2,\frac{n}{2-\epsilon})$ (e.g., $r:=p\vee q\vee2$), where $\widetilde{r}=r$ in the case $\epsilon<2$, and $\widetilde{r}=1$ in the case $\epsilon=2$.
By the embeddings $L^{2}\hookrightarrow\dot{B}_{\infty,\infty}^{-n/2}$ and $\dot{B}_{r,\widetilde{r}}^{s_{r}+\epsilon}\hookrightarrow\dot{B}_{\infty,\infty}^{-1+\epsilon}$, the energy estimate $\limsup_{t\nearrow T}{\|u(t)\|}_{L^{2}}<\infty$, the assumption $\lim_{t\nearrow T}{\|u(t)\|}_{\dot{B}_{\infty,\infty}^{-1/2}}=\infty$, and the interpolation inequality \begin{equation}\label{interpbesonehalf} {\|u(t)\|}_{\dot{B}_{\infty,\infty}^{-1/2}} \leq {\|u(t)\|}_{\dot{B}_{\infty,\infty}^{-n/2}}^{\lambda}{\|u(t)\|}_{\dot{B}_{\infty,\infty}^{-1+\epsilon}}^{1-\lambda} \quad \text{for }\lambda=\frac{\epsilon-\frac{1}{2}}{\epsilon-1+\frac{n}{2}}, \end{equation} we obtain the qualitative blowup estimate \begin{equation}\label{qualitative-blowup} \lim_{t\nearrow T}{\|u(t)\|}_{\dot{B}_{r,\widetilde{r}}^{s_{r}+\epsilon}} = \infty. \end{equation} We will prove \eqref{u-blowup} by combining \eqref{qualitative-blowup} with the following ODE lemma. \begin{lemma}\label{ode-lemma} (\cite{mccormick2016}, Lemma 2.1). If $\gamma,c>0$, $\partial_{t}X\leq cX^{1+\gamma}$, and $\lim_{t\nearrow T}X(t)=\infty$, then \begin{equation*} X(t) \geq {(\gamma c(T-t))}^{-1/\gamma} \quad \text{for all }t\in(0,T). \end{equation*} \end{lemma} Our goal is to derive a suitable differential inequality that allows us to apply Lemma \ref{ode-lemma}. We will achieve this by considering the antisymmetric tensor\footnote{In the case $n=3$, this is related to the vorticity vector $\overrightarrow{\omega}:=\nabla\times u$ by $\overrightarrow{\omega}={(\omega_{23},\omega_{31},\omega_{12})}^{T}$. One could view $\omega$ and $\overrightarrow{\omega}$ as being different ways of representing the exterior derivative of the 1-form $\sum_{i=1}^{n}u_{i}\,\mathrm{d}x^{i}$.} \begin{equation}\label{w-from-u} \omega_{ij} := \nabla_{i}u_{j}-\nabla_{j}u_{i}. 
\end{equation} Since $u$ is divergence-free, we can express $u$ in terms of $\omega$ by the formula \begin{equation}\label{u-from-w} u_{i} = {(-\Delta)}^{-1}\nabla_{j}\omega_{ij}. \end{equation} By Lemma \ref{useful-inequalities}, we deduce that \eqref{u-blowup} is equivalent to \begin{equation}\label{w-blowup} {\|\omega(t)\|}_{\dot{B}_{r,\widetilde{r}}^{s_{r}+\epsilon-1}} \gtrsim_{\varphi,\epsilon,r} {(T-t)}^{-\epsilon/2}, \end{equation} and that \eqref{qualitative-blowup} is equivalent to \begin{equation}\label{w-qualitative-blowup} \lim_{t\nearrow T}{\|\omega(t)\|}_{\dot{B}_{r,\widetilde{r}}^{s_{r}+\epsilon-1}} = \infty. \end{equation} Applying the operator $X\mapsto\nabla_{i}X_{j}-\nabla_{j}X_{i}$ to the Navier-Stokes equations \eqref{intro-navier-stokes}, we see that $\omega$ satisfies \begin{equation}\label{w-equations} \partial_{t}\omega_{ij}-\Delta\omega_{ij}+(u\cdot\nabla)\omega_{ij}+\omega_{ik}\nabla_{k}u_{j}=\omega_{jk}\nabla_{k}u_{i}. \end{equation} Applying $\dot{\Delta}_{J}$ to the equation \eqref{w-equations}, multiplying the result by ${|\dot{\Delta}_{J}\omega|}^{r-2}\dot{\Delta}_{J}\omega_{ij}$, summing over $i,j$ and integrating over $\mathbb{R}^{n}$, we obtain \begin{equation}\label{w-energy} \begin{aligned} &\frac{1}{r}\frac{\partial}{\partial t}\left({\|\dot{\Delta}_{J}\omega\|}_{L^{r}}^{r}\right) - \left\langle\Delta\dot{\Delta}_{J}\omega,{|\dot{\Delta}_{J}\omega|}^{r-2}\dot{\Delta}_{J}\omega\right\rangle \\ &\quad = -\left\langle\dot{\Delta}_{J}((u\cdot\nabla)\omega),{|\dot{\Delta}_{J}\omega|}^{r-2}\dot{\Delta}_{J}\omega\right\rangle - \left\langle\dot{\Delta}_{J}(\omega_{ik}\nabla_{k}u_{j}-\omega_{jk}\nabla_{k}u_{i}),{|\dot{\Delta}_{J}\omega|}^{r-2}\dot{\Delta}_{J}\omega_{ij}\right\rangle. 
\end{aligned} \end{equation} By the identities $\nabla\cdot u=0$, $\nabla_{k}\left({|\dot{\Delta}_{J}\omega|}^{r}\right)=r(\nabla_{k}\dot{\Delta}_{J}\omega_{ij}){|\dot{\Delta}_{J}\omega|}^{r-2}\dot{\Delta}_{J}\omega_{ij}$ and $\omega_{ij}=-\omega_{ji}$, we see that the right hand side of \eqref{w-energy} is equal to \begin{equation*} \left\langle[u\cdot\nabla,\dot{\Delta}_{J}]\omega,{|\dot{\Delta}_{J}\omega|}^{r-2}\dot{\Delta}_{J}\omega\right\rangle - 2\left\langle\dot{\Delta}_{J}(\omega\cdot\nabla u),{|\dot{\Delta}_{J}\omega|}^{r-2}\dot{\Delta}_{J}\omega\right\rangle, \end{equation*} where we define ${(\omega\cdot\nabla u)}_{ij}:=\omega_{ik}\nabla_{k}u_{j}$. Writing $\Omega_{J}:=[u\cdot\nabla,\dot{\Delta}_{J}]\omega-2\dot{\Delta}_{J}(\omega\cdot\nabla u)$, and noting the inequality\footnote{Valid for $n\geq3$ and $r\in[2,\infty)$, proved in \cite[Lemmas 1-2]{robinson2014}.} \begin{equation*}\label{lhs-bound} -\left\langle\Delta v,{|v|}^{r-2}v\right\rangle \gtrsim_{n,r} {\|v\|}_{L^{\frac{rn}{n-2}}}^{r}, \end{equation*} we deduce that \begin{equation}\label{w-energy-2} \frac{\partial}{\partial t}\left({\|\dot{\Delta}_{J}\omega\|}_{L^{r}}^{r}\right) + {\|\dot{\Delta}_{J}\omega\|}_{L^{\frac{rn}{n-2}}}^{r} \lesssim_{n,r} \left\langle\Omega_{J},{|\dot{\Delta}_{J}\omega|}^{r-2}\dot{\Delta}_{J}\omega\right\rangle. \end{equation} {\bf The case $\boldsymbol{\epsilon<2}$.} Suppose first that $\epsilon \in [1,2)$. From \eqref{w-energy-2} we have \begin{equation}\label{w-energy-3} \frac{\partial}{\partial t}\left({\|\omega\|}_{\dot{B}_{r,r}^{s_{r}+\epsilon-1}}^{r}\right) + {\|\omega\|}_{\dot{B}_{\frac{rn}{n-2},r}^{s_{r}+\epsilon-1}}^{r} \lesssim_{n,r} \sum_{J\in\mathbb{Z}}2^{Jr(s_{r}+\epsilon-1)}\left\langle\Omega_{J},{|\dot{\Delta}_{J}\omega|}^{r-2}\dot{\Delta}_{J}\omega\right\rangle. 
\end{equation} Since $\epsilon\in(0,2)$ and $r\in(1,\infty)$, the interval $I_{n,\epsilon,r}:=(\frac{2n}{r}-\frac{4}{r},\frac{2n}{r})\cap(\frac{2n}{r}-2+\epsilon,\frac{2n}{r}-\frac{2}{r}+\epsilon)$ is non-empty, so we are free to choose $r_{1}$ satisfying $\frac{2n}{r_{1}}\in I_{n,\epsilon,r}$. Let $r_{2}$ and $r_{3}$ be given by \begin{equation*}\label{r2r3} -\frac{n}{r_{2}}=s_{r}+\epsilon-1-\frac{n}{r_{1}}, \quad r_{3}=\frac{r_{1}r_{2}}{r_{1}+r_{2}}. \end{equation*} Then $\frac{2n}{r}>\frac{2n}{r_{1}}>\frac{2n}{r}-\frac{4}{r}$ is equivalent to $r<r_{1}<\frac{rn}{n-2}$, while $\frac{2n}{r}-\frac{2}{r}+\epsilon>\frac{2n}{r_{1}}>\frac{2n}{r}-2+\epsilon$ is equivalent to $\frac{rn}{n+2(r-1)}<r_{3}<r$. Therefore \begin{equation*}\label{r1range} {\left(\frac{r'n}{n-2}\right)}'=\frac{rn}{n+2(r-1)}<r_{3}<r<r_{1}<\frac{rn}{n-2}. \end{equation*} Writing $r_{4}=r_{3}'(r-1)$, by H\"{o}lder's inequality we have \begin{equation}\label{rhs-bound} \begin{aligned} \sum_{J\in\mathbb{Z}}2^{Jr(s_{r}+\epsilon-1)}\left\langle\Omega_{J},{|\dot{\Delta}_{J}\omega|}^{r-2}\dot{\Delta}_{J}\omega\right\rangle &\leq \sum_{J\in\mathbb{Z}}2^{Jr(s_{r}+\epsilon-1)}{\|\Omega_{J}\|}_{L^{r_{3}}}{\|\dot{\Delta}_{J}\omega\|}_{L^{r_{4}}}^{r-1} \\ &\leq {\left\|J\mapsto2^{J(s_{r}+\epsilon-1)}{\|\Omega_{J}\|}_{L^{r_{3}}}\right\|}_{l^{r}}{\|\omega\|}_{\dot{B}_{r_{4},r}^{s_{r}+\epsilon-1}}^{r-1}. \end{aligned} \end{equation} Since ${\left(\frac{r'n}{n-2}\right)}'<r_{3}<r$, it follows that $r'<r_{3}'<\frac{r'n}{n-2}$ and hence $r<r_{4}<\frac{rn}{n-2}$. By \eqref{interpolation-holder}, we deduce that \begin{equation}\label{mu-interpolation} {\|\omega\|}_{\dot{B}_{r_{4},r}^{s_{r}+\epsilon-1}} \leq {\|\omega\|}_{\dot{B}_{r,r}^{s_{r}+\epsilon-1}}^{\mu}{\|\omega\|}_{\dot{B}_{\frac{rn}{n-2},r}^{s_{r}+\epsilon-1}}^{1-\mu} \quad \text{for }\mu=\frac{rn}{2}\left(\frac{1}{r_{4}}-\frac{n-2}{rn}\right). 
\end{equation} We now need to estimate ${\left\|J\mapsto2^{J(s_{r}+\epsilon-1)}{\|\Omega_{J}\|}_{L^{r_{3}}}\right\|}_{l^{r}}$. By the Bony estimates \eqref{bony-estimate-1} and \eqref{bony-estimate-3}, the inequality ${\|v\|}_{\dot{B}_{r_{2},\infty}^{0}}\lesssim_{\varphi}{\|v\|}_{L^{r_{2}}}$, and the assumption $r<\frac{n}{2-\epsilon}$ (which is equivalent to $s_{r}+\epsilon>1$), we have \begin{equation}\label{product-estimate} {\|\omega\cdot\nabla u\|}_{\dot{B}_{r_{3},r}^{s_{r}+\epsilon-1}} \lesssim_{\varphi,\epsilon,r} {\|\omega\|}_{\dot{B}_{r_{1},r}^{s_{r}+\epsilon-1}}{\|\nabla u\|}_{L^{r_{2}}}+{\|\omega\|}_{L^{r_{2}}}{\|\nabla u\|}_{\dot{B}_{r_{1},r}^{s_{r}+\epsilon-1}}. \end{equation} On the other hand, by Proposition \ref{commutator-prop} we have $[u\cdot\nabla,\dot{\Delta}_{J}]\omega=\sum_{I=1}^{5}R_{J}^{I}$, where \begin{equation}\label{commutator-estimate} \begin{aligned} {\left\|J\mapsto2^{J(s_{r}+\epsilon-1)}{\|R_{J}^{1}\|}_{L^{r_{3}}}\right\|}_{l^{r}} &\lesssim_{\varphi,\epsilon,r,r_{1}} {\|\nabla u\|}_{L^{r_{2}}}{\|\omega\|}_{\dot{B}_{r_{1},r}^{s_{r}+\epsilon-1}}, \\ {\left\|J\mapsto2^{J(s_{r}+\epsilon-1)}{\|R_{J}^{I}\|}_{L^{r_{3}}}\right\|}_{l^{r}} &\lesssim_{\varphi,\epsilon,r,r_{1}} {\|\nabla u\|}_{\dot{B}_{r_{1},r}^{s_{r}+\epsilon-1}}{\|\omega\|}_{\dot{B}_{r_{2},\infty}^{0}} \quad \text{for }I=2,3,4,5. \end{aligned} \end{equation} Combining \eqref{product-estimate} and \eqref{commutator-estimate}, and noting the relations \eqref{w-from-u}-\eqref{u-from-w} and Lemma \ref{useful-inequalities}, we therefore have \begin{equation}\label{W-estimate} {\left\|J\mapsto2^{J(s_{r}+\epsilon-1)}{\|\Omega_{J}\|}_{L^{r_{3}}}\right\|}_{l^{r}} \lesssim_{\varphi,\epsilon,r,r_{1}} {\|\omega\|}_{\dot{B}_{r_{1},r}^{s_{r}+\epsilon-1}\cap L^{r_{2}}}^{2}. 
\end{equation} The indices $r_{1},r_{2}$ were chosen to ensure that $r<r_{1}<\frac{rn}{n-2}$ and $0<s_{r}+\epsilon-1=\frac{n}{r_{1}}-\frac{n}{r_{2}}$, and that the spaces $\dot{B}_{r_{1},r}^{s_{r}+\epsilon-1}$ and $L^{r_{2}}$ have the same scaling; from these conditions we deduce the interpolation inequality \begin{equation}\label{nu-interpolation} {\|\omega\|}_{\dot{B}_{r_{1},r}^{s_{r}+\epsilon-1}\cap L^{r_{2}}} \lesssim_{n,\epsilon,r,r_{1}} {\|\omega\|}_{\dot{B}_{r,r}^{s_{r}+\epsilon-1}}^{\nu}{\|\omega\|}_{\dot{B}_{\frac{rn}{n-2},r}^{s_{r}+\epsilon-1}}^{1-\nu} \quad \text{for }\nu=\frac{rn}{2}\left(\frac{1}{r_{1}}-\frac{n-2}{rn}\right), \end{equation} where the estimate on ${\|\omega\|}_{\dot{B}_{r_{1},r}^{s_{r}+\epsilon-1}}$ follows from \eqref{interpolation-holder}, while the estimate on ${\|\omega\|}_{L^{r_{2}}}$ is justified (writing $r_{5}=\frac{rn}{n-2}$ and $s_{r}+\epsilon-1=s_{r_{5}}+\epsilon-1+\frac{2}{r}$) by the embedding ${\|\omega\|}_{L^{r_{2}}}\leq{\|\omega\|}_{\dot{B}_{r_{2},1}^{0}}$, and the calculations \begin{equation*} \text{If }r_{2}\geq r_{5}\text{ :}\quad{\|\omega\|}_{\dot{B}_{r_{2},1}^{0}}\lesssim_{n,\epsilon,r,r_{1}}{\|\omega\|}_{\dot{B}_{r_{2},\infty}^{s_{r_{2}}+\epsilon-1}}^{\nu}{\|\omega\|}_{\dot{B}_{r_{2},\infty}^{s_{r_{2}}+\epsilon-1+\frac{2}{r}}}^{1-\nu}\lesssim_{n}{\|\omega\|}_{\dot{B}_{r_{5},\infty}^{s_{r_{5}}+\epsilon-1}}^{\nu}{\|\omega\|}_{\dot{B}_{r_{5},\infty}^{s_{r_{5}}+\epsilon-1+\frac{2}{r}}}^{1-\nu} \end{equation*} \begin{equation*} \text{If }r_{2}< r_{5}\text{ :}\quad{\|\omega\|}_{\dot{B}_{r_{2},1}^{0}}\lesssim_{n,\epsilon,r,r_{1}}{\|\omega\|}_{\dot{B}_{r_{2},\infty}^{s_{r_{2}}+\epsilon-1}}^{\rho}{\|\omega\|}_{\dot{B}_{r_{2},\infty}^{s_{r}+\epsilon-1}}^{1-\rho}\lesssim_{n}{\|\omega\|}_{\dot{B}_{r,\infty}^{s_{r}+\epsilon-1}}^{\rho}{\|\omega\|}_{\dot{B}_{r,\infty}^{s_{r}+\epsilon-1}}^{(1-\rho)\sigma}{\|\omega\|}_{\dot{B}_{r_{5},\infty}^{s_{r}+\epsilon-1}}^{(1-\rho)(1-\sigma)} \end{equation*} for $\nu,\rho,\sigma\in(0,1)$
determined by \eqref{interpolation-holder}-\eqref{interpolation-geometric}. By the bounds \eqref{rhs-bound} and \eqref{W-estimate}, and the interpolation inequalities \eqref{mu-interpolation} and \eqref{nu-interpolation}, we obtain \begin{equation*} \sum_{J\in\mathbb{Z}}2^{Jr(s_{r}+\epsilon-1)}\left\langle\Omega_{J},{|\dot{\Delta}_{J}\omega|}^{r-2}\dot{\Delta}_{J}\omega\right\rangle \lesssim_{\varphi,\epsilon,r} {\|\omega\|}_{\dot{B}_{r,r}^{s_{r}+\epsilon-1}}^{(r-1)\mu+2\nu}{\|\omega\|}_{\dot{B}_{\frac{rn}{n-2},r}^{s_{r}+\epsilon-1}}^{r+1-[(r-1)\mu+2\nu]}, \end{equation*} where $\mu$ and $\nu$ are given by \eqref{mu-interpolation} and \eqref{nu-interpolation}. Noting that $$\frac{r-1}{r_{4}}+\frac{2}{r_{1}}=1-\frac{1}{r_{3}}+\frac{2}{r_{1}}= 1+\frac{1}{r_{1}}-\frac{1}{r_{2}}=1+\frac{1}{r}+\frac{\epsilon-2}n\, ,$$ we see that \begin{equation*} \begin{aligned} (r-1)\mu+2\nu &= \frac{rn}{2}\left((r-1)\left(\frac{1}{r_{4}}-\frac{n-2}{rn}\right)+2\left(\frac{1}{r_{1}}-\frac{n-2}{rn}\right)\right) \\ &= \frac{rn}{2}\left(\frac{r-1}{r_{4}}+\frac{2}{r_{1}}\right)-\frac{(r+1)(n-2)}{2} \\ &= 1+\frac{r\epsilon}{2} \end{aligned} \end{equation*} and hence\footnote{As a side remark, we observe that if an estimate of the form $\sum_{J\in\mathbb{Z}}2^{Jr(s_{r}+\epsilon-1)}\left\langle\Omega_{J},{|\dot{\Delta}_{J}\omega|}^{r-2}\dot{\Delta}_{J}\omega\right\rangle \lesssim_{\varphi,\epsilon,r,\alpha,\beta} {\|\omega\|}_{\dot{B}_{r,r}^{s_{r}+\epsilon-1}}^{\alpha}{\|\omega\|}_{\dot{B}_{\frac{rn}{n-2},r}^{s_{r}+\epsilon-1}}^{\beta}$ holds for all antisymmetric $\omega$, then necessarily $\alpha=1+\frac{r\epsilon}{2}$ and $\beta=\frac{r}{2}(2-\epsilon)$. 
This observation can be justified by considering the effect of the rescaling $\omega\mapsto\kappa\omega$ and $x\mapsto 2^{N}x$ for $\kappa>0$ and $N\in\mathbb{Z}$.} \begin{equation}\label{rhs-bound-2} \sum_{J\in\mathbb{Z}}2^{Jr(s_{r}+\epsilon-1)}\left\langle\Omega_{J},{|\dot{\Delta}_{J}\omega|}^{r-2}\dot{\Delta}_{J}\omega\right\rangle \lesssim_{\varphi,\epsilon,r} {\|\omega\|}_{\dot{B}_{r,r}^{s_{r}+\epsilon-1}}^{1+\frac{r\epsilon}{2}}{\|\omega\|}_{\dot{B}_{\frac{rn}{n-2},r}^{s_{r}+\epsilon-1}}^{\frac{r}{2}(2-\epsilon)}. \end{equation} By \eqref{w-energy-3}, \eqref{rhs-bound-2} and Young's product inequality, we deduce that \begin{equation*} \frac{\partial}{\partial t}\left({\|\omega\|}_{\dot{B}_{r,r}^{s_{r}+\epsilon-1}}^{r}\right) \lesssim_{n,\epsilon,r} {\|\omega\|}_{\dot{B}_{r,r}^{s_{r}+\epsilon-1}}^{r+\frac{2}{\epsilon}}. \end{equation*} Applying Lemma \ref{ode-lemma} with $X(t)={\|\omega(t)\|}_{\dot{B}_{r,r}^{s_{r}+\epsilon-1}}^{r}$ and $\gamma=\frac{2}{r\epsilon}$, and noting \eqref{w-qualitative-blowup}, we conclude that \eqref{w-blowup}${}_{\epsilon<2}$ holds, which (as noted above) implies \eqref{eps-less-2}. {\bf The case $\boldsymbol{\epsilon=2}$.} By \eqref{w-energy-2} and H\"{o}lder's inequality, for all $J,t$ satisfying $\dot{\Delta}_{J}\omega(t)\neq0$ in $\mathcal{S}'$ we have \begin{equation}\label{eps2-w-energy} \frac{\partial}{\partial t}\left({\|\dot{\Delta}_{J}\omega\|}_{L^{r}}\right) \lesssim_{n,r} {\|\Omega_{J}\|}_{L^{r}}. \end{equation} If $\dot{\Delta}_{J}\omega(t_{0})=0$ in $\mathcal{S}'$, then either ${\left.\frac{\partial}{\partial t}\left({\|\dot{\Delta}_{J}\omega\|}_{L^{r}}\right)\right|}_{t=t_{0}}=0$ (in which case \eqref{eps2-w-energy} is true for $t=t_{0}$) or ${\left.\frac{\partial}{\partial t}\left({\|\dot{\Delta}_{J}\omega\|}_{L^{r}}\right)\right|}_{t=t_{0}}\neq0$ (in which case \eqref{eps2-w-energy} is true for $t$ close to $t_{0}$, so by continuity it is true for $t=t_{0}$). 
Therefore \eqref{eps2-w-energy} holds for all $J\in\mathbb{Z}$ and $t\in(0,T)$, so we can estimate \begin{equation}\label{eps2-w-energy-2} \frac{\partial}{\partial t}\left({\|\omega\|}_{\dot{B}_{r,1}^{s_{r}+1}}\right) \lesssim_{n,r} {\left\|J\mapsto2^{J(s_{r}+1)}{\|\Omega_{J}\|}_{L^{r}}\right\|}_{l^{1}}. \end{equation} We now need to estimate ${\left\|J\mapsto2^{J(s_{r}+1)}{\|\Omega_{J}\|}_{L^{r}}\right\|}_{l^{1}}$. By the Bony estimates \eqref{bony-estimate-1} and \eqref{bony-estimate-3}, the inequality ${\|v\|}_{\dot{B}_{\infty,\infty}^{0}}\lesssim_{\varphi}{\|v\|}_{L^{\infty}}$, and the assumption $r<\infty$ (which is equivalent to $s_{r}+1>0$), we have \begin{equation}\label{eps2-prod} {\|\omega\cdot\nabla u\|}_{\dot{B}_{r,1}^{s_{r}+1}} \lesssim_{\varphi,r} {\|\omega\|}_{\dot{B}_{r,1}^{s_{r}+1}}{\|\nabla u\|}_{L^{\infty}} + {\|\omega\|}_{L^{\infty}}{\|\nabla u\|}_{\dot{B}_{r,1}^{s_{r}+1}}. \end{equation} On the other hand, by Proposition \ref{commutator-prop} we have $[u\cdot\nabla,\dot{\Delta}_{J}]\omega=\sum_{I=1}^{5}R_{J}^{I}$, where \begin{equation}\label{eps2-comm} \begin{aligned} {\left\|J\mapsto2^{J(s_{r}+1)}{\|R_{J}^{1}\|}_{L^{r}}\right\|}_{l^{1}} &\lesssim_{\varphi,r} {\|\nabla u\|}_{L^{\infty}}{\|\omega\|}_{\dot{B}_{r,1}^{s_{r}+1}}, \\ {\left\|J\mapsto2^{J(s_{r}+1)}{\|R_{J}^{I}\|}_{L^{r}}\right\|}_{l^{1}} &\lesssim_{\varphi,r} {\|\nabla u\|}_{\dot{B}_{r,1}^{s_{r}+1}}{\|\omega\|}_{\dot{B}_{\infty,\infty}^{0}} \quad \text{for }I=2,3,4,5. \end{aligned} \end{equation} Combining \eqref{eps2-prod} and \eqref{eps2-comm}, and noting the relations \eqref{w-from-u}-\eqref{u-from-w} and Lemma \ref{useful-inequalities}, we therefore have \begin{equation}\label{eps2-W-estimate} {\left\|J\mapsto2^{J(s_{r}+1)}{\|\Omega_{J}\|}_{L^{r}}\right\|}_{l^{1}} \lesssim_{\varphi,r} {\|\omega\|}_{\dot{B}_{r,1}^{s_{r}+1}}^{2}. 
\end{equation} By \eqref{eps2-w-energy-2} and \eqref{eps2-W-estimate} we have \begin{equation*} \frac{\partial}{\partial t}\left({\|\omega\|}_{\dot{B}_{r,1}^{s_{r}+1}}\right) \lesssim_{\varphi,r} {\|\omega\|}_{\dot{B}_{r,1}^{s_{r}+1}}^{2}. \end{equation*} Hence if $\epsilon =2$, applying Lemma \ref{ode-lemma} with $X(t)={\|\omega\|}_{\dot{B}_{r,1}^{s_{r}+1}}$ and $\gamma=1=\frac{2}{\epsilon}$, and noting \eqref{w-qualitative-blowup}${}_{\epsilon=2}$, we conclude that \eqref{w-blowup}${}_{\epsilon=2}$ holds, which implies \eqref{eps-is-2} and completes the proof of Theorem \ref{main-theorem}. \end{proof
\section{Introduction} \begin{figure}[ht!] \begin{center} \subfigure[]{\includegraphics[scale=0.30]{Fig1a_new.eps}} \subfigure[]{\includegraphics[scale=0.20]{Fig1b.eps}} \end{center} \caption{(a) The room temperature x-ray diffraction patterns for bulk and nanoscale samples and their refinement by FullProf; (b) crystallographic structure of Y$_2$NiMnO$_6$. } \end{figure} Alongside comprehensive investigation of multiferroicity in single-phase Type-I and Type-II systems and composites, in recent times, an alternative paradigm is emerging where spontaneously formed (such as domain walls or boundaries) and/or artificially fabricated (such as nanosized and heteroepitaxial architectures) surface/interface regions are found to be hosting the magnetic and ferroelectric orders and offering a test bed for inducing coupling among the order parameters \cite{Fiebig,Zutic,Spaldin}. For example, ferroelastic domain walls in SrTiO$_3$, below its structural phase transition (from cubic to tetragonal) at $\sim$105 K, host ferroelectric order and, therefore, respond to electrical tuning and, in turn, change the magnetic domain structure in La$_{\frac{1}{2}}$Sr$_{\frac{1}{2}}$MnO$_3$ deposited on the SrTiO$_3$ substrate \cite{Salje}. It has also been pointed out in a theoretical work \cite{Bellaiche} that, in orthoferrite SmFeO$_3$, magnetic domain boundaries could be polar while the domains themselves are not. Surface and interface regions in many systems have been found to be topologically protecting the magnetic and electrical vortices, often with a coupling between them \cite{Mathur,Tokura,Loidl,Kezsmarki,Nahas,Wang,Goncalves,Das}. They were shown to result from extended spin-orbit coupling in lower dimensions \cite{Fert}. In a recent theoretical work \cite{Betouras}, it has been claimed that a bulk and collinear ferro- or antiferromagnet supports surface ferroelectricity both in the presence and absence of surface-induced Dzyaloshinskii-Moriya (DM) exchange interaction.
These new developments are, therefore, precipitating a new paradigm of surface/interface multiferroicity which is distinct even from multiferroicity in the composite systems where striction mediated coupling between magnetic and ferroelectric order parameters across the interfaces of constituent phases is the key feature. Given this background, a simple question naturally arises: whether surface-induced multiferroicity is also possible in nanosized particles (because of their enhanced surface-area-to-volume ratio) in the absence of bulk multiferroicity. Surface ferromagnetism was already shown \cite{Sundaresan} to be ubiquitous in nanosized materials even in the absence of magnetic ions. However, it is not known whether ferroelectricity could also emerge along with surface magnetism at the nanoscale. Earlier work \cite{Lu} on nanoscale multiferroic systems such as BiFeO$_3$, o-TbMnO$_3$, h-RMnO$_3$ (R = Sm, Eu, Gd, Dy, Tb) etc. concentrated only on exploring the prevalence of bulk multiferroicity as a function of particle size and/or thickness of films. In this paper, we demonstrate that in nanorods of the double perovskite Y$_2$NiMnO$_6$ (YNMO) compound - which in bulk form exhibits ferroelectricity due to magnetic ordering below $T_N$ $\approx$ 70 K - magnetoelectric coupling between surface ferromagnetism and surface ferroelectricity is quite significant. The oxygen vacancies at the surface induce surface ferromagnetism at room temperature while surface ferroelectricity emerges from Dzyaloshinskii-Moriya (DM) exchange interactions within the noncentrosymmetric surface in the presence of large Rashba spin-orbit coupling. Measurement of remanent ferroelectric polarization under varying magnetic field at room temperature shows that the magnetoelectric coupling is substantial. Bulk YNMO assumes the centrosymmetric $P2_1/n$ crystallographic structure at room temperature with ordering of Ni$^{2+}$/Mn$^{4+}$ ions.
It exhibits reasonably strong magnetic order driven ferroelectricity ($P_S$ $\sim$ 0.15 $\mu$C/cm$^2$) below $T_N$ $\approx$ 70 K \cite{Su}. The magnetic structure turns out to be exchange striction driven collinear E-type antiferromagnetic ($\uparrow\uparrow\downarrow\downarrow$). The ferroelectricity in such systems arises from asymmetric shift of the O$^{2-}$ ions. \section{Experimental Details} The YNMO nanorods were prepared by the hydrothermal method. The nitrate and acetate salts Y(NO$_3$)$_3\cdot$6H$_2$O, Ni(NO$_3$)$_3\cdot$6H$_2$O and C$_4$H$_6$MnO$_4$ were first dissolved in 40 ml deionized water in stoichiometric ratio (2:1:1 molar ratio) under stirring. Then 10 ml of 5M NaOH solution was added. The precipitate formed was collected and stirred for 30 min. It was later transferred to a Teflon sealed stainless steel autoclave and heated at 180$^o$C for 24h. The final product was washed with distilled water and ethanol by centrifugation and dried at 70$^o$C for 24h. The powder obtained thus was ground by mortar and calcined at 1000$^o$C for 4h. Bulk YNMO was prepared by the solid state reaction method. The bulk and nanosized YNMO were characterized by x-ray diffraction (XRD), scanning electron and transmission electron microscopy (SEM and TEM), and x-ray photoelectron spectroscopy (XPS). The XRD data were refined by FullProf. X-ray photoelectron spectroscopy measurement was carried out to determine the charge states of the ions in both the bulk sample and the nanorods of YNMO. Temperature dependent magnetic properties were measured by a vibrating sample magnetometer (Cryogenics 16T VSM). The magnetic force microscopy (MFM) images were captured by the LT-AFM/MFM system of NanoMagnetics Instruments using commercial Co-alloy coated MFM cantilevers. A two-pass mode was used in raster scan, with the cantilever oscillating at the resonant frequency ($\sim$70 kHz) under digital phase-locked-loop control. The oscillation amplitude was within 10-50 nm.
The forward scan records the surface topography in semi-contact mode where the cantilever oscillation amplitude is used as the feedback parameter. The cantilever is then lifted by 50-150 nm from the surface to reduce the influence of short-range forces and record the magnetic interaction between the tip of the cantilever and the sample surface. The corresponding phase shift is recorded as the magnetic domain image. To measure the ferroelectric polarization, YNMO nanorods were deposited on a Si/SiO$_2$ substrate and silver electrodes were deposited in a two-probe top-top electrode configuration. For bulk pellet samples, silver electrodes were used in a top-bottom configuration. The remanent ferroelectric hysteresis loops were measured by a Ferroelectric Loop Tester of Radiant Inc. (Precision LC-II model). \begin{figure*}[ht!] \begin{center} \subfigure[]{\includegraphics[scale=0.18]{Fig2a_new.eps}} \subfigure[]{\includegraphics[scale=0.15]{Fig2b_new.eps}} \subfigure[]{\includegraphics[scale=0.15]{Fig2c_new.eps}} \subfigure[]{\includegraphics[scale=0.20]{Fig2d_new.eps}} \end{center} \caption{The (a) field-emission scanning electron microscopy image of nanorods; (b) transmission electron microscopy image of a single nanorod; (c) high resolution TEM image showing the ($\bar{1}$13) planes; (d) HAADF image of the nanorod together with mapping of concentration of the elements such as Y, Mn, Ni, and O across the diameter of the nanorod as obtained from EDX line scanning; it helps in identifying the `stoichiometric' core and `nonstoichiometric' surface regions of the nanorod. } \end{figure*} \begin{figure*}[ht!]
\begin{center} \subfigure[]{\includegraphics[scale=0.25]{Fig3a_new.eps}} \subfigure[]{\includegraphics[scale=0.25]{Fig3b.eps}} \subfigure[]{\includegraphics[scale=0.25]{Fig3c_new.eps}} \subfigure[]{\includegraphics[scale=0.25]{Fig3d_new.eps}} \end{center} \caption{The deconvoluted x-ray photoelectron spectra showing the core and satellite peaks for (a) Y, (b) Mn, (c) Ni, and (d) O and their fitting. } \end{figure*} \begin{figure*}[ht!] \begin{center} \subfigure[]{\includegraphics[scale=0.25]{Fig4a_final.eps}} \subfigure[]{\includegraphics[scale=0.25]{Fig4b_final.eps}} \subfigure[]{\includegraphics[scale=0.25]{Fig4c_new.eps}} \subfigure[]{\includegraphics[scale=0.25]{Fig4d_new.eps}} \end{center} \caption{(a) The zero-field cooled and field-cooled magnetization versus temperature plots for bulk and nanoscale YNMO; (b) magnetic hysteresis loops at low, intermediate, and room temperature for nanoscale YNMO; (c) magnetic hysteresis loops for bulk and nanoscale samples of YNMO at room temperature; inset shows the blown up portion of the loop near origin for both the samples; (d) weak nonlinear M-H pattern near origin, extracted from the overall M-H loop, signifying subtle surface ferromagnetism at room temperature in nanorods.} \end{figure*} \section{Results and Discussion} Figure 1 shows the room temperature x-ray diffraction patterns for the bulk sample and nanorods of YNMO and their refinement by FullProf. The crystallographic structure, also shown in Fig. 1, is found to be monoclinic (space group $P2_1/n$) for both bulk and nanoscale samples, though the nanorods appear to be preferentially oriented along the ($\bar{1}$13) plane. This indicates a certain extent of texturing of the entire nanorod assembly on which the x-ray diffraction pattern was recorded.
The lattice parameters were estimated to be $a$ = 5.221 \AA, $b$ = 5.553 \AA, $c$ = 7.479 \AA, $\beta$ = 89.87$^o$, and $a$ = 5.241 \AA, $b$ = 5.583 \AA, $c$ = 7.488 \AA, $\beta$ = 89.87$^o$, respectively, for bulk and nanoscale samples. Other crystallographic details such as ion positions, bond lengths and angles for both the bulk and nanoscale samples as well as the estimated standard deviation (corresponding to the lattice parameters and ion positions) and the fit statistics of the XRD data are included in the Supplemental Material \cite{Supplemental}. The preferential orientation of the ($\bar{1}$13) plane could be observed in high resolution transmission electron microscopy (HRTEM) images as well (Fig. 2). The HRTEM images were analyzed by using fast Fourier transform (FFT) and its inverse (IFFT) in order to identify the lattice planes and vectors clearly. In the Supplemental Material \cite{Supplemental}, the HRTEM images and their FFT and IFFT versions are shown, along with the lattice planes and vectors. The growth axis [$\bar{1}$13] of the nanorods could be observed in the images (Supplemental Material \cite{Supplemental}). The SEM and bright field TEM images are shown in Fig. 2. The diameter of the nanorods is found to be around 100 nm. We further carried out detailed energy dispersive X-ray (EDX) elemental line profile analysis across the diameter of an individual nanorod by using the scanning TEM (STEM) high angle annular dark field (HAADF) technique. Figure 2(d) shows the HAADF image of the nanorod as well as a profile of elemental concentration of Y, Mn, Ni, and O across the diameter of the nanorod. Clearly, while all the other elements are found to be homogeneously distributed across the entire diameter of the nanorod, the oxygen concentration was found to decrease near and across the surface region.
The profile of oxygen concentration across the diameter of the nanorod yields the thickness of the region where oxygen vacancies form to be $\sim$10 nm. We, therefore, divide the whole nanorod into two regions - `stoichiometric' and `nonstoichiometric' - as shown in Fig. 2(d). Using this information, it is possible to estimate the total number of unit cells within the `nonstoichiometric' region of a typical nanorod across which oxygen vacancies form. Considering the typical dimensions of the nanorod to be length $\approx$600 nm and diameter $\approx$100 nm and using the volume of the crystallographic cell ($\approx$0.21913 nm$^3$), the total number of unit cells in the whole of the nanorod is estimated to be 21504992. Since the thickness of the `nonstoichiometric' region is $\approx$10 nm, the number of the unit cells within the `stoichiometric' region of the nanorod turns out to be 13763195. The cells within the `nonstoichiometric' region thus constitute $\sim$36\% of the total. This is a significantly large number and, therefore, emergence of multiferroicity in this region should have a profound impact. In order to corroborate the evidence of oxygen vacancy formation at the surface of the nanorods, we have also carried out XPS. The spectra for Y, Mn, Ni, and O are shown in Fig. 3. The overall spectra were deconvoluted to obtain the main and satellite peaks for each of the ions. For example, Fig. 3(a) shows the peaks at 156.5 eV and 158.5 eV for Y$^{3+}$ states. Figure 3(c) shows the deconvoluted peaks - main and satellite - for Ni$^{2+}$. In both these cases, fitting of the spectra following background subtraction yields the charge states to be unique - i.e., Y$^{3+}$ and Ni$^{2+}$. Interestingly, however, Mn ions appear to be in a mixed state. Fitting of the peaks shows that both Mn$^{3+}$ and Mn$^{4+}$ charge states are present. The ratio of the area ($A$) under the corresponding peaks [$A_{Mn^{3+}}$/($A_{Mn^{3+}}+A_{Mn^{4+}}$)] is calculated to be $\sim$0.3.
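The unit-cell bookkeeping above can be reproduced with elementary geometry. A minimal back-of-envelope sketch (our own illustrative check, not the authors' calculation), assuming a cylindrical rod of length 600 nm and diameter 100 nm with a 10 nm thick `nonstoichiometric' shell, and taking the cell volume from the nanoscale lattice parameters quoted earlier:

```python
import math

# Monoclinic cell volume V = a*b*c*sin(beta), from the nanoscale lattice
# parameters (a, b, c in Angstrom; beta in degrees); converted to nm^3.
a, b, c, beta_deg = 5.241, 5.583, 7.488, 89.87
v_cell = a*b*c*math.sin(math.radians(beta_deg))/1e3   # ~0.2191 nm^3

R, h, shell = 50.0, 600.0, 10.0      # rod radius, length, shell thickness (nm)
n_total = math.pi*R**2*h/v_cell                # cells in the whole rod, ~2.15e7
n_core = math.pi*(R - shell)**2*h/v_cell       # cells in the stoichiometric core
frac_shell = 1.0 - n_core/n_total              # = 1 - (40/50)^2 = 0.36
```

The shell thus holds exactly $36\%$ of the unit cells, matching the $\sim$36\% quoted above, and $n_{total}\approx2.15\times10^{7}$ agrees with the quoted 21504992 to within rounding of the cell volume.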
From the oxygen spectra, the charge state of the oxygen ion turns out to be $-2$. Because of the oxygen deficiency in the surface region, the Mn$^{4+}$ ions were found to be reduced to Mn$^{3+}$ and thus the XPS data corroborate the observations made by TEM. It appears that $\sim$30\% of the Mn ions assume the Mn$^{3+}$ state within a region comprising $\sim$36\% of the cells. The XPS data for the bulk sample are shown in the Supplemental Material \cite{Supplemental}. All the peaks corresponding to the Y$^{3+}$, Ni$^{2+}$, Mn$^{4+}$, and O$^{2-}$ ions are found to reflect unique charge states; no signature of the presence of a mixed charge state could be observed in this case. This observation is consistent with those made by others for bulk YNMO \cite{Su}. Only in the nanoscale samples does Mn$^{3+}$ arise, because of oxygen vacancies near the surface region. We now turn our attention to the results of the magnetic property measurements. The zero-field-cooled (ZFC) and field-cooled (FC) magnetization versus temperature data are shown in Fig. 4(a) for both the bulk and nanoscale YNMO. While the bulk sample exhibits the antiferromagnetic transition at $T_N$ $\approx$ 77 K, as expected, in the nanoscale samples the $T_N$ is found to have dropped to $\sim$66 K. This is consistent with the observations made in other nanoscale magnetic compounds \cite{Zhao}. Interestingly, within this size range, the nanorods appear to retain the long-range magnetic order (albeit with weaker strength) in the bulk of the sample. Figures 4(b) and 4(c) show, respectively, the magnetic hysteresis loops across $\pm$50 kOe for nanoscale YNMO at different temperatures across 10-300 K and those for both the bulk and nanoscale samples at 300 K. Clearly, the bulk sample exhibits a linear, i.e., paramagnetic, $M$-$H$ pattern at 300 K. In contrast, the nanoscale sample exhibits finite magnetic coercivity [Fig. 4(c) inset] and very weak nonlinearity because of subtle surface magnetism. 
Using a Brillouin function for the nonlinear ferromagnetic component and a linear function for the paramagnetic one, we extracted the ferromagnetic component of the overall magnetization observed experimentally [Fig. 4(d)]. It yields the saturation magnetization $M_S$ to be $\sim$0.005 emu/g at room temperature. This turns out to be $\sim$5\% of the maximum ferromagnetic moment expected for perfect `ferro' alignment of all the spins in YNMO. The near linearity in $M$-$H$ also signifies the presence of long-range antiferromagnetic order in the surface region. The coercive fields for the forward and reverse branches turn out to be $H_{C1}$ $\sim$735 Oe and $H_{C2}$ $\sim$600 Oe, respectively. This yields the coercivity $H_C$ $\sim$666 Oe and exchange bias field $H_E$ $\sim$67.5 Oe. The finite $H_E$ confirms the presence of surface antiferromagnetism and of an interface between ferromagnetic and antiferromagnetic regions. The shape of the hysteresis loop, though it appears unusual, closely resembles the one expected of a dilute assembly of noninteracting three-dimensional nanoparticles with higher packing density \cite{Usov}. \begin{figure*}[ht!] \begin{center} \subfigure[]{\includegraphics[scale=0.30]{Fig5a_rev.eps}} \subfigure[]{\includegraphics[scale=0.15]{Fig5b.eps}} \subfigure[]{\includegraphics[scale=0.32]{Fig5c_final.eps}} \end{center} \caption{(a) Schematic of the cross-sectional view of a nanorod of radius $R$ and height $h$. The Heisenberg spins interact at the surface (inner core) of the rod with the strength $J_s$ ($J_b$) and the interaction strength across the interface is $J_{int}$; (b) a randomly assembled structure; (c) $M$-$H$ loops with $J_s = 0.2$ and $J_s = -0.2$.} \end{figure*} To examine the origin of the magnetic coercivity of the assembled nanorods we consider that the surface of the nanorods is ferromagnetic in nature. We model the rod-like nanoparticles as cylinders of radius $R$ and height $h$ [as shown in Fig. 
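The coercivity and exchange bias quoted above follow from the two branch crossings of the loop; a quick check of the arithmetic (treating the forward and reverse coercive fields as $+H_{C1}$ and $-H_{C2}$, a standard convention assumed here):

```python
# Coercive fields of the two branches (Oe), signed per the convention above.
h_c1 = 735.0   # forward branch crossing
h_c2 = -600.0  # reverse branch crossing

h_c = (abs(h_c1) + abs(h_c2)) / 2.0   # coercivity: half-width of the loop
h_e = (h_c1 + h_c2) / 2.0             # exchange bias: shift of the loop center

print(h_c, h_e)  # 667.5 67.5
```

The exchange bias $H_E = 67.5$ Oe matches the text exactly; the half-width of 667.5 Oe is consistent with the quoted $H_C\sim666$ Oe up to rounding of the branch fields.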
5(a)] consisting of classical Heisenberg spins ${\bf S}_i$ arranged in a cubic lattice \cite{Sahoo}, i.e., there are $2R + 1$ spins along the diameter of the cylinder; $N$ such identical nanorods are assembled randomly to form the superstructure as shown in Fig. 5(b), which mimics the self-assembled nanorods observed in our experiments. For comparison, we also consider two other kinds of assembled structures, obtained by attaching the rods {\it end to end} and {\it side by side}, which are shown in the Supplemental Material \cite{Supplemental}. Each lattice site $i$ of the superstructure is associated with a classical Heisenberg spin ${\bf S}_i$; we denote the lattice sites in the bulk by a set {\bf B} and those on the surface by a set {\bf S}. These spins interact following the Hamiltonian, \bea {\cal H} &=& -J_b \sum_{i\in {\bf B}, j\in {\bf B}} {\bf S}_i\cdot {\bf S}_j -J_{s} \sum_{i\in {\bf S}, j \in {\bf S}} {\bf S}_i\cdot {\bf S}_j \cr &&- J_{int}\sum_{i \in {\bf B}, j \in {\bf S}} {\bf S}_i\cdot {\bf S}_j - H \sum_{i\in {\bf B}\cup {\bf S} } S_i^z, \label{eq:H} \eea where $j$ is a nearest neighbor of site $i$, $J_{b}$ ($J_{s}$) is the exchange interaction strength among the spins within the bulk {\bf B} (on the surface {\bf S}), and $J_{int}$ represents the interaction between spins in the bulk and on the surface. The external magnetic field $H$ is applied along the easy axis, which is chosen as the $z$-direction. To model a paramagnetic core (bulk) and a ferro- or antiferromagnetic surface, we set $J_{b}$ to be very small, while $J_{s}$ takes positive or negative values depending on whether the surface is ferro- or antiferromagnetic, respectively. We study the hysteresis properties of these nanoparticle superstructures using Monte Carlo simulations with a single spin-flip Metropolis algorithm, where a trial configuration is accepted with probability $\min\{ 1, e^{-\beta \Delta E}\}$. Here, $\Delta E$ is the energy difference between the trial configuration and the present one. 
The trial configuration is constructed by changing the angles of a randomly chosen spin by a small but random amount. To calculate the hysteresis of the self-assembled nanorods we consider $N=11$ rods arranged randomly, as shown in Fig. 5(b). The radius $R$ and the height $h$ of the nanorods are taken as $8$ and $30$ lattice units, respectively. The interaction parameters of the Hamiltonian are set as follows. For the paramagnetic core and ferro- or antiferromagnetic surface, we set $J_{b}=0.01$ and consider $J_{s}$ in the range $(-0.8, 0.8)$; $J_{int}$ is expected to be similar in magnitude to $J_b$ and we set $J_{int}=0.01$. The temperature of the system is set as $\beta^{-1}=1$, which is much smaller than the critical temperature $T_C$ of the system; such a state with $\beta^{-1}= 1$ can be achieved in the Monte Carlo simulation, starting from any random initial configuration, by relaxing the system in the zero-field-cooled condition for a long time. Then, the magnetic field is raised slowly from $H=0$ to $H_{max}$ with a field sweep rate $\Delta H$ units per Monte Carlo sweep (MCS) and finally, the hysteresis loop calculations are undertaken for a cycle by varying the field from $H_{max}$ to $ -H_{max}$ and then back to $ H_{max}$. To this end we take $H_{max}=1$ and $\Delta H$ = $2\times10^{-3}$, i.e., the magnetic field is raised from $0$ to $1$ in 500 MCS. We have calculated the hysteresis loop for different $-0.8<J_s <0.8$; the magnetization loop for $J_{s} = 0.2$, shown in Fig. 5(c), has nearly {\it linear} behaviour and a small coercive field, which compares well with our experimental data. This supports the presence of ferromagnetic ordering at the surface of the self-assembled nanorods along with paramagnetic ordering in the bulk. For comparison, we have also shown in Fig. 5(c) the hysteresis loop for $J_{s} = -0.2$, where the surface has an antiferromagnetic order. The higher coercivity [Fig. 5(c)] for $J_s$ = 0.2 corroborates the experimental result. 
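The single spin-flip Metropolis update described above can be sketched as follows. This is a minimal illustration for a plain chain of Heisenberg spins with a single exchange constant $J$ and field $H$; the rod geometry, the separate couplings $J_b$, $J_s$, $J_{int}$, the field sweep, and all parameter values below are illustrative simplifications, not the full simulation used in the paper:

```python
import math
import random

random.seed(1)

def random_unit_vector():
    """Uniform random point on the unit sphere (Marsaglia method)."""
    while True:
        x, y = random.uniform(-1, 1), random.uniform(-1, 1)
        s = x * x + y * y
        if s < 1.0:
            f = 2.0 * math.sqrt(1.0 - s)
            return (x * f, y * f, 1.0 - 2.0 * s)

def energy(spins, J, H):
    """Nearest-neighbor Heisenberg chain with a field along z."""
    e = 0.0
    for i in range(len(spins) - 1):
        a, b = spins[i], spins[i + 1]
        e -= J * (a[0] * b[0] + a[1] * b[1] + a[2] * b[2])
    e -= H * sum(s[2] for s in spins)
    return e

def metropolis_sweep(spins, J, H, beta, eps=0.3):
    """One MCS: attempt a small random rotation of one spin, n times.

    The full-energy recomputation is O(n) per move; a real code would use
    the local energy change only. Kept simple for clarity.
    """
    n = len(spins)
    for _ in range(n):
        i = random.randrange(n)
        old = spins[i]
        # trial move: small random perturbation, then renormalize
        d = random_unit_vector()
        trial = tuple(old[c] + eps * d[c] for c in range(3))
        norm = math.sqrt(sum(t * t for t in trial))
        trial = tuple(t / norm for t in trial)
        e_old = energy(spins, J, H)
        spins[i] = trial
        d_e = energy(spins, J, H) - e_old
        # Metropolis rule: accept with probability min{1, exp(-beta*dE)}
        if d_e > 0 and random.random() >= math.exp(-beta * d_e):
            spins[i] = old  # reject the move

spins = [random_unit_vector() for _ in range(50)]
J, H, beta = 1.0, 0.1, 5.0
e0 = energy(spins, J, H)
for _ in range(100):
    metropolis_sweep(spins, J, H, beta)
e1 = energy(spins, J, H)
print(e0, ">", e1)
```

At this low temperature the chain relaxes toward the aligned state, so the energy drops well below its initial random-configuration value.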
This proves the presence of surface ferromagnetism in the present case. \begin{figure*}[ht!] \begin{center} \subfigure[]{\includegraphics[scale=0.22]{Fig6a_final.eps}} \subfigure[]{\includegraphics[scale=0.22]{Fig6c.eps}} \subfigure[]{\includegraphics[scale=0.22]{Fig6c_new.eps}} \subfigure[]{\includegraphics[scale=0.22]{Fig6e.eps}} \end{center} \caption{The magnetic force microscopy images show the (a) surface topography and (b) phase contrast; (c) corresponding line profile scan data for the cluster of nanorods, taken across the lines shown in the images; (d) mapping of the perpendicular stray field; the line profile scan data corresponding to the phase contrast image have been generated from this analysis.} \end{figure*} The room-temperature surface ferromagnetism is probed further by magnetic force microscopy (MFM). We show, respectively, in Figs. 6(a) and 6(b) the topography and MFM phase contrast images of nanorods of YNMO dispersed in ethanol and deposited on a SiO$_2$/Si substrate. The sample was kept under a $\sim$1000 Oe field prior to the measurements. The particles, as a result, appear to organize themselves in the form of rings, possibly via magnetic interactions. The line profile data corresponding to the topography and phase contrast images are shown in Fig. 6(c). The phase contrast image [Fig. 6(b)] is also analyzed by using the Gwyddion software for mapping the distribution of the stray field. The details of the analysis are described in the Supplemental Material \cite{Supplemental} (see also the reference [S1] therein). The distribution of the stray field is shown in Fig. 6(d). The line profile data corresponding to the phase contrast MFM image have, in fact, been generated from this analysis. The comparison of the line scan data for the topography and phase contrast images shows how the perpendicular stray field extends across the cluster of nanorods in different zones of the area under focus. 
From this, it appears that the magnetic domains extend across the individual nanorods and correspond closely to the respective cluster sizes, as observed in many such magnetic nanoparticle assemblies \cite{Puntes}. All this information confirms the presence of weak yet finite surface ferromagnetism, as a paramagnetic structure would not have produced such a stray field distribution across the particle surface. The observation of long-range magnetic order and oxygen deficiency within the surface region, therefore, indicates that the Ni$^{2+}(3d^8; S=1)$-O$^{2-}(2p^6; S=0)$-Mn$^{3+}(3d^4; S=2)$-O$^{2-}(2p^6; S=0)$-Mn$^{4+}(3d^3; S=3/2)$ pathway is responsible for developing the exchange coupling interactions in the surface. We next examine the exchange coupling interaction in detail. The surface ferromagnetism, in many oxide perovskite systems, was shown \cite{Aliyu} to result from double exchange interactions involving mobile charge carriers arising from oxygen deficiency at the surface. Identification of the appropriate exchange interaction pathway is important. For single transition metal ion systems, it is straightforward. However, for systems involving more than one transition metal ion, as in our case, the exchange interaction pathway could be complex and, therefore, the mechanism too could be more involved. In the bulk YNMO, the exchange coupling interactions across the Ni$^{2+}$-O$^{2-}$-Ni$^{2+}$, Mn$^{4+}$-O$^{2-}$-Mn$^{4+}$, and Ni$^{2+}$-O$^{2-}$-Mn$^{4+}$ bonds were found to be responsible for yielding the E-type antiferromagnetic structure $\uparrow\uparrow\downarrow\downarrow$ below $\sim$70 K. In the case of the nanorods, because of the presence of Mn$^{3+}$ ions at the surface, exchange coupling interactions across the Ni$^{2+}$-O$^{2-}$-Mn$^{3+}$, Mn$^{3+}$-O$^{2-}$-Mn$^{3+}$, and Mn$^{3+}$-O$^{2-}$-Mn$^{4+}$ bonds yield the surface magnetism at room temperature. 
It is important to mention here that a symmetry breaking magnetic structure could emerge either from the Dzyaloshinskii-Moriya (DM) antisymmetric exchange coupling interaction among noncollinear spins or from the exchange striction interaction among collinear spins \cite{Nagaosa}; the spin-orbit coupling assumes immense importance in the former case but not in the latter. Within the surface crystallographic structure (which, because of its lower dimension and consequent noncentrosymmetry, could yield large Rashba spin-orbit coupling), the DM exchange across Mn$^{3+}$-O$^{2-}$-Mn$^{3+}$ could stabilize. The theoretical and experimental work carried out so far \cite{Fert} has shown quite convincingly that lower dimensional structures - such as surface and interface regions - or artificially constructed two-dimensional layers exhibit stabilization of noncollinear spin structures with DM exchange interaction. The role of Rashba spin-orbit coupling in such lower-dimensional structures has been examined in detail \cite{Fert}. The symmetric ``exchange striction'' interaction may not be quite relevant in such systems. Direct experimental proof \cite{Gross,Chaurasiya} of the prevalence of the DM exchange interaction in cases of surface/interface magnetism has also been provided. Since, in the present case, we observe ferroelectricity originating from surface magnetism at room temperature (where bulk magnetism has no role to play), it is quite likely that the surface magnetism in this case of Y$_2$NiMnO$_6$ nanorods involves a noncollinear spin structure. Therefore, the ferroelectricity here should originate from the antisymmetric DM exchange coupling interaction (involving large Rashba spin-orbit coupling) confined within the lower dimensional surface region of the nanorods identified by the TEM experiments. The exchange striction interaction may not play any significant role here. 
This issue will be taken up further by using first-principles calculations separately. \begin{figure}[ht!] \begin{center} \subfigure[]{\includegraphics[scale=0.25]{Fig7a.eps}} \subfigure[]{\includegraphics[scale=0.25]{Fig7b_new.eps}} \end{center} \caption{(a) The remanent hysteresis loops - measured at room temperature - under different magnetic fields; inset shows the schematic of the sample-electrode configuration used for the measurement of remanent hysteresis loops; (b) magnetic field dependence of the remanent polarization $P_R$ at room temperature. } \end{figure} We finally measured the remanent ferroelectric hysteresis loops at room temperature under different magnetic fields 0, 5, 10, and 15 kOe [Fig. 7(a)]. The loops have been measured by using an involved protocol which sends out fourteen voltage pulses to switch the domains and measure both the switchable and nonswitchable components of the polarization. Elimination of the contribution of nonswitchable component from the overall polarization yields the switchable remanent polarization. The details of the protocol and its underlying physics have been described in Ref. [28]. The loop shape is consistent with that observed by others for different ferroelectric compounds \cite{Scott}. Interestingly, the remanent polarization $P_R$ is found to decrease by $\sim$80\% under $\sim$15 kOe field at room temperature [Fig. 7(b)]. The Figure 7(a) inset shows the typical sample and electrode configuration. The decrease in $P_R$ under 0-15 kOe field indicates strong magnetoelectric coupling in nanorods of YNMO at room temperature. Such strong coupling is expected in cases where ferroelectricity originates from magnetism and the magnetic structure - including the magnetic anisotropy - changes under a magnetic field. 
Because of the lower dimension at the surface, large Rashba spin-orbit coupling is expected \cite{Fert} which, in turn, could stabilize the DM exchange interaction across Mn$^{3+}$-O$^{2-}$-Mn$^{3+}$ among the other possible superexchange and double exchange interactions. A distinct magnetic structure at the surface has been observed earlier in other systems as well \cite{Langridge}. Since the surface ferroelectricity originates here from the magnetic structure, the ferroelectric polarization too should change because of the change in the magnetic anisotropy and/or structure under field. It turns out that the change in the magnetic structure and/or switch in anisotropy (quantified by a switch or rotation of the DM vector) under field in the individual nanorods leads to a decrease in the overall polarization when summed over the entire ensemble of nanorods studied. Of course, more detailed experimental as well as theoretical work needs to be done to unravel the surface magnetic structure and its change under such a moderate magnetic field of 0-15 kOe at room temperature. The interesting results presented in this paper on nanorods of the double perovskite Y$_2$NiMnO$_6$ should trigger deeper investigation. \section{Summary} In summary, we observed surface multiferroicity - magnetism, ferroelectricity, and significantly large magnetoelectric coupling - at room temperature in nanorods of the double perovskite Y$_2$NiMnO$_6$ compound, where bulk multiferroicity is observed only below $\sim$70 K. The surface magnetism has been probed by global magnetic measurements as well as by imaging with magnetic force microscopy. It is found to comprise both ferromagnetic and antiferromagnetic domains. The Dzyaloshinskii-Moriya exchange coupling interaction appears to stabilize and yield finite remanent ferroelectric polarization. 
Large magnetoelectric coupling, observed in this system, should trigger fresh research on other such candidate nanosized compounds for opening a new pathway of inducing room temperature surface multiferroicity even if its bulk counterpart either does not exist or exists only at low temperature. \begin{center} $\textbf{ACKNOWLEDGMENTS}$ \end{center} Two of the authors (S.M. and A.S.) acknowledge support (INSPIRE fellowship) from the Department of Science and Technology, Government of India, during this work.
\section{Renewal theory} \label{sec:renewaltheory} \begin{figure}[tp] \centering \includegraphics[width=\linewidth]{fig_1.pdf} \caption{Schematic of the run-and-tumble motion of a particle with mean run and tumbling times, $\tau_R$ and $\tau_T$, respectively. The particle moves at velocity $v$ during the run phase. $P_R$ and $P_T$ are the probabilities of the swimmer to be in a run or tumbling phase. Further, $T$ and $R$ denote the probabilities to start tumbling or running, respectively.} \label{fig:schematic} \end{figure} We consider a model of an RT bacterium alternating between persistent runs in quasi-straight lines and finite-duration tumbles during which the cells fully randomize their directions~\cite{Berg:1972,Schnitzer:1993,celani2010bacterial,chatterjee2011chemotaxis,Angelani:2013}. The probability to find a bacterium displaced by a distance $\vec{r}$ after a lag time $\tau$ is $P(\vec{r},\tau)=P_R(\vec{r}, \tau)+ P_T(\vec{r}, \tau)$, where $P_R(\vec{r},\tau)$ and $P_T(\vec{r},\tau)$ correspond to the probabilities to be at position $\vec{r}$ after a lag time $\tau$ and to be in a running or tumbling phase, respectively. The ISF of non-interacting bacteria is then obtained via a Fourier transform: $f_{RT}(\vec{k},\tau)= \int \!\diff^3 r \exp(-\imath \, \vec{k}\cdot\vec{r})P(\vec{r},\tau)$. We denote by $\varphi_{R}(\tau)$ and $\varphi_{T}(\tau)$ the distributions of the durations of the run and tumbling phases, respectively. To allow for generic distributions, which need not correspond to Markovian processes, we also introduce the probabilities that a bacterium \textit{starts} running or tumbling at displacement $\vec{r}$ and lag time $\tau$, which we denote by $R(\vec{r},\tau)$ and $T(\vec{r},\tau)$, respectively. Finally, the propagators $\mathbb{P}_R(\vec{r},\tau)$ and $\mathbb{P}_T(\vec{r},\tau)$ measure the probability that a bacterium travels a distance $\vec{r}$ during a time $\tau$ in a running or a tumbling phase, respectively. 
To compute the ISF, we describe the RT dynamics as a renewal process~\cite{Feller:1971,Mendez:2014,Zaburdaev:2015} for which $P_R(\vec{r},\tau)$ satisfies \begin{equation} P_R=P_R^0+\int_0^\tau\!\! \diff t\!\int\!\diff^3 \ell \ R(\vec{r}-\boldsymbol{\ell}, \tau-t)\varphi_R^0(t)\mathbb{P}_R(\boldsymbol\ell,t), \label{eq:prob_PR} \end{equation} where $\varphi^0_R(t)=\int_t^\infty\!\diff t' \ \varphi_R(t')$ is the probability that the run time exceeds $t$. Further, we denote the probability that the bacterium arrives in $\vec{r}$ at time $\tau$ without having tumbled in $[0,\tau]$ by $P^0_R(\vec{r},\tau) := p_R \mathbb{P}_R(\vec{r},\tau)\int_{\tau}^\infty \diff t \, \varphi_R(t)(t-\tau)/\tau_R$. The probability depends on the fraction of time the cell spends running, $p_R=\tau_R/(\tau_R+\tau_T)$, and on the average times spent running or tumbling, $\tau_{R,T}=\int_0^\infty\diff t \, t\varphi_{R,T}(t)$. Equation~\eqref{eq:prob_PR} states that the probability to be at $\vec{r}$ at time $\tau$ is the sum of the probabilities of arriving in $\vec{r}$ without tumbling in $[0,\tau]$, $P_R^0$, and with at least one tumble. In the second case, the last tumble takes place at arbitrary displacements $\vec{r}-\boldsymbol{\ell}$ and lag times $\tau-t$, which should be summed over. Similarly, the probability that the bacterium starts a new run at displacement $\vec{r}$ and lag time $\tau$, $R(\vec{r},\tau)$, takes into account the possibility that this is the first run or that a run has already finished at $\tau-t$ and $\vec{r}-\boldsymbol\ell$: \begin{equation} R=R^1+\int_0^\tau\! \!\diff t\!\int\!\diff^3\ell \ T(\vec{r}-\boldsymbol\ell,\tau-t)\varphi_T(t)\mathbb{P}_T(\boldsymbol\ell,t)\;. \label{eq:prob_R} \end{equation} Here, $R^1(\vec{r},\tau):= (1-p_R)\mathbb{P}_T(\vec{r},\tau)\int_\tau^\infty \diff t \, \varphi_T(t)/\tau_T$ is the probability of starting the first run in $\vec{r}$ at time $\tau$. 
By swapping $R$ and $T$ everywhere in Eqs.~\eqref{eq:prob_PR}-\eqref{eq:prob_R} we obtain two other (formally identical) renewal equations for the probabilities $P_T(\vec{r},\tau)$ and $T(\vec{r},\tau)$: \begin{align} P_T&\!=\!P_T^0 + \int_0^\tau\!\! \diff t\!\int\!\diff^3 \ell \ T(\vec{r}-\boldsymbol{\ell}, \tau-t)\varphi_T^0(t)\mathbb{P}_T(\boldsymbol\ell,t), \label{eq:prob_PT}\\ T&\!=\!T^1+\int_0^\tau \!\!\diff t\!\int\!\diff^3\ell \ R(\vec{r}-\boldsymbol\ell,\tau-t)\varphi_R(t)\mathbb{P}_R(\boldsymbol\ell,t). \label{eq:prob_T} \end{align} An RT dynamical model is then entirely determined by the propagators $\mathbb{P}_{R,T}$ and by the choice of the reorientation process specified by the distributions $\varphi_{R,T}$, from which $P_{R,T}^0$, $R^1$, and $T^1$ follow. Since the renewal equations couple all positions and times, they are hard to solve in $\mathbf{r}$-space. Exploiting the convolution theorem, a Fourier transform yields a set of equations that are decoupled in Fourier space. Thanks to the isotropy of the system, they depend only on the wave number $k=|\vec{k}|$ and can be solved for each $k$ separately. Eqs.~\eqref{eq:prob_PR}-\eqref{eq:prob_T} lead to: \begin{align} P_R(k,\tau)&\!=\!P_R^0(k,\tau)\!+\!\int_0^\tau\!\! \diff t \ R(k, \tau\!-\!t)\varphi_R^0(t)\mathbb{P}_R(k,t),\label{eq:renewalFT1}\\ R(k,\tau)&\!=\!R^1(k,\tau)\!+\!\int_0^\tau\! \! \diff t \ T(k,\tau\!-\!t)\varphi_T(t)\mathbb{P}_T(k,t),\label{eq:renewalFT2} \\ P_T(k,\tau)&\!=\! P^0_T(k,\tau)\!+\!\int_0^\tau\!\!\diff t \ T(k,\tau\!-\!t)\varphi^0_T(t)\mathbb{P}_T(k,t),\label{eq:renewalFT3}\\ T(k,\tau) &\!=\!T^1(k,\tau)\!+\!\int_0^\tau\!\!\diff t \ R(k,\tau\!-\!t)\varphi_R(t)\mathbb{P}_R(k,t).\label{eq:renewalFT4} \end{align} Once a given RT dynamical model is chosen, Eqs.~\eqref{eq:renewalFT1}-\eqref{eq:renewalFT4} permit numerical evaluation of the ISF for RT particles, $f_{RT}(k,\tau)=P_R(k,\tau)+P_T(k,\tau)$. 
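To make the time stepping of Eqs.~\eqref{eq:renewalFT1}-\eqref{eq:renewalFT4} concrete, here is a minimal sketch for exponential run and tumbling time distributions and a single swimming speed (no speed distribution). The first-order rectangle rule for the convolutions and all parameter values are illustrative choices, not the production scheme used for the figures:

```python
import math

# Illustrative parameters (units: s, um/s, um^2/s, 1/um)
tau_r, tau_t = 1.0, 0.1      # mean run and tumbling times
v, diff = 15.0, 0.3          # swimming speed and thermal diffusivity
k = 0.5                      # wave number
dt, n_steps = 0.003, 1000    # time grid: tau up to 3 s
p_run = tau_r / (tau_r + tau_t)

def sinc(x):
    return 1.0 if x == 0.0 else math.sin(x) / x

# Propagators and run/tumble time distributions on the grid
tau = [n * dt for n in range(n_steps + 1)]
pp_r = [math.exp(-diff * k * k * t) * sinc(k * v * t) for t in tau]
pp_t = [math.exp(-diff * k * k * t) for t in tau]
phi_r = [math.exp(-t / tau_r) / tau_r for t in tau]   # varphi_R
phi_t = [math.exp(-t / tau_t) / tau_t for t in tau]   # varphi_T
phi_r0 = [math.exp(-t / tau_r) for t in tau]          # survival functions
phi_t0 = [math.exp(-t / tau_t) for t in tau]

# Inhomogeneous terms P_R^0, P_T^0, R^1, T^1 (exponential distributions)
p_r0 = [p_run * pp_r[n] * phi_r0[n] for n in range(n_steps + 1)]
p_t0 = [(1 - p_run) * pp_t[n] * phi_t0[n] for n in range(n_steps + 1)]
r1 = [(1 - p_run) * pp_t[n] * phi_t0[n] / tau_t for n in range(n_steps + 1)]
t1 = [p_run * pp_r[n] * phi_r0[n] / tau_r for n in range(n_steps + 1)]

# Time stepping: R and T at step n only need each other's past values
big_r, big_t, isf = [], [], []
for n in range(n_steps + 1):
    conv_r = sum(big_t[n - m] * phi_t[m] * pp_t[m] for m in range(1, n + 1))
    conv_t = sum(big_r[n - m] * phi_r[m] * pp_r[m] for m in range(1, n + 1))
    big_r.append(r1[n] + dt * conv_r)
    big_t.append(t1[n] + dt * conv_t)
    p_r = p_r0[n] + dt * sum(big_r[n - m] * phi_r0[m] * pp_r[m]
                             for m in range(1, n + 1))
    p_t = p_t0[n] + dt * sum(big_t[n - m] * phi_t0[m] * pp_t[m]
                             for m in range(1, n + 1))
    isf.append(p_r + p_t)
```

The sketch reproduces the exact normalization $f_{RT}(k,0)=P_R^0(k,0)+P_T^0(k,0)=p_R+(1-p_R)=1$ and the decay of the ISF toward zero; a production code would use a higher-order quadrature and loop over many wave numbers.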
\begin{figure*}[htp] \includegraphics[width = \linewidth]{fig_2.pdf} \caption{ISFs, $f_{RT}(k,\tau)$, for our RT model. (a-b) Intrinsic speed variability model for different fractions of run times $p_R=\tau_R/(\tau_R+\tau_T)$. In (a) we vary the tumbling time with parameter values $\tau_R=1$~s and $\tau_T = 0,0.1,0.5~\text{s}$ and in (b) we vary the run time, i.e., $\tau_T=0.1$~s and $\tau_R = 2,1$~s. The other motility parameters are $\bar v = 15\, \upmu\text{m}\text{s}^{-1}$, $\sigma_v=4.5\, \upmu\text{m}\text{s}^{-1}$, and $D=0.3\, \upmu\text{m}^2\text{s}^{-1}$. (c) Comparison of the ISF for the model with speed fluctuations at the single cell ($S$; solid line) and population ($P$; dashed line) level using identical parameters ($\tau_R=1$~s, $\tau_T = 0.1~\text{s}$, $\bar v = 15\, \upmu\text{m}\text{s}^{-1}$, $\sigma_v=4.5\, \upmu\text{m}\text{s}^{-1}$, and $D=0.3\, \upmu\text{m}^2\text{s}^{-1}$). \label{fig:ISF_suppl}} \end{figure*} Inspection of Eqs.~\eqref{eq:renewalFT1}-\eqref{eq:renewalFT4} suggests that analytical progress can be made in Laplace space, following~\cite{Angelani:2013}. 
In particular, a Laplace transform of Eqs.~\eqref{eq:renewalFT1}-\eqref{eq:renewalFT4} yields the propagators, $P_R$ and $P_T$, for arbitrary RT distributions, $\varphi_R$ and $\varphi_T$, and probabilities, $\mathbb{P}_R$ and $\mathbb{P}_T$: \begin{widetext} \begin{align} P_R(k,s) &= P_R^0(k,s)+\mathcal{L}\left[\varphi_R^0(\tau)\mathbb{P}_R(k,\tau)\right](s) \frac{R^1(k,s)+T^1(k,s)\mathcal{L}\left[\varphi_T(\tau) \mathbb{P}_T(k,\tau)\right](s)}{1- \mathcal{L}\left[\varphi_R(\tau) \mathbb{P}_R(k,\tau)\right](s)\mathcal{L}\left[\varphi_T(\tau) \mathbb{P}_T(k,\tau)\right](s)}, \label{eq:PR_Laplace_time}\\ P_T(k,s) &= P_T^0(k,s)+\mathcal{L}\left[\varphi_T^0(\tau)\mathbb{P}_T(k,\tau)\right](s) \frac{T^1(k,s)+R^1(k,s)\mathcal{L}\left[\varphi_R(\tau) \mathbb{P}_R(k,\tau)\right](s)}{1- \mathcal{L}\left[\varphi_T(\tau) \mathbb{P}_T(k,\tau)\right](s)\mathcal{L}\left[\varphi_R(\tau) \mathbb{P}_R(k,\tau)\right](s)}, \label{eq:PT_Laplace_time} \end{align} \end{widetext} where $\mathcal{L}[f(\tau)](s):= \int_0^\infty\diff \tau \exp(-s\tau)f(\tau)$ denotes the Laplace transform of a function $f(\tau)$. \subsection{Intrinsic speed variability} So far we have introduced the general framework of a renewal process switching between the running and tumbling phases. Let us now specify the time distributions $\varphi_{R,T}$ and the propagators $\mathbb{P}_{R,T}$ for RT particles. We first consider the case in which a bacterium changes its swim speed after each tumble, which we refer to as \emph{intrinsic speed variability}. This accounts for the fluctuations of the propulsion speed over time that have recently been reported experimentally~\cite{Turner:2016}. Alternatively, the distribution of swimming speed can be accounted for at the population level, leading to a different model discussed in Sec.~\ref{sec:population_variability}. 
For simplicity, we here consider exponential distributions for the run and tumbling times with $\varphi_{R,T}(t)= \exp(-t/\tau_{R,T})/\tau_{R,T}$, where $\tau_R$ and $\tau_T$ denote the mean run and tumbling durations, respectively. We note, however, that our formalism allows discussing more general distributions as well. We now discuss the propagators $\mathbb{P}_R$ and $\mathbb{P}_T$. Assuming that tumbling particles diffuse with diffusivity $D$, the corresponding propagator is given by \begin{equation}\label{eqn:PT_intrinsic} \mathbb{P}_T(k,\tau)=\exp(-Dk^2\tau). \end{equation} For a swimming particle with speed $v$ and thermal diffusion constant $D$, the propagator instead reads $\exp(-Dk^2\tau)\sin(vk\tau)/(vk\tau)$. Assuming that, after each tumble, the particle samples a new swimming speed from a distribution $p(v)$, the propagator of the swimming particles is \begin{equation} \mathbb{P}_R(k,\tau)=\int_0^\infty p(v)\exp(-Dk^2\tau)\frac{\sin(vk\tau)}{vk\tau}\diff v. \end{equation} We use the Schulz distribution, which is characterized by a mean velocity $\bar v$ and standard deviation $\sigma_v$~\cite{Martinez:2012}, \begin{equation} p(v)=\frac{v^Z}{\Gamma(Z+1)}\left(\frac{Z+1}{\bar v}\right)^{Z+1}e^{-(Z+1)v/\bar v}, \label{eqn:schultz} \end{equation} with $Z=\bar v^2/\sigma_v^2-1$. Then $\mathbb{P}_R(k,\tau)$ can be computed analytically as \begin{equation}\label{eqn:PR_intrinsic} \mathbb{P}_R(k,\tau)=e^{-Dk^2\tau}\left(\frac{Z+1}{Zk\bar v\tau}\right)\frac{\sin(Z\arctan\xi)}{(1+\xi^2)^{Z/2}}, \end{equation} with $\xi=k\bar v\tau/(Z+1)$. Using Eqs.~\eqref{eqn:PT_intrinsic} and \eqref{eqn:PR_intrinsic} as input, we can solve the integral equations~\eqref{eq:renewalFT1}-\eqref{eq:renewalFT4} numerically by time stepping $\tau$ for each wave number separately. This then leads to the ISF of the run-and-tumble particle, $f_{RT}(k,\tau)=P_R(k,\tau)+P_T(k,\tau)$. 
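The closed form in Eq.~\eqref{eqn:PR_intrinsic} can be checked numerically against a direct quadrature of the Schulz average; a small sketch (the parameter values are illustrative):

```python
import math

vbar, sigma_v = 15.0, 4.5        # mean speed and standard deviation (um/s)
diff, k, tau = 0.3, 1.0, 0.5     # diffusivity, wave number, lag time
z = vbar**2 / sigma_v**2 - 1.0   # Schulz parameter Z

def schulz(v):
    """Schulz speed distribution p(v)."""
    return (v**z / math.gamma(z + 1.0)) * ((z + 1.0) / vbar) ** (z + 1.0) \
        * math.exp(-(z + 1.0) * v / vbar)

# Direct quadrature of exp(-D k^2 tau) * <sin(k v tau)/(k v tau)>_{p(v)}
n, v_max = 20000, vbar + 10.0 * sigma_v
dv = v_max / n
num = 0.0
for i in range(1, n):  # integrand vanishes at v=0 (since Z>0) and at v_max
    v = i * dv
    num += schulz(v) * math.sin(k * v * tau) / (k * v * tau) * dv
num *= math.exp(-diff * k * k * tau)

# Closed-form expression of the Schulz-averaged propagator
xi = k * vbar * tau / (z + 1.0)
ana = math.exp(-diff * k * k * tau) * ((z + 1.0) / (z * k * vbar * tau)) \
    * math.sin(z * math.atan(xi)) / (1.0 + xi * xi) ** (z / 2.0)

print(num, ana)
```

The two values agree to numerical precision, since the integrand vanishes smoothly at both ends of the integration range.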
We note that the Laplace transform obtained from Eqs.~\eqref{eq:PR_Laplace_time} and \eqref{eq:PT_Laplace_time} involves hypergeometric functions, which makes its numerical inversion cumbersome and inefficient using Weeks' method~\cite{Weeks:1966,Weideman:1999}. \subsection{Speed variability at the population level}\label{sec:population_variability} Alternatively, one could consider a model where the speed $v$ of a given bacterium is constant, but where $v$ is distributed over the bacterial population. As we show below, this leads to a simpler expression in Laplace space, but does not account for temporal fluctuations of $v$ at the single-bacterium level. Again, the speed distribution $p(v)$ of the population is chosen to be a Schulz distribution. In particular, within the renewal framework we replace the propagator for the run phase by $\mathbb{P}_R(k,\tau)=\exp(-Dk ^2\tau) \sin(vk\tau)/(vk\tau)$. Then the ISF is obtained by post-averaging the ISF of a single cell over the speed distribution, $f_{RT}(k,\tau)=\int \diff v\,p(v)[P_R(k,\tau)+P_T(k,\tau)]$. The two models are not equivalent due to the non-linearity of Eqs.~\eqref{eq:PR_Laplace_time}-\eqref{eq:PT_Laplace_time} with respect to the propagators $\mathbb{P}_{R,T}$. For this model, the numerical evaluation of the ISF, $f_{RT}(k,\tau)$, is expensive within the renewal framework due to the final integration over $p(v)$. Fortunately, the Laplace transform, $f_{RT}(k,s)$, is simpler than in the intrinsic-speed-variability model. 
Substituting the propagators, $\mathbb{P}_R(k,\tau)$ and $\mathbb{P}_T(k,\tau)=\exp(-Dk ^2\tau)$, and the exponential RT distributions, $\varphi_R$ and $\varphi_T$, into Eqs.~\eqref{eq:PR_Laplace_time} and \eqref{eq:PT_Laplace_time}, the ISF in Laplace space for a population of non-interacting RT particles with a speed distribution reads: \begin{widetext} \begin{equation} f_{RT}(k,s)=\int_0^\infty \diff v\,p(v)\frac{kv\tau_T^2\tau_R+\tau_R(\tau_R+2\tau_T+\tau_T\tau_R(Dk^2+s))\arctan(kv\tau_R/(Dk^2\tau_R+1+s\tau_R))}{(\tau_R+\tau_T)[kv\tau_R(1+\tau_T Dk^2+\tau_Ts)- \arctan(kv\tau_R/(Dk^2\tau_R+1+s\tau_R))]}\;. \label{eq:ISF_RTP_const_speed} \end{equation} \end{widetext} Note that for exponentially distributed RT times, the ISF in Laplace space could also be obtained by generalizing the method introduced in Ref.~\cite{Martens:2012} for finite-duration tumbles. The time-domain ISF $f_{RT}(k,\tau)$ can be computed from Eq.~\eqref{eq:ISF_RTP_const_speed} using the standard Weeks' method~\cite{Weeks:1966,Weideman:1999}. Before discussing our fitting procedure and its validation using simulated data in Sec.~\ref{sec:parameters}, we first show below how the ISFs depend on the ingredients of the RT dynamics and the source of speed fluctuations. \subsection{Intermediate scattering functions} Fig.~\ref{fig:ISF_suppl}(a-b) shows the ISFs for the RT model with intrinsic speed variability evaluated for motility parameters measured for \emph{E. coli} (see figure caption). The calculated ISFs $f_{RT}(k,\tau)$ [Fig.~\ref{fig:ISF_suppl}(a)] show a clear evolution at short times and small length scales $\ell=2\pi/k$ as one varies the RT durations close to their estimated biological values of $\tau_T\simeq \SI{0.1}{\second}$ and $\tau_R\simeq \SI{1}{\second}$~\cite{Berg:1972}. 
For instantaneous tumbles, $\tau_T= 0$ ($p_R=1$), the ISFs display oscillations for large $k$, which are smeared out and disappear at times $\tau\gtrsim\tau_R$ and at small wave numbers $k\lesssim \SI{0.38}{\per\micro\meter}$, corresponding to a length scale $\ell\approx \SI{16.5}{\micro\meter}$, comparable with the persistence length $\ell_p=\langle v \rangle\tau_R=\SI{15}{\micro\meter}$ beyond which the motion becomes randomized by tumbles. As the tumble duration increases ($p_R$ decreases), the oscillations fade out until a hump develops ($\tau_T=\SI{0.5}{\second}$, $p_R\approx0.7$) due to the diffusive motion of tumbling bacteria at small~$\ell$. Fig.~\ref{fig:ISF_suppl}(b) shows the ISFs for a fixed tumble duration of $\tau_T=0.1$~s and varying run time $\tau_R=2,1$~s. As the run time increases, we observe stronger oscillations at short times $\tau\lesssim\tau_R$ and large wave numbers $k$, and a more rapidly decreasing ISF at small wave numbers, which indicates that the regime of effective diffusion emerges at larger length and time scales. This pattern of behavior suggests that, in principle, experimentally measured ISFs should contain enough information to characterize all features of the RT dynamics of {\it E.~coli}, including the tumbling statistics. \begin{figure*} \centering \includegraphics[width = \linewidth, keepaspectratio]{fig_3.pdf} \caption{(a) Snapshot of a RT simulation with $1\times$~magnification. (b-c) Validation of theoretical predictions with simulations for nine parameter sets with $\bar v = 17\, \upmu\text{m}\,\text{s}^{-1}$, $\sigma_v = 4.3\, \upmu\text{m}\,\text{s}^{-1}$, $D=0.3\, \upmu\text{m}^2\text{s}^{-1}$, $\alpha = 0.9$, varying the mean run and tumbling times, $\tau_R \in [0.5,1.5] \ \text{s}$ and $\tau_T \in [0.1,0.5] \ \text{s}$. (b) Parameter estimates obtained by fitting theoretical predictions of the ISF to agent-based simulations. The estimates are compared with the true parameters for the nine data sets. 
They were extracted from a global fit including wave numbers $k\in[0.15,2.21] \ \upmu\text{m}^{-1}$. Dark gray regions correspond to fitted parameters within $\pm10\%$ of the true values (light gray corresponds to $\pm20\%$). (c) ISF, $f(k,\tau)$, for $\tau_R=1.25$~s, $\tau_T=0.2$~s (data set 8) and different wave numbers $k$. Theoretical predictions and simulations correspond to lines and symbols, respectively. The ISFs are shifted along the $y$-axis and the gray dotted lines indicate $y=0$. } \label{fig:validation} \end{figure*} Finally, we compare the ISFs for both models, see Fig.~\ref{fig:ISF_suppl}(c). In particular, we observe that at short times and length scales the ISFs of both models are almost indistinguishable and hence the fingerprint of the speed variability on the ISFs is subtle. The effect, however, becomes visible in our theory at large length scales, corresponding to $k\lesssim \SI{0.38}{\per\micro\meter}$. Whether this small difference of the ISFs is measurable and identifiable in experiments will be discussed later. \section{Numerical protocol and validation on simulated data} \label{sec:parameters} We next present the numerical protocol that allows us to estimate the motility parameters of bacteria, such as their mean RT times and swimming speed, from measured ISFs. For the sake of completeness, we first recall below how ISFs are measured experimentally using differential dynamic microscopy (DDM). \subsection{Differential dynamic microscopy}\label{sec:ddm} DDM is a high-throughput method that provides quantitative information on 3D swimming microorganisms through their ISF, see Refs.~\cite{Cerbino:2008, Martinez:2012} for details.
Briefly, the differential image correlation function (DICF), $g(\vec{k},\tau)$, i.e., the power spectrum of the difference between pairs of images separated by time $\tau$, is obtained via $g(\vec{k}, \tau)=\left\langle\left|I(\vec{k}, t+\tau)-I(\vec{k}, t)\right|^2\right\rangle_t$, where $I(\vec{k},t)$ is the Fourier transform of the image $I(\vec{r},t)$ and $\langle\cdot\rangle_t$ denotes an average over time $t$. Under suitable imaging conditions and for isotropic motion, the DICF is related to the ISF \cite{Wilson:2011,Martinez:2012,Reufer2012}, $f(k,\tau)$, via \begin{align} g(k,\tau) =\left \langle g(\vec{k},\tau)\right\rangle_{\vec{k}}= A(k)\left[1-f(k,\tau)\right]+B(k) \end{align} with $\left\langle\cdot\right\rangle_\vec{k}$ denoting the average over $\vec{k}$, and $A(k)$ and $B(k)$ the signal amplitude and instrumental noise, respectively. These coefficients are obtained from the plateaus of $g(k,\tau)$ at long and short times, where the ISF approaches $f(k,\tau\to \infty)\rightarrow 0$ and $f(k,\tau\to 0)\rightarrow 1$, respectively. \subsection{Fitting procedure}\label{sec:fitting} To reliably extract quantitative information from the measured ISFs using our renewal theory, we implement a fitting procedure based on the minimization of the squared errors using a Nelder-Mead optimization algorithm~\cite{Nelder:1965}. We apply a multi-start fitting analysis, where several fits are obtained for various initial values and the parameter set yielding the smallest error is chosen. In most cases, several initial values provided the same result, which strengthens the reliability of our procedure. We perform a global fit including data for several wave numbers simultaneously. Using one dataset, we tested several wave number ranges and found that the most adequate range includes length scales of the order of the cell body, up to length scales resolving the randomization of the swimming direction, i.e., $k\ell \lesssim 2\pi$.
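The multi-start least-squares strategy can be sketched as follows. This is a toy illustration, not the paper's fitting code: we use a bare-bones Nelder-Mead implementation and a simplified two-parameter model $f(k,\tau)=a\,e^{-5k^2\tau}+(1-a)\,e^{-D_m k^2\tau}$ standing in for the full renewal-theory ISF.

```python
import math, random

def nelder_mead(f, x0, step=0.1, iters=400):
    # Bare-bones Nelder-Mead simplex minimization (reflection, expansion,
    # inside contraction, shrink) with standard coefficients.
    n = len(x0)
    pts = [list(x0)] + [[x0[j] + (step if j == i else 0.0) for j in range(n)]
                        for i in range(n)]
    simplex = [(f(p), p) for p in pts]
    for _ in range(iters):
        simplex.sort(key=lambda t: t[0])
        cen = [sum(p[j] for _, p in simplex[:-1]) / n for j in range(n)]
        fw, w = simplex[-1]
        refl = [2.0 * cen[j] - w[j] for j in range(n)]
        fr = f(refl)
        if fr < simplex[0][0]:                       # try expanding further
            exp_ = [3.0 * cen[j] - 2.0 * w[j] for j in range(n)]
            fe = f(exp_)
            simplex[-1] = (fe, exp_) if fe < fr else (fr, refl)
        elif fr < simplex[-2][0]:                    # accept reflection
            simplex[-1] = (fr, refl)
        else:                                        # contract toward centroid
            con = [0.5 * (cen[j] + w[j]) for j in range(n)]
            fc = f(con)
            if fc < fw:
                simplex[-1] = (fc, con)
            else:                                    # shrink toward best point
                fb, b = simplex[0]
                simplex = [(fb, b)] + [
                    (f(q), q) for q in
                    [[0.5 * (b[j] + p[j]) for j in range(n)]
                     for _, p in simplex[1:]]]
    return min(simplex, key=lambda t: t[0])

# Toy two-parameter "ISF" (hypothetical stand-in for the renewal-theory model)
def toy_isf(a, Dm, k, tau):
    return a * math.exp(-5.0 * k * k * tau) + (1 - a) * math.exp(-Dm * k * k * tau)

ks, taus = [0.5, 1.0, 2.0], [0.1 * i for i in range(1, 21)]
data = {(k, t): toy_isf(0.9, 0.3, k, t) for k in ks for t in taus}

def sse(params):  # global fit: squared errors over all wave numbers at once
    a, Dm = params
    return sum((toy_isf(a, Dm, k, t) - y) ** 2 for (k, t), y in data.items())

random.seed(1)
starts = [[random.uniform(0.5, 1.0), random.uniform(0.05, 0.6)] for _ in range(5)]
best = min((nelder_mead(sse, s) for s in starts), key=lambda t: t[0])
a_fit, D_fit = best[1]
```

In practice one would replace `toy_isf` by the numerically inverted Eq.~\eqref{eq:ISF_RTP_const_speed}; the multi-start loop and the global (all-$k$) error function are the essential ingredients.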
The parameter estimation method has been validated with simulations (see Section~\ref{sec:validation}). \subsection{Validation of the fitting procedure}\label{sec:validation} Before tackling the experimental data, we have validated our parameter estimation method with computer simulations (see Appendix~\ref{appendix:sim}). To do so, we consider exponentially distributed RT times with different $\tau_R$ and $\tau_T$, which are close to those reported previously~\cite{Berg:1972}. In particular, we set $\tau_R\in[0.5,1.5]\,$s and $\tau_T\in[0.1, 0.5]\,$s, corresponding to $p_R = \tau_R/(\tau_R+\tau_T) \in [0.67,0.91]$. We further choose values for the remaining motility parameters, including the mean velocity, $\bar v$, its standard deviation, $\sigma_v$, and the translational diffusivity, $D$, that are typical for {\it E.~coli}\ suspensions~\cite{Wilson:2011,Martinez:2012}. Following experimental findings \cite{Kurzthaler:2022}, we add a fraction $1-\alpha$ of non-motile cells that undergo Brownian motion with diffusivity $D$ in the simulations. Thus, the ISF obtained from simulated data should follow \begin{equation} f(k,\tau)=\alpha f_{RT}(k,\tau)+(1-\alpha)e^{-Dk^2\tau}\;. \end{equation} First, we perform simulations of the intrinsic-speed-variability model. We employ a global fitting procedure (as outlined in Section~\ref{sec:fitting}) and simultaneously include wave numbers $k\in[0.15,2.21]\,\upmu$m$^{-1}$. Fitting our renewal theory to the ISF extracted from simulated data of particles with intrinsic speed variability reveals that our fitting protocol reliably reproduces the true parameter values. In particular, most of the fitted parameters lie within $\pm10\%$ error with respect to the true values [Fig.~\ref{fig:validation}(b)]. Figure~\ref{fig:validation}(c) shows excellent agreement between the simulated ISF with mean run and tumbling times, $\tau_R=1.25\,$s and $\tau_T=0.2\,$s (data set 8), and the theoretical predictions obtained from the fitting procedure.
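The agent-based estimate of the ISF can be illustrated with a stripped-down sketch (ours, with simplifying assumptions: a single fixed speed, instantaneous tumbles, no translational diffusion, no non-motile fraction), using the isotropic 3D average $f(k,\tau)=\langle \sin(k\Delta r)/(k\Delta r)\rangle$ over simulated displacements.

```python
import math, random

def random_dir():
    # Uniform direction on the unit sphere
    u = random.uniform(-1.0, 1.0)
    phi = random.uniform(0.0, 2.0 * math.pi)
    s = math.sqrt(1.0 - u * u)
    return (s * math.cos(phi), s * math.sin(phi), u)

def rt_displacement(v, tau_R, lag):
    # One run-and-tumble trajectory over time `lag`: straight runs of
    # exponentially distributed duration, instantaneous tumbles to a
    # fresh random direction.
    x = [0.0, 0.0, 0.0]
    t = 0.0
    while t < lag:
        run = random.expovariate(1.0 / tau_R)
        dt = min(run, lag - t)
        d = random_dir()
        for i in range(3):
            x[i] += v * dt * d[i]
        t += run
    return math.sqrt(sum(c * c for c in x))

def isf(k, lag, v=17.0, tau_R=1.0, n=2000):
    # Isotropic average of exp(i k.dr) reduces to sin(k dr)/(k dr)
    acc = 0.0
    for _ in range(n):
        dr = rt_displacement(v, tau_R, lag)
        acc += math.sin(k * dr) / (k * dr) if dr > 0 else 1.0
    return acc / n

random.seed(0)
short, long_ = isf(1.0, 0.01), isf(1.0, 3.0)  # ballistic vs randomized regime
```

At short lags the motion is ballistic and the ISF stays close to $1$; at lags of a few $\tau_R$ the direction has been randomized and the ISF decays toward zero, as in Fig.~\ref{fig:validation}(c). The full simulations additionally include finite tumble durations, speed variability, diffusion, and the non-motile fraction.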
We also perform simulations of a mixture of particles with different speeds, fixing each particle's speed during the simulation. We use the inverse Laplace transform of Eq.~\eqref{eq:ISF_RTP_const_speed} to calculate the ISF. Including a fraction $1-\alpha$ of non-motile cells, we follow the same global fitting procedure as introduced in Section~\ref{sec:fitting} to fit the data. We find that the fitted parameters again lie within $\pm10\%$ error with respect to the true values [Fig.~\ref{fig:population}(a)]. \begin{figure*} \centering \includegraphics[width = \linewidth, keepaspectratio]{fig_4.pdf} \caption{Parameter estimates obtained by fitting theoretical predictions of the ISF to agent-based simulations with speed variability at the population level. Parameters are the same as in Fig.~\ref{fig:validation}: $\bar v = 17\, \upmu\text{m\,s}^{-1}$, $\sigma_v = 4.3\, \upmu\text{m\,s}^{-1}$, $D=0.3\, \upmu\text{m}^2\text{s}^{-1}$, $\alpha = 0.9$. The mean run and tumbling times are varied within $\tau_R \in [0.5,1.5] \ \text{s}$ and $\tau_T \in [0.1,0.5] \ \text{s}$. (a) Fitting in lag time $\tau$ with the numerical inverse Laplace transform of the theoretical ISF. (b-c) Fitting in the Laplace variable $s$ using the numerical Laplace transform of the data, with $s$ ranging from (b) $1/\tau_{\rm max}$ to $1/\tau_{\rm min}$ and (c) $10/\tau_{\rm max}$ to $1/\tau_{\rm min}$. The estimates are compared with their true values for the nine data sets. Outliers with an error such that (fit/true$-1)>1$ are not shown. We used a global fit including wave numbers $k\in[0.15,2.21] \ \upmu\text{m}^{-1}$. Dark gray regions correspond to fitted parameters within $\pm10\%$ error of the true values (light gray corresponds to $\pm20\%$).
} \label{fig:population} \end{figure*} Note that fitting the expression~\eqref{eq:ISF_RTP_const_speed} directly in Fourier-Laplace space over the full range $s\in[1/\tau_\text{max}, 1/\tau_\text{min}]$ leads to large, systematic errors for most parameters [Fig.~\ref{fig:population}(b)]. These errors are mainly due to the loss of numerical precision during integration of the discrete data to calculate their Laplace transform. Using a smaller range $s\in[10/\tau_\text{max}, 1/\tau_\text{min}]$ [Fig.~\ref{fig:population}(c)] slightly improves the fitting results but does not lead to satisfactory estimates, especially for the tumbling and running durations. We further note that while an optimized $s$-range may lead to a satisfactory fit in Laplace space, it requires \textit{a priori} knowledge of the swimming parameters, which makes this procedure unsatisfactory. This is a major issue for the analysis of experimental data, which probably explains why DDM has not been used so far to characterize RT dynamics, despite the explicit expressions for the propagators in Fourier/Laplace space~\cite{Angelani:2013,Martens:2012}. For simulated as well as experimental data, we found that fitting in the time domain via the renewal theory was always more efficient and reliable. \section{Experiments}\label{sec:exp} We now demonstrate that our numerical protocol can indeed be used to characterize experimental data quantitatively. To do so, we use the data on the wild-type AB1157 \textit{E. coli} strain presented in our joint work~\cite{Kurzthaler:2022}. {In short, these data were obtained by imaging swimming cells in sealed capillaries on a fully-automated inverted bright-field microscope with an sCMOS camera. To characterize the RT dynamics we require access to length scales larger than the cells' persistence length, $\gtrsim \ell_p$, in 3D. Therefore, a large depth of field at low $k$ is needed to ensure that bacteria remain in view over large distances in all directions.
To measure the dynamics at all relevant length scales, we consecutively recorded movies at $2\times$ and $10\times$ magnifications to extract the ISF for $k<0.9\, \upmu$m$^{-1}$ and $k\geq 0.9\, \upmu$m$^{-1}$, respectively, using standard DDM procedures \cite{Wilson:2011,Martinez:2012}. We refer to the Supplemental Material of Ref.~\cite{Kurzthaler:2022} for more details on the experimental procedure. Fitting the ISFs to our renewal theory using the numerical protocol described here yields RT motility parameters. } We have reported the fitting results from the intrinsic-speed-variability model in Fig.~4 of Ref.~\cite{Kurzthaler:2022}. Here we fit the same data to the theoretical predictions of the model that incorporates speed fluctuations at the bacterial population level, using the numerical inverse Laplace transform of Eq.~\eqref{eq:ISF_RTP_const_speed}. We obtain the following motility parameters: $\alpha=97\pm0.3\%$, $\bar v=16\pm0.2\,\si{\micro\meter\per\second}$, $\sigma_v=5.80\pm0.29\,\si{\micro\meter\per\second}$, $D=0.25\pm0.04\,\si{\square\micro\meter\per\second}$, $\tau_R=3.21\pm0.38\,\text{s}$, $\tau_T=0.50\pm0.05\,\text{s}$. The corresponding ISFs are shown in Fig.~\ref{fig:ISF_laplace} and display good agreement with the experimental data. The estimates for the motility parameters are largely consistent with those reported in Ref.~\cite{Kurzthaler:2022}, although they were obtained from fitting a slightly different model. We note that the fraction of running time, $\tau_R/(\tau_R+\tau_T)= 0.866$, agrees with the well-cited results from Berg and Brown~\cite{Berg:1972} and the fits in Ref.~\cite{Kurzthaler:2022} ($\tau_R/(\tau_R+\tau_T)=0.863$). This suggests that the origin of the speed variability is indistinguishable at the spatio-temporal scales measured in the experiments.
The run and tumbling times obtained from the model with speed variability at the population level are both slightly larger than those from the intrinsic-speed-variability model, with an uncertainty of $\sim 10\%$, which is also slightly larger than that of the intrinsic-speed-variability model ($\sim 5\%$ \cite{Kurzthaler:2022}). We note that in real biological systems, both types of fluctuations are expected; they do not lead to major differences as far as displacement statistics are concerned. We note that implementing the speed variability at the population level allows one to work with an explicit expression for $f(k,s)$, which, however, has to be inverted numerically. \begin{figure}[tp] \centering \includegraphics[width=\linewidth]{fig_5.pdf} \caption{ISFs for different wave numbers $k$. Symbols represent experimental results for \emph{E. coli} bacteria (same as in Fig.~3 of Ref.~\cite{Kurzthaler:2022}) and lines are the theoretical predictions obtained by considering speed variability at the population level. The data has been fitted to the numerical inverse of Eq.~\eqref{eq:ISF_RTP_const_speed} including wave numbers $k\in [0.04, 1.89]\,\si{\per\micro\meter}$. \label{fig:ISF_laplace}} \end{figure} \section{Summary and conclusion} \label{sec} In this paper, we developed a numerical protocol to quantitatively characterize the tumbling statistics of run-and-tumble bacteria from DDM measurements. First, we showed how to use the renewal theory to calculate the intermediate scattering function of run-and-tumble particles. We proposed two slightly different models, which account for the speed fluctuations at either the intrinsic or the population level. Then we demonstrated a robust protocol to extract parameters from experimental data. The protocol was validated using agent-based simulations and then applied to the experimental data of a wild-type \emph{E. coli} AB1157 strain, which was reported in an accompanying paper~\cite{Kurzthaler:2022}.
At the spatio-temporal scales of the experiment, the two models seem to be indistinguishable and the bacteria may in fact exhibit both intrinsic speed variations and speed variability at the population level. In~\cite{Kurzthaler:2022}, we also show how our method can be employed to characterize a transition between perpetual tumbling and smooth swimming in an engineered bacterial strain whose tumbling statistics is under the (now-quantitative) control of a chemical inducer. The framework of renewal processes is not limited to the RT motion of bacteria and can be extended to other multi-mode motility patterns, such as the `run-reverse(-flick)'~\cite{Taktikos:2013,Taute:2015, Thornton:2020} or `run-reverse-wrap'~\cite{Alirezaeizanjani:2020} motion. In future work, our method may thus allow for a quantitative characterization of a large variety of microorganisms. Furthermore, the numerical protocol proposed in this paper has vast potential applications, ranging from quantitatively studying the tactic response of individual cells to investigating complex collective bacterial organizations. In particular, to gain a complete picture of bacterial chemotaxis on the cell level, the regulation of the tumbling statistics due to the presence of spatially varying, external chemical fields~\cite{clark-pnas-2005, kafri-prl-2008} may be established experimentally by using spatially-resolved DDM \cite{Reufer2012}. The `run-and-tumble' motion has been proposed as a paradigmatic model not only for \emph{E. coli}, but also for many other microorganisms, such as \emph{Euglena gracilis}~\cite{tsang2018polygonal}. The latter can direct its motion in the presence of light sources and its phototaxis has rich features because the cell can sense both the intensity and the polarization of light \cite{yang:2021}. Our framework may allow for a quantitative description of this intricate behavior and the associated tumbling statistics.
Finally, the regulation of the tumbling statistics of engineered bacterial strains~\cite{Kurzthaler:2022,mckay:2017} may be exploited for creating complex self-organizations~\cite{Liu:2011,Curatolo:2020}. There, our high-throughput method could both help validate the design of the engineered strains and allow for their quantitative characterization. \begin{acknowledgments} This work was supported by the Austrian Science Fund (FWF) via P35580-N and the Erwin Schr{\"o}dinger fellowship (J4321-N27), the European Research Council Grant AdG-340877-PHYSAPS, the ANR grant Bactterns, the Shenzhen Peacock Team Project (KQTD2015033117210153), and the National Key Research and Development Program of China (2021YFA0910700). \end{acknowledgments} \section*{Appendix}
\section{Introduction} This paper is devoted to showing the existence and uniqueness of logarithmic models for any holomorphic foliation on $({\mathbb C}^n,0)$ of generalized hypersurface type. In the case $n=2$, this result has been obtained by N. Corral in \cite{Cor}. The main result is Theorem \ref{teo:main} in the last section of the paper. We state it as follows: \begin{theoremmain} Every generalized hypersurface on $({\mathbb C}^n,0)$ has a logarithmic model. \end{theoremmain} A germ $\mathcal L$ of singular codimension one foliation on $({\mathbb C}^n,0)$ is {\em logarithmic} when it is given by a closed logarithmic $1$-form $$ \eta=\sum_{i=1}^s\lambda_i\frac{df_i}{f_i},\quad f_i\in {\mathcal O}_{{\mathbb C}^n,0}. $$ In other words, the foliation $\mathcal L$ has the multivalued first integral $f_1^{\lambda_1}f_2^{\lambda_2}\cdots f_s^{\lambda_s}$. Up to reduction of singularities of the germ of hypersurface $$ H=(f_1f_2\cdots f_s=0), $$ the transform of $\eta$ is a global closed logarithmic $1$-form and the total transform of $H$ has normal crossings. In this situation all the local holonomies are linear in terms of the coordinates given by $H$. We can say that the holonomy of $\mathcal L$ is ``globally linearizable''. Of course, this picture needs to be specified, mainly by asking that we are not in a ``dicritical'' situation. Roughly speaking, a ``logarithmic model'' for a codimension one foliation $\mathcal F$ should be a logarithmic foliation $\mathcal L$ such that the local holonomies of $\mathcal L$ coincide with the linear parts of the local holonomies of $\mathcal F$. In this way, the logarithmic model is an object that can be considered as ``the linear part of the holonomy of $\mathcal F$" or, in some sense, a ``holonomic initial part'' of $\mathcal F$. The logarithmic models in ambient dimension two may be described in a more precise way.
We do it for foliations $\mathcal F$ on $({\mathbb C}^2,0)$ without saddle-nodes in their reduction of singularities (hidden saddle-nodes) and that are also ``non-dicritical'', in the sense that we encounter only invariant exceptional divisors in the sequence of blowing-ups desingularizing $\mathcal F$. We give the name {\em generalized curves} to such foliations, following a terminology that comes from the foundational paper of Camacho, Lins Neto and Sad \cite{Cam-LN-S}. In this situation, after desingularization, the foliation at a singular point is given by a local $1$-form $$ (\lambda+\cdots)ydx+(\mu+\cdots)xdy,\quad \lambda\mu\ne0,\;\lambda/\mu\notin {\mathbb Q}_{\leq 0}. $$ The quotient $-\lambda/\mu$ is the Camacho-Sad index of the foliation with respect to $y=0$ and it also determines the coefficient of the linear part of the holonomy. Up to multiplying the $1$-form by a scalar and adapting the coordinates, a local logarithmic foliation, having holonomy with the same linear part as $\mathcal F$, is locally given by $$ \lambda\frac{dx}{x}+\mu\frac{dy}{y}. $$ This is how we approach a germ of generalized curve $\mathcal F$ on $({\mathbb C}^2,0)$ by a logarithmic foliation $\mathcal L$: \begin{quote} A logarithmic foliation $\mathcal L$ is a {\em logarithmic model} for a generalized curve $\mathcal F$ on $({\mathbb C}^2,0)$ if it has the same invariant branches as $\mathcal F$ and the same Camacho-Sad indices after reduction of singularities. \end{quote} This provides a precise definition. One readily sees that the property is independent of the chosen reduction of singularities; indeed, in dimension two we have a well-defined minimal reduction of singularities and any other one is obtained from it by additional blowing-ups. In the paper \cite{Cor} there is a proof of the existence of logarithmic models in dimension two for any generalized curve.
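To make the relation between the index and the linear holonomy explicit, here is the standard computation (included for illustration, not taken from the paper): the leaves of $\lambda\,dx/x+\mu\,dy/y$ are the levels of $x^{\lambda}y^{\mu}$, and lifting the loop $x(\theta)=x_0e^{i\theta}$ inside $y=0$ to a nearby leaf gives

```latex
x^{\lambda}y^{\mu}=c
\;\Longrightarrow\;
y(\theta)=y(0)\,e^{-i(\lambda/\mu)\theta},
\qquad
h\colon y(0)\mapsto y(2\pi)=e^{-2\pi i\lambda/\mu}\,y(0),
```

so the holonomy of $y=0$ is exactly linear, with multiplier determined by the Camacho-Sad index $-\lambda/\mu$.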
Logarithmic models in dimension two have been particularly useful for describing the properties of the generic polar of a given foliation, because the main Newton polygon parts coincide for the foliation and the logarithmic model, see \cite{Cor}. Let us also note that some results in dimension two may be stated in the dicritical case \cite{Can-Co}; however, in this paper we always consider the non-dicritical situation. We have two possible ways for extending the concept of logarithmic models to higher dimension. The first one is to use reduction of singularities as in the two-dimensional case. Since we are considering generalized hypersurfaces, we know the existence of reduction of singularities for our foliations; more precisely, any reduction of singularities of the finite set of invariant hypersurfaces provides a reduction of singularities of the foliation \cite{Fer-M}. The second way is to perform two-dimensional tests. It is known that certain properties in algebraic geometry are tested by valuative criteria, for instance integral dependence or properness. In the theory of codimension one foliations, there are remarkable properties detected by testing with a two-dimensional map. The existence of a holomorphic first integral is one of them, as exhibited in the paper of Mattei-Moussu \cite{Mat-Mou}. The dicriticalness and the existence of hidden saddle-nodes are also properties of this kind: \begin{itemize} \item Dicriticalness: A codimension one foliation $\mathcal F$ on $({\mathbb C}^n,0)$ is {\em dicritical} if and only if there is a holomorphic map $\phi: ({\mathbb C}^2,0)\rightarrow ({\mathbb C}^n,0)$ such that $\phi^*{\mathcal F}=(dx=0)$ and the image of $y=0$ is invariant for $\mathcal F$. \item Existence of hidden saddle-nodes: A codimension one foliation $\mathcal F$ on $({\mathbb C}^n,0)$ has a {\em hidden saddle-node} if there is a holomorphic map $\phi: ({\mathbb C}^2,0)\rightarrow ({\mathbb C}^n,0)$ such that $\phi^*{\mathcal F}$ is a saddle-node.
\end{itemize} When the foliation $\mathcal F$ is non-dicritical and without hidden saddle-nodes, we say that $\mathcal F$ is a {\em generalized hypersurface}. In this context, we adopt the following definition of logarithmic model: \begin{quote} Let $\mathcal F$ be a generalized hypersurface and consider a logarithmic foliation $\mathcal L$, both on $({\mathbb C}^n,0)$. We say that $\mathcal L$ is a {\em logarithmic model for $\mathcal F$} if and only if $\phi^*{\mathcal L}$ is a logarithmic model for $\phi^*{\mathcal F}$, for any holomorphic map $\phi: ({\mathbb C}^2,0)\rightarrow ({\mathbb C}^n,0)$ such that $\phi^*{\mathcal F}$ exists. \end{quote} In this paper we show that these two approaches are confluent. The uniqueness of logarithmic models is a consequence of the same result in dimension two. We show the existence of logarithmic models for generalized hypersurfaces by working through a particular reduction of singularities of the foliation $\mathcal F$. From the technical viewpoint, we develop the theory of logarithmic models in terms of $\mathbb C$-divisors. We introduce the concept of {\em divisorial model} and we state the existence in the main technical result Theorem \ref{teo:main}. There is a relationship between $\mathbb C$-divisors and logarithmic foliations, which provides the bridge between the divisorial models and the logarithmic models, as explained in the last Section \ref{Logarithmic Models}. Consider a non-singular complex analytic space $M$. A {\em ${\mathbb C}$-divisor\/} $\mathcal D$ on $M$ is a formal finite sum $$ {\mathcal D}=\sum_{i=1}^s\lambda_i H_i, \quad 0\ne \lambda_i\in {\mathbb C}, $$ where the $H_i\subset M$ are hypersurfaces. The support of the divisor is the union of the hypersurfaces $H_i$. We can perform the usual operations with $\mathbb C$-divisors, in particular the pull-back $\phi^*{\mathcal D}$ under a holomorphic map $\phi:N\rightarrow M$, when the image is not locally contained in the support of ${\mathcal D}$.
Working locally, if we take a reduced equation $f_i=0$ of $H_i$, we can consider the closed logarithmic $1$-form $\eta$ given by $$ \eta=\sum_{i=1}^s\lambda_i\frac{df_i}{f_i}. $$ The logarithmic foliation induced by $\eta$ will be called {\em $\mathcal D$-logarithmic.} In dimension two, we give a proof of the existence of logarithmic models in terms of $\mathbb C$-divisors, that is, divisorial models, in a more explicit way than in the paper \cite{Cor}. When the foliation $\mathcal F$ is desingularized, at a singular point we have exactly two invariant curves, $\Gamma_1$ and $\Gamma_2$, given by the equations $\Gamma_1=(x_1=0)$ and $\Gamma_2=(x_2=0)$. We know that the foliation is given by a differential $1$-form $\omega$ as $$ \omega=(\lambda_1+\cdots)\frac{dx_1}{x_1}+ (\lambda_2+\cdots)\frac{dx_2}{x_2}, $$ where $-\lambda_i/\lambda_j$ are the Camacho-Sad indices. We say that the ${\mathbb C}$-divisor $$ {\mathcal D}=\lambda_1\Gamma_1+\lambda_2\Gamma_2, $$ is a divisorial model for $\mathcal F$. Of course, the logarithmic foliation $\mathcal L$ defined by $$ \eta=\lambda_1dx_1/x_1+\lambda_2dx_2/x_2 $$ fulfils the definition of being a logarithmic model for $\mathcal F$. We pass to the general case in dimension two through the stability under blowing-ups. More precisely, we recover the general definition of Camacho-Sad indices for foliations in dimension two (see \cite{Bru} and \cite{LNet}) and we establish a similar one for $\mathbb C$-divisors. Both are compatible with blowing-ups and in this way we obtain logarithmic models once we have proven the existence of divisorial models in Theorem \ref{th:existenciayunicidadendimensiondos}. The above arguments carry over to higher dimension, hence we have a definition of divisorial model that automatically gives a logarithmic foliation that is a logarithmic model.
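A minimal worked example (ours, for illustration): the foliation $\mathcal F$ with holomorphic first integral $x_1^{2}x_2^{3}$ is already in the above normal form, since

```latex
\frac{d(x_1^{2}x_2^{3})}{x_1^{2}x_2^{3}}
  =2\,\frac{dx_1}{x_1}+3\,\frac{dx_2}{x_2},
\qquad
{\mathcal D}=2\,\Gamma_1+3\,\Gamma_2,
\qquad
\eta=2\,\frac{dx_1}{x_1}+3\,\frac{dx_2}{x_2}.
```

Here the Camacho-Sad indices are $-2/3$ and $-3/2$, the divisorial model is $\mathcal D$, and the $\mathcal D$-logarithmic foliation coincides with $\mathcal F$ itself, as one expects for a foliation that is already logarithmic.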
In this way, the main difficulty in the paper is the proof of the existence of a divisorial model for any generalized hypersurface $\mathcal F$ on $({\mathbb C}^n,0)$. We state this result in Theorem \ref{teo:main} in Section \ref{Logarithmic Models For Generalized Hypersurfaces}. The first sections are devoted to presenting the theory of $\mathbb C$-divisors, the relationship between $\mathbb C$-divisors, closed logarithmic $1$-forms and logarithmic foliations, the dicriticalness condition for $\mathbb C$-divisors and foliations, the property of being a generalized hypersurface, the existence of logarithmic models in dimension two through the generalized Camacho-Sad indices, the reduction of singularities and the properties of generic equireduction and relative transversality that are useful in the final proofs. The proofs of the main results are given in Section \ref{Logarithmic Models For Generalized Hypersurfaces}. We first show the existence of a ${\mathbb C}$-divisor compatible with a given reduction of singularities in Theorem \ref{teo:pilogarithmicmodel} and finally we prove the main Theorem \ref{teo:main} on the existence of divisorial models. The whole paper has been developed with the idea that a logarithmic model is a foliation; however, from the technical viewpoint, we have stated and proved the results in terms of divisorial models. In the last Section \ref{Logarithmic Models}, we quickly summarize how the existence and uniqueness results on divisorial models are translated to logarithmic models. In this way, we obtain the proof of the main Theorem stated in this Introduction. \section{$\mathbb C$-Divisors} Let $M$ be a non-singular complex analytic variety. The space of {\em generalized divisors $\operatorname{Div}_{\mathbb C}(M)$}, also called {\em $\mathbb C$-divisors}, is defined to be the ${\mathbb C}$-vector space having as a basis the set of irreducible hypersurfaces of $M$.
They have been introduced in \cite{Can-Co} for the purpose of describing logarithmic models of foliations in ambient dimension two. Thus, a {\em $\mathbb C$-divisor ${\mathcal D}$ } in $M$ is a finite expression $$ {\mathcal D}=\sum_{H}\lambda_HH, $$ where $H$ runs over the irreducible hypersurfaces of $M$ and the coefficients $\lambda_H$ are complex numbers, such that only finitely many of them are nonzero. The {\em support $\operatorname{Supp}({\mathcal D})$ of $\mathcal D$} is the union of the $H$ such that $\lambda_H\ne0$. We say that two nonzero $\mathbb C$-divisors ${\mathcal D}_1, {\mathcal D}_2\in \operatorname{Div}_{\mathbb C}(M)$ with connected support are {\em projectively equivalent} if and only if there is a nonzero scalar $\lambda\in {\mathbb C}^*$ such that ${\mathcal D}_2=\lambda{\mathcal D}_1$. If the support is not connected, we say that they are projectively equivalent when the condition holds at each connected component of the support. Consider a function $f:M\rightarrow {\mathbb C}$ that is not constant on any connected component of $M$. As usual, we define the divisor $\operatorname{Div}(f)$ by $$ \operatorname{Div}(f)=\sum_H \mu_HH, $$ where $\mu_H\ne 0$ if and only if $H$ is an irreducible component of $f=0$ and $\mu_H$ is the multiplicity of a local reduced equation of $H$ as a factor of $f$. Let us consider a closed hypersurface $S$ of $M$, not necessarily irreducible. Let $S=\cup_{i=1}^s H_i$ be the decomposition of $S$ into a union of irreducible components. The divisor $\operatorname{Div}(S)$ is defined to be $$ \operatorname{Div}(S)=\sum_{i=1}^s H_i. $$ In particular, if ${\mathcal D}=\sum_H\lambda_H H$, we also have that ${\mathcal D}=\sum_H\lambda_H\operatorname{Div}(H)$. This simple remark allows us to define the restriction of a $\mathbb C$-divisor to an open set $U\subset M$, by means of the formula $$ {\mathcal D}\vert_U=\sum_H\lambda_H\operatorname{Div}(H\cap U).
$$ In this way, we can also interpret the germ ${\mathcal D}_p$ at a point $p\in M$ of a $\mathbb C$-divisor ${\mathcal D}$ in $M$ as being a ${\mathbb C}$-divisor on the germified space $(M,p)$. Of course, these are particular cases of the inverse image of a $\mathbb C$-divisor by a morphism, to be introduced below. \begin{remark} Most of the complex analytic varieties in this paper are germs over compact sets. In this case any hypersurface has only finitely many irreducible components. In any case, we consider only hypersurfaces with this property, even if we are in an analytic variety that is not necessarily a germ over a compact set. The reader will appreciate at each statement the limits of this implicit assumption. \end{remark} Consider a morphism $\phi: N\rightarrow M$ between connected non-singular complex analytic varieties and a hypersurface $S\subset M$. We say that $\phi$ is {\em $S$-transverse} if and only if the image of $\phi$ is not contained in $S$. In this case $\phi$ is $H$-transverse for any irreducible component $H$ of $S$ and the inverse image is a hypersurface $\phi^{-1}(H)\subset N$. When $\phi:N\rightarrow M$ is $H$-transverse, we define the $\mathbb C$-divisor $\phi^*(1\cdot H)$ of $N$ by the following property: for any point $q\in N$ the divisor $\phi^*(1\cdot H)$ germified at $q$ is equal to $ \operatorname{Div} (f\circ \phi) $, where $f=0$ is a local reduced equation of $H$ at $\phi(q)$. Consider a $\mathbb C$-divisor ${\mathcal D}=\sum_H\lambda_HH$. We say that the morphism $\phi:N\rightarrow M$ is {\em ${\mathcal D}$-transverse} if it is $S$-transverse, where $S =\operatorname{Supp}(\mathcal{D})$. Otherwise, we say that $\phi$ is {\em $\mathcal D$-invariant}. When $\phi$ is ${\mathcal D}$-transverse, the {\em inverse image} is defined by $$ \phi^{*}{\mathcal D}=\sum_H\lambda_H\phi^{*}(1\cdot H). $$ We are particularly interested in the case of blowing-ups $\pi:M'\rightarrow M$ with irreducible non-singular center $Y\subset M$.
The blowing-up $\pi$, being a surjective morphism, is ${\mathcal D}$-transverse for any $\mathbb C$-divisor ${\mathcal D}$. The inverse image $\pi^{*}{\mathcal D}$, or {\em transform of $\mathcal D$ by $\pi$}, is given by $$ \pi^{*}{\mathcal D}=\mu E+\sum_{H}\lambda_HH', \quad E=\pi^{-1}(Y), \quad \mu= \sum_{H}\nu_Y(H)\lambda_H, $$ where $H'$ are the strict transforms of the irreducible hypersurfaces $H\subset M$ and $\nu_Y(H)$ denotes the generic multiplicity of $H$ along $Y$. The blowing-up $\pi$ is said to be {\em $\mathcal D$-admissible} when $Y\subset \operatorname{Supp}({\mathcal D})$. This is equivalent to saying that $$ \sum_{H\subset \operatorname{Supp}({\mathcal D})}\nu_{Y}(H)\geq 1. $$ A ${\mathcal D}$-admissible blowing-up $\pi$ is called {\em ${\mathcal D}$-dicritical} when $ \sum_{H}\nu_Y(H)\lambda_H=0 $; that is, when the exceptional divisor $E$ is not contained in the support of $\pi^*{\mathcal D}$. \begin{remark} Let us recall that for any germ of function $f\in {\mathcal O}_{{\mathbb C}^n,0}$, the generic multiplicity $\nu_Y(f)$ along $Y\subset ({\mathbb C}^n,0)$ is defined to be the minimum of the multiplicity of $f$ at the points of $Y$ near the origin (the multiplicity is an upper semi-continuous function). The generic multiplicity $\nu_Y(H)$ is the generic multiplicity along $Y$ of a reduced germ $f$ such that $H$ is given by $f=0$. \end{remark} \begin{remark} \label{rk:transversemaps} Let us consider two morphisms $\phi_2:N_2\rightarrow N_1$ and $\phi_1:N_1\rightarrow M$ and a $\mathbb C$-divisor ${\mathcal D}$ on $M$. If $\phi_1\circ\phi_2$ is ${\mathcal D}$-transverse, then $\phi_1$ is ${\mathcal D}$-transverse, the morphism $\phi_2$ is $\phi_1^*{\mathcal D}$-transverse and $$ (\phi_1\circ\phi_2)^*{\mathcal D}=\phi_2^*(\phi_1^*{\mathcal D}). $$ The converse is not true. It is possible to have that $\phi_1$ is ${\mathcal D}$-transverse and $\phi_2$ is $\phi_1^*{\mathcal D}$-transverse, but $\phi_1\circ\phi_2$ is ${\mathcal D}$-invariant.
The typical example of this situation is the inclusion $$ \pi^{-1}(Y)\stackrel{\phi_2}{\subset} M'\stackrel{\phi_1}{\rightarrow }M, $$ where $\phi_1$ is a $\mathcal D$-dicritical blowing-up with center $Y$. The $\mathbb C$-divisor $\phi_2^*(\phi_1^*{\mathcal D})$ cannot be obtained directly from $\pi^{-1}(Y)\rightarrow M$ (this phenomenon is an essential fact for the transcendence of leaves of singular foliations studied in \cite{Can-L-M}). \end{remark} If no confusion arises, we write $ \pi:(M',{\mathcal D}')\rightarrow (M,{\mathcal D}) $ to denote a ${\mathcal D}$-transverse holomorphic map $\pi:M'\rightarrow M$, where ${\mathcal D}'=\pi^*{\mathcal D}$. The rest of this section is devoted to characterizing the dicriticalness of a $\mathbb C$-divisor. We take the following definition, which is inspired by the corresponding one for foliations: \begin{definition} Consider a $\mathbb C$-divisor $\mathcal D$ on a non-singular complex analytic variety $M$. We say that $\mathcal D$ is {\em dicritical at a point $p\in M$} if and only if there is a $\mathcal D$-transverse holomorphic map $ \phi:({\mathbb C}^2,0)\rightarrow M $ such that $$ \phi(0)=p,\quad \phi(y=0)\subset \operatorname{Supp} ({\mathcal D}),\quad \phi^*{\mathcal D}=0. $$ We say that $\mathcal D$ is {\em dicritical} if there is a point $p\in M$ such that it is dicritical at $p$. Accordingly, we say that $\mathcal D$ is {\em non-dicritical} if and only if it is not dicritical at any point $p\in M$ (in the case of germs $(M,K)$ we require the conditions only at the points $p\in K$). \end{definition} \begin{proposition} \label{pro:divdicrunblowinup} Consider a $\mathbb C$-divisor ${\mathcal D}$ on $M=({\mathbb C}^n,0)$ and a non-dicritical admissible blowing-up $ \pi:((M,\pi^{-1}(0)),{\mathcal D}')\rightarrow (({\mathbb C}^n,0),{\mathcal D}). $ Then, the $\mathbb C$-divisor ${\mathcal D}$ is dicritical if and only if there is a point $p'\in\pi^{-1}(0)$ such that ${\mathcal D}'$ is dicritical at $p'$. 
\end{proposition} \begin{proof} Let us assume that ${\mathcal D}'$ is dicritical at a point $p'\in \pi^{-1}(0)$. Then, there is a ${\mathcal D}'$-transverse map $\phi': ({\mathbb C}^2,0)\rightarrow (M,p')$ such that $${\phi'}^*{\mathcal D'}=0,\quad \phi'(y=0)\subset \operatorname{Supp} ({\mathcal D}'). $$ Since $\pi$ is non-dicritical, we have that $ \operatorname{Supp} ({\mathcal D}')=\pi^{-1} (\operatorname{Supp} ({\mathcal D})) $. This implies that $\phi=\pi\circ\phi'$ is also a ${\mathcal D}$-transverse map and, moreover, we have $$ \phi(y=0)\subset \operatorname{Supp} ({\mathcal D}), \quad \phi^*{\mathcal D}={\phi'}^*{\mathcal D}'=0. $$ Hence, the $\mathbb C$-divisor ${\mathcal D}$ is dicritical. Conversely, let us assume that ${\mathcal D}$ is dicritical. Consider a $\mathcal D$-transverse map $\phi: ({\mathbb C}^2,0)\rightarrow ({\mathbb C}^n,0)$, such that $\phi(y=0)\subset \operatorname{Supp}({\mathcal D})$ and $\phi^*{\mathcal D}=0$. In view of Proposition \ref{prop:appdos} in Appendix I, there is a morphism $$ \sigma: (N,\sigma^{-1}(0))\rightarrow ({\mathbb C}^2,0) $$ that is a composition of blowing-ups and a morphism $\psi: (N,\sigma^{-1}(0))\rightarrow (M,\pi^{-1}(0))$ such that $\pi\circ\psi=\phi\circ\sigma$. Note that $\phi\circ\sigma$ is $\mathcal D$-transverse, since $\phi$ is $\mathcal D$-transverse and $\sigma$ is a surjective map. By Remark \ref{rk:transversemaps}, we have that $\psi$ is $\pi^*{\mathcal D}$-transverse and $$ 0= \sigma^*(\phi^*{\mathcal D})=(\phi\circ\sigma)^*{\mathcal D}=(\pi\circ\psi)^*{\mathcal D}=\psi^*({\mathcal D}'). $$ Now, let $(\Gamma,q)\subset N$ be the strict transform of $y=0$ by $\sigma$. We have that $\pi(\psi(\Gamma))=\phi(y=0)\subset \operatorname{Supp}({\mathcal D})$. In other words, $$ \psi(\Gamma)\subset \pi^{-1}(\operatorname{Supp}({\mathcal D}))=\operatorname{Supp}({\mathcal D}'). 
$$ Select local coordinates $x',y'$ at $q$ such that $\Gamma=(y'=0)$ and let $$ \phi':(N,q)=({\mathbb C}^2,0)\rightarrow (M,p),\quad p=\psi(q), $$ be the map between germs induced by $\psi$. Thanks to $\phi'$, we see that ${\mathcal D}'$ is dicritical at $p$. \end{proof} The following corollary is a direct consequence of Proposition \ref{pro:divdicrunblowinup}: \begin{corollary} \label{dicriticidadexplosionnodicritica} Consider a morphism $ \pi:(M', {\mathcal D}')\rightarrow (M,{\mathcal D}) $ that is the composition of a sequence of non-dicritical admissible blowing-ups. Then, the $\mathbb C$-divisor $\mathcal D$ is dicritical in $M$ if and only if ${\mathcal D}'$ is dicritical in $M'$. \end{corollary} Now, we characterize the dicriticalness in terms of admissible blowing-ups. We start with the normal crossings case. We say that a $\mathbb C$-divisor $\mathcal D$ on a complex analytic variety $M$ has a {\em non-negative resonance} at a point $p\in \operatorname{Supp}({\mathcal D})$ if the germ of the divisor at $p$ is written $$ {\mathcal D}_p=\sum_{i=1}^s\lambda_i H_i,\quad \lambda_i\ne 0\mbox{ for } i=1,2,\ldots,s, $$ and there is $\mathbf{m}=(m_1,m_2,\ldots,m_s)\in {\mathbb Z}_{\geq 0}^s$, with $\mathbf{m}\ne\mathbf{0}$, such that \begin{equation} \label{eq:resonance} \sum_{i=1}^sm_i\lambda_i=0. \end{equation} \begin{lemma} \label{lema:normalcrossings} Let $(M,K)$ be a non-singular complex analytic variety that is a germ over a compact subset $K\subset M$. Consider a $\mathbb C$-divisor $\mathcal D$ in $(M,K)$ whose support has normal crossings. Assume that there is a point $p\in K\cap\operatorname{Supp}(\mathcal D)$ at which $\mathcal D$ has a non-negative resonance. 
Then, there are morphisms $ \pi':(M',{\mathcal D}')\rightarrow (M,{\mathcal D})$ and $\pi'':(M'',{\mathcal D}'')\rightarrow (M',{\mathcal D}')$, such that $\pi'$ is the composition of a sequence of non-dicritical admissible blowing-ups and $\pi''$ is a dicritical admissible blowing-up. \end{lemma} \begin{proof} This result, in another context, is proven in \cite{FDu}. Let us give a quick idea of a proof. Choose local coordinates $(x_1,x_2,\ldots,x_n)$ at $p$ such that $$H_i=(x_i=0), \quad i=1,2,\ldots,s. $$ Up to a reordering, we assume that $\prod_{i=1}^tm_i \ne 0$ and $m_i=0$ for $t+1\leq i\leq s$. We proceed by induction on the lexicographical invariant $(t,\delta)$, where $$ \delta=\min_{1\leq i<j\leq t}\{m_i+m_j\}. $$ Assume, up to a new reordering, that $\delta=m_1+m_2$ and $m_1\leq m_2$. Choose $Y=(x_1=x_2=0)$ as the center of blowing-up. The first chart of this blowing-up gives a morphism $$ \phi: ({\mathbb C}^n,0)\rightarrow ({\mathbb C}^n,0) $$ defined by the equations $x_1=x'_1, x_2=x'_1x'_2$ and $x_i=x'_i$ for $i=3,4,\ldots,n$. The transform $\phi^*{\mathcal D}$ is given at the origin of this chart by $$ \phi^*{\mathcal D}=(\lambda_1+\lambda_2)E+\sum_{i=2}^s\lambda_i H'_i, $$ where $E=(x'_1=0)$ and $H'_i=(x'_i=0)$ for $i=2,3,\ldots,s$. If $\lambda_1+\lambda_2=0$ we are done, since the blowing-up is then admissible and dicritical. Otherwise, we obtain a resonance $$ \mathbf{m}'=(m_1, m_2-m_1,m_3,\ldots,m_s), $$ since $m_1(\lambda_1+\lambda_2)+(m_2-m_1)\lambda_2+\sum_{i=3}^sm_i\lambda_i=\sum_{i=1}^sm_i\lambda_i=0$, and the new invariant $(t',\delta')$ is strictly smaller than $(t,\delta)$. Although presented in local coordinates, the above procedure is in fact a global one. This ends the proof. \end{proof} \begin{proposition} \label{pro:divdicritico} Let us consider a $\mathbb C$-divisor ${\mathcal D}$ on $M=({\mathbb C}^n,0)$. 
The $\mathbb C$-divisor $\mathcal D$ is dicritical if and only if there are morphisms $$ \pi':(M',{\mathcal D}')\rightarrow (M,{\mathcal D}),\quad \pi'':(M'',{\mathcal D}'')\rightarrow (M',{\mathcal D}'), $$ such that $\pi'$ is the composition of a sequence of non-dicritical admissible blowing-ups and $\pi''$ is a dicritical admissible blowing-up. \end{proposition} \begin{proof} Let us assume first the existence of $\pi',\pi''$ with the stated properties. Since $\pi'$ is a composition of non-dicritical admissible blowing-ups, we have that $$ E'\subset\operatorname{Supp}({\mathcal D}'), $$ where $E'$ is the exceptional divisor of $\pi'$. Let $Y\subset M'$ be the center of $\pi''$ and denote $\pi=\pi'\circ\pi''$. The exceptional divisor of $\pi$ is $E''={\tilde E}\cup D$, where $D={\pi''}^{-1}(Y)$ and $\tilde E$ is the strict transform of $E'$ by $\pi''$. Since $\pi''$ is dicritical, we have that $$ \tilde E\subset\operatorname{Supp}({\mathcal D}''),\quad D\not\subset\operatorname{Supp}({\mathcal D}''). $$ Take a point $p\in D\setminus\operatorname{Supp}({\mathcal D}'')$. Let us identify the germ $(M'',p)$ with $({\mathbb C}^n,0)$, with coordinates $x_1,x_2,\ldots,x_n$, where $D$ is locally given at $p$ by the equation $x_n=0$. Consider the morphism $$ \psi:({\mathbb C}^2,0)\rightarrow ({\mathbb C}^n,0)=(M'',p)\hookrightarrow M'' $$ defined by $x_1=x$, $x_n=y$ and $x_i=0$, for $2\leq i\leq n-1$. We have that $\psi(y=0)\subset D$ and $\operatorname{Im}(\psi)\not\subset D$. Since the support of ${\mathcal D}''$ is empty around $p$, we have that $\psi^*{\mathcal D}''=0$ and we also have $$ \operatorname{Im}(\psi)\not\subset D\cup\operatorname{Supp}({\mathcal D}'')=E''\cup \operatorname{Supp}({\mathcal D}''). $$ Noting that $\pi^{-1}(\operatorname{Supp}({\mathcal D}))=\tilde E\cup \operatorname{Supp}({\mathcal D}'')$, we conclude that $\phi$ is ${\mathcal D}$-transverse, where $\phi=\pi\circ \psi$. 
Then, we have $$ \phi^*{\mathcal D}=\psi^*(\pi^*{\mathcal D})=\psi^*({\mathcal D''})=0,\quad \phi(y=0)\subset \pi'(Y)\subset\operatorname{Supp}({\mathcal D}). $$ This implies that $\mathcal D$ is dicritical. Assume now that ${\mathcal D}$ is dicritical. Let us perform a Hironaka reduction of singularities of the support of $\mathcal D$ by means of admissible blowing-ups (see \cite{Aro-H-V, Hir}). If one of the blowing-ups in the reduction of singularities is dicritical, we are done; hence we can assume that none of them is. Thus, we have a morphism $$ \tilde\pi: ((\tilde M,\tilde\pi^{-1}(0)),\tilde{\mathcal D})\rightarrow (({\mathbb C}^n,0),{\mathcal D}) $$ that is a composition of non-dicritical admissible blowing-ups such that $\operatorname{Supp}({\tilde{\mathcal D}})$ has normal crossings. Now, in view of Lemma \ref{lema:normalcrossings}, it is enough to find a point $p$ in $\tilde\pi^{-1}(0)\cap \operatorname{Supp}(\tilde{\mathcal D})$ such that $\tilde{\mathcal D}$ has a non-negative resonance at $p$. By Proposition \ref{pro:divdicrunblowinup}, there is a point $p \in \tilde\pi^{-1}(0)$ such that $\tilde {\mathcal D}$ is dicritical at $p$. Then, there is a $\tilde{\mathcal D}$-transverse map $$ \tilde\phi: ({\mathbb C}^2,0)\rightarrow (\tilde M,p) $$ such that $\tilde\phi^*{\tilde{\mathcal D}}=0$ and $\tilde\phi(y=0)\subset \operatorname{Supp} (\tilde{\mathcal D})$. Let us identify $(\tilde M,p)$ with $({\mathbb C}^n,0)$ by means of a choice of local coordinates $x_1,x_2,\ldots,x_n$ at $p$ such that $$ \tilde{\mathcal D}_p=\sum_{i=1}^s\lambda_iH_i,\quad H_i=(x_i=0),\; i=1,2,\ldots,s. $$ Put $\tilde\phi_i=x_i\circ\tilde\phi$, for $i=1,2,\ldots, n$. We know that $\tilde\phi_i(0)=0$ for $i=1,2,\ldots,n$ and that $\tilde\phi_\ell\not\equiv 0$, for $1\leq\ell\leq s$. Let $\Gamma\subset ({\mathbb C}^2,0)$ be an irreducible component of $\tilde\phi_1=0$, that is, with $\nu_\Gamma(\tilde\phi_1)\geq 1$. 
The coefficient of $\Gamma$ in $\tilde\phi^*{\tilde{\mathcal D}}$ must vanish, that is, $$ \sum_{i=1}^s\lambda_i\nu_\Gamma(\tilde\phi_i)=0. $$ This is the desired non-negative resonance. \end{proof} \begin{remark} In other words, the $\mathbb C$-divisor $\mathcal D$ is dicritical if and only if there is a sequence of non-dicritical admissible blowing-ups that can be followed by a dicritical admissible blowing-up. \end{remark} The non-negative resonances characterize dicriticalness in the case of normal crossings support, as we show in the following result: \begin{corollary} \label{cor:dicriticonormalcrossings} Consider a $\mathbb C$-divisor ${\mathcal D}=\sum_{i=1}^s\lambda_iH_i$ on $M=({\mathbb C}^n,0)$ whose support $S=\cup_{i=1}^sH_i$ has normal crossings. The following statements are equivalent: \begin{enumerate} \item The $\mathbb C$-divisor $\mathcal D$ is dicritical. \item There is a non-negative resonance $\sum_{i=1}^sm_i\lambda_i=0$, with integers $m_i\geq 0$, not all zero. \end{enumerate} \end{corollary} \begin{proof} See the second part of the proof of Proposition \ref{pro:divdicritico}. \end{proof} \section{Logarithmic Foliations and Dicriticalness} Let $\mathcal F$ be a codimension one singular holomorphic foliation on a non-singular complex analytic variety $M$. Given a point $p \in M$, we recall that the germ of $\mathcal F$ at $p$ is generated by an integrable meromorphic germ $\eta$ of differential $1$-form. Moreover, two such differential $1$-forms $\eta$ and $\eta'$ generate the same germ of foliation if and only if $\eta'=\phi\eta$, where $\phi$ is the germ at $p$ of a meromorphic function. We recall from \cite{Sai} that a meromorphic germ of differential $1$-form $\eta$ at a point $p\in M$ is {\em logarithmic} when both $\eta$ and $d\eta$ have at most simple poles. 
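Before going on, let us illustrate this definition with a standard example; the computation below is our own and is only meant as an illustration.

```latex
% A standard example (our own computation) of a closed logarithmic
% 1-form on (C^2,0): both \eta and d\eta have at most simple poles.
For $\lambda_1,\lambda_2\in{\mathbb C}\setminus\{0\}$, consider the meromorphic $1$-form
$$
\eta=\lambda_1\frac{dx}{x}+\lambda_2\frac{dy}{y}
    =\frac{\lambda_1y\,dx+\lambda_2x\,dy}{xy}
$$
on $({\mathbb C}^2,0)$. The product $xy\cdot\eta=\lambda_1y\,dx+\lambda_2x\,dy$
is holomorphic and $xy$ is reduced, so $\eta$ has simple poles along $xy=0$;
moreover $d\eta=0$, so $d\eta$ has no poles at all. Hence $\eta$ is closed and
logarithmic, with residues $\lambda_1$ along $x=0$ and $\lambda_2$ along $y=0$.
```

This is the local model that appears along the normal crossings part of the support after reduction of singularities.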
The set $\operatorname{Pol}(\eta)$ of poles of a meromorphic differential $1$-form $\eta$ is the hypersurface $g=0$, where $g\eta$ is holomorphic and $g$ divides any other $g'$ such that $g'\eta$ is holomorphic; the poles are simple when we can take $g$ to be reduced. \begin{remark} \label{rk:logaritmicgenerator} Assume that $\mathcal F$ is a germ of foliation on $({\mathbb C}^n,0)$ locally generated by a germ of holomorphic integrable $1$-form $\omega$, without common factors in its coefficients. Let $f=0$ be a reduced equation for a (maybe non-irreducible) invariant hypersurface of $\mathcal F$; this means that $f$ divides $df\wedge\omega$. Then, the meromorphic $1$-form $\omega/f$ is logarithmic. Indeed, we have that $$ d(\omega/f)= (1/f)\left(-(df\wedge \omega)/f+d\omega\right) $$ and hence $f d(\omega/f)$ is holomorphic. \end{remark} The following result is well known: \begin{proposition} Let $\eta$ be the germ of a closed logarithmic $1$-form on $({\mathbb C}^n,0)$. There is a multivalued function $F$ such that $\eta=dF/F$. More precisely, if we decompose the set of poles as a union $\operatorname{Pol}(\eta)=\cup_{i=1}^sH_i$ of irreducible hypersurfaces, there are $\lambda_i\ne 0$ and reduced local equations $f_i=0$ for each $H_i$ such that $$ \eta=\sum_{i=1}^s\lambda_i\frac{df_i}{f_i}. $$ Moreover, the coefficients $\lambda_i$ are unique. In the case that $\operatorname{Pol}(\eta)=\emptyset$, and hence $\eta$ is holomorphic, the statement must be interpreted as saying that there is a unit $U$ such that $\eta=dU/U$. \end{proposition} \begin{proof}(See \cite{Cer-M, Sai}) If $\eta$ is holomorphic, by the Poincaré Lemma there is a holomorphic function $G$ such that $\eta=dG$; taking $U=\exp(G)$, we have that $\eta=dU/U$. We know that the residue of $\eta$ along $H_i$ is non-zero and constant (see (2.6) and the proof of Theorem (2.9) in \cite{Sai}); let us denote this residue by $\lambda_i\in {\mathbb C}$. 
Taking local reduced equations $g_i=0$ of $H_i$, we have that $$ \alpha=\eta- \sum_{i=1}^s\lambda_i\frac{dg_i}{g_i} $$ is a closed logarithmic differential $1$-form without residues. Hence $(1/\lambda_1)\alpha$ is a closed holomorphic $1$-form and thus $(1/\lambda_1)\alpha=dU/U$, where $U$ is a unit. Put $f_1=Ug_1$ and $f_i=g_i$, for $i=2,3,\ldots,s$. We conclude that $\eta=\sum_{i=1}^s \lambda_idf_i/f_i$. \end{proof} Given a closed logarithmic differential $1$-form $\eta$ on $M$, we attach to it the $\mathbb C$-divisor $\operatorname{Div}(\eta)$ given by $$ \operatorname{Div}(\eta)=\sum_{H}\lambda_HH, $$ where $\lambda_H=\operatorname{Res}_H(\eta)$ is the residue of $\eta$ along $H$, which we know to be constant by \cite{Sai}. When ${\mathcal D}=\operatorname{Div}(\eta)$, we say that the closed $1$-form $\eta$ is {\em ${\mathcal D}$-logarithmic}. \begin{definition} A codimension one singular holomorphic foliation $\mathcal F$ on $M$ is {\em $\mathcal D$-logarithmic} when it is locally generated by a closed $\mathcal D$-logarithmic differential $1$-form. \end{definition} Let us note that $\operatorname{Div}(\mu\eta)=\mu\operatorname{Div}(\eta)$, when $\mu\in{\mathbb C}$. Thus, if ${\mathcal F}$ is ${\mathcal D}$-logarithmic and ${\mathcal D}'$ is a $\mathbb C$-divisor projectively equivalent to ${\mathcal D}$, then ${\mathcal F}$ is also ${\mathcal D}'$-logarithmic. \begin{remark} \label{rk:invarianciasoporte} If $\mathcal F$ is a $\mathcal D$-logarithmic foliation, the irreducible components of the support of $\mathcal D$ are invariant for $\mathcal F$. This may be verified locally, assuming that $\eta=\sum_{i=1}^s\lambda_i df_i/f_i$ generates $\mathcal F$. We have that $f_i$ does not divide the coefficients of the holomorphic $1$-form $\omega=f\eta$, where $f=\prod_{i=1}^sf_i$; indeed, $f_i$ does not divide $df_i$, since $f_i$ is reduced. In this situation, we only have to verify that $f_i$ divides $df_i\wedge \omega$, and this condition is straightforward to check. 
\end{remark} \begin{remark} Consider the radial foliation ${\mathcal R}$ on $({\mathbb C}^2,0)$ defined by $\omega=0$, where $\omega=ydx-xdy$. Note that $\mathcal R$ is defined both by $\eta_0$ and $\eta_1$, where $$ \eta_0=\frac{dx}{x}-\frac{dy}{y},\quad \eta_1=\frac{d(x+y)}{x+y}- \frac{d(x-y)}{x-y}. $$ Put $H^0_1=(x=0)$, $H^0_2=(y=0)$, $H^1_1=(x+y=0)$ and $H^1_2=(x-y=0)$. The $\mathbb C$-divisors $$ \operatorname{Div}(\eta_0)= H^0_1-H^0_2,\quad \operatorname{Div}(\eta_1)=H^1_1-H^1_2 $$ are different and not proportional. Hence a codimension one foliation can be logarithmic with respect to several $\mathbb C$-divisors that are not projectively equivalent. \end{remark} \subsection{Dicriticalness} The word dicritical comes from ancient works of Autom, following Mattei \cite{Cer-M}. The general definition of dicritical foliation, suggested by D. Cerveau, may be found in \cite{Can-RV-S}: \begin{definition} \label{def:foldicr} Let $\mathcal F$ be a codimension one holomorphic foliation on a non-singular complex analytic variety $M$. We say that $\mathcal F$ is {\em dicritical at a point $p\in M$} if and only if there is a holomorphic map $$ \phi:({\mathbb C}^2,0)\rightarrow (M,p) $$ such that $\phi^*{\mathcal F}=(dx=0)$ and $\phi(y=0)$ is invariant for $\mathcal F$. We say that $\mathcal F$ is {\em dicritical} if there is a point $p$ such that it is dicritical at $p$. When $M$ is a germ $(M,K)$ over a compact set $K\subset M$, we require the condition only at the points of $K$. \end{definition} As in the case of $\mathbb C$-divisors, we adopt the notation $$ \pi:(M',{\mathcal F}')\rightarrow (M,{\mathcal F}) $$ to indicate a morphism $\pi:M'\rightarrow M$, a foliation $\mathcal F$ on $M$ and the transform ${\mathcal F}'=\pi^*{\mathcal F}$. 
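As an illustration of Definition \ref{def:foldicr}, let us check by hand (our own computation) that the radial foliation $\mathcal R$ of the remark above is dicritical at the origin.

```latex
% Our own computation: a map witnessing the dicriticalness of the
% radial foliation R, written as v du - u dv = 0 in target coordinates (u,v).
Write $(u,v)$ for the coordinates in the target, so that $\mathcal R$ is
defined by $v\,du-u\,dv=0$, and consider the holomorphic map
$$
\phi:({\mathbb C}^2,0)\rightarrow({\mathbb C}^2,0),\qquad
\phi(x,y)=(u,v)=(y,xy).
$$
Then
$$
\phi^*(v\,du-u\,dv)=xy\,dy-y\,(y\,dx+x\,dy)=-y^2\,dx,
$$
so that $\phi^*{\mathcal R}=(dx=0)$. Moreover $\phi(y=0)=\{0\}$ is the singular
point of $\mathcal R$, hence invariant, and $\mathcal R$ is dicritical at the
origin. Indeed, every line through the origin is invariant for $\mathcal R$.
```

This is consistent with the two-dimensional picture recalled later: the radial foliation has infinitely many invariant branches at the origin, and the blowing-up of the origin is dicritical.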
When $\pi$ is a blowing-up with non-singular center $Y$, we say that $\pi$ is {\em $\mathcal F$-admissible} if the center $Y$ is invariant for $\mathcal F$; we say that $\pi$ is a {\em dicritical blowing-up} if the exceptional divisor $\pi^{-1}(Y)$ is not invariant for ${\mathcal F}'$, and that it is {\em non-dicritical} when the exceptional divisor is invariant for ${\mathcal F}'$. \begin{proposition} \label{prop:dicriticidaddeunaexplosion} Let $\mathcal F$ be a codimension one singular foliation on $({\mathbb C}^n,0)$ and assume that $Y$ is a non-singular invariant subvariety of $({\mathbb C}^n,0)$. If the blowing-up $$ \pi:((M,\pi^{-1}(0)),{\mathcal F}')\rightarrow (({\mathbb C}^n,0),{\mathcal F}) $$ centered at $Y$ is a dicritical blowing-up, then $\mathcal F$ is a dicritical foliation. \end{proposition} \begin{proof} Choose a point $p\in E=\pi^{-1}(Y)$ and consider local coordinates $x_1,x_2,\ldots,x_n$ at $p$ such that $E=(x_1=0)$ and $x_1=x_2=\ldots=x_{n-1}=0$ is not invariant for ${\mathcal F}'$. This is possible, since not all the non-singular branches through $p$ contained in $E$ are invariant for ${\mathcal F}'$ (otherwise $E$ itself would be invariant). Now, let $ \psi: ({\mathbb C}^2,0)\rightarrow (M,p)\hookrightarrow (M,\pi^{-1}(0)) $ be the map given by $$ x_1 \circ \psi=v,\; x_n \circ \psi=u,\; x_i \circ \psi=0,\, i=2,3,\ldots,n-1, $$ where $u,v$ are local coordinates in $({\mathbb C}^2,0)$. We know that $\Gamma=(v=0)$ is not invariant for $\psi^*{\mathcal F}'$. Let $\sigma: ({\mathbb C}^2,0)\rightarrow ({\mathbb C}^2,0)$ be the composition of a sequence of local blowing-ups following the infinitely near points of $\Gamma$ such that the strict transform of $\Gamma$ is $y=0$ and $\sigma^*(\psi^*{\mathcal F}')$ is the foliation $dx=0$. This is possible, since we do the reduction of singularities both of $\Gamma$ and $\psi^*{\mathcal F}'$. 
We end by considering $\phi=\pi \circ \psi\circ\sigma$, for which $\phi^*({\mathcal F})=(dx=0)$ and $\phi(y=0)\subset Y$ is invariant. \end{proof} \begin{remark} When $M$ has dimension two, we have that $\mathcal F$ is dicritical at $p$ if and only if there are infinitely many germs of invariant branches of $\mathcal F$ at $p$, and this is also equivalent to saying that we can find a sequence of blowing-ups ended by a dicritical one. This property is the classical definition of dicritical foliation in dimension two. Nevertheless, the direct generalization to higher dimension is not evident, as Jouanolou's example \cite{Jou} shows: a germ of foliation $\mathcal F$ in $({\mathbb C}^3,0)$ without invariant surface, but such that the blowing-up of the origin is dicritical. See \cite{Can-C, Can-C-D} for more details. \end{remark} \begin{proposition} \label{prop:estabilidddicriticidad} Let $\pi:(M',{\mathcal F}')\rightarrow(M,{\mathcal F})$ be an admissible non-dicritical blowing-up. Then ${\mathcal F}$ is a dicritical foliation if and only if ${\mathcal F}'$ is so. \end{proposition} \begin{proof} Assume that ${\mathcal F}'$ is dicritical. Take a holomorphic map $\phi':({\mathbb C}^2,0)\rightarrow M'$ such that ${\phi'}^*{\mathcal F}'=(dx=0)$ and ${\phi'(y=0)}\subset M'$ is invariant. Put $\phi=\pi \circ \phi'$. Since $\pi$ is non-dicritical, we have that $$ \phi^*{\mathcal F}={\phi'}^*(\pi^*{\mathcal F}) $$ (the non-dicriticalness of $\pi$ is necessary here, see Remark \ref{rk:pullbackdicriticao} below) and moreover $\phi(y=0)=\pi(\phi'(y=0))$ is invariant. Hence ${\mathcal F}$ is also a dicritical foliation. Conversely, let us assume that ${\mathcal F}$ is dicritical and take a holomorphic map $$ \phi:({\mathbb C}^2, 0)\rightarrow M $$ such that $\phi^*{\mathcal F}=(dx=0)$ and $\phi(y=0)$ is invariant for ${\mathcal F}$. 
In view of Proposition \ref{prop:appdos} in the Appendix I, there is a commutative diagram of morphisms $$ \begin{array}{ccc} ({\mathbb C}^2,0)&\stackrel{\sigma}{\longleftarrow}&N\\ \phi\downarrow\;\; &&\;\;\downarrow\psi\\ M&\stackrel{\pi}{\longleftarrow}&M' \end{array}, $$ where $\sigma$ is the composition of a finite sequence of blowing-ups. Let $(\Gamma',p')$ be the strict transform of $(y=0)$ by $\sigma$. We know that there are local coordinates $u,v$ at $p'$ such that $\Gamma'=(v=0)$ and $\sigma^*{(dx=0)}=(du=0)$. Note that $$ (\phi\circ\sigma)^*{\mathcal F}=(\pi\circ\psi)^*{\mathcal F}. $$ Since $\pi$ is non-dicritical, we have that $(\pi\circ\psi)^*{\mathcal F}=\psi^*{\mathcal F}'$. Moreover, since $\sigma$ is a sequence of blowing-ups centered at points, we have that $$ (\phi\circ\sigma)^*{\mathcal F}=\sigma^*(\phi^*{\mathcal F})=(du=0). $$ Hence $\psi^*{\mathcal F}'=(du=0)$ and $\mathcal F'$ is a dicritical foliation. \end{proof} \begin{remark} \label{rk:pullbackdicriticao} Let $\mathcal F$ be a codimension one singular foliation of $M$ and consider two morphisms $\phi:M'\rightarrow M$ and $\psi:M''\rightarrow M'$. The foliation $\phi^*{\mathcal F}$ is defined locally by the pullback $\phi^*\omega$ of a differential $1$-form $\omega$ defining $\mathcal F$. The pull-back foliation $\phi^*{\mathcal F}$ {\em exists}, or {\em is defined}, if and only if $\phi^*\omega\ne 0$, when $\omega$ is chosen to be without common factors in its coefficients. When $(\phi\circ\psi)^*{\mathcal F}$, $\phi^*{\mathcal F}$ and $\psi^*(\phi^*{\mathcal F})$ exist, we have that $$ (\phi\circ\psi)^*{\mathcal F}=\psi^*(\phi^*{\mathcal F}), $$ but it is possible for $\phi^*{\mathcal F}$ and $\psi^*(\phi^*{\mathcal F})$ to be well defined, whereas $(\phi\circ\psi)^*{\mathcal F}$ does not exist. An important case of this situation is the immersion $\psi: M''\rightarrow M'$ of the exceptional divisor of a dicritical blowing-up $\phi:M'\rightarrow M$, see \cite{Can-L-M}. 
Anyway, when $\phi$ is a non-dicritical blowing-up, and hence the exceptional divisor is invariant for $\phi^*{\mathcal F}$, we have that $\phi^*{\mathcal F}$ exists (this is always true because a blowing-up is an isomorphism in a dense open set) and moreover $\psi^*(\phi^*{\mathcal F})$ is defined if and only if $(\phi\circ\psi)^*{\mathcal F}$ is defined; hence we have the equality. On the other hand, when $\psi$ is a blowing-up or a sequence of blowing-ups, we also have that $\phi^*{\mathcal F}$ is defined if and only if $(\phi\circ\psi)^*{\mathcal F}$ is defined and, if this is the case, we also have that $ (\phi\circ\psi)^*{\mathcal F}=\psi^*(\phi^*{\mathcal F}) $. \end{remark} \subsection{Non-dicritical Logarithmic Foliations} In this Subsection we relate the non-dicriticalness of a ${\mathcal D}$-logarithmic foliation with the same property for the $\mathbb C$-divisor $\mathcal D$. \begin{lemma} \label{lema:unaexplosion} Let $\mathcal F$ be a $\mathcal D$-logarithmic foliation. Assume that $ \pi:(M',{\mathcal D}')\rightarrow (M,{\mathcal D}) $ is a non-dicritical ${\mathcal D}$-admissible blowing-up. Then $ \pi:(M',{\mathcal F}')\rightarrow (M,{\mathcal F}) $ is an admissible non-dicritical blowing-up and ${\mathcal F}'$ is ${\mathcal D}'$-logarithmic. \end{lemma} \begin{proof} Let $Y$ be the center of $\pi$. We know that $Y\subset\operatorname{Supp}({\mathcal D})$ and hence $Y$ is ${\mathcal F}$-invariant, since the support is $\mathcal F$-invariant, in view of Remark \ref{rk:invarianciasoporte}. Then $\pi$ is $\mathcal F$-admissible. Put ${\mathcal D}=\sum_{i=1}^s\lambda_i H_i$ and assume that $\mathcal F$ is generated by $$ \eta=\sum_{i=1}^s\lambda_i\frac{df_i}{f_i}, $$ where $f_i=0$ is a reduced local equation of $H_i$ for $i=1,2,\ldots, s$. Then ${\mathcal F}'$ is generated by $\pi^*\eta$, where $$ \pi^*\eta= \sum_{i=1}^s\lambda_i\frac{d(f_i\circ\pi)}{f_i\circ \pi}. 
$$ Moreover, we have that $$ {\mathcal D}'=\pi^{*}{\mathcal D}=\sum_{i=1}^s\lambda_i \pi^*(1\cdot H_i)=\mu E+\sum_{i=1}^s\lambda_i H'_i, $$ where $E=\pi^{-1}(Y)$, $\mu=\sum_{i=1}^s\lambda_i\nu_i$, with $\nu_i=\nu_Y(H_i)$ and $H'_i$ stands for the strict transform of $H_i$. By hypothesis, we have that $\mu\ne 0$. We can do the necessary verifications locally at the points of $E$. Take one such point $q\in E$ and let $h=0$ be a local reduced equation of $E$ at $q$. We have that $$ f_i\circ \pi=h^{\nu_i}f'_i, $$ where $f'_i=0$ is a local reduced equation for the strict transform $H'_i$ of $H_i$ at $q$. Let us show that $\pi^*\eta$ can be written as \begin{equation} \label{eq:transformado} \pi^*\eta=\mu\frac{dh'}{h'}+\sum_{q\in H'_i}\lambda_{i}\frac{df'_i}{f'_i}, \end{equation} where $h'=0$ is a reduced local equation of $\pi^{-1}(Y)$ at $q$. Recalling that $\mu\ne 0$, we see that $\pi^{-1}(Y)$ is invariant for ${\mathcal F}'$ and hence $\pi:( M', {\mathcal F}')\rightarrow (M,{\mathcal F})$ is non-dicritical; moreover, Equation \ref{eq:transformado} also shows that ${\mathcal F}'$ is ${\mathcal D}'$-logarithmic. It remains to find $h'$ satisfying Equation \ref{eq:transformado}. Note that $f'_i$ is a unit if and only if $q\notin H'_i$. In this situation, there is a unit $U$ such that $$ \mu\frac{dU}{U}= \sum_{q\notin H'_i}\lambda_{i}\frac{df'_i}{f'_i}. $$ Now, it is enough to take $h'=Uh$. \end{proof} \begin{remark} It is possible to have a dicritical ${\mathcal D}$-admissible blowing-up $$ \pi:(M',{\mathcal D}')\rightarrow (M,{\mathcal D}) $$ and a ${\mathcal D}$-logarithmic foliation ${\mathcal F}$ such that $\pi$ induces a non-dicritical admissible blowing-up $ \pi:(M',{\mathcal F}')\rightarrow (M,{\mathcal F}) $. The following example may be found in \cite{Can-Co}: take the foliation $\mathcal F$ on $({\mathbb C}^2,0)$ given by $\eta=0$, where $$ \eta=\frac{d(y-x^2)}{y-x^2}-\frac{d(y+x^2)}{y+x^2}. 
$$ Then $\mathcal F$ is $\mathcal D$-logarithmic for ${\mathcal D}=(y-x^2=0)-(y+x^2=0)$. Note that $\mathcal F$ is also ${\mathcal D}_1$-logarithmic, where ${\mathcal D}_1=(y=0)-2(x=0)$. The first blowing-up $\pi$ is ${\mathcal D}$-dicritical, but the exceptional divisor is invariant for the transformed foliation. Anyway, we know that in ambient dimension two this situation implies that ${\mathcal F}$ is actually a dicritical foliation, although the blowing-up $\pi$ could be non-dicritical. In general, it is an open question whether, given a logarithmic foliation $\mathcal F$, there is a $\mathbb C$-divisor that ``faithfully'' represents the dicriticalness of the foliation, for instance in terms of blowing-ups. In this paper, we concentrate on the non-dicritical case. \end{remark} \begin{proposition} \label{pro:hipersuperficiesinvariantes} Let $\mathcal F$ be a $\mathcal D$-logarithmic foliation, where $\mathcal D$ is non-dicritical, and take a point $p\in \operatorname{Supp}({\mathcal D})$. The only irreducible germs of hypersurface at $p$ invariant for $\mathcal F$ are the irreducible components of the germ at $p$ of the support of $\mathcal D$. \end{proposition} \begin{proof} (See also \cite{Cer-M}). We can assume that $M=({\mathbb C}^n,0)$, $0\ne{\mathcal D}=\sum_{i=1}^s\lambda_iH_i$ and $\mathcal F$ is generated by $$ \eta=\sum_{i=1}^s\lambda_i\frac{df_i}{f_i}, $$ where $f_i=0$ are reduced equations of $H_i$, for $i=1,2,\ldots,s$. By Remark \ref{rk:invarianciasoporte}, we already know that each $H_i$ is invariant for $\mathcal F$. Let us suppose now that $S$, given by $g=0$, is another germ of irreducible hypersurface invariant for $\mathcal F$. Up to performing a desingularization of the support of $\mathcal D$ and choosing a point where the strict transform of $S$ intersects the exceptional divisor, we restrict ourselves to the case when the $H_i$ are coordinate hyperplanes. 
Note that, since ${\mathcal D}$ is non-dicritical, each of these blowing-ups adds the exceptional divisor to the support of the transform of ${\mathcal D}$; hence there is at least one component of the support through the chosen point. Then, we can assume that $$ \eta=\sum_{i=1}^s\lambda_i\frac{dx_i}{x_i}. $$ The non-dicriticalness of ${\mathcal D}$ implies in this situation that $\sum_{i=1}^sm_i\lambda_i\ne 0$ for any $0\ne \mathbf{m}\in {\mathbb Z}_{\geq 0}^s$, in view of Lemma \ref{lema:normalcrossings}. By the curve selection lemma, there is a parameterized curve $$ \gamma:t\mapsto (\gamma_i(t))_{i=1}^n $$ contained in $S$ and not contained in the support $\prod_{i=1}^sx_i=0$; in particular $\gamma_i(t)\ne 0$, for any $i=1,2,\ldots,s$. Let us write $$ \gamma_i(t)=\mu_{i,m_i}t^{m_i}+\mu_{i,m_i+1}t^{m_i+1}+\cdots,\quad \mu_{i,m_i}\ne 0, \quad i=1,2,\ldots,s. $$ Since $S$ is invariant, we have that $\gamma^*\eta=0$. Looking at the residue of $\gamma^*\eta$, we obtain that $$ \sum_{i=1}^sm_i\lambda_i=0. $$ This is not possible. \end{proof} \begin{corollary} Let $\mathcal F$ be a $\mathcal D$-logarithmic foliation, where $\mathcal D$ is non-dicritical. Then ${\mathcal F}$ is also non-dicritical. Moreover, if $\mathcal F$ is ${\mathcal D}'$-logarithmic, then ${\mathcal D}'$ and ${\mathcal D}$ are projectively equivalent $\mathbb C$-divisors. \end{corollary} \begin{proof} Let us desingularize $\operatorname{Supp}({\mathcal D})$ by means of non-dicritical admissible blowing-ups. Note that, by Lemma \ref{lema:unaexplosion}, the above blowing-ups are non-dicritical for $\mathcal F$. Invoking Proposition \ref{prop:estabilidddicriticidad}, we reduce the problem to the case when $\operatorname{Supp}({\mathcal D})$ has normal crossings. Now, if ${\mathcal F}$ were dicritical, we would get a non-negative resonance in the coefficients of ${\mathcal D}$ and hence ${\mathcal D}$ would be dicritical, by Lemma \ref{lema:normalcrossings}. We can also do the above reduction in order to prove the second part of the statement. 
Assume thus that $\operatorname{Supp}({\mathcal D})$ has normal crossings, the foliation is ${\mathcal D}$-logarithmic and non-dicritical. Now, the support of ${\mathcal D}$ coincides with the union of the invariant hypersurfaces of ${\mathcal F}$ by Proposition \ref{pro:hipersuperficiesinvariantes}; moreover, the coefficients (up to projective equivalence) are given by the residues. Hence ${\mathcal D}'$ is determined by ${\mathcal D}$ up to projective equivalence. \end{proof} \begin{remark} Let $\mathcal F$ be a $\mathcal D$-logarithmic foliation. We know that if $\mathcal D$ is non-dicritical, then the foliation $\mathcal F$ is also non-dicritical. The converse is a natural question that has a positive answer: if $\mathcal F$ is non-dicritical, then $\mathcal D$ is also non-dicritical. This is a consequence of the theorem on existence and non-dicriticalness of logarithmic models that we prove in this paper. \end{remark} \section{Generalized Hypersurfaces and Logarithmic Forms} We recall here some facts, useful for the sequel, concerning generalized hypersurfaces and the more general case of non-dicritical codimension one singular foliations. For more details on generalized hypersurfaces the reader may consult \cite{Fer-M}. We end the section by associating to generalized hypersurfaces logarithmic forms in a way that is stable under blowing-ups. We take the following definition: \begin{definition}[\cite{Can-RV-S}] Given a complex analytic variety $M$, a foliation $\mathcal F$ on $M$ and a point $p\in M$, we say that $\mathcal F$ is {\em complex hyperbolic} at $p$, or that $\mathcal F$ {\em has no hidden saddle-nodes at $p$}, if and only if there is no holomorphic map $\phi:({\mathbb C}^2,0)\rightarrow M$, with $\phi(0)=p$, such that $\phi^*{\mathcal F}$ is a saddle-node. We say that $\mathcal F$ is a {\em generalized hypersurface at $p$} if, in addition, it is non-dicritical at $p$.
We say that $\mathcal F$ is a generalized hypersurface on $M$ when these properties hold at each point of $M$. \end{definition} The terminology {\em generalized curve} originates in the paper \cite{Cam-LN-S}, where the authors carried out an extensive study of the condition of being complex hyperbolic, in the two-dimensional case. In some cases, the above name is also used in the dicritical situation. We reserve the term {\em generalized hypersurface} to denote both properties: non-dicriticalness and absence of hidden saddle-nodes. Of course, in the case of ambient dimension two, the expression {\em generalized curve} also means for us non-dicritical and without hidden saddle-nodes. \begin{remark} Some ``ramified'' saddle-nodes have the property of being generalized hypersurfaces. For instance, take the saddle-node given by the meromorphic $1$-form $$ \frac{du}{u}+ u\frac{dv}{v} $$ in dimension two. Let us consider the ramification $u=x^py^q$, $v=y$; we obtain by pull-back a differential $1$-form $$ \left(p\frac{dx}{x}+q\frac{dy}{y}\right)+ x^py^q\frac{dy}{y} $$ that defines a generalized curve on $({\mathbb C}^2,0)$; it is an example of the Martinet-Ramis resonant case \cite{Mar-R}. Note that it has no holomorphic first integral. The reader can see \cite{Can-C-D} for more details. \end{remark} One of the important features of generalized hypersurfaces is the following result: \begin{proposition}[See \cite{Can,Can-M-RV,Fer-M}] \label{prop:redsinghypgeneralizada} Let $\mathcal F$ be a generalized hypersurface on $({\mathbb C}^n,0)$. There are only finitely many irreducible invariant hypersurfaces of $\mathcal F$, their union $S$ is nonempty and any reduction of singularities of $S$ provides a reduction of singularities of $\mathcal F$. In particular, the singular locus of $\mathcal F$ is contained in $S$.
\end{proposition} Let us state some other useful results concerning generalized hypersurfaces: \begin{lemma} \label{lema:curvainvariante} Consider a generalized hypersurface $\mathcal F$ on $({\mathbb C}^n, 0)$ and take an invariant analytic branch $(\Gamma,0)\subset ({\mathbb C}^n,0)$ not contained in the singular locus of $\mathcal F$. There is a single irreducible hypersurface $H$ invariant for $\mathcal F$ such that $\Gamma\subset H$. \end{lemma} \begin{proof} This is true for any non-dicritical foliation that admits a reduction of singularities, in particular for generalized hypersurfaces, see \cite{Can}. \end{proof} \begin{proposition} \label{prop:pullbackgeneralizedcurve} Consider a generalized hypersurface $\mathcal F$ on $({\mathbb C}^n, 0)$ and let $S$ be the union of the invariant hypersurfaces of $\mathcal F$. For any $S$-transverse holomorphic map $ \phi:({\mathbb C}^2,0)\rightarrow ({\mathbb C}^n,0) $, the pull-back $\phi^*{\mathcal F}$ is a generalized curve. \end{proposition} \begin{proof} Let $\omega$ be a reduced holomorphic generator of $\mathcal F$. We first have to show that $\phi^*\omega\ne 0$. Assume that $\phi^*\omega=0$. Since $\phi$ is $S$-transverse, there is an irreducible branch $(\Gamma,0)\subset ({\mathbb C}^2,0)$ such that $\phi(\Gamma)\not\subset S$. In this situation, the curve $\phi(\Gamma)$ is an invariant curve of $\mathcal F$ not contained in $S$; this contradicts Lemma \ref{lema:curvainvariante}. Then, we have that $\phi^*{\mathcal F}$ exists. Moreover, the invariant curves of $\phi^*{\mathcal F}$ are exactly the irreducible components of $\phi^{-1}(S)$. In particular, $\phi^*{\mathcal F}$ has only finitely many invariant branches and hence it is non-dicritical. Finally, let us find a contradiction assuming that $\phi^*{\mathcal F}$ is not complex hyperbolic. Take $$ \varphi:({\mathbb C}^2,0)\rightarrow ({\mathbb C}^2,0) $$ such that $\varphi^*(\phi^*{\mathcal F})$ is a saddle-node.
We have that $\operatorname{Im}(\varphi)\not\subset \phi^{-1}(S)$ and hence $\phi\circ\varphi$ is $S$-transverse. We conclude that $(\phi\circ\varphi)^*{\mathcal F}$ exists and $$ (\phi\circ\varphi)^*{\mathcal F}=\varphi^*(\phi^*{\mathcal F}) $$ is a saddle-node, a contradiction, since $\mathcal F$ is complex hyperbolic. \end{proof} Next, we show that the property of being a generalized hypersurface is stable under admissible blowing-ups: \begin{proposition} \label{prop:blowingupgeneralizedhip} Consider a generalized hypersurface $\mathcal F$ on $({\mathbb C}^n, 0)$ and let $Y$ be a non-singular subvariety of $({\mathbb C}^n,0)$ contained in the union $S$ of the invariant hypersurfaces of $\mathcal F$. Consider the admissible blowing-up $$ \pi:((M,\pi^{-1}(0)), {\mathcal F}')\rightarrow (({\mathbb C}^n,0),{\mathcal F}) $$ with center $Y$. Then $\pi$ is non-dicritical and ${\mathcal F}'$ is a generalized hypersurface. \end{proposition} \begin{proof} By Proposition \ref{prop:dicriticidaddeunaexplosion}, we see that the blowing-up $\pi$ is non-dicritical, since $\mathcal F$ is a non-dicritical foliation. Moreover, the transformed foliation ${\mathcal F}'$ is non-dicritical in view of Proposition \ref{prop:estabilidddicriticidad}. Let us show that ${\mathcal F}'$ is complex hyperbolic. Assume by contradiction that it is not; then there is a point $p\in \pi^{-1}(0)$ and a morphism $$ \phi: ({\mathbb C}^2,0)\rightarrow (M,p) $$ such that $\phi^*{\mathcal F}'$ is a saddle-node. Since $\pi$ is non-dicritical, the exceptional divisor $E=\pi^{-1}(Y)$ is invariant and hence the image of $\phi$ is not contained in $E$; this implies that $(\pi\circ\phi)^*{\mathcal F}$ exists and, in view of Remark \ref{rk:pullbackdicriticao}, we have $$ (\pi\circ\phi)^*{\mathcal F}=\phi^*(\pi^*{\mathcal F})=\phi^*{\mathcal F}'. $$ It is a saddle-node, a contradiction.
\end{proof} \subsection{Transversality} We consider here the concepts of generic multiplicity, equimultiplicity and Mattei-Moussu transversality, which we need in the proof of the existence of logarithmic models for generalized hypersurfaces. Let $Y$ be a non-singular irreducible subvariety of $({\mathbb C}^n,0)$ and consider a holomorphic $1$-form $\omega$ on $({\mathbb C}^n,0)$. The {\em generic multiplicity $\nu_Y(\omega)$ of $\omega$ along $Y$} is the minimum of the generic multiplicities of the coefficients of $\omega$ along $Y$. When $\omega$ is a reduced (no common factors in the coefficients) generator of a codimension one singular foliation $\mathcal F$ on $({\mathbb C}^n,0)$ we say that $\nu_Y(\omega)$ is the generic multiplicity of $\mathcal F$ along $Y$ and we denote $\nu_Y({\mathcal F})=\nu_Y(\omega)$. Let $S$ be a hypersurface of $({\mathbb C}^n,0)$ with reduced equation $f=0$; recall that $\nu_Y(S)=\nu_Y(f)$. We say that $Y$ is {\em equimultiple at the origin} for $\omega$, $\mathcal F$ or $S$, if we respectively have that $$ \nu_Y(\omega)=\nu_0(\omega),\quad \nu_Y({\mathcal F})=\nu_0(\mathcal F),\quad \nu_Y(S)=\nu_0(S). $$ By taking appropriate representatives of the germs, we know that the points of equimultiplicity define a dense open set in $Y$. \begin{remark} If $S$ is given by a reduced equation $f=0$ and we consider the foliation ${\mathcal F}=(df=0)$, we have $\nu_Y({\mathcal F})=\nu_Y(S)-1$. \end{remark} Let us recall that the {\em singular locus $\operatorname{Sing}({\mathcal F})$} of a foliation $\mathcal F$ coincides locally with the singular locus of a holomorphic generator of $\mathcal F$ without common factors in its coefficients. In particular, we have that $\operatorname{Sing}({\mathcal F})\subset M$ is an analytic subset of codimension at least two. Take a holomorphic germ of $1$-form $\omega$ on $({\mathbb C}^n,0)$ such that $\operatorname{codim}(\operatorname{Sing}(\omega))\geq 2$.
Following \cite{Mat-Mou}, we say that a closed immersion $\phi:({\mathbb C}^2,0)\rightarrow ({\mathbb C}^n,0)$ is a {\em Mattei-Moussu transversal} for $\omega$ when the following properties hold: $$ \operatorname{Sing}(\phi^*\omega)=\phi^{-1}(\operatorname{Sing}(\omega))\subset \{0\},\quad \nu_0(\phi^*\omega)=\nu_0(\omega). $$ If $\mathcal F$ is a codimension one singular foliation on $({\mathbb C}^n,0)$ we say that $\phi$ is a {\em Mattei-Moussu transversal} for $\mathcal F$ when it is a Mattei-Moussu transversal for a holomorphic generator of $\mathcal F$. Let $S$ be a hypersurface given by a reduced equation $f=0$; we say that $\phi$ is a {\em Mattei-Moussu transversal} for $S$ when it is a Mattei-Moussu transversal for the foliation $df=0$. In this paper, we consider the following version of the Transversality Theorem of Mattei-Moussu: \begin{theorem} Let $\mathcal F$ be a non-dicritical holomorphic foliation on $({\mathbb C}^n,0)$. There is a nonempty Zariski open set $W$ in the space of linear two-planes such that any closed immersion $\phi:({\mathbb C}^2,0)\rightarrow ({\mathbb C}^n,0)$ with tangent plane in $W$ is a Mattei-Moussu transversal for $\mathcal F$. \end{theorem} \begin{proof} See \cite{Mat-Mou} and \cite{Can3,Can2, Can-M}. \end{proof} We have the following consequence: \begin{proposition} \label{prop:multiplicidadgenericagh} Let $\mathcal F$ be a generalized hypersurface of $({\mathbb C}^n,0)$ and denote by $S$ the union of the invariant hypersurfaces of $\mathcal F$. Consider a non-singular subvariety $(Y,0)$ of $({\mathbb C}^n,0)$ with $Y\subset S$. Then $\nu_Y({\mathcal F})=\nu_Y(S)-1$. \end{proposition} \begin{proof} We first reduce the problem to the case $Y=\{0\}$ as follows.
Taking appropriate representatives of the germs, there is a dense open subset $U$ of $Y$ such that both $S$ and $\mathcal F$ are equimultiple along $Y$ at the points in $U$, that is, we have $$ \nu_p(S)=\nu_Y(S),\quad \nu_p({\mathcal F})=\nu_Y({\mathcal F}), $$ for any $p\in U$. Thus, working at a point of equimultiplicity, we can assume that $Y=\{0\}$. Now, we apply the Mattei-Moussu Transversality Theorem to get a closed immersion $\phi:({\mathbb C}^2,0)\rightarrow ({\mathbb C}^n,0)$ such that $$ \nu_0(\phi^*{\mathcal F})=\nu_0({\mathcal F}); \quad \nu_0(\phi^{-1}(S))=\nu_0(S). $$ Since $\phi^*{\mathcal F}$ is a generalized curve (see Proposition \ref{prop:pullbackgeneralizedcurve}), we reduce the problem to the two-dimensional case. In this case, the result is known from \cite{Cam-LN-S}. \end{proof} \subsection{Logarithmic Forms Fully Associated to Generalized Hypersurfaces} Let us consider a generalized hypersurface $\mathcal F$ on $({\mathbb C}^n, 0)$. We know that there exists at least one germ of invariant hypersurface and that there are finitely many of them, say $H_i$, $i=1,2,\ldots, s$. Let us select a germ of reduced function $$ f=f_1f_2\cdots f_s\in {\mathcal O}_{{\mathbb C}^n,0} $$ such that $f_i=0$ is a local equation for $H_i$, for $i=1,2,\ldots,s$. Thus $f=0$ gives a reduced equation of the union $S$ of the invariant hypersurfaces of $\mathcal F$. Take a local holomorphic generator $\omega$ of $\mathcal F$ without common factors in its coefficients. The meromorphic $1$-form $\eta=\omega/f$ also defines $\mathcal F$. In view of Remark \ref{rk:logaritmicgenerator}, we know that $\eta$ is a logarithmic differential $1$-form, although it is not necessarily closed. Such an integrable logarithmic $1$-form $\eta=\omega/f$ will be called {\em fully associated to $\mathcal F$}. \begin{remark} Assume that $\eta=\omega/f$ and $\eta'=\omega'/f'$ are two integrable logarithmic $1$-forms fully associated to $\mathcal F$.
There are units $U,V\in {\mathcal O}_{{\mathbb C}^n,0}$ such that $\omega'=U\omega$ and $f'=Vf$, and hence there is a unit $W=U/V$ such that $\eta'=W\eta$. \end{remark} \begin{proposition} \label{prop:pullbackoflogaritmicformsfullyassociated} Consider an integrable logarithmic $1$-form $\eta$ fully associated to a generalized hypersurface $\mathcal F$ on $({\mathbb C}^n, 0)$. Take a non-singular irreducible subvariety $Y$ of $({\mathbb C}^n,0)$ invariant for $\mathcal F$ and let us perform the blowing-up centered at $Y$ $$ \pi:((M,\pi^{-1}(0)),{\mathcal F}')\rightarrow (({\mathbb C}^n,0),{\mathcal F}). $$ The pullback $\pi^*\eta$ is an integrable logarithmic $1$-form fully associated to ${\mathcal F}'$. \end{proposition} \begin{proof} (See \cite{Cer} for the two-dimensional case). Let $\omega$ be a holomorphic generator of $\mathcal F$ without common factors in its coefficients, take a reduced equation $f=0$ of the union $S$ of the invariant hypersurfaces of $\mathcal F$ and put $\eta=\omega/f$. By Proposition \ref{prop:multiplicidadgenericagh}, we know that $$ \nu_Y(\omega)=\nu_Y(f)-1 . $$ Put $m=\nu_Y(\omega)$. Working locally at a point $p$ of the exceptional divisor $\pi^{-1}(Y)$, where $x'=0$ is a reduced equation of $\pi^{-1}(Y)$, we know that $f\circ\pi=x'^{m+1}f'$, where $f'=0$ is a reduced local equation of the strict transform of $S$. By Proposition \ref{prop:dicriticidaddeunaexplosion}, we know that $\pi$ is a non-dicritical blowing-up for $\mathcal F$. Hence $x'f'=0$ is a local reduced equation of the union of the invariant hypersurfaces of ${\mathcal F}'$ at the point $p$. On the other hand, ${\mathcal F}'$ is generated by $\pi^*\omega$ and $$ \pi^*\omega=x'^{m}\omega', $$ where $\omega'$ has no common factors in its coefficients; for this, we use the fact that $\pi$ is a non-dicritical blowing-up and thus we can divide $\pi^*\omega$ exactly by ${x'}^m$.
Hence, we have that $$ \pi^*\eta=\frac{\pi^*\omega}{f\circ\pi}=\omega'/(x'f') $$ is an integrable logarithmic $1$-form fully associated to ${\mathcal F}'$. \end{proof} \section{Divisorial Models in Dimension Two} Consider a foliation $\mathcal F$ on $({\mathbb C}^2,0)$. From the work of A. Seidenberg \cite{Sei}, we know that there is an essentially unique reduction of singularities of $\mathcal F$. When there are no saddle-nodes after reduction of singularities and all the irreducible components of the exceptional divisor are invariant, we say that $\mathcal F$ is a {\em generalized curve} \cite{Cam-LN-S}. In this case, the Camacho-Sad indices at the singular points, after reduction of singularities, are all nonzero and they determine locally the linear part of the holonomy. This motivates the search for a foliation with linear holonomy that has the same reduction of singularities and whose holonomy agrees with the linear part of the holonomy of $\mathcal F$ after reduction of singularities. Such foliations are the logarithmic ones and hence we look for a ``logarithmic model'' of a given generalized curve. This problem has been solved in dimension two by N. Corral in \cite{Cor}. In this section we recover Corral's results in the language of $\mathbb C$-divisors and the indices with respect to singular invariant curves. \subsection{Indices for $\mathbb C$-divisors in dimension two} We develop here a notion of index for ${\mathbb C}$-divisors, directly inspired by the behavior of the Camacho-Sad index in the case of holomorphic foliations in dimension two. Let $M$ be a non-singular complex variety of dimension two.
Take a $\mathbb C$-divisor \begin{equation} \label{eq:escrituraparaindice} {\mathcal D}=\mu \operatorname{Div}(T)+\sum_{i=2}^s\lambda_i\operatorname{Div}(H_i) \end{equation} on it, where $T\subset M$ and $H_i\subset M$ are curves in $M$, not necessarily irreducible, such that none of the irreducible components of $T$ is an irreducible component of an $H_i$, for $i=2,3,\ldots,s$. We assume that the support of $\mathcal D$ contains $T$, that is $\mu\ne 0$. Let us take a point $p\in T$. We define the {\em Camacho-Sad index $I_p({\mathcal D},T)$ at $p$ of $\mathcal D$ with respect to $T$} by the expression \begin{equation} \label{eq:indice} I_p({\mathcal D},T)=-\frac{\sum_{i=2}^s\lambda_i (T,H_i)_p}{\mu}, \end{equation} where $(T,H_i)_p$ stands for the intersection multiplicity of $T$ and $H_i$ at $p$. Let us note that if ${\mathcal D}_p$ denotes the germ of $\mathcal D$ at $p$ we have that $$ I_p({\mathcal D}_p,T)=I_p({\mathcal D},T). $$ Let us remark that the germ of $T$ at $p$ may not be irreducible, even when we choose $T$ to be irreducible as a curve in $M$. \begin{remark} It is possible to extend the above definition in order to define the index of $\mathcal D$ with respect to any union of curves in the support passing through $p$, by using the formula $$ I_p({\mathcal D}, T_1\cup T_2)= I_p({\mathcal D}, T_1)+I_p({\mathcal D}, T_2)+2(T_1,T_2)_p. $$ In this way, we could recover the complete definition of the Camacho-Sad index with respect to a not necessarily irreducible invariant curve, see \cite{Bru}. Anyway, we only need the definition for the case where the coefficients of all the irreducible components of $T$ in $\mathcal D$ are equal, as defined above. 
\end{remark} \begin{proposition} Let $\mathcal D$ be a $\mathbb C$-divisor on a non-singular two dimensional complex analytic variety $M$, that we write as ${\mathcal D}=\mu \operatorname{Div}(T)+\sum_{i=2}^s\lambda_i\operatorname{Div}(H_i)$, where $\mu\ne 0$ and $T$ and $H_i$ have no common irreducible components, for any $i=2,3,\ldots,s$. Let $\pi:(M',{\mathcal D}')\rightarrow (M,{\mathcal D})$ be the blowing-up centered at a point $p\in T$. Denote by $T'$ the strict transform of $T$ by $\pi$ and by $E=\pi^{-1}(p)$ the exceptional divisor of $\pi$. The following equality holds: \begin{equation} \label{eq:indiceexplosion1} \sum_{p'\in T'\cap E}I_{p'}({\mathcal D}',T')=I_p({\mathcal D},T)-\nu_p(T)^2. \end{equation} Moreover, if $\pi$ is non-dicritical, we have $\sum_{p'\in E}I_{p'}({\mathcal D}',E)=-1$. \end{proposition} \begin{proof} If $\alpha=\mu\nu_p(T)+\sum_{i=2}^s\lambda_i\nu_p(H_i)$, we have ${\mathcal D}'=\alpha E+\mu T'+\sum_{i=2}^s\lambda_iH'_i$, where we denote by $H'_i$ the strict transforms of the $H_i$ by $\pi$. Recall Noether's formulas: \begin{equation*} (H_i,T)_p=\sum_{p'\in E\cap T'}(H'_i,T')_{p'} +\nu_p(T)\nu_p(H_i);\quad \nu_p(T)=\sum_{p'\in E\cap T'}(E,T')_{p'} . \end{equation*} Let us show that $ \mu I_p({\mathcal D}, T)=\mu\nu_p(T)^2+\sum_{p'\in E}\mu I_{p'}({\mathcal D}',T') $, in order to verify the identity in Equation \ref{eq:indiceexplosion1}: \begin{eqnarray*} \mu I_p({\mathcal D},T)&=&-\sum_{i=2}^s\lambda_i(T,H_i)_p= -\sum_{i=2}^s\lambda_i\nu_p(T)\nu_p(H_i)-\sum_{p'\in E}\sum_{i=2}^s\lambda_i(T',H'_i)_{p'}= \\ &=&\mu\nu_p(T)^2-\nu_p(T)\alpha-\sum_{p'\in E}\sum_{i=2}^s\lambda_i(T',H'_i)_{p'}= \\ &=&\mu\nu_p(T)^2-\sum_{p'\in E}\left(\alpha(E,T')_{p'}+\sum_{i=2}^s\lambda_i(T',H'_i)_{p'}\right)=\\ &=&\mu\nu_p(T)^2+\sum_{p'\in E}\mu I_{p'}({\mathcal D}',T'). \end{eqnarray*} Assume now that $\pi$ is non-dicritical, hence $\alpha\ne 0$. 
We have $$ -\alpha=-\sum_{p'\in E}\left(\mu(E,T')_{p'}+\sum_{i=2}^s\lambda_i(E,H'_i)_{p'} \right)=\alpha\sum_{p'\in E}I_{p'}({\mathcal D}', E). $$ This ends the proof. \end{proof} \begin{corollary} If $T$ is non-singular at $p$, there is only one point $p'\in E\cap T'$ and we have that $ I_{p'}({\mathcal D}',T')=I_p({\mathcal D},T)-1 $. \end{corollary} \subsection{Camacho-Sad indices} Let us recall here the notion of generalized Camacho-Sad index introduced by A. Lins Neto in \cite{LNet}, in the spirit of the residue theory of Saito \cite{Sai}. A good presentation of these results may be found in Brunella \cite{Bru, Bru2} and in \cite{Lem-S,Suw}. \begin{definition}[\cite{LNet}] Let $\mathcal F$ be a germ of foliation on $({\mathbb C}^2,0)$ generated by a holomorphic $1$-form $\omega$ without common factors in its coefficients. Consider an invariant branch $\Gamma$ of $\mathcal F$ given by an irreducible equation $f=0$. There is an expression $$ g\omega=hdf+f\alpha, $$ where $\alpha$ is a holomorphic $1$-form and $f$ does not divide $g$. The {\em Camacho-Sad index $\operatorname{CS}_0({\mathcal F},\Gamma)$ of $\mathcal F$ with respect to $\Gamma$} is defined by $$ \operatorname{CS}_0({\mathcal F},\Gamma)=\frac{-1}{2\pi i}\int_{\gamma(f)}\frac{\alpha}{h}, $$ where $\gamma(f)$ is the homology class of the image of the standard loop $t\mapsto \exp(2\pi i t)$, $t\in[0,1]$, under a Puiseux parametrization of $\Gamma$. \end{definition} \begin{remark} \label{rk:indicessingsimples} If the origin is a simple point that is not a saddle-node, we can take $\Gamma=(y=0)$ and $\omega=(\lambda+\cdots)ydx+(\mu+\cdots)xdy$, with $\lambda\mu \ne 0$. In this case we see that \begin{equation}\label{eq:indicereducido} \operatorname{CS}_0({\mathcal F},\Gamma)=-\lambda/\mu. \end{equation} \end{remark} We are mainly interested in the behavior of the above index under non-dicritical blowing-ups.
Let us summarize these results in the following proposition: \begin{proposition} Let $\mathcal F$ be a germ of foliation on $({\mathbb C}^2,0)$ and let $$ \pi:( (M,E), {\mathcal F}')\rightarrow (({\mathbb C}^2,0),{\mathcal F}),\quad E=\pi^{-1}(0), $$ be the blowing-up of the origin of ${\mathbb C}^2$. The following properties hold: \begin{enumerate} \item[a)] For any invariant branch $(\Gamma,0)$ we have that $$ \operatorname{CS}_{p'}({\mathcal F}',\Gamma')= \operatorname{CS}_{0}({\mathcal F},\Gamma)-\nu_0(\Gamma)^2, $$ where $p'$ is the only point in $E$ belonging to the strict transform $\Gamma'$ of $\Gamma$. \item[b)] If $\pi$ is non-dicritical, then $\sum_{q\in E}\operatorname{CS}_q({\mathcal F}',E)=-1$. \end{enumerate} \end{proposition} \begin{proof} See Brunella \cite{Bru,Bru2}. \end{proof} As a consequence of the above results we obtain the following proposition: \begin{proposition} Let $\mathcal L$ be a ${\mathcal D}$-logarithmic foliation on $({\mathbb C}^2,0)$, where $\mathcal D$ is a non-dicritical ${\mathbb C}$-divisor. Then $$ \operatorname{CS}_0({\mathcal L},\Gamma)=I_0({\mathcal D},\Gamma), $$ for any irreducible invariant branch $\Gamma$ of $\mathcal L$. \end{proposition} \begin{proof} The indices behave in the same way under a sequence of blowing-ups that desingularizes $\mathcal L$ and $\Gamma$ along $\Gamma$. When we reach a simple point, the indices coincide by Equation \ref{eq:indicereducido}. This equality projects down through the sequence of blowing-ups and we are done. \end{proof} \subsection{Existence of Divisorial Models in Dimension Two} In this section we present the definitions and main properties of logarithmic models in dimension two, in terms of ${\mathbb C}$-divisors. The existence of logarithmic models for generalized curves in dimension two has been proved in \cite{Can-Co, Cor}, without an extensive use of $\mathbb C$-divisors.
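Before going on, let us illustrate the coincidence $\operatorname{CS}_0({\mathcal L},\Gamma)=I_0({\mathcal D},\Gamma)$ of the previous subsection in the simplest normal crossings case; this added verification uses only the definitions above. \begin{example} Consider the ${\mathcal D}$-logarithmic foliation $\mathcal L$ on $({\mathbb C}^2,0)$ defined by the closed logarithmic $1$-form $$ \eta=p\frac{dx}{x}+q\frac{dy}{y},\qquad pq\ne 0, $$ that is, by the holomorphic $1$-form $xy\,\eta=py\,dx+qx\,dy$, where ${\mathcal D}=p\operatorname{Div}(x)+q\operatorname{Div}(y)$. For the invariant branch $\Gamma=(y=0)$, Remark \ref{rk:indicessingsimples} gives $\operatorname{CS}_0({\mathcal L},\Gamma)=-p/q$, whereas Equation \ref{eq:indice} gives $$ I_0({\mathcal D},\Gamma)=-\frac{p\,\big((y=0),(x=0)\big)_0}{q}=-\frac{p}{q}, $$ so that indeed $\operatorname{CS}_0({\mathcal L},\Gamma)=I_0({\mathcal D},\Gamma)$; the computation for the branch $(x=0)$ is symmetric. When $p,q\in{\mathbb Z}_{>0}$ we have $\eta=df/f$ for the first integral $f=x^py^q$. \end{example}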
The particularization to ambient dimension two of the concept of generalized hypersurface is that of {\em generalized curve}. To avoid possible confusion with other uses of this terminology in the literature, we note that in this paper a generalized curve is given by the following definition: \begin{definition} \label{def:curvageneralizada} A foliation ${\mathcal F}$ on $({\mathbb C}^2,0)$ is a {\em generalized curve} if and only if it is non-dicritical and there are no saddle-nodes in a reduction of singularities of $\mathcal F$. \end{definition} \begin{remark} If there are no saddle-nodes in a reduction of singularities, we find no saddle-nodes after any finite sequence of blowing-ups. In particular, the definition is independent of the choice of a reduction of singularities (note that in dimension two we can speak of a {\em minimal reduction of singularities}). For more details, see \cite{Can-C-D}. \end{remark} \begin{definition} \label{def:modelodimensiondos} Consider a generalized curve ${\mathcal F}$ and let $\mathcal D$ be a ${\mathbb C}$-divisor on a two-dimensional non-singular complex analytic variety $M$. We say that $\mathcal D$ is a {\em divisorial model for ${\mathcal F}$ at a point $p$ in $M$} if the following conditions hold: \begin{enumerate} \item The support $\operatorname{Supp}({\mathcal D}_p)$ of the germ ${\mathcal D}_p$ of $\mathcal D$ at $p$ is the union of the germs at $p$ of the invariant branches of $\mathcal F$. \item The indices of ${\mathcal D}_p$ with respect to the irreducible branches of $\operatorname{Supp}({\mathcal D}_p)$ coincide with the Camacho-Sad indices of $\mathcal F$. \end{enumerate} We say that $\mathcal D$ is a {\em divisorial model for $\mathcal F$} if it fulfils the above conditions at every point $p\in M$. In the case of a germ $(M,K)$ we require the property at each point of the germification set $K$.
\end{definition} \begin{remark} Let us note that if $\mathcal D$ is a divisorial model for a generalized curve $\mathcal F$ on $(M,K)$, then the ``germification set'' $K$ necessarily satisfies $K\subset \operatorname{Supp}({\mathcal D})$. Indeed, if there were a point $p\in K\setminus \operatorname{Supp}({\mathcal D})$, there would be at least one invariant branch at $p$ (this is a general fact that does not require the generalized curve hypothesis, see \cite{Cam-S}, also \cite{Ort-R-V}) which obviously is not contained in the support of the divisor. \end{remark} \begin{example} \label{ex:integral primera} The first example is a foliation $\mathcal F$ of $({\mathbb C}^2,0)$ with a holomorphic first integral. That is, we take a germ of function $f=f_1^{r_1}f_2^{r_2}\cdots f_s^{r_s}$ and the foliation given by $df/f$. The divisorial model is $$ \mathcal D=r_1\operatorname{Div}(f_1)+r_2\operatorname{Div}(f_2)+\cdots+ r_s\operatorname{Div}(f_s). $$ The verification of this statement can be done first in the normal crossings situation and then, by means of Corollary \ref{cor:modlogsucblowing}, after reduction of singularities. \end{example} Our objective in this subsection is to give a proof of the following result, in terms of $\mathbb C$-divisors: \begin{theorem} \label{th:existenciayunicidadendimensiondos} Given a generalized curve ${\mathcal F}$ on $({\mathbb C}^2,0)$, there is a divisorial model $\mathcal D$ for $\mathcal F$. Moreover, if $\tilde {\mathcal D}$ is another divisorial model for $\mathcal F$, then $\tilde {\mathcal D}$ is projectively equivalent to $\mathcal D$; conversely, any ${\mathbb C}$-divisor projectively equivalent to $\mathcal D$ is also a divisorial model for $\mathcal F$. \end{theorem} Let us work in matrix terms. First of all, we recall a basic fact of linear algebra: \begin{lemma} \label{lema:matrizsimetrica} Let $A=(\alpha_{ij})$ be an $s\times s$ symmetric matrix of rank $s-1$, having coefficients in a field $k$.
Assume that there is a vector $\lambda=(\lambda_1,\lambda_2,\ldots,\lambda_s)\in k^s$ such that $\lambda A= 0$ and $\lambda_i\ne 0$ for any $i=1,2,\ldots,s$. Consider the diagonal minors $$ \Delta_\ell=\det B_\ell; \quad B_\ell=(\alpha_{ij})_{i,j\in \{1,2,\ldots,s\}\setminus \{\ell\}}. $$ Then $\Delta_\ell\ne 0$, for all $\ell=1,2,\ldots,s$. \end{lemma} \begin{proof} Up to reordering, we may assume that $\ell=1$. Let $F_i$ denote the rows and $C_i$ the columns of $A$. We know that $$ F_1=(-1/\lambda_1)\sum_{i=2}^s\lambda_iF_i,\quad C_1=(-1/\lambda_1)\sum_{i=2}^s\lambda_iC_i. $$ Let $A'$ be the matrix obtained from $A$ by replacing the first row with $F_1$ plus the linear combination $(1/\lambda_1)\sum_{i=2}^s\lambda_iF_i$ and let $A''$ be obtained from $A'$ by replacing the first column $C'_1$ of $A'$ with $C'_1$ plus the linear combination $(1/\lambda_1)\sum_{i=2}^s\lambda_iC'_i$, where the $C'_i$ are the columns of $A'$. We have that $\operatorname{rank}(A'')=s-1$ and $$ A''= \left( \begin{array}{c|c} 0& 0\\ \hline 0&B_1 \end{array} \right). $$ We conclude that $\Delta_1\ne 0$. \end{proof} Denote by $H=\cup_{i=1}^sH_i$ the union of the invariant branches of $\mathcal F$, where we fix an ordering $H_1,H_2,\ldots,H_s$. We define the $s\times s$ symmetric matrix $A_0({\mathcal F})=(\alpha_{ij})$ by $$ \alpha_{ij}= \left\{ \begin{array}{ccc} \operatorname{CS}_0({\mathcal F},H_i)&\text{ if }& i=j,\\ (H_i,H_j)_0 &\text{ if }& i\ne j. \end{array} \right. $$ Let us denote $B_0({\mathcal F})=(\alpha_{ij})_{2\leq i,j\leq s}$, that is, we have \begin{equation} \label{eq:matrizacero} A_0({\mathcal F})= \left( \begin{array}{c|ccc} \operatorname{CS}_0({\mathcal F},H_1)&(H_1,H_2)_0&\cdots&(H_1,H_s)_0\\ \hline (H_2,H_1)_0&&&\\ \vdots&&B_0({\mathcal F})&\\ (H_s,H_1)_0&&& \end{array} \right).
\end{equation} \begin{lemma} \label{rk:matriciallogaritmicmodel} Let $\mathcal F$ be a generalized curve on $({\mathbb C}^2,0)$ and consider a $\mathbb C$-divisor of the form ${\mathcal D}=\sum_{i=1}^s\lambda_i H_i$. The following statements are equivalent: \begin{enumerate} \item The divisor $\mathcal D$ is a divisorial model for $\mathcal F$. \item We have that $\lambda A_0({\mathcal F})=0$ and $\lambda_i\ne 0$ for any $i=1,2,\ldots,s$. \end{enumerate} \end{lemma} \begin{proof} Assume that $\mathcal D$ is a divisorial model for $\mathcal F$. We have that $\lambda_i\ne 0$ for any $i=1,2,\ldots,s$, since the support of $\mathcal D$ is the union $H=\cup_{i=1}^sH_i$ of the invariant curves of $\mathcal F$. Moreover, the indices of $\mathcal D$ coincide with the Camacho-Sad indices of $\mathcal F$. That is, for any $H_i$ we have that $\operatorname{CS}_0({\mathcal F},H_i)=I_0({\mathcal D},H_i)$; noting that $$ I_0({\mathcal D},H_i)=\frac{-\sum_{j\ne i}\lambda_j(H_i,H_j)_0}{\lambda_i}, $$ we conclude that $\lambda A_0({\mathcal F})=0$. Conversely, let us assume that $\lambda A_0({\mathcal F})=0$ and $\lambda_i\ne 0$ for any $i=1,2,\ldots,s$. Then, the support of ${\mathcal D}$ is equal to $H$. Moreover, the fact that $\lambda A_0({\mathcal F})=0$ implies that $$ \operatorname{CS}_0({\mathcal F},H_i)=\frac{-\sum_{j\ne i}\lambda_j(H_i,H_j)_0}{\lambda_i}= I_0({\mathcal D},H_i),\quad i=1,2,\ldots,s, $$ and we are done. \end{proof} \begin{lemma} \label{lema:matrices} Let $\mathcal F$ be a generalized curve on $({\mathbb C}^2,0)$. Then, we have: \begin{enumerate} \item The rank $\operatorname{rk}(A_0({\mathcal F}))$ of $A_0({\mathcal F})$ is equal to $s-1$. \item The determinant $\det B_0({\mathcal F})$ of $B_0({\mathcal F})$ is nonzero. \item There is a vector $\lambda=(\lambda_1,\lambda_2,\ldots,\lambda_s)$ such that $\lambda_i\ne 0$ for any $i=1,2,\ldots,s$ and $\lambda A_0({\mathcal F})=0$.
\end{enumerate} \end{lemma} \begin{proof} We work by induction on the length of a reduction of singularities of $H$. If this length is zero, either $H$ is non-singular or $H=H_1\cup H_2$ is the union of two transverse non-singular branches. When $H$ is non-singular, we have that $\mathcal F$ is non-singular too, since it is a generalized curve; then we have $\operatorname{CS}_0({\mathcal F},H)=0$ and we are done. If $H=H_1\cup H_2$ is the union of two transverse non-singular branches, the origin is a simple point which is not a saddle-node. In this case we have $$ \operatorname{CS}_0({\mathcal F},H_1)\operatorname{CS}_0({\mathcal F},H_2)=1. $$ We are done since $$ A_0({\mathcal F})= \left( \begin{array}{cc} \operatorname{CS}_0({\mathcal F},H_1)&1\\ 1&1/\operatorname{CS}_0({\mathcal F},H_1) \end{array} \right). $$ In order to prove the induction step, let us perform the blowing-up centered at the origin $$ \pi:(M,E)\rightarrow ({\mathbb C}^2,0);\quad E=\pi^{-1}(0). $$ Denote by $p_1,p_2, \ldots,p_t$ the points of intersection between $E$ and the strict transform $H'$ of $H$. Up to a reordering of $H_2,H_3,\ldots,H_s$, we may assume that $p_j\in H'_i$ if and only if $i\in I_j$, where \begin{equation} \label{eq:indicesjota} I_j=\{n_{j-1}+1,n_{j-1}+2,\ldots,n_j\};\quad n_0=0, n_t=s. \end{equation} Put $\nu_i=\nu_0(H_i)$, for $i=1,2,\ldots,s$. Note that $\nu_i=(E,H'_i)_{p_j}$ when $i\in I_j$. Denote by $\underline{\nu}$ the vector $\underline{\nu}=(\nu^{(1)},\nu^{(2)},\ldots,\nu^{(t)})$, where $$ \nu^{(j)}=(\nu_{n_{j-1}+1}, \nu_{n_{j-1}+2},\ldots,\nu_{n_{j}}),\quad j=1,2,\ldots,t.
$$ The matrices $A_{p_j}({\mathcal F}')$ are given by $$ A_{p_j}({\mathcal F}')= \left( \begin{array}{c|c} \operatorname{CS}_{p_j}({\mathcal F}',E)&\nu^{(j)}\\ \hline (\nu^{(j)})^t&B_{p_j}({\mathcal F}') \end{array} \right), $$ where $B_{p_j}({\mathcal F}')$ is the matrix $$ \left( \begin{array}{cccc} \operatorname{CS}_{p_j}({\mathcal F}',H'_{n_{j-1}+1})&(H'_{n_{j-1}+1},H'_{n_{j-1}+2})_{p_j}&\cdots&(H'_{n_{j-1}+1},H'_{n_{j}})_{p_j}\\ (H'_{n_{j-1}+2},H'_{n_{j-1}+1})_{p_j}&\operatorname{CS}_{p_j}({\mathcal F}',H'_{n_{j-1}+2})&\cdots&(H'_{n_{j-1}+2},H'_{n_{j}})_{p_j}\\ \vdots&\vdots&&\vdots\\ (H'_{n_{j}},H'_{n_{j-1}+1})_{p_j}&(H'_{n_{j}},H'_{n_{j-1}+2})_{p_j}&\cdots&\operatorname{CS}_{p_j}({\mathcal F}',H'_{n_{j}}) \end{array} \right). $$ We can apply the induction hypothesis at the points $p_j$. Thus, for any $j=1,2,\ldots,t$ we have: \begin{enumerate} \item $\det B_{p_j}({\mathcal F}')\ne 0$. \item There are vectors $ \lambda^{(j)}=(\lambda_{n_{j-1}+1}, \lambda_{n_{j-1}+2}, \ldots, \lambda_{n_j} ), $ with nonzero entries such that \begin{equation} \label{eq:lambdaapj} (1,\lambda^{(j)})A_{p_j}({\mathcal F}')=0. \end{equation} \end{enumerate} Now, let us define the matrix $A'$ by $$ A'= \left( \begin{array}{c|c|c|c|c} -1&\nu^{(1)}&\nu^{(2)}&\cdots&\nu^{(t)}\\ \hline ({\nu^{(1)}})^t&B_{p_1}({\mathcal F}')&0&\cdots&0\\ \hline ({\nu^{(2)}})^t&0&B_{p_2}({\mathcal F}')&\cdots&0\\ \hline \vdots&\vdots&\vdots&\cdots&\vdots\\ \hline ({\nu^{(t)}})^t&0&0&\cdots&B_{p_t}({\mathcal F}') \end{array} \right) = \left( \begin{array}{c|c} -1&\nu\\ \hline (\nu)^t& B' \end{array} \right). $$ Denote $\lambda=(\lambda_1,\lambda_2,\ldots,\lambda_s)= (\lambda^{(1)},\lambda^{(2)},\ldots,\lambda^{(t)})$. Let us show that \begin{equation} \label{eq:lambdaaprima} (1,\lambda)A'=0. \end{equation} Recall that $(1,\lambda^{(j)})A_{p_j}({\mathcal F}')=0$.
Looking at the first column of $(1,\lambda^{(j)})A_{p_j}({\mathcal F}')$, we have that $$ \operatorname{CS}_{p_j}({\mathcal F}',E)+\sum_{i=n_{j-1}+1}^{n_j}\lambda_i \nu_i=0, \quad j=1,2,\ldots,t. $$ Noting that $\sum_{j=1}^t\operatorname{CS}_{p_j}({\mathcal F}',E)=-1$, we conclude that \begin{equation} \label{eq:lambdamultiplicidadesmenosuno} -1+\sum_{i=1}^s\lambda_i \nu_i= \sum_{j=1}^t\left(\operatorname{CS}_{p_j}({\mathcal F}',E)+\sum_{i=n_{j-1}+1}^{n_j}\lambda_i \nu_i\right)=0. \end{equation} Hence, the first column of $(1,\lambda)A'$ is zero. The $(1+i)$-th column of $(1,\lambda)A'$, where $i=\ell+n_{j-1}\in I_j$ coincides with the $(1+\ell)$-th column of $ (1,\lambda^{(j)})A_{p_j}({\mathcal F}')$. This shows that $(1,\lambda)A'=0$. Note that $\det B'\ne 0$, since $\det B_{p_j}({\mathcal F}')\ne 0$ for any $j=1,2,\ldots,t$. On the other hand, $(1,\lambda)A'=0$ implies that \begin{equation} \label{eq:lambdabprima} \nu+\lambda B'=0. \end{equation} Moreover, recall the equalities \begin{eqnarray*} \operatorname{CS}_0({\mathcal F},H_i)&=& \operatorname{CS}_{p_j}({\mathcal F}',H'_i)+\nu_i^2,\quad i\in I_j\\ (H_i,H_\ell)_0&=& \left\{ \begin{array}{ccc} \nu_i\nu_\ell+(H'_i,H'_\ell)_{p_j}&\mbox{ if }& i,\ell\in I_j. \\ \nu_i\nu_\ell&\mbox{ if }& i\in I_j,\ell\notin I_j. \end{array} \right. \end{eqnarray*} Then, we have $$ A_0({\mathcal F})= B'+\operatorname{Diag}(\nu_1,\nu_2,\ldots,\nu_s)N, $$ where $N$ is the matrix that has all the rows equal to $\nu$. We conclude that $$\operatorname{rank} A_0({\mathcal F})\geq s-1,$$ since $\operatorname{rank} B'=s$ and the rows of $A_0({\mathcal F})$ are obtained from the ones of $B'$ by adding vectors that are proportional to the single vector $\nu$. Let us show that $\lambda A_0({\mathcal F})=0$, having in mind equations (\ref{eq:lambdabprima}) and (\ref{eq:lambdamultiplicidadesmenosuno}). 
We have \begin{eqnarray*} \lambda A_0({\mathcal F})&=&\lambda B'+\lambda\operatorname{Diag}(\nu_1,\nu_2,\ldots,\nu_s)N\\ &=&-\nu+(\lambda_1\nu_1,\lambda_2\nu_2,\ldots,\lambda_s\nu_s)N \\ &=& -\nu+(\sum_{i=1}^s\lambda_i\nu_i)\nu=(-1+\sum_{i=1}^s\lambda_i\nu_i)\nu=0. \end{eqnarray*} Since $\lambda\ne 0$ and $\lambda A_0({\mathcal F})=0$, we have that $\operatorname{rank}(A_0({\mathcal F}))\leq s-1$; together with the inequality $\operatorname{rank}(A_0({\mathcal F}))\geq s-1$, we conclude that $\operatorname{rank}(A_0({\mathcal F}))=s-1$; this shows property (1) of the statement. By construction, we have that $\lambda_i\ne 0$ for all $i=1,2,\ldots,s$; this shows property (3). Finally, property (2) follows from properties (1) and (3) in view of Lemma \ref{lema:matrizsimetrica}. \end{proof} \begin{remark} Note that the above proof implies that $\operatorname{CS}_0({\mathcal F},H)=0$ when there is only one invariant branch $H$, even if $H$ is singular. \end{remark} Let us end the proof of Theorem \ref{th:existenciayunicidadendimensiondos}. Take the matrix $A_0({\mathcal F})$ as in Equation \ref{eq:matrizacero}. We have a vector $\lambda=(\lambda_1,\lambda_2,\ldots,\lambda_s)$ with only nonzero entries such that $\lambda A_0({\mathcal F})=0$. Define the ${\mathbb C}$-divisor ${\mathcal D}$ as $$ {\mathcal D}= \lambda_1H_1+\lambda_2H_2+\cdots+\lambda_sH_s. $$ Applying Lemma \ref{rk:matriciallogaritmicmodel}, we see that $\mathcal D$ is a divisorial model for $\mathcal F$. If $\tilde {\mathcal D}=\sum_{i=1}^s\tilde\lambda_i H_i$ is projectively equivalent to ${\mathcal D}$, there is a constant $c\in {\mathbb C}^*$ such that $\tilde\lambda=c\lambda$. Hence we also have that $\tilde\lambda A_0({\mathcal F})=0$ and $\tilde{\mathcal D}$ is also a divisorial model for $\mathcal F$. Assume now that ${\mathcal D}'=\sum_{i=1}^s\lambda'_i H_i$ is another divisorial model for $\mathcal F$. By Lemma \ref{rk:matriciallogaritmicmodel}, we have that $\lambda' A_0({\mathcal F})=0$.
Since $A_0({\mathcal F})$ has rank $s-1$, there is a constant $c\in {\mathbb C}^*$ such that $\lambda'=c\lambda$ and thus the ${\mathbb C}$-divisor ${\mathcal D}'$ is projectively equivalent to ${\mathcal D}$. This ends the proof of Theorem \ref{th:existenciayunicidadendimensiondos}. \subsection{Stability under Morphisms} In this subsection, we characterize the two-dimensional divisorial models in terms of blowing-ups and also in terms of transverse maps. These properties are the essential facts we need for extending the concept of divisorial model to higher dimension. \begin{proposition} \label{pro:modelostrasunaexplosion} Consider a generalized curve ${\mathcal F}$ on $({\mathbb C}^2,0)$ and a ${\mathbb C}$-divisor ${\mathcal D}$. Let $\pi:(M,\pi^{-1}(0))\rightarrow ({\mathbb C}^2,0)$ be the blowing-up of the origin and denote by ${\mathcal F}'$ the transform of $\mathcal F$ by $\pi$. The following statements are equivalent: \begin{enumerate} \item The $\mathbb C$-divisor $\mathcal D$ is a divisorial model for $\mathcal F$. \item The transform $\pi^*{\mathcal D}$ of $\mathcal D$ is a divisorial model for ${\mathcal F}'$. \end{enumerate} \end{proposition} \begin{proof} Take notations as in the proof of Lemma \ref{lema:matrices} and put ${\mathcal D}=\sum_{i=1}^s\mu_i H_i$. Assume first that $\mathcal D$ is a divisorial model for $\mathcal F$. We have to prove that $\pi^*{\mathcal D}$ is a divisorial model for $\pi^*{\mathcal F}$ at any point $p\in \pi^{-1}(0)=E$. In view of the proof of Lemma \ref{lema:matrices}, we find vectors $(1,\lambda^{(j)})$ for any $j=1,2,\ldots,t$ such that $$ (1,\lambda^{(j)})A_{p_j}({\mathcal F}')=0, $$ with $\lambda^{(j)}=(\lambda_{n_{j-1}+1}, \lambda_{n_{j-1}+2},\ldots,\lambda_{n_{j}})$ and $\lambda_i\ne 0$ for $i=n_{j-1}+1, n_{j-1}+2,\ldots,n_{j}$. By Lemma~\ref{rk:matriciallogaritmicmodel}, this means that $$ {\mathcal D}^{(j)}=E+ \sum_{i=n_{j-1}+1}^{n_j}\lambda_i H'_i $$ is a divisorial model for ${\mathcal F}'$ at the point $p_j$.
Let us take $$ {\mathcal D}'= E+ \sum_{i=1}^s\lambda_i H'_i. $$ We have that ${\mathcal D}'$ is a divisorial model for ${\mathcal F}'$ at each of the points $p_j$, since the germ of ${\mathcal D}'$ at $p_j$ is equal to ${\mathcal D}^{(j)}$. Moreover, the $\mathbb C$-divisor ${\mathcal D}'$ is also a divisorial model for ${\mathcal F}'$ at any point $p\in E\setminus\{p_1,p_2,\ldots,p_t\}$, since the germ of ${\mathcal D}'$ at such points $p$ is just the ${\mathbb C}$-divisor $1\cdot E$. On the other hand, by Equation \ref{eq:lambdamultiplicidadesmenosuno} we have \begin{equation*} \sum_{i=1}^s\lambda_i \nu_i=1. \end{equation*} This implies that ${\mathcal D}'=\pi^*{\mathcal D}_0$, where ${\mathcal D}_0=\sum_{i=1}^s\lambda_i H_i$. Since ${\mathcal D}_0$ is a divisorial model for ${\mathcal F}$ at the origin $0\in {\mathbb C}^2$, we have that ${\mathcal D}=c{\mathcal D}_0$ for a nonzero constant $c\in {\mathbb C}^*$. Hence $\pi^*{\mathcal D}=c{\mathcal D}'$ and it is a divisorial model for ${\mathcal F}'$. Conversely, write ${\mathcal D}=\sum_{i=1}^s\mu_i H_i$ and assume that $\pi^*{\mathcal D}$ is a divisorial model for ${\mathcal F}'$. The exceptional divisor $E$ is invariant for ${\mathcal F}'$ and thus $\sum_{i=1}^s\mu_i\nu_i\ne 0$. Up to replacing ${\mathcal D}$ by a proportional $\mathbb C$-divisor, we can assume that $\sum_{i=1}^s\mu_i\nu_i=1$. This implies that $\pi^*{\mathcal D}={\mathcal D}'$, since they are both divisorial models for ${\mathcal F}'$ with fixed coefficient equal to $1$ for the exceptional divisor $E$. This implies also that ${\mathcal D}={\mathcal D}_0=\sum_{i=1}^s\lambda_i H_i$. We are done. \end{proof} The next corollary is a direct consequence of the preceding proposition: \begin{corollary} \label{cor:modlogsucblowing} Consider a generalized curve ${\mathcal F}$ on $({\mathbb C}^2,0)$ and a ${\mathbb C}$-divisor ${\mathcal D}$. Let $\pi:(M,\pi^{-1}(0))\rightarrow ({\mathbb C}^2,0)$ be the composition of a finite sequence of blowing-ups.
The following statements are equivalent: \begin{enumerate} \item $\mathcal D$ is a divisorial model for $\mathcal F$. \item $\pi^*{\mathcal D}$ is a divisorial model for the transform ${\mathcal F}'$ of $\mathcal F$ by $\pi$. \end{enumerate} \end{corollary} In particular, when $\pi$ is a reduction of singularities of ${\mathcal F}$, we have that ${\mathcal D}$ is a divisorial model for $\mathcal F$ if and only if $\pi^*{\mathcal D}$ is a divisorial model for $\pi^*{\mathcal F}$. This is the point of view taken in \cite{Cor} in the construction of divisorial models in dimension two. The property stated in the next Proposition \ref{prop:pullbacklogmod} is the starting point for defining divisorial models in higher ambient dimension. \begin{proposition} \label{prop:pullbacklogmod} Let $\mathcal F$ be a generalized curve on $({\mathbb C}^2,0)$ and consider a $\mathbb C$-divisor ${\mathcal D}$ on $({\mathbb C}^2,0)$. The following statements are equivalent: \begin{enumerate} \item The $\mathbb C$-divisor $\mathcal D$ is a divisorial model for $\mathcal F$. \item For any $\mathcal D$-transverse holomorphic map $\phi: ({\mathbb C}^2,0)\rightarrow ({\mathbb C}^2,0)$, we have that $\phi^*{\mathcal D}$ is a divisorial model for $\phi^*{\mathcal F}$. \end{enumerate} \end{proposition} \begin{proof} We see that (2) implies (1) by choosing the identity morphism. Let us show now that (1) implies (2). We have to test that $\phi^*{\mathcal D}$ is a divisorial model for $\phi^*{\mathcal F}$. Note that the existence of the pull-back $\phi^*{\mathcal F}$ is guaranteed by Proposition \ref{prop:pullbackgeneralizedcurve} and we also know that $\phi^*{\mathcal F}$ is a generalized curve. Let $\pi:(M,\pi^{-1}(0))\rightarrow ({\mathbb C}^2,0)$ be a reduction of singularities of $\mathcal F$ by blowing-ups centered at points.
In view of Proposition \ref{prop:appdos} in Appendix I, there is a commutative diagram of morphisms $$ \begin{array}{ccc} ({\mathbb C}^2,0)&\stackrel{\sigma}{\longleftarrow}&(N,\sigma^{-1}(0))\\ \phi\downarrow\;\; &&\;\;\downarrow\psi\\ ({\mathbb C}^2,0)&\stackrel{\pi}{\longleftarrow}&(M,\pi^{-1}(0)) \end{array}, $$ where $\sigma$ is the composition of a finite sequence of blowing-ups. Let us recall that $S=\operatorname{Supp}({\mathcal D})$ is the union of the invariant curves of $\mathcal F$. Note that $\phi^*{\mathcal F}$ exists, since $\phi$ is $S$-transverse, and the pullback $\phi^*{\mathcal D}$ exists for the same reason. We have that $\phi\circ\sigma$ is $S$-transverse, since $\phi$ is $S$-transverse, by hypothesis, and $\sigma$ is a composition of blowing-ups. Hence $\pi\circ\psi$ is $S$-transverse, because we have $\pi\circ\psi=\phi\circ\sigma$. This implies that $\psi$ is $\pi^{-1}(S)$-transverse and then it is $\pi^*{\mathcal D}$-transverse, since $$ \operatorname{Supp}(\pi^*{\mathcal D})\subset \pi^{-1}(S). $$ In this situation, we have that \begin{eqnarray} \label{eq:transformados1} \sigma^*(\phi^*{\mathcal F})=(\phi\circ\sigma)^*{\mathcal F}&=& (\pi\circ\psi)^*{\mathcal F} = \psi^*(\pi^*{\mathcal F}), \\ \label{eq:transformados2} \sigma^*(\phi^*{\mathcal D})=(\phi\circ\sigma)^*{\mathcal D}&=& (\pi\circ\psi)^*{\mathcal D}= \psi^*(\pi^*{\mathcal D}). \end{eqnarray} We know that $\pi^*{\mathcal D}$ is a divisorial model for $\pi^*{\mathcal F}$ in view of Corollary \ref{cor:modlogsucblowing}. Also by Corollary \ref{cor:modlogsucblowing}, we have that $\phi^*{\mathcal D}$ is a divisorial model of $\phi^*{\mathcal F}$ if and only if $\sigma^*(\phi^*{\mathcal D})$ is a divisorial model of $\sigma^*(\phi^*{\mathcal F})$. In view of Equations \ref{eq:transformados1} and \ref{eq:transformados2}, it is enough to show that $\psi^*(\pi^*{\mathcal D})$ is a divisorial model for $\psi^*(\pi^*{\mathcal F})$.
Recalling that $\pi^*{\mathcal F}$ is desingularized, that $\pi^*{\mathcal D}$ is a divisorial model for $\pi^*{\mathcal F}$ and that the desired verification may be done in a local way, we have reduced the problem to the case when $\mathcal F$ is desingularized. More precisely: \begin{quote} In order to prove that (1) implies (2), it is enough to consider only the case when $\mathcal F$ is desingularized. \end{quote} Then, we assume that $\mathcal F$ is desingularized. If $\mathcal F$ is non-singular, we have that ${\mathcal F}=(dx=0)$ and (up to multiplication by a constant) ${\mathcal D}=\operatorname{Div}(x)=1\cdot (x=0)$. In this case $\phi^*({\mathcal D})=\operatorname{Div}(x\circ \phi)$ and $\phi^*{\mathcal F}=(d(x\circ\phi)=0)$; we are done by Example \ref{ex:integral primera}. Assume now that $\mathcal F$ has a simple singular point at the origin; then it is generated by a logarithmic $1$-form $$ \eta=(\lambda+f(x,y))\frac{dx}{x}+(\mu+g(x,y))\frac{dy}{y},\quad f(0,0)=g(0,0)=0, $$ where $(\lambda,\mu)$ is non resonant, in the sense that $m\lambda+n\mu\ne 0$ for any pair of non-negative integer numbers $n,m$ such that $n+m\geq 1$. Moreover, the divisor $\mathcal D$ is given by $$ {\mathcal D}=\lambda \operatorname{Div}(x)+\mu \operatorname{Div}(y). $$ Now, we apply Proposition \ref{prop:appuno} to desingularize the list of functions $(x\circ \phi, y\circ \phi)$, by means of a sequence of blowing-ups $\sigma':(N',{\sigma'}^{-1}(0))\rightarrow ({\mathbb C}^2,0)$. It is enough to verify that ${\sigma'}^*(\phi^*{\mathcal D})$ is a divisorial model for ${\sigma'}^*(\phi^*{\mathcal F})$ at the points $p\in {\sigma'}^{-1}(0)$. This reduces the problem to the case in which $\phi$ has the form $$ x\circ\phi=Uu^av^b,\quad y\circ\phi=Vu^cv^d, \quad U(0,0)\ne 0\ne V(0,0), $$ where $a+b\geq 1$ and $c+d\geq 1$ (note that none of these functions is identically zero, since $\phi$ is $S$-transverse).
Put $(\lambda',\mu')=(\lambda a+\mu c, \lambda b+\mu d)$. Since $\phi^*(dx/x)=dU/U+a\,du/u+b\,dv/v$ and $\phi^*(dy/y)=dV/V+c\,du/u+d\,dv/v$, we have \begin{eqnarray*} \phi^*\eta&=& \lambda'\frac{du}{u}+\mu'\frac{dv}{v}+\alpha, \quad \mbox{ $\alpha$ holomorphic}, \\ \phi^*{\mathcal D}&=& \lambda'\operatorname{Div}(u)+\mu'\operatorname{Div}(v). \end{eqnarray*} Now, we see that $\phi^*{\mathcal D}$ is a divisorial model of ${\phi^*{\mathcal F}}$. Note that either $\lambda'\ne 0$ or $\mu'\ne 0$: otherwise we would have $\lambda'+\mu'=(a+b)\lambda+(c+d)\mu=0$ with $a+b\geq 1$ and $c+d\geq 1$, a resonance between $\lambda,\mu$. If $\lambda'\ne 0=\mu'$, we have a non singular foliation with $u=0$ the only invariant curve and $\phi^*{\mathcal D}=\lambda'\operatorname{Div}(u)$; we are done, and the case $\mu'\ne 0=\lambda'$ is symmetric. If $\lambda'\ne 0\ne \mu'$, we have a simple singularity and $\phi^*{\mathcal D}=\lambda'\operatorname{Div}(u)+\mu'\operatorname{Div}(v)$ is a divisorial model, as we know by Remark \ref{rk:indicessingsimples}. \end{proof} \begin{corollary} Let $\mathcal F$ be a generalized curve on $({\mathbb C}^2,0)$ and consider a divisorial model ${\mathcal D}$ of ${\mathcal F}$. Then the $\mathbb C$-divisor $\mathcal D$ is non-dicritical. \end{corollary} \begin{proof} Assume that there is a $\mathcal D$-transverse map $\phi:({\mathbb C}^2,0)\rightarrow ({\mathbb C}^2,0)$ such that $\phi^*{\mathcal D}=0$ and $\phi(y=0)\subset \operatorname{Supp}({\mathcal D})$. By Proposition \ref{prop:pullbacklogmod}, we know that $\phi^*{\mathcal D}$ is a divisorial model of $\phi^*{\mathcal F}$ at the origin. But this is not possible, since we know that $\phi^*{\mathcal F}$ exists and hence the divisorial model at the origin cannot be zero. \end{proof} \section{Reduction of Singularities of Foliated Spaces} Before considering reduction of singularities, let us make precise what we mean by a {\em desingularized foliated space} in the case of generalized hypersurfaces. This concept is developed for any foliated space in \cite{Can}.
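As a concrete illustration of the pull-back property of Proposition \ref{prop:pullbacklogmod}, consider the purely logarithmic case $f=g=0$ and a monomial map chosen here only as an example. Take $\eta=\lambda\,dx/x+\mu\,dy/y$ with $(\lambda,\mu)$ non resonant, ${\mathcal D}=\lambda \operatorname{Div}(x)+\mu \operatorname{Div}(y)$ and $x\circ\phi=u^2$, $y\circ\phi=uv$, so that $a=2$, $b=0$ and $c=d=1$. We obtain \begin{eqnarray*} \phi^*\eta&=&2\lambda\frac{du}{u}+\mu\left(\frac{du}{u}+\frac{dv}{v}\right)= (2\lambda+\mu)\frac{du}{u}+\mu\frac{dv}{v},\\ \phi^*{\mathcal D}&=&\lambda\operatorname{Div}(u^2)+\mu\operatorname{Div}(uv)= (2\lambda+\mu)\operatorname{Div}(u)+\mu\operatorname{Div}(v). \end{eqnarray*} Hence $(\lambda',\mu')=(2\lambda+\mu,\mu)$, which is again non resonant, since $m\lambda'+n\mu'=2m\lambda+(m+n)\mu$; thus $\phi^*{\mathcal F}$ has a simple singularity and $\phi^*{\mathcal D}$ is a divisorial model for it.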
\begin{definition} \label{def:foliatedspace} A {\em foliated space} is just a triple $((M,K),E,{\mathcal F})$, where \begin{enumerate} \item The {\em ambient space} $(M,K)$ is a germ of non-singular complex analytic variety along a connected and compact analytic subset $K\subset M$. \item The {\em divisor} $E\subset M$ is a normal crossings divisor on $M$. More precisely, it is a germ along $E\cap K$. \item The {\em foliation} $\mathcal F$ is a germ of holomorphic foliation on $M$ along the germification set $K$. \end{enumerate} We say that a foliated space $((M,K),E,{\mathcal F})$ is of {\em generalized hypersurface type} if and only if $\mathcal F$ is a generalized hypersurface and all the irreducible components of $E$ are invariant for $\mathcal F$. \end{definition} We say that a foliated space $((M,K),E,{\mathcal F})$ is {\em desingularized} if it is {\em simple} at any point $p\in K$, in the sense that we detail in Subsection \ref{definicionsimplepoint}. The property of being a simple point is an open property and hence it is also satisfied in an open neighborhood of $K$. \subsection{Simple Points} \label{definicionsimplepoint} The definition of ``simple point'' in any dimension has been introduced in \cite{Can-C, Can}. Here we recall this concept particularized to the case of foliated spaces of generalized hypersurface type. Consider a foliated space $((M,K), E, {\mathcal F})$ of generalized hypersurface type. Let us define when a point $p\in K$ is a simple point for the foliated space. Denote by $\tau$ the {\em dimensional type} of ${\mathcal F}$ at $p$ (see \cite{Can,Can-C, Can-M-RV}). Roughly speaking, the dimensional type $\tau$ is the minimum number of local coordinates needed to describe ${\mathcal F}$ at $p$. Denote by $e$ the number of irreducible components of $E$ through $p$. The first requirement for $p$ to be simple is that $\tau-1\leq e \leq \tau$.
In this way we have two categories of simple points: \begin{enumerate} \item[a)] {\em Simple corner points:} the simple points where $e=\tau$. \item[b)]{\em Simple trace points:} the simple points where $e=\tau-1$. \end{enumerate} Assume that $e=\tau$. Then, there are coordinates $(x_1,x_2,\ldots,x_n)$ at $p$ such that $E=(\prod_{i=1}^\tau x_i=0)$ and ${\mathcal F}$ is locally defined at $p$ by a meromorphic differential $1$-form $\eta$ written as \begin{equation} \label{simplecorners} \eta= \sum_{i=1}^\tau (\lambda_i+a_i(x_1,x_2,\ldots,x_\tau))\frac{dx_i}{x_i},\quad a_i\in {\mathcal O}_{M,p}, \end{equation} where $a_i(0)=0$ for $i=1,2,\ldots,\tau$. We say that $p$ is a {\em simple corner} if the following {\em non resonance property} holds: \begin{quote} \label{quote:resonance} ``For any $\mathbf{0}\ne \mathbf{m}=(m_i)_{i=1}^\tau\in {\mathbb Z}_{\geq 0}^\tau$, we have that $\sum_{i=1}^\tau{m_i}\lambda_i\neq 0$.'' \end{quote} Let us note that $\prod_{i=1}^\tau\lambda_i\ne0$. \begin{remark} It is known that the germs at $p$ of the irreducible components of $E$ are the only invariant germs of hypersurface for $\mathcal F$ at a simple corner $p$. One way of verifying this is as follows. First of all, we can assume that $\tau=n$, because of the ``cylindric shape'' of the foliation over its projection on the first $\tau$ coordinates. Assume now that there is another invariant hypersurface. Then we should have an invariant curve $t\mapsto \gamma(t)$ as follows: $$ \gamma(t)=(t^{m_1}U_1(t),t^{m_2}U_2(t),\ldots,t^{m_n}U_n(t)),\quad U_i(0)\ne 0, \; i=1,2,\ldots,n. $$ Let $\eta$ be as in Equation \ref{simplecorners}. The fact that $\gamma^*\eta=0$ implies that $\sum_{i=1}^n m_i\lambda_i=0$ and this contradicts the property of non-resonance. \end{remark} Assume now that $e=\tau-1$. 
The point $p$ is a {\em simple trace point} if and only if there is an invariant germ of non-singular hypersurface $H_p$ at $p$, not contained in $E$ and having normal crossings with $E$, in such a way that the germ of $\mathcal F$ at $p$ is a simple corner with respect to the normal crossings divisor $E\cup H_p$. \begin{remark} \label{rk:regular implicasimple} Given a foliated space $((M,K),E,{\mathcal F})$ of generalized hypersurface type, any point $p\in M\setminus\operatorname{Sing}({\mathcal F})$ is a simple point. In this case the dimensional type is $\tau=1$. If $e=0$, we have an ``improper'' trace point and the foliation is locally given by $dx=0$, where $x$ is a local coordinate; we write it as $dx/x=0$ and we see that $(M,\emptyset,\mathcal F)$ fulfils the definition of simple point for generalized hypersurfaces. If $e\geq 1$ we necessarily have that $e=1$, since all the components of $E$ are invariant and we have only one of them; we can choose an appropriate coordinate such that $x=0$ is the divisor $E$ and $\mathcal F$ is given by $dx/x=0$; hence it satisfies the definition of simple point. The above property is true in the general case when all the components of $E$ are invariant. In the presence of dicritical components, we have to ensure the normal crossings property between $\mathcal F$ and the divisor. See \cite{Can}. \end{remark} In our current case of generalized hypersurface type foliated spaces, simple points may be described by means of the {\em logarithmic order} as follows. Consider a foliated space $((M,K), E, {\mathcal F})$ of generalized hypersurface type. Take a point $p\in K$. There are local coordinates $(x_1,x_2,\ldots,x_n)$ such that $E=(\prod_{i=1}^ex_i=0)$ and ${\mathcal F}$ is generated locally at $p$ by an integrable meromorphic $1$-form $$ \eta=\sum_{i=1}^e a_i(x)\frac{dx_i}{x_i}+\sum_{i=e+1}^na_i(x)dx_i,\quad a_i\in {\mathcal O}_{M,p}, $$ where the coefficients $a_i$ do not have a common factor, for $i=1,2,\ldots,n$.
The {\em logarithmic order $\operatorname{LogOrd}_p(\eta,E)$} of the pair $(\eta,E)$ at $p$ is defined by $$ \operatorname{LogOrd}_p(\eta,E)=\min\{\nu_0(a_i);\; i=1,2,\ldots,n\}. $$ We also put $\operatorname{LogOrd}_p({\mathcal F},E)=\operatorname{LogOrd}_p(\eta,E)$, when $\eta$ generates ${\mathcal F}$ as above. \begin{proposition} \label{prop:simplepointsandlogorder} Assume that $((M,K), E, {\mathcal F})$ is a foliated space of generalized hypersurface type. Take a point $p\in K$. The following statements are equivalent: \begin{enumerate} \item The point $p$ is a simple point for $((M,K), E, {\mathcal F})$. \item $\operatorname{LogOrd}_p({\mathcal F},E)=0$. \end{enumerate} \end{proposition} \begin{proof} See also \cite{Can-M-RV, Moli}. We provide a direct proof in Appendix II. \end{proof} Thus, the locus of non simple points coincides with the {\em log-singular locus\/}: $$\operatorname{LogSing}({\mathcal F}, E)=\{p\in M;\quad \operatorname{LogOrd}_p({\mathcal F},E)\geq 1 \}.$$ \subsection{Reduction of Singularities} \label{reductionofsingularities} Let us recall now what we mean by a reduction of singularities of a foliated space of generalized hypersurface type. The existence of reduction of singularities for germs of codimension one holomorphic foliations is known since the paper of Seidenberg \cite{Sei} in ambient dimension two; when the ambient dimension is three, it has been proven in \cite{Can}. In general ambient dimensions it is still an open problem, but there is reduction of singularities for foliated spaces of generalized hypersurface type \cite{Fer-M}. Take a foliated space $((M,K), E,{\mathcal F})$ of generalized hypersurface type.
A {\em reduction of singularities of $((M,K), E,{\mathcal F})$} is a transformation of foliated spaces \begin{equation} \label{eq:reducciondesingularidades} \pi: ((M',K'),E',{\mathcal F}')\rightarrow ((M,K),E,{\mathcal F}) \end{equation} obtained by composition of a finite sequence of admissible blowing-ups of foliated spaces in such a way that $((M',K'),E',{\mathcal F}')$ is desingularized. A non-singular and connected closed analytic subset $(Y,Y\cap K)\subset (M,K)$ is an {\em admissible center} for $((M,K), E,{\mathcal F})$ when it is invariant for $\mathcal F$ and it has normal crossings with $E$. In this situation, we can perform the admissible blowing-up with center $Y$: $$ \pi_1:((M_1,K_1), E^1,{\mathcal F}_1)\rightarrow ((M,K), E,{\mathcal F}),\quad K_1=\pi_1^{-1}(K), $$ where ${\mathcal F}_1=\pi_1^*{\mathcal F}$ is the transform of ${\mathcal F}$ and $E^1=\pi_1^{-1}(E\cup Y)$. Such transformations may be composed. Then, a reduction of singularities $\pi$ as in Equation \ref{eq:reducciondesingularidades} is a finite composition $$ \pi=\pi_1\circ\pi_2\circ\cdots\circ\pi_s, $$ where each $\pi_i$ is an admissible blowing-up of foliated spaces, for $i=1,2,\ldots,s$. The number $s$ is called the {\em length} of $\pi$ and it will be important for performing inductive arguments. \begin{remark} We recall that a reduction of singularities of the (finite) union of invariant hypersurfaces induces a reduction of singularities of the foliated space, in the framework of generalized hypersurfaces, see \cite{Fer-M, Can-M-RV}. Thus, we can ensure the existence of a reduction of singularities for a given foliated space $((M,K), E,{\mathcal F})$ of generalized hypersurface type. \end{remark} \subsection{Notations on a Reduction of Singularities} \label{Notations on a Reduction of Singularities} Let us introduce some useful notations concerning a given reduction of singularities $\pi$ as in Equation \ref{eq:reducciondesingularidades}.
The morphism $\pi$ is a finite composition $\pi=\pi_1\circ\pi_2\circ\cdots\circ\pi_s$, where $$ \pi_j:((M_j,K_{j}), E^j, {\mathcal F}_j)\rightarrow ((M_{j-1},K_{j-1}), E^{j-1},{\mathcal F}_{j-1}),\quad j=1,2,\ldots,s, $$ is the admissible blowing-up with center $Y_{j-1}\subset M_{j-1}$. The initial and final foliated spaces are given by \begin{eqnarray*} ((M_0,K_0),E^0,{\mathcal F}_0)&=&((M,K),E,{\mathcal F}),\\ ((M_s,K_s),E^s,{\mathcal F}_s)&=&((M',K'),E',{\mathcal F}'). \end{eqnarray*} The exceptional divisor of $\pi_j$ is $E^j_j=\pi_j^{-1}(Y_{j-1})$. Moreover, for any $j=1,2,\ldots, s$ we write the decomposition into irreducible components of $E^{j-1}$ and $E^j$ as $$ E^{j-1}=\cup_{i\in I_0\cup \{1,2,\ldots,j-1\}}E^{j-1}_i,\quad E^{j}=\cup_{i\in I_0\cup \{1,2,\ldots,j\}}E^{j}_i, $$ where $E^{j}_i$ is the strict transform of $E^{j-1}_i$, for $i\in I_0\cup \{1,2,\ldots,j-1\}$. If we denote $I=I_0\cup \{1,2,\ldots,s\}$ and $E'_i=E^s_i$ for $i\in I$, we have that $E'=\cup_{i\in I}E'_i$. In the same way, we can express the decomposition of $E$ into irreducible components as $E=\cup_{i\in I_0}E_i$, where $E_i=E^0_i$, for $i\in I_0$. The inductive arguments on the length of $\pi$ are just based on the fact that after a first blowing-up, we have a reduction of singularities of smaller length. That is, when $s\geq 1$, we consider the decomposition $\pi=\pi_1\circ \sigma$, where $\sigma=\pi_2\circ\pi_3\circ\cdots\circ\pi_s$. Thus, we have \begin{equation} \begin{array}{ccc} \pi_1: ((M_1,K_1), E^1,{\mathcal F}_1)&\rightarrow& ((M,K), E,{\mathcal F}), \\ \sigma: ((M',K'), E',{\mathcal F}')&\rightarrow& ((M_1,K_1), E^1,{\mathcal F}_1). \end{array} \end{equation} Note that $\sigma$ is a reduction of singularities of length $s-1$. \begin{remark} For the sake of simplicity, we do not detail certain properties about the germs of spaces. We will just use expressions such as ``a point close enough to the germification set'' or ``by taking appropriate representatives''.
In each case, we trust the reader to supply the exact meaning of these expressions. \end{remark} Take a point $p\in M$, close enough to the germification set $K$. Then $\pi$ induces a reduction of singularities over the ambient space $(M,p)$ that we denote \begin{equation} \label{eq:pisobrep} \pi_p: ((M',K'_p), E',{\mathcal F}')\rightarrow ((M,p),E,{\mathcal F}),\quad K'_p=\pi^{-1}(p). \end{equation} We can decompose it as $\pi_p=\pi_{1,p}\circ\sigma_p$, where \begin{equation} \label{eq.redsingsobrep} \begin{array}{cccc} \pi_{1,p}: ((M_1,K_{1,p}), E^1,{\mathcal F}_1)&\rightarrow& ((M,p), E,{\mathcal F}),& K_{1,p}=\pi_1^{-1}(p), \\ \sigma_p: ((M',K'_p), E',{\mathcal F}')&\rightarrow& ((M_1,K_{1,p}), E^1,{\mathcal F}_1).& \end{array} \end{equation} Let us next unify the notations for the components of the exceptional divisors and for the irreducible invariant hypersurfaces not necessarily contained in them. Denote by $S'\subset M'$ the union of invariant hypersurfaces of ${\mathcal F}'$ not contained in the divisor $E'$. We know that $S'$ is a disjoint union of non singular hypersurfaces and $D'=E'\cup S'$ is also a normal crossings divisor on $M'$. Since the irreducible components of $E'$ are invariant, we have that $D'$ is the union of all invariant hypersurfaces of ${\mathcal F}'$. Let us denote by $$ S'=\cup_{b\in B}S'_b $$ the decomposition into irreducible components of $S'$, where we choose the set of indices $B$ in such a way that $B\cap I=\emptyset$. Denote $D'_i=E'_i$ if $i\in I$ and $D'_b=S'_b$ if $b\in B$. We have that $$ D'=\cup_{j\in I\cup B}D'_j $$ is the decomposition into irreducible components of $D'$. Moreover, let us denote $D_j=\pi(D'_j)$, for $j\in I_0\cup B$. Then $D=\cup_{j\in I_0\cup B}D_j\subset M$ is the union of the irreducible invariant hypersurfaces of $\mathcal F$.
This idea has already been useful in \cite{Can-M} and \cite{Can-M-RV}. \begin{remark} In our applications we will consider points that can be outside the germification set, but close enough to it. So, if we say ``take a point $p\in M$'' we understand that it is ``close enough to $K$''. \end{remark} Let us take a point $p\in M$. We say that $p$ is an {\em even point} for $((M,K),E,{\mathcal F})$ if either $p\not\in \operatorname{Sing}({\mathcal F})$ (see Remark \ref{rk:regular implicasimple}) or $p\in \operatorname{Sing}({\mathcal F})$ and the singular locus $\operatorname{Sing}({\mathcal F})$ satisfies the following properties, locally at $p$: \begin{enumerate} \item[a)] The singular locus $\operatorname{Sing}({\mathcal F})$ has codimension two in $M$, it is non-singular and it has normal crossings with $E$. \item[b)] The foliation ${\mathcal F}$ is equimultiple along $\operatorname{Sing}({\mathcal F})$. In particular, each irreducible component of $E$ through $p$ contains $\operatorname{Sing}({\mathcal F})$ and there are at most two of them. \end{enumerate} We say that $p$ is an {\em equireduction point}, or a point of {\em $2$-equireduction}, for the foliated space $((M,K),E,{\mathcal F})$ if it is an even point and this is stable under blowing-ups centered at the singular locus. More precisely, we say that an even point $p\in M$ is an {\em equireduction point} for $((M,K), E, {\mathcal F})$ if for any finite sequence of local blowing-ups over $p$ \begin{equation} \label{eq:sucesiondeequirreduccion} ((M,p),E,{\mathcal F})\stackrel{\sigma_1}{\leftarrow}((M_1,p_1),E^1,{\mathcal F}_1) \stackrel{\sigma_2}{\leftarrow}\cdots \stackrel{\sigma_m}{\leftarrow} ((M_m,p_m),E^m,{\mathcal F}_m) \end{equation} such that the center of $\sigma_i$ is $\operatorname{Sing}({\mathcal F}_{i-1})$, for $i=1,2,\ldots,m$, we have the following properties: \begin{enumerate} \item The point $p_m$ is an even point for $((M_m,p_m),E^m,{\mathcal F}_m)$.
\item If $p_m\in \operatorname{Sing}({\mathcal F}_m)$, the induced morphism $ \operatorname{Sing}({\mathcal F}_{m})\rightarrow \operatorname{Sing}({\mathcal F}_{0}) $ is étale. \end{enumerate} Let us recall that a {\em local blowing-up} is the composition of a blowing-up $$\pi:(M',\pi^{-1}(p))\rightarrow (M,p)$$ with an immersion of germs $(M',p')\rightarrow (M',\pi^{-1}(p))$. The next two results may be obtained by a direct adaptation of the statements proved in \cite{Can-M} to the case of generalized hypersurfaces: \begin{proposition} \label{pro:codimensionnoequireduccion} Let $((M,K),E,{\mathcal F})$ be a foliated space of generalized hypersurface type. The set $\operatorname{Z}({\mathcal F}, E)$ of non-equireduction points is a closed analytic subset of $M$ of codimension at least three. \end{proposition} \begin{proposition} \label{pro:secciontransversalequirreduccion} Let $p\in M$ be a singular equireduction point for a foliated space $((M,K),E,{\mathcal F})$ of generalized hypersurface type. Any two-dimensional section $$ (\Delta,p)\subset (M,p) $$ transverse to $\operatorname{Sing}({\mathcal F})$ is a Mattei-Moussu transversal for ${\mathcal F}$ and it induces a foliated space $((\Delta,p), E\cap \Delta,{\mathcal F}\vert_{{\Delta}})$ that is a generalized curve. \end{proposition} Consider a singular equireduction point $p\in M$ for the generalized hypersurface type foliated space $((M,K),E,{\mathcal F})$. Let us perform the blowing-up with center at the whole singular locus $(\operatorname{Sing}(\mathcal F), p)\subset (M,p)$: $$ \varsigma_1: ((M_1,\varsigma_1^{-1}(p)),E^1,{\mathcal F}_1) \rightarrow ((M,p),E,{\mathcal F}). $$ There are only finitely many points $\{p_{j}\}_{j=1}^{n_1}$ over $p$ in the singular locus $\operatorname{Sing}({\mathcal F}_1)$ and the morphism of germs $$ (\operatorname{Sing}({\mathcal F}_1),p_{j}) \rightarrow (\operatorname{Sing}({\mathcal F}),p) $$ is étale for all $j=1,2,\ldots,n_1$.
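\begin{remark} As a simple illustration of these notions (a standard example, not used in the sequel), consider on $({\mathbb C}^3,0)$, with $E=\emptyset$, the foliation ${\mathcal F}$ given by $$ \omega=2y\,dy-3x^2\,dx=d(y^2-x^3). $$ The singular locus is the $z$-axis $\{x=y=0\}$: it is non-singular, of codimension two, and ${\mathcal F}$ is equimultiple along it. Each two-dimensional section $\Delta=\{z=c\}$ induces the cuspidal generalized curve given by $d(y^2-x^3)=0$, and the blowing-ups centered at the successive singular loci reproduce, uniformly in $z$, the reduction of singularities of the cusp; hence every point of the $z$-axis is a singular equireduction point. \end{remark}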
Then, we can blow up $((M_1,\varsigma_1^{-1}(p)),E^1,{\mathcal F}_1)$ with center $\operatorname{Sing}({\mathcal F}_1)$ to obtain a morphism $$ \varsigma_2: ((M_2,\varsigma_2^{-1}(\varsigma_1^{-1}(p))),E^2,{\mathcal F}_2) \rightarrow ((M_1,\varsigma_1^{-1}(p)),E^1,{\mathcal F}_1). $$ Note that the center of $\varsigma_2$ has exactly $n_1$ connected components, each one passing through a point $p_{j}$, for $j=1,2,\ldots,n_1$. Locally at each point $p_{j}$ we have an induced blowing-up with center $(\operatorname{Sing}({\mathcal F}_1),p_{j})$: $$ \varsigma_{p_{j}}:((M_2,\varsigma_2^{-1}(p_{j})),E^2,{\mathcal F}_2) \rightarrow ((M_1,p_{j}),E^1,{\mathcal F}_1). $$ Continuing indefinitely in this way, we obtain the {\em equireduction sequence} \begin{equation} \label{eq:sucesiondeequirreduccion2} {\mathcal E}_{M,E,{\mathcal F}}^p: ((M,p),E,{\mathcal F})\stackrel{\varsigma_1}{\leftarrow}((M_1,\varsigma_1^{-1}(p)),E^1,{\mathcal F}_1) \stackrel{\varsigma_2}{\leftarrow} \cdots . \end{equation} The {\em infinitely near points of $p$} in $M_\ell$ are the points in $$ (\varsigma_1\circ\varsigma_2\circ \cdots\circ \varsigma_{\ell})^{-1}(p)\cap \operatorname{Sing}({\mathcal F}_\ell) $$ that we can write as $p_{j_1j_2\cdots j_\ell}$, with the ``dendritic'' property that $$ \varsigma_\ell(p_{j_1j_2\cdots j_{\ell-1} j_\ell})=p_{j_1j_2\cdots j_{\ell-1}}. $$ Let us detail some consequences of Proposition \ref{pro:secciontransversalequirreduccion} relative to plane sections at an equireduction point $p\in M$. Take a two-dimensional section $(\Delta,p)\subset (M,p)$ transverse to $\operatorname{Sing}({\mathcal F})$. First of all, we consider the following remark: \begin{remark} \label{rk:sectiondimdos} In view of \cite{Mat}, we know that $p$ is a simple point for $((M,p),E,{\mathcal F})$ if and only if it is a simple point for the restriction $((\Delta,p),E\cap \Delta,{\mathcal F}\vert_{\Delta})$.
\end{remark} If we consider a local holomorphic generator $\omega$ of ${\mathcal F}$ at $p$, without common factors in its coefficients, we know that $\eta=\omega\vert_{\Delta}$ is a local generator of ${\mathcal F}\vert_{\Delta}$ and moreover $$ \nu_{\Sigma}(\omega)=\nu_p(\omega)=\nu_p(\eta), \quad \Sigma=(\operatorname{Sing}({\mathcal F}), p). $$ This makes the blowing-ups $\varsigma_i$ in the equireduction sequence of Equation \ref{eq:sucesiondeequirreduccion2} ``compatible'' with the transversal section $\Delta$. More precisely, the equireduction sequence induces a sequence of blowing-ups $\bar\varsigma_i$ of the two-dimensional section $((\Delta,p),E\cap \Delta,{\mathcal F}\vert_{\Delta})$ with centers at the points $p_{j_1j_2\cdots j_\ell}$, in such a way that the following diagram is commutative: \begin{equation} \label{eq:equrreducciondomtwo} \begin{array}{ccccc} ((\Delta,p),E\cap \Delta,{\mathcal F}\vert_{\Delta})& \stackrel{\bar\varsigma_1}{\longleftarrow}& ((\Delta_1,\varsigma_1^{-1}(p)),E^1\cap \Delta_1,{\mathcal F}_1\vert_{\Delta_1})& \stackrel{\bar\varsigma_2}{\longleftarrow}& \cdots \\ \downarrow&&\downarrow&& \\ ((M,p),E,{\mathcal F})&\stackrel{\varsigma_1}{\longleftarrow}& ((M_1,\varsigma_1^{-1}(p)),E^1,{\mathcal F}_1)& \stackrel{\varsigma_2}{\longleftarrow}&\cdots \end{array} \end{equation} Looking at the diagram in Equation \ref{eq:equrreducciondomtwo}, we know that the sequence of blowing-ups $\bar\varsigma_i$ desingularizes the two-dimensional foliated space $((\Delta,p),E\cap \Delta,{\mathcal F}\vert_{\Delta})$, since we blow up singular points at each step; hence we may apply the existence of reduction of singularities in dimension two, see \cite{Sei,Can-C-D}. As a consequence, at a finite step of the equireduction sequence given in Equation \ref{eq:sucesiondeequirreduccion2}, we reach a reduction of singularities of $((M,p), E,{\mathcal F})$.
\subsection{Relative Equireduction Points and Relative Transversality} Let us introduce a version of the equireduction points relative to a fixed reduction of singularities \begin{equation*} \pi: ((M',K'),E',{\mathcal F}')\rightarrow ((M,K),E,{\mathcal F}) \end{equation*} as in Equation \ref{eq:reducciondesingularidades}. We also introduce the locus of {\em $\pi$-good} points that will be essential in our proof of the existence of divisorial models. Let us use the notation introduced in Subsection \ref{Notations on a Reduction of Singularities}. We define the locus $Z_\pi({\mathcal F},E)$ of the {\em points that are not points of $\pi$-equireduction} as $$ Z_\pi({\mathcal F},E)=Z({\mathcal F},E)\cup B_\pi({\mathcal F}, E), $$ with $B_\pi({\mathcal F}, E)=\cup_{j\in J_\pi}(\pi_1\circ\pi_2\circ\cdots\circ\pi_j)(Y_j)$, where $J_\pi$ is the set of indices $j$ in $\{0,1,\ldots,s-1\}$ such that the center $Y_j$ of $\pi_{j+1}$ has codimension at least three. The complement $M\setminus Z_\pi({\mathcal F},E)$ is the set of {\em $\pi$-equireduction points}. \begin{remark} \label{rk:codimpiequireduccion} We have that $Z_\pi({\mathcal F},E)$ is a closed analytic subset of $M$ of codimension at least three. \end{remark} If we take a point $p\in M\setminus Z_\pi({\mathcal F},E)$, the morphism $$ \pi_p:((M',\pi^{-1}(p)), E',{\mathcal F}')\rightarrow ((M,p), E,{\mathcal F}) $$ is a ``part of the equireduction sequence'' in Equation \ref{eq:sucesiondeequirreduccion2}, in the sense that there is a finite step of the equireduction sequence that can be obtained from $$ ((M',\pi^{-1}(p)), E',{\mathcal F}') $$ by repeatedly blowing up the singular locus. Recall now that we have the decomposition $\pi=\pi_1\circ \sigma$ and that $\pi_1^{-1}(Y)=E^1_1$ is the exceptional divisor of the first blowing-up $\pi_1$, where $Y=Y_0$. Take a point $q\in E^1_1$. Let us put $p=\pi_1(q)$ and denote by $F_p=\pi_1^{-1}(p)$ the fiber of $\pi_1$ over $p$.
Recall that $F_p$ is isomorphic to a complex projective space of dimension equal to $n-d-1$, where $n=\dim M$ and $d=\dim Y$. We say that $q$ is a point of {\em $\pi_1$-transversality} if either $q\notin \operatorname{Sing}({\mathcal F}_1)$ or $q\in \operatorname{Sing}({\mathcal F}_1)$ and the germ $(\Sigma,q)$ of the singular locus $\operatorname{Sing}({\mathcal F}_1)$ at $q$ is non singular, it is contained in $(E^1_1,q)$, it has codimension two in $(M_1,q)$ and, moreover, we have the transversality property with respect to the fiber $F_p$ given by $T_q(F_p)\not\subset T_q\Sigma$, where $T_q$ stands for the tangent space. Let us denote by $T^1_\pi \subset E^1_1$ the locus of points that are not of $\pi_1$-transversality. Note that we have the closed analytic set $Z_\sigma({\mathcal F}_1,E^1)$ defining the locus of points in $M_1$ that are not of $\sigma$-equireduction. The codimension of $Z_\sigma({\mathcal F}_1,E^1)$ in $M_1$ is at least three. We say that a point $p\in Y$ is a {\em $\pi$-bad point} if and only if $$ \dim \left( \left( Z_\sigma({\mathcal F}_1,E^1) \cup T^1_\pi \right) \cap F_p \right) \geq n-d-2,\quad d=\dim Y. $$ Denote by $B_\pi\subset Y$ the set of $\pi$-bad points. \begin{lemma} The set $B_\pi$ is a closed analytic subset of $Y$ such that $B_\pi\ne Y$. \end{lemma} \begin{proof} We know that $\dim \left( \left( Z_\sigma({\mathcal F}_1,E^1) \cup T^1_\pi \right) \cap F_p \right)$ is the maximum $$ \max\{ \dim (Z_\sigma({\mathcal F}_1,E^1) \cap F_p), \dim( T^1_\pi \cap F_p)\}. $$ Hence $B_\pi=B'\cup B''$ where \begin{eqnarray*} B' &=& \{p\in Y;\; \dim (Z_\sigma({\mathcal F}_1,E^1) \cap F_p) \geq n-d-2\}, \\ B''&=&\{p\in Y;\; \dim (T^1_\pi \cap F_p) \geq n-d-2 \}. \end{eqnarray*} Recall that the projection $E^1_1\rightarrow Y$ is a fibration with fiber ${\mathbb P}^{n-d-1}_{\mathbb C}$. Since the dimension of the fibers is upper semicontinuous, we see that both $B'$ and $B''$ are closed analytic subsets of $Y$.
The codimension of $Z_\sigma({\mathcal F}_1,E^1)\cap E^1_1$ in $E^1_1$ is at least two, hence there is a closed subset $Z'\subset Y$, with $Z'\ne Y$, such that the codimension of $Z_\sigma({\mathcal F}_1,E^1)\cap F_p$ in $F_p$ is at least two, for any $p\in Y\setminus Z'$; in particular, we have that $B'\ne Y$. Let us show now that $B''\ne Y$. Decompose the analytic subset $\operatorname{Sing}({\mathcal F}_1)\cap E^1_1$ of $E^1_1$ as a union $$ \operatorname{Sing}({\mathcal F}_1)\cap E^1_1= \Sigma_1\cup \Sigma_2\cup\cdots\cup\Sigma_t\cup R_1, $$ where the $\Sigma_i$ are the irreducible components of $\operatorname{Sing}({\mathcal F}_1)$ that have codimension two and that are contained in $E^1_1$, for $i=1,2,\ldots,t$. The closed analytic set $R_1\subset E^1_1$ is the union of the intersections with $E^1_1$ of the other irreducible components of $\operatorname{Sing}({\mathcal F}_1)$. Let us note that $R_1$ has codimension at least two in $E^1_1$ and that $R_1\subset T^1_\pi$. Moreover, we have that $$ T^1_\pi=R_1\cup\bigcup_{i=1}^t(\Sigma_i\cap T^1_\pi). $$ We have that $B''=B''_{1}\cup B''_{2}\cup\cdots\cup B''_{t}\cup B'''$, where \begin{eqnarray*} B''_{i}&=& \{p\in Y;\; \dim (\Sigma_i\cap T^1_\pi \cap F_p)\geq n-d-2 \}, \\ B'''&=& \{p\in Y;\; \dim (R_1\cap F_p) \geq n-d-2 \}. \end{eqnarray*} Since the codimension of $R_1$ in $E^1_1$ is at least two, we have that $B'''\ne Y$. Let us show that $B''_{i}\ne Y$, for $i=1,2,\ldots,t$. If $\dim (\Sigma_i\cap T^1_\pi)\leq n-3$, we are done. Thus, we assume that $\dim(\Sigma_i\cap T^1_\pi)= n-2$ and hence $\Sigma_i\subset T^1_\pi$. Take $U=\Sigma_i\setminus \Upsilon$, where $\Upsilon$ is the set of singularities of the closed analytic set $\operatorname{Sing}({\mathcal F}_1)$. The fact that $U\subset T^1_\pi$ means that for any point $q\in U$ we have that $T_q(F_{\pi_1(q)})\subset T_q\Sigma_i$. This property implies that $\Sigma_i$ is a union of fibers.
Then, if $B''_i=Y$, we have that $\Sigma_i=E^1_1$, a contradiction, since $\Sigma_i$ is a hypersurface of $E^1_1$. We have a decomposition of $B_\pi$ into a finite union of strict closed analytic subsets of $Y$. Since $Y$ is irreducible, we conclude that $B_\pi\ne Y$. \end{proof} The next corollary is an important step in our proof of the existence of divisorial models in higher dimension: \begin{corollary} \label{cor: equirrducciongenerica} There is a strict closed analytic subset $Z\subset Y$, with $Z\ne Y$, such that any point $p\in Y\setminus Z$ satisfies the following properties: \begin{enumerate} \item The center $Y$ is equimultiple for $\mathcal F$ at the point $p$. \item There is a Mattei-Moussu transversal $(\Delta,p)$ of ${\mathcal F}$ at $p$ such that the strict transform $\Delta^1$ of $\Delta$ by $\pi_{1,p}$ intersects $\operatorname{Sing}({\mathcal F}_1)$ transversely and only in points of $\sigma$-equireduction. \end{enumerate} \end{corollary} \begin{proof} Let $C\subset Y$ be the set of points where $Y$ is not equimultiple for $\mathcal F$. We know that $C\ne Y$ is a strict closed analytic subset of $Y$. Now, take $Z=C\cup B_\pi$, which is also a strict closed analytic subset of $Y$. Let us show that $Z$ satisfies the statement. Take a point $p\in Y\setminus Z$. Since $p\notin C$, it is a point of equimultiplicity of $Y$ with respect to $\mathcal F$. Consider the fiber $F_p$ of $p$; we know that it is a projective space of dimension $n-d-1$. Since $p\notin B_\pi$, we have that $$ \dim \left( \left( Z_\sigma({\mathcal F}_1,E^1) \cup T^1_\pi \right) \cap F_p \right) \leq n-d-3,\quad d=\dim Y. $$ This means that for a nonempty Zariski open set $U$ of the Grassmannian of lines $\ell$ in the projective space $F_p$, we have the property that $$ \ell\cap (Z_\sigma({\mathcal F}_1,E^1) \cup T^1_\pi)=\emptyset.
$$ We also know that for a nonempty Zariski open set $W$ of the Grassmannian of lines $\ell$ in the projective space $F_p$, we have the property that $\ell$ meets $\operatorname{Sing}({\mathcal F}_1)$ transversely. By the generic properties of Mattei-Moussu transversals, we can choose $(\Delta,p)$ for $\mathcal F$ such that the line $\ell=F_p\cap \Delta_1$ lies in $U\cap W$. The desired property follows from the definition of the sets $Z_\sigma({\mathcal F}_1,E^1)$ and $T^1_\pi$. \end{proof} \section{Divisorial Models For Generalized Hypersurfaces} \label{Logarithmic Models For Generalized Hypersurfaces} We introduce the definition of divisorial model as follows: \begin{definition} Consider a generalized hypersurface $\mathcal F$ on a non-singular complex analytic variety $M$ and a $\mathbb C$-divisor $\mathcal D$ on $M$. We say that $\mathcal D$ is a {\em divisorial model for ${\mathcal F}$ at a point $p$ in $M$} if the following conditions hold: \begin{enumerate} \item The support $\operatorname{Supp}({\mathcal D}_p)$ of the germ ${\mathcal D}_p$ of $\mathcal D$ at $p$ is the union of the germs at $p$ of invariant hypersurfaces of $\mathcal F$. \item For any ${\mathcal D}$-transverse map $\phi:({\mathbb C}^2,0)\rightarrow (M,p)$, the ${\mathbb C}$-divisor $\phi^*{\mathcal D}$ is a divisorial model for $\phi^*{\mathcal F}$. \end{enumerate} We say that $\mathcal D$ is a divisorial model for $\mathcal F$ if it fulfils the above conditions at every point $p$ in $M$. \end{definition} In view of Proposition \ref{prop:pullbacklogmod}, this definition extends the one for dimension two, stated in Definition \ref{def:modelodimensiondos}. This section is devoted to giving a proof of the following results: \begin{theorem} \label{teo:main} Every generalized hypersurface on $({\mathbb C}^n,0)$ has a divisorial model.
\end{theorem} \begin{proposition} \label{pro: uniquenessandnondicriticalness} Let $\mathcal D$ be a divisorial model for a generalized hypersurface $\mathcal F$ on $({\mathbb C}^n,0)$. Then $\mathcal D$ is a non-dicritical $\mathbb C$-divisor. Moreover, any other $\mathbb C$-divisor is a divisorial model for $\mathcal F$ if and only if it is projectively equivalent to $\mathcal D$. \end{proposition} In order to build a divisorial model, we take a reduction of singularities $\pi$ of the generalized hypersurface $\mathcal F$. There is a natural projective class of $\mathbb C$-divisors associated to the desingularized foliated space obtained from $\pi$. The divisorial model will be a $\mathbb C$-divisor that is transformed into this class by the reduction of singularities. For technical convenience, we systematically consider foliated spaces equipped with a distinguished normal crossings divisor, although the concept of divisorial model does not involve such a normal crossings divisor. Note that the uniqueness part in Proposition \ref{pro: uniquenessandnondicriticalness} comes from the corresponding fact in dimension two, just by taking a Mattei-Moussu transversal. \subsection{$\mathbb C$-Divisors Associated to Desingularized Foliated Spaces} \label{DivisorsAssociatedtoDesingularizedFoliateSpaces} Let us consider a foliated space $((M',K'), E', {\mathcal F}')$ of generalized hypersurface type, where $K'$ is a connected and compact analytic subset of $M'$. Let us make the following hypotheses: \begin{itemize} \item There is a logarithmic $1$-form $\eta'$ on $(M',K')$ fully associated to ${\mathcal F}'$. \item The foliated space $((M',K'),E',{\mathcal F}')$ is desingularized. \end{itemize} \begin{remark} \label{rk:existenciadeetaprima} Consider a foliated space $((M,K), E, {\mathcal F})$ of generalized hypersurface type, where $K$ is a connected and compact analytic subset of $M$.
Assume that there is a logarithmic $1$-form $\eta$ on $(M,K)$ fully associated to ${\mathcal F}$ and consider a reduction of singularities $$ \pi:((M',K'), E',{\mathcal F}')\rightarrow ((M,K),E,{\mathcal F}) $$ as in Equation \ref{eq:reducciondesingularidades}. Then $\eta'=\pi^*{\eta}$ is a logarithmic $1$-form on $(M',K')$ fully associated to ${\mathcal F}'$, in view of Proposition \ref{prop:pullbackoflogaritmicformsfullyassociated}. We note that the existence of such $\eta$, and hence $\eta'$, is assured when $K=\{0\}$ is a single point. \end{remark} In this subsection we build a $\mathbb C$-divisor ${\mathcal D}^{\eta'}$ on $(M',K')$, obtained from $\eta'$ in terms of residues, that defines a divisorial model for ${\mathcal F}'$ on $(M',K')$. Using notation compatible with Subsection \ref{Notations on a Reduction of Singularities}, we denote by $S'\subset M'$ the union of invariant hypersurfaces of ${\mathcal F}'$ not contained in the divisor $E'$. We know that $S'$ is a disjoint union of non singular hypersurfaces and $D'=E'\cup S'$ is also a normal crossings divisor on $M'$. Since the irreducible components of $E'$ are invariant, we have that $D'$ is the union of all invariant hypersurfaces of ${\mathcal F}'$. In accordance with Subsection \ref{Notations on a Reduction of Singularities}, we denote by $D'=\cup_{j\in I\cup B}D'_j$ the decomposition of $D'$ into irreducible components, where $E'=\cup_{i\in I}E'_i$ and $S'=\cup_{b\in B}S'_b$. Taking into account Saito's residue theory in \cite{Sai} and noting that $D'$ has the normal crossings property, the residue $\operatorname{Res}_p(\eta')$ of the germ of $\eta'$ at a point $p\in K'\subset M'$ is an element $$ \operatorname{Res}_p(\eta')\in \oplus_{j\in I_p\cup B_p}{\mathcal O}_{D'_j,p}, $$ where $I_p=\{i\in I; p\in E'_i\}$ and $B_p=\{b\in B; p\in S'_b\}$.
Note that $B_p$ is either empty (at corner points) or has exactly one element (at trace points), in view of the description of simple points in Subsection \ref{definicionsimplepoint}. More generally, the residues induce global holomorphic functions \begin{equation} \label{eq:residuefunctions} f_j:D'_j\rightarrow {\mathbb C},\quad j\in I\cup B. \end{equation} Let us note that each $D'_j$ is a germ along $D'_j\cap K'$. Thus, the functions $f_j$ are constant along the connected components of the compact sets $D'_j\cap K'$. More precisely, following Saito's theory, we have local coordinates at $p$ that can be labelled as $(x_j; j\in I_p\cup B_p)\cup (y_s;s\in A)$ such that the germ $\eta'_p$ of $\eta'$ at $p$ can be written as \begin{equation*} \eta'_p=\sum_{j\in I_p\cup B_p}\tilde f_j\frac{dx_j}{x_j}+\alpha,\quad \tilde f_j\vert_{D'_j}=f_j, \end{equation*} where $\alpha$ is a germ of holomorphic $1$-form and moreover, the functions $\tilde f_j$ satisfy that $\partial \tilde f_j/\partial x_j=0$, that is, $\tilde f_j$ does not depend on the coordinate $x_j$. By evaluating $f_j$ at the point $p$, we get a local $\mathbb C$-divisor ${\mathcal D}_{\eta',p}$ defined by \begin{equation} \label{eq:descripciondedeetaprima} {\mathcal D}_{\eta',p}=\sum_{j\in I_p\cup B_p} f_j(p)\operatorname{Div}_p(D'_j). \end{equation} \begin{proposition} There is a $\mathbb C$-divisor ${\mathcal D}^{\eta'}$ on $(M',K')$ such that the germ ${\mathcal D}^{\eta'}_p$ of ${\mathcal D}^{\eta'}$ at any $p\in K'$ satisfies that ${\mathcal D}^{\eta'}_p={\mathcal D}_{\eta',p}$. \end{proposition} \begin{proof} The residual functions $f_j$ of Equation \ref{eq:residuefunctions} are constant along each connected component of $K'\cap D'_j$, for $j\in I\cup B$. We have only to remark that the compact set $K'\cap D'_j$ is connected for any $j\in I\cup B$; otherwise $D'_j$ would not be irreducible, since each connected component of $K'\cap D'_j$ determines a connected component of $D'_j$ as a germ of hypersurface.
\end{proof} \begin{remark} The $\mathbb C$-divisor ${\mathcal D}^{\eta'}$ is a divisorial model for ${\mathcal F}'$. The proof of this statement is a consequence of the more general results in Subsection \ref{Existence of Logarithmic Models}, by considering a trivial reduction of singularities. \end{remark} \subsection{The $\mathbb C$-Divisor Induced by a Reduction of Singularities} \label{The Divisor Induced by a Reduction of Singularities} Let us consider a foliated space $((M,K),E,{\mathcal F})$ of generalized hypersurface type, where $K$ is a connected and compact analytic subset of $M$. Assume that we have a logarithmic $1$-form $\eta$ on $(M,K)$ fully associated to $\mathcal F$. It is not evident how to define a ${\mathbb C}$-divisor associated to $\eta$ as in Subsection \ref{DivisorsAssociatedtoDesingularizedFoliateSpaces}, unless we are in the situation of normal crossings outside a subset of codimension $\geq 3$, described in Saito \cite{Sai} and also in \cite{Cer-LN}. In this subsection we do so, once we have fixed a reduction of singularities of $((M,K), E,{\mathcal F})$. More precisely, this subsection is devoted to the proof of the following result: \begin{theorem} \label{teo:pilogarithmicmodel} Consider a foliated space $((M,K),E,{\mathcal F})$ of generalized hypersurface type, where $K$ is a connected and compact analytic subset of $M$. Let $\eta$ be a logarithmic differential $1$-form on $(M,K)$ fully associated to ${\mathcal F}$. Given a reduction of singularities $$ \pi:((M',K'), E',{\mathcal F}')\rightarrow ((M,K), E,{\mathcal F}), $$ there is a unique $\mathbb C$-divisor ${\mathcal D}_\pi^{\eta}$ of $(M,K)$ such that $\pi^*({\mathcal D}_\pi^{\eta})={\mathcal D}^{\eta'}$, where $\eta'=\pi^*\eta$. Moreover, ${\mathcal D}_\pi^\eta$ is non-dicritical. \end{theorem} \begin{remark} We know that Theorem \ref{teo:pilogarithmicmodel} is true when $\dim M=2$. Let us see this.
Assume that $\dim M=2$ and take a logarithmic differential $1$-form $\eta$ fully associated to $\mathcal F$. We have that $\eta'=\pi^*\eta$ is locally given at the singular points $p\in K'$ as $$ \eta'=(\lambda+a(x,y))\frac{dx}{x}+ (\mu+b(x,y))\frac{dy}{y}, $$ where $xy=0$ is a local equation of $S' \cup E'$ at $p$. The coefficient in ${\mathcal D}^{\eta'}$ of the irreducible component of $S'\cup E'$ of equation $x=0$ is equal to $\lambda$. This tells us that ${\mathcal D}^{\eta'}$ is a divisorial model for ${\mathcal F}'$. Consider now a divisorial model ${\mathcal D}$ for ${\mathcal F}$; it exists in view of Theorem \ref{th:existenciayunicidadendimensiondos}. By Corollary \ref{cor:modlogsucblowing}, the pullback $\pi^*{\mathcal D}$ is a divisorial model for ${\mathcal F}'$. Recall that any two divisorial models are projectively equivalent, by Theorem \ref{th:existenciayunicidadendimensiondos}. Noting that the supports are connected (each component of the support intersects the connected subset $K'=\pi^{-1}(0)$), there is a non-zero scalar $c\in {\mathbb C}$ such that ${\mathcal D}^{\eta'}=c\,\pi^*{\mathcal D}$. If we take ${\mathcal D}_\pi^{\eta}=c{\mathcal D}$, we obtain that $\pi^*({\mathcal D}_\pi^{\eta})={\mathcal D}^{\eta'}$. The non-dicriticalness of ${\mathcal D}_\pi^{\eta}$ is also a consequence of the fact that it is a divisorial model for $\mathcal F$, in view of the statement of Theorem \ref{th:existenciayunicidadendimensiondos}. \end{remark} From now on, let us take the general notations introduced in Subsection \ref{Notations on a Reduction of Singularities}. Let us recall that $I\setminus I_0=\{1,2,\ldots,s\}$, where $s$ is the length of $\pi$ as a composition of a sequence of blowing-ups. We assume that $s\geq 1$; otherwise we take ${\mathcal D}^{\eta}_\pi={\mathcal D}^{\eta}$ and we are done. Let us recall also the decomposition $\pi=\pi_1\circ\sigma$, where $\pi_1$ is the first blowing-up and $\sigma$ is a composition of $s-1$ blowing-ups.
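\begin{remark} As a simple illustration of the two-dimensional situation just described (a standard computation, not used in the sequel), take the already desingularized example $\eta=\lambda\frac{dx}{x}+\mu\frac{dy}{y}$ on $({\mathbb C}^2,0)$, for which ${\mathcal D}^{\eta}=\lambda\operatorname{Div}(x=0)+\mu\operatorname{Div}(y=0)$. If $\pi_1$ is the blowing-up of the origin, in the chart $x=x'$, $y=x'y'$ we obtain $$ \pi_1^*\eta=(\lambda+\mu)\frac{dx'}{x'}+\mu\frac{dy'}{y'}, $$ so the residue along the exceptional divisor is $\lambda+\mu=\lambda\,\nu_0(x=0)+\mu\,\nu_0(y=0)$, consistently with the statement of Theorem \ref{teo:pilogarithmicmodel}. \end{remark}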
Let us show that ${\mathcal D}_\pi^{\eta}$ is necessarily unique. If we write ${\mathcal D}^{\eta'}=\sum_{j\in I\cup B}\lambda_j D'_j$, the condition $\pi^*({\mathcal D}_\pi^\eta)={\mathcal D}^{\eta'}$ implies that \begin{equation} \label{eq:depieta} {\mathcal D}_\pi^\eta=\sum_{j\in I_0\cup B}\lambda_j D_j. \end{equation} Then ${\mathcal D}_\pi^{\eta}$ is unique, if it exists. From now on, we fix ${\mathcal D}_\pi^\eta$ as being the one given by Equation \ref{eq:depieta}. Let us see that ${\mathcal D}_\pi^\eta$ is a non-dicritical $\mathbb C$-divisor, under the assumption that $\pi^*({\mathcal D}^\eta_\pi)={\mathcal D}^{\eta'}$. We know that ${\mathcal D}^{\eta'}$ is non-dicritical, by applying Corollary \ref{cor:dicriticonormalcrossings}. Moreover, for any $i\in \{1,2,\ldots,s\}$, the $i$-th blowing-up is non-dicritical, since $\lambda_i\ne 0$. Hence $\pi$ is a composition of admissible non-dicritical blowing-ups for ${\mathcal D}^\eta_\pi$. Since ${\mathcal D}^{\eta'}$ is non-dicritical, by Corollary \ref{dicriticidadexplosionnodicritica} we conclude that ${\mathcal D}^\eta_\pi$ is non-dicritical. Finally, we have to verify that $\pi^*({\mathcal D}_\pi^\eta)={\mathcal D}^{\eta'}$. We proceed by induction on the length $s$ of the reduction of singularities $\pi$, using the fact that Theorem \ref{teo:pilogarithmicmodel} is true when $n=2$. Let us put $\eta_1=\pi_1^*\eta$. We have that the logarithmic $1$-form $\eta_1$ is fully associated to ${\mathcal F}_1$ and $\eta'=\sigma^*\eta_1$. Our induction hypothesis implies that the statement of Theorem \ref{teo:pilogarithmicmodel} is true for the morphism $$ \sigma:((M',K'), E',{\mathcal F}')\rightarrow ((M_1,K_1), E^1,{\mathcal F}_1). $$ This means that $\sigma^*({\mathcal D}^{\eta_1}_\sigma)={\mathcal D}^{\eta'}$. Then, in order to show that $\pi^*({\mathcal D}^{\eta}_\pi)={\mathcal D}^{\eta'}$, we have only to verify that $\pi_1^*({\mathcal D}^{\eta}_\pi)={\mathcal D}^{\eta_1}_\sigma$.
This is equivalent to saying that the following equality holds: \begin{equation} \label{eq:lambdauno} \lambda_1=\sum_{j\in J_Y}\lambda_j\nu_Y(D_j), \end{equation} where $J_Y=\{j\in I_0\cup B;\; Y\subset D_j\}$. \begin{remark} Take a point $p\in Y$, not necessarily in $Y\cap K$, but close enough to $K$. Assume that $p$ is a point of equimultiplicity for $\mathcal F$; then, for any $D_j$, with $j\in I_0\cup B$, we have that $\nu_p(D_j)=\nu_Y(D_j)$. In this case, we have the equality \begin{equation} \label{eq:indicesdey} J_Y=\{j\in I_0\cup B;\quad p\in D_j\}. \end{equation} \end{remark} Let us take a point $p\in Y\setminus Z$ and a Mattei-Moussu transversal $(\Delta,p)$ as stated in Corollary \ref{cor: equirrducciongenerica}. We recall that $\pi$ induces a morphism $\pi_p$ as in Equation \ref{eq:pisobrep} that splits as $\pi_p=\pi_{1,p}\circ\sigma_p$ as in Equation \ref{eq.redsingsobrep}. Moreover, the morphism $\pi$ induces a two-dimensional reduction of singularities of foliated spaces of generalized curve type that we denote as: \begin{equation} \label{eq:pibarrap} \bar\pi_p: ((\Delta',E'(p)), E'(p), {\mathcal F}'\vert_{\Delta'}) \rightarrow ((\Delta,p), \emptyset, {\mathcal F}\vert_\Delta),\quad E'(p)=\Delta'\cap\pi^{-1}(p), \end{equation} where $\Delta'$ is the strict transform of $\Delta$ by $\pi$. The reader may find similar situations in \cite{Can-M-RV}. In view of the equireduction properties, there is an increasing sequence of integers \begin{equation} \label{eq:listaele} \ell_1=1<\ell_2< \cdots<\ell_r\leq s,\quad T_Y=\{1,\ell_2,\ldots,\ell_r\}, \end{equation} depending only on $Y$, with the following property: for any $j=1,2,\ldots,s$, the inverse image $K'_p=\pi^{-1}(p)$ intersects $D'_j$ if and only if $j\in T_Y$. We have that $$ E'(p)=\cup_{t\in T_Y} (\Delta'\cap E'_t) $$ is the decomposition into irreducible components of $E'(p)=\bar\pi^{-1}_p(p)$.
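\begin{remark} The equality in Equation \ref{eq:lambdauno} reflects the usual behaviour of exceptional coefficients under pullback of divisors. As a purely illustrative two-dimensional check, take ${\mathcal D}=\lambda_a\operatorname{Div}(y=0)+\lambda_b\operatorname{Div}(y^2-x^3=0)$ and let $\pi_1$ be the blowing-up of the origin $Y=\{0\}$. Since $\nu_0(y)=1$ and $\nu_0(y^2-x^3)=2$, we get $$ \pi_1^*{\mathcal D}=\lambda_a\tilde D_a+\lambda_b\tilde D_b+(\lambda_a+2\lambda_b)\operatorname{Div}(E^1_1), $$ where $\tilde D_a$ and $\tilde D_b$ denote the corresponding strict transforms; the exceptional coefficient is exactly $\sum_j\lambda_j\nu_Y(D_j)$. \end{remark}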
We can decompose $\bar\pi_p$ as $\bar\pi_p=\bar\pi_{1,p}\circ \bar\sigma_p$ where \begin{eqnarray*} \bar\pi_{1,p}:((\Delta_1,E^1(p)), E^1(p),{\mathcal F}_1\vert_{\Delta_1})&\rightarrow & ((\Delta,p), \emptyset,{\mathcal F}\vert_{\Delta}), \quad E^1(p)=\Delta_1\cap \pi_1^{-1}(p), \\ \bar\sigma_p: ((\Delta',E'(p)), E'(p),{\mathcal F}'\vert_{\Delta'})&\rightarrow&((\Delta_1,E^1(p)), E^1(p),{\mathcal F}_1\vert_{\Delta_1}), \end{eqnarray*} where $\Delta_1$ is the strict transform of $\Delta$ by $\pi_1$. Since $(\Delta,p)$ is a Mattei-Moussu transversal, we have that $\bar\eta=\eta\vert_\Delta$ is a logarithmic differential $1$-form fully associated to ${\mathcal F}\vert_{\Delta}$. Moreover, we know that $\Delta_1$ is also a Mattei-Moussu transversal for ${\mathcal F}_1$ at all the points of $E^1(p)$; hence $\bar\eta_1=\eta_1\vert_{\Delta_1}$ is also a logarithmic $1$-form fully associated to ${\mathcal F}_1\vert_{\Delta_1}$. For the same reason, we see that $\bar\eta'=\eta'\vert_{\Delta'}$ is a logarithmic $1$-form fully associated to ${\mathcal F}'\vert_{\Delta'}$. On the other hand, an elementary functoriality argument assures that \begin{equation} \bar\eta'=\bar\pi_p^*(\bar\eta)=\bar\sigma_p^*(\bar\eta_1), \quad \bar\eta_1=\bar\pi_{1,p}^*(\bar\eta). \end{equation} Note that $((M', K'_p ),E',{\mathcal F}')$ is desingularized. Then, we can follow the construction in Subsection \ref{DivisorsAssociatedtoDesingularizedFoliateSpaces} to obtain a $\mathbb C$-divisor ${\mathcal D}^{\eta'}_p$, defined in $(M', K'_p )$ and associated to the logarithmic $1$-form $\eta'$. To be precise, the $\mathbb C$-divisor ${\mathcal D}^{\eta'}_p$ is associated to the germ of $\eta'$ along $K'_p$; this germ may be considered for $p$ close enough to $K$, by taking appropriate representatives.
We can write the $\mathbb C$-divisor ${\mathcal D}^{\eta'}_p$ on $(M', K'_p)$ and the ${\mathbb C}$-divisor ${\mathcal D}^{\eta}_{\pi_p}$ on $(M,p)$ as \begin{eqnarray} \label{eq:enlafibra} {\mathcal D}^{\eta'}_p&=&\sum_{j\in J_Y}\lambda_j(p)\operatorname{Div}_{K'_p}(D'_j) + \sum_{j\in T_Y}\lambda_j(p)\operatorname{Div}_{K'_p}(D'_j), \\ {\mathcal D}^{\eta}_{\pi_p}&=&\sum_{j\in J_Y}\lambda_j(p)\operatorname{Div}_{p}(D_j). \end{eqnarray} We denote by $\operatorname{Div}_{K'_p}(D'_j)$ the germ of $D'_j$ along $K'_p=\pi^{-1}(p)$. In the same way, we denote by $\operatorname{Div}_{p}(D_j)$ the germ of $D_j$ at the point $p$. Note that Equation \ref{eq:enlafibra} is written without null coefficients. \begin{remark} \label{rk:igualdaddecoefficientes} Recall that $K'_p=\pi^{-1}(p)$. If $p\in K$, we have that $K'_p\subset K'$ and the reader can verify that ${\mathcal D}^{\eta'}_p$ is just the germ of ${\mathcal D}^{\eta'}$ along $\pi^{-1}(p)$. In this case, we have that \begin{equation} \label{eq:igualdaddecoeficientes} \lambda_j(p)=\lambda_j, \mbox{ for any } j\in J_Y\cup T_Y. \end{equation} When $p$ is not in the germification set $K$, we describe further the relationship between ${\mathcal D}^{\eta'}_p$ and the germ of ${\mathcal D}^{\eta'}$ along $\pi^{-1}(p)$. \end{remark} The following Lemma \ref{lema:thconequrreduccion} is our first step in the proof of Theorem \ref{teo:pilogarithmicmodel}: \begin{lemma} \label{lema:thconequrreduccion} Under the induction hypothesis, we have that \begin{equation} \label{eq:teoenpuntosdeequirreduccion} \lambda_1(p)=\sum_{j\in J_Y}\lambda_j(p)\nu_Y(D_j) \end{equation} and thus $\pi_p^*({\mathcal D}^{\eta}_{\pi_p})={\mathcal D}^{\eta'}_{p}$. \end{lemma} \begin{proof} By induction hypothesis, we have that $\sigma_p^*({\mathcal D}^{\eta_1}_{\sigma_p})= {\mathcal D}^{\eta'}_{p}$. 
Reasoning as before, we have that Equation \ref{eq:teoenpuntosdeequirreduccion} holds if and only if $\pi_p^*({\mathcal D}^{\eta}_{\pi_p})={\mathcal D}^{\eta'}_{p}$. We know that the $\mathbb C$-divisor ${\mathcal D}^{\bar\eta'}$ on $(\Delta', K'_p)$, the $\mathbb C$-divisor ${\mathcal D}^{\bar\eta_1}_{\bar\sigma_p}$ on $(\Delta_1, K_{1,p})$ and the $\mathbb C$-divisor ${\mathcal D}^{\bar\eta}_{\bar\pi_p}$ on $(\Delta, p)$ are given by $$ {\mathcal D}^{\bar\eta'}={\mathcal D}^{\eta'}_{p}\vert_{\Delta'}, \quad {\mathcal D}^{\bar\eta_1}_{\bar\sigma_p}={\mathcal D}^{\eta_1}_{\sigma_p}\vert_{\Delta_1}, \quad {\mathcal D}^{\bar\eta}_{\bar\pi_p}= {\mathcal D}^{\eta}_{\pi_p}\vert_{\Delta} . $$ Recalling that Theorem \ref{teo:pilogarithmicmodel} is true for two dimensional ambient spaces, we have $$ \bar\pi_p^*({\mathcal D}^{\bar\eta}_{\bar\pi_p})={\mathcal D}^{\bar\eta'}, \quad \bar\sigma_p^*({\mathcal D}^{\bar\eta_1}_{\bar\sigma_p})={\mathcal D}^{\bar\eta'}, \quad \bar\pi_{1,p}^*({\mathcal D}^{\bar\eta}_{\bar\pi_p})={\mathcal D}^{\bar\eta_1}_{\bar\sigma_p}. $$ The property $\bar\pi_{1,p}^*({\mathcal D}^{\bar\eta}_{\bar\pi_p})={\mathcal D}^{\bar\eta_1}_{\bar\sigma_p}$ gives us that Equation \ref{eq:teoenpuntosdeequirreduccion} holds, as follows. The coefficient $\mu(p)$ of $\operatorname{Div}(E^1_1\cap \Delta_1)$ in $\pi_{1,p}^*({\mathcal D}^{\bar\eta}_{\bar\pi_p})$ is given by $$ \mu(p)= \sum_{j\in J_Y}\lambda_j(p)\nu_p(D_j\cap \Delta). $$ Recalling that $(\Delta,p)$ is a Mattei-Moussu transversal, we have that $$ \nu_p(D_j\cap \Delta)=\nu_p(D_j),\quad \mbox{ for any } j\in J_Y. $$ Since $p$ is a point of $Y$-equimultiplicity for the generalized hypersurface $\mathcal F$, we have that $\nu_p(D_j)=\nu_Y(D_j)$, for all $j\in J_Y$. We conclude that $$ \mu(p)= \sum_{j\in J_Y}\lambda_j(p)\nu_Y(D_j). $$ On the other hand, the coefficient of $\operatorname{Div}(E^1_1\cap \Delta_1)$ in ${\mathcal D}^{\bar\eta_1}_{\bar\sigma_p}$ is equal to $\lambda_1(p)$. 
Since $\bar\pi_{1,p}^*({\mathcal D}^{\bar\eta}_{\bar\pi_p})={\mathcal D}^{\bar\eta_1}_{\bar\sigma_p}$, we have that $\lambda_1(p)=\mu(p)$ and we are done. \end{proof} In view of Remark \ref{rk:igualdaddecoefficientes} and Equation \ref{eq:igualdaddecoeficientes}, if we can choose $p\in K$, the equality in Equation \ref{eq:lambdauno} holds and we are done. This is the situation when $Y=\{p\}$ and more generally when $Y\subset K$. We have to consider the case when we cannot choose $p\in K$. This means that $Y\cap K\subset Z$, where $Z\subset Y$ is the closed analytic subset presented in Corollary \ref{cor: equirrducciongenerica}. We end with the following lemma: \begin{lemma} \label{lema.csindices} For any $j\in J_Y$ and $p\in Y\setminus Z$, we have that $ \lambda_j/\lambda_1=\lambda_j(p)/\lambda_1(p)$. \end{lemma} \begin{proof} Consider the reduction of singularities $\bar\pi_p$ as in Equation \ref{eq:pibarrap}. For any index $\ell\in J_Y\cup T_Y$, let us denote $D'_\ell(p)=D'_\ell\cap \Delta'$. The exceptional divisor of $\bar\pi_p$ is $$ E'(p)=\bar\pi_p^{-1}(p)=\cup_ {t\in T_Y}D'_t(p) $$ and $\bar\pi_p$ is a composition of $r$ blowing-ups, the morphisms corresponding to the indices $t\in T_Y$. By connectedness of the dual graph of $\bar\pi_p$, there is a finite sequence $(t_u)_{u=1}^{v+1}$ of elements of $T_Y\cup \{j\}$, with $t_1=1$ and $t_{v+1}=j$, such that $$ D'_{t_u}(p)\cap D'_{t_{u+1}}(p)\ne \emptyset, \quad u=1,2,\ldots,v. $$ Take a point $q_{u}\in D'_{t_u}(p)\cap D'_{t_{u+1}}(p)$, for $u=1,2,\ldots,v$. By the local description of simple singularities in dimension two, we have that $$ \operatorname{CS}_{q_u}({\mathcal F}'\vert_{\Delta'}, D'_{t_u}(p)) =-\lambda_{t_{u+1}}(p)/\lambda_{t_{u}}(p) , $$ see Remark \ref{rk:indicessingsimples}. The Camacho-Sad index $\operatorname{CS}_{q_u}({\mathcal F}'\vert_{\Delta'}, D'_{t_u}(p))$ is the Camacho-Sad index of the transversal type of ${\mathcal F}'$ along $D'_{t_u}\cap D'_{t_{u+1}}$.
It can be read locally at the points in $ (D'_{t_u}\cap D'_{t_{u+1}})\cap K' $, from the germ of the $1$-form $\eta'$ along $K'$. We deduce that $$ \operatorname{CS}_{q_u}({\mathcal F}'\vert_{\Delta'}, D'_{t_u}(p))=-\lambda_{t_{u+1}}/\lambda_{t_{u}}. $$ Hence $\lambda_{t_{u+1}}(p)/\lambda_{t_{u}}(p)=\lambda_{t_{u+1}}/\lambda_{t_{u}}$ for any $u=1,2,\ldots,v$. Taking the product of these equalities, we conclude that $\lambda_{j}(p)/\lambda_{1}(p)=\lambda_j/\lambda_1$ and we are done. \end{proof} As a consequence of Lemmas \ref{lema:thconequrreduccion} and \ref{lema.csindices}, we obtain that Equation \ref{eq:lambdauno} holds. This ends the proof of Theorem \ref{teo:pilogarithmicmodel}. \subsection{Existence of Divisorial Models} \label{Existence of Logarithmic Models} We first apply Theorem \ref{teo:pilogarithmicmodel} to a reduction of singularities $$ \pi:((M',K'), E',{\mathcal F}')\rightarrow (({\mathbb C}^n,0), \emptyset,{\mathcal F}),\quad K'= \pi^{-1}(0), $$ of the foliated space $(({\mathbb C}^n,0), \emptyset,{\mathcal F})$. That is, we take an integrable logarithmic differential $1$-form $\eta$ fully associated to ${\mathcal F}$, we consider the $\mathbb C$-divisor ${\mathcal D}'={\mathcal D}^{\eta'}$ on $M'$, where $\eta'=\pi^*\eta$, and finally we consider the ${\mathbb C}$-divisor ${\mathcal D}$ on $({\mathbb C}^n,0)$ defined by the property $\pi^*{\mathcal D}={\mathcal D}'$, whose existence has been shown in Theorem \ref{teo:pilogarithmicmodel}. We are going to verify that ${\mathcal D}$ is a divisorial model for $\mathcal F$. Note that the support of $\mathcal D$ is the union $D$ of the invariant hypersurfaces of $\mathcal F$. Consider a ${\mathcal D}$-transverse holomorphic map $\phi:({\mathbb C}^2,0)\rightarrow ({\mathbb C}^n,0)$.
By Proposition \ref{prop:appdos} in Appendix I, there is a commutative diagram of holomorphic maps \begin{equation} \begin{array}{ccc} ({\mathbb C}^2,0)&\stackrel{\sigma}{\longleftarrow}& (N',\sigma^{-1}(0)) \\ \downarrow \phi& & \downarrow \psi \\ ({\mathbb C}^n,0)&\stackrel{\pi}{\leftarrow}& (M',\pi^{-1}(0)) \end{array} \end{equation} such that $\sigma$ is the composition of a finite sequence of blowing-ups. \begin{lemma} \label{lem:conmutatividad} In the above situation, we have that $\phi^*{\mathcal F}$ exists and $\sigma^*({\phi^*{\mathcal F}})=\psi^*{\mathcal F}'$. \end{lemma} \begin{proof} Let $\omega$ be an integrable holomorphic $1$-form defining $\mathcal F$, without common factors in its coefficients. If $\phi^*\omega\ne 0$, then we are done. Indeed, in this case $\phi^*{\mathcal F}$ is defined by $\phi^*\omega$; since $\sigma$ is a sequence of blowing-ups, we have that $\sigma^*(\phi^*\omega)\ne 0$ and $\sigma^*(\phi^*{\mathcal F})$ is defined by the nonzero $1$-form $\sigma^*(\phi^*\omega)$. Noting that $$ \sigma^*(\phi^*\omega)=(\phi\circ\sigma)^*\omega=(\pi\circ\psi)^*\omega=\psi^*(\pi^*\omega), $$ we conclude that $\phi^*{\mathcal F}$ exists and $\sigma^*({\phi^*{\mathcal F}})=\psi^*{\mathcal F}'$. Let us show that $\phi^*\omega\ne 0$. Assume by contradiction that $\phi^*\omega= 0$. Take an analytic germ of curve $(\Gamma, 0)$ such that $\phi(\Gamma)\not\subset D$. The existence of such a $(\Gamma, 0)$ is guaranteed by the hypothesis that $\operatorname{Im}(\phi)\not\subset D$. The assumption that $\phi^*\omega=0$ implies that $(\phi(\Gamma),0)$ is an invariant germ of curve of $\mathcal F$. Taking a reduction of singularities of $D$, which induces a reduction of singularities of $\mathcal F$, we see that any germ of invariant curve must be contained in $D$. This is a contradiction.
\end{proof} In view of Proposition \ref{pro:modelostrasunaexplosion}, in order to prove that $\phi^*{\mathcal D}$ is a divisorial model for $\phi^*{\mathcal F}$, it is enough to prove that $ \sigma^*(\phi^*{\mathcal D}) $ is a divisorial model for $\sigma^*(\phi^*{\mathcal F})$. By Lemma \ref{lem:conmutatividad} above, we have that $\sigma^*({\phi^*{\mathcal F}})=\psi^*{\mathcal F}'$. Moreover, we also have $$ \sigma^*(\phi^*{\mathcal D})=\psi^*{\mathcal D}'. $$ Thus, we have to verify that $\psi^*{\mathcal D}'$ is a divisorial model for $\psi^*{\mathcal F}'$. Let us work locally at a point $p\in\sigma^{-1}(0)$ and put $q=\psi(p)$. First of all, we take local coordinates $(x_1,x_2,\ldots,x_n)$ at $q$ such that the foliation ${\mathcal F}'$ is locally given at $q$ by an integrable meromorphic $1$-form $$ \eta'=\sum_{i=1}^\tau (\lambda_i+f_i(x_1,x_2,\ldots,x_\tau))\frac{dx_i}{x_i},\quad f_i(0,0,\ldots,0)=0, $$ where $\sum_{i=1}^\tau n_i\lambda_i\ne 0$ for any $\mathbf{n}\in {\mathbb Z}^\tau_{\geq 0}\setminus\{0\}$. Recall that then the total transform of $D$ is locally given at $q$ by the union of the hyperplanes $x_i=0$, with $i=1,2,\ldots,\tau$. Moreover, we know that the $\mathbb C$-divisor ${\mathcal D}'$ is locally given at $q$ by $$ {\mathcal D}'=\sum_{i=1}^\tau \lambda_i (x_i=0). $$ We have to show that $\psi^*{{\mathcal D}'}$ is a divisorial model for $\psi^*{{\mathcal F}'}$ at $p$. We apply now Proposition \ref{prop:appuno} in the Appendix I as follows. Take the list of functions $$ {\mathcal L}'_p=\{\psi_{i,p}\}_{i=1}^\tau, $$ where $\psi_i=x_i\circ\psi$, for $i=1,2,\ldots,\tau$ and $\psi_{i,p}$ denotes the germ at $p$ of $\psi_i$. There is a composition of blowing-ups centered at points $$ \sigma': (N'',\sigma'^{-1}(p))\rightarrow (N',p) $$ in such a way that the transformed list ${\mathcal L}''=\{f_i\}_{i=1}^\tau$ is desingularized, where $f_i=\psi_{i,p}\circ \sigma'$, see Appendix I. 
In particular, for any point $p'\in \sigma'^{-1}(p)$, there are local coordinates $u,v$ such that for any $i\in\{1,2,\ldots,\tau\}$ with $f_{i,p'}\ne 0$, there is a unit $U_{i,p'}\in {\mathcal O}_{N'',p'}$ and $(a_i,b_i)\in {\mathbb Z}^2_{\geq 0}$ with $f_{i,p'}=U_{i,p'} u^{a_i}v^{b_i}$; note also that we have $a_i+b_i\geq 1$, since $\sigma'(p')=p$. Now, in order to prove that $\psi^*{{\mathcal D}'}$ is a divisorial model for $\psi^*{{\mathcal F}'}$ at $p$, it is enough to prove that ${\sigma'}^*(\psi^*{{\mathcal D}'})$ is a divisorial model for ${\sigma'}^*(\psi^*{{\mathcal F}'})$ at any point $p'$ of ${\sigma'}^{-1}(p)$. By the local expression of $\psi\circ\sigma'$ at $p'$ in appropriate local coordinates $u,v$, we conclude that ${\sigma'}^*\psi^*{{\mathcal F}'}$ is generated by $$ {\sigma'}^*\psi^*\eta'= \left(\sum_{i=1}^\tau a_i\lambda_i+g(u,v) \right) \frac{du}{u}+\left(\sum_{i=1}^\tau b_i\lambda_i+h(u,v) \right) \frac{dv}{v}+\alpha, $$ where $\alpha$ is a holomorphic $1$-form. Put $\mu_1=\sum_{i=1}^\tau a_i\lambda_i$ and $\mu_2=\sum_{i=1}^\tau b_i\lambda_i$. Note that the germs $f_{i,p'}$, for $i=1,2,\ldots,\tau$, cannot all be identically zero; otherwise we would have ${\sigma'}^*\psi^* \eta'=0$, which is not possible since we know that $\psi^*\eta'\ne 0$. Then, some of the $a_i, b_i$ are nonzero and, by the non-resonance properties, either $\mu_1$ or $\mu_2$ is nonzero. Say that $\mu_1\ne 0$; since we are dealing with generalized hypersurfaces, there are no saddle-nodes, and then either $\mu_1\mu_2\ne 0$ or the foliation is non-singular, in the sense that $\mu_2+h(u,v)$ is identically zero. Now, we have that $$ {\sigma'}^*\psi^*{\mathcal D}'=\mu_1(u=0)+\mu_2(v=0) $$ locally at $p'$. Hence ${\sigma'}^*\psi^*{\mathcal D}'$ is a divisorial model for ${\sigma'}^*(\psi^*{\mathcal F}')$ at $p'$. This ends the proof of Theorem \ref{teo:main}.
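As an elementary check of the last computation, in a toy case with hypothetical data, take $\tau=2$, $\eta'=\lambda_1\,dx_1/x_1+\lambda_2\,dx_2/x_2$ and suppose that $\psi\circ\sigma'$ is given in suitable coordinates by the monomial map $x_1=u$, $x_2=uv$, so that $(a_1,b_1)=(1,0)$ and $(a_2,b_2)=(1,1)$. Then $$ {\sigma'}^*\psi^*\eta'=(\lambda_1+\lambda_2)\frac{du}{u}+\lambda_2\frac{dv}{v}, \qquad {\sigma'}^*\psi^*{\mathcal D}'=(\lambda_1+\lambda_2)(u=0)+\lambda_2(v=0), $$ in agreement with $\mu_1=a_1\lambda_1+a_2\lambda_2$ and $\mu_2=b_1\lambda_1+b_2\lambda_2$.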
\section{Logarithmic Models} \label{Logarithmic Models} Let $\mathcal F$ be a generalized hypersurface over $(M,K)$ where $M$ is a germ of non-singular complex analytic variety over a connected and compact analytic subset $K\subset M$. Let us assume that $\mathcal D$ is a divisorial model for ${\mathcal F}$. By definition, any $\mathcal D$-logarithmic foliation $\mathcal L$ on $(M,K)$ is a {\em logarithmic model} for $\mathcal F$. In the case of $K=\{0\}$ and hence $(M,K)=({\mathbb C}^n,0)$, the existence of logarithmic models is assured. Indeed, by Theorem \ref{teo:main} there is a divisorial model $\mathcal D$ for $\mathcal F$, that we can write $$ {\mathcal D}=\sum_{i=1}^s\lambda_iS_i. $$ Choosing a reduced local equation $f_i=0$ for each $S_i$, the closed logarithmic $1$-form $\eta=\sum_{i=1}^sdf_i/f_i$ generates a logarithmic model $\mathcal L$. This gives sense to the main theorem stated in the Introduction. Let us state certain properties of logarithmic models that are directly deduced from the results presented in this work: \begin{enumerate} \item If $(M,K)=({\mathbb C}^2,0)$, a logarithmic foliation $\mathcal L$ is a logarithmic model for a generalized curve $\mathcal F$ if and only if ${\mathcal L}$ and $\mathcal F$ have the same Camacho-Sad indices with respect to the invariant branches. \item Assume that $\pi: ((M',K'),E',{\mathcal F}')\rightarrow ((M,K),E,{\mathcal F})$ is the composition of a finite sequence of admissible blowing-ups, where $((M,K),E,{\mathcal F})$ is a foliated space of generalized hypersurface type. If $\mathcal L$ is a logarithmic model for $\mathcal F$, then $\pi^*{\mathcal L}$ is a logarithmic model for ${\mathcal F}'$. \item Let $\mathcal F$ be a generalized hypersurface on $({\mathbb C}^n,0)$ and denote by $S$ the union of its invariant hypersurfaces. Consider a logarithmic foliation $\mathcal L$ on $({\mathbb C}^n,0)$. 
The following statements are equivalent: \begin{enumerate} \item $\mathcal L$ is a logarithmic model for $\mathcal F$. \item For any $S$-transverse map $\phi:({\mathbb C}^2,0)\rightarrow ({\mathbb C}^n,0)$ we have that $\phi^*{\mathcal L}$ is a logarithmic model for $\phi^*{\mathcal F}$. \end{enumerate} \end{enumerate} A question for the future is to develop a similar theory concerning the dicritical case. In dimension two, some results are known \cite{Can-Co}. \section{Appendix I} We gather here the results of Proposition \ref{prop:appuno} and Proposition \ref{prop:appdos} concerning the reduction of singularities of lists of functions in dimension two and the lifting of morphisms by a sequence of blowing-ups. These results are well known. We just state the first one and we prove the second one as a consequence of it. Let us note that there are stronger results on monomialization of morphisms by D. Cutkosky \cite{Cut} and Akbulut and King (Chapter 7 of \cite{Akb-K}, according to \cite{Cut}); we do not need such strong versions in this work. Let $N$ be a two-dimensional complex analytic variety and consider a finite list ${\mathcal L}=\{f_i\}_{i=1}^t$, where $f_i:N\rightarrow {\mathbb C}$ is a holomorphic function for $i=1,2,\ldots,t$. Given a point $p\in N$, denote by ${\mathcal L}_p=\{f_{i,p}\}_{i=1}^t$ the list of germs at $p$ of the functions $f_i$. We say that ${\mathcal L}$ is {\em desingularized} at $p\in N$, or equivalently that {\em the list ${\mathcal L}_p$ is desingularized,\/} if and only if there are local coordinates $(u,v)$ at $p$ such that the following two properties hold: \begin{enumerate} \item For any $i\in\{1,2,\ldots,t\}$ with $f_{i,p}\ne 0$, there is a unit $U_{i,p}\in {\mathcal O}_{N,p}$ and $(a_i,b_i)\in {\mathbb Z}^2_{\geq 0}$ such that $f_{i,p}=U_{i,p} u^{a_i}v^{b_i}$. \item Given $i,j\in\{1,2,\ldots,t\}$, if $f_{i,p}$ does not divide $f_{j,p}$, then $f_{j,p}$ divides $f_{i,p}$.
\end{enumerate} We say that {\em ${\mathcal L}$ is desingularized} when it is desingularized at any point $p\in N$. This is an open property, in the sense that the points where ${\mathcal L}$ is desingularized are the points of an open subset of $N$. Given a morphism $\sigma:N'\rightarrow N$, the transform $\sigma^*{\mathcal L}$ of the list $\mathcal L$ is the list in $N'$ defined by $$ \sigma^*{\mathcal L}=\{f_i\circ\sigma\}_{i=1}^t. $$ Next, we state a well-known result on desingularization of lists of functions: \begin{proposition} \label{prop:appuno} Let ${\mathcal L}=\{f_i\}_{i=1}^t$ be a list of functions on a non-singular two-dimensional holomorphic variety $(N,C)$, that is a germ along a compact set $C\subset N$. There is a morphism $$ \sigma:(N',\sigma^{-1}(C))\rightarrow (N,C), $$ that is the composition of a finite sequence of blowing-ups centered at points, in such a way that the transformed list $\sigma^*{\mathcal L}=\{f_i\circ \sigma\}_{i=1}^t$ is desingularized. \end{proposition} \begin{proof} This result is an easy consequence of the classical results of desingularization for plane curves. The reader may look at \cite{Lip}. \end{proof} \begin{remark} Proposition \ref{prop:appuno} above is true without restriction on the dimension of $N$ (with a similar definition of what a desingularized list is). This is a consequence of Hironaka's reduction of singularities in \cite{Hir}. \end{remark} \begin{proposition} \label{prop:appdos} Let $\phi:(N,C)\rightarrow (M,K)$ be a holomorphic map between connected germs of non-singular analytic varieties along compact sets, where $\dim N=2$. Consider a morphism $\pi:(M',\pi^{-1}(K))\rightarrow (M,K) $ that is the composition of a finite sequence of blowing-ups with non-singular centers. Let us assume that the image of $\phi$ is not contained in the projection by $\pi$ of the centers of blowing-up.
Then, there is a morphism $$ \sigma: (N',\sigma^{-1}(C))\rightarrow (N,C) $$ that is the composition of a finite sequence of blowing-ups centered at points, in such a way that there is a unique morphism $ \psi: (N',\sigma^{-1}(C))\rightarrow (M',\pi^{-1}(K)) $ such that $\phi\circ\sigma=\pi\circ\psi$. \end{proposition} \begin{proof} Let us show first that the result is true in the special case that $$ (N,C)=({\mathbb C}^2,0), \quad (M,K)=({\mathbb C}^n,0) $$ and $\pi$ is the single blowing-up with center $Y=(x_1=x_2=\cdots=x_t=0)$. Consider the list of functions ${\mathcal L}=\{\phi_i\}_{i=1}^t$, where $\phi_i=x_i\circ \phi$. Take a desingularization $$ \sigma: (N',\sigma^{-1}(0))\rightarrow ({\mathbb C}^2,0), $$ of $\mathcal L$ as stated in Proposition \ref{prop:appuno}, where ${\mathcal L}'=\{\varphi_i\}_{i=1}^t$ is the transformed list, recalling that $\varphi_i=\phi_i\circ\sigma$. Let us represent $\phi$ by a morphism $ \phi_U:U\rightarrow V $ where $U\subset{\mathbb C}^2$ is a connected open neighborhood of the origin $0\in {\mathbb C}^2$ and $V={\mathbb D}_\epsilon^n\subset {\mathbb C}^n$ is a poly-cylinder around the origin in such a way that the center of $\pi$ is represented by $$ Y=(x_1=x_2=\cdots=x_t=0)\subset V. $$ We also consider the morphism $\sigma_U:U'\rightarrow U$ obtained by the same blowing-ups indicated by $\sigma$ and we denote by $\pi_V:V'\rightarrow V$ the blowing-up with center $Y$. Let us put $\phi_{i,U}=x_i\circ\phi_U$ and $\varphi_{i,U'}= \phi_{i,U}\circ \sigma_U$. Since the property of being desingularized is open, by taking $U$ small enough, we can assume that the list ${\mathcal L}_{U'}'=\{\varphi_{i,U'}\}_{i=1}^t$ is desingularized at any point of $U'$. In this situation, let us show that there is a unique holomorphic map $$ \psi_{U'}:U'\rightarrow V' $$ such that $\phi_U\circ \sigma_U=\pi_V\circ\psi_{U'}$. 
More precisely, we are going to prove that given any nonempty open subset $W\subset U'$, there is a unique holomorphic map $$ \psi_{W}:W\rightarrow V' $$ such that $\phi_U\circ ({\sigma_U}\vert_W)=\pi_V\circ\psi_W$. Let us recall how the blowing-up $\pi_V$ is constructed. Take the projective space ${\mathbb P}^{t-1}_{\mathbb C}$ and consider the closed subset $Z\subset {\mathbb P}^{t-1}_{\mathbb C}\times {\mathbb D}_\epsilon^t$ given by $$ Z=\{([a_1,a_2,\ldots,a_t], (b_1,b_2,\ldots, b_t)); \quad a_ib_j=a_jb_i, \; 1\leq i,j\leq t\}. $$ The blowing-up $\tilde\pi$ of the origin of ${\mathbb D}^t_{\epsilon}$ is the second projection $ \tilde\pi: Z\rightarrow {\mathbb D}^t_{\epsilon} $. The blowing-up of $V={\mathbb D}^t_{\epsilon}\times {\mathbb D}^{n-t}_{\epsilon}$ with center $Y$ is the product $$ \pi=\tilde\pi\times \operatorname{id}_{{\mathbb D}^{n-t}_{\epsilon}}: Z\times {\mathbb D}^{n-t}_{\epsilon}\rightarrow {\mathbb D}^{t}_{\epsilon}\times {\mathbb D}^{n-t}_{\epsilon}=V. $$ Now, let us show the existence and uniqueness of $\psi_{W}$. Let us consider the open subset $W_0\subset W$ defined by $ W_0=W\setminus \sigma_{U}^{-1}(\phi_U^{-1}(Y)) $. By hypothesis, we know that $W_0$ is a dense open subset of $W$. The uniqueness of $\psi_W$ is then implied by the uniqueness of $\psi_{W_0}$. Take $p\in W_0$ and consider the vectors \begin{eqnarray*} {\mathbf v}^t(p)&=&(\varphi_{1,U'}(p), \varphi_{2,U'}(p), \ldots, \varphi_{t,U'}(p))\\ {\mathbf v}^{n-t}(p)&=&(\varphi_{t+1,U'}(p), \varphi_{t+2,U'}(p), \ldots, \varphi_{n,U'}(p)). \end{eqnarray*} Since $p\in W_0$, we see that ${\mathbf v}^t(p)$ is not the zero vector and we necessarily have that \begin{equation} \label{eq:uniciddlevantamiento} \psi_W(p)=([{\mathbf v}^t(p)], {\mathbf v}^t(p), {\mathbf v}^{n-t}(p) ). \end{equation} This shows the uniqueness of $\psi_W$. Now take a point $p\in W$; we denote by $\varphi_{i,p}$ the germ of $\varphi_{i,U'}$ at $p$, even in the case when $p\notin \sigma^{-1}(0)$.
Consider the set $$ I_p=\{i\in \{1,2,\ldots,t\}; \quad \varphi_{i,p} \mbox{ divides } \varphi_{j,p},\; \mbox{ for any } j\in \{1,2,\ldots,t\} \}. $$ Let us note that $I_p\ne \emptyset$. Indeed, since the list is desingularized at $p$, divisibility totally orders its nonzero members; hence $I_p=\emptyset$ would mean that $\varphi_{i,p}=0$ for all $i=1,2,\ldots,t$. This implies that $\varphi_{i,U'}=0$ and thus $\phi_{i,U}=0$ for any $i=1,2,\ldots,t$; this contradicts the hypothesis that the image of $\phi$ is not contained in $Y$. Let us choose an index $i\in I_p$. Define the germs $$ \varphi_{ji,p}=\left\{ \begin{array}{ccc} \varphi_{j,p}/\varphi_{i,p}&\mbox{ if }& 1\leq j\leq t,\\ \varphi_{j,p}&\mbox{ if }& t+1\leq j\leq n.\\ \end{array} \right. $$ Note that $\varphi_{ii,p}=1$. We can define the germ $\psi_p$ of $\psi_W$ by $$ \psi_p= ([\varphi_{1i,p},\varphi_{2i,p},\ldots,\varphi_{ti,p}], \varphi_{1,p},\varphi_{2,p},\ldots,\varphi_{t,p},\varphi_{t+1,p},\ldots, \varphi_{n,p}). $$ The definition does not depend on the index $i\in I_p$ and the uniqueness is guaranteed since the restriction to $W_0$ is as indicated in Equation \ref{eq:uniciddlevantamiento}. Let us consider now the case where $\pi:(M',\pi^{-1}(K))\rightarrow (M,K) $ is a single blowing-up with center $(Y, Y\cap K)$ and $\phi: (N,C)\rightarrow (M,K)$ is as in the statement. Once this case is solved, we obtain the general case directly by induction on the number of blowing-ups in $\pi$. In view of the previous result, for any point $p\in N$ there is an open set $U_p\subset N$ with $p\in U_p$ and a finite sequence of blowing-ups over $p$ $$ \sigma_{U_p}:U'_p\rightarrow U_p $$ such that for any open subset $W'\subset U'_p$ there is a unique map $\psi_{W'}:W'\rightarrow M'$ such that $\phi\circ (\sigma_{U_p}\vert_{W'})=\pi\circ \psi_{W'}$. Note that $$ \psi_{W'}=\psi_{U'_p}\vert_{W'}, $$ for any open set $W'\subset U'_p$. By the compactness of $C\subset N$, we can cover $C$ by finitely many open subsets of the type $U_p$, with $p\in C$.
That is, there are finitely many points $p_1,p_2,\ldots,p_r$ in $C$ such that $$ C\subset \cup_{i=1}^r U_i,\quad U_i=U_{p_i},\; i=1,2,\ldots,r. $$ Without loss of generality, we assume that $p_j\notin U_i$, if $j\ne i$. We can glue the morphisms $\sigma_{U_i}: U'_i\rightarrow U_i$ into a morphism $$ \sigma_U: U'\rightarrow U=\cup_{i=1}^rU_i, $$ in such a way that we identify $U'_i=\sigma_U^{-1}(U_i)$. Of course, the morphism $\sigma_U$ is the composition of a sequence of blowing-ups centered at points over $p_1,p_2,\ldots,p_r$. Note that $\sigma_U$ induces a morphism of germs $$ \sigma: (N',\sigma^{-1}(C))\rightarrow (N,C), $$ where $(N',\sigma^{-1}(C))$ is represented by $(U',\sigma_U^{-1}(C))$ and $(N,C)$ by $(U,C)$. On the other hand, by the uniqueness property, we have that $$ \psi_{U'_i}\vert_{U'_i\cap U'_j}=\psi_{U'_j}\vert_{U'_i\cap U'_j}. $$ Then, we can also glue the morphisms $\psi_{U'_i}$ to a morphism $\psi_{U'}:U'\rightarrow M'$ such that $$ \phi\circ \sigma_{U}=\pi\circ \psi_{U'}. $$ We have an induced morphism of germs $\psi:(N',\sigma^{-1}(C))\rightarrow (M',\pi^{-1}(K))$ with the property that $\pi\circ\psi=\phi\circ\sigma$. This ends the proof. \end{proof} \section{Appendix II} Here we provide a proof of Proposition \ref{prop:simplepointsandlogorder}. Recall that we have a foliated space $((M,K), E, {\mathcal F})$ of generalized hypersurface type and a point $p\in K$. We have to show that the following statements are equivalent: \begin{enumerate} \item The point $p$ is a simple point for $((M,K), E, {\mathcal F})$. \item $\operatorname{LogOrd}_p({\mathcal F},E)=0$. \end{enumerate} We have that (1) implies (2) as a direct consequence of the definition of simple point in Subsection \ref{definicionsimplepoint}. Let us suppose that $\operatorname{LogOrd}_p({\mathcal F},E)=0$. We assume that the dimensional type $\tau$ is equal to $n$. The case when $\tau<n$ may be treated in the same way.
Moreover, since we are working locally at $p$, we identify $(M,p)=({\mathbb C}^n,0)$ and we work at the origin of ${\mathbb C}^n$. Choose local coordinates $x_1,x_2,\ldots,x_n$ such that $E=\left(\prod_{i=1}^ex_i=0\right)$. Let us see first that $n-1\leq e\leq n$. If this is not the case, we have $e\leq n-2$ and one of the following expressions holds for a local generator $\eta$ of $\mathcal F$ (up to a reordering and multiplication by a unit): \begin{enumerate} \item[a)]$\eta=dx_1/x_1+\sum_{i=2}^e a_idx_i/x_i+a_{e+1}dx_{e+1}+ a_{e+2}dx_{e+2}+\sum_{i=e+3}^n a_idx_i$. \item[b)] $\eta=\sum_{i=1}^e a_idx_i/x_i+dx_{e+1}+ a_{e+2}dx_{e+2}+\sum_{i=e+3}^n a_idx_i$. \end{enumerate} This situation cannot occur, since we can find a non-singular vector field $\xi$ such that $\eta(\xi)=0$, hence $\xi$ trivializes the foliation and $\tau<n$. The vector field $\xi$ can be taken as follows $$ \xi=\left\{ \begin{array}{cc} a_{e+1}x_1\partial/\partial x_1-\partial /\partial x_{e+1},&\mbox{ in case a)}\\ a_{e+2}\partial/\partial x_{e+1}-\partial /\partial x_{e+2} ,&\mbox{ in case b)} \end{array} \right. $$ Thus, we conclude that $n-1\leq e\leq n$. Note that, even when $e=n-1$, the case a) above does not hold. Then, we have that $e=n$ or $e=n-1$ and one of the following situations holds: \begin{enumerate} \item[i)]$\eta=dx_1/x_1+\sum_{i=2}^n a_idx_i/x_i$. \item[ii)] $\eta=\sum_{i=1}^{n-1} a_idx_i/x_i+dx_{n}$, with $a_i(0)=0$, for $i=1,2,\ldots,n-1$. \end{enumerate} Assume first that we are in situation i) and put $\lambda_1=1$, $\lambda_i=a_i(0)$, for $i=2,3,\ldots,n$. We have to show that there is no resonance $\sum_{i=1}^{n}m_i\lambda_i=0$ with $\mathbf{m}\ne 0$, for $\mathbf{m}=(m_1,m_2,\ldots,m_n)$. Let us reason by contradiction, assuming that there is such a resonance. Note that there is at least one $m_i\ne 0$ with $2\leq i\leq n$. Up to a reordering, we assume that $m_2m_3\cdots m_\ell\ne0$ and $m_i=0$ for $\ell <i\leq n$.
Consider the map $\phi:({\mathbb C}^2,0)\rightarrow ({\mathbb C}^n,0)$ given by $$ x_1=uv^{m_1}, x_2=v^{m_2},\ldots,x_\ell=v^{m_\ell}; \quad x_i=0,\mbox{ if }\ell< i\leq n. $$ Then we have $$ \phi^*\eta=\frac{du}{u}+b(u,v)\frac{dv}{v}, $$ where $b(u,v)=m_1\lambda_1+\sum_{i=2}^\ell m_i a_i(uv^{m_1},v^{m_2},\ldots,v^{m_\ell},0,\ldots,0)$. Note that $b(0,0)=0$, since $\sum_{i=1}^nm_i\lambda_i=0$. We have two possible situations: \begin{enumerate} \item The function $b(u,v)$ is not divisible by $v$. In this case, we have that $\phi^*\eta$ defines a saddle-node. This is not possible, since $\mathcal F$ is complex-hyperbolic. \item We have that $b(u,v)=vb'(u,v)$. Then $\phi^*{\mathcal F}$ is defined by the non-singular $1$-form $du+ub'(u,v)dv$. We know that there is a unit $U(u,v)$ such that $du+ub'(u,v)dv=d(uU(u,v))$ and we take new local coordinates $u^*=uU(u,v)$ and $v$. We have that $\phi^*{\mathcal F}=(du^*=0)$ and $\phi(v=0)$ is the origin. This implies that $\mathcal F$ is dicritical, but this is not possible since it is a generalized hypersurface. \end{enumerate} Assume now that we are in situation ii). We are going to show the existence of a non-singular hypersurface $H$ having normal crossings with $E$ such that we get a simple corner for $\mathcal F$ with respect to $E\cup H$. Note that $H$ should have an equation $x_n=f(x_1,x_2,\ldots,x_{n-1})$. If we find such an $H$, we are done, since we are in situation i) with respect to $E\cup H$. In particular, we are done when $x_n$ divides $a_i$ for $i=1,2,\dots,n-1$. Let us rename the variables as $$ \mathbf{y}=(y_1,y_2,\ldots,y_{n-1})=(x_1,x_2,\ldots,x_{n-1}),\quad z=x_n. $$ We end the proof by providing a coordinate change $z^*=z-f(\mathbf{y})$ with the property that $z^*=0$ is an invariant hypersurface of $\mathcal F$. The existence of such a coordinate change depends on the absence of certain resonances in $\eta$, which is guaranteed by the hypothesis that $\mathcal F$ is a generalized hypersurface.
Let us make this precise. We write \begin{equation} \label{eq:omega} \eta=z(\omega_0+\tilde\omega)+\omega'+dz, \end{equation} where $\omega_0=\sum_{i=1}^{n-1}\mu_i{dy_i}/{y_i}$, $\tilde\omega=\sum_{i=1}^{n-1}\tilde a_i(\mathbf{y},z){dy_i}/{y_i}$ and $\omega'=\sum_{i=1}^{n-1} a'_i(\mathbf{y}){dy_i}/{y_i}$, with $\mu_i\in {\mathbb C}$, $\tilde a_i(\mathbf{0}, 0)=0$ and $a'_i(\mathbf{0})=0$, for $i=1,2,\ldots,n-1$. \begin{lemma} \label{lema:noresonanciatotal} In the above situation, for any $ \mathbf{m}=(m_1,m_2,\ldots,m_{n-1})\in {\mathbb Z}^{n-1}_{\geq 0}$, with $\mathbf{m}\not=\mathbf{0}$, we have that $\omega_0+\sum_{i=1}^{n-1}m_i{dy_i}/{y_i}\ne 0$. \end{lemma} \begin{proof} Let us reason by contradiction, assuming that there is $\mathbf{m}\in {\mathbb Z}^{n-1}_{\geq 0}$, with $\sum_{i=1}^{n-1}m_i=m>0$, such that $\omega_0=-\sum_{i=1}^{n-1}m_i{dy_i}/{y_i}$. Consider the map $$ \phi:({\mathbb C}^2,0)\rightarrow ({\mathbb C}^n,0) $$ given by $z=u$ and $y_i=v$, for $i=1,2,\ldots,n-1$. Then we have $$ \phi^*\eta= (-mu+ h(v)+ug(u,v))\frac{dv}{v}+du , $$ where $g(0,0)=0$ and $h(0)=0$. In particular, this singularity is a pre-simple singularity in dimension two (non-nilpotent linear part) that is not simple, since we have the resonance $1\cdot(-m)+m\cdot 1=0$. These singularities are either dicritical or they have a hidden saddle-node \cite{Can-C-D}. This is the desired contradiction. \end{proof} Let us perform the coordinate change $z\mapsto z^*$ as a Krull limit, in the following way. Assume that the order $\nu_0(\omega')$ of the coefficients of $\omega'$ is $\nu_0(\omega')=m>0$ and let us write $$ \omega'= \bar\omega+\omega'', $$ where $\nu_0(\omega'')>m$ and the coefficients of $\bar\omega$ are homogeneous of degree $m$.
We are going to show that there is a homogeneous polynomial $p_m(\mathbf{y})$ of degree $m$ such that if $z^{(m)}=z-p_m(\mathbf{y})$, then $$ \eta=z^{(m)}\left\{\omega_0+\tilde\omega^{(m)}\right\}+{\omega'}^{(m)}+dz^{(m)}, $$ with the same structure as in Equation \ref{eq:omega} but with $\nu_0({\omega'}^{(m)})>m$. Now, we are done by taking the Krull limit of the $z^{(m)}$. Of course, we obtain a formal invariant hypersurface $z^*=0$, but we know that all the formal invariant hypersurfaces of generalized hypersurfaces are in fact convergent ones and thus we are done. Now, looking at the $\mathbf{y}$-homogeneous part of degree $m$ in the Frobenius integrability condition $\eta\wedge d\eta=0$, we have that $ d\bar\omega=\bar\omega\wedge\omega_0 $. Write $\eta$ in the coordinates $\mathbf{y},z^{(m)}$: $$ \eta=z^{(m)}(\omega_0+\tilde\omega)+(p_m\tilde\omega+\omega'')+dz^{(m)} + (p_m\omega_0+\bar\omega + dp_m). $$ If there is $p_m$ such that $p_m\omega_0+\bar\omega + dp_m=0$, then we are done. Let us write $$ \bar\omega=\sum_{\vert\mathbf{m}\vert=m}\mathbf{y}^\mathbf{m} \bar\omega_\mathbf{m}, \quad d\bar\omega=\sum_{\vert\mathbf{m}\vert=m}\mathbf{y}^\mathbf{m}\frac{d \mathbf{y}^\mathbf{m}}{\mathbf{y}^\mathbf{m}}\wedge\bar\omega_\mathbf{m}, $$ where the $1$-forms $\bar\omega_\mathbf{m}$ have constant coefficients. Moreover $$ \bar\omega\wedge\omega_0= \sum_{\vert\mathbf{m}\vert=m}\mathbf{y}^\mathbf{m} \bar\omega_\mathbf{m}\wedge\omega_0. $$ Since $ d\bar\omega=\bar\omega\wedge\omega_0 $, we conclude that $$ \left\{ ({d \mathbf{y}^\mathbf{m}}/{\mathbf{y}^\mathbf{m}})+\omega_0\right\}\wedge \bar\omega_\mathbf{m}=0, \quad \mbox{ for all } \mathbf{m}\in {\mathbb Z}^{n-1}_{\geq 0} , \mbox{ with } \vert{\mathbf{m}}\vert=m. $$ By Lemma \ref{lema:noresonanciatotal}, we know that $ 0\ne d \mathbf{y}^\mathbf{m}/\mathbf{y}^\mathbf{m}+\omega_0 $. 
Hence, there are constants $c_\mathbf{m}\in {\mathbb C}$ such that $$ \bar\omega_\mathbf{m}+c_\mathbf{m} \left\{ ({d \mathbf{y}^\mathbf{m}}/{\mathbf{y}^\mathbf{m}})+\omega_0\right\}=0, \quad \mbox{ for all } \mathbf{m}\in {\mathbb Z}^{n-1}_{\geq 0} , \mbox{ with } \vert{\mathbf{m}}\vert=m. $$ Taking $p_m=\sum_{\vert\mathbf{m}\vert=m}c_{\mathbf{m}}\mathbf{y}^{\mathbf{m}}$, we obtain that $p_m\omega_0+\bar\omega + dp_m=0$ and we are done.
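To illustrate the step above in the simplest situation, take $n=2$, so that there is a single variable $y$ and $\omega_0=\mu\,{dy}/{y}$ with $\mu\in{\mathbb C}$. If $\bar\omega=a\,y^m\,{dy}/{y}$ with $a\in{\mathbb C}$ and $p_m=c\,y^m$, then
$$
p_m\omega_0+\bar\omega+dp_m=\left\{c(\mu+m)+a\right\}y^m\,\frac{dy}{y},
$$
which vanishes for $c=-a/(\mu+m)$; the denominator is nonzero, since Lemma \ref{lema:noresonanciatotal} gives precisely $\mu+m\ne 0$ in this case.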
\section{Introduction} An intriguing relation between entanglement entropy and partition functions holds in quantum field theories \cite{Calabrese:2004eu}. The R{\'e}nyi entropy $S_n$, a one-parameter generalization of the entanglement entropy $S=S_1$, can be written as \begin{align}\label{REdef} S_n = \frac{1}{1-n} \log \left[ \frac{Z_n}{(Z_1)^n} \right] \ , \end{align} where $Z_n$ is the partition function on the $n$-covering space $\mathbb{M}_n$ around an entangling surface $\Sigma$. If quantum field theories in $d$ dimensions have conformal symmetry and the entangling surface is a $(d-2)$-dimensional hypersphere, $\Sigma=\mathbb{S}^{d-2}$, $\mathbb{M}_n$ becomes the $n$-covering space of a $d$-sphere \cite{Casini:2011kv}. In particular, the entanglement entropy $S$ obtained in the $n\to 1$ limit is equal to the free energy on $\mathbb{S}^d$, $S = \log Z_1\equiv -F$. This relation makes it easier to calculate the R{\'e}nyi entropy for free fields \cite{Klebanov:2011uf}, but cannot be applied to supersymmetric theories because the conical singularity at $\Sigma$ breaks supersymmetries. To remedy the situation, a supersymmetric extension of the R{\'e}nyi entropy was introduced in \cite{Nishioka:2013haa} by modifying \eqref{REdef} to \begin{align}\label{SRE_def} S_n^\text{susy} = \frac{1}{1-n} \log \left| \frac{Z_n^\text{susy}}{(Z_1)^n} \right| \ , \end{align} where $Z_n^\text{susy}$ is the supersymmetric partition function on the $n$-covering $d$-sphere with an $R$-symmetry background field to preserve supersymmetries. ${\cal N}=2$ superconformal field theories in $d=3$ dimensions are considered in \cite{Nishioka:2013haa}, and the matrix model representations of the supersymmetric R{\'e}nyi entropies are derived by using the localization method \cite{Kapustin:2009kz,Jafferis:2010un,Hama:2010av,Hama:2011ea,Imamura:2011wg}.
The gravity dual of the supersymmetric R{\'e}nyi entropy is constructed in \cite{Huang:2014gca,Nishioka:2014mwa} and turns out to be a supersymmetric AdS$_4$ charged topological black hole. The holographic computation precisely agrees with the large-$N$ limit of the supersymmetric R{\'e}nyi entropy both with and without Wilson loop operators. Similar constructions have been carried out recently in $d=4$ dimensions by \cite{Huang:2014pda,Crossley:2014oea} extending the works \cite{Pestun:2007rz,Hama:2012bg} for ${\cal N}=2$ supersymmetric gauge theories on $\mathbb{S}^4$. The four-dimensional supersymmetric R{\'e}nyi entropy has a logarithmic divergence with a coefficient determined by the $a$-anomaly of the Weyl symmetry. The objective of this paper is to introduce the supersymmetric R{\'e}nyi entropy \eqref{SRE_def} for ${\cal N}=1$ supersymmetric gauge theories in five dimensions and derive their matrix model representations by the localization technique. We will construct the theories by taking the rigid limit \cite{Festuccia:2011ws} of the ${\cal N}=1$ five-dimensional supergravity \cite{Kugo:2000hn,Kugo:2000af}. In some aspects, this can be regarded as a theoretical challenge to define field theories on a singular manifold with supersymmetry preserved. The partition functions of supersymmetric gauge theories are calculated on a round five-sphere in \cite{Kallen:2012cs,Hosomichi:2012ek,Kallen:2012va,Kim:2012ava,Kallen:2012zn,Jafferis:2012iv} and on a squashed five-sphere in \cite{Imamura:2012xg,Imamura:2012bm,Lockhart:2012vp,Kim:2012qf}. More general five-dimensional manifolds admitting rigid supersymmetry are explored in \cite{Kawano:2012up,Kim:2012gu,Terashima:2012ra,Fukuda:2012jr,Yagi:2013fda,Lee:2013ida,Cordova:2013cea,Pan:2013uoa,Imamura:2014ima}. We will show that the Killing spinor equations and additional equations for the Killing spinors can be solved on the resolved space of the $n$-covering five-sphere. 
Using the supersymmetry generated by the solution of the Killing spinor equations, we perform the localization computation for the partition function on the resolved sphere, which is the Hopf fibration over a deformed $\mathbb{C}\mathbb{P}^2$. There are three fixed points on $\mathbb{C}\mathbb{P}^2$ for the $U(1)^2$ actions inside the $U(1)\times SO(4)$ symmetry of the resolved sphere. We notice that the resolved five-sphere can be identified with the squashed five-sphere with the squashing parameters $(\omega_1, \omega_2, \omega_3)= (1/n, 1, 1)$ near the fixed points. Translating the results for the squashed sphere \cite{Imamura:2012xg,Imamura:2012bm,Lockhart:2012vp,Kim:2012qf}, we obtain the perturbative partition function on the $n$-covering five-sphere. We evaluate the supersymmetric R{\'e}nyi entropy \eqref{SRE_def} in the large-$N$ limit of ${\cal N}=1$ $USp(2N)$ superconformal gauge theories and reveal an $n$-dependence that satisfies several inequalities that the usual R{\'e}nyi entropy does. We also consider the addition of a Wilson loop preserving supersymmetry, which can be physically interpreted as the insertion of a quark inside an entangling surface \cite{Lewkowycz:2013laa}. The variation of the supersymmetric R{\'e}nyi entropy by the loop turns out to be independent of the parameter $n$. Finally, we construct the gravity dual of the five-dimensional supersymmetric R{\'e}nyi entropy as a solution of the Romans $F(4)$ supergravity \cite{Romans:1985tw}. It is a supersymmetric charged topological black hole in the Euclidean AdS$_6$ space whose boundary is $\mathbb{S}^1 \times \mathbb{H}^4$.\footnote{Non-supersymmetric charged topological black holes are studied as gravity duals of charged R{\'e}nyi entropies in \cite{Belin:2013uta,Belin:2013dva,Pastras:2014oka,Belin:2014mva}.} We compute the holographic free energy following \cite{Alday:2014rxa,Alday:2014bta} and the expectation value of the holographic Wilson loop.
We find that the holographic supersymmetric R{\'e}nyi entropy and its variation due to the loop perfectly agree with the results of the dual field theory in the large-$N$ limit. \bigskip \noindent {\bf Note added:} While this work was being completed, we became aware of the paper \cite{Alday:2014fsa}, which has a substantial overlap with this paper. \section{Rigid ${\cal N}=1$ supersymmetry in five dimensions} The ${\cal N}=1$ supergravity coupled to Yang-Mills and matter fields in five dimensions is constructed in \cite{Kugo:2000hn}. This theory has an $SU(2)_R$ $R$-symmetry and the Weyl multiplet consists of the vielbein $e^a_\mu$, the graviphoton ${\cal A}_\mu$, the $SU(2)_R$ gauge field $V_\mu^{IJ}$, the $SU(2)_R$ triplet scalar field $t^{IJ}$, the dilaton $\alpha$, the real anti-symmetric tensor $v_{ab}$, the real scalar $C$, the $SU(2)_R$-Majorana gravitino $\psi^I_\mu$ and the $SU(2)_R$-Majorana fermion $\chi^I$. Since the dilaton does not change under the supersymmetry transformation, we can fix the dilatational symmetry by $\alpha=1$. In addition, we only consider theories without central charge, which allows us to {\it turn off the graviphoton ${\cal A}_\mu=0$}. This simplifies the construction of supersymmetric field theories on a curved space from the ${\cal N}=1$ supergravity as we will see below.
Following \cite{Festuccia:2011ws}, we take the rigid limit of the ${\cal N}=1$ supergravity by setting the gravitino $\psi_\mu^I$ and the fermion $\chi^I$ and their variations to be zero while letting the space-time be curved: \begin{align}\label{KSE} \begin{aligned} \delta \psi_\mu^I &= \nabla_\mu \xi^I - i t^I_{~J} \Gamma_\mu \xi^J - (V_\mu)^I_{~J} \xi^J + \frac{i}{2}v^{\nu\rho}\Gamma_{\mu\nu\rho} \xi^I = 0 \ , \\ \delta \chi^I &= \frac{i}{2}\Gamma^\mu \xi^I \nabla^\nu v_{\mu\nu} + \frac{i}{2}D_\mu t^I_{~J} \Gamma^\mu \xi^J + v_{\mu\nu} \Gamma^{\mu\nu} t^I_{~J}\xi^J + \frac{1}{2}C\xi^I = 0\ , \end{aligned} \end{align} where $\nabla_\mu$ is the covariant derivative with respect to the Lorentz index, and \begin{align} \begin{aligned} \nabla_\mu v_{ab} &= \partial_\mu v_{ab} + \omega_{\mu\, a}^{~~~c} v_{cb} - \omega_{\mu\, b}^{~~~c} v_{ca} \ , \\ D_\mu t^{I}_{~J} &= \partial_\mu t^{I}_{~J} - (V_{\mu})_{~K}^{I} t^{K}_{~J} + (V_{\mu})^{K}_{~J} t^{I}_{~K} \ . \end{aligned} \end{align} In this limit, the supersymmetry transformations of the other background fields automatically vanish. 
\subsection{Supersymmetry algebra and multiplets} The ${\cal N}=1$ supersymmetry algebra in five dimensions is given by \begin{align}\label{SUSYAlgebra} \begin{aligned} \{ \delta_{\xi_1} , \delta_{\xi_2} \} = v^\mu D_\mu + \delta_M(\Theta_{ab}) + \delta_R(R^{IJ}) + \delta_G(\gamma)\ , \end{aligned} \end{align} where $D_\mu$ is the covariant derivative with respect to the gauge symmetry and translation, $v_\mu$, $\Theta_{ab}$, $R^{IJ}$ and $\gamma$ are parameters for the translation, Lorentz rotation, $SU(2)_R$-symmetry and gauge symmetry transformations\footnote{We contract the $SU(2)_R$ indices from northwest to southeast direction when the indices are suppressed.} \begin{align} \begin{aligned} v_\mu &= 2\xi_1 \Gamma_\mu \xi_2 \ , \\ \Theta_{ab} &= 2i (\xi_1 \Gamma_{abcd}\xi_2) v^{cd} - 2i(\xi_1^I\Gamma_{ab}\xi_2^J + \xi_1^J\Gamma_{ab}\xi_2^I)t_{IJ} \ , \\ R^{IJ} &= 6i (\xi_1 \xi_2) t^{IJ} +2i (\xi_1^I\Gamma_{ab}\xi_2^J + \xi_1^J\Gamma_{ab}\xi_2^I)v^{ab} \ , \\ \gamma &= -2i (\xi_1 \xi_2) \sigma \ . \end{aligned} \end{align} \paragraph{Vector multiplet.} The vector multiplet contains the gauge field $A_\mu$, a real scalar $\sigma$, an $SU(2)_R$ triplet scalar $Y_{IJ}$ and an $SU(2)_R$-Majorana fermion $\lambda^I$. They transform under the supersymmetry as\footnote{We can redefine the triplet scalar $Y_{IJ}$ in \cite{Zucker:1999ej,Kugo:2000hn,Kugo:2000af,Fujita:2001kv} as $D_{IJ} = 2Y_{IJ} - 2t_{IJ} \sigma$ to make contact with \cite{Hosomichi:2012ek}.} \begin{align}\label{VM_transform} \begin{aligned} \delta A_\mu &= -2 \xi\Gamma_\mu\lambda \ , \\ \delta \sigma &= 2i \xi \lambda \ , \\ \delta \lambda^I &= \frac{1}{4}\Gamma^{\mu\nu} \xi^I F_{\mu\nu} - \frac{i}{2}\Gamma^\mu \xi^I D_\mu \sigma - Y^{IJ}\xi_J \ , \\ \delta Y^{IJ} &= - \xi^{I}\Gamma^\mu D_\mu \lambda^{J} + \frac{i}{2} v^{\mu\nu} \xi^{I}\Gamma_{\mu\nu}\lambda^{J} + i t^J_{~K} \xi^I \lambda^K + 2i\, t^{IJ} \xi \lambda -i [\sigma, \xi^I \lambda^J] + (I\leftrightarrow J) \ . 
\end{aligned} \end{align} Here $F_{\mu\nu} = \nabla_\mu A_\nu - \nabla_\nu A_\mu - [A_\mu, A_\nu]$. The covariant derivatives are \begin{align} \begin{aligned} D_\mu \sigma &= \partial_\mu \sigma - [A_\mu, \sigma] \ , \\ D_\mu \lambda^I &= \nabla_\mu \lambda^I - (V_\mu)^I_{~J} \lambda^J - [A_\mu , \lambda^I] \ . \end{aligned} \end{align} To put them into cohomological forms, we redefine the gaugino as \cite{Kallen:2012cs} \begin{align} \lambda^I = \frac{1}{4}\left( -\xi^I v^\mu\Psi_\mu - \Gamma^\mu\xi^I \Psi_\mu - \Gamma^{\mu\nu}\xi^I \chi_{\mu\nu} \right) \ , \end{align} where \begin{align} \Psi_\mu = -2\xi\Gamma_\mu \lambda \ , \qquad \chi_{\mu\nu} = \xi\Gamma_{\mu\nu} \lambda\ . \end{align} Then the supersymmetry transformation law \eqref{VM_transform} becomes \begin{align} \begin{aligned} \delta A_\mu &= \Psi_\mu \ , \\ \delta \Psi_\mu &= v^\nu F_{\nu\mu} + i D_\mu \sigma \ , \\ \delta \sigma &= -i v^\mu \Psi_\mu \ , \\ \delta \chi_{\mu\nu} &\equiv H_{\mu\nu}=\frac{1}{4}(\xi^I \Gamma_{\mu\nu\rho\sigma}\xi_I)F^{\rho \sigma}-\frac{1}{2}F_{\mu\nu}-\frac{i}{2}(v_\mu D_\nu \sigma - v_\nu D_\mu \sigma)+ (\xi^I \Gamma_{\mu\nu}\xi^J)Y_{IJ} \ , \\ \delta H_{\mu\nu} &= v^\rho D_\rho \chi_{\mu\nu} + i [\sigma, \chi_{\mu\nu}] +\Theta_\mu {}^\rho \chi_{\nu\rho}-\Theta_\nu {}^\rho \chi_{\mu\rho}\ . \end{aligned} \end{align} The supersymmetric action of the vector multiplet is \begin{align} \begin{aligned} {\cal L}_\text{YM} &= - \frac{2}{g^2}\text{Tr} \bigg[ \frac{1}{4} F_{\mu\nu}F^{\mu\nu} - \frac{1}{2}D_\mu \sigma D^\mu \sigma - Y_{IJ}Y^{IJ} + 2 \sigma (2 t_{IJ}Y^{IJ} - F_{\mu\nu}v^{\mu\nu}) +2(C-4t_{IJ}t^{IJ})\sigma^2 \\ &\qquad \qquad \quad +2\lambda \Gamma^\mu D_\mu \lambda - i \lambda^I (\epsilon_{IJ} \Gamma_{\mu\nu}v^{\mu\nu} - 2t_{IJ})\lambda^J - 2i \sigma [\lambda, \lambda] \bigg] \ , \end{aligned} \end{align} where $g$ is the gauge coupling constant. 
The Chern-Simons term can be added in five dimensions\footnote{Note that there is no imaginary unit as an overall factor because the gauge field $A$ is anti-hermitian in our convention.} \begin{align} {\cal L}_\text{CS} (A) &= \frac{k}{24\pi^2} \text{Tr} \left[A\wedge dA \wedge dA + \frac{3}{2}A \wedge A\wedge A \wedge dA + \frac{3}{5} A\wedge A\wedge A\wedge A\wedge A\right] \ , \end{align} whose supersymmetric completion is \cite{Kallen:2012cs} \begin{align} {\cal L}_\text{SCS} = {\cal L}_\text{CS}(A - i\sigma \kappa) - \frac{k}{8\pi^2} \text{Tr} \left[ \Psi \wedge \Psi \wedge \kappa \wedge F(A - i\sigma \kappa) \right] \ , \end{align} where $\kappa \equiv v_\mu dx^\mu$ is the one-form dual to the Killing vector, and $\Psi\equiv \Psi_\mu dx^\mu$. \paragraph{Hypermultiplet.} For hypermultiplets, the supersymmetry algebra can be realized only on-shell. For localization, however, we need only one supercharge $\delta$ with unit norm $\xi\xi = 1$, which satisfies \begin{align}\label{PSUSYAlg} \delta^2 = v^\mu D_\mu + \delta_M(\Theta_{ab}) + \delta_R(R^{IJ}) + \delta_G(\gamma) \ , \end{align} where \begin{align} \begin{aligned} v_\mu &= \xi \Gamma_\mu \xi \ , \\ \Theta_{ab} &= i (\xi \Gamma_{abcd}\xi) v^{cd} - 2i(\xi^I\Gamma_{ab}\xi^J) t_{IJ} \ , \\ R^{IJ} &= 3i t^{IJ} +2i (\xi^I\Gamma_{ab}\xi^J)v^{ab} \ , \\ \gamma &= -i \sigma \ , \end{aligned} \end{align} with the supersymmetry transformation law modified by introducing an auxiliary field \cite{Hosomichi:2012ek}. They satisfy \begin{align} v^\mu\Gamma_\mu \xi_I = \xi_I \ , \qquad v_\mu v^\mu = 1 \ , \qquad v^a \Theta_{ab} = 0 \ , \qquad \nabla_\mu v_\nu = -\Theta_{\mu\nu} \ . \end{align} The hypermultiplet consists of a scalar $q_A^I$, a fermion $\psi_A$ and an auxiliary field $F^A_I$ with flavor indices $A=1,2,\cdots, 2r$ for $r$ hypermultiplets.
The index $A$ is raised and lowered with a $2r\times 2r$ antisymmetric matrix $\Omega_{AB}$ as \begin{align} q^I_A = q^{IB} \Omega_{BA} \ , \qquad q^{IA} = \Omega^{AB} q^I_B \ , \qquad\Omega^{AB}\Omega_{BC} = - \delta^A_C \ . \end{align} The reality conditions are imposed by \begin{align}\label{reality_condition} \begin{aligned} (q^I_A)^\ast &= \epsilon_{IJ} \Omega^{AB} q^J_B \ , \qquad (\psi_{A}^\alpha)^\ast = \Omega^{AB} \psi_B^\beta {\cal C}_{\beta\alpha} \ , \\ (F^I_A)^\ast &= \epsilon_{IJ} \Omega^{AB} F^J_B \ , \qquad \Omega_{AB} = \Omega^{AB} \ . \end{aligned} \end{align} The generators of the Lie group have one upper and one lower index of the flavor symmetry, $\sigma^A_{~B}$ for example. Choosing the invariant tensor $\Omega$ of $Sp(2r)$ to be \begin{align} \Omega_{AB} = \Omega^{AB} = \left( \begin{array}{cc} 0 & {\bold 1}_{r} \\ -{\bold 1}_r & 0 \end{array}\right) \ , \end{align} the scalar field $q^I_A$ satisfying the reality condition \eqref{reality_condition} is written as a two-component vector \begin{align} q^1 = \left(\begin{array}{c} \phi_+ \\ \phi_- \end{array}\right) \ , \qquad q^2 = \left(\begin{array}{c} -\phi_-^\ast \\ \phi_+^\ast \end{array}\right) \ , \end{align} where $\phi_+$ and $\phi_-$ are in the fundamental and anti-fundamental representations of $SU(r)$, respectively. The fermion $\psi_A$ is similarly decomposed as \begin{align} \psi_A^\alpha = \left( \begin{array}{c} \psi_{\hat A}^\alpha \\ - \psi^\ast_{\alpha \hat B} \end{array}\right) \ , \end{align} as a vector with spinors $\psi_{\hat A}$ in the fundamental representation of $SU(r)$.
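As a quick sanity check of these conventions, the $r=1$ case can be verified numerically. The following sketch (variable names are ours) checks $\Omega^{AB}\Omega_{BC}=-\delta^A_C$ and the reality condition for the doublet $(q^1,q^2)$, assuming the sign convention $\epsilon_{12}=+1$.

```python
import numpy as np

# invariant tensor of Sp(2r) for r = 1, as in the text
Omega = np.array([[0, 1], [-1, 0]], dtype=complex)

# Omega^{AB} Omega_{BC} = -delta^A_C
assert np.allclose(Omega @ Omega, -np.eye(2))

# a sample scalar doublet as in the text, with random complex phi_+ and phi_-
rng = np.random.default_rng(0)
phip, phim = rng.normal(size=2) + 1j * rng.normal(size=2)
q1 = np.array([phip, phim])
q2 = np.array([-np.conj(phim), np.conj(phip)])

# reality condition (q^I_A)^* = eps_{IJ} Omega^{AB} q^J_B,
# with eps_{12} = +1 (an assumption of this check)
assert np.allclose(np.conj(q1), Omega @ q2)
assert np.allclose(np.conj(q2), -(Omega @ q1))
print("Sp(2) reality conditions verified")
```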
The off-shell supersymmetry transformations realizing \eqref{PSUSYAlg} are found to be \begin{align}\label{HypSusyTr} \begin{aligned} \delta q^I_A &= 2i \xi^I \psi_A \ , \\ \delta \psi_A &= - i(D_\mu q^I_A) \Gamma^\mu \xi_I - 3t^I_{~J}\xi_I q^J_A + \xi^I \sigma_{AB} q_I^B - v_{\mu\nu} \Gamma^{\mu\nu} \xi_I q^I_A + F^I_A {\check \xi}_I\ , \\ \delta F^I_A &= -2 {\check \xi}^I \left( \Gamma^\mu D_\mu \psi_A + \frac{i}{2} v_{\mu\nu} \Gamma^{\mu\nu} \psi_A + i \sigma_{AB} \psi^B + 2i (\lambda^J)_{AB} q_J^B \right) \ , \end{aligned} \end{align} where the checked parameter $\check \xi^I$ satisfies \cite{Hosomichi:2012ek} \begin{align} \begin{aligned} \xi^I \xi_I = {\check \xi}^I {\check \xi}_I \ , \qquad \xi_I {\check \xi}_J = 0 \ , \qquad \xi^I\Gamma_\mu \xi_I + {\check \xi}^I\Gamma_\mu {\check \xi}_I = 0 \ , \end{aligned} \end{align} and the covariant derivatives are \begin{align} \begin{aligned} D_\mu q^I &= \partial_\mu q^I - (V_\mu)^I_{~J} q^J - A_\mu q^I \ , \\ D_\mu \psi &= \nabla_\mu \psi - A_\mu \psi \ . \end{aligned} \end{align} We rewrite the transformation laws \eqref{HypSusyTr} with the fermionic variables \cite{Kallen:2012va} \begin{align} q_A \equiv \xi_I q^I_A \ , \qquad \psi_A^\pm \equiv P_\pm \psi_A \ , \end{align} which satisfy the ``chirality" conditions $P_+ q_A = q_A$ with the projection operators $P_\pm \equiv \frac{1\pm v^\mu \Gamma_\mu}{2}$. For a shifted auxiliary field $\tilde F_A \equiv \check \xi_I \tilde F^I_A$ ($P_- \tilde F_A = \tilde F_A$), the supersymmetry transformations of the fields $(q_A, \psi_A^\pm, \tilde F_A)$ are recast in the following form: \begin{align} \begin{aligned} \delta q_A &= -i \psi_A^+ \ , \\ \delta \psi_A^+ &= i \left( v^\mu D_\mu \delta_A^B + i \sigma_A^{~B} \right) q_B + \frac{i}{4}\Theta_{ab}\Gamma^{ab} q_A \ , \\ \delta \psi_A^- &= \tilde F_A \ , \\ \delta \tilde F_A &= \left( v^\mu D_\mu \delta_A^B + i \sigma_A^{~B} \right)\psi_B^- + \frac{1}{4}\Theta_{ab}\Gamma^{ab} \psi_A^-\ . 
\end{aligned} \end{align} The matter lagrangian reads \cite{Kugo:2000af}\footnote{Although our off-shell supersymmetry transformation \eqref{HypSusyTr} differs from theirs in \cite{Kugo:2000af}, the off-shell lagrangian still closes. } \begin{align} \begin{aligned} {\cal L}_\text{matter} &= D_\mu \bar q D^\mu q + \left(v_{\mu\nu}v^{\mu\nu} + 2t_{IJ}t^{IJ} - C - \frac{R}{4}\right)\bar q q - \bar q^I (t_{IK}t^K_{~J} + 2\sigma t_{IJ} - \sigma^2 \epsilon_{IJ} - 2Y_{IJ})q^J \\ &\qquad + 2\bar\psi \Gamma^\mu D_\mu \psi + i \bar\psi (\Gamma_{\mu\nu} v^{\mu\nu} + 2 \sigma )\psi - 8i \bar q \lambda \psi - \bar F F \ , \end{aligned} \end{align} where the flavour indices $A,B$ are contracted from northeast to southwest. \subsection{Resolved space} We are interested in ${\cal N}=1$ supersymmetric field theories on a branched $n$-covering of five-sphere. To treat the conical singularity we replace it with a resolved space whose metric is given by \begin{align}\label{Resolved} ds^2 = \frac{d\theta^2}{f(\theta)} + n^2 \sin^2\theta d\tau^2 + \cos^2\theta \,ds_{\mathbb{S}^3}^2 \ , \end{align} where $f(\theta)$ is a smooth function behaving as \begin{align} f(\theta) = \left\{ \begin{array}{cl} 1/n^2 \ , & \qquad \theta = 0 \ , \\ 1 \ , & \qquad \epsilon < \theta < \pi/2 \ , \end{array}\right. \end{align} for $\epsilon\ll 1$. $ds_{\mathbb{S}^3}^2$ is the metric of a three-sphere \begin{align} ds_{\mathbb{S}^3}^2 = \sum_{i=1}^3 e_L^i e_L^i \ . \end{align} The vielbein $e_L^i$ in the left invariant frame and the spin connections satisfy \begin{align} d e^i_L = \epsilon^{ijk}e_L^j \wedge e_L^k \ , \qquad \omega^{ij}_L = \epsilon^{ijk} e_L^k \ . \end{align} They can be parametrized by an element $g$ of $SU(2)$ group \begin{align}\label{SU2} i e_L^i \sigma^i = g^{-1} dg \ , \end{align} with the Pauli matrices $\sigma^i$ $(i=1,2,3)$. 
We choose the vielbein of the resolved space \eqref{Resolved} as \begin{align}\label{Vielbein} e^1 = \frac{d\theta}{\sqrt{f(\theta)}} \ , \qquad e^2 = n\sin\theta\, d\tau \ , \qquad e^{i+2} = \cos\theta \,e^i_L\quad (i=1,2,3) \ , \end{align} and the spin connections are \begin{align}\label{SpinConnection} \omega^{12} = - n\cos\theta \sqrt{f(\theta)}\, d\tau \ , \qquad \omega^{1\, i+2} = \sin\theta \sqrt{f(\theta)} \, e^i_L \ , \qquad \omega^{i+2\, j+2} = \omega_L^{ij} \ . \end{align} \subsection{Relation between resolved space and squashed five-sphere}\label{ss:Relation} A five-sphere is embedded into $\mathbb{C}^3$ by complex coordinates $(z_1,z_2,z_3)$ as $|z_1|^2 + |z_2|^2 + |z_3|^2 =1$. It has $U(1)^3$ symmetry acting on the coordinates as \begin{align}\label{S5_sym} (z_1,z_2,z_3) \to (e^{ia_1}z_1,e^{ia_2}z_2,e^{ia_3}z_3) \ . \end{align} The five-sphere has the Hopf fiber representation as the $U(1)$ fibration over $\mathbb{C}\mathbb{P}^2$. The translation along the $U(1)$ fiber is described by the overall $U(1)$ phase rotation $a_1=a_2=a_3$ in \eqref{S5_sym}. There are three fixed points of the $U(1)^2$ symmetry on the base $\mathbb{C}\mathbb{P}^2$ at $(z_1,z_2,z_3) = (1,0,0),\,(0,1,0),\,(0,0,1)$. Let us introduce new coordinates by \begin{align} \begin{aligned} z_1 &= \sin\theta \,e^{i n \tau} \ , \\ z_2 &= \cos\theta \cos\frac{\phi}{2} \,e^{i(\chi + \xi)/2} \ , \\ z_3 &= \cos\theta \sin\frac{\phi}{2} \,e^{i(\chi - \xi)/2} \ , \end{aligned} \end{align} where the ranges of the angles are taken to be $0\le \theta < \pi/2,\, 0\le \tau < 2\pi, \,0\le\phi< \pi,\, 0\le \chi< 4\pi$ and $0\le \xi < 2\pi$. The metric is locally that of a five-sphere, but globally it is that of the $n$-branched cover \begin{align}\label{SingS5} \begin{aligned} ds^2 &= d\theta^2 + n^2 \sin^2 \theta d\tau^2 + \cos^2\theta ds_{\mathbb{S}^3}^2 \ , \\ ds_{\mathbb{S}^3}^2 &= \frac{1}{4}\left[ d\phi^2 + \sin^2\phi \,d\xi^2 + (d\chi + \cos\phi \,d\xi)^2\right] \ .
\end{aligned} \end{align} In this parametrization, the translation along the Hopf fiber is given by the shifts of the angles \begin{align} \tau \to \tau + \frac{a}{n} \ , \qquad \chi \to \chi + 2a \ , \end{align} that is generated by a vector field \begin{align} v^\mu \partial_\mu = \frac{1}{n}\partial_\tau + 2\partial_\chi \ . \end{align} The three fixed points are located at \begin{align} \begin{aligned} (1,0,0): & \quad \theta = \frac{\pi}{2} \ , \\ (0,1,0): & \quad \theta = 0\ , \quad \phi = 0 \ , \\ (0,0,1): & \quad \theta = 0\ , \quad \phi = \pi \ . \end{aligned} \end{align} The vielbein for $\mathbb{S}^3$ are written as \begin{align} \begin{aligned} e^1_L &= \frac{1}{2}(\sin\phi\cos\chi \,d\xi - \sin\chi \,d\phi) \ , \\ e^2_L &= \frac{1}{2}(\sin\chi \sin\phi \,d\xi + \cos\chi \,d\phi) \ , \\ e^3_L &= \frac{1}{2}(d\chi + \cos\phi \,d\xi) \ . \end{aligned} \end{align} Now we consider a deformation of a five-sphere satisfying $|z_1|^2/n^2+ |z_2|^2 + |z_3|^2 = 1$. This is a special case of the squashed five-sphere defined by \begin{align} \omega_1^2 |z_1|^2+ \omega_2^2 |z_2|^2 + \omega_3^2 |z_3|^2 = 1 \ , \end{align} with three squashing parameters chosen as \begin{align}\label{triplesines} (\omega_1, \omega_2, \omega_3) = \left(\frac{1}{n}, 1, 1\right) \ . \end{align} One can parametrize the complex coordinates with the real angles by \begin{align} \begin{aligned} z_1 &= n\sin\theta \,e^{i \tau} \ , \\ z_2 &= \cos\theta \cos\frac{\phi}{2} \,e^{i(\chi + \xi)/2} \ , \\ z_3 &= \cos\theta \sin\frac{\phi}{2} \,e^{i(\chi - \xi)/2} \ , \end{aligned} \end{align} that gives \begin{align} ds^2 &= \frac{ d\theta^2}{f_n(\theta)} + n^2 \sin^2 \theta d\tau^2 + \cos^2\theta ds_{\mathbb{S}^3}^2 \ , \end{align} where $f_n(\theta) = 1/(n^2 \cos^2\theta + \sin^2\theta)$. This space may be regarded as a resolved space with $f(\theta)=f_n(\theta)$ if the partition function does not depend on the choice of $f(\theta)$. 
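As a consistency check (not needed for the argument), one can verify symbolically that the above parametrization of $|z_1|^2/n^2+|z_2|^2+|z_3|^2=1$ induces exactly the metric with $f(\theta)=f_n(\theta)$. The sketch below treats the coordinate differentials as formal commuting symbols and writes $h=\phi/2$.

```python
import sympy as sp

n, th, h = sp.symbols('n theta h', positive=True)   # h stands for phi/2
# formal commuting symbols for the coordinate differentials
dth, dta, dh, dch, dxi = sp.symbols('dtheta dtau dh dchi dxi')

def d(f):
    # formal differential; tau, chi, xi enter only through the phases
    return sp.diff(f, th)*dth + sp.diff(f, h)*dh

# moduli and phase differentials of z1 = n sin(th) e^{i tau},
# z2 = cos(th) cos(h) e^{i(chi+xi)/2}, z3 = cos(th) sin(h) e^{i(chi-xi)/2}
moduli = [n*sp.sin(th), sp.cos(th)*sp.cos(h), sp.cos(th)*sp.sin(h)]
dphase = [dta, (dch + dxi)/2, (dch - dxi)/2]

# flat C^3 metric restricted to the embedding: |dz|^2 = dr^2 + r^2 da^2
induced = sum(d(r)**2 + r**2*da**2 for r, da in zip(moduli, dphase))

# target metric with f_n(theta) = 1/(n^2 cos^2 th + sin^2 th), phi = 2h
ds3 = sp.Rational(1, 4)*((2*dh)**2 + sp.sin(2*h)**2*dxi**2
                         + (dch + sp.cos(2*h)*dxi)**2)
target = (n**2*sp.cos(th)**2 + sp.sin(th)**2)*dth**2 \
         + n**2*sp.sin(th)**2*dta**2 + sp.cos(th)**2*ds3

assert sp.simplify(sp.expand_trig(sp.expand(induced - target))) == 0
print("squashed-sphere metric reproduced")
```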
In appendix \ref{app:OmegaBG}, we show in detail that this identification is possible for evaluating the one-loop partition functions. \subsection{Killing spinor equations} We will solve the Killing spinor equations \eqref{KSE} on the resolved space \eqref{Resolved}. We let the spinors $\xi^I$ be tensor products of spinors $\zeta^I$ in two dimensions and spinors $\eta^I$ in three dimensions \begin{align} \xi^I = \zeta^I \otimes \eta^I \ . \end{align} Correspondingly, the gamma matrices, which are hermitian, $(\Gamma^a)^\dagger = \Gamma^a$, can be written in tensor product forms: \begin{align} \begin{aligned}\label{5dGamma} \Gamma^1 &= \sigma^1\otimes {\bf 1}_2 \ ,\qquad \Gamma^2 = \sigma^2 \otimes {\bf 1}_2 \ , \qquad \Gamma^{i+2} = \sigma^3 \otimes \sigma^i\ , \quad (i=1,2,3) \ . \end{aligned} \end{align} The charge conjugation matrix takes the form \begin{align} {\cal C} = \sigma_1 \otimes (i\sigma_2) \ . \end{align} With the vielbein \eqref{Vielbein} and the spin connections \eqref{SpinConnection}, we find that the background fields \begin{align} \begin{aligned} t^1_{~1} =& -t^2_{~2}= \frac{1}{2}\sqrt{f(\theta)} \ , \qquad v^{12} = \mp i \frac{ \sqrt{f(\theta)} -1 }{2\cos\theta} \ , \\ (V_\mu)^1_{~1} &= - (V_\mu)^2_{~2} = \mp i\frac{n \sqrt{f(\theta)}}{2} \delta_{\mu \tau} \ , \end{aligned} \end{align} solve the first line of the Killing spinor equations \eqref{KSE} with the solutions \begin{align}\label{KSSols} \begin{aligned} \xi^1 & = (e^{\frac{i}{2}\theta \sigma_1} \zeta^1) \otimes \eta_+ \ , &\qquad &\sigma_3 \zeta^1 = \pm \zeta^1 \ , \\ \xi^2 & = (e^{-\frac{i}{2}\theta \sigma_1} \zeta^2) \otimes \eta_- \ , &\qquad &\sigma_3 \zeta^2 = \mp \zeta^2 \ , \end{aligned} \end{align} where $\zeta^{1,2}$ are constant spinors in two dimensions and $\eta_\pm$ are the Killing spinors on a unit three-sphere \begin{align} \left(\partial_i + \frac{i}{2}\sigma_i\right)\eta_\pm = \pm \frac{i}{2}\sigma_i \eta_\pm \ , \qquad (i=1,2,3) \ .
\end{align} The $SU(2)$-Majorana condition leads to the relations \begin{align} \zeta^2 = \sigma_1 \zeta^1 \ , \qquad \eta_- = i\sigma_2 \eta_+ \ , \end{align} which are compatible with the solutions \eqref{KSSols}. The second line of the Killing spinor equations \eqref{KSE} is satisfied by the solutions \eqref{KSSols} if we choose the scalar field $C$ to be \begin{align} C = \frac{1}{4}\cot\theta f'(\theta) - \frac{f(\theta) - \sqrt{f(\theta)}}{2\cos^2\theta} \ . \end{align} \section{Localization} We will localize the infinite-dimensional path integral of the partition function to a finite-dimensional matrix integral by adding a $\delta$-exact term to the action $I \to I + t \delta V$, where the localizing term $\delta V$ is taken to be positive semi-definite. Since the path integral does not depend on the $\delta$-exact term, we let $t$ be large so that the fixed points are given by $\delta V=0$. After determining the fixed point loci for the gauge and matter sectors, we will read off the perturbative partition function on the resolved space by identifying it with the squashed five-sphere at the three fixed points on the base $\mathbb{C}\mathbb{P}^2$.
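The tensor-product gamma matrices \eqref{5dGamma} and the projection operators $P_\pm = \frac{1\pm v^\mu\Gamma_\mu}{2}$ introduced for the hypermultiplet can be checked numerically; the following minimal sketch (with an arbitrary unit vector $v$ of our choosing) verifies hermiticity, the Clifford algebra $\{\Gamma^a,\Gamma^b\}=2\delta^{ab}$, and the projector property.

```python
import numpy as np

s = [np.array([[0, 1], [1, 0]], dtype=complex),    # sigma^1
     np.array([[0, -1j], [1j, 0]]),                # sigma^2
     np.array([[1, 0], [0, -1]], dtype=complex)]   # sigma^3
one = np.eye(2, dtype=complex)

# Gamma^1 = s1 x 1, Gamma^2 = s2 x 1, Gamma^{i+2} = s3 x s^i
G = [np.kron(s[0], one), np.kron(s[1], one)] + [np.kron(s[2], si) for si in s]

for a in range(5):
    assert np.allclose(G[a], G[a].conj().T)        # hermiticity
    for b in range(5):
        assert np.allclose(G[a] @ G[b] + G[b] @ G[a],
                           2*np.eye(4)*(a == b))   # Clifford algebra

# P_pm = (1 pm v^mu Gamma_mu)/2 are orthogonal projectors for unit v
v = np.array([0.3, 0.1, 0.2, 0.4, 0.0]); v /= np.linalg.norm(v)
vG = sum(vi*Gi for vi, Gi in zip(v, G))
Pp, Pm = (np.eye(4) + vG)/2, (np.eye(4) - vG)/2
assert np.allclose(Pp @ Pp, Pp) and np.allclose(Pm @ Pm, Pm)
assert np.allclose(Pp @ Pm, np.zeros((4, 4)))
print("gamma matrices and projectors check out")
```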
\subsection{Gauge sector} A localization term for the gauge sector is given similarly to \cite{Hosomichi:2012ek} by \begin{align} V_\text{gauge} = -4\text{Tr}\left[ (\delta\lambda)^\dagger \lambda\right] \ , \end{align} whose supersymmetry variation yields \begin{align}\label{Gauge_Loc} \begin{aligned} \delta V_\text{gauge}\big|_\text{boson} = - \text{Tr}\Bigg[ &\frac{1}{4}\left( F_{\mu\nu} + \frac{1}{2}\varepsilon_{\mu\nu\rho\sigma\kappa} v^\rho F^{\sigma\kappa} \right)\left( F^{\mu\nu} + \frac{1}{2}\varepsilon^{\mu\nu\rho\sigma\kappa} v_\rho F_{\sigma\kappa} \right) \\ &\qquad + \frac{1}{2}(v^\rho F_{\rho\mu})(v_\rho F^{\rho\mu})- D_\mu\sigma D^\mu \sigma + 2 Y_{IJ} Y^{IJ} \Bigg] \ , \end{aligned} \end{align} where we let the hermitian conjugates of $\sigma$ and $Y_{IJ}$ be $\sigma^\dagger = - \sigma$ and $Y^\dagger_{IJ} = Y^{IJ}$. The saddle point of the localization term \eqref{Gauge_Loc} is \begin{align}\label{saddlepts} F_{\mu\nu} = - \frac{1}{2}\varepsilon_{\mu\nu\rho\sigma\kappa} v^\rho F^{\sigma\kappa} \ , \qquad v^\rho F_{\rho\mu}=0 \ , \qquad D_\mu \sigma = 0 \ , \qquad Y_{IJ} = 0 \ . \end{align} In the zero instanton sector, the gauge field is a flat connection and the saddle point becomes \begin{align}\label{GaugeFP} A_\mu = 0 \ , \qquad \sigma = \sigma_0 = \text{const}\ , \end{align} up to the gauge transformation. \subsection{Matter sector} We choose a localization term for the matter sector as \begin{align} V_\text{matter} = (\delta \psi^+_A)^\dagger \psi^{+A} + (\delta \psi^-_A)^\dagger \psi^{-A} \ , \end{align} whose supersymmetry variation yields \begin{align} \begin{aligned} \delta V_\text{matter}|_\text{boson} &= \left( \frac{1}{4}\Theta_{ab}\Gamma^{ab} q_A + v^\mu D_\mu q_A \right)^\dagger \left( \frac{1}{4}\Theta_{ab}\Gamma^{ab} q_A + v^\mu D_\mu q_A \right) + (\sigma q)^\dagger_A (\sigma q)^A + \tilde F_A^\dagger \tilde F^A \ . 
\end{aligned} \end{align} Then the path integral localizes to the following fixed locus \begin{align} \left( \frac{1}{4}\Theta_{ab}\Gamma^{ab} + v^\mu D_\mu \right) q_A = 0 \ , \qquad \sigma_{A}^{~B} q_B = 0 \ , \qquad \tilde F_A = 0 \ . \end{align} Combining with \eqref{GaugeFP}, one finds \begin{align} q_A = \tilde F_A = 0 \ . \end{align} \subsection{Partition function on the $n$-covering five-sphere} As described in section \ref{ss:Relation}, the resolved five-sphere \eqref{Resolved} can be regarded as a squashed five-sphere with $f(\theta) = f_n(\theta)$ since they are locally equivalent near the fixed points of the four-dimensional base space in the Hopf fiber representation. Under this identification, the squashing parameters are $(\omega_1, \omega_2, \omega_3) = (1/n, 1, 1)$ and the perturbative partition function is \cite{Lockhart:2012vp,Imamura:2012bm} \begin{align} \begin{aligned} Z^{{\rm pert}} = \int d \sigma_0 \,e^{-I_0}& \prod_{\alpha \in {\rm positive \ root}} \mathcal{S}_3\left(\alpha(\sigma_0) \Big| \frac{1}{n},1,1\right) \mathcal{S}_3\left(\alpha(\sigma_0) +2+\frac{1}{n}\Big| \frac{1}{n},1,1\right) \\ & \times \prod_{\rho \in {\rm weight}} \mathcal{S}_3^{-1}\left(m+\frac{1}{2} \left( \frac{1}{n}+2 \right)+\rho(\sigma_0) \Big|\frac{1}{n},1,1\right) \ , \end{aligned} \end{align} where ${\cal S}_3$ is the triple sine function \begin{align} {\cal S}_3 (x| \omega_1, \omega_2, \omega_3) = \prod_{p,q,r\ge 0} \left( p\omega_1 + q\omega_2 + r\omega_3 + x\right) \left( (p+1)\omega_1 + (q+1)\omega_2 + (r+1)\omega_3 - x\right) \ . 
\end{align} Here we take the mass of the hypermultiplet to be $m$, and $I_0$ is the classical contribution at the localization fixed point\footnote{Here, there is an imaginary unit in front of the Chern-Simons level $k$ because of our anti-hermitian convention for Lie algebras.} \begin{align} \begin{aligned} I_0 &= -\frac{2}{g^2}\int d^5 x \sqrt{g} \, (2C - 8 t^{IJ}t_{IJ}) \text{Tr}\, \sigma_0^2 + \frac{ik}{24\pi^2} \int \kappa \wedge d\kappa \wedge d\kappa \,\text{Tr}\,\sigma_0^3\\ &= - \frac{8\pi^3 n}{g^2}\text{Tr}\, \sigma_0^2 + \frac{ik\pi n}{3} \text{Tr}\,\sigma_0^3\ , \end{aligned} \end{align} where we used the volume of the $n$-covering five-sphere in the second equality.\footnote{Note that our convention leads to $\kappa\wedge d\kappa \wedge d\kappa = 8 \text{vol}(M_5)$.} There are also instanton contributions that we will neglect in the large-$N$ limit. \subsection{Large-$N$ limit of supersymmetric R{\'e}nyi entropy} Consider ${\cal N}=1$ supersymmetric $USp(2N)$ gauge theories with $N_f$ flavors and a single hypermultiplet in the antisymmetric representation. In the large-$N$ limit, the free energy $F=-\log Z$ on the $n$-covering five-sphere is approximated by the saddle point $\sigma \to -i N^{1/2} x$ with the density \cite{Jafferis:2012iv,Alday:2014bta} \begin{align}\label{density} \rho (x) = \frac{2}{x_\ast^2} x \ ,\qquad x \in \left[ 0, x_\ast\equiv \frac{\omega_1 + \omega_2 + \omega_3}{\sqrt{2(8-N_f)}}\right] \ . \end{align} The free energy at leading order in the large-$N$ limit is \begin{align}\label{FreeEnergy} F = \frac{(\omega_1 + \omega_2 + \omega_3)^3}{27 \omega_1 \omega_2 \omega_3} F_{\mathbb{S}^5} = \frac{(2n+1)^3}{27n^2} F_{\mathbb{S}^5}\ , \end{align} where $F_{\mathbb{S}^5}$ is that for a round five-sphere \begin{align} F_{\mathbb{S}^5} = - \frac{9\sqrt{2} \pi N^{5/2}}{5\sqrt{8-N_f}} \ .
\end{align} Using the definition \eqref{SRE_def}, the supersymmetric R{\'e}nyi entropy in the large-$N$ limit is\footnote{Note that $S_1^\text{susy} = S_1$ because there is no $R$-symmetry flux in the $n\to 1$ limit.} \begin{align}\label{SRE_largeN} S_n^\text{susy} = - \frac{19n^2 + 7n +1}{27n^2} F_{\mathbb{S}^5} = \frac{19n^2 + 7n +1}{27n^2} S_1 \ . \end{align} It is worth mentioning that the ratio $H_n \equiv S^\text{susy}_n/S_1$ in the large-$N$ limit satisfies the inequalities \begin{align} \begin{aligned} \partial_n H_n &\le 0 \ , \\ \partial_n \left( \frac{n-1}{n} H_n \right) &\ge 0 \ , \\ \partial_n ((n-1)H_n) &\ge 0 \ , \\ \partial_n^2 ((n-1)H_n) &\le 0 \ , \end{aligned} \end{align} which are also satisfied by the usual R{\'e}nyi entropy \cite{zyczkowski2003renyi} and by the supersymmetric R{\'e}nyi entropy in three dimensions \cite{Nishioka:2013haa}. \subsection{Adding Wilson loop} The supersymmetric Wilson loop in a representation $\mathfrak R$ of the gauge group is \begin{align} W_{\mathfrak R} = \frac{1}{\text{dim} {\mathfrak R}} \text{Tr}_{\mathfrak R}\, {\cal P} \exp \left[ -\oint ds\, ( A_\mu \dot x^\mu (s) - i \sigma |\dot x(s)|) \right] \ . \end{align} This is invariant under the supersymmetry transformation if the contour of the loop coincides with an orbit of the Killing vector, namely, $\dot x^\mu(s)/|\dot x(s)| = v^\mu$. The variation of the entanglement entropy due to a Wilson loop was considered in \cite{Lewkowycz:2013laa}, and its analogue for the supersymmetric R{\'e}nyi entropy is \cite{Nishioka:2014mwa} \begin{align}\label{Wilson_RE} S_{W,n}^\text{susy} = \frac{1}{n-1}\left( n \log|\langle W_{\mathfrak R} \rangle_1| - \log|\langle W_{\mathfrak R} \rangle_n|\right) \ , \end{align} where $\langle \cdot \rangle_n$ stands for the expectation value taken on the $n$-covering sphere. Let us consider a Wilson loop in the fundamental representation wrapped along the $\tau$ direction. 
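The four inequalities for $H_n$ above can be confirmed by elementary calculus; as a quick independent check, the following numerical sketch (the helper names are ours, for illustration only) verifies all of them by central finite differences:

```python
# Sketch: numerically verify the four inequalities for
# H_n = S_n^susy / S_1 = (19 n^2 + 7 n + 1) / (27 n^2).

def H(n):
    return (19 * n**2 + 7 * n + 1) / (27 * n**2)

def d1(f, n, h=1e-6):
    # central first derivative
    return (f(n + h) - f(n - h)) / (2 * h)

def d2(f, n, h=1e-2):
    # central second derivative (larger h to stay clear of round-off)
    return (f(n + h) - 2 * f(n) + f(n - h)) / h**2

tol = 1e-6
for n in (0.5, 1.0, 2.0, 5.0, 50.0):
    assert d1(H, n) <= tol                              # H_n non-increasing
    assert d1(lambda m: (m - 1) / m * H(m), n) >= -tol  # (n-1)/n H_n non-decreasing
    assert d1(lambda m: (m - 1) * H(m), n) >= -tol      # (n-1) H_n non-decreasing
    assert d2(lambda m: (m - 1) * H(m), n) <= tol       # (n-1) H_n concave
```

Writing $(n-1)H_n = (19n - 12 - 6/n - 1/n^2)/27$ makes the signs manifest: for instance, its second derivative is $-(12/n^3 + 6/n^4)/27 < 0$.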
This configuration is BPS for arbitrary real $n$. In the large-$N$ limit, the expectation value can be approximated at the saddle point \begin{align} \langle W \rangle_n = \int_0^{x_\ast} dx\, \rho(x) \, e^{2\pi n (N^{1/2}x + O(1))} \ , \end{align} with the density \eqref{density}, resulting in \begin{align} \log \langle W\rangle_n = \sqrt{\frac{2}{8-N_f}} \,\pi (2n+1) N^{1/2} \ . \end{align} Then the definition \eqref{Wilson_RE} yields a variation of the supersymmetric R{\'e}nyi entropy that is independent of $n$, \begin{align}\label{Wilson_SRE} S_{W, n}^\text{susy} = \sqrt{\frac{2}{8-N_f}} \,\pi\, N^{1/2} \ . \end{align} \section{Gravity dual of supersymmetric R{\' e}nyi entropy} The gravity dual of a squashed five-sphere with $SU(3)\times U(1)$ symmetry has been constructed in \cite{Alday:2014rxa,Alday:2014bta} using Romans $F(4)$ supergravity in six dimensions \cite{Romans:1985tw}. We follow the conventions of \cite{Alday:2014rxa,Alday:2014bta} below. Instead of finding a solution dual to the resolved five-sphere, we map the $n$-covering of a five-sphere to the hyperbolic space $\mathbb{S}^1 \times \mathbb{H}^4$ with the metric \begin{align} ds^2 = d\hat\tau^2 + du^2 + \sinh^2 u \,ds_{\mathbb{S}^3}^2 \ , \end{align} by a conformal transformation \cite{Casini:2011kv}. Here $\hat\tau\equiv n\tau$ is the rescaled circle direction with the periodicity $\hat \tau \sim \hat \tau + 2\pi n$. We will look for an asymptotically Euclidean AdS$_6$ solution whose boundary is $\mathbb{S}^1 \times \mathbb{H}^4$ in the Romans theory. \subsection{Romans $F(4)$ supergravity} The bosonic part of the Romans $F(4)$ supergravity includes the metric $g_{\mu\nu}$, a dilaton $\phi$, a one-form field $A$, a two-form field $B$ and an $SU(2)$ gauge field $A^i$ $(i=1,2,3)$. 
The Euclidean action is\footnote{We rescale the fields so that the gauge coupling $g$ appears as an overall factor.} \begin{align} \begin{aligned} I = - \frac{1}{16\pi G_N g^4} \int & \Bigg[ R\ast 1- 4 X^{-2} dX \wedge \ast dX - \left( \frac{2}{9}X^{-6} - \frac{8}{3}X^{-2} - 2X^2\right) \ast 1 \\ &\qquad - \frac{1}{2}X^{-2} \left( F\wedge \ast F + F^i \wedge \ast F^i\right) - \frac{1}{2}X^4 H \wedge \ast H \\ &\qquad - i B\wedge \left( \frac{1}{2}dA \wedge dA + \frac{1}{3}B \wedge dA + \frac{2}{27} B\wedge B + \frac{1}{2}F^i \wedge F^i \right)\Bigg] \ , \end{aligned} \end{align} where $X\equiv e^{-\phi/\sqrt{8}}$, $F = dA + \frac{2}{3}B$, $F^i = dA^i - \frac{1}{2} \varepsilon_{ijk}A^j \wedge A^k$ and $H = dB$. Since the gauge coupling $g$ appears only in the overall factor, we set $g=1$ in the following. The Hodge duality is defined by \begin{align} \alpha \wedge \ast \beta = \frac{1}{p!}\alpha_{\mu_1\cdots \mu_p} \beta^{\mu_1\cdots \mu_p} \ast 1 \ , \end{align} for $p$-forms $\alpha$ and $\beta$. 
The equations of motion are \begin{align}\label{EOM_Gauge} \begin{aligned} d (X^4 \ast H) &= \frac{i}{2} F\wedge F + \frac{i}{2}F^i\wedge F^i + \frac{2}{3} X^{-2} \ast F \ , \\ d (X^{-2} \ast F) &= -i F\wedge H \ , \\ D (X^{-2} \ast F^i) &= - i F^i \wedge H \ , \\ d (X^{-1} \ast dX) &= - \left( \frac{1}{6} X^{-6} - \frac{2}{3} X^{-2} + \frac{1}{2} X^2\right) \ast 1 \\ &\qquad - \frac{1}{8} X^{-2} (F \wedge \ast F + F^i \wedge \ast F^i ) + \frac{1}{4} X^4 H \wedge \ast H \ , \end{aligned} \end{align} where the covariant derivative $D$ in the third line acts as $D \omega^i = d \omega^i - \epsilon_{ijk} A^j \wedge \omega^k$, and the Einstein equation is \begin{align}\label{EOM_Einstein} \begin{aligned} R_{\mu\nu} &= 4 X^{-2} \partial_\mu X \partial_\nu X + \left( \frac{1}{18} X^{-6} - \frac{2}{3} X^{-2} - \frac{1}{2} X^2\right) g_{\mu\nu} + \frac{1}{4} X^4 \left( H_{\mu\rho\sigma} H_\nu^{~\rho\sigma} - \frac{1}{6} H^2 g_{\mu\nu} \right) \\ &\qquad + \frac{1}{2} X^{-2} \left( F_{\mu\rho}F_\nu^{~\rho} - \frac{1}{8} F^2 g_{\mu\nu}\right) + \frac{1}{2} X^{-2}\left( F^i_{\mu\rho}F_\nu^{i~\rho} - \frac{1}{8} (F^i)^2 g_{\mu\nu}\right) \ . \end{aligned} \end{align} The gauge symmetry $A\to A + 2\lambda/3$, $B\to B - d\lambda$ allows us to set $A=0$. Also, the second equation of \eqref{EOM_Gauge} follows from the first one by taking the exterior derivative \cite{Alday:2014bta}. 
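As a minimal consistency check of these equations (a sketch in exact arithmetic; the variable names are ours), one can verify that $X=1$ with all form fields vanishing solves the scalar equation, and that the Einstein equation then fixes the vacuum AdS$_6$ radius to $L_\text{AdS}=3/(\sqrt{2}g)$, the value used for the BPS black hole below:

```python
from fractions import Fraction as Fr

# Pure AdS_6 vacuum: X = 1, F = F^i = H = 0, constant dilaton.
X = Fr(1)

# Scalar equation of motion: the potential term must vanish at X = 1.
scalar_rhs = -(Fr(1, 6) * X**-6 - Fr(2, 3) * X**-2 + Fr(1, 2) * X**2)
assert scalar_rhs == 0

# The Einstein equation reduces to R_mn = c g_mn with
c = Fr(1, 18) * X**-6 - Fr(2, 3) * X**-2 - Fr(1, 2) * X**2
assert c == Fr(-10, 9)

# Euclidean AdS_6 has R_mn = -(5/L^2) g_mn, hence L^2 = -5/c = 9/2,
# i.e. L_AdS = 3/sqrt(2) in units g = 1.
L2 = Fr(-5) / c
assert L2 == Fr(9, 2)
```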
\subsection{Holographic free energy}\label{ss:holographicFE} The holographic free energy is the on-shell action on a manifold $\mathbb{M}_6$ with boundary $\partial \mathbb{M}_6$ \begin{align} I_\text{tot} = I_\text{bulk} + I_\text{bdy} + I_\text{GH} + I_\text{c.t.} \ , \end{align} where $I_\text{bulk}$ and $I_\text{bdy}$ are the bulk and the boundary on-shell actions \begin{align} \begin{aligned} I_\text{bulk} &= \frac{1}{16\pi G_N} \int_{\mathbb{M}_6}\left[ \frac{4}{9}X^{-2} (2+3X^4) \ast 1 + \frac{1}{3}X^{-2} F^i \wedge \ast F^i + \frac{i}{3} B\wedge F^i \wedge F^i \right] \ , \\ I_\text{bdy} &= \frac{1}{16\pi G_N} \int_{\partial \mathbb{M}_6} \left[ \frac{2}{3}X^{-1}\ast dX + \frac{1}{3}X^4 B\wedge\ast H \right]\ . \end{aligned} \end{align} $I_\text{GH}$ is the Gibbons-Hawking term added at the boundary $\partial \mathbb{M}_6$ with the induced metric $h$ \begin{align} I_\text{GH} = - \frac{1}{8\pi G_N} \int_{\partial \mathbb{M}_6} K \ast_h 1 \ , \end{align} where $\ast_h$ is the Hodge dual with respect to $h$ and $K\equiv \nabla_\mu n^\mu$ is the trace of the extrinsic curvature with the unit normal vector $n^\mu$ to $\partial \mathbb{M}_6$. The last one is the counterterm needed to make the action finite \cite{Alday:2014rxa,Alday:2014bta} \begin{align} \begin{aligned} I_\text{c.t.} &= \frac{1}{8\pi G_N} \int_{\partial \mathbb{M}_6} \Bigg[- \frac{3}{4\sqrt{2}} F^i \wedge \ast_h F^i + (\text{terms including} \, B)\\ &\quad +\left\{ \frac{4\sqrt{2}}{3} + \frac{1}{2\sqrt{2}} R(h) + \frac{3}{4\sqrt{2}}R(h)_{mn}R(h)^{mn} - \frac{15}{64\sqrt{2}}R(h)^2 + \frac{4\sqrt{2}}{3}(1-X)^2\right\}\ast_h 1 \Bigg] \ . \end{aligned} \end{align} \subsection{BPS charged topological AdS black hole} The reasonable ansatz for the metric that preserves the boundary symmetry $SO(1,4)\times U(1)$ is \begin{align} ds^2_6 = h^{1/2}(r)f^{-1}(r) dr^2 + h^{-3/2}(r)f(r) d\hat\tau^2 + h^{1/2}(r)r^2\left[ du^2 + \sinh^2 u \,ds_{\mathbb{S}^3}^2\right] \ . 
\end{align} Since the background fields break the $SU(2)_R$ symmetry to a $U(1)$ subgroup in the dual field theory, we correspondingly assume $A^1=A^2=B=0$ and \begin{align} \begin{aligned} X &= X(r) \ , \qquad A^3 = a(r) d\hat\tau \ . \end{aligned} \end{align} In \cite{Cvetic:1999un}, AdS black hole solutions were constructed with the fields given by \begin{align}\label{AdSBH} \begin{aligned} X(r) &= h^{-1/4}(r) \ , \qquad & a(r) &= \sqrt{2} (1-h^{-1}(r)) \coth\beta \ , \\ f(r) &= \frac{2}{9}r^2 h^2(r) - \frac{\mu}{r^3} - 1\ , \qquad & h(r) &= 1 - \frac{\mu \sinh^2 \beta}{r^3} \ . \end{aligned} \end{align} This solution becomes the supersymmetric AdS$_6$ topological black hole when $\mu=0$, but is no longer BPS for $\mu >0$. In fact, the BPS solution takes a form very close to the above: \begin{align}\label{BPS_BH} \begin{aligned} X(r) &= h^{-1/4}(r) \ , \qquad & a(r) &= \sqrt{2} (1-h^{-1}(r)) \ , \\ f(r) &= \frac{2}{9}r^2 h^2(r) - 1\ , \qquad & h(r) &= 1 - \frac{\sqrt{2} \sinh^2 (3\gamma/2)}{r^3} \ . \end{aligned} \end{align} One can check that the above solution satisfies the integrability condition of the Killing spinor equation of the Euclidean Romans theory given in (A.5) of \cite{Alday:2014bta}. This BPS black hole is asymptotically AdS with the AdS radius $L_\text{AdS} = \frac{3}{\sqrt{2}g}$ and has a horizon at the largest root $r=r_H(\gamma)$ of $f(r_H(\gamma))=0$, depending on the parameter $\gamma$: \begin{align} r_H(\gamma) = \frac{2\cosh \gamma + 1}{3} L_\text{AdS} \ . \end{align} The temperature and the entropy of the black hole are \begin{align} T = \frac{2\cosh \gamma -1}{2\pi L_\text{AdS}} \ , \end{align} and \begin{align} S_\text{BH} = \frac{ L_\text{AdS}\, r_H^3(\gamma)}{4G_N}\, \text{vol}(\mathbb{H}^4) \ , \end{align} where $\text{vol}(\mathbb{H}^4)$ is the regularized volume of the four-dimensional unit hyperbolic space \begin{align} \text{vol}(\mathbb{H}^4) = \frac{4\pi^2}{3} \ . 
\end{align} Since the circle in the boundary has periodicity $2\pi n$, we take the parameter $\gamma$ to be \begin{align} \cosh \gamma = \frac{n+1}{2n} \ . \end{align} Using the renormalized action in section \ref{ss:holographicFE}, the holographic free energy becomes \begin{align} I_\text{tot} = - \frac{\pi^2}{4G_N} \frac{(2n+1)^3}{n^2} \ . \end{align} Comparing this result with the free energy on the $n$-covering five-sphere \eqref{FreeEnergy}, they completely agree due to the relation $G_N = 15\pi \sqrt{8-N_f}/ (4\sqrt{2}N^{5/2})$ \cite{Jafferis:2012iv,Alday:2014bta}. Therefore, the holographic supersymmetric R{\' e}nyi entropy also agrees with the large-$N$ result \eqref{SRE_largeN}. \subsection{Holographic Wilson loop} Consider a Wilson loop in the fundamental representation wrapped along the $\tau$ direction at the origin of $\mathbb{H}^4$. It is dual to a fundamental string in the AdS$_6$ space \cite{Maldacena:1998im,Rey:1998ik} extending from the boundary $r=\infty$ and terminating at the horizon $r=r_H$. The expectation value is given by \begin{align}\label{W=S} \log \langle W\rangle = -S_\text{string} \ , \end{align} where $S_\text{string}$ is the string world sheet action in a target space with the metric $ds^2_\text{string} = G_{\mu\nu} dx^\mu dx^\nu$ \begin{align} S_\text{string} = \frac{1}{2\pi \alpha'} \int d^2 \xi \sqrt{\det \, G_{\mu\nu} \partial_\alpha x^\mu \partial_\beta x^\nu} \ . \end{align} Uplifting the solution to the massive IIA supergravity, the scalar field $X$ becomes the dilaton, which distinguishes the string frame from the Einstein frame as $ds^2_\text{string} = X^{-2} ds_\text{Einstein}^2$. Taking into account this effect and choosing the world sheet coordinates to be $\xi^1 = \tau, \xi^2 = r$, the string action becomes (the divergent $r_\infty$ contribution is subtracted in the second equality) \begin{align} S_\text{string} = \frac{n L_\text{AdS}}{\alpha '} (r_\infty - r_H) = - \frac{2n +1}{3 \alpha '} L_\text{AdS}^2 \ . 
\end{align} From the relation \eqref{W=S} and the definition \eqref{Wilson_RE}, we obtain the variation of the supersymmetric R{\'e}nyi entropy \begin{align}\label{Wilson_HSRE} S_{W,n}^\text{susy} = \frac{L^2_\text{AdS}}{3\alpha'} \ , \end{align} which is independent of the parameter $n$. If the solution is dual to ${\cal N}=1$ $USp(2N)$ superconformal theories, the AdS radius is fixed to be $L_\text{AdS}^2/\alpha ' = 3\sqrt{2} \pi N^{1/2}/\sqrt{8-N_f}$ \cite{Bergman:2012kr} and \eqref{Wilson_HSRE} agrees with the field theory result \eqref{Wilson_SRE}. \vspace{1.3cm} \centerline{\bf Acknowledgements} We are grateful to T.\,Nosaka, S.\,Pufu and M.\,Yamazaki for valuable discussions. The work of N.\,H. was supported in part by JSPS Research Fellowship for Young Scientists. The work of T.\,U. was supported in part by the National Science Foundation under Grant No.\,NSF\, PHY-25915.
\section{Introduction} \label{sec:intro} The basic definition of Virtual Organisation (VO) is simple enough: organisations and individuals who bind themselves dynamically to one another in order to share resources within temporary alliances~\cite{FosterKessel01}. Several issues arise at various levels of abstraction when attempting to describe the alliance, the binding between members and the sharing of resources. We focus on an abstract, high-level description of VOs and their lifecycle and define a formal model for VOs and their formation that can guide their realisation. Like others (e.g. \cite{Matos,brainbrawn,patel-2005}) we focus on VOs that can be formed and managed automatically by intelligent agents. To support the envisaged automation, agents {\em represent} organisations and individuals by providing and requesting resources/services, by {\em wrapping} them or by connecting to their published interface. Agents are designed to incorporate the requirements of these organisations and individuals and exhibit some {\em human aspects} while supporting the decision-making processes automatically. Unlike existing work, we abstract away from concrete realisation choices of VOs so that our models can be applicable to a range of service grid applications. Instead, the framework focuses on what we believe are likely to be the {\em essential elements} of VOs and ignores a number of lower-level aspects that are normally included in reference models for collaborative networks (see~\cite{ecolead}). Assuming a set of essential elements that are applicable across applications, our representation for the operational aspects of VOs relies upon the notion of {\em VO life-cycle}, which reflects the orthodox managerial and technical views of VOs, as proposed by~\cite{appel,gridbook2,websiteVO}. This lifecycle can be structured in three main phases: formation, operation, and dissolution. 
In this paper we focus on the formation phase with two subphases: (i) \emph{initiation}, whereby an initiating agent \emph{identifies the goals} that it cannot achieve in isolation and \emph{discovers the potential partners} who can assist it in achieving those goals; and (ii) \emph{configuration}, involving some form of \emph{negotiation}, trivial or otherwise, and the \emph{selection of partners}, trimmed down from those discovered, who will constitute the members of the VO once it is started. We see a VO abstractly as a tuple consisting of \emph{agents} participating in the VO, \emph{roles} they play therein, \emph{goals} the VO is set to achieve, the \emph{workflow} of services being provided within the VO, and \emph{contracts} associated with these services, to which agents are meant to conform. We then define the formation phase of a VO as a transition system between tuples providing partial approximations of the resulting VO. Within our framework and for the purposes of VOs, agents are seen as existing within \emph{agent societies}: we define these abstractly as our starting point. VOs then emerge within societies as a result of interactions amongst agents, as determined by the roles they play. Throughout the paper we shall exemplify the proposed framework for VOs by showing how it can be applied to the following simple but realistic \emph{earth observation scenario}. \begin{quote} A government official is asked to investigate the detection of an offshore oil spill. As the ministry where the official works does not have direct access to earth observation facilities, the official typically follows a procedure. The first step of such a procedure is for the official to call a number of companies that control satellites which may provide suitable images. Satellites may have different orbits, sensors, capabilities and costs, so the official needs to discuss with satellite companies in order to select the most appropriate services for the task at hand. 
Satellites have names such as \emph{Envisat}, \emph{ERS-1}, and \emph{RADARSAT}. As the satellite output is normally provided in the form of raw images, not immediately suitable for the detection of an oil spill, the second step of the procedure involves the official calling companies that provide processing services by appropriate software, for example for \emph{format conversion} (into formats such as TIFF, JPEG, etc), \emph{reprojection} (into different coordinate systems), or \emph{pattern recognition} (to detect in the environment objects such as ships and buildings or features such as oil spills). After a post-processing image company is selected, the output of the satellite is processed by them and the resulting image will allow the official to identify the cause of the oil spill. \end{quote} We will reinterpret this scenario by assuming that the government official is a user of an agent-oriented service grid. In such a grid, services such as oil spill detection and image processing are automatically discovered and negotiated by software agents that act on behalf of people and/or organisations. In this interpretation, the scenario will result in the formation of a VO that consists of the following parties: the \emph{ministry official} and his agent acting as {\em service requester}; the \emph{satellite company}, the \emph{post-processing company}, and their agents acting as {\em service providers}. The agents negotiate over the two requested services and orchestrate them into a {\em workflow} where the post-processing of the image data requires that the image data is created first. To guarantee the properties and delivery of the services provided by the satellite company and the post-processing company, and to ensure those companies are compensated for their efforts, all the parties are involved in signing a {\em contract}, which binds the parties, in particular establishing their roles within the VO and defining a {\em Service Level Agreement} (SLA). 
An SLA specifies details of the service provision such as the resolution of images, quality threshold, and time of delivery. Once the services are delivered via the execution of the workflow, the high-level goal of the user is satisfied and the VO is dissolved. The paper is organised as follows. Section~\ref{sec:ag soc} presents the formalisation of the required abstractions, namely: agents, their roles and social norms specified as interaction protocols in an agent society, the services/resources available in that society, how these services can be combined in workflows, and how interactions in these workflows can be regulated by agreed contracts. These components will become the constituents of the VOs and will be produced by a VO's formation phase. This phase is defined as a state transition system that will be formalised in section~\ref{sec:def}. Finally, in section~\ref{sec:conc} we summarise our work and we outline our plans for the future. \section{Agent Societies} \label{sec:ag soc} For the purposes of VOs, agents ``representing'' services can be seen as existing within a {\em society} (of agents). VOs emerge as a result of interactions amongst the agents in this society. In other words, the agent society can be seen as the breeding environment~\cite{breeding} for VOs. We will assume that an agent society exists prior to decisions and interactions leading to VOs. However, typically this society is intended to be ``virtual'', in that it is the implicit result of the existence of agents and services within an agent-enabled grid/service-oriented architecture. An agent society is characterised by roles that agents can adopt, services available to and controlled by agents in the society, possible combinations of these services within workflows, and possible contracts between agents. Formally, $AgentSociety = \ensuremath{\langle Agents, Services, Roles, Workflows, Contracts\rangle}$. The elements of the $AgentSociety$ can be described as follows. 
\begin{itemize} \item $Agents$ is a (finite) set of agents, $\{A_1, \ldots, A_n \}$, with $n \geq 2$; each agent is equipped with a set of individual goals, an evaluation mechanism, and a set of roles it can cover (see section~\ref{sec:agents}). \item $Services$ is a (non-empty and finite) set of services represented by agents (see section~\ref{sec:services}). \item $Roles$ is a (non-empty and finite) set of roles that agents can play within the society as well as the VOs, once they have been created. We require that there are roles for $requester(s)$ and $provider(s)$ in $Roles$, for all $s \in Services$; roles are associated with interaction protocols (see section~\ref{sec:protocols}). \item $Workflows$ is a (non-empty) set of possible combinations of services in $Services$ (see section~\ref{sec:wf}). \item $Contracts$ is a (non-empty) set of possible combinations of agents (in $Agents$), roles (in $Roles$), and workflows (in $Workflows$) as terms in a contract (see section~\ref{sec:con}). \end{itemize} Note that, in addition to roles for $requester$ and $provider$ of all available services in the society, $Roles$ may also include roles for a $broker$ that provides information on how to obtain or use some services, an $arbitrator$ for making sure that interactions for services are suitably regulated, and so on. Finally, note that there are no goals of the agent society itself; goals exist within agents only. However, VOs are goal-oriented: we will see, in section~\ref{sec:def}, that the goals of VOs originate from those of individual agents. The components of an agent society will be defined using several abstract underlying languages. Here we single out these languages. We adopt the following conventions: variables start with capital letters; constants start with lower-case letters or numbers; `\_' stands for the anonymous variable. 
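The tuple structure above admits a direct rendering as a data model. The following Python sketch is purely illustrative (the names and representation choices are ours, not part of the formal framework); it encodes the components of $AgentSociety$ together with the requirements just listed ($n \geq 2$ agents, non-empty services, requester/provider roles for every service):

```python
from dataclasses import dataclass

@dataclass
class Agent:
    aid: str            # unique identifier from AI_as, e.g. "satERS1Ag"
    roles: set          # labels of roles the agent can play
    goals: set          # sentences of the shared language L_as

@dataclass
class AgentSociety:
    agents: list        # {A_1, ..., A_n}, n >= 2
    services: set       # abstract service names from the ontology O_as
    roles: set          # role labels, including requester(s)/provider(s)
    workflows: list     # workflows as (sets of) services
    contracts: list     # combinations of agents, roles and workflows

    def well_formed(self):
        # n >= 2 agents, non-empty services, and requester/provider
        # roles present for every service in the society
        return (len(self.agents) >= 2
                and bool(self.services)
                and all({f"requester({s})", f"provider({s})"} <= self.roles
                        for s in self.services))

society = AgentSociety(
    agents=[Agent("clientAg", {"requester(satImage)"}, {"toBuy(satImage)"}),
            Agent("satERS1Ag", {"provider(satImage)"}, {"toSell(satImage)"})],
    services={"satImage"},
    roles={"requester(satImage)", "provider(satImage)"},
    workflows=[{"satImage"}],
    contracts=[],
)
assert society.well_formed()
```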
We use a set \ensuremath{{AI}_{as}}{} of {\em agent identifiers}, which serve as unique ``names'' to address agents in the society, e.g. to support communication. An example is $satERS1Ag$, representing the satellite {\em ERS-1}. We use a set \ensuremath{{RI}_{as}}{} of {\em role labels} for the definition of $Roles$. We require that $requester(s)$ and $provider(s)$ belong to \ensuremath{{RI}_{as}}, for all $s \in Services$. We use {\em contract identifiers}, \ensuremath{{CI}_{as}}, uniquely identifying and distinguishing contracts in $Contracts$. We will assume some given, shared {\em ontology} \ensuremath{{O}_{as}}, which for simplicity we think of as a set of atomic and ground propositions.\footnote{ In general, \ensuremath{{O}_{as}}{} may need to contain hierarchical concepts, for example a ``generic service'' may be defined as either a ``satellite service'' or a ``processing service''.} \ensuremath{{O}_{as}}{} will include (i) (an abstract representation of) all services in $Services$, and (ii) generic infrastructure knowledge, e.g. for querying registries holding information about agents and the services they provide. An example of the latter may be $provides(X,satImage(in,out))$, instantiating $X$ to $satERS1Ag$, representing that the agent named $satERS1Ag$ represents a provider of service $satImage(in,out) \in Services$. We will see that VOs emerge in an agent society by communicative interaction amongst its members. As usual, communicating agents will share a {\em communication language} that will act as a ``lingua franca'' amongst agents. We thus assume as given a language \ensuremath{{ACL}_{as}}{} of locutions. As is conventional, locutions consist of a {\em performative} and a {\em content}. Examples of locutions in \ensuremath{{ACL}_{as}}{} may be $request(s)$ and $accept(s)$, where $s \in Services$ is the content. Each individual agent is equipped with an {\em internal language} to express its knowledge/beliefs and goals. 
Since the goals of VOs are derived from the individual agents' goals, we need to assume that the agents share at least a fragment of their internal languages. This fragment can also be used to express, e.g., conditions in protocol clauses (see section~\ref{sec:protocols}). We will refer to this shared fragment of all agents' internal languages as \ensuremath{{L}_{as}}. We require that the sentence $true$ is contained in this language, as well as sentences built using the usual connectives $\wedge$ and $\neg$. We assume that this language is propositional. An example of a sentence in \ensuremath{{L}_{as}}{} is $toBuy(satImage(in,out))$. Sentences are meant to be evaluated using the agents' internal evaluation mechanisms (see section~\ref{sec:agents}). Note that there are no eligibility conditions to choose which agents enter the society, as we assume an open setting where agents can freely circulate. In this context, VOs provide a mechanism for defining which agents can be suitably put together to help solve each other's goals. \subsection{Services} \label{sec:services} {\em Concrete services}, which can be executed by their providers, are described using sentences in \ensuremath{{O}_{as}}. Examples of concrete services are $satImage(in,out)$ with $in$ and $out$ representing the inputs and outputs for the service (e.g. 
$in$ may be $[\lati,\longi,1000, 500, 5, optical,3]${} and $out$ may be $results.data$) \footnote{Here, 38.0 and -9.4 are the latitude and longitude coordinates of the area to be scanned, 1000 is the resolution of the image in metres, 500 is the km$^2$ area to be scanned, 5 is the frequency in hours for the area to be scanned, $optical$ is the type of sensor to be used, 3 is the wave frequency to be used in the scan, $results.data${} is the name of the file produced by Envisat.} and some service for detecting oil spills $detectOilSpill([a,5],b)$\footnote{ Here, $a$ represents the input raw satellite data, 5 is the acceptable threshold for false positives and $b$ is the output processed data image, as computed by the provider of $detectOilSpill$. Note that algorithms for detecting oil spills may occasionally give false positives, namely indicate that there is an oil spill at some location where in reality no oil spill is present. The lower the acceptable false positive threshold requested from a service, the more expensive the service.}. In order to accommodate negotiation for the provision of (concrete) services during the formation of VOs, it is useful that agents are able to talk about {\em partially uninstantiated} and \emph{abstract} services, before they commit to any concrete instantiation (actually, it may happen that this instantiation can only be provided at the time of execution of the services). For example, an agent may require, for some given $a$, $detectOilSpill([a,T],B)$, where the threshold $T$ and the output processed data image $B$ are as yet unspecified ($T$ may be associated with constraints, e.g. $T \geq 5$). 
In summary, we adopt the following description of services: \begin{center} \begin{tabular}{lll} $serviceName(In,Out)$ & : & uninstantiated service ({\em abstract service});\\ $serviceName(in,Out)$ & : & partially instantiated service;\\ $serviceName(In,out)$ & : & partially instantiated service;\\ $serviceName(in,out)$ & : & fully instantiated service ({\em concrete service});\\ $serviceName$ & = & predicate, with $serviceName(i,o) \in \ensuremath{{O}_{as}}$, \\ && for constants or lists of constants $i,o$; \\ $in,out$ & = & constants or lists of constants; \\ $In,Out$ & = & variables or lists of variables. \\ \end{tabular} \end{center} The $serviceName$ can be seen as the ``type'' of service being provided by $serviceName(in,out)$. We will often refer to an abstract service $serviceName(In,Out)$ simply as $serviceName$. Also, an abstract or partially instantiated service can be seen logically as representing a disjunction of concrete services (one for each possible instantiation). We could thus see the process of negotiating an instantiation of an abstract or partially instantiated service as the process of negotiating a concrete service from a set of alternatives (the disjuncts). For our scenario, we need four types of services, namely $satImage$ and three processing services (with $serviceName$ one of $formatConversion$, $reprojection$ and $detectOilSpill$). We have already seen examples of concrete services of type $satImage$ and $detectOilSpill$. 
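The description above maps naturally onto a term representation. The following minimal sketch (our own conventions, mirroring the table: variables are strings starting with an upper-case letter) shows how a partially instantiated service becomes concrete once negotiation binds its variables:

```python
# A service term is a name plus input/output arguments; capitalised
# strings (In, Out, T, B, ...) stand for variables, everything else
# for constants. A service is concrete when no variables remain.

def is_var(arg):
    return isinstance(arg, str) and arg[:1].isupper()

def instantiate(service, binding):
    # bind variables according to `binding`; constants pass through
    name, args = service
    return (name, [binding.get(a, a) if is_var(a) else a for a in args])

def is_concrete(service):
    _, args = service
    return not any(is_var(a) for a in args)

# detectOilSpill([a, T], B): partially instantiated (threshold T, output B open)
s = ("detectOilSpill", ["a", "T", "B"])
assert not is_concrete(s)

# negotiation fixes T = 5 and the name of the output file
s2 = instantiate(s, {"T": 5, "B": "results.data"})
assert is_concrete(s2)
assert s2 == ("detectOilSpill", ["a", 5, "results.data"])
```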
\subsection{Roles and Protocols} \label{sec:protocols} A role is defined as a tuple $\langle rid , PC \rangle$ where $rid \in \ensuremath{{RI}_{as}}$ is the label of the role, and $PC$ is a {\em Protocol Clause}, understood in this paper as a (non-empty and finite) set of {\em Operations} defined as follows: \begin{center} \begin{tabular}{llll} $Operation$ & = & $\psi [send(m,i,rid')] \phi$& (send operation)\\ & $|$ & $\psi [receive(m,i,rid')] \phi$& (receive operation)\\ $\psi$ & $\in$ & $\ensuremath{{L}_{as}} \cup \ensuremath{{O}_{as}}$ & (precondition)\\ $\phi$ & $\in$ & $\ensuremath{{L}_{as}} $ & (postcondition) \\ $m$ & $\in$ & $\ensuremath{{ACL}_{as}}$ & (locution) \\ $i$ & $\in$ & \ensuremath{{AI}_{as}} & (unique identifier of agent)\\ $rid' $ & $\in$ & \ensuremath{{RI}_{as}} & (role label) \end{tabular} \end{center} Intuitively, each operation has three parts: a locution $m$ in $\ensuremath{{ACL}_{as}}$, an identifier $i$ in \ensuremath{{AI}_{as}}{} of the communicative partner (i.e. the intended recipient or the actual sender of message $m$, respectively for $send$ and $receive$), and the identifier $rid'$ of the role that the agent $i$ is intended to be playing when receiving or sending the message (respectively for $send$ and $receive$). An agent can send or receive the locution (according to what the operation specifies) if and only if the evaluation mechanism of the agent evaluates the precondition $\psi$ to true. Once the message is sent or received, the postcondition $\phi$ \emph{will automatically hold} (namely the evaluation mechanism of the agent will evaluate this condition positively after the message is sent or received, until further changes). Moreover, when an agent $i'$ playing some role $\langle rid,PC\rangle$ sends a locution $send(m,i,rid')$ to some other agent $i$, $i$ receives the message from $i'$ indicating that $i'$ sent it while playing role $rid$: $receive(m,i',rid)$. 
This message will be ``processed'' by $i$ using the role with identifier $rid'$. We could adopt other formalisms for communication, e.g. non-deterministic finite-state automata. The reason we have chosen protocol clauses is that this formalism is an abstraction of several existing formalisms for modelling inter-agent communication, e.g. LCC \cite{dave:iclp} and dialogue constraints \cite{atal01:negotiation}. To illustrate roles, consider a simple example where an agent playing the role of $requester$ (of some service $S$) sends a $request$ to an agent it believes to be a provider of $S$, requiring it to be playing the role of $provider$ of $S$. The $provider$ agent replies with $accept$ or $refuse$ depending on whether it is indeed a provider of that service $S$ (and it wants to sell that service) or not (respectively). \vspace*{0.2cm} {\em \begin{tabbing} \=$\langle$ requester(S), \= \{ \= \\ \>\>\>toBuy(S) $\wedge$ provides(Ag,S) $[$send(request(S), Ag, provider(S))$]$ requested(S,Ag),\\ \>\>\>requested(S,Ag) $[$receive(accept, Ag, provider(S))$]$ bought(S),\\ \>\>\>requested(S,Ag) $[$receive(refuse, Ag, provider(S))$]$ true\\ \>\> \} \\ \>$\rangle$ \end{tabbing}} {\em \begin{tabbing} \= $\langle$ provider(S),\= \{ \= \\ \>\>\>true $[$ receive(request(S), Ag, requester(S)) $]$ requestedBy(Ag,S),\\ \>\>\>requestedBy(Ag,S)$\wedge$ toSell(S) $[$ send(accept, Ag, requester(S))$]$ sold(S), \\ \>\>\>requestedBy(Ag,S)$\wedge$ $\neg$ toSell(S) $[$ send(refuse, Ag, requester(S))$]$ true\\ \>\>\}\\ \>$\rangle$ \end{tabbing}} \noindent In the two {\em protocol clause schemata} above, variables {\em S} and {\em Ag} are used instead of concrete values. These variables are implicitly universally quantified over the appropriate languages. A protocol clause for a role defines the communicative actions for any agent adopting the role. However, protocol clauses typically relate to other protocol clauses and give a global picture of the interaction amongst agents and roles. 
For the earlier example, the two roles, $requester(S)$ and $provider(S)$, are related to one another to form a simple \emph{negotiation protocol}. Intuitively, a {\em protocol} is a (non-empty and finite) set of protocol clauses for roles in $Roles$ that are ``related'' to one another, possibly indirectly. With an abuse of notation, we will often refer to a role simply by its identifier and will use the identifier to stand for the corresponding protocol clauses. Moreover, when an agent needs to play the role of $requester$ for any service, we use $requester$ to stand for $requester(s)$ for any service $s \in Services$. Finally, we use $provider(serviceName)$ either to indicate that a service provider can provide all instances of an abstract service $serviceName(In,Out)$, or when we are interested in the provision of some instance of this service without specifying which one. \subsection{Agents} \label{sec:agents} For the purposes of VOs, an agent can be seen abstractly as a tuple $\langle i,R,G \rangle$ where $i \in \ensuremath{{AI}_{as}}$ is the unique identifier for the agent; $R\subseteq Roles$ is a (non-empty) set of roles that the agent can play within the society; $G\subseteq \ensuremath{{L}_{as}}$ is a (non-empty) set of goals of the agent. An agent is also equipped with an {\em evaluation mechanism} for determining whether (i) preconditions in roles are satisfied, and (ii) goals are fulfilled by the agent in isolation. This mechanism is affected by the execution of protocols in that postconditions of protocol clauses are taken into account by this evaluation mechanism (they are satisfied after the protocol clause is executed, until overwritten by further postconditions). We do not include this evaluation mechanism within the representation of an agent in a society as this mechanism is private to agents and different agents may adopt different such mechanisms in general.
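As a sketch, an agent tuple $\langle i,R,G \rangle$ together with its private evaluation mechanism can be encoded as follows (hypothetical names; the evaluation mechanism is modelled very simply as the set of sentences currently believed true):

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    ident: str   # i: unique identifier of the agent
    roles: set   # R: non-empty set of roles the agent can play
    goals: set   # G: non-empty set of goals (sentences in L_as)
    beliefs: set = field(default_factory=set)  # private evaluation state

    def holds(self, sentence: str) -> bool:
        # used for (i) precondition checks and (ii) goal-fulfilment checks
        return sentence in self.beliefs

    def record_postcondition(self, phi: str) -> None:
        # postconditions of executed protocol clauses are taken into account
        # by the evaluation mechanism, until overwritten
        self.beliefs.add(phi)
```

For example, once the requester's $accept$ operation fires, recording the postcondition $bought(s)$ makes the goal-fulfilment check for $bought(s)$ succeed.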
Roles and goals of an agent $\langle i,R,G \rangle$ are inter-related as follows: \begin{itemize} \item[(a)] $\forall r \in R$, $\exists g \in G$ which ``enables'' $i$ to adopt $r$, namely the need to fulfil $g$ is a precondition for one of the protocol clauses in $r$; \item[(b)] $\forall g \in G$, $\exists r \in R$ such that playing the role $r$ gives $i$ a ``chance of fulfilling'' $g$ namely one of the protocol clauses in $r$ admits the fulfilment of $g$ as one of its postconditions. \end{itemize} \noindent As an example, consider the earlier protocol clause for the role $requester(S)$ where {\em toBuy(S)} corresponds to a goal and {\em bought(S)} corresponds to the goal being fulfilled. Example agents for our scenario are \vspace*{0.2cm} $\langle clientAg,\{requester\},\{toBuy(someI), toBuy(someD)\} \rangle$ $\langle satERS1Ag,\{requester(detectOilSpill), provider(satImage)\}$, $\{toSell(someI)\} \rangle$ $\langle detectAg,\{provider(detectOilSpill)\},\{toSell(someD)\} \rangle$ \vspace*{0.2cm} \noindent where $someI$ is of the form $satImage(in,Out)$ and $someD$ is of the form $detectOilSpill([Out,t],Out')$ for some $in$ and $t$ (as discussed earlier). Here, the agent identified by $clientAg$ represents the government ministry and can only play the $requester$ role (for any service), the agent identified by $satERS1Ag$ represents the {\em ERS-1} satellite and can play both the $requester$ role for $detectOilSpill$ services and the $provider$ role for $satImage$ services, and the agent identified by $detectAg$ can play only the $provider$ role for $detectOilSpill$ services. The agents' goals allow them to engage in interactions following the protocols for the roles they are equipped with (see the simple protocol clauses of section~\ref{sec:protocols}). \subsection{Workflows} \label{sec:wf} We see workflows simply as (non-empty) sets of services\footnote{More generally, workflows can be compositions, e.g. 
by sequencing or parallel execution, of services.} possibly annotated with ``constraints'', which are sentences in \ensuremath{{L}_{as}}. Services may be abstract, partially instantiated or concrete, as in section~\ref{sec:services}. As an example, consider the annotated workflow (consisting of a single partially instantiated service) \vspace*{0.2cm} $\{satImage([38.0,-9.4,Res,500,5,ST],Out)\}$ $\bigcup$ $\{Res \in [900,1100], ST \in \{radar, optical\}\}$ \vspace*{0.2cm} \noindent where the resolution $Res$ and sensor type $ST$ arguments are constrained within the annotation. We require that the constraints annotating workflows are satisfiable in \ensuremath{{L}_{as}}. Annotations only make sense for workflows with at least one partially instantiated or abstract service. They are intended to restrict the possible instantiations of the services in the workflow. Typically, as in the example above, they pose restrictions on the variables occurring in the services of the workflow. We will adopt the following terminology: an {\em abstract workflow} is a workflow with at least one abstract or partially instantiated service (with or without annotations); a {\em concrete workflow} is a workflow consisting solely of concrete services (without annotations). Also, a concrete workflow may or may not be executable and, prior to its execution, may need to be appropriately set up. In this paper, we focus on the {\em formation} phase of VOs and ignore execution issues that may occur in the {\em operation} phase.
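The abstract/concrete distinction can be sketched as follows. This is a hypothetical encoding: services are strings, variables are marked (as in the schemata of this paper) by identifiers starting with an uppercase letter, and annotations are kept as a separate set of constraint strings:

```python
import re

def is_concrete_service(service: str) -> bool:
    # a service is concrete iff it contains no variables, i.e. no
    # uppercase-initial identifiers such as Res, ST, Out in the example above
    return re.search(r'\b[A-Z][A-Za-z0-9_]*\b', service) is None

def is_concrete_workflow(services, annotations=frozenset()) -> bool:
    # a concrete workflow is non-empty, unannotated, and consists
    # solely of concrete services
    return bool(services) and not annotations and all(
        is_concrete_service(s) for s in services)
```

A workflow with an abstract service remains abstract even without annotations; conversely, annotations alone make a workflow non-concrete under this sketch.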
\subsection{Contracts} \label{sec:con} Inspired by web service contract standards~\cite{wsagree}, we define a contract as $\langle Cid,Context,SDT,GT \rangle$ where \begin{itemize} \item $Cid \in \ensuremath{{CI}_{as}}$ is a unique identifier for the contract; \item $Context$ indicates all agents bound by the contract (the ``contracted parties'') and their role in the contract, formally seen as a set of pairs of the form $(AgentId,AgentRole)$ such that $\langle AgentId, R, \_ \rangle \in Agents$ and $AgentRole \subseteq R$; \item $SDT$, the {\em Service Description Terms}, is a workflow, consisting of the services being contracted; \item $GT$, the {\em Guarantee Terms}, is a (possibly empty) set of sentences in \ensuremath{{L}_{as}}{} that define the assurances with regard to the services described in $SDT$. \end{itemize} The $GT$ component of a contract may also include rewards/penalties for the contracted parties and refer to roles (and protocols) to be used by agents in the case of exceptions. For example, if blame for failure is disputed, there may be a clause in $GT$ defining a protocol for arbitration. By definition of $Context$, we require that the contracted parties play, within the contract, some of the roles they are equipped with. We require that there are at least two different agents involved in any contract, and that there is at least one agent playing the role of $requester(s)$ and at least one agent playing the role of $provider(s)$ for some service $s$, namely: \begin{itemize} \item $\exists (id1,role1), (id2,role2) \in Context$ such that $id1 \neq id2$ and, for some $s \in Services$, $requester(s) \in role1$, $provider(s) \in role2$. \end{itemize} \noindent We exclude the possibility that the same agent may be a provider and a requester for the same service: \begin{itemize} \item $\not \exists (id,role) \in Context$ such that, for some $s\in Services$, $\{requester(s), provider(s) \} \subseteq role$.
\end{itemize} \noindent We require that for all the services in $SDT$ there exists an agent in $Context$ providing that service: \begin{itemize} \item $\forall s\in SDT$, $\exists (id,role) \in Context$ such that $role=provider(s)$. \end{itemize} \noindent A simple example of a contract is: {\em \begin{tabbing} \= $\langle$ co\=ntr\=actX,\\
\>\> \{$\langle$clientAg,\{requester(formatConversion)\}$\rangle$, $\langle$procF, \{ provider(formatConversion)\}$\rangle$\}, \\
\>\>\{formatConversion([image.jpeg, jpegTOgif], imageGIF.gif)\},\\
\>\>\{dueBy(imageGIF.gif,1400hrs,12.4.09), priceReduced(imageGIF,1400hrs,12.4.09,reduction(0.5))\}\\
$\rangle$ \end{tabbing}} \begin{sloppypar} \noindent The above contract, identified as $contractX$, is between $clientAg$ and $procF$ for the delivery of (an instance of) the service $formatConversion$, for converting the file $image.jpeg$ using the operation called $jpegTOgif$. The service has a due delivery date specified using the $dueBy$ predicate. The clause on $priceReduced$ specifies that the price charged will be halved if the provider fails to deliver on time. \end{sloppypar} \subsection{From Agents and Services to the Agent Society} Given $Agents$ as in section~\ref{sec:agents} and $Services$ as in section~\ref{sec:services}, an agent society ``emerges'' with: \begin{itemize} \item $Roles = \bigcup_{\langle i,R,G \rangle \in Agents} R$ (the possible roles are all roles that agents within the society can play); \item the concrete workflows in $Workflows$ are all possible (non-empty) sets of services, namely $(2^{Services} \setminus \{\emptyset\}) \subset Workflows$, while the remaining elements of $Workflows$ are abstract, possibly annotated ``versions'' of these concrete workflows; \item $Contracts$ is built solely from elements of $Workflows$, $Roles$ and $Agents$.
\end{itemize} \noindent We require also that each service is ``represented'' by an agent within the society, in other words the possible services in the society correspond to all services the agents can provide: \begin{itemize} \item $\forall s \in Services$, $\exists \langle provider(s),\_ \rangle \in Roles$ \end{itemize} \noindent However, it may be the case that several alternative protocols exist in the society for the same role, namely: $\langle rid, PC\rangle$ and $\langle rid, PC'\rangle$ both belong to $Roles$ for $PC \neq PC'$. The creation of a VO will need to address the choice of protocols being used for negotiation of workflows and contracts. \section{The VO Formation Phase} \label{sec:def} VOs can be defined as tuples $\ntuple{A_{vo}}{G_{vo}}{R_{vo}}{Wf_{vo}}{Con_{vo}}$ whose components can be described as follows. \begin{itemize} \item $A_{vo} \subseteq Agents^*$ with $Agents^*$ = $\{\langle i, R', G'\rangle | \langle i, R, G \rangle \in Agents$ and $R' \subseteq R$, $G' \subseteq G\}$. $A_{vo}$ contains at least two agents and exactly one agent in $A_{vo}$ is referred to as the {\em initiating agent}, which is denoted \ensuremath{ag_{0}}. \item $G_{vo}$ is a set of goals for the agents in $A_{vo}$, which contains at least a goal of the initiating agent: $G_{vo} \subseteq \bigcup_{\langle i,R,G \rangle \in Agents} G$ and $G_0\cap G_{vo} \neq \emptyset$, where $\langle \ensuremath{ag_{0}},R_0,G_0 \rangle \in Agents$. \item $R_{vo}$ is a set of roles to be played by the participating agents, where $R_{vo} \subseteq Roles$. \item $Wf_{vo} \subseteq Workflows$ is the workflow of services currently agreed amongst the agents in $A_{vo}$. \item $Con_{vo} \subseteq Contracts$ is a set of contracts between the agents in $A_{vo}$. \end{itemize} $Agents^*$ represents the set of all possible ``full'' or ``partial'' specifications of agents, corresponding to concrete choices of roles agents may play and goals they may focus on within a specific VO. 
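Membership in $Agents^*$ (a partial specification restricting a society agent's roles and goals) can be sketched as follows (hypothetical names; the society maps each identifier to its full $\langle R, G\rangle$ pair):

```python
def in_agents_star(partial, society) -> bool:
    """partial = (i, R', G'); society maps i -> (R, G) as in Agents.
    <i, R', G'> is in Agents* iff <i, R, G> is in Agents, R' <= R, G' <= G."""
    ident, roles_part, goals_part = partial
    if ident not in society:
        return False  # no corresponding agent in the society
    roles_full, goals_full = society[ident]
    return roles_part <= set(roles_full) and goals_part <= set(goals_full)
```

For instance, $\langle clientAg, \emptyset, \{toBuy(someI)\}\rangle$ is a partial specification of the $clientAg$ agent of section~\ref{sec:agents}, while any specification mentioning a role or goal the agent does not have is rejected.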
$A_{vo}$ describes the (partial specifications of) agents involved in the VO, as providers or requesters of services, or in whichever other roles, as indicated by $R_{vo}$. $A_{vo}$ only describes the agents insofar as their involvement in the VO is concerned (and thus possibly omitting some of their goals and roles, not relevant to the VO). The $Wf_{vo}$ and $Con_{vo}$ components determine the behaviour of the VO and its members during the execution and dissolution phases of VOs. The $G_{vo}$ and $R_{vo}$ components are related to the $A_{vo}$ component in that $G_{vo} = \bigcup_{\langle i,R,G \rangle \in A_{vo}} G$ and $R_{vo}= \bigcup_{\langle i,R,G \rangle \in A_{vo}} R$. In our model, a VO is instantiated during the formation phase, through interactions amongst the agents composing the VO. These interactions can be understood in terms of several operations progressively refining ``partial'' representations of VOs. These operations are defined as transitions, as outlined below. In the remainder, $Ids(A)$ refers to all identifiers of (partial specifications of) agents in the set $A$. \subsection{Goal Identification} The \emph{identify goals}{} transition results in the addition of the initiating agent \ensuremath{ag_{0}}{} and (some of) its goals into the partial VO tuple. These are goals that the agent cannot achieve on its own. Given \begin{center} $\ntuple{\emptyset}{\emptyset}{\emptyset}{\emptyset}{\emptyset} \op{identify \; goal} \ntuple{A_{init}}{G_{init}}{\emptyset}{\emptyset}{\emptyset}$,\\ \end{center} \noindent then \begin{itemize} \item $A_{init} = \{\langle \ensuremath{ag_{0}}, \emptyset, G_{init} \rangle \}$ for some $\langle \ensuremath{ag_{0}}, R_0 , G_0\rangle \in Agents$ such that some goals $G_0' \subseteq G_0$ cannot be fulfilled by \ensuremath{ag_{0}}{} in isolation; \item $G_{init} = G_0'$.
\end{itemize} \noindent Here, goal fulfilment is determined using the evaluation mechanism of agent \ensuremath{ag_{0}}{} (see section~\ref{sec:agents}). Note that no role is yet identified for \ensuremath{ag_{0}}: this will be done in transition \emph{establish roles}{} (see section~\ref{sec:em}). In order to ground this transition in our scenario, we assume that an agent $clientAg$ is informed by its user that a possible oil spill has been reported by a passing vessel. The user provides the following information to the agent: latitude, longitude, acceptable false positive threshold and scan area. Given the user's parameters, the agent initiates the VO formation process by first identifying its goals. The parameters correspond to high-level goals that will later be decomposed into specific workflows. In the example, the high-level goal $detectPossibleOilspill(38.0,-9.4,500,5)$ given by the user is to detect an oceanic oil spill off the Portuguese coast at a latitude and longitude of 38.0{} and -9.4{} with an acceptable false positive threshold of 5\% for the surrounding 500{} square kilometres. The agent may decide that to satisfy this high-level goal it needs two services: \hspace*{0.1cm} $g_1=toBuy(satImage([38.0,-9.4,\_,500,\_,radar,\_],\_))$, and $g_2=toBuy(oilSpillDetect([\_,\_,5],\_))$ \hspace*{0.1cm} \noindent where the sensor-type for the satellite providing the image must be radar, because of the required resolution and weather conditions, and once this image data is obtained a service is needed to provide the actual identification of the oil spills on the images. In summary, \emph{identify goals}{} will compute $A_{init} = \{\langle clientAg , \emptyset , G_{init} \rangle \}$ and $G_{init} = \{g_1, g_2\} $. Note that in our model goals of VOs emerge from goals of agents. Once the goals of VOs have been identified, they will dictate the behaviour of agents during the operation of VOs.
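The \emph{identify goals}{} transition can be sketched as follows (hypothetical names; ``cannot be fulfilled in isolation'' is modelled by a set of goals the initiating agent's evaluation mechanism can handle alone):

```python
def identify_goals(ag0_id, ag0_goals, fulfillable_alone):
    """From the empty VO tuple, produce <A_init, G_init, {}, {}, {}>:
    ag0 contributes exactly those goals it cannot achieve on its own."""
    g_init = {g for g in ag0_goals if g not in fulfillable_alone}
    # no role is yet identified for ag0, hence the empty role component
    a_init = {(ag0_id, frozenset(), frozenset(g_init))}
    return (a_init, g_init, set(), set(), set())
```

In the example, `identify_goals("clientAg", {g1, g2}, set())` would yield $A_{init} = \{\langle clientAg, \emptyset, \{g_1, g_2\}\rangle\}$.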
\subsection{Partner Discovery} The \emph{discover partners}{} transition results in the addition of a number of agents to the set of agents in the current partial VO (after \emph{identify goals}). Whether by traditional means such as a yellow page registry or through `friend of a friend' discovery utilising the multiagent system, the VO tuple is transformed to include potential partners that the initiating agent believes will help it reach its goals, notably by providing suitable services. Given \begin{center} $\ntuple{A_{init}}{G_{init}}{\emptyset}{\emptyset}{\emptyset} \op{discover \; partners} \ntuple{A_{pot}}{G_{init}}{\emptyset}{\emptyset}{\emptyset} $, \end{center} \noindent then $A_{pot} = A_{init} \bigcup A_{queryresult}$, where $A_{queryresult}$ includes those potential partners such that \begin{itemize} \item $A_{queryresult} \subseteq Agents^* - A_{init}$ and each element in $A_{queryresult}$ is of the form $\langle i, \emptyset, \emptyset \rangle$; \item each agent in $A_{queryresult}$ is a provider of one of the services in $G_{init}$, namely, for each $i \in Ids(A_{queryresult})$, if $\langle i, R, G \rangle \in Agents$ then $\exists s$ with $toBuy(s) \in G_{init}$ such that $\langle provider(s), PC \rangle \in R$. \end{itemize} In our example, $clientAg$ finds that two satellite image providers advertise the services it is interested in. The agents $satERS1Ag$ and $radSatAg$ each represent a radar-capable polar satellite with orbits amenable to the area of interest. Moreover, there is one agent, $procOSAg$, who can provide the type of image processing in which $clientAg$ is interested. After this transition is completed, $A_{queryresult} = \{ \langle satERS1Ag,\emptyset,\emptyset \rangle, \langle radSatAg,\emptyset,\emptyset \rangle, \langle procOSAg,\emptyset,\emptyset \rangle \}$. \subsection{Partner Selection} The set of potential partners discovered by \ensuremath{ag_{0}}{} may include unreliable or untrustworthy ones.
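Before turning to selection, the \emph{discover partners}{} transition above can be sketched as follows (hypothetical names; goals of the form $toBuy(s)$ are matched against $provider(s)$ roles advertised in the society):

```python
def discover_partners(a_init, g_init, society):
    """society maps identifier i -> set of role labels of that agent.
    Returns A_pot = A_init | A_queryresult, each new partner as <i, {}, {}>."""
    wanted = {g[len("toBuy("):-1] for g in g_init if g.startswith("toBuy(")}
    known = {i for (i, _, _) in a_init}
    query_result = {(i, frozenset(), frozenset())
                    for i, roles in society.items()
                    if i not in known
                    and any("provider(%s)" % s in roles for s in wanted)}
    return a_init | query_result
```

Agents that provide none of the wanted services never enter $A_{queryresult}$, mirroring the second condition of the transition.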
The \emph{select partners}{} transition allows the agent to prune the results of the \emph{discover partners}{} transition. We do not provide a detailed specification of the pruning mechanism needed to support this stage, as this is largely dependent on mechanisms for assessing trustworthiness and reliability. Several such mechanisms exist in the literature, and any of them could be used here. Generally, given \begin{center} $\ntuple{A_{pot}}{G_{init}}{\emptyset}{\emptyset}{\emptyset} \op{select \; partners} \ntuple{A_{pre}}{G_{init}}{\emptyset}{\emptyset}{\emptyset}$, \end{center} \noindent then \begin{itemize} \item $A_{pre} \subseteq A_{pot}$; \item $\ensuremath{ag_{0}} \in Ids(A_{pre})$; \item for each $s$ such that $toBuy(s) \in G_{init}$ there exists $ i \in Ids(A_{pre})$ such that if $\langle i, R, G \rangle \in Agents$ then $\exists \langle provider(s), PC \rangle \in R$. \end{itemize} In the example, after the \emph{select partners}{} transition is completed: \begin{quote} $A_{pre} = \{\langle clientAg , \emptyset , G_{init} \rangle, \langle satERS1Ag,\emptyset,\emptyset \rangle, \langle procOSAg,\emptyset,\emptyset \rangle \}$. \end{quote} Note that, in general, several providers of the same service may still be in $A_{pre}$ after the pruning. \subsection{Establish Roles} \label{sec:em} Before the agents are able to negotiate workflows and contracts, the roles they will be playing in the negotiation need to be established. These roles (with their protocols) are the social norms used for forming the VO. Establishing these roles also amounts to establishing protocols for them (as our roles include protocols). Roles include requester and provider roles, but may also include other roles (e.g. that of arbitrator, or contract-negotiator if agents other than provider and requester agents may be needed to support the contract negotiation stage of VO formation).
For simplicity, we assume that these roles are decided by the initiating agent, and that, given \begin{center} $\ntuple{A_{pre}}{G_{init}}{\emptyset}{\emptyset}{\emptyset} \op{establish \; roles} \ntuple{A_{roles}}{G_{init}}{R_{vo}}{\emptyset}{\emptyset}$, \end{center} then \begin{itemize} \item $Ids(A_{pre}) \subseteq Ids(A_{roles})$; \item $A_{roles} = A_{pre}^* \cup A_{rest}$, where \begin{itemize} \item $A_{pre}^* = \bigcup_{\langle i, \emptyset, G \rangle \in A_{pre}} \{\langle i, R_i, G_i \rangle\} $ for some $R_i$s such that $R_i \subseteq R^*_i$ and $G_i \subseteq G_i^*$ where $\langle i, R^*_i, G_i^* \rangle \in Agents$; \item $A_{rest} \subseteq Agents^*$ (where $A_{rest}$ may be empty); \item $A_{rest} \cap A_{pre}^* = \emptyset$; \end{itemize} \item $R_{vo} = \bigcup_{\langle i, R_i, \_ \rangle \in A_{roles}} R_i$; \item for each $s$ such that $toBuy(s) \in G_{init}$, there exists exactly one $\langle provider(s), PC_{provider(s)} \rangle$ and exactly one $\langle requester(s), PC_{requester(s)} \rangle$ in $R_{vo}$; \item for every $\langle r_1, PC_{r_1} \rangle$ and $\langle r_2, PC_{r_2} \rangle$ in $R_{vo}$, if $r_1=r_2$ then $PC_{r_1}=PC_{r_2}$, namely there is exactly one role for each role identifier in $R_{vo}$. \end{itemize} Note that we do not impose that \ensuremath{ag_{0}}{} plays the role of requester of all the services in the workflow it wants to instantiate: indeed, in general it may be possible that \ensuremath{ag_{0}}{} delegates the task of requesting some or all services to some other agent. In particular, it may be the case that $\langle \ensuremath{ag_{0}}, \emptyset, G_{init} \rangle$ belongs to $A_{roles}$. Also, we allow for the same agent to play several roles in a VO (namely, $\langle i, R_i, G_{i} \rangle$ may belong to $A_{roles}$ with $R_i$ containing more than one role).
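Two of the \emph{establish roles}{} post-conditions — a requester/provider role pair for each $toBuy$ goal, and a single protocol clause per role identifier — can be sketched as follows (hypothetical names; $R_{vo}$ is a set of $(rid, PC)$ pairs):

```python
def roles_well_formed(r_vo, g_init) -> bool:
    by_id = {}
    for rid, pc in r_vo:
        if by_id.setdefault(rid, pc) != pc:
            return False  # two different protocols for the same role identifier
    for g in g_init:
        if g.startswith("toBuy("):
            s = g[len("toBuy("):-1]
            if ("requester(%s)" % s not in by_id
                    or "provider(%s)" % s not in by_id):
                return False  # missing requester or provider role for s
    return True
```

Since $R_{vo}$ is a set, duplicate pairs collapse, so the first loop only rejects genuinely conflicting protocol clauses for one identifier.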
In our running example, once the \emph{establish roles}{} stage is completed: \begin{quote} $A_{roles} = \{$ $\langle clientAg, \{\langle requester(satImage([38.0,-9.4,\_,500,\_,radar,\_],\_)), PC_1 \rangle,$\\ \hspace*{3.4cm}$\langle requester(oilSpillDetect([\_,\_,5],\_)), PC_2 \rangle \}, G_{init} \rangle, $\\ \hspace*{1.6cm}$\langle satERS1Ag, \{\langle provider(satImage([38.0,-9.4,\_,500,\_,radar,\_],\_)), PC_3 \rangle \},\emptyset \rangle$\\ \hspace*{1.6cm}$\langle procOSAg,\{\langle provider(\goaltwoN), PC_4 \rangle\},\emptyset \rangle$ $\}$ $R_{vo} = \{$ $\langle requester(\goalone), PC_1 \rangle$, \\ \hspace*{1.3cm}$\langle requester(\goaltwo), PC_2 \rangle$, \\ \hspace*{1.3cm}$\langle provider(\goalone), PC_3 \rangle$, \\ \hspace*{1.3cm}$\langle provider(\goaltwo), PC_4 \rangle$ $\}$ \end{quote} Here, the $PC_i$ are protocol clauses that the agents commit to follow during the negotiation of workflows. In this specific example, no role/protocol is specified for the \emph{agree contract}{} transition, although $A_{roles}$ and $R_{vo}$ could be extended to cater for contract negotiation. Note that other agents may be brought into $A_{roles}$ at this stage to play any such additional roles.
\subsection{Negotiation} The negotiation activities in the VO formation amount to (i) agreeing a concrete workflow (transition \emph{agree Wf}) and (ii) agreeing a set of contracts amongst the agents contributing to the workflow, by providing services in it, and the initiating agent (transition \emph{agree contract}). Both transitions make use of the roles (and protocols) identified at the \emph{establish roles}{} transition: by communicating according to these protocols, agents agree on the provision of services and contracts. Negotiation may result in additional goals being added, as goals of agents providing services. The \emph{agree contract}{} transition may cause no changes in the partial VO tuple, if no suitable roles have been computed by the \emph{establish roles}{} transition. For lack of space we will only describe the \emph{agree Wf}{} transition. In order for the computed VO to be meaningful, this transition needs to produce a workflow that is concrete or partially instantiated, but can be fully instantiated when the workflow is executed. This workflow instantiates the abstract workflow corresponding to the $toBuy$ goals in $G_{init}$. This instantiation may be obtained after several negotiations, each following the protocols of the roles identified after the \emph{establish roles}{} transition, each resulting in a service becoming instantiated. After each instantiation, the initiating agent puts those instantiated services into the workflow component of the VO tuple.
Generally, given\\ \begin{center} $\ntuple{A_{roles}}{G_{init}}{R_{vo}}{\emptyset}{\emptyset} \op{agree \; Wf} \ntuple{A_{vo}}{G_{vo}}{R_{vo}}{Wf_{vo}}{\emptyset}$, \end{center} then \begin{itemize} \item $Ids(A_{vo}) \subseteq Ids(A_{roles})$; \item $\ensuremath{ag_{0}} \in Ids(A_{vo})$; \item for each $s$ such that $toBuy(s) \in G_{init}$ there exists exactly one agent $i \in Ids(A_{vo})$ such that $\langle i, R_i, G_i \rangle \in A_{vo}$ and $provider(s) \in R_i$, and a successful dialogue between \ensuremath{ag_{0}}{} and this agent $i$ with \ensuremath{ag_{0}}{} playing the role of $requester(s)$ and $i$ playing the role of $provider(s)$; \item for each $\langle i, R_i, G_i \rangle \in A_{vo}$, if $\langle i, R_i^*, G_i^* \rangle \in A_{roles}$ then $R_i^*=R_i$ (namely roles cannot be changed at this stage); $G_i \supseteq G_i^*$ (namely goals can only be added at this stage); if $\langle i, R_i^{**}, G_i^{**} \rangle \in Agents$ then $G_i \subseteq G_i^{**}$ (namely all goals are chosen from the pool of goals of the agent); \item $G_{init} \subseteq G_{vo}$; \item $G_{vo} = \bigcup_{\langle i, R_i, G_i \rangle \in A_{vo}} G_i$; \item $Wf_{vo}$ is the result of instantiating $Wf$ by the given sequence of successful dialogues, as dictated by $G_{vo}$; the providers of the services are given by $A_{vo}$. \end{itemize} Intuitively, agents may decide to add goals at this stage to avoid agreements to provide a service which could prevent the fulfilment of some of their goals. We impose that the initiating agent is not allowed to change the workflow. However, it can add constraints or services to it, as long as no new role is required by this addition. For example, this would be needed and useful to support shimming\footnote{Informally, shimming is the introduction of a service into a workflow to ensure that the output of a preceding service matches the type required by the input of the subsequent service.}
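A few of the \emph{agree Wf}{} post-conditions above — roles frozen, goals only added, exactly one provider per contracted service — can be sketched as follows (hypothetical names; agents are $(i, R_i, G_i)$ triples with frozensets for $R_i$ and $G_i$):

```python
def agree_wf_ok(a_roles, a_vo, g_init) -> bool:
    before = {i: (r, g) for (i, r, g) in a_roles}
    for i, r, g in a_vo:
        if i not in before:
            return False  # Ids(A_vo) must be a subset of Ids(A_roles)
        r0, g0 = before[i]
        if r != r0 or not g >= g0:
            return False  # roles cannot change; goals can only be added
    for goal in g_init:
        if goal.startswith("toBuy("):
            s = goal[len("toBuy("):-1]
            providers = [i for (i, r, _) in a_vo if "provider(%s)" % s in r]
            if len(providers) != 1:
                return False  # exactly one provider per service
    return True
```

The successful-dialogue requirement is deliberately left out of this sketch, since it depends on the protocol clauses chosen at \emph{establish roles}.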
Goals of provider agents may render this shimming necessary (e.g. because a service provider does not want to interface to another service provider). Another aspect of this formulation is that a single provider per service is required. These providers will need to be selected amongst all agents that have successfully completed a dialogue with \ensuremath{ag_{0}}. We do not impose any constraints on how this choice is performed: the given protocols may typically dictate this. \section{Conclusions} \label{sec:conc} We have described a formalisation for VOs in grid and service-oriented architectures, formed from agent societies, using a realistic scenario for illustration throughout. This formalisation is abstract and independent of any realisation choices (in terms of agent architectures, communication platforms, etc.). It can guide the development of (agent-based) VOs, in that it identifies essential components (such as several underlying languages for services, identifiers, communication, as well as protocol-based roles for negotiation of services and contracts). We have experimented with some of the interactions presented here for the earth observation scenario~\cite{DBLP:conf/atal/BromuriUMST09}, with emphasis on the coordination patterns agents should follow when creating a VO~\cite{mage}. Our emphasis on the use of protocols to support VOs is influenced by \cite{FosterKessel01}. The CONOISE-G~\cite{patel-2005} project presented an agent-based model for VOs on the grid, but focused on the challenging task of engineering a working system, thus making concrete realisation choices (e.g. agents use a constraint satisfaction algorithm for decision-making). We have taken a more abstract view of agents, agent society and VOs, to ensure that the definitions can be ported to any other agent-enabled grid systems to support VOs in general.
Papers such as~\cite{Matos} speculate on the consequences of introducing software agents as a means to alleviate the burden on human decision-making. We have a similar focus, in that we see an opportunity in the use of the multiagent paradigm for automating parts of VOs. A few papers have formalised aspects of agent-enabled VOs: for example,~\cite{DBLP:conf/atal/PittKSA05} looks at voting protocols for VOs, while~\cite{UdupiS06} focuses on the representation of contracts in VOs based on a specific commitment-based approach for them. We have taken a more exhaustive view by considering all components of agent-enabled VOs, but more abstractly. There is also existing work applying agent societies and electronic institutions to VOs~\cite{prometheus}. This work differs from ours in a number of respects. First, our emphasis is on the formalisation of a VO in terms of its components and the VO transitions, not on the details of the regulation of the participating agents and their behaviour. Second, we abstract away from methodological issues. Third, we do not require a classification of the goals of agents and a focus on the capabilities of individual agents. The regulation of VOs and Electronic Institutions with emphasis on norms is also discussed in~\cite{Cardoso08}, where, like here, the focus is on agreed contracts about the provision of institutional services. However, we abstract away from the monitoring of VO activities and the evaluation of norms. As future work, it will be important to validate the proposed model with further examples, e.g. in e-business and pharma settings, as well as to formally verify that the VO formation model provided results in ``coherent'' VOs, namely VOs where all agents involved can fulfil their relevant goals as a result of their participation in the VO, given that the VO is executed as agreed.
\section*{Acknowledgements} We would like to thank the anonymous referees for their comments on a previous version of this paper. The work reported here was partially funded by the Sixth Framework IST programme of the EC, under the 035200 ARGUGRID project. \bibliographystyle{eptcs}
\section{Introduction} \label{introduction} The idea of Kaluza and Klein~\cite{kk} of obtaining electromagnetism - and, under the influence of their idea, nowadays also the weak and colour fields~\cite{geogla,chofre,zee,salstr,mec,dufnilspop,daess,wet} - from purely gravitational degrees of freedom connected with having extra dimensions is very elegant. More than twenty-five years ago, Kaluza-Klein[like] theories were studied very intensively by many authors~\cite{wet,dufnilspop,zelenaknjiga,horpal}. Although the breaking of the symmetry of the starting Lagrange density to the low energy effective ones (that is, to the charges and correspondingly to the gauge fields assumed by the {\it standard model of the electroweak and colour interactions}) seemed very promising, the idea of Kaluza and Klein was almost killed by the ``no-go theorem'' of E. Witten~\cite{witten}, stating that these kinds of Kaluza-Klein[like] theories with the gravitational fields only (that is, with vielbeins and spin connections) have severe difficulties in obtaining massless fermions chirally coupled to the Kaluza-Klein-type gauge fields in $d=1+3$, as required by the {\it standard model}. There were attempts to escape from the ``no-go theorem'' in compact extra spaces by having torsion~\cite{salstr}, or by putting extra gauge fields by hand in addition to gravity in higher dimensions~\cite{sap}, which is no longer the pure Kaluza-Klein[like] theory and accordingly loses the elegance. Since there is the assumption that the space is {\em compact} in the ``no-go theorem'' of E.
Witten, there have also been attempts to achieve masslessness by appropriate choices of vielbeins in {\em noncompact} spaces; one of these works~\cite{wet} is commented on in a footnote~\footnote{The author of ref.~\cite{wet} proposes, for example, the ``squashed'' $S^2$ sphere, recognizing that with the zweibein of $S^2$ (he calls $S^2$ in this case a compact space) there are no massless spinor states, while with a zweibein at least a little ``stronger'' than that of $S^2$ (like with $f= (1+(\frac{\rho}{2\, \rho_0})^{2+k})$, with $0< k \le 2$; $k=0$ reproduces $S^2$) there are two massless states. Although the author wrote differently, these two massless states belong to the left- and the right-handed states with respect to $d=(1+3)$, are therefore not mass protected, and would correspondingly lead to a massive fermion state.}. There are several attempts to point out the importance of noncompact extra dimensions, like~\cite{mmWeakScale98di}, many of them surveyed in~\cite{rubakovExtraDim}. These attempts do not really try to keep the Kaluza-Klein approach in its original elegant version; rather, they embed strings, membranes, p-branes into higher dimensional spaces. The most popular models of this kind are probably the Randall-Sundrum models~\cite{rs}. In this paper we are interested in extra dimensions in the Kaluza-Klein sense: that is, as a possibility that gravity (and only gravity) in extra dimensions manifests as the {\it standard model} gauge fields in $(1+3)$, coupled to the corresponding charges. In refs.~\cite{hnkk06} we achieved masslessness of spinors in the pure Kaluza-Klein[like] theory (for the case of an $M^{1+5}$ manifold broken into $M^{1+3} \times$ an infinite disc) with an appropriate choice of a boundary limiting the extra dimensions to a finite surface on a disc. In the present paper we take the whole two dimensional plane and roll it up into an almost $S^2$, with one point - the south pole - excluded.
It is our choice of a {\em zweibein} which forces the two extra dimensions into an almost $S^2$. Thus, although it has a finite volume (namely the surface of $S^2$), the space is {\em noncompact}. We require spinor states to be {\em normalizable} in the fifth and sixth dimensions~\footnote{In ref.~\cite{wet}, mentioned and discussed in the previous footnote, this idea of a finite volume of a noncompact space, as well as the normalizability of states, is already stressed.}, and we prove that the normalizable solutions form a complete set. It is our choice of a {\em particular spin connection field}, with the strength within an interval, which allows only one normalizable massless state of a particular handedness (with respect to $(1+3)$), breaking the parity symmetry. The {\it finite volume} of an {\em infinite} disc, an appropriate {\em choice of the spin connection field} with the strength $F$ allowed to be within the whole interval $0< 2F \le 1$, and the {\em normalizability} requirement make the mass spectrum of our Hermitean Hamiltonian in a noncompact space discrete, with only one massless state of a particular charge chirally coupled to the Kaluza-Klein gauge field. It is the sign of $F$ which makes the choice of the handedness of the massless state, breaking the parity symmetry. The usually expected problem with extra noncompact dimensions, namely a continuous spectrum, is {\it not present} in our model. For a particular choice of the strength of the spin connection field we find the states and the spectrum (the masses) analytically. This mass spectrum of states forms a complete set on our almost $S^2$. For the remaining values of the strength, for all of which only one massless solution of a particular handedness in $(1+3)$ exists, it is not difficult to find the recursive formulas for the normalizable solutions and the masses.
Accordingly, in this two dimensional noncompact space, with the spin connections and vielbeins which are both a part of the gravitational gauge fields and with no (additional) external field present, the "no-go theorem" of E. Witten is not valid. We also characterize the "singularity" which the spinor solutions "feel" on our infinite disc with the zweibein of a $S^2$ sphere, when treating the disc as the almost $S^2$ sphere, that is the $S^2$ sphere with a hole at the southern pole, so that we have an almost $M^{(1+3)}\times S^2$ case, that is, an almost compact space. Let us add: As it is not difficult to recognize, the two dimensional spaces are very special~\cite{mil,deser}. Namely, in dimensions higher than two, when no fermions are present and the Lagrange density contains only the curvature in the first power, the spin connections are normally determined from the vielbein fields, and the torsion is zero. In two dimensional spaces, the vielbeins do not determine the spin connection fields. In the present article we pay attention to cases, which we call {\em an effective two-dimensionality}, in which the spin connections are not fully determined by the vielbeins. In the types of models proposed here there is a chance of having chirally mass protected fermions in a theory in which the chirally protecting effective four dimensional gauge fields are {\it true} Kaluza-Klein[like] fields, whose degrees of freedom are inherited from the higher dimensional gravitational ones. We are thus hoping for a revival of true Kaluza-Klein[like] models as candidates for phenomenologically viable models!
One of us (N.S.M.B.) has long been developing the {\it approach unifying spins and charges and predicting families}~\cite{norma92939495,holgernorma20023}, in which spinors that carry in $d\ge 4$ nothing but two kinds of the spin (no charges) manifest in $d=(1+3)$ all the properties assumed by the {\it standard model}; the {\it approach} accordingly shares with the Kaluza-Klein[like] theories the problem of the masslessness of fermions before the electroweak-like types of break. We present briefly the ideas of the {\it approach} in the footnote~\footnote{ The {\it approach unifying spins and charges and predicting families}~\cite{norma92939495} proposes in $d=(1 + (d-1))$ a simple starting action for spinors with two kinds of the spin generators ($\gamma$ matrices): the Dirac one, which takes care of the spin and the charges, and a second one, anticommuting with the Dirac one, which generates families. For the explanation of the appearance of the two kinds of the spin generators we invite the reader to look at the refs.~\cite{norma92939495,holgernorma20023} and the references therein. A spinor couples in $d=1+13$ to the vielbeins and (through the two kinds of the spin generators) to the spin connection fields. Appropriate breaks of the starting symmetry lead to the left handed quarks and leptons in $d=(1+3)$, which carry the weak charge, while the right handed ones are weak chargeless. The {\it approach} offers answers to the questions about the origin of families of quarks and leptons, about the explicit values of their masses and mixing matrices (predicting the fourth family to be possibly seen at the LHC or at somewhat higher energies), as well as about the masses of the scalar and the weak gauge fields, about the dark matter candidates, and about the breaking of the discrete symmetries. There are many possibilities in the {\it approach} for breaking the starting symmetries to those of the {\it standard model}.
These problems were studied in some crude approximations in refs.~\cite{norma92939495} and are under consideration~\cite{proc2010}.}. Let us point out that in odd dimensional spaces and in even dimensional spaces with the dimension divisible by four there is no mass protection in the Kaluza-Klein[like] theories~\cite{wet,hnkk06}. The spaces for which we can therefore hope that the Kaluza-Klein[like] theories lead to chirally protected fermions, and accordingly to the effective theory of the {\it standard model of the electroweak and colour interactions}, have $2(2n+1)$ dimensions. Breaking symmetries in such spaces, if one starts with one Weyl spinor, and accordingly with a mass protected case, should again lead to mass protected cases in accordance with the {\it standard model}. \section{The action, equations of motion, solutions, proofs and comments} \label{starting action} We prove in this section that in $M^{1+3} \times $ an infinite disc with the particular zweibein and spin connection on the disc there exists only one massless normalizable (on the disc) fermion state, of only one handedness and of a particular charge. It is accordingly mass protected. We also present proofs that the Hamiltonian is Hermitean and the spectra of normalizable states correspondingly discrete. For a particular strength of the spin connection field we present the spectrum and the states. We discuss the properties of the solutions for the strengths allowed by the normalizability requirement. Let us first repeat the four assumptions, stressed already in the introduction.
\begin{enumerate} \item We assume a $2(2n+1)$-dimensional space, in our case $n=1$, with only gravity, described by the action~\footnote{We have proven in ref.~\cite{hnkk06} that only in even dimensional spaces of $d=2$ modulo $4$ dimensions (\textit{i.e.} in $d=2(2n+1),$ $n=0,1,2,\ldots$) spinors (they are allowed to be in families) of one handedness and with no conserved charges gain no Majorana mass.} \begin{eqnarray} {\cal S} = \alpha \int \; d^d x \, E {\cal\, R}\,. \label{action} \end{eqnarray} The Riemann scalar ${\cal R} = {\cal R}_{abcd}\,\eta^{ac}\eta^{bd}$ is determined by the Riemann tensor ${\cal R}_{abcd} = f^{\alpha}{}_{[a} f^{\beta}{}_{b]}(\omega_{cd \beta, \alpha} - \omega_{ce \alpha} \omega^{e}{}_{d \beta} )$, with vielbeins $f^{\alpha}{\!}_{a}$~\footnote{$f^{\alpha}{}_{a}$ are the inverted vielbeins to $e^{a}{}_{\alpha}$, with the properties $e^a{}_{\alpha} f^{\alpha}{\!}_b = \delta^a{}_b,\; e^a{}_{\alpha} f^{\beta}{}_a = \delta^{\beta}_{\alpha} $. Latin indices $a,b,..,m,n,..,s,t,..$ denote a tangent space (a flat index), while Greek indices $\alpha, \beta,..,\mu, \nu,.. \sigma,\tau ..$ denote an Einstein index (a curved index). Letters from the beginning of both alphabets indicate a general index ($a,b,c,..$ and $\alpha, \beta, \gamma,.. $), letters from the middle of both alphabets the observed dimensions $0,1,2,3$ ($m,n,..$ and $\mu,\nu,..$), and letters from the bottom of the alphabets the compactified dimensions ($s,t,..$ and $\sigma,\tau,..$). We assume the signature $\eta^{ab} = diag\{1,-1,-1,\ldots,-1\}$. } and the spin connections $\omega_{ab\alpha}$ (the gauge fields of $S^{ab}= \frac{i}{4}(\gamma^a \gamma^b - \gamma^b \gamma^a)$). $[a\,\,b]$ means that the antisymmetrization must be performed over the two indices $a$ and $b$; $E$ is the determinant of the zweibein $e^{s}{}_{\sigma}$, $\, e^{s}{}_{\sigma} f^{\sigma}{}_{t} = \delta^s{}_t\; $ (Eq.~(\ref{fzwei})).
\item Space $M^{1+5}$ has the symmetry of $M^{1+3} \times $ an infinite disc with the zweibein on the disc \begin{eqnarray} e^{s}{}_{\sigma} = f^{-1} \pmatrix{1 & 0 \cr 0 & 1 \cr}, f^{\sigma}{}_{s} = f \pmatrix{1 & 0 \cr 0 & 1 \cr}\,, \label{fzwei} \end{eqnarray} with \begin{eqnarray} \label{f} f &=& 1+ (\frac{\rho}{2 \rho_0})^2, \nonumber\\ x^{(5)} &=& \rho \,\cos \phi,\quad x^{(6)} = \rho \,\sin \phi, \quad E= f^{-2}\,.\nonumber \end{eqnarray} The last relation follows from $ds^2= e_{s \sigma}e^{s}{}_{\tau} dx^{\sigma} dx^{\tau}= f^{-2}(d\rho^{2} + \rho^2 d\phi^{2})$. We use indices $s,t=5,6$ to describe the flat index in the space of an infinite plane, and $\sigma, \tau = (5), (6), $ to describe the Einstein index. $\phi$ determines the angle of rotations around the axis perpendicular to the disc. \item The spin connection field is chosen to be \begin{eqnarray} f^{\sigma}{}_{s'}\, \omega_{st \sigma} &=& i F\, f \, \varepsilon_{st}\; \frac{e_{s' \sigma} x^{\sigma}}{(\rho_0)^2}\, , \quad 0 <2F \le 1\, ,\quad s=5,6,\,\,\; \sigma=(5),(6)\,. 
\label{omegas} \end{eqnarray} \item We require normalizability of states $\psi$ on the disc \begin{eqnarray} \label{normalizability} \int_{0}^{2\pi}\,d \phi \; \int_{0}^{\infty}\;E\,\rho d\rho\; \psi^{\dagger} \psi < \infty\,, \end{eqnarray} as usual in quantum mechanics, allowing at most the plane waves normalized to the delta function: $\int_{-\infty}^{\infty}\,d x^{(5)} \,\int_{-\infty}^{\infty}\,d x^{(6)}\;E\, e^{i \vec{k}(\vec{x}- \vec{x'})} = \delta^{2}(\vec{x}-\vec{x'})\,.$ \end{enumerate} Let us now make several statements, proofs of these statements, and comments, which will help to clarify the meaning of the assumptions.\\ {\it Statement 1.:} {\it In the absence of fermion fields in $d=2$ any zweibein and any spin connection fulfil the equations of motion.}\\ {\it Proof 1.: } The action of Eq.~(\ref{action}) leads to the equations of motion~\cite{mil,norma92939495} \begin{equation} \label{omegabcc} (d-2)\,\omega_b{}^c{}_c = \frac{e^a{}_\alpha}{E} \partial_\beta\left(Ef^\alpha{}_{[a}f^\beta{}_{b]} \right), \end{equation} which clearly demonstrate that any spin connection $\omega_b{}^c{}_c = \omega_b{}^c{}_{\alpha} \, f^{\alpha}{}_{c}$ (which can in $d=2$ have only two different indices) satisfies this equation. \\ {\it Comment 1.:} For $d=2$ the variation of the action~(\ref{action}) with respect to the vielbeins leads to the equation $-e_{s \, \sigma} R + 4 f^{\tau t} \omega_{st \sigma,\tau} = 0,$ which is zero for any $R$ ($-2 R + 2 R=0$). \\ {\it Statement 2.:} {\it The volume of this noncompact space} (which looks almost like a $S^2$ sphere) {\it is finite.} \\ {\it Proof 2.:} The volume is $\displaystyle 2 \pi \int^{\infty}_0 \, f^{-2}\rho \,d\rho= \pi\, (2 \rho_0)^2$.\\ {\it Comments 2.: } {\bf i.)} The finite volume helps to assure the existence of normalizable spinor states on this disc.
{\bf ii.)} The symmetry of this disc, which is the symmetry of $U(1)$ group, determines the charge of spinors in $d=(1+3)$.\\ {\it Statement 3.:} {\it The choice that }$M^{1+5}$ {\it breaks into }$M^{1+3} \times $ {\it an infinite disc with no gravity in } $M^{1+3}$ {\it and with the zweibein of} Eq.~(\ref{fzwei}) {\it and the spin connection of} Eq.~(\ref{omegas}) {\it on an infinite disc makes the Lagrange density for a Weyl spinor} ${\cal L}_{W} = \frac{1}{2} [(\psi^{\dagger} E \gamma^0 \gamma^a p_{0a} \psi) + (\psi^{\dagger} E \gamma^0\gamma^a p_{0 a} \psi)^{\dagger}]$ {\it to be} \begin{eqnarray} {\cal L}_{W} &=& \psi^{\dagger}\{E\gamma^0 \gamma^n p_n + E f \gamma^0 \gamma^s \delta^{\sigma}_s ( p_{0\sigma} + \frac{1}{2 E f} \{p_{\sigma}, E f\}_- )\}\psi,\; n=0,1,2,3, \nonumber\\ && p_{0\sigma} = p_{\sigma}- \frac{1}{2} S^{st}\omega_{st \sigma}, \label{weylE} \end{eqnarray} {\it with} $ E = \det(e^a{\!}_{\alpha}) = f^{-2}$, $f$ {\it is from } Eq.~(\ref{f}), {\it and with} $ \omega_{st \sigma}$ {\it from }Eq.~(\ref{omegas})~\footnote{One finds that $ \omega_{cda} = \Re e \;\omega_{cda}, \;\; {\rm if \;\; c,d,a\;\; all\;\; different} $ while $ \omega_{cda}= i\,\Im m\; \omega_{cda},\;\; \rm{otherwise}.$}.\\ {\it Proof 3.:} Eq.~(\ref{weylE}) follows from the starting Lagrangean for a Weyl spinor interacting with only the vielbeins and spin connections straightforwardly. \\ {\it Comment 3.:} The Lagrange density of Eq.~(\ref{weylE}) assures that the Hamiltonian is Hermitean. 
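As a numerical cross-check of Proof 2. above (a sketch of ours, not part of the derivation; the value of $\rho_0$ is arbitrary and chosen only for the check), the volume integral $2\pi \int_0^{\infty} f^{-2}\rho\, d\rho$ with $f = 1+(\frac{\rho}{2\rho_0})^2$ can be evaluated directly and compared with $\pi (2\rho_0)^2$:

```python
# Spot check of the finite volume of the noncompact disc:
#   V = 2*pi * Integral_0^inf f^{-2} rho d rho = pi * (2 rho_0)^2,
# with the conformal factor f = 1 + (rho/(2 rho_0))^2.
import numpy as np
from scipy.integrate import quad

rho0 = 1.7  # arbitrary length scale, used only for this check

def integrand(rho):
    f = 1.0 + (rho / (2.0 * rho0)) ** 2
    return rho / f ** 2

value, abserr = quad(integrand, 0.0, np.inf)
volume = 2.0 * np.pi * value
print(volume, np.pi * (2.0 * rho0) ** 2)
```

The agreement (for any $\rho_0$) reflects the substitution $u=\rho^2/(2\rho_0)^2$, which turns the integral into $2\rho_0^2\int_0^\infty (1+u)^{-2} du = 2\rho_0^2$.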
\\ {\it Statement 4.:} {\it The normalizability condition for spinors on an infinite disc curled into an almost $S^2$ and with the spin connection of a particular choice makes a choice of a spectrum which forms a complete set.} \\ {\it Proof 4.:} The Lagrange density of Eq.~(\ref{weylE}) leads to the equations of motion (Eqs.~(\ref{weylEp},\ref{weylErho}, \ref{equationm56gen1})) \begin{eqnarray} \label{weylErho0} if \, \{ e^{i \phi 2S^{56}}\, [\frac{\partial}{\partial \rho} + \frac{i\, 2 S^{56}}{\rho} \, (\frac{\partial}{\partial \phi}) - \frac{1}{2 \,f} \, \frac{\partial f}{\partial \rho }\, (1- 2F \, 2S^{56})\,] \, \} \, \psi^{(6)} + \gamma^0 \gamma^5 \, m \, \psi^{(6)}=0\,, \end{eqnarray} which for $F=1/2$ take the form of Legendre equations~(Eq.~(\ref{equationm56u})). It is the sign of $F$ which makes the choice of the handedness of a massless state and accordingly breaks the parity symmetry. One can prove that the only normalizable eigenstates in the interval $0 \le \rho \le \infty$ are those with integer parameters $l$ and $n$, $(m\rho_0)^2= l(l+1)$, in Eqs.~(\ref{equationm56ux}). These states are Legendre polynomials and form the {\em complete set}. Solutions for a noninteger $n$ are singular at $\rho =0$, while solutions with a noninteger $l$ are singular at $\rho = \infty$; either singularity makes the corresponding eigenstates non-normalizable. \\ {\it Comments 4.:} {\bf i.)} In the subsection \ref{equations} of this section the solutions of Eq.~(\ref{weylErho0}) are discussed for any choice of $F$ in the interval $0 <2F \le 1$. All the normalizable solutions can for any $F$ in this interval be expressed as a normalizable superposition of a complete set of Legendre polynomials and have a discrete spectrum.
{\bf ii.)} In the limit $\rho_0 \to \infty$, $f$ (in Eq.~(\ref{equationm56gen1}), next section) goes to one and the two equations, Eq.~(\ref{equationm56gen1}), define the recurrence relations between the Bessel functions of an integer order (${\cal A}_{n}(\rho m)= J_{n}(\rho m)$ and ${\cal B}_{n+1}(\rho m) = J_{n+1}(\rho m)$) for any mass $m$. Making the limit $\rho_0 \to \infty$ in Eq.~(\ref{equationm56u}) in the next section, with the discrete mass term $(m\rho_{0})^2= l(l+1)$, one again reproduces the Bessel equation, if putting $l= m \rho_0$. (Bessel functions can be square normalized only within a finite radius, determined by their zeros.) With $\rho_0$ going to infinity the distance between the $m$-values solving this constraint goes to zero, so that in this limit the system of allowed $m$-values approaches the continuum (all $m$-values). This is satisfactory, because this limit corresponds to our already noncompact space approaching the usual flat two-dimensional space (in which case one would have a truly $(5+1)$-dimensional world, in which of course the spectrum seen as a $(3+1)$-dimensional one should be continuous). {\bf iii.)} For any finite $\rho_{0}$ the plane wave in the fifth and sixth dimension can be expressed in terms of the Legendre polynomials. To a plane wave in general many Legendre polynomials contribute, each corresponding to a different mass. There is a solution for $2F=1$ which is independent of $x^{\sigma}\,, \, \sigma \in \{(5),(6)\}$. It corresponds to the massless solution and can be called the plane wave with momentum zero. In the limit $\rho_0 \to \infty$ the definition of the plane waves in flat space follows.
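The flat-space limit of Comment 4. ii.) can be spot-checked numerically. With $f \to 1$ the two coupled first-order equations reduce (in our reading, which the sketch below tests with arbitrary $m$ and $n$) to the standard Bessel recurrences $(\frac{d}{d\rho}-\frac{n}{\rho})J_n(m\rho) = -m J_{n+1}(m\rho)$ and $(\frac{d}{d\rho}+\frac{n+1}{\rho})J_{n+1}(m\rho) = m J_n(m\rho)$:

```python
# Verify the Bessel-function recurrences that the rho_0 -> infinity (f -> 1)
# limit of the coupled equations for A_n, B_{n+1} is claimed to reproduce.
import numpy as np
from scipy.special import jv, jvp  # Bessel J and its derivative

m, n = 0.9, 2  # arbitrary mass and angular number for the spot check
rho = np.linspace(0.5, 20.0, 200)
z = m * rho

# (d/d rho - n/rho) J_n(m rho) = -m J_{n+1}(m rho)
resid1 = np.max(np.abs(m * jvp(n, z) - (n / rho) * jv(n, z) + m * jv(n + 1, z)))
# (d/d rho + (n+1)/rho) J_{n+1}(m rho) = m J_n(m rho)
resid2 = np.max(np.abs(m * jvp(n + 1, z) + ((n + 1) / rho) * jv(n + 1, z)
                       - m * jv(n, z)))
print(resid1, resid2)
```

Both residuals vanish to machine precision, confirming that for $f=1$ every $m$ is allowed, i.e. the spectrum becomes continuous.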
{\it Statement 5.:} {\it The zweibein} (Eq.~(\ref{fzwei})) {\it and the spin connection} (Eq.~(\ref{omegas})) {\it with the parameter} $F$ {\it within the interval} $0 <2F \le 1$ {\it allow only one massless spinor of a particular charge.}\\ {\it Proof 5.:} It is proven in the next subsection, in the last paragraph before Eq.~(\ref{weylEp}), that it is the term $\psi^{\dagger}\, E f \gamma^0 \gamma^s \delta^{\sigma}_s ( p_{0\sigma} + \frac{1}{2 E f} \{p_{\sigma}, E f\}_- )\psi$ in the Lagrange density~(Eq.~(\ref{weylE})) which manifests as the mass term $m$ in Eq.~(\ref{weylErho0}). There is a term in Eq.~(\ref{weylErho0}), namely $- if \, e^{i \phi 2S^{56}} \,\frac{1}{2 \, f} \, \frac{\partial f}{\partial \rho }\, (1- 2F \, 2S^{56})\, \, \psi^{(6)}$, which clearly distinguishes between the two possible values of the spin operator $S^{56}$ in $d=5,6$ when this term applies to the state $\psi^{(6)}$, distinguishing correspondingly also between the two possible handednesses of the state $\psi^{(6)}$ in $d=(1+3)$. It is shown in the next subsection that a normalizable massless state ($m=0$ in Eq.~(\ref{weylErho0})) must fulfil the condition $0 \le (1-2F\,2 S^{56}) < 1$ when $2S^{56}$ is evaluated on $\psi^{(6)}$. The sign of $F$ chooses the handedness of the massless normalizable spinor state.\\ {\it Comments 5.:} {\bf i.)} Having the rotational symmetry around the axis perpendicular to the plane of the fifth and sixth dimension, it is meaningful to require that $\psi^{(6)}$ is an eigenfunction of the total angular momentum operator $M^{56}= x^5 p^6-x^6 p^5 + S^{56}$ in the fifth and sixth dimension, $M^{56}= -i \frac{\partial}{\partial \phi} + S^{56}\,;$ $M^{56}\,\psi^{(6)}= (n+\frac{1}{2})\,\psi^{(6)}$~(Eqs.~(\ref{mabx},\ref{mabpsi}, \ref{weylErho})). {\bf ii.)} The only massless state which fulfils the normalization condition (see Eq.~(\ref{masslesseqsolf})) for a positive $F$ is the state with the property $2 S^{56}\,\psi^{(6)} = \psi^{(6)}$.
Its charge (the spin on the disc) is for $0 <2F \le 1$ equal to $\frac{1}{2}$, as is shown in section~\ref{properties1+3}. {\bf iii.)} All the other states are massive. {\bf iv.)} The current in the radial direction is in all these cases equal to zero for any $F$. Detailed derivations of the equations of motion and the solutions are presented in subsection~\ref{equations} of this section. Let us summarize this section. We have a Weyl spinor in $d=(1+5)$-dimensional space. This space breaks into $M^{1+3} \times$ an infinite disc with the zweibein which formally looks almost -- up to a hole at the southern pole -- like a $S^2$ sphere, while a chosen spin connection allows on such an infinite disc only one normalizable massless state. The {\it Hamiltonian is Hermitean}, the mass spectrum of {\it normalizable} states is correspondingly discrete, and the probability for a fermion to escape out of the disc is zero~\footnote{It is expected that the zweibein curving the infinite disc into an (almost) $S^2$ and the spin connection, which breaks the parity symmetry and takes a part in determining the equations of motion, appear dynamically, causing the {\em "phase transition"}. Dynamical fields could accordingly, by causing the phase transition, restore the symmetry of $M^{1+5}$.}. Since the whole interval of the strength of the spin connection field ($0 < 2F \le 1$) is allowed, the spin connection field is not fine tuned. For a particular choice of the constant of the spin connection field, that is for $2F=1$, the normalizable solutions are expressible with the Legendre polynomials and the massive states manifest the spectrum $(m \rho_0)^2 =l(l+1)$, with $l=0,1,2,\cdots$ and $-l \le n \le l$. $n+1/2$ is the charge of the spectrum. A free choice of a zweibein and a spin connection field in the action of Eq.~(\ref{action}) is possible only in $d=2$ dimensional spaces (the presence of fermions might make this possible also for $d>2$).
{\it Let us point out that the "two dimensionality" can be simulated in any dimension larger than two, if vielbeins and spin connections are completely flat in all but two dimensions} (this point is discussed also in the ref.~\cite{wet}). \subsection{Solutions of the equations of motion for spinors} \label{equations} We look for the solutions of the equations of motion (\ref{weylE}) for a spinor in $(1+5)$-dimensional space, which breaks into $M^{(1+3)} \times$ an infinite disc curved into a noncompact "almost" $S^2$ sphere as a superposition of all four ($2^{6/2 -1}$) states of a single Weyl representation. (We kindly ask the reader to see the technical details about how to write a Weyl representation in terms of the Clifford algebra objects after making a choice of the Cartan subalgebra, for which we take: $S^{03}, S^{12}, S^{56}$ in the refs.~\cite{holgernorma20023}.) In our technique one spinor representation---the four states, which all are the eigenstates of the chosen Cartan subalgebra with the eigenvalues $\frac{k}{2}$, correspondingly---are the following four products of projectors $\stackrel{ab}{[k]}$ and nilpotents $\stackrel{ab}{(k)}$: \begin{eqnarray} \varphi^{1}_{1} &=& \stackrel{56}{(+)} \stackrel{03}{(+i)} \stackrel{12}{(+)}\psi_0,\nonumber\\ \varphi^{1}_{2} &=&\stackrel{56}{(+)} \stackrel{03}{[-i]} \stackrel{12}{[-]}\psi_0,\nonumber\\ \varphi^{2}_{1} &=&\stackrel{56}{[-]} \stackrel{03}{[-i]} \stackrel{12}{(+)}\psi_0,\nonumber\\ \varphi^{2}_{2} &=&\stackrel{56}{[-]} \stackrel{03}{(+i)} \stackrel{12}{[-]}\psi_0, \label{weylrep} \end{eqnarray} where $\psi_0$ is a vacuum state for the spinor state. 
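The Clifford algebra underlying these nilpotents and projectors can be checked numerically in an explicit matrix representation. The sketch below uses one concrete $8\times 8$ tensor-product representation (our own choice, made only for the check; the paper's technique works with the abstract objects $\stackrel{ab}{(k)}$, $\stackrel{ab}{[k]}$) to verify $\{\gamma^a,\gamma^b\}=2\eta^{ab}$ and the consistency of the handedness operators used below:

```python
# Explicit check (assumed representation, for illustration only) of the
# d=(1+5) Clifford algebra and of the handedness operators
#   Gamma^{(1+5)} = g0 g1 g2 g3 g5 g6,  Gamma^{(1+3)} = -i g0 g1 g2 g3,
#   Gamma^{(2)}   = i g5 g6,  with  Gamma^{(1+5)} = Gamma^{(1+3)} Gamma^{(2)}.
import numpy as np

s0 = np.eye(2, dtype=complex)
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)

def kron3(a, b, c):
    return np.kron(np.kron(a, b), c)

# Six mutually anticommuting Hermitian matrices (Jordan-Wigner construction).
G = [kron3(s1, s0, s0), kron3(s2, s0, s0), kron3(s3, s1, s0),
     kron3(s3, s2, s0), kron3(s3, s3, s1), kron3(s3, s3, s2)]
# Signature (+,-,-,-,-,-): gamma^0 Hermitian, the rest anti-Hermitian.
gamma = [G[0]] + [1j * g for g in G[1:]]
eta = np.diag([1, -1, -1, -1, -1, -1])

ok_clifford = all(
    np.allclose(gamma[a] @ gamma[b] + gamma[b] @ gamma[a],
                2 * eta[a, b] * np.eye(8))
    for a in range(6) for b in range(6))

g0, g1, g2, g3, g5, g6 = gamma
Gamma15 = g0 @ g1 @ g2 @ g3 @ g5 @ g6
Gamma13 = -1j * g0 @ g1 @ g2 @ g3
Gamma2 = 1j * g5 @ g6
ok_factor = np.allclose(Gamma15, Gamma13 @ Gamma2)
ok_square = all(np.allclose(M @ M, np.eye(8)) for M in (Gamma15, Gamma13, Gamma2))
print(ok_clifford, ok_factor, ok_square)
```

All three handedness operators square to the identity (eigenvalues $\pm 1$) and $\Gamma^{(1+5)} = \Gamma^{(1+3)}\,\Gamma^{(2)}$, so the $(1+3)$ handedness and the disc handedness $2S^{56}$ of each of the four states above are correlated, as used in the text.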
If we write the operators of handedness in $d=(1+5)$ as $\Gamma^{(1+5)} = \gamma^0 \gamma^1 \gamma^2 \gamma^3 \gamma^5 \gamma^6$ ($= 2^3 i S^{03} S^{12} S^{56}$), in $d=(1+3)$ as $\Gamma^{(1+3)}= -i\gamma^0\gamma^1\gamma^2\gamma^3 $ ($= 2^2 i S^{03} S^{12}$) and in the two dimensional space as $\Gamma^{(2)} = i\gamma^5 \gamma^6$ ($= 2 S^{56}$), we find that all four states are left handed with respect to $\Gamma^{(1+5)}$, with the eigenvalue $-1$, the first two states are right handed and the second two states are left handed with respect to $\Gamma^{(2)}$, with the eigenvalues $1$ and $-1$, respectively, while the first two are left handed and the second two right handed with respect to $\Gamma^{(1+3)}$ with the eigenvalues $-1$ and $1$, respectively. Taking into account Eq.~(\ref{weylrep}) we may write the most general wave function $\psi^{(6)}$ obeying Eq.~(\ref{weylErho0}) in $d=(1+5)$ as \begin{eqnarray} \psi^{(6)} = {\cal A} \,{\stackrel{56}{(+)}}\,\psi^{(4)}_{(+)} + {\cal B} \,{\stackrel{56}{[-]}}\, \psi^{(4)}_{(-)}, \label{psi6} \end{eqnarray} where ${\cal A}$ and ${\cal B}$ depend on $x^{\sigma}$, while $\psi^{(4)}_{(+)}$ and $\psi^{(4)}_{(-)}$ determine the spin and the coordinate dependent parts of the wave function $\psi^{(6)}$ in $d=(1+3)$ \begin{eqnarray} \psi^{(4)}_{(+)} &=& \alpha_+ \; {\stackrel{03}{(+i)}}\, {\stackrel{12}{(+)}} + \beta_+ \; {\stackrel{03}{[-i]}}\, {\stackrel{12}{[-]}}, \nonumber\\ \psi^{(4)}_{(-)}&=& \alpha_- \; {\stackrel{03}{[-i]}}\, {\stackrel{12}{(+)}} + \beta_- \; {\stackrel{03}{(+i)}}\, {\stackrel{12}{[-]}}. 
\label{psi4} \end{eqnarray} Using $\psi^{(6)}$ in Eq.~(\ref{weylErho0}) and separating dynamics in $(1+3)$ and on the infinite disc the following relations follow, from which we recognize the mass term $m$: $\frac{\alpha_+}{\alpha_-} (p^0-p^3) - \frac{\beta_+}{\alpha_-} (p^1-ip^2)= m,$ $\frac{\beta_+}{\beta_-} (p^0+p^3) - \frac{\alpha_+}{\beta_-} (p^1+ip^2)= m,$ $\frac{\alpha_-}{\alpha_+} (p^0+p^3) + \frac{\beta_-}{\alpha_+} (p^1-ip^2)= m,$ $\frac{\beta_-}{\beta_+} (p^0-p^3) + \frac{\alpha_-}{\beta_+} (p^1-ip^2)= m.$ One notices that for massless solutions ($m=0$) $\psi^{(4)}_{(+)}$ and $\psi^{(4)}_{(-)}$ decouple. Taking the above derivation into account Eq.~(\ref{weylErho0}) transforms into \begin{eqnarray} \label{weylEp} f \, \{(p_{05} + i 2S^{56}\,p_{06}) + \frac{1}{2E}\, \{p_{5} + i 2S^{56}\,p_{6}, Ef\}_{-} \}\, \psi^{(6)} + \gamma^0 \gamma^5 \, m \, \psi^{(6)}=0. \end{eqnarray} For $x^{(5)}$ and $x^{(6)}$ from Eq.~(\ref{f}) and for the zweibein from Eqs.(\ref{fzwei},\ref{f}) and the spin connection from Eq.(\ref{omegas}) one obtains \begin{eqnarray} \label{weylErho} if \, \{ e^{i \phi 2S^{56}}\, [\frac{\partial}{\partial \rho} + \frac{i\, 2 S^{56}}{\rho} \, (\frac{\partial}{\partial \phi}) - \frac{1}{2 \,f} \, \frac{\partial f}{\partial \rho }\, (1- 2F \, 2S^{56})\,] \, \} \, \psi^{(6)} + \gamma^0 \gamma^5 \, m \, \psi^{(6)}=0. \end{eqnarray} Having the rotational symmetry around the axis perpendicular to the plane of the fifth and the sixth dimension we require that $\psi^{(6)}$ is the eigen function of the total angular momentum operator $M^{56}= x^5 p^6-x^6 p^5 + S^{56}= -i \frac{\partial}{\partial \phi} + S^{56}$ \begin{eqnarray} M^{56}\psi^{(6)}= (n+\frac{1}{2})\,\psi^{(6)}. \label{mabx} \end{eqnarray} Accordingly we write \begin{eqnarray} \psi^{(6)}= {\cal N}\, ({\cal A}_{n}\, \stackrel{56}{(+)}\, \psi^{(4)}_{(+)} + {\cal B}_{n+1}\, e^{i \phi}\, \stackrel{56}{[-]}\, \psi^{(4)}_{(-)})\, e^{in \phi}. 
\label{mabpsi} \end{eqnarray} After taking into account that $S^{56} \stackrel{56}{(+)}= \frac{1}{2} \stackrel{56}{(+)}$, while $S^{56} \stackrel{56}{[-]}= -\frac{1}{2} \stackrel{56}{[-]}$ we end up with the equations of motion for ${\cal A}_n$ and ${\cal B}_{n+1}$ as follows \begin{eqnarray} &&-if \,\{ \,(\frac{\partial}{\partial \rho} + \frac{n+1}{\rho}) - \frac{1}{2\, f} \, \frac{\partial f}{\partial \rho}\, (1+ 2F)\} {\cal B}_{n+1} + m {\cal A}_n = 0, \nonumber\\ &&-if \,\{ \,(\frac{\partial}{\partial \rho} - \quad \frac{n}{\rho}) - \frac{1}{2\, f} \, \frac{\partial f}{\partial \rho}\, (1- 2F)\} {\cal A}_{n} + m {\cal B}_{n+1} = 0. \label{equationm56gen1} \end{eqnarray} Let us treat first the massless case ($m=0$). Taking into account that $F\frac{f-1}{f \rho} = \frac{\partial}{\partial \rho} \ln f^{\frac{F}{2}}$ and that $E=f^{-2}$, it follows \begin{eqnarray} \frac{\partial \, \ln ({\cal B}_n \, \rho^n \,f^{-F -1/2})}{\partial \rho}&=&0,\nonumber\\ \frac{\partial \, \ln ({\cal A}_n \, \rho^{-n} \,f^{F -1/2})}{\partial \rho}&=&0. \label{masslesseq} \end{eqnarray} We get correspondingly the solutions \begin{eqnarray} {\cal B}_n \, e^{in \phi}&=& {\cal B}_0 \, e^{in \phi}\, \rho^{-n} f^{F+1/2}, \nonumber\\ {\cal A}_n \, e^{in \phi}&=& {\cal A}_0 \, e^{in \phi}\, \,\rho^{n} f^{-F+1/2}. \label{masslesseqsol} \end{eqnarray} Requiring that only normalizable (square integrable) solutions are acceptable \begin{eqnarray} 2\pi \, \int^{\infty}_{0} \,E\, \rho d\rho {\cal A}^{\star}_{n} {\cal A}_{n} && < \infty, \nonumber\\ 2\pi \, \int^{\infty}_{0} \,E\, \rho d\rho {\cal B}^{\star}_{n} {\cal B}_{n} && < \infty, \label{masslesseqsolf} \end{eqnarray} it follows \begin{eqnarray} &&{\rm for}\; {\cal A}_{n}: -1 < n < 2F, \nonumber\\ &&{\rm for}\; {\cal B}_{n}: 2F < n < 1, \quad n \;\; {\rm is \;\; an \;\;integer}. \label{masslesseqsolf1} \end{eqnarray} One immediately sees that for $F=0$ there is no solution for the zweibein from Eq.~(\ref{f}). 
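The normalizability windows $-1 < n < 2F$ (for ${\cal A}_n$) and $2F < n < 1$ (for ${\cal B}_n$) can be spot-checked numerically; a sketch of ours, with arbitrary parameters, for the ${\cal A}_n$ branch:

```python
# Spot check of the window -1 < n < 2F for the massless branch
#   A_n = rho^n f^{-F+1/2}:  the truncated norm integral
#   2 pi Int_0^R E rho |A_n|^2 d rho,  E = f^{-2},
# saturates with growing R inside the window and keeps growing outside it.
import numpy as np
from scipy.integrate import quad

rho0, F = 1.0, 0.4  # arbitrary parameters with 0 < 2F <= 1

def norm_up_to(R, n):
    def integrand(rho):
        f = 1.0 + (rho / (2.0 * rho0)) ** 2
        return rho * f ** (-2.0) * (rho ** n * f ** (-F + 0.5)) ** 2
    return 2.0 * np.pi * quad(integrand, 0.0, R, limit=200)[0]

inside = [norm_up_to(R, n=0) for R in (1e2, 1e4)]   # n=0 < 2F: normalizable
outside = [norm_up_to(R, n=1) for R in (1e2, 1e4)]  # n=1 > 2F: norm diverges
print(inside, outside)
```

For $n=0$ the integral is essentially unchanged between the two cutoffs (the tail falls off as $\rho^{2n-4F-1}$), while for $n=1$ it keeps growing, in agreement with Eq.~(\ref{masslesseqsolf1}).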
Eq.~(\ref{masslesseqsolf1}) tells us that the strength $F$ of the spin connection field $\omega_{56 \sigma}$ can make a choice between the two massless solutions ${\cal A}_n$ and ${\cal B}_n$: For \begin{eqnarray} 0< 2F \le 1 \label{Fformassless} \end{eqnarray} the only massless solution is the left handed spinor with respect to $(1+3)$ \begin{eqnarray} \psi^{(6)m=0}_{\frac{1}{2}} ={\cal N}_0 \; f^{-F+1/2} \stackrel{56}{(+)}\psi^{(4)}_{(+)}. \label{Massless} \end{eqnarray} It is the eigen function of $M^{56}$ with the eigenvalue $1/2$. No right handed massless solution is allowed. For the particular choice $2F=1$ the spin connection field $-S^{56} \omega_{56\sigma}$ compensates the term $\frac{1}{2Ef} \{p_{\sigma}, Ef \}_- $ and the left handed spinor with respect to $d=(1+3)$ becomes a constant with respect to $\rho $ and $\phi$. For $2F=1$ it is easy to find also all the massive solutions of Eq.~(\ref{equationm56gen1}). Introducing $u=\frac{\rho}{2\rho_0}$ and assuming that $2F=1$ one finds from Eq.~(\ref{equationm56gen1}) \begin{eqnarray} &&{\cal B}_{n+1} = \frac{i}{2 \rho_0 m} \, (1+u^2)\, (\frac{d}{du} - \frac{n}{u} )\,{\cal A}^{m}_{n},\nonumber\\ &&\{(\frac{1+u^2}{2})^2\, \left( \frac{d^2}{du^2} + \frac{1}{u}\, \frac{d}{du} - \frac{n^2}{u^2}\right)\, + (\rho_0 \, m)^2 \} {\cal A}^{m}_{n} = 0\,. \label{equationm56u} \end{eqnarray} If one expresses $(\frac{\rho}{2\rho_0})^2= \frac{1-x}{1+x}$, with $-1 \le \,x\,\le 1 $ for $0 \le \rho \le \infty$, it follows that $f=\frac{2}{1+x}$, $\frac{dx}{du}= \frac{-4u}{(1+u^2)^2}$ and $\frac{4\,u^2}{(1+u^2)^2}= (1-x^2)$. 
Then Eq.~(\ref{equationm56u}) transforms into the equation of motion for the associated Legendre polynomials, ${\cal A}^{(\rho_0 m)^2=l(l+1)}_n = P^{l}_n$, if we assume that $(\rho_0\, m)^2 =l(l+1)$ \begin{eqnarray} &&\left((1-x^2)\,\frac{d^2}{dx^2} -2x\, \frac{d}{dx} - \frac{n^2}{1-x^2} + l(l+1) \,\right) \, {\cal A}^{(\rho_0 m)^2=l(l+1)}_n = 0\,, \nonumber\\ && l(l+1)= (\rho_0 \, m)^2\,, \nonumber\\ && {\cal B}^{(\rho_0 m)^2=l(l+1)}_{n+1} = \frac{-i}{\rho_0 m} \,\sqrt{1-x^2}\, \left(\frac{d}{dx} + \frac{n}{1-x^2} \right)\, {\cal A}^{(\rho_0 m)^2=l(l+1)}_n \,. \label{equationm56ux} \end{eqnarray} From the above equations we see that for $m=0$, that is for the massless case, only the solution with $n=0$ exists, namely ${\cal A}^{(\rho_0 m)^2=0}_0$, which is a constant (in agreement with our discussions above). It is not difficult to prove that there are no normalizable solutions of Eq.~(\ref{equationm56ux}) for an arbitrary $m \,\rho_0 $ which is not of the kind $(m \rho_0)^2=l(l+1)$, with $l$ an integer, and also none for a noninteger $n$. The solutions of Eq.~(\ref{equationm56ux}) are namely not square integrable on the interval $-1 \le x \le 1$ for a noninteger degree or a noninteger order: the solutions $P^{\nu}_{n}$ with a noninteger degree $\nu$ are unbounded, going to $\infty$ as $x \to -1+0$, while they are bounded as $x \to 1-0$; and the solutions $P^{l}_{\mu}$ go to $\infty$ as $x \to 1-0$, unless the order $\mu$ is an integer. (See ref.~\cite{SF}, sect. 5.18, pages 255-258.)
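That the associated Legendre polynomials indeed solve the first of Eqs.~(\ref{equationm56ux}) exactly when $(\rho_0 m)^2 = l(l+1)$ can be confirmed symbolically; a minimal sketch (the particular $l$, $n$ are arbitrary choices for the check):

```python
# Symbolic spot check: P_l^n(x) solves
#   (1-x^2) A'' - 2x A' - n^2/(1-x^2) A + l(l+1) A = 0
# for integer l, n, here with l=2, n=1.
import sympy as sp

x = sp.symbols('x')
l, n = 2, 1
A = sp.assoc_legendre(l, n, x)  # associated Legendre polynomial P_l^n(x)
lhs = ((1 - x**2) * sp.diff(A, x, 2) - 2 * x * sp.diff(A, x)
       - n**2 / (1 - x**2) * A + l * (l + 1) * A)
residual = sp.simplify(lhs)
print(residual)
```

The residual simplifies to zero; for a generic $(\rho_0 m)^2 \ne l(l+1)$ no polynomial (hence no normalizable) solution exists, which is the origin of the discrete spectrum.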
Accordingly the massive solutions with the masses equal to $m = \sqrt{l (l+1)}/\rho_0$ (we use the units in which $c=1=\hbar$) and the eigenvalues of $M^{56}$ (Eq.~(\ref{mabx}))---which is the charge, as we see in section~\ref{properties1+3}---equal to $(\frac{1}{2}+n)$, with $-l \le n \le l$, $l=1,2,..$, are \begin{eqnarray} &&\psi^{(6) (\rho_0 m)^{2}=l(l+1)}_{n+1/2} = \nonumber\\ &&{\cal N}^{l}_{n+1/2} \, \left( \stackrel{56}{(+)} \psi^{(4)}_{(+)} + \frac{i}{2 \sqrt{l(l+1)}} \, \stackrel{56}{[-]} \, \psi^{(4)}_{(-)} \, e^{i \phi} \, (1+u^2)\, (\frac{d}{du} \, -\frac{n}{u})\, \right) {}\cdot e^{i n \phi} \, {\cal A}^{(\rho_0 m)^{2}=l(l+1)}_n \,,\nonumber\\ \label{knsol} \end{eqnarray} with ${\cal A}^{(\rho_0 m)^2=l(l+1)}_n (x)$, which are the associated Legendre polynomials $P^{l}_{n}(x)$, where $x= \frac{1-u^2}{1+u^2}$ and $u= \frac{\rho}{2 \rho_0}$~\footnote{ Rewriting the mass operator $\hat{m}= \gamma^0 \gamma^s f^{\sigma}{}_{s} (p_{\sigma} - S^{56} \omega_{56 \sigma} + \frac{1}{2Ef} \{p_{\sigma}, Ef\}_-)$ as a function of $\vartheta $ and $\phi$: $\rho_0 \hat{m}= i \gamma^0\, \{\stackrel{56}{(+)} e^{-i\phi} (\frac{\partial}{\partial \vartheta} \, -\frac{i}{\sin \vartheta} \frac{\partial}{\partial \phi } \, - \frac{1-\cos \vartheta}{\sin \vartheta}) + \stackrel{56}{(-)} e^{i\phi} (\frac{\partial}{\partial \vartheta} \, + \frac{i}{\sin \vartheta} \frac{\partial}{\partial \phi } ) \},$ one can easily show that when applying $\rho_0 \hat{m}$ and $M^{56}$ on $\psi^{(6)\hat{m}^2=l(l+1)}_{n+1/2}$, for $l=1,2,\cdots$, one obtains from Eq.~(\ref{knsol}) $\rho_0 \hat{m}\, \psi^{(6)\hat{m}^2=l(l+1)}_{n+1/2} = \sqrt{l(l+1)}\, \psi^{(6)\hat{m}^2=l(l+1)}_{n+1/2}, \;\; M^{56}\, \psi^{(6)\hat{m}^2= l(l+1)}_{n+1/2} = (n+1/2)\, \psi^{(6)\hat{m}^2=l(l+1)}_{n+1/2}$, $l=1,2,\cdots$.
A wave packet, which is an eigenfunction of $M^{56}$ with the eigenvalue $1/2$, for example, can be written as $\psi^{(6)}_{1/2} = \,\sum_{k=0, \infty} C_{1/2}^k \;\, {\cal N}_{1/2} \{ \stackrel{56}{(+)} \psi^{(4)}_{(+)} + (1 - \delta^{k}_0) \frac{i}{\sqrt{k(k+1)}}\, \stackrel{56}{[-]} \psi^{(4)}_{(-)}\,e^{i\phi} \frac{\partial}{\partial \vartheta} \} Y^{k}_{0}. $ The expectation value of the mass operator $ \hat{m}$ on such a wave packet is $\sum_{k=0, \infty} C_{1/2}^{k*} C_{1/2}^{k} \sqrt{k(k+1)}/\rho_0$. }. It is not difficult to see that the solutions of Eq.~(\ref{equationm56gen1}) for $2F=1$, $\psi^{(6)m=0}_{\frac{1}{2}} $ and $\psi^{(6) (\rho_0 m)^2=l(l+1)}_{n+1/2} $, are normalizable on the infinite disc curved into almost $S^2$ ($2 \,\pi\, \int\,\rho d \rho E \,\psi^{(6)(\rho_0 m)^2=l(l+1) \dagger}_{n+1/2} \, \psi^{(6)(\rho_0 m)^2=l(l+1)}_{n+1/2} < \infty$, with $E= f^{-2}$). One can show as well that the eigenstates with the discrete eigenvalues $(\rho_0 m)^2=l(l+1)$ are orthogonal ($\int\,d^{2}x E \, (\psi^{(6)(\rho_0m)^2=l'(l'+1) \dagger}_{n'+1/2} \, \psi^{(6)(\rho_0 m)^2=l(l+1)}_{n+1/2})= \delta^{l l'} \delta^{n n'} \propto \int d^2 x \, e^{-i(n'-n)\phi}\,\{{\cal B}^{l' +}_{n'+1} \, {\cal B}^{l}_{n+1} + {\cal A}^{l' +}_{n'} \, {\cal A}^{l}_{n} \} $) for all pairs of $(l,n), (l',n')$; the spectrum is obviously discrete, as it should be for a Hermitean Hamiltonian with the boundary conditions determined by the normalizability of the states. To find solutions for all $F$ in the interval $0 < F \le \frac{1}{2}$, besides the massless one $\psi^{(6)m=0}_{\frac{1}{2}} $, is a tougher task. Yet one can expect that on the space of normalizable functions the Hamiltonian will stay Hermitean, and since an infinitesimal change of the constant $F$ from $F=\frac{1}{2}$ to a slightly smaller $F$ cannot spoil the discreteness of the Hamiltonian eigenvalues, the spectrum should stay discrete. One can see that the current in the radial direction is zero for any $F$.
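The normalizability of the massless state can also be seen by direct integration (our check, with $\rho_0=1$ chosen arbitrarily): with $\psi^{(6)m=0}_{\frac{1}{2}}\propto f^{-F+\frac{1}{2}}$ and $E=f^{-2}$, the norm integrand is $\rho\,f^{-(2F+1)}$, and its integral over the infinite disc equals $(2\rho_0)^2/(4F)$, finite for every $F>0$.

```python
# Numerical check (ours) that the norm of the massless state is finite:
#   int_0^inf rho * f^(-(2F+1)) d rho = (2 rho0)^2 / (4 F),
# with f = 1 + (rho / 2 rho0)^2 and rho0 = 1 as an arbitrary choice.
from scipy.integrate import quad

def disc_norm(F, rho0=1.0):
    integrand = lambda rho: rho * (1.0 + (rho / (2.0 * rho0))**2) ** (-(2.0 * F + 1.0))
    val, _ = quad(integrand, 0.0, float('inf'))
    return val

for F in (0.5, 0.25):
    expected = (2.0 * 1.0)**2 / (4.0 * F)
    assert abs(disc_norm(F) - expected) < 1e-5 * expected
```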
We studied these solutions and found the discrete spectrum; a paper is in preparation. Let us recognize that $e^{i n \phi} \, P^{l}_{n}$ are spherical harmonics $Y^{l}_{n}$. Expressing $\rho$ in terms of $\vartheta$, $\frac{\rho}{ 2 \rho_0}= \, \sqrt{\frac{1- \cos \vartheta}{1+ \cos \vartheta}}$, we rewrite the equations of motion~(Eq.~\ref{equationm56gen1}) as follows \begin{eqnarray} &&(\frac{\partial}{\partial \vartheta} + \frac{n+1 -(F+1/2)(1-\cos \vartheta)}{\sin \vartheta} )\, {\cal B}_{n+1} + i \rho_0 \,m {\cal A}_n = 0, \nonumber\\ &&(\frac{\partial}{\partial \vartheta} + \; \frac{-n +(F-1/2)\,(1-\cos \vartheta)}{\sin \vartheta} ) \,\; {\cal A}_{n} + i \rho_0 \,m {\cal B}_{n+1} = 0\,. \label{equationm56theta} \end{eqnarray} \section{Singularities on an almost $S^2$ sphere} \label{singularitiesgaugetransformations} In this section we comment on the singularities "felt" by a spinor if a noncompact disc with the zweibein from Eq.~(\ref{fzwei}) and the spin connections from Eq.~(\ref{omegas}) is understood as the $S^2$ sphere with a hole at the southern pole. Intuitively it is not difficult to see that we are in trouble if we want the chiral fermion field of Eq.~(\ref{Massless}), that is $\psi^{(6)m=0}_{\frac{1}{2}} ={\cal N}_0 \; f^{-F+1/2} \stackrel{56}{(+)}\psi^{(4)}_{(+)}$, on a two dimensional space to be an eigenstate of some rotational operator $M^{56}$, if the two dimensional space has to have the topology of $S^2$, while the spin of the fermion contributes to $M^{56}$ in the "usual way" \begin{eqnarray} M^{56} &=& S^{56} + K^{56}, \label{m56k} \end{eqnarray} where $K^{56} $ is the Killing vector, like in Eq.~(\ref{mabx}) ($K^{56}= x^5 p^6-x^6 p^5 $). Near the starting point (the origin, the northern pole of $S^2$) on the topologically $S^2$ sphere the Killing operator functions as the orbital angular momentum ($L^{56}=x^{(5)} p^{(6)}- x^{(6)} p^{(5)}$) and has to be added to the spin part $S^{56}$, just as in the flat two-dimensional space.
Going away from the starting point, the action of $M^{56}$ may be more complicated than the simple sum in Eq.~(\ref{m56k}). Because of the $S^{2}$ topology there has to be yet another point at which the eigenvalue of the orbital Killing generator goes to zero: there must be a second point, the south pole, which is left invariant under the orbital Killing transportation, just as the starting point, the north pole, is. It is also easy to see that on the two-dimensional $S^2$ the orientation of the Killing transportation in the infinitesimal neighbourhood of this second stable point, the south pole, is in the {\it opposite} direction with respect to the orientation of the Killing transportation around the north pole. If we want to have on $S^2$ only a spinor of one handedness, say the spinor $\psi^{(6)m=0}_{\frac{1}{2}}$ of Eq.~(\ref{Massless}), then we should count the orbital symmetry generator at the south pole with the opposite sign relative to $S^{56}$ compared to what we do at the starting point (see Eqs.~(\ref{m56spsi6sp},\ref{spinorsouth})). In order to be able to have on the two-dimensional $S^2$ surface a spinor of only one handedness, we have to let the phase rotation generated by the $S^{56 }$ part of $M^{56}$, relative to the Killing part, be of the opposite sign at the south pole with respect to the north pole. Namely, when we consider smaller and smaller circles around the south pole, the phase of the single handedness spinor state must be rotated under $M^{56}$ so that, when extrapolating to the south pole, the phase rotation corresponds to a spin which is inverted relative to the orientation of the two-dimensional space of the $S^2$ surface. Therefore, embedding the $S^2$ sphere into a three-dimensional Euclidean space, it is not surprising that if we want a spinor of one handedness and succeed in implementing it at the north pole in an outward normal direction, we can hardly implement it at the south pole.
We might hope for the compensation by the orbital part of $M^{56}$, except at the poles. This means that we could have a state of a handed spinor if the wave function goes to zero at at least one of the poles, say the southern pole (see Eqs.~(\ref{Massless},\ref{masslesseqsolf1})). \subsection{Formal introduction of a singular point} \label{forsingpo} We might formally introduce at the south pole a special singularity, so that instead of requiring the wave function to behave at the south pole in the usual differentiable way, we require it to be differentiable only after being multiplied (corrected) by a phase factor: instead of $\psi$ we require that $e^{i \phi^{\mali{SP}}} \, \psi$ is our differentiable wave function in the neighbourhood of the singular point at the south pole, the phase factor $e^{i \phi^{\mali{SP}}}$ itself behaving singularly. By making this modified requirement of differentiability we effectively change the orbital angular momentum of the wave function by one unit of $\hbar$ before we require the wave function to be smooth or differentiable. Thereby we have required that the actual wave function should have a rather unphysical extra bit of negative angular momentum around the south pole. We must admit that this looks rather strange from the physical point of view, unless we recognize that this smoothness condition is to simulate the non-compactness of the $S^2$ space, which only after adding a singular point becomes an $S^2$ at all. When changing the differentiability of the wave function in the neighbourhood of the singular point with the requirement that the wave function must be multiplied by a phase, we recognize that such a phase multiplication of the wave function appears when transforming the coordinate system from the northern to the southern pole, as we can see in equation (\ref{s}) below.
This phase transformation of the wave function requires the appearance of the spin connection field, as can be seen in Eq.~(\ref{omegasp}): the gauge transformation of any spin connection field (when transforming the coordinate system) appears even if the spin connection field is zero, and manifests itself in the second term of that equation. \subsection{Gauge transformations from the northern to the southern pole} \label{gaugetrans} To demonstrate further what the hole does in the noncompact space of an almost $S^2$ sphere, let us transform the coordinate system from the northern to the southern pole of the sphere $S^2$, as if $S^2$ were a sphere made out of an infinite plane with the zweibein of a sphere, and look at how the equations of motion and the wave functions transform correspondingly and how they demonstrate the noncompactness of our space. From Fig.~\ref{northsouthpole} we read \begin{figure} \centering \includegraphics{northsouthpole} \caption{Transforming coordinates from the north to the south pole on $S^2$. \label{northsouthpole}} \end{figure} \begin{eqnarray} \label{xsp} x^{\mali{NP}(5)}&=& (\frac{2\rho_0}{\rho^{\mali{SP}}})^2\,x^{\mali{SP}(5)},\quad x^{\mali{NP}(6)} = -(\frac{2\rho_0}{\rho^{\mali{SP}}})^2\,x^{\mali{SP}(6)}, \end{eqnarray} and \begin{eqnarray} \rho^{\mali{SP}}\rho^{\mali{NP}}&=& (2\rho_0)^2,\quad E^{\mali{NP}} \, d^2 x^{\mali{NP}} = E^{\mali{SP}} \, d^2 x^{\mali{SP}}, \end{eqnarray} where $x^{\mali{NP}\sigma }, \sigma=(5),(6)$ stand for the $x^{\sigma}, \sigma=(5),(6)$ used up to now, while $x^{\mali{SP} \sigma }, \sigma=(5),(6)$ stand for the coordinates when we put our coordinate system at the southern pole, and $\rho_0$ is the radius of $S^2$ as before. We have $E^{\mali{SP}}=(1+ (\frac{\rho^{\mali{SP}}}{2 \rho_0})^2)^{-2}$ and $E^{\mali{NP}}=(1+ (\frac{\rho^{\mali{NP}}}{2 \rho_0})^2)^{-2}= (\frac{\rho^{\mali{SP}}}{2 \rho_0})^4 \,E^{\mali{SP}}$.
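These relations are easy to verify numerically (our sketch, with $\rho_0=1$ an arbitrary choice): the inversion gives $\rho^{\mali{SP}}\rho^{\mali{NP}}=(2\rho_0)^2$ by construction, and the 2d inversion-plus-reflection map has Jacobian $|J|=(2\rho_0/\rho^{\mali{SP}})^4$, so that the measure $E\,d^2x$ agrees at both poles.

```python
# Numerical check (ours) of the measure invariance E^NP d^2x^NP = E^SP d^2x^SP
# under the inversion rho^NP = (2 rho0)^2 / rho^SP, with E = (1 + (rho/2 rho0)^2)^(-2).
def E(rho, rho0=1.0):
    return (1.0 + (rho / (2.0 * rho0))**2) ** (-2)

rho0 = 1.0
for rho_sp in (0.3, 1.0, 2.7):
    rho_np = (2.0 * rho0)**2 / rho_sp      # rho^SP * rho^NP = (2 rho0)^2
    jac = (2.0 * rho0 / rho_sp)**4         # |d^2x^NP / d^2x^SP| for the inversion
    assert abs(E(rho_np, rho0) * jac - E(rho_sp, rho0)) < 1e-12
```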
We also can write $x^{\mali{NP}\sigma}= (\frac{2\rho_0}{\rho^{\mali{SP}}})^2\, (-)^{1+\sigma} \,x^{\mali{SP} \sigma}$. We ought to transform the Lagrange density (Eq.(\ref{weylE})) expressed with respect to the coordinates at the northern pole \begin{eqnarray} {\cal L}^{\mali{NP}}_{W}&=&\psi^{\mali{NP} \dagger}E^{\mali{NP}} \gamma^0 \gamma^s \, ( f^{\mali{NP} \sigma}{}_s \, p^{\mali{NP}}_{0 \sigma} + \frac{1}{2 E^{\mali{NP}} } \, \{p^{\mali{NP}}_{\sigma}, E^{\mali{NP}} \, f^{\mali{NP} \sigma}{}_{s}\}_- )\,\psi^{\mali{NP}}, \nonumber\\ p^{\mali{NP}}_{0 \sigma} &=& p^{\mali{NP}}_{\sigma}- \frac{1}{2} S^{st}\, \omega^{\mali{NP}}_{st \sigma},\nonumber\\ f^{\mali{NP} \sigma}{}_{s}\, \omega^{\mali{NP}}_{s't' \sigma} &=& \frac{iF \delta^{\sigma}_{s}\, \varepsilon_{s' t'} x^{\mali{NP}}_{\sigma}}{\rho^{2}_{0}} \label{weylEnp} \end{eqnarray} to the corresponding Lagrange density ${\cal L}^{\mali{SP}}_{W}$ expressed with respect to the coordinates at the southern pole by assuming \begin{eqnarray} \psi^{\mali{NP}} &=& S \, \psi^{\mali{SP}}. \label{psinpsp} \end{eqnarray} We use the antisymmetric tensor $\varepsilon^{(5)(6)}=1= -\varepsilon^{(5)}{}_{(6)}$. We recognize that \begin{eqnarray} \label{unpsp} f^{\mali{NP} \sigma}{}_{s}&=& f^{\mali{SP} \sigma'}{}_{t} \;\frac{\partial x^{\mali{NP} \sigma}}{\partial x^{\mali{SP} \sigma'}} \; O^{-1 t}{}_{s},\nonumber\\ f^{\mali{SP} \sigma}{}_{s} &=& f^{\mali{SP}} \, \delta^{\sigma}_{s}, \quad f^{\mali{SP}}= (1+(\frac{\rho^{\mali{SP}}}{2 \rho_0})^2). \end{eqnarray} The matrix $O$ takes care that the zweibein expressed with respect to the coordinate system at the southern pole is diagonal: $f^{\mali{SP} \sigma}{}_{s} = f^{\mali{SP}}\, \delta^{\sigma}_{s}$ \begin{eqnarray} O = \pmatrix{- \cos(2 \phi +\pi) & - \sin (2 \phi +\pi) \cr \;\;\; \sin(2 \phi +\pi) & - \cos(2 \phi +\pi) \cr}. 
\label{o} \end{eqnarray} Requiring that \begin{eqnarray} S^{-1} \gamma^0 \gamma^s S \, O^{-1 t}{}_{s}&=& \gamma^0 \gamma^t, \label{gammanpsp} \end{eqnarray} from where it follows that $S^{-1} S^{st} S O^{-1 s'}{}_{s} O^{-1 t'}{}_{t} = S^{s't'}$, and recognizing that $p^{\mali{NP}}_{\sigma}= \frac{\partial x^{\mali{SP} \sigma'}}{\partial x^{\mali{NP} \sigma}} \, p^{\mali{SP}}_{\sigma'}$, with $p^{\mali{SP}}_{\sigma}= i \frac{\partial}{\partial x^{\mali{SP} \sigma}}$, we find that $\gamma^s \, f^{\mali{NP} \sigma}{}_{s}\, p^{\mali{NP}}_{0 \sigma} (=\gamma^s \, f^{\mali{NP} \sigma}{}_{s}\, (p^{\mali{NP}}_{ \sigma} - \frac{1}{2} S^{st}\; \omega^{\mali{NP}}_{s t \sigma}))$ transforms into $\gamma^{s} \,f^{\mali{SP} \sigma}{}_{s} \, p^{\mali{SP}}_{0 \sigma}$ \begin{eqnarray} \gamma^{s} \,f^{\mali{SP} \sigma}{}_{s} \, p^{\mali{SP}}_{0 \sigma} &=&\gamma^{s} \,f^{\mali{SP} \sigma}{}_{s} \, \{p^{\mali{SP}}_{ \sigma} - \nonumber\\ &&\frac{1}{2} \, S^{s't'}\, i \varepsilon_{s' t'} ( \frac{ F\, x^{\mali{SP}}_{\sigma}}{f^{\mali{SP}}\,(f^{\mali{SP}}-1)\, \rho^{2}_0} + 2 i\, \frac{ \varepsilon_{\sigma}{}^{\tau}\,x^{\mali{SP}}_{\tau}}{ (2\,\rho_{0})^{2} (f^{\mali{SP}}-1)} )\}. 
\label{gammafposigma} \end{eqnarray} In the above equation we took into account that $\omega^{\mali{NP}}_{s t \sigma }$ transforms into $O^{-1 s'}{}_{s}\,O^{-1 t'}{}_{t}\,\frac{\partial x^{\mali{SP} \sigma'}}{\partial x^{\mali{NP} \sigma}} \, \ (\omega^{\mali{NP}}_{s' t' \sigma' } + O_{s' t"} (\frac{\partial \quad}{\partial x^{\mali{NP} \sigma"}} O^{-1 t"}{}_{t'}) \, \frac{\partial x^{\mali{NP} \sigma"}}{\partial x^{\mali{SP} \sigma'}})$, from where it follows that $\omega^{\mali{NP}}_{s t \sigma }$ transforms into \begin{eqnarray} && O^{-1 s'}{}_{s}\,O^{-1 t'}{}_{t}\,\frac{\partial x^{\mali{SP} \sigma'}}{\partial x^{\mali{NP} \sigma}} \ \omega^{\mali{SP}}_{s' t' \sigma' },\nonumber\\ \omega^{\mali{SP}}_{s t \sigma }&=& i \varepsilon_{s t} \,\{ \frac{ F \, x^{\mali{SP}}_{\sigma}}{ f^{\mali{SP}}\, \rho^{2}_{0}\, (f^{\mali{SP}}-1)} +2i \frac{ \varepsilon_{\sigma}{}^{\tau}\, x^{\mali{SP}}_{\tau}}{(2\rho_0)^2 \, (f^{\mali{SP}}-1)}\}. \label{omegasp} \end{eqnarray} Similarly we transform the term $\gamma^s \, \frac{1}{2 E^{\mali{NP}} } \, \{p^{\mali{NP}}_{\sigma}, E^{\mali{NP}}\, f^{\mali{NP} \sigma}{}_{s}\}_- $ into \begin{eqnarray} \gamma^s ( \frac{1}{2 E^{\mali{SP}} } \, \{p^{\mali{SP}}_{\sigma}, E^{\mali{SP}}\, f^{\mali{SP} \sigma}{}_{s}\}_- + \frac{1}{2} f^{\mali{SP} \sigma}{}_{s} \{p^{\mali{SP}}_{\sigma}, \ln(\frac{\rho^{\mali{SP}}}{2 \rho_0})^2\}_{-}\,). 
\end{eqnarray} The action $\int d^2 x^{\mali{NP}}{\cal L}^{\mali{NP}}_{W}$, with the density from Eq.(\ref{weylE}), transforms, when the coordinate system is put at the southern pole, as follows \begin{eqnarray} \int d^2 x^{\mali{NP}} {\cal L}^{\mali{NP}}_{W}&=& \int d^2 x^{\mali{SP}} \psi^{\mali{SP} \dagger} E^{\mali{SP}}S^{\dagger} \gamma^0 \gamma^s \, (f^{\mali{SP} \sigma'}{}_t \; \frac{\partial x^{\mali{NP} \sigma}}{\partial x^{\mali{SP} \sigma'}}\, O^{-1 t}{}_{s}\, \frac{\partial x^{\mali{SP} \sigma"}}{\partial x^{\mali{NP} \sigma}}\, \, p^{\mali{SP}}_{0 \sigma"} + \nonumber\\ && \,\frac{1}{2 E^{\mali{SP}} } \, \{p^{\mali{SP}}_{\sigma}, E^{\mali{SP}} \, f^{\mali{SP} \sigma}{}_{s}\}_- + \frac{1}{2}\, f^{\mali{SP} \sigma}{}_{s} \{p^{\mali{SP}}_{\sigma}, \ln( f^{\mali{SP}}-1)\}_{-}\,)\,S \,\psi^{\mali{SP}}, \label{actionweylEsp} \end{eqnarray} which leads to the Lagrange density \begin{eqnarray} {\cal L}^{\mali{SP}}_{W}&=&\psi^{\mali{SP} \dagger}E^{\mali{SP}} \gamma^0 \gamma^s \, ( f^{\mali{SP} \sigma}{}_s \, p^{\mali{SP}}_{0 \sigma} + \nonumber\\ && \frac{1}{2 E^{\mali{SP}} } \, \{p^{\mali{SP}}_{\sigma}, E^{\mali{SP}} \, f^{\mali{SP} \sigma}{}_{s}\}_- + \frac{1}{2}\, f^{\mali{SP} \sigma}{}_{s} \{p^{\mali{SP}}_{\sigma}, \ln(\frac{\rho^{\mali{SP}}}{2 \rho_0})^2\}_{-}\,)\,\psi^{\mali{SP}}. \label{weylEsp} \end{eqnarray} The requirement that $S^{-1} \gamma^0 \gamma^s \,S \, O^{-1 t}{}_{s} = \gamma^0 \gamma^t$ is fulfilled by the operator $S= e^{-i S^{56} \omega_{56}}$, and $\omega_{56}= 2 \phi +\pi$, so that in the space of the two vectors $(\stackrel{56}{(+)}\psi^{(4)}_{(+)}, \stackrel{56}{[-]}\psi^{(4)}_{(-)})$ \begin{eqnarray} S = \pmatrix{e^{i (\phi^{\mali{NP}}+ \frac{ \pi}{2})} & 0 \cr 0 & e^{-i (\phi^{\mali{NP}}+ \frac{ \pi}{2})} \cr}, \label{s} \end{eqnarray} with $\phi^{\mali{NP}} = - \phi^{\mali{SP}}$, while we have \begin{eqnarray} \gamma^0 \gamma^5 = \pmatrix{ 0 & -1 \cr -1 & 0 \cr}, \gamma^0 \gamma^6 = \pmatrix{ 0 & \; i \cr -i & 0 \cr}. 
\label{gamma56} \end{eqnarray} Let us look at how an eigenstate of $M^{ab}$ from Eq.~(\ref{mabx}), expressed with respect to the coordinates at the northern pole, \begin{eqnarray} \psi^{\mali{NP}(6)}_{n+\frac{1}{2}}= (\alpha_{n}(\rho^{\mali{NP}}) \stackrel{56}{(+)} \psi^{(4)}_{(+)} + i \beta_{n}(\rho^{\mali{NP}}) \stackrel{56}{[-]} \psi^{(4)}_{(-)} \; e^{i \phi^{\mali{NP}}})\, e^{i n \phi^{\mali{NP}}}, \label{psi6np} \end{eqnarray} with the property \begin{eqnarray} M^{\mali{NP} 56} \psi^{\mali{NP}(6)}_{n+\frac{1}{2}}= (n+\frac{1}{2})\,\psi^{\mali{NP}(6)}_{n+\frac{1}{2}}, \label{mab} \end{eqnarray} where $ M^{\mali{NP} 56} = (S^{56} -i \frac{\partial}{\partial \phi^{\mali{NP}}})\,$, looks when we put the coordinate system at the southern pole. When putting the coordinate system at the southern pole not only does $\phi^{\mali{NP}}$ transform into $-\phi^{\mali{SP}}$, but also $\gamma^{6}$ goes into $-\gamma^{6}$; accordingly \begin{eqnarray} \label{spinorsouth} &&\stackrel{56}{(+)} \quad {\rm goes \;\; into } \quad \stackrel{56}{(-)}\nonumber\\ &&\stackrel{56}{[-]} \;\quad {\rm goes \;\; \,into } \quad \stackrel{56}{[+]}, \end{eqnarray} therefore $S^{56}\; \stackrel{56}{(-)}= - \frac{1}{2}\;\stackrel{56}{(-)} $ and $S^{56}\; \stackrel{56}{[+]}= \frac{1}{2}\;\stackrel{56}{[+]} $. Taking into account Eqs.~(\ref{spinorsouth}, \ref{s}, \ref{o}) we obtain \begin{eqnarray} &&\psi^{\mali{SP} (6)}_{n+\frac{1}{2}} (x^{\mali{NP} \tau})= S \;\psi^{\mali{NP}(6)}_{n+\frac{1}{2}}(x^{\mali{NP} \tau}(x^{\mali{SP} \tau})) \nonumber\\ && = (i \alpha_{n}(\frac{(2 \rho_0)^2}{\rho^{\mali{SP}}})\, e^{-i \phi^{\mali{SP}}}\, \stackrel{56}{(-)} \psi^{(4)}_{(+)} + \beta_{n}(\frac{(2 \rho_0)^2}{\rho^{\mali{SP}}})\, \stackrel{56}{[+]} \psi^{(4)}_{(-)} )\, e^{-i n \phi^{\mali{SP}}}\nonumber\\ && = (i \alpha^{\mali{SP}}_{-(n+1)} \stackrel{56}{(-)} \psi^{(4)}_{(+)} + \beta^{\mali{SP}}_{-n}\,e^{i \phi^{\mali{SP}}} \, \stackrel{56}{[+]} \psi^{(4)}_{(-)} )\, e^{-i (n +1)\phi^{\mali{SP}}}.
\label{spsi6sp} \end{eqnarray} When evaluating $M^{\mali{SP} 56} = (S^{56} + i \frac{\partial}{\partial \phi^{\mali{SP}}})\,$ on $S \;\psi^{\mali{NP}(6)}_{n+\frac{1}{2}}(x^{\mali{NP} \tau}(x^{\mali{SP} \tau}))$ it follows \begin{eqnarray} (S^{56} + i \frac{\partial}{\partial \phi^{\mali{SP}}})\; S \;\psi^{\mali{NP}(6)}_{n+\frac{1}{2}}(x^{\mali{NP} \tau}(x^{\mali{SP} \tau})) &=& (n+\frac{1}{2}) \; S \;\psi^{\mali{NP}(6)}_{n+\frac{1}{2}}. \label{m56spsi6sp} \end{eqnarray} Accordingly the massless state $\psi^{\mali{NP}(6)m=0}_{\frac{1}{2}} = {\cal N}^{\mali{NP}}_0 \, f^{\mali{NP} (-F +\frac{1}{2})}\, \stackrel{56}{(+)}\, \psi^{(4)}_{(+)}$ from Eq.~(\ref{Massless}) looks, when transforming the coordinate system from the northern to the southern pole, as \begin{eqnarray} \psi^{\mali{SP}(6)m=0}_{\frac{1}{2}} &=& {\cal N}^{\mali{SP}}_0\, (f^{\mali{SP}} \,(\frac{2 \rho_0}{\rho^{\mali{SP}}})^2)^{(-F +\frac{1}{2})}\, \stackrel{56}{(-)}\, \psi^{(4)}_{(+)} \, e^{-i \phi^{\mali{SP}}}. \label{psisp} \end{eqnarray} Taking into account that $x^{\mali{SP}(5)} + i 2S^{56} x^{\mali{SP}(6)} = \rho^{\mali{SP}} \, e^{-i 2S^{56}\phi^{\mali{SP}}} $ and $\frac{\partial \quad}{\quad \partial x^{\mali{SP}(5)}} + i 2S^{56}\, \frac{\partial \quad}{\quad \partial x^{\mali{SP}(6)}} = e^{-i 2 S^{56} \phi^{\mali{SP}}}\, (\frac{\partial }{\partial \rho^{\mali{SP}}} - i 2 S^{56} \frac{1}{\rho^{\mali{SP}} }\, \frac{\partial\quad}{\partial \phi^{\mali{SP}}})$ we can write the equations of motion as \begin{eqnarray} \label{weylErhosp} &&if \, e^{-i \phi^{\mali{SP}} 2S^{56}}\, \{(\frac{\partial\quad}{\partial \rho^{\mali{SP}}} - \frac{i\, 2 S^{56}}{\rho^{\mali{SP}}} \, \frac{\partial\quad}{\partial \phi^{\mali{SP}}}) + S^{56}\, \frac{1}{\rho^{\mali{SP}}}( \frac{4 F}{f^{\mali{SP}}} - 2 \cdot 2 S^{56})\nonumber\\ &&+ \frac{1}{\rho^{\mali{SP}}}\, (1 - \frac{f^{\mali{SP}}-1}{f^{\mali{SP}}}) \} \, \psi^{(6)}+ \gamma^0 \gamma^5 \, m \, \psi^{(6)}=0. 
\end{eqnarray} For $\psi^{\mali{SP} (6)}_{n+ \frac{1}{2}} = ({\cal A}_{-(n+1) } e^{-i \phi^{\mali{SP}}} \,\stackrel{56}{(-)}\, \psi^{(4)}_{(+)} + {\cal B}_{-n} \,\stackrel{56}{[+]}\, \psi^{(4)}_{(-)})\, e^{-in\phi^{\mali{SP}}}$ we find the equations for ${\cal A}_{-(n+1)}$ and ${\cal B}_{-n}$ \begin{eqnarray} \label{weylErhosprho} &&-if \, \{(\frac{\partial\quad}{\partial \rho^{\mali{SP}}} + \frac{-n}{\rho^{\mali{SP}}} \, ) + \frac{1}{\rho^{\mali{SP}}}(\frac{2 F +1}{f^{\mali{SP}}} - 1)\} \, {\cal B}_{-n} + m \, {\cal A}_{-(n+1)} = 0, \nonumber\\ &&-if \, \{(\frac{\partial\quad}{\partial \rho^{\mali{SP}}} + \frac{n+1}{\rho^{\mali{SP}}} \, ) + \frac{1}{\rho^{\mali{SP}}}( \frac{-2 F +1}{f^{\mali{SP}}} - 1)\} \, {\cal A}_{-(n+1)} + m \, {\cal B}_{-n} = 0. \end{eqnarray} When using $f^{\mali{SP}}\frac{\partial\quad}{\partial \rho^{\mali{SP}}}= \frac{1}{\rho_0}\, \frac{\partial\quad}{\partial \vartheta^{\mali{SP}}} $ and $\frac{f^{\mali{SP}}}{\rho^{\mali{SP}}} = \frac{1}{\rho_0}\, \frac{1}{\sin \vartheta^{\mali{SP}}}$ Eq.(\ref{weylErhosprho}) transforms into \begin{eqnarray} \label{weylErhosptheta} &&(\frac{\partial\quad}{\partial \vartheta^{\mali{SP}}} + \frac{-n-1 + (F + \frac{1}{2})(1+\cos \vartheta^{\mali{SP}})}{\sin \vartheta^{\mali{SP}}}) \, {\cal B}_{-n} + i \rho_0 m \, {\cal A}_{-(n+1)} = 0.\nonumber\\ &&(\frac{\partial \quad}{\partial \vartheta^{\mali{SP}}} + \frac{n + (-F + \frac{1}{2})(1+\cos \vartheta^{\mali{SP}})}{\sin \vartheta^{\mali{SP}}}) \, {\cal A}_{-(n+1)} + i\rho_0 m \, {\cal B}_{-n} = 0. \end{eqnarray} Again we find for $2F =1$ \begin{eqnarray} &&\{\frac{1}{\sin \vartheta} \frac{\partial}{\partial \vartheta}(\sin \vartheta \frac{\partial}{\partial \vartheta} ) + [(\rho_0 m)^2 - \frac{n^2}{\sin^2\vartheta}]\} {\cal A}_{-(n+1)} =0,\nonumber\\ &&{\cal B}_{-n}= \, i\, \frac{1}{\rho_0 m} \, (\frac{\partial\quad}{\partial \vartheta^{\mali{SP}}} + \frac{n }{\sin \vartheta^{\mali{SP}}}) \, {\cal A}_{-(n+1)}.
\label{sphthetasp} \end{eqnarray} Let us conclude this section by recognizing that we have allowed at the south pole a certain special singularity of the following type: around a point in the 2-dimensional space - the singular point - we let the phase of the wave function rotate so that it turns by $2\pi$ as one goes once around the singular point, \textit{i.e.} as $\phi$ goes around. For a properly smooth function this would only be allowed provided that the magnitude of the wave function decreases linearly with the distance to the singular point. Of course, from the point of view of the structure of the singularity we can make a gauge transformation and replace the just mentioned phase rotation of the wave function by a singular (essentially constant) value of the spin connection on the circles around the singular point. \section{Spinors and the gauge fields in $d=(1+3)$} \label{properties1+3} To study how spinors couple to the Kaluza-Klein gauge fields in the case of $M^{(1+5)}$, ``broken'' to $M^{(1+3)} \times S^2$ with the radius of $S^2$ equal to $\rho_0$ and with the spin connection field $\omega_{st \sigma} = i4F \varepsilon_{st} \frac{x_{\sigma}}{\rho}\frac{f-1}{\rho f}$, we first look for (background) gravitational gauge fields which preserve the rotational symmetry around the axis through the northern and southern pole.
Requiring that the symmetry determined by the Killing vectors of Eq.~(\ref{killings}) (following ref.~\cite{hnkk06}) with $f^{\sigma}{}_{s} = f \delta^{\sigma}_{s}, f^{\mu}{}_s=0, e^{s}{}_{\sigma}= f^{-1} \delta^{s}_{\sigma}, e^{m}{}_{\sigma}=0,$ is preserved, we find for the background vielbein field \begin{eqnarray} e^a{}_{\alpha} = \pmatrix{\delta^{m}{}_{\mu} & e^{m}{}_{\sigma}=0 \cr e^{s}{}_{\mu} & e^s{}_{\sigma} \cr}, f^{\alpha}{}_{a} = \pmatrix{\delta^{\mu}{}_{m} & f^{\sigma}{}_{m} \cr 0= f^{\mu}{}_{s} & f^{\sigma}{}_{s} \cr}, \label{f6} \end{eqnarray} with \begin{eqnarray} \label{background} f^{\sigma}{}_{m} &=& K^{(56)\sigma} B^{(5)(6)}_{\mu} f^{\mu}{}_{m} = \varepsilon^{\sigma}{}_{\tau} x^{\tau} A_{\mu} \delta^{\mu}_{m}, \nonumber\\ e^{s}{}_{\mu} &=& - \varepsilon^{\sigma}{}_{\tau} x^{\tau} A_{\mu} e^{s}{}_{\sigma}, \end{eqnarray} $ s=5,6; \sigma = (5),(6)$. Requiring that correspondingly the only nonzero torsion fields are those from Eq.~(\ref{T}) we find for the spin connection fields \begin{eqnarray} \omega_{st \mu} = \varepsilon_{st} A_{\mu}, \quad \omega_{sm \mu} = \frac{1}{2}f^{-1}\varepsilon_{s \sigma } x^{\sigma} \delta^{\nu}{}_{m} F_{\mu \nu}, \label{omega6} \end{eqnarray} $F_{\mu \nu}= A_{[\nu,\mu]}$. The $U(1)$ gauge field $A_{\mu}$ depends only on $x^{\mu}$. All the other components of the spin connection fields, except (by the Killing symmetry preserved) $\omega_{st\sigma}$ from Eq.~(\ref{weylE}), are zero, since for simplicity we allow no gravity in $(1+3)$ dimensional space. The corresponding nonzero torsion fields ${\cal T}^{a}{}_{bc}$ are presented in Eq.~(\ref{T}) and in the expressions following this equation, all the other components are zero. 
To determine the current, which couples the spinor to the Kaluza-Klein gauge fields $A_{\mu}$, we analyse (as in the refs.~\cite{hnkk06}) the spinor action (Eq.~(\ref{weylE})) \begin{eqnarray} {\cal S} &=& \int \; d^dx \bar{\psi}^{(6)} E \gamma^a p_{0a} \psi^{(6)} =\nonumber\\ && \int \, d^dx \bar{\psi}^{(6)} \gamma^s p_{s} \psi^{(6)}+ \nonumber \\ && \int \, d^dx \bar{\psi}^{(6)} \gamma^m \delta^{\mu}{}_{m} p_{\mu} \psi^{(6)} + \nonumber\\ && \int \, d^dx \bar{\psi}^{(6)} \gamma^m \delta^{\mu}{}_{m} A_{\mu} (\varepsilon^{\sigma}{}_{\tau} x^{\tau} p_{\sigma} + S^{56}) \psi^{(6)} + \nonumber\\ && {\rm \; terms } \propto x^{\sigma} \,{\rm or } \propto x^{5} x^{6}. \label{spinoractioncurrent} \end{eqnarray} Here $\psi^{(6)}$ is a spinor state in $d=(1+5)$ after the break of $M^{1+5}$ into $M^{1+3} \times $ $S^2$. For $f^{\alpha}{}_{a}$ from Eq.~(\ref{f6}), $E$ is equal to $f^{-2}$. The term in the second row in Eq.~(\ref{spinoractioncurrent}) is the mass term (equal to zero for the massless spinor); the term in the third row is the kinetic term, which together with the term in the fourth row defines the covariant derivative $p_{0 \mu}$ in $d=(1+3)$. The terms in the last row contribute nothing when the integration over the disk (curved into a sphere $S^2$) is performed, since they are all proportional to $x^{\sigma}$ or to $ \varepsilon_{\sigma \tau} x^{\sigma} x^{\tau}\;$ ($-\gamma^{m} \,\frac{1}{2}S^{sm} \omega_{s m n} = -\gamma^{m}\,\frac{1}{2}\,f^{-1} F_{m n} \varepsilon_{s \sigma} x^{\sigma}$ and $-\gamma^m \,f^{\sigma}{}_{m}\frac{1}{2} \,S^{st} \omega_{st \sigma}= \gamma^m A_m x^{5}x^{6} S^{st} \varepsilon_{s t} \frac{4iF(f-1)}{f \rho^2}$). We end up with the current in $(1+3)$ \begin{eqnarray} j^{\mu} = \int \;E d^2x \bar{\psi}^{(6)} \gamma^m \delta^{\mu}{}_{m} M^{56} \psi^{(6)}.
\label{currentdisk} \end{eqnarray} The charge in $d=(1+3)$ is proportional to the total angular momentum $M^{56} =L^{56} + S^{56}$ around the axis from the southern to the northern pole of $S^2$, but since for the choice of $ 2 F =1$ (and for any $0 < 2F \le 1 $) in Eq.~(\ref{masslesseqsolf1}) only a left handed massless spinor exists, with the orbital angular momentum equal to zero, the charge of a massless spinor in $d=(1+3)$ is equal to $1/2$. The Riemann scalar is for the vielbein of Eq.~(\ref{f6}) equal to ${\cal R}= -\frac{1}{2} \rho^2 f^{-2} F^{mn}F_{mn}$. If we integrate the Riemann scalar over the fifth and the sixth dimension, we get $-\frac{8\pi}{3} (\rho_0)^4 F^{mn}F_{mn}$. \section{Conclusions} \label{conclusion} We prove in this paper that one can escape from the "no-go theorem" of Witten~\cite{witten}, that is, one can guarantee the masslessness of spinors and their chiral coupling to the Kaluza-Klein[like] gauge fields when breaking the symmetry from the $d$-dimensional one to $M^{(1+3)} \times M^{d-4}$ space, in cases which we call the "effective two dimensionality", even without boundaries, as we proposed in the references~\cite{hnkk06}. Namely, we can guarantee the above mentioned properties of spinors when the break to $M^{(1+3)} \times M^{d-4}$, $d-4> 2$, occurs in such a way that the vielbeins and spin connections are completely flat in all but two dimensions, while the two dimensional space, although of finite volume, is noncompact, with a particular spin connection contributing to the properties of spinors.
In our particular case it is the zweibein (the zweibein of the $S^2$ sphere with a hole at the southern pole) on an infinite disc which guarantees that the noncompact space has finite volume and ensures, together with the spin connection field on this disc ($\omega_{st \sigma}= i\,F \, \varepsilon_{st} \frac{x_{\sigma}}{f \, \rho^{2}_{0}}\,$; the $\omega_{st \sigma}$ field breaks the parity symmetry and the sign of $F$ selects the handedness of the massless state), that only one normalizable spinor state (of particular handedness) is massless, carrying the Kaluza-Klein charge of $\frac{1}{2}$ and coupling chirally to the corresponding Kaluza-Klein gauge field. Let us add that requiring normalizability of states in extra dimensions guarantees that states are normalizable in the whole $d=(1+ (d-1))$ space. Since the spin connection strength is determined only within an interval ($0 < 2F \le 1 $), what we propose is not a fine-tuning. Taking (in the absence of fermions) the action for the gravitational gauge fields with the linear curvature for $d=2$ (when any zweibein and any spin connection fulfills the corresponding equations of motion), we are allowed to make any choice of a zweibein and spin connection. (This choice leads to nonzero torsion.) There is a discrete spectrum of normalizable eigenstates of the Hermitean Hamiltonian on the infinite disc for the chosen zweibein and spin connection of any strength $F$ in the interval ($0 < 2F \le 1 $), as we proved in section~\ref{starting action}. The normalizable eigenstates, which are chosen to be at the same time the eigenstates of the total angular momentum on the disc $M^{56}= x^{5} p^{6}- x^{6} p^{5} + S^{56}$, with the eigenvalues $(n+1/2)$, carry the Kaluza-Klein charge $(n+1/2)$. The only massless state carries the charge $(\frac{1}{2})$. For the choice of $2F=1$ the normalizable massless state is independent of the coordinates on the disc.
The normalizable massive states have the masses equal to $\sqrt{k(k+1)}/\rho_0$, $k=1,2,3,\dots$, with $-k \le n \le k$. The spectrum is obviously discrete and stays discrete for all $F$ in the interval $0 < 2F \le 1 $ and for any finite $\rho_0$. The current is equal to zero for all the solutions and for all $F$. As long as the Hamiltonian is Hermitean on a disc, fermions cannot leave the disc, unless an additional interaction (or a dynamical restoration of the symmetry $M^{(1+5)}$, that is the {\em phase transition}) would force them to go out of the disc, which is not the case for our toy model. Understanding the infinite disc as a $S^2$ sphere with the southern pole missing, a singularity of the following type should be recognized: around a point in the 2-dimensional space of $S^2$ - the singular point - we let the phase of the wave function rotate so that it turns by $2\pi$ as one goes once around the singular point. But from the point of view of the structure of the singularity we can make a gauge transformation and replace the just mentioned phase rotation of the wave function by a singular (essentially constant) value of the spin connection on the circles around the singular point. The possibility that after the break a two dimensional manifold (with the zweibein of $S^2$, with one point missing and with a particular spin connection field) exists, allowing only one normalizable massless state, which is correspondingly mass protected and which couples to the Kaluza-Klein charge, opens, to our understanding, a new hope for the Kaluza-Klein[like] theories of the elegant version, with only gravity, and will help to revive them.
\section{Transverse gravity and scale invariance} The hope that scale invariance could shed some light on the fact that the observed value of the cosmological constant scale is much lower than expected from the Wilsonian viewpoint is certainly an old and cherished one. Let us mention just a couple of recent works \cite{Shaposhnikov:2008xb} \cite{Wetterich:2009az} where some entries into the bibliography can be found. \par The aim of the present work is to present a new twist on this idea in the framework of transverse gravity, where the full diffeomorphism invariance (Diff) is broken to those transformations (TDiff) that preserve the Lebesgue measure. Transverse gravity has been studied in previous papers \cite{Alvarez:2005iy}-\cite{Alvarez:2009ga} where references to the earlier literature are included. \par Those transverse gravitational models that enjoy scale invariance (that is, rigid Weyl invariance in the sense of \cite{Iorio}), dubbed WTDiff in \cite{Alvarez:2006uu}, are (naively, as we shall see in a moment) characterized by tracefree field equations. This means that the actions must be scale invariant, at least on shell, that is \[ g^{\alpha\beta}{\delta\over \delta g^{\alpha\beta}}S=0 \] The big difference with Einstein's diffeomorphism invariant gravity is that now we can sprinkle powers of $g$ here and there. Under a global (i.e. constant) Weyl rescaling \[ g_{\alpha\beta}\rightarrow \Omega^2 g_{\alpha\beta} \] the determinant transforms as \[ g\rightarrow \Omega^{2n} g \] At the linear level, with $\Omega\sim 1+\omega $, $\delta g_{\alpha\beta}=2\omega g_{\alpha\beta}$. 
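The weight bookkeeping just introduced is easy to check numerically; the following sketch (ours, not part of the paper) uses a random symmetric matrix as a stand-in for the metric and confirms that the determinant carries weight $2n$, so that $|g|^{1/n}$ exactly compensates $R\rightarrow \Omega^{-2}R$:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4                                    # spacetime dimension
A = rng.standard_normal((n, n))
g = A + A.T + n * np.eye(n)              # symmetric, nondegenerate stand-in metric
Omega = 1.7                              # constant Weyl factor

# g_{ab} -> Omega^2 g_{ab} implies det g -> Omega^{2n} det g
assert np.isclose(np.linalg.det(Omega**2 * g),
                  Omega**(2 * n) * np.linalg.det(g))

# hence |g|^{1/n} carries weight +2, compensating R -> Omega^{-2} R exactly
assert 2 * n * (1 / n) - 2 == 0
```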
Christoffels are invariant, and so is Riemann, so that \[ R\rightarrow \Omega^{-2} R \] This means that there is a purely gravitational (without scalar fields) scale invariant \footnote{ This action can be made Weyl gauge invariant, along the lines of \cite{Iorio} by means of a gauge field $W_\mu$ that transforms as \[ \delta W_\mu=\Omega^{-1}\partial_\mu\Omega \] and adding a term proportional to \[ \int d^n x\, |g|^{1\over n}\left(\nabla_\alpha W^\alpha + {n-2\over 2}W_\mu W^\mu\right) \] } action, i.e. \[ S_W\equiv -{1\over 2 \kappa^2} \int d^n x\,|g|^{1\over n} R \] The scaling behavior of matter is determined by the kinetic term (including the power of $|g|$ in front). For example, in Einstein's gravity, a scalar field with kinetic part \[ \sqrt{|g|}{1\over 2} g^{\mu\nu}\partial_\mu\Phi\partial_\nu\Phi \] implies that \[ \Phi\rightarrow \Omega^{1- n/2}\Phi \] which coincides with the naive dimension of the field. \par For Dirac fermions instead \[ \sqrt{|g|}i\bar{\psi} e_a^\mu \gamma^a \partial_\mu\psi \] yields the naive dimension again \[ \psi\rightarrow \Omega^{1-n\over 2}\psi \] \par Changing the power of $|g|$, for example, as in \[ |g|^a {1\over 2} g^{\mu\nu}\partial_\mu\Phi\partial_\nu\Phi \] implies that \[ \Phi\rightarrow \Omega^{1- na}\Phi \] It is plain that when $a= 1/n$ then the theory enjoys rigid Weyl invariance with inert matter fields. \par This means that with the measure \[ |g|^{1\over n} d^n x \] {\em rigid Weyl invariance implies that no potential is allowed, not even a mass term.} \par Interactions are, however, allowed, but must either be dressed with some gravitational scalar of weight $-2$, for example, \[ {c_p\over M^{p(n-2)+4-2n\over 2}} R\Phi^p \] (where $c_p$ are dimensionless constants, and $M$ is a mass scale). When perturbing around a nontrivial constant curvature background (such as de Sitter space), this gives rise to masses \[ m^2\sim c_2 \bar{R} \] which are naturally tiny if the radius of curvature is very large. 
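The scaling weights quoted above can be derived symbolically: the kinetic term $|g|^a g^{\mu\nu}\partial_\mu\Phi\partial_\nu\Phi$ picks up $\Omega^{2na-2+2w}$ when $\Phi\rightarrow\Omega^w\Phi$, and similarly for Dirac fermions. A short check of ours:

```python
import sympy as sp

n, a, w = sp.symbols('n a w')

# scalar: |g|^a g^{mu nu} dPhi dPhi picks up Omega^(2na - 2 + 2w)
w_scalar = sp.solve(sp.Eq(2*n*a - 2 + 2*w, 0), w)[0]
assert sp.simplify(w_scalar - (1 - n*a)) == 0
# Einstein case a = 1/2 reproduces the naive dimension 1 - n/2
assert sp.simplify(w_scalar.subs(a, sp.Rational(1, 2)) - (1 - n/2)) == 0
# inert scalar (w = 0) requires the measure |g|^{1/n}
assert sp.solve(sp.Eq(w_scalar, 0), a)[0] == 1/n

# Dirac: |g|^a psi-bar e^mu gamma d psi picks up Omega^(2na - 1 + 2w)
w_dirac = sp.solve(sp.Eq(2*n*a - 1 + 2*w, 0), w)[0]
assert sp.simplify(w_dirac - (1 - 2*a*n)/2) == 0
# inert fermions require a = 1/(2n), not the bosonic 1/n
assert sp.solve(sp.Eq(w_dirac, 0), a)[0] == 1/(2*n)
```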
\par Interactions are also allowed when they are totally decoupled from gravitation \cite{Alvarez:2007cp}, as in \[ d^n x\, V(\Phi_i) \] \par Similarly for Dirac fermions, \[ |g|^a i\bar{\psi} e_\alpha^\mu \gamma^\alpha \partial_\mu\psi \] yields \[ \psi\rightarrow \Omega^{1-2 a n\over 2}\psi \] The new condition for invariance with inert Dirac fermions is \[ a={1\over 2n} \] It is remarkable that this measure does not coincide with the bosonic one. \section{Low energy effective lagrangians} It is expected that the lowest dimension operators compatible with the assumed symmetry (WTDiff) are bound to dominate the physics at low energies. Let us classify transverse scalars according to their dimension, writing also the corresponding scale invariant combination. \begin{itemize} \item {\bf Dimension zero}. Transverse dimension zero operators are \[ L_0\equiv F(|g|) \] so that the WTDiff cosmological constant is decoupled from gravity \[ \delta S_0\equiv \lambda \delta \int d^n x L_0=0 \] where $\lambda$ is a constant of dimension $n$. 
\item {\bf Dimension two} Generic transverse dimension 2 operators are \begin{eqnarray} &&L_2^{(1)}=F(|g|) g^{\alpha\beta}\partial_\alpha g \partial_\beta g\nonumber\\ &&L_2^{(2)}=F(|g|) R \end{eqnarray} The WTDiff operator corresponding to the first one is \[ S_2^{(1)}\equiv -{1\over 2 \kappa_{1}^2}|g|^{1-2n\over n} g^{\alpha\beta}\partial_\alpha |g| \partial_\beta |g| \] where $\kappa_1^2$ is a new {\em gravitational constant} of dimension $2-n$ {\em a priori} unrelated to Newton's constant \begin{eqnarray} &&\delta S_2^{(1)}=-{1\over 2 \kappa_{1}^2}\int d^n x\left( -{1-2n\over n}|g|^{1-2n\over n} g^{\mu\nu}\partial_\mu |g| \partial_\nu |g| g _{\alpha\beta}\delta g^{\alpha\beta}+ |g|^{1-2n\over n} \partial_\alpha |g | \partial_\beta |g| \delta g^{\alpha\beta}-\right.\nonumber\\ &&\left.-2 |g|^{1-2n\over n}g^{\mu\nu}\partial_\mu |g|\partial_\nu\left(|g| g _{\alpha\beta}\delta g^{\alpha\beta}\right)\right)=\nonumber\\ &&-{1\over 2 \kappa_{1}^2}\int d^n x\left(- {1-2n\over n}|g|^{1-2n\over n} g^{\mu\nu}\partial_\mu |g| \partial_\nu |g| g _{\alpha\beta}\delta g^{\alpha\beta}+|g|^{1-2n\over n} \partial_\alpha |g| \partial_\beta |g| \delta g^{\alpha\beta}+\right.\nonumber\\ &&\left.+2 |g| \partial_\nu\left( |g|^{1-2n\over n}g^{\mu\nu}\partial_\mu |g|\right) g _{\alpha\beta}\delta g^{\alpha\beta}\right) \end{eqnarray} The gravitational equations of motion are now: \[ {\delta S_2^{(1)}\over \delta g^{\alpha\beta}}=|g|^{1-2n\over n}\partial_\alpha |g|\partial_\beta |g|-\left({1-2n\over n}|g|^{1-2n\over n} g^{\mu\nu}\partial_\mu |g| \partial_\nu |g| -2|g|\partial_\nu\left(|g|^{1-2n\over n} g^{\mu\nu} \partial_\mu |g|\right) \right)g_{\alpha\beta} \] where the gravitational constant has been deleted because it is not important in the absence of matter. 
These equations are traceless {\em up to a total derivative} \[ g^{\alpha\beta} {\delta S_2^{(1)}\over \delta g^{\alpha\beta}}=+2 n \partial_\nu\left( |g|^{1-n\over n}g^{\mu\nu}\partial_\mu |g|\right) \] This means that the Noether current associated to WTDiff is \[ W^\mu\equiv |g|^{1-n\over n}g^{\mu\nu}\partial_\nu |g| \] \item {\bf Dimension two (continued)} The second transverse dimension 2 operator is just a generalization of the usual Einstein-Hilbert lagrangian \[ L_2^{(2)}=F(|g|) R \] In order to compute the variation of the corresponding WTDiff operator \[ \delta S_2^{(2)}=\delta \left(-{1\over 2\kappa^2}\int d^n x |g|^{1/n} R\right) \] \sloppy The variation of the curvature scalar is needed \nopagebreak[4] \[ \delta R=\delta g^{\nu\sigma} R_{\nu\sigma}+\left(g_{\alpha\beta}\Delta -\nabla_{(\alpha}\nabla_{\beta)}\right) \delta g^{\alpha\beta} \] \nopagebreak[4] It follows \begin{eqnarray} &&\delta S_2^{(2)}=\int d^n x |g|^{1/n}\,\delta g^{\alpha\beta}\left( {1\over 2\kappa^2 n}g_{\alpha\beta}R-{1\over 2\kappa^2}R_{\alpha\beta}\right)\nonumber\\ &&-\int d^n x |g|^{1/n}\, {1\over 2\kappa^2}\left(g_{\alpha\beta}\Delta -\nabla_\alpha\nabla_\beta\right) \delta g^{\alpha\beta} \end{eqnarray} When \[ \delta g^{\alpha\beta}=-\Omega^2 g^{\alpha\beta} \] the action remains invariant, just because $\nabla_\alpha g_{\mu\nu}=0$. We must be careful with the integration by parts. 
A good place to start is the formula valid for any contravariant vector \cite{Eisenhart} \[ \nabla_\mu V^\mu={1\over \sqrt{|g|}}\partial_\mu\left(\sqrt{|g|} V^\mu\right) \] Let us integrate by parts the slightly more general integral \begin{eqnarray} &&\int d^n x f(g)\,\nabla_\mu \left(\nabla^\mu g_{\alpha\beta} \delta g^{\alpha\beta}-\nabla_\beta\delta g^{\mu\beta}\right)\equiv I_1-I_2\nonumber\\ &&I_1\equiv\int \partial_\mu\left({f\over \sqrt{|g|}}\right)\sqrt{|g|}\nabla_\beta\delta g^{\mu\beta}=\int \partial_\mu\left({f\over \sqrt{|g|}}\right) \sqrt{|g|}\left(\partial_\beta \delta g^{\mu\beta}+\Gamma^\mu_{\beta\sigma}\delta g^{\sigma \beta}+\Gamma^\beta_{\beta\sigma}\delta g^{\sigma\mu}\right)=\nonumber\\ &&\quad=\int \partial_\mu\left({f\over \sqrt{|g|}}\right) \sqrt{|g|}\left(\Gamma^\mu_{\beta\sigma}\delta g^{\sigma \beta}+\Gamma^\beta_{\beta\sigma}\delta g^{\sigma\mu}\right) -\partial_\beta\left(\partial_\mu\left({f\over \sqrt{|g|}}\right)\sqrt{|g|}\right) \delta g^{\mu\beta}\nonumber\\ &&I_2\equiv \int \partial_\mu\left({f\over \sqrt{|g|}}\right)\sqrt{|g|}\nabla^\mu g_{\alpha\beta} \delta g^{\alpha\beta}= \int \partial_\mu\left({f\over \sqrt{|g|}}\right)\sqrt{|g|}g^{\mu\lambda}\partial_\lambda\left( g_{\alpha\beta} \delta g^{\alpha\beta}\right)=\nonumber\\ &&\quad=-\int\partial_\lambda \left( \partial_\mu\left({f\over \sqrt{|g|}}\right) \sqrt{|g|} g^{\mu\lambda}\right) g_{\alpha\beta} \delta g^{\alpha\beta} \end{eqnarray} \nopagebreak In conclusion\footnote{ It is worth checking that this still gives zero for a metric rescaling. This means that both integrals must vanish separately when \[ \delta g^{\alpha\beta}= -\epsilon g^{\alpha\beta} \] This is obvious for $I_2$, which in this case reduces to the integral of a total derivative. 
With respect to the first integral, we shall employ the well-known formulas \cite{Eisenhart} \begin{eqnarray} &&\Gamma^\beta_{\beta\sigma}={1\over\sqrt{|g|}}\partial_\sigma \sqrt{|g|}\nonumber\\ &&g^{\alpha\beta}\Gamma^\mu_{\alpha\beta}=-{1\over\sqrt{|g|}}\partial_\lambda\left(\sqrt{|g|} g^{\mu\lambda}\right) \end{eqnarray} relating the Christoffels and the determinant. \begin{eqnarray} &&I_1=\int \Phi_\mu\sqrt{|g|}\left(-{1\over\sqrt{|g|}}\partial_\lambda\left(\sqrt{|g|} g^{\mu\lambda}\right)+{1\over\sqrt{|g|}}g^{\sigma\mu} \partial_\sigma \sqrt{|g|}\right)-g^{\mu\beta}\partial_\beta\left(\sqrt{|g|}\Phi_\mu\right)=\nonumber\\ &&=\int \sqrt{|g|}\Phi_\mu\partial_\beta g^{\mu\beta}-\Phi_\mu \sqrt{|g|}\partial_\lambda g^{\mu\lambda}-\Phi_\mu g^{\mu\lambda}\partial_\lambda \sqrt{|g|}+\Phi_\mu g^{\sigma\mu}\partial_\sigma \sqrt{|g|}=0 \end{eqnarray} That is, the integrand itself vanishes. Under an arbitrary variation \begin{eqnarray} &&{\delta I_1\over \delta g^{\alpha\beta}}=\Phi_\mu \sqrt{|g|}\Gamma^\mu_{\alpha\beta}+\Gamma^\lambda_{\lambda\alpha}\Phi_\beta \sqrt{|g|}-\partial_\beta\left(\sqrt{|g|}\Phi_\alpha\right)=\nonumber\\ &&-\sqrt{|g|}\partial_\beta\Phi_\alpha+\Phi_\mu \sqrt{|g|}\Gamma^\mu_{\alpha\beta}\equiv-\sqrt{|g|}\nabla_\beta\Phi_\alpha\nonumber\\ &&{\delta I_2\over \delta g^{\alpha\beta}}=-\partial_\lambda\left(\Phi_\mu\sqrt{|g|} g^{\mu\lambda}\right)g_{\alpha\beta}\equiv - \sqrt{|g|}\nabla_\lambda\left(\Phi_\mu g^{\mu\lambda}\right)g_{\alpha\beta} \end{eqnarray} where the covariant derivatives are defined as if $\Phi_\mu$ were a tensor, which it is not, so that these constructions do not enjoy all the properties of covariant derivatives of tensors. Still, it is sometimes a useful abbreviation. 
}, calling $\Phi_\mu=\partial_\mu\left({f\over \sqrt{|g|}}\right)$\pagebreak \begin{eqnarray} &&\delta S_2^{(2)}=\int d^n x |g|^{1/n}\,\delta g^{\alpha\beta}\left( {1\over2\kappa^2 n}g_{\alpha\beta} R-{1\over 2\kappa^2}R_{\alpha\beta}\right)+\nonumber\\ &&+\int d^n x {1\over 2\kappa^2}\sqrt{|g|}\left(\nabla_{\beta}\Phi_{\alpha}-\nabla_\lambda\left(\Phi_\mu g^{\mu\lambda}\right)g_{\alpha\beta}\right)\delta g^{\alpha\beta}=\nonumber\\ &&=\int |g|^{1/n} {1\over 2\kappa^2}\Bigg[\left( {1\over n}g_{\alpha\beta} R-R_{\alpha\beta}\right)+{2-n\over 2n}|g|^{-1}\Bigg({2-3n\over 2n}g^{-1}g_\alpha g_\beta\qquad\qquad\nonumber\\ &&\qquad\qquad\quad-\nabla_\beta g_\alpha-\left({1-n\over n}g^{-1}g_\mu g_\nu g^{\mu\nu}+\partial_\mu(g_\nu g^{\mu\nu})\right)g_{\alpha\beta}\Bigg)\Bigg]\delta g^{\alpha\beta}d^n x \end{eqnarray} It is remarkable that Einstein's 1919 equations \[ R_{\mu\nu}-{1\over n} R g_{\mu\nu}= \kappa^2 \left(T_{\mu\nu}-{1\over n}T g_{\mu\nu}\right) \] which are truly traceless \cite{Einstein} (cf. also \cite{Alvarez:2005iy}), do not seem to be obtainable from a variational principle of the sort we are studying, which always yields equations of motion that are traceless only up to a total derivative. \end{itemize} \section{Conclusions} We have studied in the body of the paper a gravitational symmetry that forbids the presence of a cosmological constant. We believe that this is some progress insofar as we were not aware of any such symmetry known previously. \par It would be interesting to present our results in the Einstein frame. In the case of the second dimension 2 operator, which is the only one resembling the Einstein-Hilbert lagrangian, this would stem from the redefinition of a new spacetime metric such that \[ \sqrt{|g_e|}R[g_e]=|g|^{1\over n}R \] It is quite simple to realize that \[ g^e_{\mu\nu}\equiv g^{-{1\over n}}g_{\mu\nu} \] such that $g_e\equiv 1$. 
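Two of the algebraic statements above can be tested numerically with random symmetric matrices in place of the metric (a sketch of ours, not the paper's): the 1919 combination is traceless for any symmetric "Ricci", and the rescaled metric $g^e_{\mu\nu}=g^{-1/n}g_{\mu\nu}$ has unit determinant:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
A = rng.standard_normal((n, n))
g = A @ A.T + np.eye(n)                  # positive definite stand-in metric
g_inv = np.linalg.inv(g)

# Einstein's 1919 combination is traceless for ANY symmetric R_{ab}
B = rng.standard_normal((n, n))
Ric = B + B.T
R = np.einsum('ab,ab->', g_inv, Ric)     # R = g^{ab} R_{ab}
E = Ric - (R / n) * g
assert np.isclose(np.einsum('ab,ab->', g_inv, E), 0.0)

# g^e_{mn} = g^{-1/n} g_{mn} is unimodular: det g_e = 1
det_g = np.linalg.det(g)
g_e = det_g**(-1.0 / n) * g
assert np.isclose(np.linalg.det(g_e), 1.0)
```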
The restricted variational principle would then give true traceless equations of motion of Einstein's 1919 sort \cite{Einstein}, except that in Einstein's mind the metric was not restricted by any unimodularity condition. \par We can understand our results from a different viewpoint. It is well known that transverse theories are equivalent, in a given reference system, to scalar-tensor theories \cite{Buchmuller:1988wx}\cite{Alvarez:2006uu}. A way of implementing this mapping is as follows: our second dimension 2 lagrangian is equivalent to \[ L=-{1\over 2\kappa^2}\sqrt{|g|}\phi R+\sqrt{|g|}\chi\left(\phi-|g|^{2-n\over 2n}\right) \] where $\phi$ and $\chi$ are two auxiliary scalar densities. It is now possible to find an unconstrained Einstein metric such that \[ \sqrt{|g_E|}R[g_E]=\sqrt{|g|}\phi R \] The answer is clearly \[ g^E_{\mu\nu}=\phi^{2\over n-2} g_{\mu\nu} \] (so that $g_E=g \phi^{2n\over n-2}$) and the full scalar-tensor lagrangian reads \begin{eqnarray} &&L=-\frac1{2\kappa^2}\phi\sqrt{|g|}R+\sqrt{|g|}\chi(\phi-|g|^{2-n\over 2n})=-\frac1{2\kappa^2}\sqrt{|g_E|}R_E+\sqrt{|g_E|}\phi^{-\frac2{n-2}}\chi(1-|g_E|^{2-n\over 2n})+\nonumber\\ &&+\frac{n-1}{2\kappa^2(n-2)}\left(2\partial_\mu\left(\sqrt{g_E}g_E^{\mu\nu}\frac{\partial_\nu\phi}{\phi}\right)-\sqrt{g_E}g_E^{\mu\nu}\frac{\partial_\mu\phi\partial_\nu\phi}{\phi^2}\right)\end{eqnarray} id est, it is of the unimodular type. It has however been stressed in the literature \cite{Alvarez:2005iy} that this is subtly not equivalent to choosing the unimodular gauge in general relativity, which is always allowed (and used many times by Einstein himself). The coupling to matter is independent of the scalar density $\phi$. 
For example, for a scalar field $\Phi$ (not to be confused with the scalar density $\phi$ of gravitational origin), \[ L_I=|g_E|^{1\over n} g_E^{\mu\nu} \partial_\mu\Phi \partial_\nu \Phi \] \par Under conformal transformations in the old frame \[ \phi\rightarrow \Omega^{2-n}\phi \] and for consistency, \[ \chi\rightarrow \Omega^{-2}\chi \] whereas the unimodular Einstein metric is inert. What looks like a purely gravitational symmetry in one frame looks like a {\em matter} symmetry in another. Potential energy coupled to gravitation is again forbidden, because it appears in the new frame as \[ \phi^{-{2\over n-2}}V(\Phi) \] \par It is also interesting to follow the first dimension 2 term under this change of frame. It is easy to check that if the equations of motion are used, it reduces to \[ L_2^{(1)}={4 n^2\over (n-2)^2}\phi^{-2} g_E^{\mu\nu}\partial_\mu \phi \partial_\nu\phi \] (if the equations of motion are not used, there are other terms proportional to $\partial_\mu |g_E|$). \par Nevertheless, transverse theories are most likely severely constrained by experiment \cite{Alvarez:2009ga} and besides, scale invariance has to be broken, at least by the Weyl anomaly \cite{Duff:1993wm}\cite{Boulanger:2007ab} (which has yet to be computed for transverse theories). \par Actually WTDiff is overkill, in the sense that it not only forbids a cosmological constant, but also any potential energy whatsoever which is coupled to gravitation. There is experimental evidence\footnote{ Although experiments tend to bound {\em differences} between properties of different objects, so that effects that are universal are not so constrained.} that potential energy does couple to gravitation \cite{Carlip:1997gf}, which is again an indication that scale symmetry must be badly broken in nature. 
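The frame change and the conformal weights just quoted close consistently; the following check of ours verifies the Einstein-frame determinant relation numerically and the weight bookkeeping symbolically:

```python
import numpy as np
import sympy as sp

# determinant of the Einstein metric: g^E_mn = phi^{2/(n-2)} g_mn
rng = np.random.default_rng(2)
nd = 4
A = rng.standard_normal((nd, nd))
g = A @ A.T + np.eye(nd)                 # positive definite stand-in metric
phi_val = 0.8
g_E = phi_val**(2.0/(nd - 2)) * g
assert np.isclose(np.linalg.det(g_E),
                  np.linalg.det(g) * phi_val**(2.0*nd/(nd - 2)))

# conformal weights (exponents of Omega) in the old frame
n = sp.symbols('n')
w_sqrt_g = n                             # sqrt|g|          -> Omega^n
w_chi    = -2                            # chi              -> Omega^{-2}
w_phi    = 2 - n                         # phi              -> Omega^{2-n}
w_gpow   = 2*n * (2 - n) / (2*n)         # |g|^{(2-n)/(2n)} -> Omega^{2-n}

# both terms of chi*(phi - |g|^{(2-n)/(2n)}) carry the same weight,
assert sp.simplify(w_phi - w_gpow) == 0
# and the full constraint term sqrt|g| * chi * (...) is invariant
assert sp.simplify(w_sqrt_g + w_chi + w_phi) == 0
# while phi^{-2/(n-2)} V(Phi) has weight 2 != 0: potentials break the symmetry
assert sp.simplify(w_phi * (-2) / (n - 2) - 2) == 0
```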
\par The proper setting of the problem is most likely a cosmological one, in which the universe goes through different epochs characterized by different amounts of symmetry in the gravitational sector. Work on concrete models of this sort is in progress and we hope to report on it in the future. \section*{Acknowledgments} One of us (EA) is grateful for stimulating discussions with Jaume Garriga, Diego Blas, Alex Kehagias, Oriol Pujolas and Enric Verdaguer. This work has been partially supported by the European Commission (HPRN-CT-200-00148) as well as by FPA2009-09017 (DGI del MCyT, Spain) and S2009ESP-1473 (CA Madrid). R.V. is supported by a MEC grant, AP2006-01876.
1001.3921
\section{Introduction} Recently, quantum gravity at a Lifshitz point, which is power-counting renormalizable and hence potentially UV complete, was proposed by Ho\v{r}ava~\cite{ho1,ho2,ho3}. This theory of quantum gravity is not intended to be a unified theory like string theory. A hot issue in Ho\v{r}ava-Lifshitz gravity is whether it can accommodate the Ho\v{r}ava scalar $\psi$ in addition to the two degrees of freedom (DOF) of a massless graviton. This additional scalar degree of freedom inevitably appears as a result of the reduced diffeomorphism symmetry known as ``foliation diffeomorphism"~\cite{CNPS,LP,SVW,BPS1,KA,HKG}. The authors of~\cite{CNPS} have shown that without the projectability condition, perturbative general relativity cannot be reproduced in the IR limit of the $z=3$ deformed Ho\v{r}ava-Lifshitz gravity because of the strong coupling problem. With the projectability condition, the authors of~\cite{SVW} have argued that $\psi$ propagates around Minkowski space but has a negative kinetic term, signaling a ghost mode. Moreover, it was found that the Ho\v{r}ava scalar is a ghost if the sound speed squared is positive (strong coupling problem)~\cite{BPS1}. Even when the Lorentz-violating mass term was included, it did not cure the ghost problem of the Ho\v{r}ava scalar~\cite{M-massive}. In order to resolve the strong coupling problem, Blas, Pujolas, and Sibiryakov have proposed an extended version of the Ho\v{r}ava-Lifshitz gravity in which the lapse function $N$ may depend on the spatial coordinate $r$ (the theory is not projectable) and thus, terms of $\partial_i \ln N$ are included in the action~\cite{BPS2}. It was argued that this extended version is free from the strong coupling pathology. However, the extended theory could still suffer from strong coupling at low energies in the kinetic term~\cite{PS}, but this can be evaded by including higher spatial derivative terms~\cite{BPS3}. 
Hence, up to now, the strong coupling issue is not completely resolved even though the extended theory has been seriously considered. Specific cosmological implications of the $z=3$ Ho\v{r}ava-Lifshitz gravity with the Friedmann-Robertson-Walker (FRW) metric based on isotropy and homogeneity have recently been shown in~\cite{cal,KK,muk}, including a homogeneous vacuum solution with chiral primordial gravitational waves~\cite{TS} and a nonsingular cosmological evolution with the big bang of the standard and inflationary universe replaced by a matter bounce~\cite{Bra,Rama,LS}. As far as the isotropic solutions are concerned, there is no difference between the $z=2$~\cite{ho1} and $z=3$~\cite{ho2} Ho\v{r}ava-Lifshitz gravities because the Cotton tensor vanishes when using the isotropic FRW metric. Furthermore, the $z=3$ deformed Ho\v{r}ava-Lifshitz gravity has been introduced to find an asymptotically flat background~\cite{KS,Myungbh}. On the other hand, the equations of general relativity lead to singularities when we follow the equations backwards towards the origin of time. In particular, we concentrate on a temporal singularity of the solutions to the Einstein equations for the mixmaster model (Bianchi IX universe) describing an anisotropic and homogeneous cosmology. It is well known that the approach to the singularity shows a chaotic behavior. The mixmaster universe~\cite{mix1,mix2,mix3,mix4,cl,mix5,mix6,mix7} can be described by a Hamiltonian dynamical system in a 6D phase space. Belinsky, Khalatnikov, and Lifshitz (BKL) conjectured that this 6D phase system can be well approximated by a 1D discrete Gauss map that is known to be chaotic as one approaches the singularity~\cite{BKL}. Chernoff and Barrow have suggested that the mixmaster 6D phase space can be split into the product of a 4D phase space and a 2D phase space having regular variables~\cite{mix3}. 
Following Cornish and Levin~\cite{cl}, Lehner and Di Menza have found that the chaos in the mixmaster universe is obtained for the Hamiltonian system with a potential having fixed walls, which describes the curvature anisotropy~\cite{mix6}. However, it turned out that the mixmaster chaos can be suppressed by (loop) quantum effects~\cite{BD,Bo}. In loop quantum cosmology, the effective potential at decreasing volume, labeled by the ``discreteness $j$", is significantly changed in the vicinity of the (0,0)-isotropy point in the anisotropy plane $(\beta_+,p_+)$. The potential at larger volumes exhibits a potential wall of finite height and finite extension. As the volume is decreased, the wall moves inward and its height decreases. Progressively, the wall disappears completely, making the potential negative everywhere at a dimensionless volume of $(2.172j)^{3/2}$ in Planck units. Eventually, the potential approaches zero from below. This shows that classical reflections will stop after a finite amount of time, implying that classical arguments about chaos are inapplicable. Once quantum effects are taken into account, the reflections stop just when the volume of a given patch is about the size of the Planck volume. We point out that loop quantum gravity is a non-perturbative and background independent canonical quantization of general relativity, while loop quantum cosmology is a cosmological mini-superspace model quantized with the methods of loop quantum gravity. Hence the discreteness of spatial geometry and the simplicity of the setting allow for a complete study of the cosmological evolution. The difference between loop quantum cosmology and other approaches to quantum cosmology is that the input is supplied by a full quantum gravity theory, which introduces a discreteness to space-time. That is, in quantizing general relativity, this discreteness manifests itself as quanta of space. 
Recently, we have investigated the $z=2$ deformed Ho\v{r}ava-Lifshitz gravity with coupling constant $\omega$, which leads to a nonrelativistic ``mixmaster" cosmological model~\cite{MKSP}. We found that for $\omega>0$ chaotic behavior always exists. This contrasts with the case of the loop mixmaster dynamics based on loop quantum cosmology~\cite{Bo}, where the mixmaster chaos is suppressed by loop quantum effects~\cite{BD}. We recognize that the role of the UV coupling parameter $\omega$ is intrinsically different from that of the area quantum number $j$ of loop quantum cosmology, which controls the volume of the universe. In our case, the time variable (related to the volume $V=e^{3\alpha}$) as well as the two physical degrees of anisotropy $\beta_{\pm}$ are treated in the standard way without quantization. In the loop quantum framework, however, all three scale factors were quantized using the loop techniques. Hence the two are quite different: the potential wells at the origin never disappear for any $\omega>0$ in the $z=2$ Ho\v{r}ava-Lifshitz gravity, while in loop quantum gravity the height of the potential wall rapidly decreases until it disappears completely as the Planck scale is reached. On the other hand, it was interestingly shown that adding the 4D curvature squared term $(^{4}R)^2$ (and possibly other curvature squared terms) to the Einstein gravity leads to the interesting result that the chaotic behavior is absent~\cite{BC1,BC2,BC3}. Hence it is very curious to see why $(^{4}R)^2$ does suppress chaotic behavior but the 3D curvature squared terms $\frac{3}{4\omega}R^2-\frac{2}{\omega}R_{ij}R^{ij}$ do not. It was argued that the absence of chaos in the covariant higher curvature generalization $f(^{4}R)$ of Einstein gravity is due to the presence of a scalar $\varphi=\log f'(^{4}R)$~\cite{BBLP}. 
This scalar slows down the velocity of the point particle (the universe) relative to the moving walls and thus, the universe will bounce back only if it moves not too obliquely relative to the walls. A few collisions are sufficient to make its motion so oblique that it will not bounce off another wall. The universe then quickly enters a definite Kasner trajectory and stays there all the time in its approach to the singularity. Hence, the evolution of the universe is not chaotic. It is therefore very interesting to investigate the cosmological application of the $z=3$ Ho\v{r}ava-Lifshitz gravity in conjunction with the mixmaster universe based on anisotropy and homogeneity, because this Ho\v{r}ava-Lifshitz gravity may be regarded as a strong candidate for quantum gravity. The mixmaster universe in the $z=3$ Ho\v{r}ava-Lifshitz gravity was discussed in Ref.~\cite{BBLP}. However, the authors of~\cite{BBLP} have focused on the Cotton bilinear term $C_{ij}C^{ij}$ only and thus, have only briefly sketched possible dynamical behaviors of the universe when approaching the initial singularity. For the isotropic case of the $z=3$ Ho\v{r}ava-Lifshitz gravity, the $k=1$ FRW universe with dark radiation and dust matter ($w=0$) has led to a matter bounce. If the Ho\v{r}ava-Lifshitz gravity is true, the universe did not bang; it bounced. That is, a universe filled with matter will contract down to a small but finite size and then bounce again, giving us the expanding universe that we see today. This bounce scenario indicates a key feature of the Ho\v{r}ava-Lifshitz gravity, showing an essential difference from the big bang scenario. On the other hand, for the anisotropic case of Einstein gravity, the mixmaster universe filled with stiff matter ($w=1$) has led to a non-chaotic universe because there is a slowing down of the particle velocity, which is unable to reach the walls any more after some time in the moving wall picture. 
Hence, an urgent issue for an anisotropic mixmaster universe is to see whether there exists a mechanism to slow down the particle velocity in the $z=3$ Ho\v{r}ava-Lifshitz gravity. In this work, we wish to find whether a mechanism to stop chaotic behavior exists in the Ho\v{r}ava-Lifshitz gravity. We will analyse the $z=3$ deformed Ho\v{r}ava-Lifshitz gravity without cosmological constant to keep the situation simple. \section{$z=3$ deformed Ho\v{r}ava-Lifshitz gravity} In order to obtain the associated Hamiltonian within the ADM formalism~\cite{adm} of the $z=3$ deformed Ho\v{r}ava-Lifshitz gravity~\cite{ho1,KS,Myungch}, we have to find three potentials in the 6D phase space: the IR-potential $V_{IR}$ from the 3D curvature $R$ and two UV-potentials: $V^{(I)}_{UV}$ from the curvature squared terms $R^2$ and $R_{ij}R^{ij}$ with UV coupling parameter $\omega$, and $V^{(II)}_{UV}$ from $C_{ij}R^{ij}$ and $C_{ij}C^{ij}$ with additional coupling constant $\epsilon$. We start with the action of the $z=3$ deformed Ho\v{r}ava-Lifshitz gravity~\cite{ho1,KS} \begin{eqnarray} S_\lambda = \int dtd^3x\sqrt{g}N\left[\frac{2}{\kappa^2}(K_{ij}K^{ij}-\lambda K^2) + \mu^4 R + \frac{\kappa^2\mu^2(1-4\lambda)}{32(1-3\lambda)}R^2 -\frac{\kappa^2\mu^2}{8}R_{ij}R^{ij} +\frac{\kappa^2\mu}{2\eta^2}C_{ij}R^{ij} -\frac{\kappa^2}{2\eta^4}C_{ij}C^{ij}\right] \end{eqnarray} with four parameters $\kappa,~\mu,~\lambda$, and $\eta$. In the case of $\lambda=1$, the above action leads to \begin{eqnarray} \label{action} S_{\lambda=1}= \int dtd^3x \sqrt{g}N \mu^4 \Bigg[\frac{1}{c^2}(K_{ij}K^{ij}-K^2) +R+\frac{3}{4\omega}R^2-\frac{2}{\omega}R_{ij}R^{ij} +\frac{8\sqrt{2}}{\omega^{7/6}\epsilon}C_{ij}R^{ij} -\frac{16}{\omega^{4/3}\epsilon^2}C_{ij}C^{ij}\Bigg] \end{eqnarray} where the two UV coupling parameters $\omega=16\mu^2/\kappa^2$ and $\epsilon=\eta^2c^{1/3}$ are introduced to control the curvature and Cotton squared terms~\cite{Myungch}. 
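With $\omega=16\mu^2/\kappa^2$ and $c^2=\kappa^2\mu^4/2$, the coefficients of the two forms of the action must agree at $\lambda=1$; a symbolic check of ours:

```python
import sympy as sp

kappa, mu = sp.symbols('kappa mu', positive=True)
omega = 16 * mu**2 / kappa**2            # omega = 16 mu^2 / kappa^2
c2 = kappa**2 * mu**4 / 2                # c^2 = kappa^2 mu^4 / 2
lam = 1

# kinetic term: 2/kappa^2 = mu^4 / c^2
assert sp.simplify(2/kappa**2 - mu**4 / c2) == 0

# R^2 term: kappa^2 mu^2 (1-4 lam)/(32(1-3 lam)) = mu^4 * 3/(4 omega)
coef_R2 = kappa**2 * mu**2 * (1 - 4*lam) / (32 * (1 - 3*lam))
assert sp.simplify(coef_R2 - mu**4 * sp.Rational(3, 4) / omega) == 0

# R_ij R^ij term: kappa^2 mu^2 / 8 = mu^4 * 2/omega
assert sp.simplify(kappa**2 * mu**2 / 8 - mu**4 * 2 / omega) == 0
```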
In the limit $\omega\to \infty ~(\kappa^2 \to 0)$, $S_{\lambda=1}$ reduces to Einstein gravity (GR) with the speed of light $c^2=\kappa^2\mu^4/2$ and Newton's constant $G=\kappa^2/(32\pi c)$. Now, let us introduce the metric for the mixmaster universe, which distinguishes between expansion (volume change: $\alpha$) and anisotropy (shape change: $\beta_{ij}$), as follows \begin{equation} \label{metric} ds^2=-dt^2+e^{2\alpha}e^{2\beta_{ij}}\sigma^i \otimes \sigma^j, \end{equation} where the $\sigma^i$ are the one-forms given by \begin{eqnarray} && \sigma^1=\cos\psi d\theta+\sin\psi\sin\theta d\phi,\nonumber\\ && \sigma^2=\sin\psi d\theta-\cos\psi\sin\theta d\phi,\nonumber\\ && \sigma^3=d\psi+\cos\theta d\phi \end{eqnarray} on the three-sphere parameterized by the Euler angles ($\psi,\theta,\phi$) with $0\leq\psi<4\pi$, $0\leq\theta<\pi$, and $0\leq\phi<2\pi$. The shape change $\beta_{ij}$ is a $3\times 3$ traceless symmetric tensor with det[$e^{2\beta_{ij}}]=1$, expressed in terms of two independent shape parameters $\beta_\pm$ as \begin{equation} \label{betapm} \beta_{11}=\beta_++\sqrt{3}\beta_-,~~\beta_{22}=\beta_+-\sqrt{3}\beta_-,~~\beta_{33}=-2\beta_+. \end{equation} Then, the evolution of the universe can be described by giving $\beta_\pm$ as functions of $\alpha$. Note that the $k=1$ FRW universe is the special case $\beta_\pm=0$. We now concentrate on the behavior near the singularity. Then, empty space without matter is sufficient to display the generic local evolution close to the singularity, because the terms due to dust matter or radiation are negligible there. Before we proceed, let us consider Einstein gravity. Using Eq. 
(\ref{metric}), the 3D curvature takes the form \begin{equation} R= -12e^{-2\alpha}V_{IR}(\beta_+,\beta_-), \end{equation} where the IR-potential of curvature anisotropy is given by \begin{eqnarray} V_{IR}(\beta_+,\beta_-) = \frac{1}{24}\Big[2e^{4\beta_+}\cosh(4\sqrt{3}\beta_-)+e^{-8\beta_+}\Big] - \frac{1}{12}\Big[2e^{-2\beta_+}\cosh(2\sqrt{3}\beta_-)+e^{4\beta_+}\Big]. \end{eqnarray} \begin{figure}[t] \includegraphics{fig1.eps} \caption{The typical potential well $V_{IR}$ for fixed $\alpha=1$. Three canyon lines are located at $\beta_-=0$ and $\beta_-=\pm \sqrt{3}\beta_+$. } \label{fig1.eps} \end{figure} \begin{figure}[t] \includegraphics{fig2.eps} \caption{The equipotential curves of $V_{IR}$. The left panel shows equipotential curves viewed from the top and the right panel indicates the shape of the potential $V_{IR}$, which has no local maxima along the canyon line $\beta_-=0$, compared to $V_{UV}^{(I)}$ and $V_{UV}^{(II)}$. } \label{fig2.eps} \end{figure} Figure \ref{fig1.eps} depicts a typical IR-potential with three canyon lines located at $\beta_-=0$ and $\beta_-=\pm \sqrt{3} \beta_+$, showing an axial symmetry. It has the shape of an equilateral triangle in the space labeled by ($\beta_+,\beta_-$) and exponentially steep walls far away from the origin. As is shown in Fig. \ref{fig2.eps}, the potential forms a well close to the origin $(0,0)$: the left panel shows equipotential curves viewed from the top, while the right panel shows the shape of the potential. The origin (0,0), which corresponds to the isotropic case, is the global minimum with a negative value. Near the origin, the IR-potential takes the concentric form of equipotential curves, \begin{equation} \label{zzvir} V_{IR}(\beta_+,\beta_-)\approx -\frac{1}{8}+\beta^2_++\beta^2_-. \end{equation} On the other hand, the asymptotic form of the IR-potential for $\beta_- \ll 1 $ is either $V_{IR}\approx 2e^{4\beta_+}\beta^2_-$ if $\beta_+ \to \infty$ or $V_{IR}\approx \frac{1}{24}e^{-8\beta_+}$ if $\beta_+\to -\infty$. 
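The stated properties (the unimodular shape factor of (\ref{betapm}), the value $-1/8$ at the isotropic point, the quadratic well, and the canyon behavior of $V_{IR}$) can all be verified; a sketch of ours, not part of the paper:

```python
import numpy as np
import sympy as sp

# shape tensor: traceless, so det e^{2 beta} = 1 (pure shape change)
rng = np.random.default_rng(3)
bpv, bmv = rng.standard_normal(2)            # numeric beta_+, beta_-
b = np.array([bpv + np.sqrt(3)*bmv, bpv - np.sqrt(3)*bmv, -2*bpv])
assert np.isclose(b.sum(), 0.0)
assert np.isclose(np.linalg.det(np.diag(np.exp(2*b))), 1.0)

# IR potential of curvature anisotropy
bp, bm, t = sp.symbols('beta_p beta_m t', real=True)
s3 = sp.sqrt(3)
V = (sp.Rational(1, 24)*(2*sp.exp(4*bp)*sp.cosh(4*s3*bm) + sp.exp(-8*bp))
     - sp.Rational(1, 12)*(2*sp.exp(-2*bp)*sp.cosh(2*s3*bm) + sp.exp(4*bp)))

# isotropic point: V(0,0) = -1/8
assert V.subs({bp: 0, bm: 0}) == sp.Rational(-1, 8)

# near the origin: V ~ -1/8 + beta_+^2 + beta_-^2
quad = sp.series(V.subs({bp: t*bp, bm: t*bm}), t, 0, 3).removeO().subs(t, 1)
assert sp.simplify(sp.expand(quad) - (sp.Rational(-1, 8) + bp**2 + bm**2)) == 0

# along the canyon beta_- = 0 the potential goes to zero from below
Vc = sp.simplify(V.subs(bm, 0))
assert sp.limit(Vc, bp, sp.oo) == 0 and float(Vc.subs(bp, 5)) < 0
```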
That is, both walls grow exponentially. $V_{IR}$ has no local maxima along $\beta_-=0$, compared to $V_{UV}^{(I)}$ and $V_{UV}^{(II)}$. For $\beta_-=0$, the IR-potential approaches zero from below if $\beta_+\to \infty$. Hence a point particle with positive energy $E>0$ can escape to infinity along the canyon lines. The smallest deviation from axial symmetry will drive the particle against the infinitely steep walls. The evolution of the universe is described by the motion of a point $\beta=(\beta_+,\beta_-)$ as a function of $\alpha$ using the time-dependent Lagrangian. The exponential wall picture of the IR-potential implies that a particle (the universe) runs through almost free (Kasner) epochs where the potential can be neglected, and is reflected at the walls, resulting in an infinite number of oscillations. This implies that Einstein gravity with the IR-potential $V_{IR}$ shows chaotic behavior as the singularity is approached~\cite{dhn}. The action (\ref{action}) provides the time-dependent Lagrangian \begin{eqnarray} \label{actionA} {\cal L}_{\lambda=1} = (4\pi)^2 \mu^4 e^{3\alpha}\left[-6(\dot{\alpha}^2-\dot{\beta}^2_+-\dot{\beta}^2_-) -12e^{-2\alpha}V_{IR}(\beta_+,\beta_-) - \frac{e^{-4\alpha}}{16\omega}V^{(I)}_{UV}(\beta_+,\beta_-) - e^{-6\alpha} V^{(II)}_{UV}(\beta_+,\beta_-) \right], \end{eqnarray} where the dot denotes $\frac{d}{cdt}$. One needs to introduce an emergent speed of light $c$ in order to see the UV behavior, while for the IR behavior one simply chooses $c=1$.
Here, the UV-potential $V^{(I)}_{UV}$ is defined from the curvature squared terms as \begin{equation} \frac{3}{4\omega}R^2-\frac{2}{\omega}R_{ij}R^{ij} \equiv -\frac{e^{-4\alpha}}{16\omega} V^{(I)}_{UV}, \end{equation} where the UV-potential $V^{(I)}_{UV}$ takes the form of \begin{eqnarray} V^{(I)}_{UV}(\beta_+,\beta_-) &\equiv& -\left[40 \Big(e^{8\beta_+}\cosh(4\sqrt{3}\beta_-) +e^{2\beta_+}\cosh(6\sqrt{3}\beta_-)+e^{-10\beta_+}\cosh(2\sqrt{3}\beta_-)\Big) -40e^{2\beta_+}\cosh(2\sqrt{3}\beta_-)\right.\nonumber\\ && \left.+4e^{-4\beta_+}\cosh(4\sqrt{3}\beta_-)+2e^{8\beta_+}-20e^{-4\beta_+} - 42e^{8\beta_+}\cosh(8\sqrt{3}\beta_-) -21e^{-16\beta_+}\right]. \end{eqnarray} On the other hand, the other UV-potential $V^{(II)}_{UV}$ is found from the Cotton terms as \begin{equation} \frac{8\sqrt{2}}{\omega^{7/6}\epsilon}C_{ij}R^{ij} -\frac{16}{\omega^{4/3}\epsilon^2}C_{ij}C^{ij}\equiv -e^{-6\alpha} V^{(II)}_{UV}(\beta_+,\beta_-) \end{equation} with \begin{eqnarray} V^{(II)}_{UV}(\beta_+,\beta_-) &\equiv& V^{CR}_{UV}(\beta_+,\beta_-)+V^{CC}_{UV}(\beta_+,\beta_-) \nonumber \\ &=& \frac{8\sqrt{2}e^{\alpha}}{\omega^{7/6}\epsilon} \left[e^{-20\beta_+}+e^{-8\beta_+}-2e^{-14\beta_+}\cosh(2\sqrt{3}\beta_-)\right.\nonumber\\ &&~~~~~~~~ +\left.2e^{4\beta_+}(\cosh(4\sqrt{3}\beta_-)-\cosh(8\sqrt{3}\beta_-)) -2e^{10\beta_+}(\cosh(6\sqrt{3}\beta_-)-\cosh(10\sqrt{3}\beta_-))\right] \nonumber\\ &-& \frac{8}{\omega^{4/3}\epsilon^2} \left[6-3e^{-24\beta_+}+6e^{-18\beta_+}\cosh(2\sqrt{3}\beta_-) -e^{-12\beta_+}(1+2\cosh(4\sqrt{3}\beta_-))\right.\nonumber\\ &&~~~~~~~~ - 4e^{-6\beta_+}(\cosh(2\sqrt{3}\beta_-)-\cosh(6\sqrt{3}\beta_-)) -4\cosh(4\sqrt{3}\beta_-)-2\cosh(8\sqrt{3}\beta_-)\nonumber\\ && ~~~~~~~~-2e^{6\beta_+}(2\cosh(2\sqrt{3}\beta_-)+\cosh(6\sqrt{3}\beta_-)-3\cosh(10\sqrt{3}\beta_-))\nonumber\\ && ~~~~~~~~+2\left. e^{12\beta_+}(1-\cosh(4\sqrt{3}\beta_-)+3\cosh(8\sqrt{3}\beta_-)-3\cosh(12\sqrt{3}\beta_-))\right]. 
\end{eqnarray} We have thoroughly studied the $V^{(II)}_{UV}=0$ case of the $z=2$ deformed Ho\v{r}ava-Lifshitz gravity in~\cite{MKSP}, finding that chaotic behavior persists, as in Einstein gravity. Thus, we point out that a key feature of the $z=3$ deformed Ho\v{r}ava-Lifshitz gravity is the presence of the UV-potential $V^{(II)}_{UV}$. As was mentioned in~\cite{BBLP}, the Cotton bilinear term $V^{CC}_{UV}$ contributes to $V^{(II)}_{UV}$ without any $\alpha$-dependence. Near the origin $(\beta_+,\beta_-)=(0,0)$, the UV-potential $V^{(II)}_{UV}$ is approximated by \begin{equation} V^{(II)}_{UV}(\beta_+,\beta_-) \approx \left(\frac{288\sqrt{2}e^\alpha}{\omega^{7/6}\epsilon}+\frac{864}{\omega^{4/3}\epsilon^2}\right) \left(\beta^2_++\beta^2_-\right). \end{equation} This means that $V^{(II)}_{UV}(0,0)=0$, so it makes no contribution at the isotropic point (0,0), in contrast to the IR-potential $V_{IR}(0,0)=-1/8$ and the UV-potential $V^{(I)}_{UV}(0,0)=-3$. Fig. \ref{fig3.eps} shows the shape changes of the UV-potential $V^{(II)}_{UV}$ for different values of $\epsilon$, with different local maxima $V_{lm}(\beta_+,0,1,\epsilon)$. The asymptotic form for $\beta_-\ll 1$ is either $V^{(II)}_{UV}\approx \frac{6144\beta^2_- e^{12\beta_+}}{\omega^{4/3}\epsilon^2}$ if $\beta_+\rightarrow\infty$, or $V^{(II)}_{UV}\approx \frac{24e^{-24\beta_+}}{\omega^{4/3}\epsilon^2}$ if $\beta_+\rightarrow-\infty$. An important point is that unlike $V_{IR}$ and $V^{(I)}_{UV}$, the asymptotic form $V^{(II)}_{UV}\to V^{CC}_{UV}$ is independent of the volume change $\alpha$. As before, for $E>V_{lm}$, the point particle can escape to infinity along the canyon lines $\beta_-=0$ and $\beta_-=\pm \sqrt{3}\beta_+$. \begin{figure}[t] \includegraphics{fig3.eps} \caption{The UV-potential $V^{(II)}_{UV}$ for $\alpha=0,~\beta_-=0,$ and $\omega=1$ with $\epsilon=1$ (solid), $10$ (dotted), and $100$ (dashed), respectively.
In contrast to the IR-potential $V_{IR}$, there exist local maxima $V_{lm}$.} \label{fig3.eps} \end{figure} In order to appreciate the implications for the chaotic approach to the singularity in the $z=3$ deformed Ho\v{r}ava-Lifshitz gravity, we calculate the Hamiltonian density by introducing three canonical momenta as \begin{equation} p_\pm=\frac{\partial {\cal L}_{\lambda=1}}{\partial\dot{\beta}_\pm}=12(4\pi)^2\mu^4e^{3\alpha}\dot{\beta}_\pm, ~~~p_\alpha=\frac{\partial {\cal L}_{\lambda=1}}{\partial\dot{\alpha}}=-12(4\pi)^2\mu^4e^{3\alpha}\dot{\alpha}. \end{equation} The normalized canonical Hamiltonian in 6D phase space is given by \begin{eqnarray} {\cal H}_{6D} &=&\frac{1}{2}(p^2_++p^2_--p^2_\alpha) +e^{4\alpha}\Big(V_{IR}+\frac{e^{-2\alpha}}{192 \omega}V^{(I)}_{UV} +\frac{e^{-4\alpha}}{12}V^{(II)}_{UV}\Big)\nonumber \\ \label{potalpha} &\equiv &\frac{1}{2}(p^2_++p^2_--p^2_\alpha)+V_{\alpha}(\beta_+,\beta_-,\omega,\epsilon), \end{eqnarray} where we have redefined ${\cal H}_{6D}=12(4\pi)^2\mu^4e^{3\alpha}{\cal H}_c$ using the canonical Hamiltonian ${\cal H}_c$, and chosen the parameter $12(4\pi)^2\mu^4=1$ for simplicity. Then, the Hamiltonian equations of motion are \begin{eqnarray} \label{betapmeq} && \dot{\beta}_\pm=p_\pm, ~~\dot{p}_\pm=-e^{4\alpha}\frac{\partial V_{IR}}{\partial\beta_\pm} -\frac{e^{2\alpha}}{192\omega}\frac{\partial V^{(I)}_{UV}}{\partial\beta_\pm} -\frac{1}{12}\frac{\partial V^{(II)}_{UV}}{\partial\beta_\pm},\\ \label{alphaeq} && \dot{\alpha}=-p_\alpha,~~\dot{p}_\alpha =-4e^{4\alpha}V_{IR} -\frac{e^{2\alpha}}{96\omega}V^{(I)}_{UV}-\frac{1}{12}V^{CR}_{UV} \end{eqnarray} in 6D phase space. \section{Isotropic evolution} Now, let us see what happens at the isotropic point (0,0), where the $k=1$ FRW universe emerges. Since the isotropic potential does not receive any contribution from the Cotton tensor, it is given by \begin{equation} V_{\alpha}(0,0,\omega)=-\Big(\frac{e^{4\alpha}}{8}+\frac{e^{2\alpha}}{64 \omega}\Big).
\end{equation} From the Hamiltonian constraint ${\cal H}_{6D}\approx0$, we have the first Friedmann equation \begin{equation} \dot{\alpha}^2=-\frac{1}{4}\Big(\frac{1}{e^{2\alpha}}+\frac{1}{8\omega}\frac{1}{e^{4 \alpha}}\Big). \end{equation} Introducing the scaling factor $a=2e^{\alpha}$ with $H=\frac{\dot{a}}{a}=\dot{\alpha}$, the above equation leads to \begin{equation} H^2=-\Big(\frac{1}{a^2}+\frac{1}{2\omega}\frac{1}{a^4}\Big), \end{equation} which is the same equation that appears in Ho\v{r}ava-Lifshitz cosmology~\cite{Bra,Rama,LS,Myungch}. The second term on the right-hand side represents the dark radiation with negative energy density. This means that the universe cannot evolve isotropically in vacuum without turning on some shearing components. Adding a matter density $\rho=\frac{\rho_0}{a^{3(1+w)}}$ to the above equation leads to \begin{equation} H^2=-\Big(\frac{1}{a^2}+\frac{1}{2\omega}\frac{1}{a^4}\Big)+\frac{\rho_0}{a^{3(1+w)}}. \end{equation} The solution to this equation can be obtained for $-1/3<w<1/3$. Neglecting the first (curvature) term, there can be a bounce in $a$ that replaces the initial singularity of the universe. This is the only case in which dark radiation with negative energy density can grow relative to a regular matter energy density. However, small deviations from isotropy will be dominant in the small volume limit of $a\to 0~(\alpha \to -\infty)$ because the Cotton bilinear term, which is independent of $\alpha(a)$, enters $V_\alpha(\beta_+,\beta_-,\omega,\epsilon)$ and washes out the effects of the dark radiation term. The kinetic energy of the anisotropy parameters $\beta_\pm$ also contributes to the evolution of the universe. This implies that the cosmological bounce is unstable against anisotropy, and the universe can end in a singular Kasner state. In the next section, we wish to study the mixmaster universe of the $z=3$ Ho\v{r}ava-Lifshitz gravity explicitly.
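The bounce can be made concrete with a small numerical sketch (the illustrative values $\omega=\rho_0=1$ and dust $w=0$ are our own assumptions; the curvature term is neglected, as in the text). Setting $H^2=0$ gives a bounce radius where the negative dark-radiation term balances the matter term:

```python
import numpy as np

def H2(a, omega=1.0, rho0=1.0, w=0.0):
    """Friedmann equation without the curvature term:
    H^2 = -1/(2 omega a^4) + rho0 / a^{3(1+w)}."""
    return -1.0/(2*omega*a**4) + rho0/a**(3*(1 + w))

def bounce_radius(lo=1e-3, hi=10.0, tol=1e-12):
    """Bisection for the root of H^2(a) = 0 (dust, w = 0)."""
    while hi - lo > tol:
        mid = 0.5*(lo + hi)
        # H^2 < 0 below the bounce, > 0 above it
        if H2(mid) < 0:
            lo = mid
        else:
            hi = mid
    return 0.5*(lo + hi)

a_b = bounce_radius()
print(a_b)                       # 0.5: here contraction halts and reverses
print(H2(0.25) < 0 < H2(1.0))    # True: a < a_b is classically forbidden
```

For dust the root can also be read off analytically: $H^2=a^{-4}(a-\tfrac{1}{2\omega\rho_0})\rho_0$ vanishes at $a_b=1/(2\omega\rho_0)$, which the bisection reproduces.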
\section{Chaotic behavior in reduced 4D phase space} Chernoff and Barrow have shown that the mixmaster 6D phase space can be split into the product of a 4D phase space showing chaotic behavior and a 2D phase space showing regular behavior~\cite{mix3}. Hence, we confine the dynamical system to a 4D phase space describing the 4D static billiard in this section. Setting $\alpha = 1$, let us consider the motion of a particle (the universe) with coordinates ($\beta_+,\beta_-$) under the potential \begin{equation} V(\beta_+,\beta_-,\omega,\epsilon) =e^4\Bigg[V_{IR}+\frac{V^{(I)}_{UV}}{192 e^2 \omega}+\frac{V^{(II)}_{UV}}{12 e^4}\Bigg]. \end{equation} This potential has the symmetry of an equilateral triangle, reflecting the equivalence of the three axes in the metric (\ref{betapm})~\cite{mix1}. Explicitly, a particle moves in the potential with exponential walls bounding a triangle. We mention again that near the origin (0,0), the potential $V(\beta_+,\beta_-,\omega,\epsilon)$ takes approximately the form \begin{equation} \label{zzv} V(\beta_+,\beta_-,\omega,\epsilon) \approx -\Big(\frac{e^{4}}{8}+\frac{e^2}{64\omega}\Big)+ \Big(e^4+\frac{17e^2}{4\omega}+\frac{24\sqrt{2}e}{\omega^{7/6}\epsilon} +\frac{72}{\omega^{4/3}\epsilon^2}\Big)\Big(\beta^2_++\beta^2_-\Big).\end{equation} Comparing (\ref{zzv}) with (\ref{zzvir}), the former reduces to the latter up to the factor $e^4$ in the IR-limit $\omega \to \infty$. It turns out that adding the UV-potential $V^{(I)}_{UV}$ makes the potential well deeper, compared to $V^{(II)}_{UV}$. An important check is whether an inflection point at the origin $(\beta_+,\beta_-)=(0,0)$ appears as $\omega$ varies, which might signal a change from chaotic to non-chaotic behavior.
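The absence of such an inflection point can be checked directly from the expansion above: $V''$ at the origin is twice the quadratic coefficient, and every term is positive for $\omega,\epsilon>0$. A brief sketch scanning the parameter space (the grid ranges are our own choice):

```python
import numpy as np

E = np.e

def d2V_origin(omega, eps):
    """V''(0,0): twice the quadratic coefficient of the near-origin
    expansion of V(beta_+, beta_-, omega, eps)."""
    return (2*E**4 + 17*E**2/(2*omega)
            + 48*np.sqrt(2)*E/(omega**(7/6)*eps)
            + 144/(omega**(4/3)*eps**2))

# every term is positive for omega, eps > 0: no inflection point
grid = np.logspace(-3, 3, 25)
vals = [d2V_origin(w, s) for w in grid for s in grid]
print(min(vals) > 2*E**4)   # True: V'' never drops below the GR value
```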
This inflection point is determined by the condition \begin{equation} V''(\beta_+,0,\omega,\epsilon)|_{\beta_+=0}=V''(0,\beta_-,\omega,\epsilon)|_{\beta_-=0}=0, \end{equation} which leads to the algebraic equation \begin{equation} \label{inflection} 2e^4+\frac{17e^2}{2\omega}+\frac{48\sqrt{2}e}{\omega^{7/6}\epsilon}+\frac{144}{\omega^{4/3}\epsilon^2}=0. \end{equation} However, there is no positive solution for $\omega$ and $\epsilon$ in Eq. (\ref{inflection}), since every term on the left-hand side is positive. This shows clearly that an inflection point cannot be developed by adjusting $\omega$ and $\epsilon$. We have the same result even for negative $\epsilon$ because the Cotton potential $V_{UV}^{(II)}$ is always zero at the origin. This means that we cannot induce a transition from chaotic to non-chaotic behavior in the 4D phase space. Explicitly, as shown in Fig. \ref{fig4.eps}, the shape of the potential near the origin $(0,0)$ does not change significantly as the parameter $\omega$ is varied from 100 to 0.01 ($\epsilon$ from 100 to 1). For the $\omega=100, 1$ cases, there are no essential differences when comparing with the IR case of $\omega=\infty$ (Einstein gravity: EG). The origin (0,0) always remains the global minimum, regardless of the value of $\omega$, which regulates the UV effects. The only difference is the appearance of local maxima $V_{lm}$ as $\omega$ decreases. \begin{figure}[b] \includegraphics{fig4.eps} \caption{Three types of potential graphs $V(\beta_+,0,\omega,\epsilon)$ for (a) the $\omega=100$ and $\epsilon=100$ case (EG) without local maximum; (b) the $\omega=0.01$ and $\epsilon=100$ case ($z=2$ HL) with local maximum $V_{lm}=7.116$; (c) the $\omega=0.01$ and $\epsilon=1$ case ($z=3$ HL) with local maximum $V_{lm}=113.744$.
} \label{fig4.eps} \end{figure} Chaos can be defined by the conditions that (i) the periodic points of the flow associated with the Hamiltonian are dense, (ii) there is a transitive orbit in the dynamical system, and (iii) there is sensitive dependence on initial conditions. Our reduced system is described by the 4D Hamiltonian \begin{equation} {\cal H}_{4D} =\frac{1}{2}(p^2_++p^2_-) + V(\beta_+,\beta_-,\omega,\epsilon). \end{equation} \begin{figure}[t] \includegraphics{fig5.eps} \caption{Poincar\'{e} sections for the $\omega=100$ and $\epsilon=100$ case (EG) with (a) $E=-6.5$, (b) $E=-6.0$, (c) $E=-5.0$, (d) $E=-4.0$. } \label{fig5.eps} \end{figure} \begin{figure} \includegraphics{fig6.eps} \caption{Poincar\'{e} sections for the $\omega=0.01$ and $\epsilon=100$ case ($z=2$ HL gravity) with (a) $E=-15.0$, (b) $E=-13.0$, (c) $E=-7.0$, (d) $E=-4.0$. } \label{fig6.eps} \end{figure} \begin{figure} \includegraphics{fig7.eps} \caption{Poincar\'{e} sections for the $\omega=0.01$ and $\epsilon=1$ case ($z=3$ HL gravity) with (a) $E=3.0$, (b) $E=20.0$, (c) $E=50.0$, (d) $E=60.0$. } \label{fig7.eps} \end{figure} Now, let us perform simulations of the dynamics and present Poincar\'{e} sections, which display the trajectories in the $(p_+,\beta_+)$ phase space as the total energy $E={\cal H}_{4D}$ of the system is varied. We perform the analysis for three cases: $\omega=100,~\epsilon=100$ (EG); $\omega=0.01,~\epsilon=100$ ($z=2$ HL gravity); and $\omega=0.01,~\epsilon=1$ ($z=3$ HL gravity). We have found that the chaotic behavior persists for all $\omega>0$. Figs. \ref{fig5.eps}, \ref{fig6.eps}, and \ref{fig7.eps} correspond to Einstein gravity (EG), $z=2$ HL gravity, and $z=3$ HL gravity, respectively. They show the intersections of several computed trajectories, displayed in ($p_+,\beta_+$) with $\beta_-=0$ for different values of the energy. In each plot, we choose an initial point which corresponds to a prescribed kinetic energy.
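A minimal sketch of how such sections can be generated (our own illustrative code, not the authors'): we keep only the IR term of the potential, i.e. the EG limit $V\approx e^4 V_{IR}$ with $\alpha=1$, integrate ${\cal H}_{4D}$ with a leapfrog scheme, and record $(\beta_+,p_+)$ at upward crossings of $\beta_-=0$; plotting the recorded points (e.g. with matplotlib) yields sections analogous to those in Fig. \ref{fig5.eps}.

```python
import numpy as np

S3 = np.sqrt(3.0)

def V(b):
    """EG limit of the billiard potential: V = e^4 * V_IR (alpha = 1)."""
    bp, bm = b
    v_ir = (1/24)*(2*np.exp(4*bp)*np.cosh(4*S3*bm) + np.exp(-8*bp)) \
         - (1/12)*(2*np.exp(-2*bp)*np.cosh(2*S3*bm) + np.exp(4*bp))
    return np.e**4 * v_ir

def gradV(b, h=1e-6):
    """Central-difference gradient of V."""
    g = np.zeros(2)
    for i in range(2):
        d = np.zeros(2); d[i] = h
        g[i] = (V(b + d) - V(b - d)) / (2*h)
    return g

def poincare(b0, p0, dt=2e-3, steps=60_000):
    """Leapfrog integration of H_4D; record (beta_+, p_+) at beta_- = 0."""
    b, p = np.array(b0, float), np.array(p0, float)
    section = []
    for _ in range(steps):
        ph = p - 0.5*dt*gradV(b)          # half kick
        bn = b + dt*ph                    # drift
        pn = ph - 0.5*dt*gradV(bn)        # half kick
        if b[1] < 0 <= bn[1]:             # upward crossing of beta_- = 0
            f = -b[1]/(bn[1] - b[1])      # linear interpolation fraction
            section.append((b[0] + f*(bn[0]-b[0]), p[0] + f*(pn[0]-p[0])))
        b, p = bn, pn
    return np.array(section), b, p

E_tot = -6.0                              # total energy, as in panel (b)
b0 = np.array([0.05, 0.0])
ke = E_tot - V(b0)                        # kinetic energy at the start
sec, b, p = poincare(b0, [np.sqrt(2*ke), 1e-3])
print(len(sec) > 0)                       # True: many section crossings
```

Leapfrog is chosen because it is symplectic, so the energy error stays bounded over the long integration needed to populate the section.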
The results of the Poincar\'{e} sections show that for lower energy within the potential well, integrable behavior dominates and the intersections of trajectories form closed curves. On the other hand, for higher energy within the potential well, the closed curves gradually break up and the bounded phase space fills with a chaotic sea. The same kinds of plots have been obtained for the other phase space ($p_-,\beta_-$) with $\beta_+=0$. As a result, we find that chaotic behavior always exists for $\omega>0$. This contrasts with the case of the loop mixmaster dynamics based on loop quantum cosmology~\cite{Bo}, where the mixmaster chaos is suppressed by loop quantum effects~\cite{BD}. However, we have to distinguish the parameter ``$\omega$'' from the quantum number ``$j$''. The former describes the regulation of UV effects without spacetime quantization, while the latter encodes the spacetime quantization and controls the size of the universe. Hence, the role of the UV coupling parameter $\omega$ is different from that of the quantum number $j$ of loop quantum cosmology. In our case, the time variable (related to the volume $V=e^{3\alpha}$) as well as the two physical degrees of anisotropy $\beta_{\pm}$ are treated in the standard way without quantization. However, in the loop quantum framework, all three scale factors were quantized using the loop techniques. Hence, the two are quite different: the potential well at the origin never disappears for any $\omega>0$ in the $z=3$ Ho\v{r}ava-Lifshitz gravity, while in loop quantum cosmology the height of the potential wall rapidly decreases until the walls disappear completely as the Planck scale is reached. In order to see an effect similar to decreasing $j$, we must treat the volume change $\alpha$ seriously as a dynamical variable, and thus need further work in the 6D phase space.
\section{Chaotic behavior in 6D phase space} We remind the reader that the true phase space is 6D for the vacuum universe; thus, we have a movable billiard with the potential $V_\alpha(\beta_+,\beta_-,\omega,\epsilon)$ in Eq. (\ref{potalpha}), because the walls move with time, since the logarithm of the volume, $\alpha=\frac{1}{3}\ln V$, and its derivative enter the system. In this case, $\alpha$ and $p_\alpha$ are regular variables as functions of time. In this section, we investigate the possibility of finding non-chaotic behavior by considering the small volume limit of $\alpha \to -\infty$. To this end, it is convenient to introduce a new time $\tau$ defined by~\cite{mis} \begin{equation} \label{newtime} \tau=\int \frac{dt}{V},~~V=e^{3\alpha}, \end{equation} which makes the decoupling of the volume $\alpha$ from the shape $\beta_\pm$ explicit. Starting from the action (\ref{action}) and integrating out the space variables, we have \begin{equation} \label{actionB} \bar{S}_{\lambda=1} = (4\pi)^2 \mu^4 \int d\tau \frac{e^{3\alpha}N}{V} \left[6(-\alpha^{'2}+\beta^{'2}_++\beta^{'2}_-) -V^2\left(12e^{-2\alpha}V_{IR}(\beta_+,\beta_-) + \frac{e^{-4\alpha}}{16\omega}V^{(I)}_{UV}(\beta_+,\beta_-) + e^{-6\alpha} V^{(II)}_{UV}(\beta_+,\beta_-)\right) \right], \end{equation} where the prime ($'$) denotes the derivative with respect to $\tau$. Plugging $N=1$ into (\ref{actionB}), we obtain the Lagrangian \begin{equation} \label{Lagtau} \bar{\cal L}_{\lambda=1} = (4\pi)^2 \mu^4 \left[6(-\alpha^{'2}+\beta^{'2}_++\beta^{'2}_-) -12e^{4\alpha}\left(V_{IR}(\beta_+,\beta_-) + \frac{e^{-2\alpha}}{192\omega}V^{(I)}_{UV}(\beta_+,\beta_-) + \frac{e^{-4\alpha}}{12} V^{(II)}_{UV}(\beta_+,\beta_-)\right) \right].
\end{equation} The canonical momenta are given by \begin{equation} \bar{p}_\pm=\frac{\partial \bar{\cal L}_{\lambda=1}}{\partial\beta'_\pm} =12(4\pi)^2\mu^4\beta'_\pm, ~~~\bar{p}_\alpha=\frac{\partial \bar{\cal L}_{\lambda=1}}{\partial\alpha'} =-12(4\pi)^2\mu^4\alpha'. \end{equation} Then, the canonical Hamiltonian in 6D phase space is obtained as \begin{eqnarray} \bar{\cal H}_{6D} &=& \bar{p}_\alpha\alpha'+\bar{p}_+\beta'_++\bar{p}_-\beta'_--\bar{\cal L}_{\lambda=1}\nonumber\\ &=& \frac{1}{2}(\bar{p}^2_++\bar{p}^2_--\bar{p}^2_\alpha) +e^{4\alpha}\Big(V_{IR}+\frac{e^{-2\alpha}}{192 \omega}V^{(I)}_{UV} +\frac{e^{-4\alpha}}{12}V^{(II)}_{UV}\Big), \end{eqnarray} where we have chosen the parameter $12(4\pi)^2\mu^4=1$ for simplicity. Then, the Hamiltonian equations of motion are obtained as \begin{eqnarray} &&\label{appdix1} \beta'_\pm=\bar{p}_\pm, ~~\bar{p}'_\pm=-e^{4\alpha}\frac{\partial V_{IR}}{\partial\beta_\pm} -\frac{e^{2\alpha}}{192\omega}\frac{\partial V^{(I)}_{UV}}{\partial\beta_\pm} -\frac{1}{12}\frac{\partial V^{(II)}_{UV}}{\partial\beta_\pm},\\ &&\label{appdix2} \alpha'=-\bar{p}_\alpha, ~~\bar{p}'_\alpha =-4e^{4\alpha}V_{IR} -\frac{e^{2\alpha}}{96\omega}V^{(I)}_{UV}-\frac{1}{12}V^{CR}_{UV}. \end{eqnarray} Comparing Eqs. (\ref{appdix1}) and (\ref{appdix2}) with Eqs. (\ref{betapmeq}) and (\ref{alphaeq}), we note that the Hamiltonian and its equations of motion are unchanged except for the replacement of $t$ by $\tau$. From Eq. (\ref{appdix2}), the evolution of $\alpha$ is determined by \begin{equation} \alpha''=4e^{4\alpha}V_{IR} +\frac{e^{2\alpha}}{96\omega}V^{(I)}_{UV} +\frac{1}{12}V^{CR}_{UV}. \end{equation} Then, we obtain a 6D phase space consisting of the product of a 4D chaotic phase space and a 2D regular one for the $\alpha$ and $p_\alpha$ variables.
As the volume goes to zero near the singularity ($e^{4\alpha}\to 0,~p_\alpha \to 0$), one finds the limit \begin{equation} \bar{{\cal H}}_{6D} \to \frac{1}{2}\Big(\bar{p}_+^2+\bar{p}_-^2\Big)+K \not={\cal H}_{4D}. \end{equation} Hence, we note that the 6D system is not asymptotic in $\tau$ to the previous 4D system. Now, we are in a position to show whether the presence of the UV-potential can suppress the chaotic behavior existing in the IR-potential. To carry this out, we introduce two velocities: the particle velocity $v_p$ and the wall velocity $v_w$, defined by \begin{equation} v_p=\sqrt{\bar{p}_+^2+\bar{p}_-^2},~~v_w=\frac{d\beta_+^w}{d\tau}, \end{equation} where the wall location $\beta_+^w$ is determined by the condition that the asymptotic potential $K$ is significantly felt by the particle, \begin{equation} \label{asymK} \bar{p}^2_\alpha\approx 2K= \Bigg[\frac{e^{4\alpha-8\beta_+}}{12}+\frac{7e^{2\alpha-16\beta_+}}{32\omega} +\frac{4\sqrt{2}e^{\alpha-20\beta_+}}{3\omega^{7/6}\epsilon}+\frac{4e^{-24\beta_+}}{\omega^{4/3}\epsilon^2} \Bigg] \end{equation} in the limit of $\beta_+ \to -\infty$. On the other hand, the particle velocity is given by \begin{equation} \label{particlev} v_p=\sqrt{2\bar{{\cal H}}_{6D}+\bar{p}_\alpha^2-2K}. \end{equation} We consider three limiting cases: the IR-limit dominated by $V_{IR}$, and two UV-limits dominated by $V^{CR}_{UV}$ and $V^{CC}_{UV}$, respectively. In the IR-limit ($\omega \to \infty$) of Einstein gravity, the wall location is determined by \begin{equation} \beta_+^w \approx \frac{\alpha}{2}-\frac{1}{8}\ln\Big[12\bar{p}_\alpha^2\Big]. \end{equation} Then, the wall velocity is given by \begin{equation} v_w^{IR}=-\frac{d\beta_+^w}{d\tau} \approx \frac{\bar{p}_\alpha}{2}+\frac{e^{4\alpha-8\beta_+}}{24\bar{p}_\alpha}, \end{equation} which leads to \begin{equation} |v_w^{IR}| \approx \frac{|\bar{p}_\alpha|}{2}.
\end{equation} As a result, we find that the particle velocity is always greater than the wall velocity, \begin{equation} v_p^{IR}=\sqrt{2{\bar{{\cal H}}_{6D}+\bar{p}_\alpha^2-2e^{4\alpha}V_{IR}}} \approx |\bar{p}_{\alpha}|>v_w^{IR}. \end{equation} Thus, there will be an infinite number of collisions of the particle against the walls, since it always catches up with a wall~\cite{mix6,mix7}. Next, let us investigate what happens in the UV-limit. We mention that the Cotton bilinear term $C_{ij}C^{ij}$ is marginal in the $z=3$ Ho\v{r}ava-Lifshitz action and is expected to dominate in the UV regime. As shown above, its potential $V^{CC}_{UV}$ is independent of the volume change $\alpha$. Hence, in this UV regime, one may approximate Eq. (\ref{appdix2}) as \begin{equation} \alpha'=-\bar{p}_\alpha,~~\bar{p}_\alpha' \approx 0 ~~(\alpha''\approx 0), \end{equation} which implies that $\alpha$ evolves as a free particle with fixed momentum $\bar{p}_\alpha$, and thus the logarithm of the spatial volume ($V=a_1a_2a_3=e^{3\alpha}$) decreases linearly at early times. Concerning the shape $\beta_\pm$ of the universe, however, the potential $V^{CC}_{UV}$ plays no definite role in determining the wall and particle velocities. The wall velocity is zero, \begin{equation} v^{CC}_w=\frac{d\beta^w_+}{d\tau}=-\frac{1}{12}\frac{\bar{p}'_\alpha}{\bar{p}_\alpha} \approx 0, \end{equation} while the particle velocity becomes imaginary, \begin{equation} v^{CC}_p \approx \sqrt{\bar{p}_\alpha^2-\frac{4e^{-24\beta_+}}{\omega^{4/3}\epsilon^2}} \end{equation} for $\bar{p}_\alpha^2<\frac{4e^{-24\beta_+}}{\omega^{4/3}\epsilon^2}$ in the limit of $\beta_+ \to -\infty$. In this case, the role of the Cotton bilinear term is trivial in the 6D phase space. Finally, we consider the $V^{CR}_{UV}$ term.
The wall velocity takes the form \begin{equation} |v^{CR}_w|= \frac{|\bar{p}_\alpha|}{20}, \end{equation} and the particle velocity is \begin{equation} v^{CR}_p \approx \sqrt{\bar{p}_\alpha^2-\frac{4\sqrt{2}e^{\alpha-20\beta_+}}{3\omega^{7/6}\epsilon}}\approx |\bar{p}_\alpha|>|v_w^{CR}| \end{equation} in the limit of $\alpha \to -\infty$. This case is similar to the IR-limit of Einstein gravity. In summary, we could not observe a slowing down of the particle velocity due to the UV effects. However, similar to Einstein gravity, the mixmaster universe of the $z=3$ deformed Ho\v{r}ava-Lifshitz gravity filled with stiff matter ($w=1$) leads to a non-chaotic universe, because the particle velocity slows down and the particle can no longer reach the walls after some time in the moving-wall picture~\cite{mix6}. \section{Discussions} First of all, we wish to mention that the mixmaster universe provides another example in which the Ho\v{r}ava-Lifshitz gravity shows chaotic behavior, like the chaotic dynamics of string and M-theory cosmology models~\cite{DH}. This may be because we did not quantize the Ho\v{r}ava-Lifshitz gravity and have studied its classical aspects only. The two relevant parameters, which characterize the $z=3$ Ho\v{r}ava-Lifshitz gravity, are $\omega$ and $\epsilon$. In the reduced 4D phase space (static billiard), there is no essential difference in the potentials between $z=1$ (Einstein gravity) and $z=3$ Ho\v{r}ava-Lifshitz gravity except for the appearance of local maxima. Unfortunately, the local maxima do not change the chaotic motion significantly, and thus the chaotic behavior persists in the $z=3$ Ho\v{r}ava-Lifshitz gravity without matter. In the 6D phase space (movable billiard), the important issue was to see whether the potential $V^{CC}_{UV}$ from the Cotton bilinear term could slow down the particle velocity $v_p$ relative to the wall velocity $v_w$.
However, we could not observe a slowing down of the particle velocity. At this stage, we compare our results with the mixmaster universe in the generalized uncertainty principle (GUP) framework~\cite{BM}. Considering the close connection between the $z=2$ Ho\v{r}ava-Lifshitz gravity and GUP~\cite{Myungch}, there may exist a cosmological relation between them. The chaotic behavior of the Bianchi IX model, which was not tamed by GUP effects, means that the deformed mixmaster universe is still a chaotic system. This is mainly because the two physical degrees of anisotropy $\beta_{\pm}$ are considered as deformed, while the time variable is treated in the standard way. This supports the correctness of our approach (without quantization). Furthermore, it was shown that adding $(^{4}R)^2$ (and possibly other) curvature terms to general relativity leads to the interesting result that the chaotic behavior is absent~\cite{BC1,BC2,BC3}. Hence, it is very curious to see why $(^{4}R)^2$ suppresses chaotic behavior but $\frac{3}{4\omega}R^2-\frac{2}{\omega}R_{ij}R^{ij}$ does not. In the latter case, an $f(R)$ action may be appropriate for this purpose~\cite{kluson}. Consequently, the presence of the UV-potentials of the $z=3$ deformed Ho\v{r}ava-Lifshitz gravity cannot suppress the chaotic behavior arising from the IR-potential, which comes from Einstein gravity. \begin{acknowledgments} Y. S. Myung was supported by Basic Science Research Program through the National Research Foundation (NRF) of Korea funded by the Ministry of Education, Science and Technology (2009-0086861). Y.-W. Kim was supported by the Korea Research Foundation Grant funded by Korea Government (MOEHRD): KRF-2007-359-C00007. W.-S. Son and Y.-J. Park were supported by the Korea Science and Engineering Foundation (KOSEF) grant funded by the Korea government (MEST) through WCU Program (No. R31-20002). \end{acknowledgments}
\section{Introduction} The author was pleased and honored to have a part in a symposium honoring Eyring that was part of an American Chemical Society meeting in Salt Lake City in March, 2009. This article is based on the author's talk in that symposium. A portion of this article, together with two additional figures, will appear in the {\it Bulletin for the History of Chemistry}, hereinafter referred to as the {\it Bulletin} article. Earlier reminiscences and biographies, including one by Henry Eyring, the scientist, and one by his grandson, Henry J. Eyring, have been published$^{1-6}$. Jan Hayes, the organizer of the symposium and this issue, has pointed out to me that I am a coauthor of the last of Henry's publications. This is somewhat accidental, as my book with him was the second edition of {\it Statistical Mechanics and Dynamics}, the first edition having appeared nearly twenty years earlier. Additionally, the publisher wanted camera-ready copy for the second edition, and since my employer at the time, IBM, could hardly be expected to have me devote all my time to the preparation of the manuscript for this book, the production of the camera-ready copy took five years. Had the preparation proceeded more quickly, I would not have occupied this position. Recently, one of Henry's sons told me that Henry loved me. This is no great distinction, as Henry thought positively of everyone. However, perhaps he loved some people more than others. He was a very warm and generous person. My parents were nervous when they were to meet such an eminent person. He immediately put them at ease. In any case, my truthful reply to his son was that I loved him. Henry treated me as an honorary son. The two scientists of whom I am most fond, Henry Eyring and John Barker, both treated me as an honorary family member. For this I am deeply grateful.
\section{Early years} Henry Eyring was the grandson of Henry Eyring and Mary Brommeli, who came to America from Germany and Switzerland, respectively. His grandparents met as they travelled across the plains to Utah in 1860. They settled first in St. George in southern Utah and later were sent to northern Mexico to help establish a Mormon settlement. His grandfather was widely respected for his integrity. His grandmother spent some time in Berlin, where she spent a brief period in jail because she refused to compromise her religious beliefs. His parents were Edward Eyring and Caroline Romney. Their son, Henry the scientist, was born in Colonia Juarez. Grandfather Henry Eyring owned a store. His father, Edward Eyring, was a prosperous rancher with several hundred head of cattle. From this point on, when I use the names Henry Eyring and Henry, I refer to the grandson, the scientist. Colonia Juarez is a small town located in the Mexican state of Chihuahua, southwest of El Paso and southeast of the Arizona border crossings. Since most of the readers of this article probably do not know the location of Colonia Juarez, a map of this region of Mexico will appear as Fig. 1 in the forthcoming {\it Bulletin} article and is also available from the author. Several Mormon settlements were established in the late nineteenth century. Only two remain, Colonia Dublan and Colonia Juarez. Colonia Dublan does not appear on the map, as it is now a suburb of Nuevos Casas Grandes, which is a sizable city and easily found on the map. It is roughly in the center of the map, located on Mexico Highway 10. Colonia Juarez is a small town, in a narrow valley, southwest of Nuevos Casas Grandes, near the end of a short secondary road that was, until recently, a gravel road. The main industry of the region is fruit orchards. Today, Colonia Dublan / Nuevos Casas Grandes is the economic center of the region because there is more flat land and a railroad.
However, Colonia Juarez is the religious/cultural center of the Mormon community and looks like a typical small town in Utah. The bilingual school, Academia Juarez, that Henry attended can be seen in Fig. 2 of the forthcoming {\it Bulletin} article, a photograph that I took in 2007 and which is also available from the author. It is seen in the foreground at the bottom of the hill in this photograph. The building on the right dates back to Henry's time. The building on the left is more recent. Some of the houses of Colonia Juarez are barely visible in the background. Henry lived in Colonia Juarez until 1912. Because of the turmoil of the Mexican Revolution, life became dangerous. The Eyring family and most, if not all, of the Mormons were evacuated by rail to El Paso. The women and children were sent first and the men afterwards. Henry thought that he should be sent with the men and was disappointed that he was part of the first party. The expectation was that they would return soon. Some did, but the Eyring family decided not to return. The family settled in Arizona in considerably reduced circumstances. The family thought that Henry was an American by birth. It was not until the 1930's that he found out that he was not. Thus, some of his most important and famous work was accomplished while he was a Mexican. It is reasonable to say that he is probably Mexico's most famous chemist. It is an interesting aside, for me at least, that Pancho Villa briefly `invaded' the US at Columbus, NM, which is almost due north of Colonia Juarez. With the reluctant agreement of the Mexican government, General Pershing was sent on an unsuccessful expedition to capture Villa; Colonia Dublan was his headquarters and his guides were some of the local Mormons. However, Henry had left by then and missed this adventure. Henry entered the University of Arizona in 1919. He obtained a BSc and an MSc in mining and metallurgical engineering.
I, too, had a (mercifully) brief career in underground (copper) mining. Eyring decided that a mining/metallurgy career was not for him and he enrolled as a PhD student in chemistry at the University of California in Berkeley in 1925. After graduation, he spent two years engaged in teaching and research at the University of Wisconsin in Madison. There he met and in 1928 married his first wife, Mildred Bennion. They had three sons, Edward (Ted), Henry B. (Hal), and Harden. Mildred died in 1969. During her illness, which extended over five years, Henry faithfully cared for her and was with her when she died. A few years after Mildred's death, he married Winifred Brennan, adopting her youngest daughters. \section{Berlin and Princeton} Following his stay in Wisconsin, in 1929 he was awarded a postdoctoral fellowship to work at the Kaiser Wilhelm Institute in Dahlem in southwestern Berlin. Curiously, my parents lived in Dahlem for a time, as members of the diplomatic corps, while I was a doctoral student of Eyring. Eyring's original plan was to work with Bodenstein but, perhaps fortunately, Bodenstein was away and he collaborated with Michael Polanyi. Quantum mechanics was in its infancy and there was much to be done; it had not yet been applied to the study of reactions. Eyring and Polanyi$^7$ chose to study the simplest reaction, the replacement reaction, H + H$_2$ $\longrightarrow$ H$_2$ + H, by applying the Heitler-London method, including exchange. This was one of the first applications of quantum mechanics to obtain an energy surface for a reaction and, in my opinion, this was one of his most significant papers. After a short period back at Berkeley, he joined the faculty at Princeton, where he stayed until 1946. There he produced many important results. He developed his famous reaction rate theory$^8$. A typical plot of the energy that the reacting molecules must trace, say as calculated by the method of Eyring and Polanyi, is shown in Fig. 1. This is Fig.
3 of the forthcoming {\it Bulletin} article. In this plot, the energy of the reactants is on the left and the energy of the products is on the right. As the incoming molecule approaches the molecule with which it will react, the energy increases. This energy barrier must be surmounted, rather like a hiker climbing up to and passing over a pass and then descending. The energy state of the products is on the right, and this energy state may be greater than, less than, or equal to that of the reactants. In Fig. 1, the products have a lower energy; this is irrelevant to our argument. The height of this barrier is $\Delta E^{\ddagger}$. As would a hiker, the constituents of the reaction may hesitate briefly at the pass. Eyring coined the name {\it activated complex} for this chemically unstable species at the top of the barrier. It is convenient to use a simpler, perhaps simplistic, nonrigorous derivation than that used by Eyring. In addition to being simpler, this derivation has the advantage of not requiring that the reaction take place in a dilute gas. In a canonical ensemble, the probability, $P(E)$, of the system having an energy $E$ is \begin{equation} P(E)=\frac{\exp(-E/RT)}{\int_0^{\infty}\exp(-E/RT)dE}, \end{equation} where $R$ is the gas constant and $T$ is the temperature. The denominator is a normalizing factor that ensures that the total probability is one. The probability of the system having enough energy to reach the top of the pass is the integral of $P(E)$ from $\Delta E^{\ddagger}$ to infinity. This gives \begin{eqnarray} P(\Delta E^{\ddagger})=\frac{\int_{\Delta E^{\ddagger}}^{\infty}\exp(-E/RT)dE}{\int_0^{\infty}\exp(-E/RT)dE} =\exp(-\Delta E^{\ddagger}/RT). \end{eqnarray} This result assumes a canonical ensemble, in which the volume and particle number are constant. However, in the reaction it is the pressure and temperature that are held constant. Hence, it is the Gibbs free energy, $G$, rather than the energy, that should be used.
We should consider \begin{equation} P(\Delta G^{\ddagger})=\exp(-\Delta G^{\ddagger}/RT). \end{equation} The mode in the activated complex that takes part in the reaction may be thought of as a soft spring. This mode is soft, with a negative spring constant, because the activated complex is unstable. Using equipartition of energy for a soft spring, the `frequency' of oscillation, $\nu$, of this spring is given by $h\nu=kT$, where $k$ is Boltzmann's constant, the gas constant per molecule. Thus, formally the reaction rate constant is the product of $\nu$ and $P(\Delta G^{\ddagger})$, \begin{equation} k'=\frac{kT}{h}\exp(-\Delta G^{\ddagger}/RT). \end{equation} Of course, the reactants, on reaching the pass and forming an activated complex, may not cross the pass and form the products. They may fall back. Hence, it is often convenient to multiply the exponential in Eq. (3) by a factor, $\kappa$, that is called the transmission coefficient. Although there is no general method of calculating $\kappa$, Eyring's rate theory has been very illuminating and has been used in a wide variety of chemical and biological applications. Eyring was awarded the National Medal of Science, the Berzelius Medal, the Wolf Prize, and many other prestigious awards for this work but, alas, not a Nobel Prize. At Princeton, he started writing his famous book, {\it Quantum Chemistry}$^9$, with John Walter and George Kimball. This may have been the first book in English that used this title. The writing took a decade. Eyring told me that Kimball and Walter never met. In any case, the book became a standard text and was translated into several languages. It was the book from which I first studied quantum mechanics. Of course, I had encountered quantum mechanics but not as the exclusive subject of a course. Not only is quantum mechanics covered in this book but it is an excellent reference for special functions and group theory.
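The rate expression in Eq. (4) is straightforward to evaluate numerically. A minimal sketch follows; the 50~kJ/mol barrier and the temperatures are illustrative values chosen for the example, not figures from Eyring's work, and the transmission coefficient $\kappa$ is taken as one:

```python
import math

k_B = 1.380649e-23    # Boltzmann constant [J/K]
h_P = 6.62607015e-34  # Planck constant [J s]
R = 8.314462618       # gas constant [J/(mol K)]

def eyring_rate(delta_G, T):
    """Eyring rate constant k' = (kT/h) exp(-dG/RT).

    delta_G: free energy of activation [J/mol]; T: temperature [K].
    The transmission coefficient kappa is taken as 1 here.
    """
    return (k_B * T / h_P) * math.exp(-delta_G / (R * T))

# Illustrative 50 kJ/mol barrier at room temperature: roughly 1e4 s^-1
print(eyring_rate(50.0e3, 298.15))
```

The temperature dependence enters both through the $kT/h$ prefactor and, far more strongly, through the exponential factor.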
\section{Utah} In 1946, with his wife's encouragement, he accepted the position of Dean of the Graduate School at the University of Utah. The University of Utah, a long established institution, planned to inaugurate a doctoral program; Henry found the chance to help build this program an irresistible temptation. In this he was highly successful. The University of Utah has a very prestigious graduate program. Earlier he had developed an interest in the theory of liquids. This, I assume, resulted from a desire to extend reaction rate theory from gas phase reactions to reactions in condensed phases. At the time it was thought that in contrast to gases and solids, there was no satisfactory theory of the liquid state. It is interesting that this is not true. The van der Waals theory did provide the basis of a satisfactory theory of liquids but this was not understood until recently. In any case, until the 1960's the thinking was, since the density of a liquid is not too different from that of a solid, a theory of the solid state would be a promising starting point. Eyring, and others, developed the cell or lattice theory of liquids. In reality this is a classical (as opposed to quantum) theory of a solid, due to the higher temperatures of most liquids. Eyring, and probably others, realized that the entropy of the cell theory lacked a factor of $Nk$. Eyring coined the term, {\it communal entropy}, and added the missing entropy arbitrarily. Although arbitrary, this is preferable to ignoring the issue and does give a liquid a different free energy than a solid. He went one step further and developed the idea that when a molecule evaporated, it left a hole or vacancy in the quasi-lattice of the liquid. Thus, for every molecule in the vapor phase, there would be a vacancy in the liquid that mirrored the gas molecule. If this were literally true the sum of the densities of the liquid and vapor would be a constant, equal to the critical density. This is not quite correct. 
The average density of the two phases is a linear function of the temperature but is not a constant and decreases somewhat as the temperature increases. Nonetheless, this reasoning provides a simple qualitative explanation of the law of rectilinear diameters. He `formalized' his reasoning into the {\it significant structure `theory'}$^{10,11}$ at Utah. Using the idea that a liquid is a mixture of molecules and vacancies that mimic the vapor molecules, the partition function, $Z$, could be written as \begin{equation} Z=Z_s^{V_s/V}Z_g^{(V-V_s)/V}, \end{equation} where $Z_s$ and $Z_g$ are the partition functions of the solid and vapor phases, respectively, and $V$ and $V_s$ are the volumes of the liquid and solid phases, respectively. Eyring used the Einstein theory and ideal gas theory for $Z_s$ and $Z_g$. The Einstein parameter, $\Theta_E$, and $V_s$ are taken from experiment. The significant structure theory is a description rather than a theory. Conventionally, a theory in statistical mechanics relates the properties of a system to the forces between the molecules, whereas Eyring's description relates the properties of the liquid to those of the solid and vapor without obtaining either from the intermolecular forces. This said, Eyring, by focussing on the volume as the important variable, was on the right track and anticipated later developments, such as the perturbation theory of liquids. One consequence of Eq. (5) is that, since for argon $T$ greatly exceeds $\Theta_E$, the heat capacity, $C$, of a monatomic liquid such as argon becomes \begin{equation} \frac{C}{Nk}=3\frac{V_s}{V}+\frac{3}{2}\frac{V-V_s}{V}. \end{equation} As is seen in Fig. 2 (Fig. 4 of the forthcoming {\it Bulletin} article), Eq. (6) gives a reasonably good description of the heat capacity. The heat capacity is a second derivative of the free energy and is difficult to obtain accurately in a theory. The experimental heat capacity becomes infinite at the critical point.
Equation (6) does not predict this. Much has been made of this failure. However, it should be kept in mind that no simple theory predicts the singularity of the heat capacity at the critical point. Some are less successful than Eq. (6). For example, the augmented van der Waals theory (a widely accepted theory) gives the prediction $C=3Nk/2$. Later Eyring grafted the renormalization group approach onto Eq. (5) to obtain the singularity. However, I find this artificial. I collaborated with him in his study of liquids by applying significant structure theory to liquid hydrogen. I also assisted in the writing of the book, {\it Statistical Mechanics and Dynamics} by Eyring, myself, Betsy Stover, and Ted Eyring. This book was an outgrowth of the lecture notes prepared by one of his first students at Utah, Marilyn Alder. These notes were mimeographed and bound with a yellow cover and were referred to by students as the {\it yellow peril}. The book was rather unusual in that the first chapter covered the field in an informal way and then the material was repeated more formally in the subsequent chapters. Needless to say, significant structure theory was included in one of the chapters. This book was moderately successful. With Jost, he and I collaborated on a multi-volume treatise on physical chemistry. During his final years, he became interested in cancer, both because of Mildred's illness and because of the cancer that ultimately took his life. Betsy Stover came to him with the observation that the mortality curves of the experimental animals that had been exposed to radiation that caused them to die of bone cancer were strikingly similar to a Fermi-Dirac distribution. This suggested to Eyring that this was similar to saturation in adsorption and that the rate of mutation responsible for the cancer was proportional to the product of the fraction of normal cells multiplied by the fraction of mutated cells.
He and Stover wrote several papers under the general title of the {\it Dynamics of Life} that were based on this idea. \section{Summary} As I have mentioned, Henry had a warm personality. At times, he became annoyed with someone (including me) but he never held a grudge. Also, despite his accomplishments, he never felt he was better than someone else. I found him to be very kind. He was quite athletic. In his youth he could run very fast. He tells the story of how he outran some students at the University of Arizona who wished to catch him because of the infraction of a foolish rule. He continued running throughout his life and raced his students. He could make a standing jump onto the top of his desk. I know of only one other person who could do this. Roberto Benigni does this in the movie, {\it Life is Beautiful}. I recall that one day in the summer of 1976, while he and I were collaborating on the second edition of {\it Statistical Mechanics and Dynamics}, he had business in the center of the city. Even though he was in his mid-seventies, he walked from the university to downtown and back, a distance of about four miles that involved walking up a fairly steep hill on the return journey. Many people have conjectured about why he never won a Nobel Prize. Henry J. Eyring in reference 5 wonders if it was because he left Princeton for Utah. This is possible. Certainly, his cheering section of prominent people would have been greater if he had stayed at Princeton. However, one person at the University of Utah has won a Nobel Prize, so it is not impossible to win a Nobel Prize at a `provincial' university. Others have wondered if the fact that Henry was religious played a role. Perhaps it was due to Henry's intuitive style of research, which was more fashionable in the 1930's than later. Peter Debye called Henry's style, {\it the inductive-deductive method}.
Henry's description was that his method of finding the path through the forest was first to cut down all the trees in the forest. My feeling is that his not being awarded a Nobel Prize is part of the uncertainties of life. He won many prizes. He would not have won them if the above considerations were a factor. The Nobel Prize receives too much attention because of the amount of money involved. The other prizes are equally important. In any case, he was beloved by all who knew him. At Henry's funeral, Neal Maxwell, a friend and neighbor, former university colleague, and church leader, said that Henry taught us how to live well and how to die well. Not a bad epitaph.
\section{Introduction} Between April and September 2017, after spending 13 years exploring the Kronian system, the Cassini spacecraft performed 23 proximal orbits passing within Saturn's D ring, allowing us to directly probe the upper atmosphere of a gas giant planet. This ``Grand Finale'' phase of the mission culminated in a ``Final Plunge'' into Saturn's atmosphere on 15 September 2017 during which in-situ measurements were taken until the signal from the spacecraft was lost at an altitude of approximately 1360~km above the 1~bar level. The proximal orbits probed Saturn's equatorial region, with the final plunge spanning latitudes from about 13$^\circ$N to 9$^\circ$N as Cassini descended from 2500~km to below 1400~km altitude \citep{Waite2018}. A number of studies have detailed the influence of the rings on Saturn's upper atmosphere revealed by the Grand Finale observations. \citet{Wahlund2018} used measurements from the Radio and Plasma Wave Science instrument (RPWS) to derive electron densities during the proximal orbits. Large variations in electron density of up to two orders of magnitude were measured, induced by shadows from the rings reducing ionisation. Despite high variability between proximal orbits, \citet{Hadid2019} were able to measure a consistent difference in electron density derived from the RPWS Langmuir probe between the northern and southern hemispheres, explained by the influence of the rings. Variable response in the ionosphere linked to the B ring was interpreted as inter-hemispheric transport from the sunlit to the shadowed ionosphere in the light of INMS ion observations. The Ion Neutral Mass Spectrometer (INMS) measured neutral atmospheric composition down to the pressure level of about 1~nbar during the final plunge. \citet{Yelle2018} analysed the low-mass end of the INMS neutral spectrum to derive the number densities of H$_2$, He, CH$_4$, as well as the neutral temperature profile. 
They found that H$_2$ and He were in diffusive equilibrium but that there were high volume mixing ratios of CH$_4$ above the homopause, the latter with a mole fraction on the order of a few times 10$^{-4}$. The slope of the CH$_4$ mixing ratio was consistent with inflow from above the atmosphere, possibly from the rings. Atmospheric constituents from the high-mass region of the INMS spectrum measured during the final plunge have been studied by \citet{Waite2018}. Evidence of inflows of other molecules, such as CO and N$_2$, were found in the data, with possible source regions in the D ring identified. Before Cassini's final plunge, the only inflow from the rings that was known was H$_2$O, inferred from electron density measurements \citep[e.g.,][]{Jurac2005,Prange2006,Moore2006a}. There is evidence that inflow of heavy species from the rings has an impact on ion composition. \citet{Cravens2019} found that INMS recorded lower than expected quantities of the ions H$^+$ and H$_3^+$ during the proximal orbits. These light ions may have been destroyed in reactions with inflowing neutral molecules. Ionospheric modelling by \citet{Moore2018} confirms the destruction of light ions by heavy molecular species, with the resulting ionosphere containing large volume mixing ratios of molecular ions, such as H$_3$O$^+$, HCO$^+$. However, their model was not able to reproduce the number density of H$_3^+$; the calculated density was found to be too large in comparison to INMS data, indicating missing loss processes for this species in the ionosphere. 
In this study, we combine the INMS measurements during the final plunge with previous limb scans from the Cassini Composite InfraRed Spectrometer (CIRS) and stellar occultation measurements by the UltraViolet Imaging Spectrograph (UVIS) to obtain the most realistic altitude profiles of the temperature and of the major neutral densities (H$_2$, H, He, CH$_4$) of the equatorial upper atmosphere down to the 1~bar pressure level (set by convention to $z=0$). We use this neutral upper atmosphere in an energy deposition model to predict ionisation rates under solar illumination, including photo-ionisation and electron-impact ionisation. We also determine photo-dissociation rate profiles of methane, which is key to initiating the chemical reactions leading to the formation of more complex hydrocarbons. We make use of high spectral resolution solar fluxes combined with a high resolution H$_2$ photo-absorption cross-section in order to test the importance of spectral resolution in Saturn ionospheric models. Some previous studies have included high resolution photo-absorption cross sections of H$_2$ at Saturn \citep{Kim2014} and Jupiter \citep{Kim1994}, and of N$_2$ at Titan \citep{Lavvas2011}; beyond the ionisation threshold wavelength (80.4~nm for H$_2$, 79.7~nm for N$_2$), the cross section is highly structured due to excitation processes. These studies found that in the higher resolution models photons penetrate deeper into the upper atmosphere, resulting in a significant increase in H and CH$_4$ ionisation. These species can be ionised within the highly structured photo-absorption region of H$_2$ and N$_2$, as their ionisation threshold wavelengths are 91.2~nm and 98.8~nm, respectively. The neutral upper atmospheric composition and temperature profiles, along with the ion production rates that we calculate in this paper, will be important to determine accurate ion densities.
Taking into account the H$_2$O influx \citep{Connerney1986,ODonoghue2013} will be necessary at this stage since water plays a critical role in ion-neutral chemistry. In addition, the photo-dissociation of methane is key to initiate the chemical reactions leading to the formation of more complex hydrocarbons such as benzene \citep{Koskinen2016}. Our paper is laid out as follows. In Sect.~\ref{sec:model_inputs}, we describe all of the inputs of our energy deposition model. Section~\ref{sec:ionospheric_model} describes the energy deposition model itself, and in Sect.~\ref{sec:results} we present and interpret our results. In Sect.~\ref{sec:conc}, we highlight the main findings and discuss their implications. \section{Key model inputs}\label{sec:model_inputs} There are a number of key inputs to our energy deposition model that we must assemble: the neutral temperature and density profiles (Sect.~\ref{sec:neutral_atm}) and the solar flux (Sect.~\ref{sec:solar_flux}), which we seek to acquire at a spectral resolution high enough to capture the structured region of the H$_2$ photo-absorption cross section (Sect.~\ref{sec:sigma_H2}). Other model inputs (reaction rates and remaining cross sections) are presented in Sect.~\ref{sec:ionospheric_model}. \subsection{Reconstructed neutral atmosphere during Cassini plunge}\label{sec:neutral_atm} We reconstruct the neutral atmosphere (temperature and densities of H$_2$, H, He, CH$_4$) during the final plunge of the Cassini spacecraft. To this effect, we use the deepest in-situ measurements taken during the final plunge by INMS. The INMS final plunge measurements provide densities of H$_2$, He, and CH$_4$, at pressures up to 1~nbar. To reconstruct the neutral atmosphere down to the 1~bar pressure level, we rely on previously combined CIRS limb scans and UVIS stellar occultation observations \citep{Koskinen2018}. 
\subsubsection*{Neutral temperature profile} We combine temperature measurements from CIRS and UVIS taken close to Saturn's equator with INMS temperatures taken during the final plunge. To correct for differences in latitude between the observations, we take a similar approach to \citet{Yelle2018} and use the effective potential (i.e., the sum of the gravitational and centrifugal potentials) as a vertical coordinate. Indeed, given Saturn's oblate shape and rapid rotation, the change in atmospheric parameters is not purely radial, but contains a latitudinal component. If we express the parameters as a function of the effective potential, we can assemble a temperature profile that is independent of latitude. We define the effective potential $\Phi$ as the sum of the gravitational potential and the centrifugal potential \citep{Helled2015}: \begin{equation}\label{eqn:potential} \Phi = \frac{GM}{r}\left(1-\sum_{n=1}^{4}J_{2n}P_{2n}(\sin\theta)\left(\frac{r_{\text{eq}}}{r}\right)^{2n}\right) + \frac{1}{2}\left(r\Omega\cos\theta\right)^2, \end{equation} where $G$ is the gravitational constant, $M$ is the mass of Saturn, $r$ is the radial distance, $\theta$ is the latitude, $\Omega$ is the angular velocity, $J_{2n}$ are the expansion coefficients and $P_{2n}$ are the Legendre polynomials. Note that this representation of the gravity field is not consistent with the Grand Finale data presented in \citet{Iess2019} and \citet{Militzer2019} but it will not have a major effect on our conclusions regarding energy deposition. Indeed, as shown in \citet{Koskinen2021}, improved estimates on the interior rotation rate and the detection of differential rotation from the Cassini Grand Finale observations result in a maximum difference of 1.1\% in the gravitational acceleration, compared to an acceleration based on the potential used in this paper.
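As a concrete illustration, Equation~\ref{eqn:potential} can be evaluated in a few lines. The Saturn parameters below ($GM$, $J_2$, $J_4$, $r_{\text{eq}}$, $\Omega$) are approximate values quoted for this sketch only, and the $J_6$, $J_8$ terms are dropped since they barely affect the estimate:

```python
import math

# Approximate Saturn parameters (illustrative values for this sketch)
GM = 3.7931e16        # [m^3 s^-2]
R_EQ = 6.0268e7       # equatorial radius [m]
OMEGA = 1.63785e-4    # rotation rate [rad s^-1]
J2N = {2: 1.6291e-2, 4: -9.36e-4}  # J_2, J_4; J_6 and J_8 omitted

def legendre(n, x):
    """Even-degree Legendre polynomials needed here."""
    if n == 2:
        return 0.5 * (3.0 * x**2 - 1.0)
    if n == 4:
        return 0.125 * (35.0 * x**4 - 30.0 * x**2 + 3.0)
    raise ValueError(n)

def effective_potential(r, lat):
    """Gravitational plus centrifugal potential [J/kg] at radius r [m]
    and latitude lat [rad], i.e. Equation (1) truncated at J_4."""
    s = math.sin(lat)
    grav = (GM / r) * (1.0 - sum(j * legendre(n, s) * (R_EQ / r)**n
                                 for n, j in J2N.items()))
    cent = 0.5 * (r * OMEGA * math.cos(lat))**2
    return grav + cent

# Near the equator, ~1400 km above the 1 bar level: Phi ~ 6.7e8 J/kg
print(effective_potential(R_EQ + 1.4e6, math.radians(5.0)))
```

The value recovered near the equator is of the same order as the potentials quoted for the CIRS/UVIS/INMS profiles above.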
\begin{figure*} \centering \includegraphics[width=.9\textwidth]{T_vs_phi} \caption{Temperature profile measured by INMS during the final plunge (in blue), and profile assembled by \citet{Koskinen2018} (in orange) as a best fit to CIRS limb scans at $\Phi>6.788\times 10^8$~J~kg$^{-1}$ ($p>3~\mu$bar, lower dashed black line in panel a) and UVIS occultation ST14M10D03S7 at $\Phi<6.763\times 10^8$~J~kg$^{-1}$ ($p<0.04~\mu$bar, upper dashed black line in panel a). The constructed temperature profiles connecting these datasets are shown in panel b, labelled composite A (in a thick grey line), and composite B (in a thin black line).} \label{fig:Tsource} \end{figure*} The temperature profile from INMS during the final plunge as a function of the potential is shown in blue in Fig.~\ref{fig:Tsource}a \citep{Yelle2018}. To extend this profile down to the 1~bar pressure level (corresponding to $\Phi=6.845\times 10^8$~J~kg$^{-1}$), we make use of the temperature profile derived by \citet{Koskinen2018}, shown in orange in Fig.~\ref{fig:Tsource}a. The latter profile is determined as a best fit to UVIS and CIRS observations in the equatorial region; specifically, UVIS occultation ST14M10D03S7, measured at a planetographic latitude of 7.4$^{\circ}$S, and CIRS limb scans LIMBINTC001 and LIMBINT001, measured at planetographic latitudes spanning 10$^{\circ}$S -- 5$^{\circ}$S, and 15$^{\circ}$S -- 2$^{\circ}$N, respectively. The final plunge exospheric temperature measured by INMS of 354~K is consistent with the range of low latitude exospheric temperatures derived by \citet{Koskinen2018} from Cassini UVIS occultations, as well as the range of measurements and model results assembled by \citet{MullerWodarg2019}. Since the INMS profile was an in-situ observation during the final plunge, we make use of the exospheric temperature from this measurement in all of our model runs rather than that derived from the equatorial UVIS occultation profile in \citet{Koskinen2018}. 
We construct two different temperature profiles to connect the INMS final plunge exospheric temperature to CIRS and UVIS equatorial observations: we label these reconstructed profiles composite A and B (plotted in Fig.~\ref{fig:Tsource}b). Composite A is made up of the CIRS and UVIS temperatures from \citet{Koskinen2018} at potentials higher than $6.713\times 10^8$~J~kg$^{-1}$ (where the two profiles intersect), and the INMS final plunge temperatures at potentials lower than this value. Since this involves discarding the lowest altitude points of the INMS observations (with potentials between $6.713\times 10^8$ and $6.722\times 10^8$~J~kg$^{-1}$), we also construct composite B which connects the INMS temperature profile to the CIRS-derived temperature region at $\Phi>6.788\times 10^8$~J~kg$^{-1}$ using a Bates profile. The parameters of the Bates temperature profile (Equation~1 from \citet{Yelle1996}) used here are chosen to best fit the INMS final plunge temperatures and to connect to the uppermost temperatures from the CIRS scans. Composite profile B ignores temperature constraints from UVIS measurements between $\Phi=6.788$ -- $6.713\times 10^8$~J~kg$^{-1}$. \subsubsection*{Diffusion model for neutral species} To reconstruct the equatorial neutral density profiles at the time of the final plunge, we use the diffusion model described in \citet{Koskinen2018}. We include the dominant neutral species in the model: H$_2$, H, He, and CH$_4$. The details of this calculation can be found in \ref{sec:appendix}. The resulting mixing ratios from the diffusion model using temperature profile A are shown as a function of pressure in Fig.~\ref{fig:mixing_ratio_vs_p_diffusion}. To initialise the calculation, we need the volume mixing ratios at the 1 bar pressure level $x_{s0} = x_s(z=0)$ for each neutral species $s$. 
For CH$_4$, we use a value of $x_{\text{CH}_4,0} = 4.7\times 10^{-3}$ from \citet{Fletcher2009}, and for H, we assume $x_{\text{H},0} = 1.2\times 10^{-4}$, resulting in a volume mixing ratio of H that is less than 0.05 in the thermosphere, in agreement with \citet{Koskinen2013}. The lower boundary volume mixing ratio of He is chosen so that the He profile in the thermosphere matches the He volume mixing ratio from the INMS final plunge measurement: we obtain $x_{\text{He},0} = 0.134$ when using composite temperature profile A, and $x_{\text{He},0} = 0.120$ with composite temperature B. \begin{figure} \centering \includegraphics[width=.45\textwidth]{mixing_ratio_vs_p_diffusion_extendx} \caption{Mixing ratios of the neutral species as a function of pressure from the diffusion model using composite temperature A.} \label{fig:mixing_ratio_vs_p_diffusion} \end{figure} \subsubsection*{INMS final plunge volume mixing ratios} INMS records the number densities $n_s^{\text{INMS}}$ of the neutral species $s$, namely H$_2$, He, and CH$_4$. To obtain consistent volume mixing ratios of the INMS final plunge measurements, the volume mixing ratio $x_{\text{H}}^{\text{diff}}$ of H from the diffusion model is taken into account according to \begin{equation}\label{eqn:ntotINMS} n_{\text{tot}}^{\text{INMS}} = \frac{1}{1-x_{\text{H}}^{\text{diff}}} \sum_s n_s^{\text{INMS}}. \end{equation} $n_{\text{tot}}^{\text{INMS}}$ is the total density recorded over the region of the INMS measurement, including H$_2$, He, and CH$_4$ from INMS and H from the diffusion model. Thus we get the INMS volume mixing ratios of H$_2$, He, and CH$_4$ as follows \begin{equation}\label{eqn:xINMS} x_s^{\text{INMS}} = \frac{n_s^{\text{INMS}}}{n_{\text{tot}}^{\text{INMS}}}. 
\end{equation} \begin{figure} \centering \includegraphics[width=.45\textwidth]{mixing_ratio_vs_p_diffusion_INMS} \caption{Volume mixing ratios of the neutral species as a function of pressure from the diffusion model using composite temperature A (in solid lines), and INMS final plunge volume mixing ratios (crosses). The dashed black line shows the extension to higher pressures of the INMS CH$_4$ profile at a constant mixing ratio of $1.3\times 10^{-4}$.} \label{fig:mixing_ratio_vs_p_INMS} \end{figure} The derived INMS volume mixing ratios from Equation~\ref{eqn:xINMS} are plotted with crosses in Fig.~\ref{fig:mixing_ratio_vs_p_INMS}, along with the mixing ratios from the diffusion model (solid lines). We adjusted the 1 bar value of the He volume mixing ratio so that the INMS data would match the diffusion model mixing ratios at pressures less than 1~nbar. Furthermore, as discussed in \citet{Yelle2018}, during the final plunge INMS measured an influx of CH$_4$, possibly from Saturn's rings. Therefore the INMS mixing ratios for CH$_4$ do not match those from the diffusion model. To construct an estimate of the CH$_4$ mixing ratio during the final plunge, we extend the INMS mixing ratio to higher pressures at a constant mixing ratio value of $1.3\times 10^{-4}$ (black dashed line in Fig.~\ref{fig:mixing_ratio_vs_p_INMS}), until we intersect the values from the diffusion model, which we use at pressures from $0.1~\mu$bar to 1~bar. Note that in reality there is likely a minimum in the CH$_4$ mixing ratio between the INMS and UVIS measurements, although this would not significantly change the results of this study. \subsubsection*{Reconstructed neutral number densities} We use the ideal gas law to obtain the total number density $n_{\text{tot}}$ from the total atmospheric pressure and reconstructed temperature profiles: \begin{equation} n_{\text{tot}} = \frac{p}{kT}, \end{equation} where $k$ is the Boltzmann constant. 
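Equations~\ref{eqn:ntotINMS}--\ref{eqn:xINMS} and the ideal gas law amount to a short calculation; the sketch below uses placeholder number densities and an assumed H mixing ratio, not actual INMS or diffusion-model values:

```python
k_B = 1.380649e-23  # Boltzmann constant [J/K]

def inms_mixing_ratios(n_inms, x_H_diff):
    """Apply Eqs. (2)-(3): n_inms maps species -> INMS number density
    (H2, He, CH4); x_H_diff is the H mixing ratio from the diffusion
    model. Returns the total density and the volume mixing ratios."""
    n_tot = sum(n_inms.values()) / (1.0 - x_H_diff)
    return n_tot, {s: n / n_tot for s, n in n_inms.items()}

def total_number_density(p, T):
    """Ideal gas law n_tot = p / (k T); p in Pa, T in K, n in m^-3."""
    return p / (k_B * T)

# Placeholder densities [m^-3] and an assumed H mixing ratio of 1%
n_tot, x = inms_mixing_ratios({'H2': 1.0e14, 'He': 1.0e13, 'CH4': 1.0e10},
                              x_H_diff=0.01)
print(sum(x.values()))  # close to 0.99 = 1 - x_H, by construction
```

By construction, the mixing ratios of the measured species sum to $1-x_{\text{H}}^{\text{diff}}$, so that adding the diffusion-model H closes the budget.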
Multiplying the volume mixing ratios by the total number density gives us the number density profiles for each neutral species, as shown in Fig.~\ref{fig:n_vs_p_INMS}. The density profiles as a function of pressure (Fig.~\ref{fig:n_vs_p_INMS}a) are independent of the temperature profile; however, there is a difference in the densities above about 1100~km between profiles calculated using temperature composite A (coloured lines in Fig.~\ref{fig:n_vs_p_INMS}b) and composite B (black lines in Fig.~\ref{fig:n_vs_p_INMS}b). \begin{figure*} \centering \includegraphics[width=.9\textwidth]{n_vs_p_vs_z_diffusion_INMS} \caption{Number density of the neutral species as a function of pressure (panel a), and as a function of altitude (panel b). Solid lines are densities obtained from the diffusion model, dashed lines show the link between the diffusion model and the INMS CH$_4$ profile using a constant mixing ratio of $1.3\times 10^{-4}$. In panel (a), the INMS final plunge densities are shown in crosses. In panel (b), a comparison is made between the densities obtained using temperature composite A (in coloured lines), and B (in black).} \label{fig:n_vs_p_INMS} \end{figure*} \subsection{Solar flux}\label{sec:solar_flux} We have access to solar spectral data at a range of different spectral resolutions; however, none of these quite matches the high resolution of the H$_2$ photo-absorption cross section model ($\Delta\lambda = 10^{-3}$~nm, see Sect.~\ref{sec:sigma_H2}). The coarsest resolution spectrum we consider is from TIMED/SEE at a wavelength resolution of $\Delta\lambda = 1$~nm \citep{Woods2005}. We also make use of a slightly higher resolution spectrum ($\Delta\lambda = 0.1$~nm) from the Whole Heliosphere Interval (WHI) quiet Sun campaign \citep{Woods2009,Chamberlin2009}. Our highest resolution spectrum is a quiet Sun reference spectrum from the SOHO/SUMER instrument at $\Delta\lambda = 4\times 10^{-3}$~nm \citep{Curdt2001}.
Some characteristics of these datasets and the date of the observations used are presented in Table~\ref{tab:sol_spectra}. \begin{table*} \centering \begin{threeparttable} \caption{Solar flux data sources} \begin{tabular}{lcccc} \toprule & Wavelength & Sampling & Resolution of & Date of \\ & range$^*$ & resolution$^{\dagger}$ & spectrum used & observation \\ & [nm] & [nm] & [nm] & \\ \midrule TIMED/SEE$^1$ & 0.5 -- 152 & 0.4 -- 7 & 1 & 14 April 2008 \\ WHI$^2$ & 0.1 -- 152 & 0.1 -- 7 & 0.1 & 10 -- 16 April 2008 \\ SOHO/SUMER$^3$ & 67 -- 152 & 0.004 & 0.004 & 20 April 1997 \\ \bottomrule \end{tabular} \begin{tablenotes} \small \item Notes: $^*$Wavelength range used in this study; the TIMED/SEE spectra extend to 190~nm and the complete WHI dataset extends to 2400~nm. $^{\dagger}$Instrument resolution over the wavelength range used in this study. \item Sources: $^1$\citet{Woods2005} $^2$\citet{Woods2009,Chamberlin2009} $^3$\citet{Curdt2001}. \end{tablenotes} \label{tab:sol_spectra} \end{threeparttable} \end{table*} The final plunge on 15 September 2017 took place during quiet solar conditions, at solar minimum. We therefore use solar spectra recorded during similar conditions, as measured by the F10.7 and Lyman $\alpha$ fluxes. The values of these reference fluxes on the day of the final plunge and on the dates on which each of the spectra used in this study was recorded are shown in Table~\ref{tab:sol_conditions}. \begin{table} \centering \begin{threeparttable} \caption{Solar flux conditions} \begin{tabular}{lcc} \toprule Date & Lyman $\alpha$ flux & F10.7 \\ & [$10^{11}$~cm$^{-2}$~s$^{-1}$] & [sfu] \\ \midrule 20 April 1997 & 3.56 & 70.4 \\ 14 April 2008 & 3.50 & 69.0 \\ 15 September 2017 & 3.61 & 73.6 \\ \bottomrule \end{tabular} \label{tab:sol_conditions} \end{threeparttable} \end{table} The WHI quiet Sun spectrum is a composite of observations over different wavelength bands.
In the soft X-ray, EUV, and FUV (the wavelengths that are absorbed in upper planetary atmospheres), the WHI dataset is composed of measurements from the XPS instrument on TIMED/SEE between 0.1 -- 6.0~nm, a rocket measurement between 6.0 -- 105~nm, the EGS instrument on TIMED/SEE between 105 -- 116~nm, and a SORCE/SOLSTICE spectrum beyond 116~nm. The rocket was launched on 14 April 2008, and the TIMED/SEE and SORCE/SOLSTICE spectra that compose the WHI spectrum are averages over 10 -- 16 April 2008 (solar minimum, see Table~\ref{tab:sol_conditions}). The SUMER spectrum extends from 67 to 152~nm at a resolution of $4\times 10^{-3}$~nm (see Table~\ref{tab:sol_spectra}). We combine this spectrum either with a low resolution TIMED/SEE spectrum or the WHI spectrum in order to produce solar spectra over 0.1 to 152~nm. In addition, since the SUMER instrument only observes a small portion of the Sun at the centre of the solar disk, it measures spectral radiances. Thus we must convert these data to irradiances (a disk-integrated quantity). We do so by ensuring that the integrated SUMER spectrum matches the integrated flux from TIMED/SEE or WHI over $\lambda_1 =67$~nm to $\lambda_2 =152$~nm: \begin{equation}\label{eqn:conversion_irrad} I_{\lambda}^{\text{SUMER}} = \frac{\int_{\lambda_1}^{\lambda_2} \! I_{\lambda'}^{\text{SEE}} \, \mathrm{d}\lambda'}{\int_{\lambda_1}^{\lambda_2} \! L_{\lambda'}^{\text{SUMER}} \, \mathrm{d}\lambda'} L_{\lambda}^{\text{SUMER}}, \end{equation} where $I_{\lambda}$ are irradiances and $L_{\lambda}$ are radiances. This results in the integrated irradiance over 67 -- 152~nm from the TIMED/SEE spectrum and from the WHI spectrum being both the same, equal to 8.6~mW~m$^{-2}$. Equation~\ref{eqn:conversion_irrad} provides only an approximation of solar irradiances. By scaling the SUMER radiances in this way, we are assuming that the radiance at disk centre is representative of that of the entire disk. 
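A minimal sketch of this scaling by the ratio of integrals in Equation~\ref{eqn:conversion_irrad} (the function and variable names are illustrative, and the trapezoidal integration is written out explicitly):

```python
import numpy as np

def _integrate(y, x):
    # trapezoidal rule over the sampled spectrum
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def radiance_to_irradiance(wl, rad_sumer, wl_ref, irr_ref, lam1=67.0, lam2=152.0):
    """Scale SUMER radiances so their integral over [lam1, lam2] nm matches
    the integrated reference irradiance (e.g. from TIMED/SEE or WHI)."""
    m_s = (wl >= lam1) & (wl <= lam2)
    m_r = (wl_ref >= lam1) & (wl_ref <= lam2)
    scale = _integrate(irr_ref[m_r], wl_ref[m_r]) / _integrate(rad_sumer[m_s], wl[m_s])
    return scale * rad_sumer
```

By construction, the scaled spectrum then carries the reference's integrated flux over 67 -- 152~nm.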
This means that we neglect the contribution of active regions and coronal holes to the full disk irradiance. However, given that we consider a quiet Sun, these contributions should be small \citep{Schuhle1998}. A larger source of error is neglecting centre-to-limb variability, which can appear as either limb brightening or darkening, depending on the particular spectral line. An accurate conversion of the SUMER measurement to a disk-integrated spectrum is not trivial and is beyond the scope of this study. Using the three sources of solar spectral data described above (and presented in Table~\ref{tab:sol_spectra}), we construct four solar spectra: one `low', one `mid', and two `high' resolution spectra. The data sources used in each of these cases are given in Table~\ref{tab:construced_spectra}. The low-resolution spectrum has a resolution of $\Delta\lambda=1$~nm, and is composed of TIMED/SEE between 0.1~nm and 67~nm, and the SUMER spectrum degraded in resolution to $\Delta\lambda=1$~nm between 67 and 152~nm. The mid-resolution spectrum has $\Delta\lambda=0.1$~nm and is composed of the WHI spectrum between 0.1 and 67~nm combined with the SUMER spectrum degraded in resolution to $\Delta\lambda=0.1$~nm between 67 and 152~nm. The two high-resolution spectra (labelled \#1 and \#2) are constructed using the SUMER spectrum at $\Delta\lambda=0.004$~nm between 67 -- 152~nm, combined with either TIMED/SEE at $\Delta\lambda=1$~nm (spectrum \#1) or the WHI spectrum at $\Delta\lambda=0.1$~nm (spectrum \#2) between 0.1 -- 67~nm. 
\begin{table} \centering \begin{threeparttable} \caption{Constructed solar spectra} \begin{tabular}{l|c|c} \toprule & \multicolumn{2}{c}{Wavelength range} \\ \hline Label & 0.1 -- 67~nm & 67 -- 152~nm \\ \hline \multirow{2}{*}{Low res.} & TIMED/SEE & SOHO/SUMER \\ & $\Delta\lambda=1$~nm & degraded to $\Delta\lambda=1$~nm \\ \hline \multirow{2}{*}{Mid res.} & WHI & SOHO/SUMER \\ & $\Delta\lambda=0.1$~nm & degraded to $\Delta\lambda=0.1$~nm \\ \hline \multirow{2}{*}{High res.\ \#1} & TIMED/SEE & SOHO/SUMER \\ & $\Delta\lambda=1$~nm & full resolution \\ & & ($\Delta\lambda=0.004$~nm) \\ \hline \multirow{2}{*}{High res.\ \#2} & WHI & SOHO/SUMER \\ & $\Delta\lambda=0.1$~nm & full resolution \\ & & ($\Delta\lambda=0.004$~nm) \\ \bottomrule \end{tabular} \label{tab:construced_spectra} \end{threeparttable} \end{table} For our purposes, high-resolution spectra are only required at wavelengths where the chemical cross sections in our atmospheric model are highly structured. In the H$_2$, H, He, and CH$_4$ atmosphere that we consider, the H$_2$ photo-absorption cross section is very structured at $\lambda > 80.4$~nm, i.e.\ beyond the ionisation threshold of H$_2$ (see Sect.~\ref{sec:sigma_H2}). At wavelengths shorter than this value, the chemical cross sections vary smoothly. Hence, when constructing our `high-resolution' solar spectra, it is sufficient to use either a low-resolution TIMED/SEE observation (for case high res.\ \#1) or a mid-resolution WHI spectrum (for case high res.\ \#2) at shorter wavelengths. In addition, in order to capture differences in ionisation rates due only to the spectral resolution, and not to the precise spectral energy distribution, we rebin the SUMER spectrum to 1~nm resolution to use in the longer wavelength region of the low-resolution spectrum and we rebin SUMER to 0.1~nm resolution to use in the mid-resolution spectrum (see Table~\ref{tab:construced_spectra}). 
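The rebinning step can be sketched as a simple flux-conserving average over the coarser bins (an illustrative implementation under the assumption of a fine input grid, not the actual pipeline used for the paper):

```python
import numpy as np

def rebin_spectrum(wl, flux, new_edges):
    """Degrade a spectrum (flux per unit wavelength on a fine grid `wl`)
    to the bins defined by `new_edges`, conserving integrated flux."""
    dlam = np.gradient(wl)                  # fine-grid widths
    energy = flux * dlam                    # integrated flux per fine point
    idx = np.digitize(wl, new_edges) - 1    # output bin of each fine point
    out = np.zeros(len(new_edges) - 1)
    for i, width in enumerate(np.diff(new_edges)):
        out[i] = energy[idx == i].sum() / width  # back to flux per unit wavelength
    return out

# e.g. degrade a 0.001 nm grid over 80--110 nm to 1 nm bins
```

Rebinning both the SUMER spectrum and the cross sections onto a common grid in this way isolates the effect of spectral resolution from that of the spectral energy distribution.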
\begin{figure*} \centering \includegraphics[width=1\textwidth]{spectrum_SEE_SUMER_vertlines} \caption{Solar spectra from TIMED/SEE on 14 April 2008 (in black), and from SOHO/SUMER on 20 April 1997 (in yellow, and degraded to 1~nm resolution in red). Panel (b) is an enlargement of panel (a) between 80 and 110~nm (indicated by vertical dashed lines).} \label{fig:spectrum_SEE_SUMER} \end{figure*} \begin{figure*} \centering \includegraphics[width=1\textwidth]{spectrum_WHI_SUMER_vertlines} \caption{Solar spectra from WHI quiet Sun campaign 10 -- 16 April 2008 (in blue), and from SOHO/SUMER on 20 April 1997 (in yellow, and degraded to 0.1~nm resolution in red). Panel (b) is an enlargement of panel (a) between 80 and 110~nm (indicated by vertical dashed lines).} \label{fig:spectrum_WHI_SUMER} \end{figure*} The solar spectra are plotted in Figs.~\ref{fig:spectrum_SEE_SUMER} and \ref{fig:spectrum_WHI_SUMER}. Figure~\ref{fig:spectrum_SEE_SUMER} shows the TIMED/SEE spectrum (in black) along with the SOHO/SUMER spectrum at full resolution (in yellow), and the latter rebinned to the same resolution as TIMED/SEE, i.e.\ $\Delta\lambda=1$~nm (in red). Panel (b) is an enlargement of panel (a) focussing on the spectral region where the H$_2$ photo-absorption cross section is highly structured: from 80~nm to 110~nm. Beyond 110~nm, the cross section drops off at the atmospheric temperatures considered (see Sect.~\ref{sec:sigma_H2}). The rebinned SUMER spectrum allows a comparison with the SEE flux levels. We expect the fluxes to be similar since we have applied Equation~\ref{eqn:conversion_irrad} to obtain the SUMER spectral irradiance, setting the integrated flux between 67 and 152~nm to be identical between the two spectra. In general, there is a good agreement between the spectral shapes observed by SEE and SUMER, especially since these spectra are measured during different solar cycles (see Table~\ref{tab:sol_conditions}). 
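The agreement between two spectra brought to a common resolution is quantified in what follows with the coefficient of determination of a linear fit between them; a minimal version of such a comparison (names are ours):

```python
import numpy as np

def r_squared(x, y):
    """Coefficient of determination of a linear fit y ~ a*x + b,
    e.g. between two spectra rebinned onto the same wavelength grid."""
    a, b = np.polyfit(x, y, 1)
    residuals = y - (a * x + b)
    ss_res = np.sum(residuals ** 2)
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    return 1.0 - ss_res / ss_tot
```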
Most importantly, the bins containing strong spectral lines match very well. The largest discrepancies occur in the bins showing lower fluxes, in particular near 95~nm, as previously noted by \citet{Curdt2001}. Overall, we obtain a high coefficient of determination of $R^2=0.92$ when performing a linear fit, over the range 80~nm to 110~nm, between the TIMED/SEE spectrum and the SUMER spectrum rebinned to 1~nm resolution. Figure~\ref{fig:spectrum_WHI_SUMER} shows the WHI quiet Sun spectrum from 0.1 to 190~nm in blue, upon which the full-resolution SUMER spectrum is superposed in yellow. The SUMER spectrum rebinned to match the resolution of WHI ($\Delta\lambda=0.1$~nm) is plotted in red. Once again, the two observations are very close: $R^2=0.82$ over the wavelength range 80~nm to 110~nm. Certain wavelength regions of the SUMER spectrum (e.g. the Lyman continuum at $\lambda<91.2$~nm) match better with the WHI spectrum than with that from SEE, which could be due to instrumental effects caused by the degradation of the SEE sensor. We do not expect exact agreement between the different spectra given solar variability, instrumental noise, and the fact that SUMER only measures a portion of the solar disk. Regions where the SEE or WHI spectra are slightly higher than the SUMER spectrum can be explained by the fact that most solar emission lines at these wavelengths undergo limb brightening \citep{Wilhelm1998,Schuhle1998}. Since the SUMER observation only includes the disk centre, our SUMER irradiances are missing this component. Despite this fact, the SEE, WHI, and SUMER spectra agree very well with each other, and the SUMER spectrum processed with Equation~\ref{eqn:conversion_irrad} is sufficient for the purposes of this study. \subsection{High-resolution H$_2$ photo-absorption cross section}\label{sec:sigma_H2} The photo-absorption cross section of molecular hydrogen is highly structured at wavelengths longer than the ionisation threshold ($\lambda_{\text{th,H}_2}=80.4$~nm).
It is made up of very narrow absorption lines composing the Lyman, Werner, and Rydberg bands \citep{Abgrall1993a,Abgrall1993,Abgrall2000}. These lines result in the absorption of solar radiation by the H$_2$ molecule over an extended layer of atmosphere that can only be modelled by including a high-resolution H$_2$ cross section. Previous studies have found that absorption in the Lyman, Werner, and Rydberg bands can produce a layer of hydrocarbon ions in the lower ionosphere of Jupiter \citep{Kim1994} and Saturn \citep{Kim2014}. For the H$_2$ photo-absorption cross section used in this study, we take a combination of the \citet{Backx1976} low-resolution ($\Delta\lambda\sim 1$~nm) H$_2$ photo-absorption cross section for wavelengths below the ionisation threshold (80.4~nm) and temperature-dependent high-resolution ($\Delta\lambda=10^{-3}$~nm) calculations by \citet{Yelle1993} at longer wavelengths. In addition, the H$_2$ photo-dissociation cross section measured by \citet{Dalgarno1969} was added to the \citet{Yelle1993} calculations for wavelengths between 80.4~nm and 84.6~nm where this process was missing. The cross section is plotted in Fig.~\ref{fig:sigH2}, and is provided as a downloadable dataset \citep{Chadney2021}. \begin{figure*} \centering \includegraphics[width=1\textwidth]{sigH2zoom_DA} \caption{Panel a shows the molecular hydrogen photo-absorption cross section, using measurements by \citet{Backx1976} at $\lambda<80.4$~nm, and high-resolution ($\Delta\lambda=10^{-3}$~nm) calculations from \citet{Yelle1993} at a temperature of 250~K at $\lambda>80.4$~nm. Between 80.4 and 84.6~nm, the photo-dissociation cross section measured by \citet{Dalgarno1969} is included. Panel b is an enlargement of panel a, focussing on the highly structured region beyond the H$_2$ ionisation threshold.
The contribution from the \citet{Dalgarno1969} photo-dissociation cross section is highlighted in a red dotted line.} \label{fig:sigH2} \end{figure*} Although the high-resolution H$_2$ cross section is dependent on temperature, in practice, the temperature matters little: there is a maximum difference of 15\% in some peak ionisation rates between model runs using \citet{Yelle1993} cross sections determined at 150~K and at 350~K (the range of thermospheric temperatures, see Fig.~\ref{fig:Tsource}). However, this may be because even our high-resolution solar spectrum from SUMER ($\Delta\lambda=4\times 10^{-3}$~nm) has a wavelength resolution that is too coarse. The effect of the temperature of the H$_2$ cross section on ionisation rates might be more significant if the calculations were done using a solar spectrum with a resolution closer to $\Delta\lambda=1\times 10^{-3}$~nm. Nevertheless, in our case the cross section temperature is not important, and in all the following calculations in this paper we use an H$_2$ high-resolution cross section from \citet{Yelle1993} at a temperature of 250~K; this value was chosen as midway between the temperature at the bottom of the thermosphere and the exospheric temperature. \section{Energy deposition model}\label{sec:ionospheric_model} The energy deposition model developed for this study calculates a set of ionisation and photo-dissociation rates. The neutral species H$_2$, H, He, and CH$_4$ (profiles of which are determined in Sect.~\ref{sec:neutral_atm}) are ionised through photo-ionisation and electron-impact ionisation. The former is obtained by solving the Beer-Lambert law. The latter is included using a suprathermal electron transport model that is based on the solution to the Boltzmann equation with transport, angular scattering, and energy degradation of photo-electrons and their secondaries taken into account \citep{Galand2009}.
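For the photo-ionisation part, a minimal single-species sketch of the Beer-Lambert calculation (illustrative only; the actual model treats all four species, slant geometry along the line of sight, and secondary-electron transport):

```python
import numpy as np

def photoionisation_rate(sigma_abs, sigma_ion, flux_top, n, dz, mu=1.0):
    """Photo-ionisation rate profile P(z) for one species.

    sigma_abs, sigma_ion: cross sections per wavelength bin [cm^2]
    flux_top: unattenuated solar photon flux per bin [cm^-2 s^-1]
    n: number density profile, index 0 at the top of the atmosphere [cm^-3]
    dz: layer thickness [cm]; mu = cos(solar zenith angle)
    """
    col = np.cumsum(n * dz) / mu    # slant column density above each level [cm^-2]
    tau = np.outer(col, sigma_abs)  # optical depth tau(z, lambda)
    flux = flux_top * np.exp(-tau)  # attenuated flux F(z, lambda)
    return n * (flux @ sigma_ion)   # P(z) = n(z) * sum_lambda sigma_ion(lambda) F(z, lambda)
```

The same structure, summed over species and wavelength bins, underlies the ionisation rate profiles discussed in Sect.~\ref{sec:results}.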
The incident source of energy is the solar spectrum derived in Sect.~\ref{sec:solar_flux}. Since we are modelling the equatorial atmosphere, we do not include electron precipitation from the magnetosphere. The photo-ionisation, electron-impact ionisation, and photo-dissociation reactions included are shown in Table~\ref{tab:reactionsiono} and the cross sections associated with each of these reactions are plotted in Fig.~\ref{fig:cross_sections}. For a more detailed description of the energy deposition model, see \citet{Chadney2016}. \begin{table*} \centering \small \caption{Chemical reactions used in the energy deposition model.} \begin{tabular}{llcl} \toprule \# & Reaction & Reference & \\ \midrule & Photo-ionisation: & & \\ 1 & $\text{H}_2 + h\nu \rightarrow \text{H}_2^+ + e^-$ & \multicolumn{2}{l}{\citet{Backx1976,Kossmann1989},} \\ & & \multicolumn{2}{l}{\citet{Chung1993,Yan1998}} \\ 2 & $\text{H}_2 + h\nu \rightarrow \text{H}^+ + \text{H} + e^-$ & \multicolumn{2}{l}{\citet{Chung1993} and 2H$^+$ references} \\ 3 & $\text{H}_2 + h\nu \rightarrow 2\text{H}^+ + 2e^-$ & \multicolumn{2}{l}{\citet{Dujardin1987,Kossmann1989a},} \\ & & \multicolumn{2}{l}{\citet{Yan1998}} \\ 4 & $\text{H} + h\nu \rightarrow \text{H}^+ + e^-$ & \multicolumn{2}{l}{\citet{Verner1996}} \\ 5 & $\text{He} + h\nu \rightarrow \text{He}^+ + e^-$ & \multicolumn{2}{l}{\citet{Verner1996}} \\ 6 & $\text{CH}_4 + h\nu \rightarrow \text{CH}_4^+ + e^-$ & \multirow{8}{*}{\hspace{-1em}$\left.\begin{array}{l} \\ \\ \\ \\ \\ \\ \\ \\ \end{array}\right\rbrace$\,\,\citet{Samson1989,Schunk2000}} \\ 7 & $\text{CH}_4 + h\nu \rightarrow \text{CH}_3^+ + \text{H} + e^-$ & \\ 8 & $\text{CH}_4 + h\nu \rightarrow \text{CH}_2^+ + \text{H}_2 + e^-$ & \\ 9 & $\text{CH}_4 + h\nu \rightarrow \text{CH}_2^+ + 2\text{H} + e^-$ & \\ 10 & $\text{CH}_4 + h\nu \rightarrow \text{CH}^+ + \text{H}_2 + \text{H} + e^-$ & \\ 11 & $\text{CH}_4 + h\nu \rightarrow \text{C}^+ + 2\text{H}_2 + e^-$ & \\ 12 & $\text{CH}_4 + h\nu \rightarrow 
\text{H}_2^+ + \text{CH}_2 + e^-$ & \\ 13 & $\text{CH}_4 + h\nu \rightarrow \text{H}^+ + \text{CH}_3 + e^-$ & \\ & & & \\ & Electron-impact ionisation: & & \\ 14 & $\text{H}_2 + e^- \rightarrow \text{H}_2^+ + e^- + e^-$ & \multicolumn{2}{|l}{\citet{VanWingerden1980,Ajello1991},} \\ 15 & $\text{H}_2 + e^- \rightarrow \text{H}^+ + \text{H} + e^- + e^-$ & \multicolumn{2}{|l}{\citet{Jain1992,Straub1996},} \\ 16 & $\text{H}_2 + e^- \rightarrow 2\text{H}^+ + 2e^- + e^-$ & \multicolumn{2}{|l}{\citet{Liu1998,Brunger2002}} \\ 17 & $\text{H} + e^- \rightarrow \text{H}^+ + e^- + e^-$ & \multicolumn{2}{l}{\citet{Brackmann1958,Burke1962},} \\ & & \multicolumn{2}{l}{\citet{Bray1991,MAYOL1997},} \\ & & \multicolumn{2}{l}{\citet{Stone2002,Bartlett2004}} \\ 18 & $\text{He} + e^- \rightarrow \text{He}^+ + e^- + e^-$ & \multicolumn{2}{l}{\citet{LaBahn1970,MAYOL1997},} \\ & & \multicolumn{2}{l}{\citet{Stone2002,Bartlett2004}} \\ 19 & $\text{CH}_4 + e^- \rightarrow \text{CH}_4^+ + e^- + e^-$ & \multirow{8}{*}{\hspace{-1em}$\left.\begin{array}{l} \\ \\ \\ \\ \\ \\ \\ \\ \end{array}\right\rbrace$\,\,\citet{Davies1989,Liu2006}} \\ 20 & $\text{CH}_4 + e^- \rightarrow \text{CH}_3^+ + \text{H} + e^- + e^-$ & \\ 21 & $\text{CH}_4 + e^- \rightarrow \text{CH}_2^+ + \text{H}_2 + e^- + e^-$ & \\ 22 & $\text{CH}_4 + e^- \rightarrow \text{CH}_2^+ + 2\text{H} + e^- + e^-$ & \\ 23 & $\text{CH}_4 + e^- \rightarrow \text{CH}^+ + \text{H}_2 + \text{H} + e^- + e^-$ & \\ 24 & $\text{CH}_4 + e^- \rightarrow \text{C}^+ + 2\text{H}_2 + e^- + e^-$ & \\ 25 & $\text{CH}_4 + e^- \rightarrow \text{H}_2^+ + \text{CH}_2 + e^- + e^-$ & \\ 26 & $\text{CH}_4 + e^- \rightarrow \text{H}^+ + \text{CH}_3 + e^- + e^-$ & \\ & & & \\ & Photo-dissociation: & & \\ 27 & $\text{CH}_4 + h\nu \rightarrow \text{CH}_3 + \text{H}$ & \multirow{4}{*}{$\left.\begin{array}{l} \\ \\ \\ \\ \end{array}\right\rbrace$\,\,\citet{Lavvas2011}, based upon \citet{Wang2000}} \\ 28 & $\text{CH}_4 + h\nu \rightarrow \text{CH}_2 + \text{H}_2$ & \\ 29 
& $\text{CH}_4 + h\nu \rightarrow \text{CH} + \text{H}_2 + \text{H}$ & \\ 30 & $\text{CH}_4 + h\nu \rightarrow \text{H}^- + \text{CH}_3^+$ & \\ \bottomrule \end{tabular} \label{tab:reactionsiono} \end{table*} \begin{figure*} \centering \includegraphics[width=1\textwidth]{Xsections_reduced_newcols_annot} \caption{Photo-absorption (black and grey lines), photo-ionisation (solid coloured lines), and photo-dissociation (dotted coloured lines) cross sections for the four neutral species H$_2$ (panel a), H and He (panel b), and CH$_4$ (panel c). In panel b the curves showing total H and He photo-absorption cross sections are respectively indistinguishable from the H$^+$ and He$^+$ photo-ionisation cross sections.} \label{fig:cross_sections} \end{figure*} \section{Results and Discussion} \label{sec:results} \subsection{Ionisation rates} \label{sec:ionisation_rates} Ionisation rates through the reactions listed in Table~\ref{tab:reactionsiono} are plotted as a function of altitude in Fig.~\ref{fig:Pi_vs_z}, where the solid lines are the rates of photo-ionisation reactions and the dashed lines are the rates of the corresponding electron-impact ionisation reactions. The reaction rate profiles are shown at three different local times: 6~LT (panels a,b), 8~LT (panels c,d), and 12~LT (panels e,f). In panels a, c, and e we have plotted the rates of reactions forming H$_2^+$, H$^+$, and He$^+$ through reactions \#~1 to 5 and \#~14 to 18 (from Table~\ref{tab:reactionsiono}) for photo-ionisation and electron-impact ionisation, respectively. Panels b, d, and f show production rates from the ionisation of CH$_4$ leading to CH$_4^+$, CH$_3^+$, CH$_2^+$, CH$^+$, C$^+$, H$_2^+$, and H$^+$ by reactions \#~6 to 13 for photo-ionisation and \#~19 to 26 for electron-impact ionisation. The neutral atmospheric profiles are not expected to be significantly affected by local time for a fast rotating planet. 
It is hence legitimate to use them over the range of local times in order to assess the ionisation and photo-dissociation rates at different LTs. Photo-ionisation is the main ionisation process in the upper part of the ionosphere, above about 1000~km at 12 LT. At lower altitudes, the energy deposition of high-energy solar soft X-ray radiation ($\lambda < 20$~nm) results in large quantities of energetic secondary electrons allowing electron-impact ionisation to dominate, confirming earlier findings \citep[e.g.,][]{Kim1994,Galand2009,Kim2014}. At all local times, the main ion formed in the upper ionosphere is H$_2^+$. In terms of photo-ionisation, H$_2^+$ is the dominant ion produced above 850~km at 12~LT, at which time its photo-ionisation production rate displays a broad peak between 1000 and 1500~km at 4.2~cm$^{-3}$~s$^{-1}$. This peak is due to photons from the strong solar He II line at 30.4~nm. Although it is the main ion formed, previous modelling \citep[e.g.,][]{Moore2004,Kim2014} shows that the H$_2^+$ ion is efficiently converted into H$_3^+$ at high altitudes by reaction with abundant molecular H$_2$ (see Fig.~\ref{fig:n_vs_p_INMS}), which yields IR thermal emissions \citep{ODonoghue2013}. The highly structured nature of the H$_2$ photo-absorption cross section at wavelengths beyond the H$_2$ ionisation threshold allows for low-energy photons (with wavelengths 84.6 -- 120~nm) to penetrate down to altitudes as low as 800~km at 6~LT and $\sim700$~km at 12~LT if their energy falls within the wings of the very narrow lines that constitute the Lyman, Werner, and Rydberg bands of the cross section. The result is a low-altitude narrow peak in the photo-ionisation rate profiles of atomic H to form H$^+$, and of CH$_4$ to form CH$_4^+$ and CH$_3^+$. These peaks are located between 750 and 850~km altitude at 12 LT, and stand out as being distinct in shape from the other production rate profiles in Fig.~\ref{fig:Pi_vs_z}.
They result from ionisation associated with energy thresholds beyond 80~nm and are therefore sensitive to the highly structured H$_2$ photo-absorption cross section in this spectral region. At its peak near 800~km altitude, the rate of reaction \#~6 producing CH$_4^+$ reaches 3.2~cm$^{-3}$~s$^{-1}$ at 12 LT, making it the dominant reaction at this altitude. As shown in Fig.~\ref{fig:Pi_vs_z}(b,d,f), the production profiles of other hydrocarbon ions (CH$_2^+$, CH$^+$, C$^+$), and H$_2^+$ and H$^+$ from the ionisation of CH$_4$ do not display the strong peaks of the CH$_4^+$ and CH$_3^+$ rates. Indeed, apart from the reactions producing CH$_4^+$ and CH$_3^+$, the other photo-ionisation reactions of CH$_4$ included in this study have ionisation threshold wavelengths that are too low to be affected by low-altitude absorption of photons in the structured region of the H$_2$ cross section (see Fig.~\ref{fig:cross_sections}). Instead, the production rate profiles of these ions have two peaks: a low-altitude peak in the region where solar soft X-ray photons are absorbed, and another, broader peak at higher altitudes caused by solar EUV photons interacting with the high quantities of neutral CH$_4$ discovered to be present at high altitudes (see Fig.~\ref{fig:n_vs_p_INMS}). The solar zenith angle decreases as local time moves towards noon. At higher zenith angles, photons traverse a longer slant path through the atmosphere, so a given amount of absorption is reached at higher altitudes. Hence, the production peaks at 6 LT (panels a and b of Fig.~\ref{fig:Pi_vs_z}) and at 8 LT (panels c, d) are broadened, shifted to higher altitudes, and less intense compared to 12 LT (panels e, f). In addition, the altitude below which electron-impact ionisation dominates over photo-ionisation increases with solar zenith angle as soft X-ray and energetic EUV photons are absorbed at higher altitudes: at 12 LT, electron-impact ionisation dominates below 750~km, whereas at 6 LT, the threshold altitude is near 900~km.
Photo-ionisation and electron-impact ionisation are of similar magnitude between about 750~km and 1000~km at 12~LT, and between about 900~km and 1500~km at 6~LT. In Sect.~\ref{sec:neutral_atm}, we derived neutral atmospheres based on two different temperature profiles, composites A and B, each constructed using a different method to connect the INMS final plunge measurements to previous CIRS and UVIS observations. The photo-ionisation rate profiles shown in solid lines in Fig.~\ref{fig:Pi_vs_z} are produced using the neutral atmosphere derived from temperature composite A. To show the effect of the two different temperature profiles on the ionisation rates, in panel e of Fig.~\ref{fig:Pi_vs_z} we have additionally plotted the photo-ionisation rate profile of H$_2$ to form H$_2^+$ obtained using a neutral atmosphere derived from temperature composite B; this is shown in a dotted blue line. Differences between the two production rate profiles are seen at higher altitudes (where the temperature profiles differ). A neutral atmosphere derived from temperature composite B results in a peak H$_2^+$ photo-ionisation rate that is higher by a factor of 1.5, compared to calculations making use of a neutral atmosphere derived from temperature composite A. At altitudes above about 1700~km, H$_2^+$ photo-ionisation rates are lower by a factor of 2 when using temperature composite B, compared to A. \begin{figure*} \centering \includegraphics[width=0.8\textwidth]{Pi_vs_z_high-res_v2_casesAB} \caption{Photo-ionisation rates (solid coloured lines) and electron-impact ionisation rates (dashed lines) during Grand Finale conditions calculated using the high-resolution solar spectrum \#1 and the high-resolution H$_2$ photo-absorption cross section. Panels a and b show profiles calculated at 6~LT, panels c and d, at 8~LT, and panels e and f, at 12~LT.
Panel f also contains in solid black curves the CH$_4$ photo-ionisation profiles obtained if there were no inflow of CH$_4$ (i.e.\, using CH$_4$ densities from the diffusion model plotted in the solid red curve in Fig.~\ref{fig:n_vs_p_INMS} and discussed in Sect.~\ref{sec:neutral_atm}). All profiles are calculated using a neutral atmosphere derived with temperature composite A (see Sect.~\ref{sec:neutral_atm}), apart from the blue dotted curve in panel e, which shows the effect of using temperature composite B on the H$_2^+$ photo-ionisation rate.} \label{fig:Pi_vs_z} \end{figure*} \subsection{H$_2^+$ number density} In order to validate our results, we compute the number density of the H$_2^+$ ion to compare with INMS measurements taken during Cassini's proximal orbits. The INMS instrument sampled Saturn's ionosphere down to altitudes close to 1700~km during the closest approach of orbits P288 and P292, which occurred on 14 August 2017 and 9 September 2017, respectively \citep{Moore2018,Waite2018,Cravens2019}. The measured H$_2^+$ densities along the trajectory of these orbits are shown in Fig.~\ref{fig:nH2p_vs_lat}, in orange points for P288 (panel a) and blue points for P292 (panel b). The measured densities reach a local minimum of about 0.2 - 0.5~cm$^{-3}$ at the altitude of closest approach. The sharp drop in density at latitudes below -15$^{\circ}$ is due to shadowing from the rings \citep{Moore2018}. \begin{figure*} \centering \includegraphics[width=1\textwidth]{H2p_vs_latitude_INMS} \caption{Comparison of modelled H$_2^+$ number densities (black solid curve) with those measured by INMS during proximal orbit 288 (orange points, panel a) and orbit 292 (blue points, panel b), as a function of planetocentric latitude. The spacecraft closest approach (which took place near noon local time for both orbits) is shown with a vertical dashed line. Corresponding altitude values along the trajectory are also shown. 
Note that while the latitude scale is linear, this is not the case for the altitudes. INMS measurements are from \citet{Moore2018}. The modelled H$_2^+$ number densities presented here are determined at a solar zenith angle of 27$^{\circ}$.} \label{fig:nH2p_vs_lat} \end{figure*} H$_2^+$ is the ion produced in the largest quantities above at least 1000~km in Saturn's upper atmosphere (see Fig.~\ref{fig:Pi_vs_z}). This ion reacts most efficiently with H$_2$ through proton transfer, which rapidly converts much of the population of H$_2^+$ into H$_3^+$: \begin{equation} \text{H}_2^+ + \text{H}_2 \xrightarrow{\text{k}} \text{H}_3^+ + \text{H}, \end{equation} with a reaction rate of $k=2\times 10^{-9}$~cm$^3$s$^{-1}$ \citep{Theard1974}. Below about 2000~km, chemical loss timescales are significantly lower than transport timescales and photochemical equilibrium is valid. There is a balance between the H$_2^+$ production rate ($P_{\text{H}_2^+}(z) = \nu_{\text{H}_2\rightarrow\text{H}_2^+}(z)\,n_{\text{H}_2}(z)$, where $\nu$ is the ionisation frequency) and its chemical loss: $L_{\text{H}_2^+}(z) = k\,n_{\text{H}_2^+}(z)\,n_{\text{H}_2}(z)$. Therefore we can determine the density of H$_2^+$ from our production rates (see Sect.~\ref{sec:ionisation_rates}) and the density of neutral H$_2$ (determined in Sect.~\ref{sec:neutral_atm}) according to the following: \begin{equation} \label{eqn:nH2p} n_{\text{H}_2^+}(z) = \frac{P_{\text{H}_2^+}(z)}{k\,n_{\text{H}_2}(z)}. \end{equation} Making use of Equation~\ref{eqn:nH2p}, we obtain the modelled H$_2^+$ number densities shown in the black solid curves in Fig.~\ref{fig:nH2p_vs_lat}, determined at noon local time. We take into account production of $\text{H}_2^+$ by photo-ionisation (reaction 1, Table~\ref{tab:reactionsiono}), and electron-impact ionisation due to photo-electrons (reaction 14, Table~\ref{tab:reactionsiono}). 
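Under these assumptions, Equation~\ref{eqn:nH2p} is a one-line computation; a minimal sketch (with the rate coefficient from \citet{Theard1974}; names are ours):

```python
import numpy as np

K_H2P_H2 = 2e-9  # H2+ + H2 -> H3+ + H rate coefficient [cm^3 s^-1]

def n_h2plus(prod_h2plus, n_h2, k=K_H2P_H2):
    """Photochemical-equilibrium H2+ density [cm^-3]:
    production P_H2+ balances the loss k * n_H2+ * n_H2."""
    return np.asarray(prod_h2plus) / (k * np.asarray(n_h2))

def ionisation_frequency(n_h2plus_val, k=K_H2P_H2):
    """nu_{H2 -> H2+} = k * n_H2+, as used for comparison with
    INMS-derived ionisation frequencies."""
    return k * np.asarray(n_h2plus_val)
```

For example, an H$_2^+$ density of 0.36~cm$^{-3}$ corresponds to an ionisation frequency of $0.72\times 10^{-9}$~s$^{-1}$.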
Electron-impact ionisation is responsible for about 10\% of the $\text{H}_2^+$ number density at these altitudes. The modelled density values shown in Fig.~\ref{fig:nH2p_vs_lat} are calculated at points along the INMS trajectory where the spacecraft's altitude is less than 2000~km, to ensure the assumption of photochemical equilibrium used in the above calculation is valid. We have also checked that our modelled density profile is not too sensitive to variations in solar zenith angle over these two periods. Keeping in mind that the solar flux is our unadjusted estimate at Saturn for quiet solar activity (see Sect.~\ref{sec:solar_flux}), and that our atmospheric model was derived from the final plunge (on 15 September 2017, see Sect.~\ref{sec:neutral_atm}), the modelled and INMS $\text{H}_2^+$ number densities agree very well. In particular, at the closest approach altitude of 1700~km, we obtain a modelled value of $n_{\text{H}_2^+}=0.36$~cm$^{-3}$, which is consistent with the measurements around 0.2 -- 0.5~cm$^{-3}$ from INMS. This calculated density corresponds to an ionisation frequency of H$_2$ producing H$_2^+$ of $\nu_{\text{H}_2\rightarrow\text{H}_2^+}=0.72\times 10^{-9}$~s$^{-1}$ ($\nu_{\text{H}_2\rightarrow\text{H}_2^+}(z)=k\,n_{\text{H}_2^+}(z)$). This value is in agreement with $(0.7\pm0.3)\times 10^{-9}$~s$^{-1}$, the value derived by \citet{Cravens2019} from INMS measurements during orbit P288 at latitudes between -2$^{\circ}$ and -10$^{\circ}$ around the spacecraft closest approach. \subsection{CH$_4$ photo-dissociation rates} The altitude profiles of the CH$_4$ dissociation rates at 12 LT through reactions \#~27 to \#~30 (see Table~\ref{tab:reactionsiono}) are shown in Fig.~\ref{fig:Pd_vs_z}.
The production rates of CH$_3$ (reaction \#~27), CH$_2$ (reaction \#~28), and CH (reaction \#~29), plotted respectively in green, orange, and purple, have the same dependence on altitude above the peak, since the shapes of the cross sections of these reactions are identical (see Fig.~\ref{fig:cross_sections}c). The difference at low altitudes is due to the different threshold wavelengths of the photons taking part in these reactions. The production peak is located at an altitude of 750~km and reaches 220~cm$^{-3}$~s$^{-1}$ for the production of CH$_3$, 280~cm$^{-3}$~s$^{-1}$ for CH$_2$, and 30~cm$^{-3}$~s$^{-1}$ for CH. These three reactions are driven by Lyman $\alpha$ photons and their rate profiles are not sensitive to the highly structured H$_2$ photo-absorption cross section. The dissociation of CH$_4$ leading to H$^-$+CH$_3^+$ (reaction \#~30) takes place with photons at wavelengths between 46 and 101~nm (see the dotted pink cross section in Fig.~\ref{fig:cross_sections}c). This means that Lyman $\alpha$ photons cannot be responsible for this reaction. Instead, the peak dissociation rate, located at 790~km, is driven by photons penetrating to low altitudes in the structured region of the H$_2$ photo-absorption cross section. Thus reaction \#~30 is the only CH$_4$ photo-dissociation reaction considered in this study whose rate profile is sensitive to the spectral resolution of the H$_2$ photo-absorption cross section used in the model calculation. Its rate profile with altitude is plotted in Fig.~\ref{fig:Pd_vs_z} in pink: the dark pink curve with a peak at 790~km is obtained when running the model with the high-resolution H$_2$ photo-absorption cross section, whereas the light pink curve is obtained using the low-resolution H$_2$ cross section.
Taking into account the extra absorption occurring at low altitudes in the wings of the narrowly structured H$_2$ photo-absorption cross section is necessary to correctly determine the production peak of reaction \#~30. The other CH$_4$ dissociation profiles plotted in Fig.~\ref{fig:Pd_vs_z} are determined with the high-resolution solar spectrum; however, these profiles are not sensitive to which of the solar flux resolutions considered here is used. \begin{figure} \centering \includegraphics[width=0.5\textwidth]{Pd_vs_z_v2} \caption{CH$_4$ photo-dissociation rates during Grand Finale conditions at 12 LT calculated using the high resolution solar spectrum. The coloured curves show photo-dissociation rates including the inflow of CH$_4$ measured by INMS, whereas the black curves represent profiles obtained if there were no inflow of CH$_4$ (using the diffusion model discussed in Sect.~\ref{sec:neutral_atm} for the neutral CH$_4$ density profile). Only the reaction leading to H$^-$+CH$_3^+$ is sensitive to the spectral resolution of the H$_2$ photo-absorption cross section. The profile calculated using the high resolution H$_2$ cross section is shown in dark pink, and in light pink is the resulting profile when the low resolution H$_2$ cross section is used.} \label{fig:Pd_vs_z} \end{figure} \subsection{Effect of the CH$_4$ inflow} As discussed in Sect.~\ref{sec:neutral_atm}, during the final plunge INMS measured an influx of CH$_4$ \citep{Yelle2018} resulting in a constant volume mixing ratio of approximately $1.3 \times 10^{-4}$ at least down to the 1~nbar pressure level (see Fig.~\ref{fig:mixing_ratio_vs_p_INMS}). Without a high altitude source of methane, our diffusive model shows that the CH$_4$ mixing ratio only reaches values greater than $1.3 \times 10^{-4}$ at pressures higher than $10^{-7}$~bar. A large influx of methane from outside the atmosphere has significant consequences for ion and neutral photochemistry.
We have assessed the effect of the methane influx on CH$_4$ photo-ionisation and photo-dissociation rate profiles. To carry out this comparison, we run the energy deposition model using the CH$_4$ number densities reconstructed considering diffusion only (solid red curve in Fig.~\ref{fig:n_vs_p_INMS}) and compare the resulting rates with the production rates determined with a neutral atmosphere that includes a methane influx (dashed red curve in Fig.~\ref{fig:n_vs_p_INMS}). The solid black curves in Figs.~\ref{fig:Pi_vs_z}(f) and \ref{fig:Pd_vs_z} show methane photo-ionisation and photo-dissociation rates, respectively, for an atmosphere without an influx of methane. The coloured curves in these plots are the rate profiles for the case with a methane influx. These figures show that the methane influx affects only the region above the peak, where the reaction rates are strongly enhanced due to the presence of additional neutral methane. The magnitudes of the peak production rates are not changed by the influx. The total column methane photo-dissociation rate above the peak increases from $2.73\times 10^9$~cm$^{-2}$~s$^{-1}$ to $2.84\times 10^9$~cm$^{-2}$~s$^{-1}$ when the methane influx is considered, of which 41~\% forms CH$_3$, 53~\% forms CH$_2$, and 6~\% forms CH. \subsection{Effect of solar spectrum and cross section resolution} The reaction rates plotted in Figure~\ref{fig:Pi_vs_z} are calculated using the high resolution solar spectrum (\#~1, described in Sect.~\ref{sec:solar_flux}), along with the high resolution H$_2$ cross section (see Sect.~\ref{sec:sigma_H2}). In Figs.~\ref{fig:ratio_Pi_lowres} and \ref{fig:ratio_Pi_midres} we show the effect on photo-ionisation production rates at 12 LT of running the energy deposition model at different spectral resolutions.
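The column-rate budget quoted above for the methane influx can be cross-checked with trivial arithmetic (the percentages and column rates are those stated in the text):

```python
# Cross-check of the CH4 column photo-dissociation budget (values from text).
no_inflow = 2.73e9    # cm^-2 s^-1, column rate above the peak, no CH4 influx
with_inflow = 2.84e9  # cm^-2 s^-1, with the INMS-measured influx

enhancement = with_inflow / no_inflow - 1.0
print(round(100 * enhancement, 1))  # ~4 percent column enhancement

branching = {"CH3": 0.41, "CH2": 0.53, "CH": 0.06}  # quoted fractions
assert abs(sum(branching.values()) - 1.0) < 1e-12   # fractions sum to 100%
for fragment, frac in branching.items():
    print(fragment, frac * with_inflow)  # per-fragment column production rate
```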
We note that electron impact ionisation is not affected by the use of high-resolution H$_2$ cross section and solar flux above 80~nm, as the solar photons associated with this spectral range generate photo-electrons that are too low in energy to ionise. The curves in Figs.~\ref{fig:ratio_Pi_lowres} and \ref{fig:ratio_Pi_midres} represent the ratio of photo-ionisation production rates between model calculations using low ($\Delta\lambda=1$~nm, Fig.~\ref{fig:ratio_Pi_lowres}) and mid ($\Delta\lambda=0.1$~nm, Fig.~\ref{fig:ratio_Pi_midres}) resolution solar spectra with calculations using the high resolution solar spectrum (high res.~\#~1 in Fig.~\ref{fig:ratio_Pi_lowres}, and high res.~\#~2 in Fig.~\ref{fig:ratio_Pi_midres}, see Table~\ref{tab:construced_spectra}) and the high resolution H$_2$ cross section. Each coloured curve represents a given reaction and the line styles represent the comparison of different combinations of resolutions with the high resolution case (see Table~\ref{tab:construced_spectra}): in Fig.~\ref{fig:ratio_Pi_lowres}, the dashed lines represent the case where the low resolution solar spectrum is combined with the low resolution H$_2$ cross section, whereas the solid lines result from calculations using the low resolution solar spectrum with the high resolution H$_2$ cross section. In a similar fashion, in Fig.~\ref{fig:ratio_Pi_midres}, calculations using the mid resolution spectrum with the low resolution H$_2$ cross section are in dashed lines, and results combining the mid resolution spectrum with the high resolution cross section are shown in solid lines. The largest deviation from the high resolution reference case is found when using the low resolution H$_2$ photo-absorption cross section, regardless of the resolution of the solar spectrum (see the dashed lines in Figs.~\ref{fig:ratio_Pi_lowres} and \ref{fig:ratio_Pi_midres}). 
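The statement above, that photons longward of 80~nm yield photo-electrons too slow to ionise, follows from a one-line energy estimate (a sketch; the H$_2$ ionisation potential of 15.43~eV is a standard value, not from the text):

```python
# Photo-electron energy from H2 photo-ionisation: E_e = hc/lambda - IP(H2).
HC = 1239.84   # eV nm, photon energy conversion factor
IP_H2 = 15.43  # eV, H2 ionisation potential (standard value, our assumption)

def photoelectron_energy(wavelength_nm):
    """Kinetic energy [eV] of the photo-electron released at this wavelength."""
    return HC / wavelength_nm - IP_H2

print(photoelectron_energy(80.0))  # ~0.07 eV: far below the ~15.4 eV threshold
print(photoelectron_energy(50.0))  # ~9.4 eV: still below the H2 threshold
```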
Indeed, when using a low resolution model, there is no peak in H$^+$ production (from H) at 800~km altitude; this is reflected in panels b of Figs.~\ref{fig:ratio_Pi_lowres} and \ref{fig:ratio_Pi_midres} which show that the production from the low resolution model is equal to $6\times 10^{-5}$ times that from the high resolution model at this altitude (dashed red line). Likewise, the production of low altitude ionised hydrocarbons is not present in the models that use the low resolution cross section, as can be seen in panels d of Figs.~\ref{fig:ratio_Pi_lowres} and \ref{fig:ratio_Pi_midres}: the low resolution model values are equal to $3\times 10^{-4}$ times those from the high resolution model for CH$_4^+$ (minimum of the dashed blue line) and $6\times 10^{-2}$ times for CH$_3^+$ (minimum of the dashed orange line). The low altitude peaks in the production of H$^+$, CH$_4^+$, and CH$_3^+$ that appear when using a high resolution H$_2$ photo-absorption cross section were also noted by \citet{Kim2014}. We find that with the additional inclusion of the large neutral methane influx recorded by INMS during the Cassini Grand Finale, the enhanced CH$_4^+$, and CH$_3^+$ production resulting from the highly structured region of the H$_2$ cross section has a significant impact up to about 1500~km altitude (see Fig.~\ref{fig:ratio_Pi_lowres}d). The cases plotted in solid lines in Figs.~\ref{fig:ratio_Pi_lowres} and \ref{fig:ratio_Pi_midres} make use of the low or mid resolution solar spectra combined with the high resolution H$_2$ photo-absorption cross section. Interestingly, both of these cases capture the features of the fully high resolution model (i.e., high-resolution spectrum and cross section) very well for all the photo-ionisation reactions considered. 
Indeed, production rates determined using the low resolution solar spectrum combined with the high resolution H$_2$ cross section lie between 0.9875 and 1.0025 times those determined with the fully high resolution model. For the mid resolution solar spectrum combined with the high resolution cross section, the calculated production rates lie between 0.995 and 1.004 times those from the fully high resolution model. The largest differences occur at and just below the altitude of the production peaks. Since we are comparing between the same solar spectra at different spectral resolutions in each case -- low res.\,spectrum vs high res.~1 (see Table~\ref{tab:construced_spectra}) in Fig.~\ref{fig:ratio_Pi_lowres}, and mid res.\,spectrum vs high res.~2 in Fig.~\ref{fig:ratio_Pi_midres} -- these differences are only due to the resolution of the solar spectrum. Hence, in the cases modelled in this paper, incorporating a high resolution H$_2$ photo-absorption cross section into energy deposition models turns out to be far more important than making use of a high resolution solar spectrum. \begin{figure*} \centering \includegraphics[width=0.8\textwidth]{ratios_Pi_vs_z_lowres} \caption{Ratio of photo-ionisation production rates at 12 LT, showing a comparison of low-resolution cases with the high-resolution case. $P_{highres}$ is the rate calculated using the high res.~\#~1 solar spectrum (see Table~\ref{tab:construced_spectra}) and H$_2$ photo-absorption cross section. $P_X$ is the rate calculated with the low resolution solar spectrum and low resolution H$_2$ cross section (in dashed lines) or the low resolution spectrum and high resolution cross section (in solid lines). Panels a and b show photo-ionisation rates for different reactions (see Table~\ref{tab:reactionsiono}): \#4 (in red), \#1 (in blue), and \#2 (in green), whereas panels c and d show the rates for reactions \#6 (in blue), \#7 (in orange), and \#8 (in green).
Panels b and d are semi-log versions of panels a and c, respectively.} \label{fig:ratio_Pi_lowres} \end{figure*} \begin{figure*} \centering \includegraphics[width=0.8\textwidth]{ratios_Pi_vs_z_midres} \caption{Ratio of photo-ionisation production rates at 12 LT, showing a comparison of mid-resolution cases with the high-resolution case. $P_{highres}$ is the rate calculated using the high res.~\#~2 solar spectrum (see Table~\ref{tab:construced_spectra}) and H$_2$ photo-absorption cross section. $P_X$ is the rate calculated with the mid resolution solar spectrum and low resolution H$_2$ cross section (in dashed lines) or the mid resolution spectrum and high resolution cross section (in solid lines). Panels a and b show photo-ionisation rates for different reactions (see Table~\ref{tab:reactionsiono}): \#4 (in red), \#1 (in blue), and \#2 (in green), whereas panels c and d show the rates for reactions \#6 (in blue), \#7 (in orange), and \#8 (in green). Panels b and d are semi-log versions of panels a and c, respectively.} \label{fig:ratio_Pi_midres} \end{figure*} \section{Conclusions} \label{sec:conc} The aim of this study has been to determine the ionisation rates and CH$_4$ photo-dissociation rates in Saturn's equatorial upper atmosphere at the time and location of the Cassini final plunge. For this purpose, we have reconstructed upper atmosphere profiles of neutral temperature and major neutral densities, based upon measurements taken by INMS during Cassini's final plunge through the atmosphere. These in-situ INMS measurements were combined with previously measured CIRS limb scans and UVIS stellar occultations using a diffusion model. The resulting neutral composition contains large quantities of CH$_4$ at high altitudes, as directly measured by INMS, which are consistent with an inflow that could originate from the rings \citep{Yelle2018}.
We show that the inflow does not affect the magnitude of the peak methane photo-ionisation or photo-dissociation rates, but causes a large enhancement of the rates above the peak. By fitting the results of our diffusion model to the final plunge measurements, we derive a He volume mixing ratio at the 1 bar pressure level of between 0.120 and 0.134, depending on the temperature profile used. In addition to the details of the neutral atmosphere, we also constructed solar spectra to match the conditions found during the final plunge, i.e. quiet solar conditions during solar minimum. Using values of solar F10.7 and Lyman $\alpha$ flux recorded on the day of the final plunge, we have obtained a number of solar observations taken under similar conditions. These solar measurements have allowed us to construct solar spectra at three different spectral resolutions in order to test the effect of resolution on the energy deposition model results. Indeed, since our model contains H$_2$ photo-absorption cross sections with spectral resolution up to $1\times 10^{-3}$~nm, we would ideally use a solar spectrum of equivalent resolution. The ``high-resolution'' solar spectrum used in this study is from the SUMER instrument on board SOHO and has $\Delta \lambda=4\times 10^{-3}$~nm between 67 and 152~nm. As previous studies at Saturn \citep{Kim2014} and Jupiter \citep{Kim1994} have shown, using a high-resolution H$_2$ photo-absorption cross section that includes the very narrow absorption lines of the Lyman, Werner, and Rydberg bands results in the prediction of an additional layer of hydrocarbon ions at the bottom of the ionosphere. Indeed, we find a layer of hydrocarbon ion production near 800~km altitude (see Sect.~\ref{sec:ionisation_rates}) when using the high resolution H$_2$ cross section in the energy deposition model.
Our calculated photo-ionisation rate profiles differ by less than $\pm 1.25$\% when using the low-resolution ($\Delta \lambda=1$~nm) or mid-resolution ($\Delta \lambda=0.1$~nm) solar spectra, compared to the calculations carried out with the high-resolution spectra. This indicates that, as long as the full resolution H$_2$ photo-absorption cross section is used, it is much less important to also include a high resolution solar spectrum in energy deposition models of the upper atmosphere of Saturn. To this statement, we should apply the caveat that we do not have access to a solar spectrum with as high a spectral resolution as the H$_2$ photo-absorption cross section that we use. It is possible that our high-resolution spectrum is still too coarse to fully capture all of the effects on energy deposition of the highly-structured region of the H$_2$ cross section. Since the INMS measurements during Cassini's final plunge have shown higher levels of methane than have previously been measured, we have included in this study a calculation of CH$_4$ dissociation rates. We thus find production rates of methane fragments that peak near 750~km, with large quantities of fragments produced throughout the thermosphere, with consequences for models of neutral photochemistry. \section*{Acknowledgements} Work at Imperial College London was supported by the UK Science \& Technology Facilities Council (STFC) under grants ST/N000692/1 and ST/N000838/1. We would like to thank Werner Curdt very warmly for having provided us with the high-resolution SOHO/SUMER solar spectrum used in this study. We are grateful to the TIMED/SEE team, and the Whole Heliosphere Interval (WHI) team for providing us with their solar flux data sets. We would like to thank Luke Moore very much for providing us with the INMS H$_2^+$ dataset.
\section{SUPPLEMENTARY INFORMATION} \subsection{Nomenclature} ARPES: angle-resolved photoemission spectroscopy, Bi-2212: Bi$_2$Sr$_2$CaCu$_2$O$_{8+\delta}$, Bi(Y)-2212: Y-doped Bi-2212, DoS: density of states, $\Delta$: energy gap, $\Delta T$: self-heating, $\Gamma$: depairing factor, reciprocal quasiparticle lifetime; HTSC: high temperature superconductor, $I$: current, IJJ: intrinsic Josephson junction, IVC: current-voltage characteristic, $N$: number of IJJ's in the mesa, $n$: concentration of mobile charge carriers, OD, OP, UD: overdoped, optimally doped, underdoped, respectively; QP: quasiparticle, $R_0$: zero bias resistance (in the normal state), $R_0^{QP}$: zero bias quasiparticle resistance in the superconducting state, which differs from $R_0$ because of the pronounced hysteresis in the IVC's at $T<T_c$; $R_n$: tunnel (normal) resistance, $R_{th}$: thermal resistance of a mesa, SIS: Superconductor-Insulator-Superconductor, SIN: Superconductor-Insulator-Normal metal, STM: scanning tunneling microscope, TA: thermal activation, $T_c$: superconducting critical temperature, $U_{TA}$: thermal activation barrier, $V$: voltage, $V_g$: the sum-gap voltage $V_g=2N\Delta/e$. The capital M/S in front of the figure or equation number indicates that the reference is made to the Manuscript or the Supplementary information, respectively. \subsection{Influence of the quasiparticle life time on $dI/dV$ characteristics} We have argued in the manuscript that the observed crossover to $T-$independent slope and the correlated appearance of the negative excess resistance are inconsistent with the single QP tunneling mechanism of the $c-$axis transport.
However, the interlayer tunneling in Bi-2212 may depend on a number of parameters \cite{Millis,Carbotte,Yamada}: (i) The QP life time, $1/\Gamma$, which, according to ARPES data, has a substantial $T-$dependence close to $T_c$, where it increases roughly linearly with $T$; \cite{Norman,Lee2007} (ii) Momentum conservation upon tunneling (coherent vs. incoherent tunneling); (iii) The directionality of $c-$axis tunneling, i.e., the angular dependence of the tunneling matrix element $t(\varphi_1,\varphi_2)$, caused by a non-spherical Fermi surface \cite{Millis,Carbotte}; (iv) The temperature dependence and (v) the angular dependence of the gap in the DoS, which enter points (i-iii) above. Deviations from pure $d-$wave symmetry were reported in recent ARPES experiments \cite{Lee2007}. It may not be obvious that the experimental data cannot be described by single QP tunneling with some fortunate combination of all these parameters. Therefore, we dwell upon this statement below. We start by considering the consequences of points (ii, iii) and (v), i.e., coherence and directionality of tunneling and the symmetry of the order parameter. The single QP tunneling current is given by: \begin{eqnarray}\label{Eq.Tunn} I(V) = \int_0^{2\pi} \int_0^{2\pi} \int_{-\infty}^{+\infty} d\varphi_1 d\varphi_2 dE \\ \nonumber t(\varphi_1,\varphi_2)\rho(E,\varphi_1)\rho(E+eV,\varphi_2)f(E)\left[1-f(E+eV) \right], \end{eqnarray} where $E$ is the energy of the QP with respect to the Fermi surface, $\varphi_{1,2}$ are the angles in the momentum space of the initial and the final state of the QP, and $\rho(E,\varphi_1)$, $\rho(E+eV,\varphi_2)$ are the corresponding QP DoS: \begin{equation}\label{N(G)} \rho(E,\varphi)=\Re[(E-i\Gamma)/\sqrt{(E-i\Gamma)^2-\Delta(\varphi)^2}]. \end{equation} In Fig. S5 a) and b) we show numerically simulated SIS characteristics for different single QP tunneling scenarios.
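Equation (S2) is straightforward to evaluate numerically; a minimal sketch, assuming the standard $d$-wave form $\Delta(\varphi)=\Delta_0\cos 2\varphi$ (the specific gap function is our assumption, not stated in the equation):

```python
import cmath
import math

def qp_dos(E, phi, delta0=0.035, gamma=0.001):
    """QP DoS of Eq. (S2): Re[(E - i*Gamma)/sqrt((E - i*Gamma)^2 - Delta(phi)^2)].

    Energies in eV; the d-wave gap Delta(phi) = delta0 * cos(2*phi) is assumed.
    """
    delta = delta0 * math.cos(2.0 * phi)
    z = E - 1j * gamma
    return (z / cmath.sqrt(z * z - delta * delta)).real

# Antinodal direction (phi = 0): singular pile-up just above the gap edge,
# only Gamma-induced sub-gap states below it.
print(qp_dos(0.040, 0.0))  # > 1: above-gap pile-up
print(qp_dos(0.010, 0.0))  # << 1: Gamma-limited sub-gap states
```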
We assumed $t(\varphi_1,\varphi_2)=$const for incoherent-nondirectional, $t(\varphi_1,\varphi_2) \propto \delta(\varphi_1-\varphi_2)$ for coherent-nondirectional and $t(\varphi_1,\varphi_2) \propto [\cos(k_x)-\cos(k_y)]^2 \delta(\varphi_1-\varphi_2)$ for coherent-directional tunneling \cite{Millis}. The simulations were made for $\Delta(\varphi = 0) = 35$~meV, at low $T$ and small depairing $\Gamma \ll \Delta(0)$. A detailed discussion of $dI/dV$ characteristics can be found in Ref. \cite{Yamada}. Irrespective of the scenario, the zero-bias resistance diverges (the conductance tends to zero) at $T\rightarrow 0$ and $\Gamma=0$. The divergence is removed by a finite $\Gamma$, see the inset in Fig. S6 b). Therefore, $dI/dV(V=0,T\rightarrow 0)$ provides information about the value of $\Gamma$. We also note that the $dI/dV(V)$ characteristics exhibit a sharp peak at the sum-gap voltage $V_g=2\Delta/e$ (except for curve A). It is expected that interlayer QP tunneling in Bi-2212 single crystals should be predominantly coherent (provided that the single crystal is pure enough so that there is no momentum scattering upon tunneling) and strongly directional, with dominating antinodal tunneling \cite{Millis,Carbotte}. Indeed, curve C in Fig. S5 most closely resembles the experimental characteristics (cf. the curves at $T=13.4K$ from Fig. M1a and b), see also the discussion in Refs. \cite{KrPhysC,Yamada}. \begin{figure}\label{FigS1} \includegraphics[width=2.5in]{SimulDwaveIVC_differentFigS1.EPS} \caption{(Color online). Simulated single QP characteristics for different tunneling scenarios. A: incoherent, non-directional, $d-$wave; B: coherent, non-directional, $d-$wave; C: coherent, directional, $d-$wave; D: $s-$wave.
Note that the sharp sum-gap peak and the half-gap singularity in $dI/dV$ are inherent to most scenarios.} \end{figure} In what follows, we will analyze the incoherent-nondirectional single QP tunneling characteristics, representing an "extreme" $d-$wave case with the weakest sum-gap peak. The conclusions will, however, be universal for any single QP tunneling scenario. Now we consider the effect of $T$ (point iv) and $\Gamma$ (point i) on $dI/dV$ characteristics. It is clear that both parameters will smoothen $dI/dV(V)$ characteristics and fill-in the dip in conductance at $V=0$. However, they do this in a slightly different manner. This is illustrated in Figs. S6 and S7. \begin{figure} \includegraphics[width=3.0in]{IVC_G_T5_dWaveInCoh_SimulFigS2.EPS} \caption{(color online). Effect of the depairing factor on single QP characteristics. Shown are simulated characteristics at constant $T$ and $\Delta$ for varying $\Gamma$. Increase of $\Gamma$ tends to reduce the slope of $dI/dV$ characteristics. Inset in panel b) indicates that filling-in of the zero bias dip with $\Gamma$ occurs in a power law manner.} \end{figure} The increase of $\Gamma$ leads to smearing of the QP gap singularity and simultaneous filling-in of the sub-gap states in DoS, Eq.(S2). This naturally leads to smearing of the sum-gap peak and to filling-in of the zero-bias dip in $dI/dV$, as shown in the inset in Fig. S6 b). \begin{figure} \includegraphics[width=3.0in]{IVC_T_G01_dWaveInCoh_SimulFigS3c.EPS} \caption{(color online). Effect of $T$ and $\Delta(T)$ variation on single QP characteristics. Calculations were made for incoherent, non-directional, $d-$wave tunneling, the experimental $\Delta(T)$ dependence and linear $\Gamma(T)$. Note that increase of $T$ and decrease of $\Delta$ leads not only to smearing of $dI/dV$ characteristics, but also the appearance of the logarithmic zero-bias singularity. Inset shows the zero bias resistance vs. $1/T$ for different values of $\Gamma(T_c)$. 
It is seen that the excess QP resistance rapidly develops at $T<T_c$ even in the $d-$wave case. } \end{figure} The increase of $T$ leads to the decrease of $\Delta$. Both factors result in an increased number of excited QP's above the gap, which leads to rapid filling-in of the zero bias dip in conductance. However, the increase of $T$ (unlike $\Gamma$) keeps the gap singularity in the DoS unchanged. From Fig. S7 it is seen that this leads not only to filling-in of the dip, but also to the development of a maximum at $V=0$, representing the so-called zero-bias logarithmic singularity \cite{Larkin}. The origin of the singularity is very simple: at elevated $T$ there is a substantial amount of thermally excited QP's just above the gap. At $V=0$ the partly filled gap singularities in the two electrodes are co-aligned, causing a large current flow from one electrode to another, which is exactly compensated by the counterflow from the second electrode. However, the exact cancellation is lifted at an arbitrarily small voltage across the junction, leading to a sharp maximum in $dI/dV$. From Fig. S7 it is seen that the zero-bias maximum in $d-$wave junctions is pronounced even in the extreme case of the weakest sum-gap peak. We have checked that the zero-bias logarithmic singularity is inherent to single QP characteristics, irrespective of the tunneling scenario. Interestingly, the experimental characteristics do not exhibit the zero-bias logarithmic singularity, see Fig. M2a). Within the single QP tunneling scenario this is only possible if the depairing factor increases with $T$ at such a rate that it smears out the gap singularity in the DoS and thus suppresses development of the singularity. We estimate that the corresponding $\Gamma$ at $T=T_c$ should be in the range 2-5 meV, somewhat smaller than deduced from ARPES \cite{Norman,Lee2007}. Now we can substantiate our statement. In Fig.
S8 we show an attempt to maintain the same slope of the $\ln dI/dV(V)$ curves for progressively decreasing $\Delta$. Since an increase of both $\Gamma$ and $T$ leads to filling-in of the zero bias dip, i.e., to a decrease of the slope, the slope can be maintained only if $\Gamma$ and $T$ move in opposite directions. It is impossible to maintain the slope under the physical requirements that $\Gamma$ increases and $\Delta$ decreases simultaneously with increasing $T$, because all three parameters work in the same direction: they decrease the slope. The inset in Fig. S7 shows the calculated zero bias resistance vs. $1/T$ for incoherent, non-directional, $d-$wave single QP tunneling and different $\Gamma$. It is seen that a prominent excess resistance appears even in this extreme $d-$wave case and even for very large $\Gamma$. The dashed line shows that the excess resistance grows approximately as $T^{-2}$ in this case. The abrupt appearance of the excess QP resistance is a mere reflection of the abrupt opening of the superconducting gap at $T_c$. Therefore, the $T-$independent slope and the absence of excess resistance observed for UD mesas are inconsistent with single QP tunneling and point towards a doping-dependent change in the interlayer transport mechanism. \begin{figure} \includegraphics[width=3.0in]{dWaveSimul_vsD_G_TFigS4b.EPS} \caption{\label{Fig1} (Color online). An attempt to maintain the slope of single QP characteristics at different $T$. It is seen that the slope can be maintained only under the unphysical requirement that $\Gamma$ increases and $\Delta$ decreases with decreasing $T$.} \end{figure} \subsection{Analysis of self-heating and non-equilibrium phenomena} Self-heating can distort the IVC's of Josephson junctions at large bias.
The temperature rise due to self-heating is given by a simple expression \begin{equation*}\label{Heat} \Delta T = P R_{th}(T), \end{equation*} where $P=IV$ is the dissipated power and $R_{th}$ is the effective thermal resistance of the junctions, which is $T-$dependent and, therefore, bias dependent \cite{Heat}. The influence of self-heating on the IVC's of our mesa structures was thoroughly studied in Refs.\cite{JAP,KrPhysC,Heat,HeatCom} (see also a discussion of the scaling of IVC's in the inset of Fig. 1 and Fig. 3 from Ref.\cite{KrTemp}). Despite the relative simplicity of the phenomenon, the discussion of self-heating in intrinsic tunneling spectroscopy has caused considerable confusion, a large part of which has been caused by a series of publications by V.Zavaritsky \cite{Zavar}, in which he ``explained'' the non-linearity of intrinsic tunneling characteristics by assuming that there is no intrinsic tunneling. The irrelevance of this model to our subject was discussed in Ref.\cite{HeatCom}. A certain confusion might also be caused by the large spread in $R_{th}$ reported by different groups \cite{Gough,AYreply,Heat,WangH,Lee}. For the sake of clarity it should be emphasized that those measurements were made on samples of different geometries. It is clear that $R_{th}$ depends strongly on the geometry \cite{JAP,Heat}, e.g., $R_{th}$ can be much larger in suspended junctions with a poor thermal link to the substrate \cite{WangH} than in the case when both the top and bottom surfaces of the junctions are well thermally anchored to the heat bath \cite{Lee}. For mesa structures similar to those used in this study (a few $\mu m$ in-plane size, containing $N\simeq 10$ IJJ), there is a consensus that $R_{th}(4.2K) \sim 30-70 K/mW$ (depending on bias) \cite{AYreply,Heat} and $R_{th}(90K)\sim 5-10 K/mW$ \cite{Heat}.
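The expression above makes the size of the effect easy to estimate; a sketch using the consensus thermal resistances (the 0.82~mW sum-gap dissipation quoted later in this section is used as an example input):

```python
# Self-heating estimate: Delta_T = P * R_th (values quoted in the text).
def self_heating(power_mW, r_th_K_per_mW):
    """Temperature rise [K] for dissipated power P [mW] and thermal resistance R_th [K/mW]."""
    return power_mW * r_th_K_per_mW

# At base T = 4.2 K, R_th ~ 30-70 K/mW for few-micron mesas with N ~ 10 IJJs:
for r_th in (30, 70):
    print(self_heating(0.82, r_th))  # 24.6 K and 57.4 K at P = 0.82 mW
```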
Larger values $R_{th} > 100K/mW$ claimed by some authors \cite{Gough} are unrealistic for our mesas because the mesas can withstand dissipated powers in excess of $10 mW$ without melting. Yet, we note that talking about a typical value of $R_{th}$ is as senseless as talking about a typical value of a contact (Maxwell) electrical resistance: both depend on the geometry. Therefore, reduction of the mesa size provides a simple way to reduce self-heating \cite{JAP}. Consequently, the variation of $dI/dV$ characteristics with the junction size and geometry provides an unambiguous way of discriminating artifacts of self-heating from the spectroscopic features \cite{Heat}. \begin{figure} \includegraphics[width=3.8in]{Heat_Simul.EPS} \caption{\label{FigS7} (Color online). Simulated distortion of SIS tunneling characteristics by strong self-heating. a) Undistorted IVC's at different $T$. Simulations were made for typical parameters of our mesas, the $T-$dependent thermal conductivity of Bi-2212, and for coherent, directional, $d-$wave tunneling. b) Distorted IVC's at the same base temperatures; c) The mesa temperature as a function of bias. d) Temperature dependence of the genuine superconducting gap (solid line) and the ``measured'' gap obtained from distorted IVC's (dashed curve). Note that even strong self-heating ($T$ reaches $T_c/2$ at $V_g$ at 4.2K) does not cause considerable distortion of the measured gap. Data from Ref.{\cite{KrPhysC}}} \end{figure} How self-heating {\it can} distort the IVC's of Josephson junctions is obvious: since self-heating raises the effective $T$, it may affect the IVC only via $T-$dependent parameters. There are three such parameters: the quasiparticle resistance, the superconducting switching current \cite{Kras_TA,Collapse}, and the superconducting gap.
They will affect the IVC's of Bi-2212 mesas, containing several stacked Josephson junctions, in the following manner: The consecutive increase of $T$ upon sequential switching of IJJ's from the superconducting to the resistive state will distort the periodicity of the quasiparticle branches. Each consecutive QP branch will have a smaller QP resistance (smaller $V$ at given $I$) and a smaller switching current, see the discussion in Refs. \cite{HeatCom,KrTemp}. This type of distortion of the QP branches becomes clearly visible (at base $T=4.2K$) when $\Delta T \gtrsim 20K$ \cite{JAP,HeatCom}. The $T-$dependence of $\Delta$ may lead to the appearance of back-bending of the IVC at the sum-gap knee. Fig. S9 reproduces the results of a numerical simulation of such distortion, made specifically for the case of a Bi-2212 mesa with the corresponding $T-$dependent parameters (see Ref.\cite{KrPhysC} for details). Fig. S9 a) shows a set of simulated undistorted IVC's at different $T$ for coherent, directional, $d-$wave tunneling with the $\Delta(T)$ shown by the solid line in Fig. S9 d). Panels b) and c) show the distorted IVC's and the actual junction temperature, respectively. The dashed line in panel d) represents the ``measured'' gap obtained from the peak in the distorted $dI/dV$ characteristics. Remarkably, the deviation from the true $\Delta(T)$ is marginal, despite large self-heating, $\Delta T \simeq T_c/2$ at $4.2 K$! The robustness of the measured gap with respect to self-heating is caused by the flat $T-$dependence of the superconducting gap at $T<T_c/2$. If self-heating reaches $T_c$ at the sum-gap knee, an acute back-bending appears in the IVC's; however, even this does not cause principal changes in the ``measured'' $\Delta(T)$. In experiments on large mesas \cite{Gough} or suspended structures \cite{WangH}, in which acute self-heating was reported, no clear Ohmic tunneling resistance could be observed at high bias, in stark contrast to our mesas, see Fig.
M1a, and Refs.\cite{KrTemp,KrMag,Doping}. Simulations as in Fig. S9 clearly show that, irrespective of self-heating, the "measured" gap vanishes at $T=T_c$, i.e., simultaneously with the true gap. Thus trivial self-heating simply cannot "hide" the qualitative $\Delta(T)$ dependence. For example, there is no way in which one can obtain a vanishing "measured" (self-heating affected) gap if the true gap is $T-$independent. Therefore, the discrepancy between the strong $T-$dependence of the gap at $T_c$ measured in intrinsic tunneling (and recent ARPES \cite{Lee2007}) and the complete $T-$independence in STM measurements \cite{Renner} cannot be attributed to self-heating. To quantitatively analyze the significance of self-heating in our mesas, we provide values of the dissipation power $P=IV$ for the studied mesas: for the near-OP Bi(Y)-2212 mesa at $T=4.9K$, the dissipation at the sum-gap peak is $0.82 mW$. From the previous analysis of the size dependence of $V_g$ for similar mesas \cite{Heat} it was observed that $V_g$ becomes size-independent, and therefore not affected by self-heating, for small mesas with $P(V_g) < 1mW $. All the mesas studied in this manuscript fall into this category. All of them exhibit perfect periodicity of quasiparticle branches \cite{KrTemp} and none of them exhibits back-bending at any $T$, which, according to Fig. S9, implies that self-heating at $V_g$ is less than $T_c/2$ at $T=4.2K$. This is consistent with previous in-situ measurements \cite{Heat}, $R_{th}(T=4.2K,V=V_g) \simeq 30-40K/mW$. The rest of the data presented in the manuscript is not affected by self-heating for the following reasons: the effective $T_{eff}$ in Fig. M2b) was obtained at $V/N = 30meV$, at which the dissipation power was only $P \simeq 3.8 \mu W$ at $T=5.5K$ (for comparison, $P(V_g)=0.21 mW$ for the same curve). Therefore, self-heating here is negligible (sub-Kelvin). Besides, as argued in the manuscript, the slope of $\ln dI/dV(V)$ in Fig. 
M2 is apparently $V$ and $P$ independent; therefore, the same data could be obtained at much smaller or larger $P$. The data in Figs. M3 b-d) and M4 are free from self-heating because they were obtained at zero bias. Finally, it is important to emphasize that the concept of heat diffusion is inapplicable for small Bi-2212 mesas containing only a few atomic layers. The phonon transport in this case is ballistic \cite{Heat,Uher} and the energy flow from the mesa is determined not by collisions of the tunneled non-equilibrium QPs with thermal phonons, but by spontaneous emission of a phonon upon relaxation of the non-equilibrium QP \cite{Cascade}. This process is not hindered at $T=0$. Therefore, the effective $R_{th}$ (and self-heating) can be much smaller, because it is not limited by the poor thermal conductivity at $T=0$ but is determined by the fast, almost $T-$independent, non-equilibrium QP relaxation time. The concept of self-heating becomes adequate only in the bulk of the Bi-2212 crystal, where the dissipation power density and the temperature rise are much smaller due to the much larger area of the crystal. For more details see the discussion in Ref. \cite{Heat}. The non-equilibrium energy transfer channel is specific to atomic-scale intrinsic Josephson junctions made of perfect single crystals. It can explain the remarkably low self-heating at very high bias \cite{Cascade}. \subsection{Thermal activation analysis} \begin{figure} \includegraphics[width=3.0in]{Cosh_Fit_ResTun.EPS} \label{FigS6} \caption{(Color online). a) Simulated $dI/dV$ characteristics according to Eq.(M1) for the same $T$ as in Fig. M2 a). It is seen that at $T>T_c$ they provide a good fit to the experimental data, including the crossing point. b) The slope of the experimental characteristics from Fig. M2 a) at $V/N=30mV$ (red squares, left axis) and the logarithm of the zero bias resistance (green circles, right axis) vs. $1/T$. 
It is seen that both follow the TA behavior at $T>T_c$ and simultaneously deviate downwards in the superconducting state, indicating that the appearance of the negative excess resistance and the crossover to quantum $c-$axis transport are correlated. The dashed-dotted line represents the slope of the resonant tunneling characteristics (shown for comparison).} \end{figure} The TA current is given by a simple expression: \begin{equation} I_{TA} \propto n(T) \exp\left[-\frac{U_{TA}}{k_B T}\right]\sinh\left[\frac{eV}{2k_B T}\right], \label{ITA} \end{equation} from which follows Eq.(M1). Fig. S10 a) shows the $dI/dV(V)$ characteristics in a semi-log scale calculated from Eq.(M1) for the same $T$ and scale as in Fig. M2 a), in order to facilitate a direct comparison. Simulations were made for constant $\sigma_N$ and $U_{TA} = 24 meV$, which were adjusted to obtain $R_0(T)$ values similar to those in the experimental data from Fig. M2 a) at $T>T_c$, and for $n(T) \propto T$. Considerable growth of the mobile carrier concentration $n(T)$ with $T$ in UD crystals was reported in Hall effect measurements \cite{Hall}. The assumption of linear $n(T)$ is supported by the observation that for strongly UD mesas it is $R_0$ (circles in Fig. S10 b), rather than $R_0/T$, that is more accurately described by the Arrhenius law (dashed line). From a comparison of Figs. M2a) and S10a) it is seen that Eq.(M1) does provide a good fit to the experimental data at $T>T_c$, including the crossing point. Simultaneously, a dramatic discrepancy is seen below $T_c$. This indicates that the slopes of the $dI/dV(V)$ curves in the superconducting state are not determined by the real $T$, but by some constant effective temperature, as shown in Fig. M2 b). Fig. S10 b) shows the slope of the experimental curves from Fig. M2a) at $V/N = 30 mV$ (red squares) and the logarithm of the zero bias resistance (green circles) as a function of $1/T$. 
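The logarithmic-slope relation used for the TA analysis, $d/dV[\ln(dI/dV)] = (e/2k_BT)\tanh(eV/2k_BT)$, follows if the conductance varies as $\cosh(eV/2k_BT)$. A quick numerical check of this identity (a sketch, not the authors' code; the temperature and voltage range are arbitrary illustration values):

```python
import numpy as np

# Verify numerically that for dI/dV ~ cosh(eV / 2kT) the logarithmic slope
# d/dV[ln(dI/dV)] equals (e/2kT) * tanh(eV/2kT), which saturates at e/2kT
# for eV >> 2kT.

kB = 8.617e-5                        # Boltzmann constant, eV/K
T = 100.0                            # K, arbitrary test temperature
kT = kB * T                          # eV
V = np.linspace(0.0, 0.06, 2000)     # volts; e*V in eV is numerically V

dIdV = np.cosh(V / (2 * kT))         # conductance up to a constant prefactor
slope = np.gradient(np.log(dIdV), V) # numerical d/dV[ln(dI/dV)]
analytic = (1.0 / (2 * kT)) * np.tanh(V / (2 * kT))

print(np.max(np.abs(slope - analytic)))  # small finite-difference error
```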
Clear linear dependence of both at $T>T_c$ indicates that the IVC's in the normal state have a thermal activation nature both at zero and finite bias, consistent with Eq.(M1). According to Eq.(M1), the effective temperature can be explicitly obtained from the slope $d/dV[\ln(dI/dV)]=$ $(e/2k_BT) \tanh (eV/2k_BT)$ $\simeq e/2k_BT$. Here there are two slight obstacles: first, the $\tanh$ term deviates from unity at high $T$, leading to eventual saturation of the slope at $1/T\rightarrow 0$. The exact $T-$dependence according to Eq.(M1) is shown by the solid (magenta) line in Fig. S10 b). We emphasize that there are no fitting parameters in this curve. It is seen that Eq.(M1) reproduces quite well the $T-$dependence of the slopes, except for a small offset. This offset is the second obstacle: it is caused by the fact that the slope of the experimental curves becomes negative at large $T$ \cite{Doping}. The simple TA expression, Eq.(M1), does not reproduce this negative slope. However, more advanced TA simulations do reproduce the negative slope and, in fact, all the details of the experimental $dI/dV$ characteristics at $T>T_c$ \cite{Silva}. It should be noted that TA-like behavior is quite universal for many processes. Besides pure thermal activation over a finite barrier without a gap in the electronic DoS \cite{Silva}, it may appear in pure tunneling characteristics in the presence of a gap in the electronic DoS, due to TA-like behavior of the Fermi factor (even though this would require a very specific correlation between the $T-$dependent factors mentioned in sec.I); as a result of inelastic tunneling via an impurity \cite{Impurity}; or as elastic tunneling via a resonant state \cite{Abrikosov} in the tunnel barrier. Abrikosov has shown that the latter scenario can quantitatively reproduce the interlayer characteristics of HTSC in the normal state \cite{Abrikosov}. The blue dash-dotted line in Fig. 
S10 b) represents the slope $d/dV[\ln(dI/dV)]$ in the case of resonant tunneling with the appropriate energy of the resonant state. It follows the simple linear TA behavior at $k_B T$ lower than the resonant energy and saturates at higher $T$. Comparison with the resonant tunneling calculation shows that the saturation of the slope at high $T$ results in the appearance of a negative offset in the linear $d/dV[\ln(dI/dV)]$ vs $1/T$ dependence at $1/T\rightarrow 0$. Below the saturation temperature, the actual temperature can be easily extracted from the $d/dV[\ln(dI/dV)]$ slopes, compensated by the offset at $1/T \rightarrow 0$, as indicated in Fig. S10 b): \begin{equation*}\label{TeffOff} T_{eff} = \frac{e}{2k_B \left[d/dV[\ln(dI/dV)]-Offset(T^{-1}\rightarrow 0)\right]}. \end{equation*} This expression was used for obtaining the effective $T$ shown in Fig. M2b). From Fig. S10 b) it is seen that at $T<T_c$ both $\ln R_0$ and the slope of the experimental curves simultaneously deviate downwards with respect to the linear TA behavior. Therefore, both are consequences, at zero and finite bias respectively, of the same phenomenon (which we attribute to the doping-induced change in the $c-$axis transport).
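The $T_{eff}$ extraction above amounts to inverting the measured logarithmic slope after subtracting its high-$T$ saturation offset. A minimal sketch (synthetic numbers, ours, purely for illustration):

```python
# Sketch of the T_eff extraction: the measured slope d/dV[ln(dI/dV)]
# (in V^-1) is compensated by its offset at 1/T -> 0 and then inverted
# via slope = e / (2 k_B T_eff).

kB = 8.617e-5  # Boltzmann constant, eV/K

def t_eff(slope_per_V, offset_per_V=0.0):
    """Effective temperature (K) from the compensated logarithmic slope."""
    return 1.0 / (2 * kB * (slope_per_V - offset_per_V))

# A compensated slope of ~58 V^-1 corresponds to T_eff ~ 100 K:
print(t_eff(58.0))
```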
\section{Introduction} The proper theory of the motion of a spinning mass in a gravitational field is due to Mathisson~\cite{1}, Papapetrou~\cite{2}, and Dixon~\cite{3}. A main aspect of this theory, which already appears in the work of Mathisson~\cite{1}, is the existence of a spin-curvature force \begin{equation}\label{eq:1} F^\alpha =-\frac{c}{2}R^\alpha_{\;\;\beta \mu \nu} u^\beta S^{\mu\nu}.\end{equation} Here $u^\mu$ is the unit four-velocity vector of the spinning mass; that is, $u^\mu =dx^\mu /d\tau$, where $x^\mu =(ct,x,y,z)$ and $\tau /c$ is the proper time. The signature of the metric is $+2$ throughout this paper. In the linear approximation of general relativity, with the spinning mass held at rest in the stationary exterior field of a rotating central source and keeping only first-order terms in spin, $F^\alpha=(0,\mathbf{F})$, where \cite{4} \begin{equation}\label{eq:2} \mathbf{F}=-\mathbf{\nabla} (\mathbf{S}\cdot \mathbf{\Omega}_P).\end{equation} Here $\mathbf{\Omega}_P$ is the precession frequency of a test gyroscope held at rest outside the central source of angular momentum $\mathbf{J}$; far from the source, \begin{equation}\label{eq:3}\mathbf{\Omega}_P =\frac{G}{c^2r^5}[3(\mathbf{J}\cdot \mathbf{r})\mathbf{r}-\mathbf{J}r^2],\end{equation} so that $c\mathbf{\Omega}_P=\mathbf{B}_g$ is the familiar dipolar gravitomagnetic field of the source. It follows from Eq.~\eqref{eq:2} that one can define the Hamiltonian for the spin-gravity coupling as \begin{equation}\label{eq:4}\mathcal{H}=\mathbf{S}\cdot \mathbf{\Omega}_P.\end{equation} This Mathisson Hamiltonian is a direct analogue of $-\mathbf{\mu}\cdot \mathbf{B}$ coupling in electrodynamics~\cite{5}. Imagine now the test gyroscope that is held at rest but precesses with frequency $\mathbf{\Omega}_P$ as before. If the gravitational interaction is turned off, the gyro keeps its direction fixed with respect to the background global inertial frame by the principle of inertia. 
The former precessional motion is recovered, however, from the viewpoint of a local observer that is at rest in a frame of reference rotating with frequency $\mathbf{\Omega}=-\mathbf{\Omega}_P$. This is an instance of the gravitational Larmor theorem~\cite{5}, which follows from Einstein's principle of equivalence. To this latter motion in the rotating frame, one can associate a new Hamiltonian $\mathcal{H}'$, which can be obtained from $\mathcal{H}$ by replacing $\mathbf{\Omega}_P$ with $-\mathbf{\Omega}$. Thus the Hamiltonian due to the coupling of spin with rotation is given by \begin{equation}\label{eq:5}\mathcal{H}'=-\mathbf{S}\cdot \mathbf{\Omega}.\end{equation} The classical couplings \eqref{eq:4} and \eqref{eq:5} are expected to be valid for intrinsic spin as well. This is mainly based on the study of relativistic wave equations in gravitational fields and accelerated frames of reference, see~\cite{6} for some examples; a more complete discussion as well as a list of references is given in \cite{7}. It follows from the inertia of intrinsic spin that to every spin Hamiltonian in a laboratory fixed on the Earth, one must add \begin{equation}\label{eq:6} \delta \mathcal{H}\approx -\mathbf{S}\cdot \mathbf{\Omega}_\oplus +\mathbf{S}\cdot \mathbf{\Omega}_{P\oplus}.\end{equation} For a spin-$\frac{1}{2}$ particle, the spin-rotation part of Eq.~\eqref{eq:6} implies that the maximum energy difference between spin-up and spin-down states is $\hbar \Omega_\oplus \approx 10^{-19}$eV. As pointed out in~\cite{8}, the experimental results of \cite{9} constitute an indirect measurement of this coupling. Further evidence in this direction, based on an analysis of the muon $g-2$ experiment, is discussed in \cite{10}; for other observational aspects of spin-rotation coupling see \cite{7}. Moreover, the corresponding energy difference for the spin-gravity term in Eq.~\eqref{eq:6} is $\hbar \Omega_{P\oplus}\approx 10^{-29}$eV. 
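The two energy scales quoted above can be checked to order of magnitude. The sketch below (ours, not part of the paper) uses standard values for the Earth's rotation rate, radius, and angular momentum, which are our assumptions:

```python
# Order-of-magnitude check of the two couplings in Eq. (6) for a laboratory
# on the Earth.  J_earth ~ 5.86e33 kg m^2/s is an assumed standard value.

hbar = 1.054571817e-34     # J s
eV = 1.602176634e-19       # J per eV
G, c = 6.674e-11, 2.998e8  # SI units
Omega_earth = 7.292e-5     # rad/s, Earth's sidereal rotation rate
J_earth = 5.86e33          # kg m^2/s, Earth's spin angular momentum
r = 6.371e6                # m, field evaluated at the Earth's surface

# Spin-rotation coupling: hbar * Omega ~ 1e-19 eV (order of magnitude)
E_rot = hbar * Omega_earth / eV

# Spin-gravity (Mathisson) coupling: hbar * Omega_P with the dipole
# magnitude Omega_P ~ G J / (c^2 r^3) from Eq. (3)
Omega_P = G * J_earth / (c**2 * r**3)
E_grav = hbar * Omega_P / eV

print(E_rot, E_grav)   # ~5e-20 eV and ~1e-29 eV
```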
As discussed in \cite{7}, even in a space-borne laboratory in orbit around Jupiter, this Mathisson coupling would still be too small to be measurable at present by several orders of magnitude. An interesting recent discussion of the theoretical as well as observational aspects of spin-gravity coupling is contained in \cite{11}. Finally, a fundamental aspect of the Mathisson coupling should be noted here: For a classical gyro, its spin is proportional to its mass and the gravitational force \eqref{eq:2} is then proportional to the mass of the gyro, as it should be; however, for a spin-$\frac{1}{2}$ particle, the magnitude of spin is $\hbar /2$ and the corresponding gravitational Stern-Gerlach force \eqref{eq:2} violates the universality of free fall. Thus the weight of a neutron with spin up is generally different from the weight of the same neutron with spin down; however, this effect is too small to be measurable in the foreseeable future~\cite{7}. Nevertheless, this observation indicates that the simple coupling \eqref{eq:4} for intrinsic spin as well as its Larmor-equivalent \eqref{eq:5} could have consequences that are of basic significance for relativity theory and gravitation. This important point constitutes the main theme of this paper and will be illustrated in subsequent sections. In practice, it is indeed much simpler to work with \eqref{eq:5} than with \eqref{eq:4}; therefore we concentrate on the photon spin-rotation coupling in the rest of this paper. \section{Photon helicity-rotation coupling}\label{s:2} Consider a thought experiment in which an observer rotates uniformly with frequency $\Omega$ about the direction of propagation of an incident plane monochromatic electromagnetic wave of frequency $\omega$. The object of the experiment is to measure $\omega '$, the wave frequency according to the rotating observer. 
Specifically, we assume that the wave propagates along the $z$ direction and the observer follows a circle of radius $r$ about the origin of spatial coordinates in the $(x,y)$ plane. The natural orthonormal tetrad frame associated with the observer is given by \begin{align}\label{eq:7} \lambda^\mu _{\;\; (0)}&= \gamma (1,-\beta \sin \varphi ,\beta \cos \varphi ,0),\\ \label{eq:8} \lambda^\mu _{\;\;(1)} &= (0,\cos \varphi,\sin \varphi,0),\\ \label{eq:9} \lambda^\mu_{\;\;(2)} &= \gamma (\beta ,-\sin \varphi ,\cos \varphi ,0),\\ \label{eq:10} \lambda^\mu_{\;\;(3)}&=(0,0,0,1).\end{align} Here $\varphi =\Omega t=\gamma \Omega \tau /c$, $\beta =r\Omega /c$, and $\gamma =(1-\beta ^2)^{-1/2}$. The observer's local temporal axis is along its four-velocity $\lambda^\mu _{\;\;(0)}$ and its spatial frame $\lambda^\mu_{\;\;(i)}$, $i=1,2,3$, is such that its axes point along the radial, tangential, and $z$ directions, respectively. According to the standard Doppler effect, the frequency of the wave measured by the observer is $\omega '_D=-k_\mu \lambda ^\mu_{\;\;(0)} =\gamma \omega$, where the Lorentz factor accounts for time dilation. In this general approach, the rotating observer is assumed to be pointwise inertial and hence at rest in a comoving inertial frame (``hypothesis of locality") and the Doppler effect follows from the invariance of the phase of the wave under Lorentz transformations between the global background inertial frame and the instantaneous inertial frames of the observer. There is, however, another way to measure frequency based on the fact that at least a few periods of the wave must be registered before the observer can determine $\omega '$. To this end, we suppose that the observer can make pointwise determinations of the incident field. 
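As a consistency check (ours, not part of the paper), the tetrad \eqref{eq:7}--\eqref{eq:10} can be verified numerically to be orthonormal with respect to the Minkowski metric with signature $+2$; the test values of $\beta$ and $\varphi$ are arbitrary:

```python
import numpy as np

# Check that eta_{mu nu} lambda^mu_(a) lambda^nu_(b) = eta_(a)(b)
# for the rotating-observer tetrad of Eqs. (7)-(10).

eta = np.diag([-1.0, 1.0, 1.0, 1.0])   # Minkowski metric, signature +2

def tetrad(beta, phi):
    """Rows are lambda^mu_(0) ... lambda^mu_(3) at angle phi."""
    g = 1.0 / np.sqrt(1.0 - beta**2)   # Lorentz factor gamma
    s, c = np.sin(phi), np.cos(phi)
    lam0 = g * np.array([1.0, -beta * s, beta * c, 0.0])   # four-velocity
    lam1 = np.array([0.0, c, s, 0.0])                      # radial
    lam2 = g * np.array([beta, -s, c, 0.0])                # tangential
    lam3 = np.array([0.0, 0.0, 0.0, 1.0])                  # z direction
    return np.stack([lam0, lam1, lam2, lam3])

L = tetrad(beta=0.3, phi=0.7)
gram = L @ eta @ L.T          # should reproduce eta itself
print(np.max(np.abs(gram - eta)))
```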
The result can be expressed in terms of instantaneous Lorentz transformations or equivalently as \begin{equation}\label{eq:11} F_{(\alpha )(\beta)}(\tau ) =F_{\mu\nu}\lambda^\mu_{\;\;(\alpha )} \lambda^\nu_{\;\;(\beta)} .\end{equation} This quantity, upon Fourier analysis, yields \cite{12} \begin{equation}\label{eq:12} \omega '=\gamma (\omega \mp \Omega).\end{equation} The upper (lower) sign refers to an incident positive (negative) helicity wave. For the photon energy, we find that \begin{equation}\label{eq:13} E'=\gamma (E\mp \hbar\Omega ),\end{equation} where $\pm \hbar$ is the photon helicity. Thus Eqs. \eqref{eq:12} and \eqref{eq:13} contain, in addition to the transverse Doppler effect, the influence of the spin-rotation coupling. Eq.~\eqref{eq:12} can be written as $\omega '=\omega'_D (1\mp \Omega /\omega)$, where $\Omega /\omega$ is the ratio of the reduced wavelength of the radiation $\lambda /(2\pi)$ to the acceleration length $\mathcal{L}$ of the observer, $\mathcal{L}=c/\Omega$. The Doppler effect is recovered when this ratio vanishes in the JWKB limit. For oblique incidence, the analogue of Eq.~\eqref{eq:13} is \begin{equation}\label{eq:14} E'=\gamma (E-\hbar M\Omega),\end{equation} where $\hbar M$ is the total angular momentum of the radiation along the axis of rotation. Thus $\omega '=\gamma (\omega -M\Omega)$, where $M=0,\pm 1,\pm 2,\dots$, for a scalar or a vector field, while $M\mp \frac{1}{2}=0,\pm 1,\pm 2,\dots$, for a Dirac field. In the JWKB approximation, Eq.~\eqref{eq:14} may be expressed as $E'=\gamma(E-\mathbf{J} \cdot \mathbf{\Omega})$; hence, $E'=\gamma (E-\mathbf{v}\cdot\mathbf{p})-\gamma \mathbf{S}\cdot\mathbf{\Omega}$, where $\mathbf{J}=\mathbf{r}\times \mathbf{p}+\mathbf{S}$ and $\mathbf{v}=\mathbf{\Omega}\times\mathbf{r}$. It is important to note that $\omega '$ vanishes for $\omega =M\Omega$, while $\omega '$ can be negative for $\omega <M\Omega$. 
The former circumstance poses a basic difficulty, while the latter is a consequence of the absolute character of accelerated motion~\cite{12}. It is useful to provide an intuitive explanation for the appearance of the spin-rotation term in Eq.~\eqref{eq:12}. In an incident positive (negative) helicity wave, the electric and magnetic fields rotate with frequency $\omega$ in the positive (negative) sense about the direction of propagation of the wave. The observer rotates about this direction with frequency $\Omega$; therefore, relative to the observer, the electric and magnetic fields of the incident wave rotate with frequency $\omega -\Omega\; (\omega +\Omega)$ in the positive (negative) helicity case. While the relative circular motion accounts for the subtraction (addition) of frequencies, the Lorentz factor in Eq.~\eqref{eq:12} takes care of time dilation. This factor is unity for the rotating observer at $r=0$, hence $\omega '=\omega \mp \Omega$ in this case; the fact that only the Lorentz factor distinguishes rotating observers at different radii in Eq.~\eqref{eq:12} follows intuitively from the circumstance that each such observer is locally equivalent to the one at $r=0$, since each is locally a center of rotation of frequency $\Omega$. The existence of spin-rotation coupling in Eq.~\eqref{eq:12} can be observationally demonstrated by various means including the GPS, where it accounts for the phenomenon of phase wrap-up. That is, for $\gamma \ll 1$ and $\Omega \ll\omega$, $\omega '\approx\omega \mp \Omega$ has been verified with $\omega /(2\pi)\sim 1\text{ GHz}$ and $\Omega /(2\pi)\sim 8\text{ Hz}$ \cite{13}. Further observational aspects of Eq.~\eqref{eq:12} are discussed in \cite{14}. The exact result $\omega '=\gamma (\omega -\Omega)$ for incident positive-helicity radiation has a fundamental consequence that must now be addressed. This relation implies that $\omega '=0$ for $\omega =\Omega$. 
The incident radiation stands completely still with respect to all observers that uniformly rotate with frequency $\omega$ about the direction of propagation of the wave. That by a mere rotation an observer can stand still with an electromagnetic wave is analogous to the pre-relativistic formula for the Doppler effect where an observer moving with speed $c$ along a beam of light would see an electromagnetic field that is spatially oscillatory but at rest. This paradoxical circumstance played a role in Einstein's path to relativity theory (see p. 53 of \cite{15}, which contains Einstein's autobiographical notes). The origin of this defect in Eq.~\eqref{eq:12} must be sought in Eq.~\eqref{eq:11}, namely the assumption that the field measured by the rotating observer is pointwise the same as that measured by the momentarily comoving inertial observer (``hypothesis of locality"); a brief critique of this notion of locality is contained in the next section. The other nonlocal assumption, involving the Fourier analysis of the measured field, is reasonable, since a number of periods of the wave must be received by the accelerated observer before $\omega '$ could be adequately measured. \section{Hypothesis of locality}\label{s:3} According to the standard theory of relativity, Lorentz invariance is extended to accelerated observers in Minkowski spacetime via the hypothesis of locality, namely, the assumption that an accelerated observer, at each instant along its worldline, is momentarily equivalent to an otherwise identical hypothetical comoving inertial observer. For time determination, this assumption reduces to the clock hypothesis. Thus an accelerated observer is pointwise inertial and this supposition provides operational significance for Einstein's principle of equivalence \cite{16}. 
Regarding the source of this important postulate of relativity theory, it must be noted that Lorentz introduced it as an approximation in his discussion of the Lorentz-Fitzgerald contraction of electrons in curvilinear motion (see section 183 of \cite{17}). Einstein mentioned it in his discussion of accelerated systems (see p. 60 of \cite{18}). Weyl likened it to the assumption of adiabaticity in thermodynamics (see pp. 176-177 of \cite{19}). The locality assumption originates from Newtonian mechanics, where the state of a particle is determined by its position and velocity. The accelerated observer shares the same state with the comoving inertial observer; hence, locality is exact and no new physical assumption is needed if all physical phenomena could be reduced to pointlike coincidences of classical particles and null rays. However, when wave phenomena are taken into consideration, the locality hypothesis would be approximately valid whenever $\lambda \ll \mathcal{L}$. Here $\lambda$ is the characteristic wavelength of the phenomena under observation and $\mathcal{L}$, the acceleration length, is the characteristic length scale for the variation of the state of the observer. In practice, deviations from locality are expected to be of order $\lambda /\mathcal{L}$ and are generally very small, since $\mathcal{L}$ is quite long; for instance, $c^2/g_\oplus \approx 1$ lyr and $c/\Omega_\oplus \approx 28$ AU for an observer in a laboratory fixed on the Earth. The consistency of these ideas can be illustrated by two examples of general interest. Imagine a classical charged particle of mass $m$ and charge $q$ that is subject to an external force $\mathbf{F}_{\operatorname{ext}}$. The accelerated charge radiates electromagnetic radiation with characteristic wavelength $\lambda \sim \mathcal{L}$. The hypothesis of locality is thus violated since $\lambda /\mathcal{L}\sim 1$. 
This means that the state of the charged particle cannot be given at each instant by its position and velocity alone. This is consistent with the equation of motion of the particle, which reduces to the Abraham-Lorentz equation \begin{equation}\label{eq:15} m\frac{d\mathbf{v}}{dt}-\frac{2}{3} \frac{q^2}{c^3} \frac{d^2\mathbf{v}}{dt^2}+\dots =\mathbf{F}_{\operatorname{ext}}\end{equation} in the nonrelativistic approximation. Consider next muon decay in a storage ring \cite{20}. This experiment has verified with good accuracy relativistic time dilation $\tau _\mu=\gamma\tau _\mu^0$, where $\tau _\mu^{0}$ is the lifetime of the muon at rest. To mimic the circular acceleration of a muon in a storage ring and take the quantum nature of this particle into account, one can suppose that the muon decays from a high-energy Landau level in a constant magnetic field. Based on the detailed calculation reported in \cite{21}, \begin{equation}\label{eq:16} \tau _\mu \approx \gamma \tau _\mu ^{ 0} \left[ 1+\frac{2}{3} \left( \frac{\lambda _C}{\mathcal{L}}\right)^2\right],\end{equation} where $\lambda_C$ is the Compton wavelength of the muon and $\mathcal{L} =c^2/a$, where $a\sim 10^{18}g_\oplus$ is the effective acceleration of the muon. The correction to the standard formula in Eq. \eqref{eq:16} is very small $(\sim 10^{-25})$, but nonzero. \section{Nonlocality}\label{s:4} To go beyond the hypothesis of locality, let us return to Eq. \eqref{eq:11} and consider its generalization. Let $\mathcal{F}_{(\alpha )(\beta )} (\tau )$ be the field that is actually measured by the accelerated observer. Here $\tau$ is measured by the background inertial observers using $d\tau =cdt/\gamma$. The most general linear relationship between $\mathcal{F}_{(\alpha )(\beta)} (\tau )$ and the field measured by the infinite sequence of comoving inertial observers, given by Eq. 
\eqref{eq:11}, that preserves causality is given by \cite{22} \begin{equation}\label{eq:17} \mathcal{F}_{(\alpha )(\beta)} (\tau )=F_{(\alpha )(\beta)} (\tau )+\int^\tau_{\tau_0} K_{(\alpha )(\beta)}^{\;\;\;\;\;\;\;\;\;(\gamma )(\delta)} (\tau ,\tau ')F_{(\gamma )(\delta )}(\tau ') d\tau '.\end{equation} Here $\tau_0$ is the instant at which the acceleration is turned on and the kernel $K$ is such that it vanishes in the absence of acceleration. The integral in Eq.~\eqref{eq:17} has the form of an average over the past worldline of the accelerated observer; moreover, it is expected to vanish in the JWKB limit $(\lambda /\mathcal{L}\to 0)$. It is a consequence of the Volterra-Tricomi theorem~\cite{23} that under reasonable physical conditions the relationship between $\mathcal{F}_{(\alpha )(\beta)}$ and $F_{(\alpha )(\beta)}$ is unique. How should the kernel be determined? This involves various complications~\cite{22}, but a key idea is that the kernel should be so chosen as to prevent the circumstance encountered in section~\ref{s:2}. That is, we introduce the fundamental postulate that a basic radiation field can never stand completely still with respect to an arbitrary observer. A detailed treatment of the nonlocal theory of accelerated systems is contained in~\cite{24} and the references cited therein. This theory is in agreement with available observational data; moreover, it forbids the existence of a fundamental scalar (or pseudoscalar) field. What are the implications of nonlocality for the photon helicity-rotation coupling in the thought experiment of section~\ref{s:2}? There are basically two aspects of the problem that are altered by nonlocality: (i) As determined by the rotating observer, for $\omega >\Omega$ the amplitude of the positive-helicity incident wave is enhanced, while the amplitude of the negative-helicity wave is diminished. 
(ii) For $\omega =\Omega$, the field is not static in the positive helicity case; instead, it varies like $t$ as in the case of resonance. It is important to verify these purely nonlocal effects experimentally. The task here is complicated by the fact that the behavior of rotating measuring devices must be known. An interesting discussion of such issues of principle is contained in \cite{25}. We therefore turn to a different approach based on the correspondence principle in nonrelativistic quantum mechanics. The study of electrons in rotational motion within the framework of quantum theory could shed light on the question of the correct classical theory of accelerated systems. In connection with (i), the cross section $\sigma$ for the photoionization of hydrogen atom has been studied with the electron in a circular state with respect to the incident radiation that would correspond to the motion of the observer in section~\ref{s:2}. A detailed investigation reveals that $\sigma_+>\sigma_-$, where $\sigma _+ (\sigma_-)$ is the cross section in the case that the electron rotates in the same (opposite) sense as the helicity of the incident radiation~\cite{26}. The situation in (ii) can be mimicked by the transition of an electron in a circular ``orbit" about a uniform magnetic field to the next energy state as a result of absorption of a photon of frequency $\Omega_c$ and definite helicity that is incident along the direction of the magnetic field. Here $\Omega_c$ is the electron cyclotron frequency. Let $P$ be the probability of transition to the next energy state. A detailed investigation reveals that in the correspondence regime, $P_+\propto t^2$, while $P_-=0$, corresponding to the positive and negative helicity cases, respectively~\cite{26}. It appears from these studies that the nonlocal theory is in better agreement with quantum theory than the standard theory of relativity that is based on the hypothesis of locality~\cite{26}. 
\section{Discussion}\label{s:5} Mathisson's spin-gravity Hamiltonian leads, via the gravitational Larmor theorem, to the spin-rotation Hamiltonian. For the photon, helicity-rotation coupling has the consequence that a rotating observer can in principle be comoving with an electromagnetic wave such that the wave is oscillatory in space but stands completely still with respect to the observer. The source of this difficulty is the hypothesis of locality that is the basis for the extension of Lorentz invariance to accelerated observers and the subsequent transition to general relativity. The nonlocal theory of accelerated systems is briefly described; in this theory, instead of the locality assumption, where a curved worldline is in effect replaced at each instant by the straight tangent worldline, one considers in addition an average over the past worldline of the observer. The consequences of this nonlocal special relativity are briefly described. The nonlocal theory is in agreement with available observational data. It remains to extend this theory to a nonlocal theory of gravitation. \section*{Acknowledgement} I am grateful to Andrzej Trautman for his kind invitation to present this work and warm hospitality at the Mathisson Conference (17-20 October 2007, Warsaw, Poland).
\section{Introduction} Local times for semimartingales have been widely studied; see for example the monograph \cite{RY} and the references therein. On the other hand, local times of Gaussian processes have also been the object of a rich probabilistic literature; see for example the recent paper \cite{MR} by Marcus and Rosen. A general criterion for the existence of a local time for a wide class of anticipating processes, which are neither semimartingales nor Gaussian processes, was established by Imkeller and Nualart in \cite{NuIm}. The proof of this result combines the techniques of Malliavin calculus with the criterion given by Geman and Horowitz in \cite{GH}. This criterion was applied in \cite{NuIm} to the Brownian motion with an anticipating drift, and to indefinite Skorohod integral processes. The aim of this paper is to establish the existence of occupation densities for two classes of stochastic processes related to the fractional Brownian motion, using the approach introduced in \cite{NuIm}. First we consider a Gaussian process $B=\{B_{t}, t\in [0,1]\}$ with an absolutely continuous random drift \begin{equation*} X_{t}= B_{t} +\int_{0}^{t} u_{s}ds, \end{equation*} where $u$ is a stochastic process measurable with respect to the $\sigma$-field generated by $B$. We assume that the variance of the increment of the Gaussian process $B$ on an interval $[s,t]$ behaves as $|t-s|^{2\rho}$, for some $\rho\in (0,1)$. This includes, for instance, the bifractional Brownian motion with parameters $H,K\in (0,1)$. Under reasonable regularity hypotheses imposed on the process $u$ we prove the existence of a square integrable occupation density with respect to the Lebesgue measure for the process $X$. Our second example is the indefinite divergence (Skorohod) integral $X=\{X_{t}, t\in [0,1]\}$ with respect to the fractional Brownian motion with Hurst parameter $H\in (\frac 12, 1)$, that is \begin{equation*} X_{t}= \int_{0}^{t} u_{s} \delta B^H_{s}. 
\end{equation*} We provide integrability conditions on the integrand $u$ and its iterated derivatives in the sense of Malliavin calculus in order to deduce the existence of a square integrable occupation density for $X$. Our paper is organized as follows. Section 2 contains some preliminaries on the Malliavin calculus with respect to Gaussian processes. In Section 3 we prove the existence of occupation densities for perturbed Gaussian processes and in Section 4 we treat the case of indefinite divergence integral processes with respect to the fractional Brownian motion. \vskip0.5cm \section{Preliminaries} Let $\{B_{t} , t\in [0,1]\}$ be a centered Gaussian process with covariance function \begin{equation*} R(t,s):=E( B_{t}B_{s}), \end{equation*} defined in a complete probability space $(\Omega, \mathcal{F},P)$. By ${\mathcal{H}}$ we denote the canonical Hilbert space associated to $B$, defined as the closure of the linear space generated by the indicator functions $\{ \mathbf{1}_{[0,t]}, t\in [0,1]\} $ with respect to the inner product \begin{equation*} \langle \mathbf{1}_{[0,t]} , \mathbf{1}_{[0,s] } \rangle _{{\mathcal{H}}} =R(t,s), \hskip0.5cm s,t\in [0,1]. \end{equation*} The mapping $\mathbf{1}_{[0,t]} \to B_{t}$ can be extended to an isometry between ${\mathcal{H}}$ and the first Gaussian chaos generated by $B$. We denote by $B(\varphi) $ the image of an element $\varphi \in {\mathcal{H}}$ under this isometry. We will first introduce some elements of the Malliavin calculus associated with $B$. We refer to \cite{N06} for a detailed account of these notions. For a smooth random variable $F=f\left( B(\varphi _{1}), \ldots , B(\varphi_{n})\right) $, with $\varphi_{i} \in {\mathcal{H}}$ and $f\in C_{b}^{\infty}(\mathbb{R}^{n})$ ($f$ and all its partial derivatives are bounded) the derivative of $F$ with respect to $B$ is defined by \begin{equation*} D F =\sum_{j=1}^{n}\frac{\partial f}{\partial x_{j}}(B(\varphi_{1}),\dots,B(\varphi_{n}))\varphi_{j}.
\end{equation*} For any integer $k\ge 1$ and any real number $p\ge 1$ we denote by $\mathbb{D}^{k,p}$ the Sobolev space defined as the closure of the space of smooth random variables with respect to the norm \begin{equation*} \Vert F\Vert_{k,p}^{p}=E(|F|^{p})+\sum_{j=1}^{k}\Vert D^{j}F\Vert_{L^{p}(\Omega;{\mathcal{H}}^{\otimes j})}^{p}. \end{equation*} Similarly, for a given Hilbert space $V$ we can define Sobolev spaces of $V$-valued random variables $\mathbb{D}^{k,p}(V)$. Consider the adjoint $\delta $ of $D $ in $L^2$. Its domain is the class of elements $u\in L^{2}(\Omega;{\mathcal{H}})$ such that \begin{equation*} \left| E(\langle D F,u\rangle_{{\mathcal{H}}})\right| \leq C\Vert F\Vert_{2}, \end{equation*} for any $F\in \mathbb{D} ^{1,2}$, and $\delta \left( u\right) $ is the element of $L^{2}(\Omega)$ given by \begin{equation*} E(\delta (u)F)=E(\langle D F,u\rangle_{{\mathcal{H}}}) \end{equation*} for any $F\in \mathbb{D}^{1,2}$. We will make use of the notation $\delta (u)=\int_{0}^{1}u_{s}\delta B_{s}$. It is well-known that $\mathbb{D}^{1,2}({\mathcal{H}})$ is included in the domain of $\delta $. Note that $E(\delta ( u ) )=0$ and the variance of $\delta(u)$ is given by \begin{equation} E(\delta (u)^{2})=E(\Vert u\Vert_{{\mathcal{H}}}^{2})+E(\langle D u,(D u)^{\ast}\rangle_{{\mathcal{H}}\otimes{\mathcal{H}}} ), \label{squaremean} \end{equation} if $u\in \mathbb{D}^{1,2}({\mathcal{H}})$, where $(D u)^{\ast}$ is the adjoint of $D u$ in the Hilbert space ${\mathcal{H}}\otimes{\mathcal{H}}$. We have Meyer's inequality \begin{equation} E(|\delta (u)|^{p})\leq C_{p}\left( E(\Vert u\Vert_{{\mathcal{H}}}^{p})+E(\Vert D u\Vert_{{\mathcal{H}}\otimes{\mathcal{H}}}^{p})\right), \label{meyer} \end{equation} for any $p>1$. We will make use of the property \begin{equation} F\delta (u)=\delta (Fu)+\langle D F,u\rangle_{{\mathcal{H}}}, \label{p1} \end{equation} if $F\in \mathbb{D}^{1,2}$ and $u\in \mathrm{Dom}(\delta )$ are such that $Fu\in \mathrm{Dom}(\delta )$.
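To illustrate how the factorization property (\ref{p1}) works in the simplest case (our addition, not part of the original text), take $F$ in the first chaos and a deterministic integrand:

```latex
% Illustration (ours): property (p1) in the first chaos.
% Take F = B(\varphi) and a deterministic u = h \in \mathcal{H}, so that
% DF = \varphi and \delta(h) = B(h).  Property (p1) then reads
\[
B(\varphi)\,B(h) \;=\; \delta\bigl(B(\varphi)\,h\bigr)
  + \langle \varphi ,h\rangle _{\mathcal{H}} .
\]
% Taking expectations and using E(\delta(\cdot)) = 0 recovers the isometry
% E\bigl(B(\varphi)B(h)\bigr) = \langle \varphi ,h\rangle _{\mathcal{H}}.
```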
We also need the commutativity relationship between $D $ and $\delta $ \begin{equation} \label{comm} D \delta (u)= u + \int_{0}^{1} D u_{s}\delta B_{s}, \end{equation} if $u\in \mathbb{D}^{1,2}({\mathcal{H}})$ and the process $\{D_{s}u, s\in [0,1]\}$ belongs to the domain of $\delta $. Throughout this paper we will assume that the centered Gaussian process $B=\{B_{t},t\in \lbrack 0,1]\}$ satisfies \begin{equation} C_{1}(t-s)^{2\rho }\leq E(|B_{t}-B_{s}|^{2})\leq C_{2}(t-s)^{2\rho }, \label{cond-cov} \end{equation} for some $\rho\in (0,1)$, with $C_{1},C_{2}$ two positive constants not depending on $t,s$. It follows from the Kolmogorov criterion that $B$ admits a H\"{o}lder continuous version of order $\delta $ for any $\delta <\rho $. Throughout this paper we will denote by $C$ a generic constant that may be different from line to line. \begin{example} The bifractional Brownian motion (see, for instance, \cite{HV}), denoted by $B^{H,K}$, is defined as a centered Gaussian process starting from zero with covariance \begin{equation} \label{covbiFBM} R(t,s)= \frac{1}{2^{K}}\left( \left( t^{2H} + s^{2H}\right) ^{K} -\vert t-s\vert ^{2HK}\right), \end{equation} where $H\in (0,1) $ and $K\in (0,1]$. When $K=1$, we recover the standard fractional Brownian motion, denoted by $B^H$. It has been proved in \cite{HV} that for all $s\leq t$, \begin{equation} \label{qh} 2^{-K}\vert t-s\vert ^{2HK} \leq E\left| B^{H,K}_{t}- B^{H,K}_{s}\right| ^{2}\leq 2^{1-K}\vert t-s\vert ^{2HK}, \end{equation} so relation (\ref{cond-cov}) holds with $\rho=HK$. A stochastic analysis for this process can be found in \cite{KRT}, and a study of its occupation densities has been done in \cite{ET}, \cite{XT}.
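As a quick numerical sanity check (ours, not part of the original text), one can evaluate the bifractional covariance (\ref{covbiFBM}) on a grid and confirm the two-sided bound (\ref{qh}); the script below assumes nothing beyond the displayed formulas:

```python
import numpy as np

# Sanity check (illustration, not from the paper): verify the two-sided bound (qh)
#   2^{-K} |t-s|^{2HK} <= E|B^{H,K}_t - B^{H,K}_s|^2 <= 2^{1-K} |t-s|^{2HK}
# directly from the bifractional covariance (covbiFBM).

def R(t, s, H, K):
    """Covariance R(t,s) of the bifractional Brownian motion B^{H,K}."""
    return ((t**(2*H) + s**(2*H))**K - abs(t - s)**(2*H*K)) / 2**K

def increment_var(t, s, H, K):
    """E|B_t - B_s|^2 = R(t,t) + R(s,s) - 2 R(t,s)."""
    return R(t, t, H, K) + R(s, s, H, K) - 2*R(t, s, H, K)

def bound_holds(H, K, grid):
    for t in grid:
        for s in grid:
            if s >= t:
                continue
            v = increment_var(t, s, H, K)
            d = (t - s)**(2*H*K)
            if not (2**(-K)*d - 1e-12 <= v <= 2**(1 - K)*d + 1e-12):
                return False
    return True

grid = np.linspace(0.05, 1.0, 20)
for H, K in [(0.3, 0.5), (0.7, 0.5), (0.5, 0.9), (0.8, 1.0)]:
    assert bound_holds(H, K, grid), (H, K)
print("bound (qh) verified on the grid")
```

For $K=1$ the increment variance collapses to $|t-s|^{2H}$ exactly, so the upper bound is attained.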
\end{example} For a measurable function $x: [0,1]\to \mathbb{R}$ we define the occupation measure \begin{equation*} \mu (x)(C) =\int_0^1 \mathbf{1}_{C}(x_{s})ds, \end{equation*} where $C $ is a Borel subset of $\mathbb{R}$, and we will say that $x$ has an occupation density with respect to the Lebesgue measure $\lambda $ if the measure $\mu (x)$ is absolutely continuous with respect to $\lambda$. The occupation density of the function $x$ is then the derivative $\frac{d\mu (x)}{d\lambda }$. For a continuous process $\{X_{t}, t\in [0,1]\}$ we will say that $X$ has an occupation density on $[0,1]$ if for almost all $\omega \in \Omega$, $X(\omega)$ has an occupation density on $[0,1]$. We will use the following criterion for the existence of occupation densities (see \cite{NuIm}). Set $T=\{(s,t)\in \lbrack 0,1]^{2}:s<t\}$. \begin{theorem} \label{th1} Let $\{X_{t},t\in \lbrack 0,1]\}$ be a continuous stochastic process such that $X_{t}\in \mathbb{D}^{2,2}$ for every $t\in \lbrack 0,1]$. Suppose that there exists a sequence of random variables $\{F_{n},n\geq 1\}$ with $\bigcup_{n}\{F_{n}\not=0\}=\Omega $ a.s. and $F_{n}\in \mathbb{D}^{1,1} $ for every $n\geq 1$, two sequences $\alpha _{n}>0,\delta _{n}>0$, a measurable bounded function $\gamma :[0,1]\rightarrow \mathbb{R}$, and a constant $\theta >0$, such that: \begin{description} \item[a)] For every $n\geq 1$, $|t-s|\leq \delta _{n}$, and on $\{F_{n}\not=0\}$ we have \begin{equation} \langle \gamma D(X_{t}-X_{s}), \mathbf{1}_{[s,t]}\rangle _{{\mathcal{H}}}>\alpha _{n}|t-s|^{\theta },\qquad \mathrm{a.s.} \label{a} \end{equation} \item[b)] For every $n\geq 1$ \begin{equation} \int_{T}E(\langle \gamma DF_{n},\mathbf{1}_{[s,t]}\rangle _{\mathcal{H}})|t-s|^{-\theta }dtds<\infty.
\label{b} \end{equation} \item[c)] For every $n\geq 1$ \begin{equation} \int_{T}E\left( \left| F_{n}\left\langle \gamma ^{\otimes 2}DD(X_{t}-X_{s}),\mathbf{1}_{[s,t]}^{\otimes 2}\right\rangle _{\mathcal{H}^{\otimes 2}}\right| \right) |t-s|^{-2\theta }dsdt<\infty . \label{c} \end{equation} \end{description} Then the process $\{X_{t},t\in \lbrack 0,1]\}$ admits a square integrable occupation density on $[0,1]$. \end{theorem} \begin{remark} The original result has been stated in \cite{NuIm} with $\theta =1$ in the case of the standard Brownian motion. On the other hand, by applying Proposition 2.3 and Theorem 2.1 in \cite{NuIm} it follows easily that this criterion can be stated for any $\theta >0$. \end{remark} \setcounter{equation}0 \vskip0.5cm \section{Occupation density for Gaussian processes with random drift} We study in this section the existence of the occupation density for Gaussian processes perturbed by an absolutely continuous random drift. The main result of this section is the following. \begin{theorem} Let $\{B_{t},t\in \lbrack 0,1]\}$ be a Gaussian process satisfying (\ref{cond-cov}). Consider the process $\{X_{t},t\in \lbrack 0,1]\}$ given by \begin{equation*} X_{t}=B_{t}+\int_{0}^{t}u_{s}ds, \end{equation*} and suppose that the process $u$ satisfies the following conditions: \begin{enumerate} \item $u \in \mathbb{D}^{2,2}(L^{2}([0,1]))$. \item $E\left( \left( \int_{0}^{1}\left\| D^{2}u_{t}\right\| _{\mathcal{H}\otimes \mathcal{H}}^{p}dt\right) ^{q/p}\right) <\infty $, for some $q>1$, $p>\frac{1}{1-\rho }$. \end{enumerate} Then, the process $X$ has a square integrable occupation density on the interval $[0,1]$. \end{theorem} \vskip10pt\noindent \textit{Proof:}\hskip10pt We are going to apply Theorem \ref{th1}. Notice first that $X_{t}\in \mathbb{D}^{2,2}$ for all $t\in \lbrack 0,1]$.
For any $0\leq s<t\leq 1$, using (\ref{cond-cov}) we have \begin{eqnarray*} \left\langle D(X_{t}-X_{s}),\mathbf{1}_{[s,t]}\right\rangle _{\mathcal{H}} &=&\left\langle \mathbf{1}_{[s,t]},\mathbf{1}_{[s,t]}\right\rangle _{\mathcal{H}}+\left\langle \int_{s}^{t}Du_{r}dr,\mathbf{1}_{[s,t]}\right\rangle _{\mathcal{H}} \\ &\geq &C_{1}(t-s)^{2\rho }-\left| \left\langle \int_{s}^{t}Du_{r}dr,\mathbf{1}_{[s,t]}\right\rangle _{\mathcal{H}}\right| \\ &\geq &C_{1}(t-s)^{2\rho }-\sqrt{C_{2}}(t-s)^{\rho }\int_{s}^{t}\left\| Du_{r}\right\| _{\mathcal{H}}dr. \end{eqnarray*} By H\"{o}lder's inequality, if $\frac{1}{p}+\frac{1}{q}=1$, we obtain \begin{equation*} \int_{s}^{t}\left\| Du_{r}\right\| _{\mathcal{H}}dr\leq (t-s)^{\frac{1}{q}}\left( \int_{0}^{1}\left\| Du_{r}\right\| _{\mathcal{H}}^{p}dr\right) ^{\frac{1}{p}}. \end{equation*} Fix a natural number $n\geq 2$, and choose a function $\varphi _{n}(x)$, which is infinitely differentiable with compact support, such that $\varphi _{n}(x)=1$ if $|x|\leq n-1$, and $\varphi _{n}(x)=0$ if $|x|\geq n$. Set $F_{n}=\varphi _{n}\left( \left( \int_{0}^{1}\left\| Du_{t}\right\| _{\mathcal{H}}^{p}dt\right) ^{\frac{1}{p}}\right) $. The random variable $F_{n}$ belongs to $\mathbb{D}^{1,q}$.
In fact, it suffices to write $F_{n}=\varphi _{n}(G)$, where \begin{equation*} G=\sup_{\substack{ h\in L^{q}([0,1];\mathcal{H}) \\ \left\| h\right\| \leq 1 }}\int_{0}^{1}\left\langle Du_{r},h_{r}\right\rangle _{\mathcal{H}}dr, \end{equation*} which implies \begin{eqnarray*} \left\| DF_{n}\right\| _{\mathcal{H}} &=&\left\| \varphi _{n}^{\prime }(G)DG\right\| _{\mathcal{H}}\leq \left\| \varphi _{n}^{\prime }\right\| _{\infty }\sup_{\substack{ h\in L^{q}([0,1];\mathcal{H}) \\ \left\| h\right\| \leq 1}}\left\| \int_{0}^{1}\left\langle D^{2}u_{r},h_{r}\right\rangle _{\mathcal{H}^{\otimes 2}}dr\right\| _{\mathcal{H}} \\ &\leq &\left\| \varphi _{n}^{\prime }\right\| _{\infty }\left( \int_{0}^{1}\left\| D^{2}u_{r}\right\| _{\mathcal{H}^{\otimes 2}}^{p}dr\right) ^{\frac{1}{p}}\in L^{q}(\Omega ). \end{eqnarray*} Then, on the set $\{F_{n}\neq 0\}$, $\left( \int_{0}^{1}\left\| Du_{t}\right\| _{\mathcal{H}}^{p}dt\right) ^{\frac{1}{p}}\leq n$, and we get \begin{eqnarray*} \left\langle D(X_{t}-X_{s}),\mathbf{1}_{[s,t]}\right\rangle _{\mathcal{H}} &\geq &C_{1}(t-s)^{2\rho }-n\sqrt{C_{2}}(t-s)^{\rho +\frac{1}{q}} \\ &=&(t-s)^{2\rho }\left[ C_{1}-n\sqrt{C_{2}}(t-s)^{\frac{1}{q}-\rho }\right] , \end{eqnarray*} and property a) of Theorem \ref{th1} holds with a suitable choice of $\alpha _{n}$ and $\delta _{n}$, because $\frac{1}{q}-\rho >0$, with $\theta =2\rho $ and $\gamma =1$.
Finally, conditions b) and c) can also be checked: \begin{equation*} \int_{T}\frac{E\left( \left| \left\langle DF_{n},\mathbf{1}_{[s,t]}\right\rangle _{\mathcal{H}}\right| \right) }{|t-s|^{2\rho }}dsdt\leq \sqrt{C_{2}}\int_{T}\frac{E\left( \left\| DF_{n}\right\| _{\mathcal{H}}\right) }{|t-s|^{\rho }}dsdt<\infty , \end{equation*} and \begin{eqnarray*} &&\int_{T}\frac{E\left( \left| F_{n}\left\langle D^{2}(X_{t}-X_{s}),\mathbf{1}_{[s,t]}^{\otimes 2}\right\rangle _{\mathcal{H}^{\otimes 2}}\right| \right) }{|t-s|^{4\rho }}dsdt \\ &\leq &\left\| F_{n}\right\| _{\infty }C_{2}\int_{T}\frac{E\left( \left\| D^{2}(X_{t}-X_{s})\right\| _{\mathcal{H}^{\otimes 2}}\right) }{|t-s|^{2\rho }}dsdt<\infty , \end{eqnarray*} because \begin{eqnarray*} E\left( \left\| D^{2}(X_{t}-X_{s})\right\| _{\mathcal{H}^{\otimes 2}}\right) &=&E\left( \left\| \int_{s}^{t}D^{2}u_{r}dr\right\| _{\mathcal{H}^{\otimes 2}}\right) \leq \int_{s}^{t}E\left( \left\| D^{2}u_{r}\right\| _{\mathcal{H}^{\otimes 2}}\right) dr \\ &\leq &(t-s)^{\frac{1}{q}}E\left[ \left( \int_{0}^{1}\left\| D^{2}u_{r}\right\| _{\mathcal{H}^{\otimes 2}}^{p}dr\right) ^{\frac{1}{p}}\right] , \end{eqnarray*} and $\frac{1}{q}-2\rho =1-\frac{1}{p}-2\rho >-1$, because $p>\frac{1}{2(1-\rho )}$. \hfill \vrule width.25cm height.25cm depth0cm\smallskip \vskip0.5cm \begin{remark} These conditions are intrinsic and do not depend on the structure of the Hilbert space $\mathcal{H}$. In the case of the Brownian motion, this result is slightly weaker than Theorem 3.1 in \cite{NuIm}, because we require a little more integrability. \end{remark} \vskip0.5cm \setcounter{equation}0 \section{Occupation density for Skorohod integrals with respect to the fractional Brownian motion} We study here the existence of occupation densities for indefinite divergence integrals with respect to the fractional Brownian motion.
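Before entering the proof, a small numerical sketch (ours; the identity itself is recalled below as (\ref{H1})) confirms that the singular kernel $\phi(\alpha,\beta)=H(2H-1)|\alpha-\beta|^{2H-2}$ integrates to the fractional Brownian covariance; the inner integral is evaluated in closed form and the outer one by the midpoint rule:

```python
import numpy as np

# Numerical check (illustration, not from the paper) that for H > 1/2
#   int_0^t int_0^s H(2H-1)|a-b|^{2H-2} db da = (t^{2H} + s^{2H} - |t-s|^{2H}) / 2,
# i.e. the kernel phi reproduces the fBm covariance (t >= s here).

def kernel_double_integral(t, s, H, n=200_000):
    c = 2*H - 1
    a = (np.arange(n) + 0.5) * t / n          # midpoints in the outer variable
    # (2H-1) * int_0^s |a-b|^{2H-2} db in closed form, for a <= s and a > s:
    inner = np.where(a <= s,
                     a**c + np.abs(s - a)**c,
                     a**c - np.abs(a - s)**c)
    return H * inner.sum() * t / n

def fbm_cov(t, s, H):
    return 0.5 * (t**(2*H) + s**(2*H) - abs(t - s)**(2*H))

H, t, s = 0.7, 0.9, 0.4
lhs = kernel_double_integral(t, s, H)
rhs = fbm_cov(t, s, H)
assert abs(lhs - rhs) < 1e-4, (lhs, rhs)
print(f"kernel integral {lhs:.6f} matches covariance {rhs:.6f}")
```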
Consider a process of the form $X_{t}=\int_{0}^{t}u_{s}\delta B_{s}^{H}$, $t\in \lbrack 0,1]$, where $B^{H}$ is a fractional Brownian motion with Hurst parameter $H\in \left( \frac{1}{2},1\right) $, and $u$ is an element of $\mathbb{D}^{1,2}(L^{2}([0,1]))\subset \mathrm{Dom}\left( \delta \right) $. We know that the covariance of the fractional Brownian motion can be written as \begin{equation} E(B_{t}^{H}B_{s}^{H})=\int_{0}^{t}\int_{0}^{s}\phi (\alpha ,\beta )d\alpha d\beta , \label{H1} \end{equation} where $\phi (\alpha ,\beta )=H(2H-1)|\alpha -\beta |^{2H-2}$. For any $0\leq s<t\leq 1$ and $\alpha \in \lbrack 0,1]$ we set \begin{equation} f_{s,t}(\alpha ):=\int_{s}^{t}\phi (\alpha ,\beta )d\beta . \label{f} \end{equation} We also know (see e.g. \cite{N06}) that the canonical Hilbert space associated to $B^{H}$ satisfies: \begin{equation} \label{inclu} L^{2}\left( [0,1]\right) \subset L^{\frac{1}{H}}\left( [0,1]\right)\subset \mathcal{H}. \end{equation} The following is the main result of this section. \begin{theorem} Consider the stochastic process $X_{t}=\int_{0}^{t}u_{s}\delta B_{s}^{H}$, where the integrand $u$ satisfies the following conditions for some $q>\frac{2H}{1-H}$ and $p>1$ such that $\frac 1p +2 < H(p+1)$: \begin{description} \item[I1)] {\ }$u\in \mathbb{D}^{3,2}(L^{2}([0,1]))$. \item[I2)] {\ }$\int_{0}^{1} \int_0^1 [ E( | D_{t}u_s |^p) +E( \| \left| D_{t}Du_s\right| \|_{\mathcal{H}} ^p) +E( \| \left| D_{t}DDu_s\right| \|_{\mathcal{H}\otimes \mathcal{H}} ^p) ] dsdt <\infty $. \item[I3)] $\int_{0}^{1}E\left( |u_{t}|^{-\frac{p}{p-1}(q+1)}\right) dt<\infty $. \end{description} Then the process $\{X_{t},t\in \lbrack 0,1]\}$ admits a square integrable occupation density on $[0,1]$. \end{theorem} \vskip10pt\noindent \textit{Proof:}\hskip10pt We will use the criterion given in \cite{NuIm} and recalled in Theorem \ref{th1}. Condition I1) implies that $X_{t}\in \mathbb{D}^{2,2}$ for all $t\in \lbrack 0,1]$.
On the other hand, from Theorem 7.8 in \cite{KRT} (or also by a slight modification of Theorem 5 in \cite{AN}) we obtain the continuity of the paths of the process $X$. Note that from Lemma 2.2 in \cite{NuIm}, combined with hypothesis I3), we obtain the existence of a function $\gamma :[0,1]\rightarrow \{-1,1\} $ such that $\gamma _{t}u_{t}=|u_{t}|$ for almost all $t$ and $\omega $. We are going to show conditions a), b) and c) of Theorem \ref{th1}. \vskip 0.3cm \noindent \textit{Proof of condition a): } Fix $0\leq s<t\leq 1$. From (\ref{comm}) we obtain \begin{equation*} D(X_{t}-X_{s})=u\mathbf{1}_{[s,t]}+\int_{s}^{t}Du_{r}\delta B_{r}^{H}, \end{equation*} and we can write \begin{equation} \langle \gamma D(X_{t}-X_{s}), \mathbf{1}_{[s,t]}\rangle _{\mathcal{H}}=\langle |u|\mathbf{1}_{[s,t]},\mathbf{1}_{[s,t]}\rangle _{\mathcal{H}}+\langle \gamma \int_{s}^{t}Du_{r}\delta B_{r}^{H},\mathbf{1}_{[s,t]}\rangle _{\mathcal{H}}. \label{z1} \end{equation} We first study the term \begin{equation*} \langle |u|\mathbf{1}_{[s,t]},\mathbf{1}_{[s,t]}\rangle _{\mathcal{H}}=\int_{s}^{t}\int_{s}^{t}|u_{\alpha }|\phi (\alpha ,\beta )d\alpha d\beta =\int_{s}^{t}|u_{\alpha }|f_{s,t}(\alpha )d\alpha . \end{equation*} For any $q>1$ we have \begin{eqnarray*} E(|B_{t}^{H}-B_{s}^{H}|^{2}) &=&\int_{s}^{t}f_{s,t}(\alpha )d\alpha \\ &=&\int_{s}^{t}\left( |u_{\alpha }|f_{s,t}(\alpha )\right) ^{\frac{q}{q+1}}\left( |u_{\alpha }|f_{s,t}(\alpha )\right) ^{-\frac{q}{q+1}}f_{s,t}(\alpha )d\alpha , \end{eqnarray*} and using H\"{o}lder's inequality with exponents $\frac{q+1}{q}$ and $q+1$, we obtain \begin{equation*} E(|B_{t}^{H}-B_{s}^{H}|^{2})\leq \left( \int_{s}^{t}|u_{\alpha }|f_{s,t}(\alpha )d\alpha \right) ^{\frac{q}{q+1}}\left( \int_{s}^{t}|u_{\alpha }|^{-q}f_{s,t}(\alpha )d\alpha \right) ^{\frac{1}{q+1}}.
\end{equation*} Hence, using that \begin{equation*} f_{s,t}(\alpha )\leq f_{0,1}(\alpha )=H(2H-1)\int_{0}^{1}|\alpha -\beta |^{2H-2}d\beta =H\left( \alpha ^{2H-1}+(1-\alpha )^{2H-1}\right) \leq H, \end{equation*} we get \begin{equation} \int_{s}^{t}|u_{\alpha }|f_{s,t}(\alpha )d\alpha \geq C |t-s|^{\frac{2H(q+1)}{q}}Z_{q}^{-\frac{1}{q}}, \label{z2} \end{equation} where $Z_{q}=\int_{0}^{1}|u_{\alpha }|^{-q}d\alpha $. On the other hand, for the second summand on the right-hand side of (\ref{z1}) we can write, using H\"{o}lder's inequality, \begin{eqnarray} \left| \left\langle \gamma \int_{s}^{t}Du_{r}\delta B_{r}^{H},\mathbf{1}_{[s,t]}\right\rangle _{\mathcal{H}}\right| &\leq &\int_{0}^{1}\left| \int_{s}^{t}D_{\alpha }u_{r}\delta B_{r}^{H}\right| f_{s,t}(\alpha )d\alpha \notag \\ &\leq &\left( \int_{0}^{1}f_{s,t}(\alpha )^{\frac{p}{p-1}}d\alpha \right) ^{\frac{p-1}{p}} \notag \\ &&\times \left( \int_{0}^{1}\left| \int_{s}^{t}D_{\alpha }u_{r}\delta B_{r}^{H}\right| ^{p}d\alpha \right) ^{\frac{1}{p}}. \label{bb2} \end{eqnarray} We can write \begin{eqnarray} \left( \int_{0}^{1}f_{s,t}(\alpha )^{\frac{p}{p-1}}d\alpha \right) ^{\frac{p-1}{p}} &=&c_{H}\left\| \int_{s}^{t}|\cdot -\beta |^{2H-2}d\beta \right\| _{L^{\frac{p}{p-1}}([0,1])} \notag \\ &\leq &c_{H}\left\| \mathbf{1}_{[s,t]}\ast |\cdot |^{2H-2}\mathbf{1}_{[-1,1]}\right\| _{L^{\frac{p}{p-1}}(\mathbb{R})}, \label{bb3} \end{eqnarray} where $c_{H}=H(2H-1)$. Young's inequality with exponents $a$ and $b$ in $(1,\infty )$ such that $\frac{1}{a}+\frac{1}{b}=2-\frac{1}{p}$ yields \begin{equation} \left\| \mathbf{1}_{[s,t]}\ast |\cdot |^{2H-2}\mathbf{1}_{[-1,1]}\right\| _{L^{\frac{p}{p-1}}(\mathbb{R})}\leq \left\| \mathbf{1}_{[s,t]}\right\| _{L^{a}(\mathbb{R})}\left\| |\cdot |^{2H-2}\mathbf{1}_{[-1,1]}\right\| _{L^{b}(\mathbb{R})}.
\label{bb4} \end{equation} Choosing $b<\frac{1}{2-2H}$ and letting $\eta =\frac{1}{a}<2H-\frac{1}{p}$, we obtain from (\ref{bb2}), (\ref{bb3}), and (\ref{bb4}) \begin{equation*} \left| \left\langle \gamma \int_{s}^{t}Du_{r}\delta B_{r}^{H},\mathbf{1}_{[s,t]}\right\rangle _{\mathcal{H}}\right| \leq C|t-s|^{\eta }\left( \int_{0}^{1}\left| \int_{s}^{t}D_{\alpha }u_{r}\delta B_{r}\right| ^{p}d\alpha \right) ^{\frac{1}{p}}. \end{equation*} Now we apply the Garsia-Rodemich-Rumsey lemma (see \cite{GRR}) with $\Phi (x)=x^{p}$ and $p(x)=x^{\frac{m+2}{p}}$ to the continuous function $s\mapsto \int_{0}^{s}D_{\alpha }u_{r}\delta B_{r}$ (use again Theorem 5 in \cite{AN}), and we get \begin{equation} \left| \left\langle \int_{s}^{t}Du_{r}\delta B_{r},\gamma \mathbf{1}_{[s,t]}\right\rangle _{\mathcal{H}}\right| \leq C|t-s|^{\eta +\frac{m}{p}}Y_{m,p}^{\frac{1}{p}}, \label{bb5} \end{equation} where \begin{equation*} Y_{m,p}=\int_{0}^{1}\int_{0}^{1}\int_{0}^{1}\frac{\left| \int_{x}^{y}D_{\alpha }u_{r}\delta B_{r}\right| ^{p}}{|x-y|^{m+2}}dxdyd\alpha . \end{equation*} Substituting (\ref{z2}) and (\ref{bb5}) into (\ref{z1}) yields \begin{eqnarray*} \langle \gamma D(X_{t}-X_{s}),1_{[s,t]}\rangle _{\mathcal{H}} &\geq &|t-s|^{\frac{2H(q+1)}{q}}Z_{q}^{-\frac{1}{q}}-C|t-s|^{\eta +\frac{m}{p}}Y_{m,p}^{\frac{1}{p}} \\ &=&|t-s|^{\frac{2H(q+1)}{q}}\left( Z_{q}^{-\frac{1}{q}}-C|t-s|^{\delta }Y_{m,p}^{\frac{1}{p}}\right) , \end{eqnarray*} where $\delta =\eta +\frac{m}{p}-2H-\frac{2H}{q}$. With a suitable choice of $\eta $ the exponent $\delta $ is positive, provided that $m-\frac{1}{p}-\frac{2H}{q}>0$, because $\eta <2H-\frac{1}{p}$. Taking into account that $\frac{2H}{q}<1-H$, it suffices that \begin{equation} m>\frac{1}{p}+1-H. \label{c1} \end{equation} We construct now the sequence $\left\{ F_{n},n\geq 1\right\} $.
Fix a natural number $n\geq 2$, and choose a function $\varphi _{n}(x)$, which is infinitely differentiable with compact support, such that $\varphi _{n}(x)=1$ if $|x|\leq n-1$, and $\varphi _{n}(x)=0$ if $|x|\geq n$. Set $F_{n}=\varphi _{n}\left( G\right) $, where $G=Z_{q}+Y_{m,p}$. Then clearly the sequences $\alpha _{n}$ and $\delta _{n}$ required in Theorem \ref{th1} can be constructed on the set $\left\{ F_{n}\not=0\right\} $, with $\theta =2H+\frac{2H}{q}$. It only remains to show that the random variables $F_{n}$ are in the space $\mathbb{D}^{1,1}$. For this we have to show that the random variables $\left\| DZ_{q}\right\| _{\mathcal{H}}$ and $\left\| DY_{m,p}\right\| _{\mathcal{H}}$ are integrable on the set $\{G\leq n\}$. First notice that, as in the proof of Proposition 4.1 of \cite{NuIm}, we can show that $E\left( \left\| DZ_{q}\right\| _{\mathcal{H}}\right) <\infty $. This follows from the integrability condition I3) and \begin{equation} \int_{0}^{1}E\left( \Vert Du_{t}\Vert _{\mathcal{H}}^{p}\right) dt<\infty , \label{c3} \end{equation} which holds because of I2), the continuous embedding of $L^{\frac{1}{H}}([0,1])$ into $\mathcal{H}$ (see \cite{MMV}), and the fact that $pH\geq 1$. On the other hand, we can write \begin{equation*} DY_{m,p}=p\int_{0}^{1}\int_{0}^{1}\int_{0}^{1}\left| \xi _{x,y,\alpha }\right| ^{p-1}\mathrm{sign}(\xi _{x,y,\alpha })D\xi _{x,y,\alpha }|x-y|^{-m-2}dxdyd\alpha , \end{equation*} where $\xi _{x,y,\alpha }=\int_{y}^{x}D_{\alpha }u_{r}\delta B_{r}$. Thus \begin{eqnarray*} \Vert DY_{m,p}\Vert _{\mathcal{H}} &\leq &p\int_{0}^{1}\int_{0}^{1}\int_{0}^{1}\left| \xi _{x,y,\alpha }\right| ^{p-1}\Vert D\xi _{x,y,\alpha }\Vert _{\mathcal{H}}|x-y|^{-m-2}dxdyd\alpha \\ &\leq &p(Y_{m,p})^{\frac{p-1}{p}}\left( \int_{0}^{1}\int_{0}^{1}\int_{0}^{1}\Vert D\xi _{x,y,\alpha }\Vert _{\mathcal{H}}^{p}|x-y|^{-m-2}dxdyd\alpha \right) ^{1/p}.
\end{eqnarray*} Now, to show that $1_{(G\leq n)}\Vert DY_{m,p}\Vert _{\mathcal{H}}$ belongs to $L^{1}(\Omega )$, it suffices to show that the random variable \begin{equation*} Y=\int_{0}^{1}\int_{0}^{1}\int_{0}^{1}\Vert D\xi _{x,y,\alpha }\Vert _{\mathcal{H}}^{p}|x-y|^{-m-2}dxdyd\alpha \end{equation*} has a finite expectation. Since, for any $0\leq y<x\leq 1$, \begin{equation*} D\xi _{x,y,\alpha }=\mathbf{1}_{[y,x]}D_{\alpha }u+\int_{y}^{x}DD_{\alpha }u_{s}\delta B_{s}^{H}, \end{equation*} we have \begin{eqnarray*} Y &\leq &C\left( \int_{0}^{1}\int_{0}^{1}\int_{0}^{1}\Vert \mathbf{1}_{[y,x]}D_{\alpha }u\Vert _{\mathcal{H}}^{p}|x-y|^{-m-2}dxdyd\alpha \right. \\ &&+\left. \int_{0}^{1}\int_{0}^{1}\int_{0}^{1}\Vert \int_{y}^{x}DD_{\alpha }u_{s}\delta B_{s}^{H}\Vert _{\mathcal{H}}^{p}|x-y|^{-m-2}dxdyd\alpha \right) \\ := &&C(Y_{1}+Y_{2}). \end{eqnarray*} From the continuous embedding of $L^{\frac{1}{H}}([0,1])$ into $\mathcal{H}$, we obtain \begin{eqnarray*} Y_{1} &\leq &C\int_{0}^{1}\int_{0}^{1}\int_{0}^{1}\Vert \mathbf{1}_{[y,x]}D_{\alpha }u\Vert _{L^{1/H}([0,1])}^{p}|x-y|^{-m-2}dxdyd\alpha \\ &\leq &C\int_{0}^{1}\int_{0}^{1}\int_{0}^{1}|x-y|^{pH-1}\left( \int_{y}^{x}\left| D_{\alpha }u_{r}\right| ^{p}dr\right) |x-y|^{-m-2}dxdyd\alpha . \end{eqnarray*} Hence, $E(Y_{1})<\infty $, by Fubini's theorem, Proposition 3.1 in \cite{NuIm} and condition I2), provided \begin{equation} m<pH-1.
\label{c1a} \end{equation} On the other hand, using the estimate (\ref{meyer}) and again the continuous embedding of $L^{\frac{1}{H}}([0,1])$ into $\mathcal{H}$ yields \begin{eqnarray*} E\left( \Vert \int_{y}^{x}DD_{\alpha }u_{s}\delta B_{s}^{H}\Vert _{\mathcal{H}}^{p}\right) &\leq &C\ E\left( \left\| D_{\alpha }Du_{\cdot}\mathbf{1}_{[y,x]}(\cdot)\right\| _{\mathcal{H}^{\otimes 2}}^{p}+\left\| D_{\alpha }DDu_{\cdot}\mathbf{1}_{[y,x]}(\cdot)\right\| _{\mathcal{H}^{\otimes 3}}^{p}\right) \\ &\leq &C\ E\Big( \left\| \left|D_{\alpha }Du_{\cdot }\right| \mathbf{1}_{[y,x]}(\cdot)\right\| _{L^{1/H}([0,1];\mathcal{H})}^{p} \\ &&+\left\| \left| D_{\alpha }DDu_{\cdot} \right| \mathbf{1}_{[y,x]}(\cdot )\right\| _{L^{1/H}([0,1];\mathcal{H}^{\otimes 2})}^{p}\Big) \\ &\leq &C|x-y|^{pH-1}\Bigg ( \int_{y}^{x}E\left( \left\| \left| D_{\alpha }Du_{r}\right| \right\| _{\mathcal{H}}^{p}\right) dr \\ && +\int_{y}^{x}E\left( \left\| \left| D_{\alpha }DDu_{r}\right| \right\| _{\mathcal{H}^{\otimes 2}}^{p}\right) dr\Bigg ) . \end{eqnarray*} As before we obtain $E(Y_{2})<\infty $ by Fubini's theorem and condition I2), provided (\ref{c1a}) holds. Notice that condition $\frac{1}{p}+2<H(p+1)$ implies that we can choose an $m$ such that (\ref{c1}) and (\ref{c1a}) hold. \vskip0.3cm \noindent \textit{Proof of condition b): } Define $A_{n}=\{G\leq n\}$. Then, condition b) in Theorem \ref{th1} follows from \begin{eqnarray*} \int_{T}E(\langle \gamma DF_{n},\mathbf{1}_{[s,t]}\rangle _{\mathcal{H}})|t-s|^{-\theta }dtds &\leq &C\int_{T}E(\mathbf{1}_{A_{n}}\left| \langle \gamma DG, \mathbf{1}_{[s,t]}\rangle _{\mathcal{H}}\right| )|t-s|^{-\theta }dtds \\ &\leq &CE\left( \mathbf{1}_{A_{n}}\Vert DG\Vert _{\mathcal{H}}\right) \int_{T}|t-s|^{H-\theta }dsdt<\infty , \end{eqnarray*} since $E\left( \mathbf{1}_{A_{n}}\Vert DG\Vert _{\mathcal{H}}\right) <\infty $ and $\theta -H=H+\frac{2H}{q}<1$.
\vskip0.3cm \noindent \textit{Proof of condition c):} We have \begin{equation*} D_{\alpha }D_{\beta }(X_{t}-X_{s})=\mathbf{1}_{[s,t]}(\beta )D_{\alpha }u_{\beta }+\mathbf{1}_{[s,t]}(\alpha )D_{\beta }u_{\alpha }+\int_{s}^{t}D_{\alpha }D_{\beta }u_{r}\delta B_{r}^{H}. \end{equation*} Hence \begin{eqnarray*} \left\langle \gamma ^{\otimes 2} DD(X_{t}-X_{s}),\mathbf{1}_{[s,t]}^{\otimes 2}\right\rangle _{\mathcal{H}^{\otimes 2}} &=&\left\langle \gamma ^{\otimes 2}\mathbf{1}_{[s,t]}(\beta )D_{\alpha }u_{\beta },\mathbf{1}_{[s,t]}^{\otimes 2}\right\rangle _{\mathcal{H}^{\otimes 2}}+\left\langle \gamma ^{\otimes 2} \mathbf{1}_{[s,t]}(\alpha )D_{\beta }u_{\alpha },\mathbf{1}_{[s,t]}^{\otimes 2}\right\rangle _{\mathcal{H}^{\otimes 2}} \\ &&+\left\langle \gamma ^{\otimes 2}\int_{s}^{t}D_{\alpha }D_{\beta }u_{r}\delta B_{r}^{H},\mathbf{1}_{[s,t]}^{\otimes 2}\right\rangle _{\mathcal{H}^{\otimes 2}} \\ := &&J_{1}(s,t)+J_{2}(s,t)+J_{3}(s,t). \end{eqnarray*} For $i=1,2,3$, we set \begin{equation*} A_{i}=E\left( F_{n}\int_{T}|t-s|^{-2\theta }\left| J_{i}(s,t)\right| dsdt\right) . \end{equation*} Let us first bound $A_{1}$: \begin{eqnarray*} A_{1} &\leq &C\int_{T}|t-s|^{2H-2\theta }E\left( \Vert |D_{\alpha }u_{\beta }|\mathbf{1}_{[s,t]}(\beta )\Vert _{\mathcal{H}^{\otimes 2}}\right) dsdt \\ &\leq &C\int_{T}|t-s|^{2H-2\theta }\left( \int_{s}^{t}\int_{s}^{t}\varphi (\beta ,y)d\beta dy\right)^{\frac{1}{2}} dsdt, \end{eqnarray*} where \begin{equation*} \varphi (\beta ,y)=\int_{0}^{1}\int_{0}^{1}E\left( |D_{\alpha }u_{\beta }||D_{x}u_{y}|\right) \phi (\alpha ,x)\phi (\beta ,y)d\alpha dx.
\end{equation*} By Fubini's theorem $A_{1}<\infty $, because $2H-2\theta >-2$, which is equivalent to $q>\frac{2H}{1-H}$, and \begin{equation*} \int_{0}^{1}\int_{0}^{1}\varphi (\beta ,y)d\beta dy \leq E\left( \left\| \left| Du \right| \right\| _{\mathcal{H}^{\otimes 2}}^{2}\right), \end{equation*} which is finite because of the inclusion of $L^{2}([0,1])$ in $\mathcal{H}$ (\ref{inclu}). In the same way we can show that $A_{2}<\infty $. Finally, \begin{eqnarray*} A_{3} &=&E\left( F_{n}\int_{T}|t-s|^{-2\theta }\left| \left\langle \gamma ^{\otimes 2} \int_{s}^{t}D_{\alpha }D_{\beta }u_{r}\delta B_{r}^{H},\mathbf{1}_{[s,t]}^{\otimes 2}\right\rangle _{\mathcal{H}^{\otimes 2}}\right| dsdt\right) \\ &\leq &C\int_{T}|t-s|^{2H-2\theta }E\left( \left\| \int_{s}^{t}DDu_{r}\delta B_{r}^{H}\right\| _{\mathcal{H}^{\otimes 2}}\right) dsdt, \end{eqnarray*} and we conclude as before by using, for example, the bound (\ref{meyer}) for the norm of the Skorohod integral and the condition I2). \hfill \vrule width.25cm height.25cm depth0cm\smallskip \vskip0.5cm \begin{remark} If $p= \frac {3+\sqrt{17}}2$, then $\frac 1p +2 < H(p+1)$ for all $H>\frac 12 $. \end{remark}
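The remark can be made quantitative; the sketch below (ours, not part of the original text) computes, for a given $H>\frac 12$, the least $p$ for which $\frac 1p +2 < H(p+1)$ holds, by solving the quadratic $Hp^{2}+(H-2)p-1=0$:

```python
import math

# Illustration (ours): 1/p + 2 < H(p+1)  <=>  H p^2 + (H-2) p - 1 > 0, so the
# admissible p are exactly those above the positive root of H p^2 + (H-2) p - 1.

def min_p(H):
    assert 0.5 < H <= 1.0
    return ((2 - H) + math.sqrt((2 - H)**2 + 4*H)) / (2*H)

def constraint_ok(p, H):
    return 1/p + 2 < H*(p + 1)

for H in (0.55, 0.7, 0.9, 1.0):
    p0 = min_p(H)
    assert abs((1/p0 + 2) - H*(p0 + 1)) < 1e-9   # p0 sits on the boundary
    assert constraint_ok(p0 + 1e-3, H)
    assert not constraint_ok(p0 - 1e-3, H)
    print(f"H = {H}: admissible p must exceed {p0:.4f}")
```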
1306.4386
\section{Introduction} The arithmetic properties of Fourier coefficients of modular forms are involved in many areas of number theory, and they have formed the topic of a vast amount of research. A prototypical modular form is the Dedekind eta function \[\eta(z):= q^{1/24} \prod_{n=1}^{\infty} (1-q^n),\qquad q:=e^{2\pi iz},\] whose inverse generates the partition function (the prototypical function of additive number theory) via the well-known relation \begin{equation}\label{etadef} \frac1{\eta (z)}= q^{-1/24}\sum_{n=0}^{\infty} p(n) q^n. \end{equation} Most of what is known about the arithmetic properties of $p(n)$ stems from the interpretation of \eqref{etadef} as a weakly holomorphic modular form of weight $-1/2$. Although there are far too many results to mention individually, we mention Ono's paper \cite{Ono:partition} and the subsequent work \cite{AhlgrenOno}, which show that Ramanujan's famous congruence \begin{equation}\label{ramcong5} p(5n+4) \equiv 0 \pmod{5} \end{equation} has an analogue for any modulus $M$ coprime to $6$ (see also the recent work of Folsom, Kent and Ono \cite{FolsomKentOno}). Treneer \cite{Treneer1}, \cite{Treneer2} has extended these results to cover any weakly holomorphic modular form. In the other direction, the first author and Boylan \cite{AhlgrenBoylan} showed that there are no congruences as simple as \eqref{ramcong5} for primes $\geq 13$. The situation has been less clear for the primes which divide $24$. Parkin and Shanks \cite{ParkinShanks} conjectured that \[ \# \left\{ n \leq x: p(n) \not\equiv 0 \pmod{2} \right\} \sim \frac{x}{2},\] and there is an analogous folklore conjecture modulo $3$. The current results are far from this expectation; for example, the best result for the number of odd values of $p(n)$ is due to Nicolas \cite{Nicolas}, who obtained the lower bound $\sqrt{x}(\log\log x)^K/\log x$ for any $K$. An old conjecture of Subbarao \cite{Subbarao} states that there are no linear congruences for $p(n)$ modulo $2$.
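The expansion \eqref{etadef} and the congruence \eqref{ramcong5} are easy to check by machine; the following sketch (our addition, not part of the original text) builds the partition generating function $\prod_{n\ge 1}(1-q^n)^{-1}$ by the standard dynamic-programming recursion:

```python
# Illustration (ours): expand prod_{n>=1} (1-q^n)^{-1} = sum p(n) q^n and check
# Ramanujan's congruence p(5n+4) == 0 (mod 5) up to the truncation order.

N = 200
p = [0]*N
p[0] = 1
for n in range(1, N):          # multiply by (1-q^n)^{-1} = 1 + q^n + q^{2n} + ...
    for m in range(n, N):
        p[m] += p[m-n]

assert p[:10] == [1, 1, 2, 3, 5, 7, 11, 15, 22, 30]
assert all(p[5*n + 4] % 5 == 0 for n in range(39))   # 5*38 + 4 = 194 < N
print("p(5n+4) == 0 (mod 5) verified for n < 39")
```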
In striking recent work, Radu \cite{Radu:Crelle} has proved Subbarao's conjecture, as well as its analogue modulo $3$, with clever and technical arguments. In this paper we will adapt the methods of Radu's paper to prove non-vanishing results for the coefficients of mock theta functions and weakly holomorphic modular forms of certain types. The function \[ f(q) = \sum_{n=0}^{\infty} a(n)q^n=1 + \sum_{n=1}^{\infty} \frac{ q^{n^2}}{(1+q)^2 (1+q^2)^2 \cdots (1+q^n)^2 } \] is a prototypical mock theta function (see, for example, works of Zwegers \cite{Zwegers:2001}, Bringmann and Ono \cite{Ono:fq}, and Zagier \cite{Zagier:2009}). Using a standard argument and \cite{Treneer1}, \cite{Treneer2}, one deduces that there are linear congruences $a(mn+t)\equiv 0\pmod {\ell^j}$ for any prime power $\ell^j$ with $\ell\geq 5$. The function $f(q)$ coincides modulo $2$ with the generating function for partitions. To see this, define the {\em rank} of a partition $\lambda$ as $\lambda_1 - \ell (\lambda)$, where $\lambda_1$ is the largest part of $\lambda$ and $\ell(\lambda)$ is the number of parts. Define $N_{e}(n)$ and $N_{o}(n)$ as the number of partitions of $n$ with even and odd ranks, respectively. Then we have \begin{equation}\label{fqrank} f(q) := \sum_{n=0}^{\infty} a(n) q^n = 1 + \sum_{n=1}^{\infty} \left( N_{e} (n) - N_{o} (n) \right) q^n . \end{equation} Since $p(n) = N_{e} (n) + N_{o} (n)$, we have $a(n) \equiv p(n) \pmod{2}$ for all $n$, and Radu's result implies that there are no linear congruences for $a(n)$ modulo $2$. Here we will prove \begin{theorem}\label{mainthm1} For any positive integer $m$ and any integer $t$ we have \[\sum a(mn+t)q^n\not \equiv 0\pmod 3 .\] \end{theorem} The mock theta function \[ \omega(q):=1+\sum_{n=1}^\infty \frac{q^{2n^2+2n}}{(1+q)^2(1+q^3)^2\cdots (1+q^{2n+1})^2} = \sum_{n=0}^{\infty} c(n) q^n \] appears naturally with $f(q)$ as the component of a vector-valued mock modular form (see, for example, \cite{Zwegers:2001}).
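The congruence $a(n)\equiv p(n)\pmod 2$ used above can be confirmed numerically; the sketch below (ours, not part of the original text) expands $f(q)$ by truncated power-series arithmetic:

```python
# Illustration (ours): expand the mock theta function f(q) as a power series and
# confirm a(n) == p(n) (mod 2), which follows from (fqrank) and p = N_e + N_o.

N = 120

def mul(a, b, N):
    """Product of two truncated power series (coefficient lists), order < N."""
    c = [0]*N
    for i, ai in enumerate(a):
        if ai:
            for j in range(min(N - i, len(b))):
                c[i+j] += ai * b[j]
    return c

def inv_one_plus_qn_sq(n, N):
    """Series of (1+q^n)^{-2} = sum_k (-1)^k (k+1) q^{nk}, truncated at order N."""
    c = [0]*N
    k = 0
    while n*k < N:
        c[n*k] = (-1)**k * (k+1)
        k += 1
    return c

# f(q) = 1 + sum_{n>=1} q^{n^2} / ((1+q)^2 (1+q^2)^2 ... (1+q^n)^2)
a = [0]*N
a[0] = 1
prod = [0]*N
prod[0] = 1
n = 1
while n*n < N:
    prod = mul(prod, inv_one_plus_qn_sq(n, N), N)
    for m in range(N - n*n):
        a[n*n + m] += prod[m]
    n += 1

# partition numbers p(n) by the standard DP
p = [0]*N
p[0] = 1
for j in range(1, N):
    for m in range(j, N):
        p[m] += p[m-j]

assert a[:5] == [1, 1, -2, 3, -3]
assert all((a[m] - p[m]) % 2 == 0 for m in range(N))
print("a(n) == p(n) (mod 2) verified for n <", N)
```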
Andrews \cite{Andrews:Durfee} gives a partition theoretic interpretation for $c(n)$ as the number of partitions of $n+1$ into nonnegative integers such that every part in the partition, with the possible exception of the largest part, appears as a pair of consecutive integers. For example, there are $6$ such partitions of $5$: \[ 5,\;\; 4+(1+0),\;\;3+(1+0)+(1+0),\;\;2+(2+1),\;\;2+(1+0)+(1+0)+(1+0),\;\;(1+0)+\cdots+(1+0). \] This function behaves quite differently modulo $2$. In fact, Andrews \cite[Theorem 31]{Andrews:Durfee} has shown that $c(n)$ is odd if and only if $n = 6j^2 + 4j$ for some integer $j$. For the modulus $3$, we will prove \begin{theorem}\label{mainthm2} For any positive integer $m$ and any integer $t$ we have \[\sum c(mn+t)q^n\not \equiv 0\pmod 3. \] \end{theorem} It is natural to ask how these results extend to more general classes of modular forms. In this direction, we investigate the class \[ \mathcal S(B, k, N, \chi) := \left\{ \eta^{B} (z) F(z) : \text{$F(z) \in M_{k}^{!} (\Gamma_0 (N) , \chi )$} \right\}, \] where $k$ is an integer or half-integer and $M_{k}^{!} (\Gamma_0 (N) , \chi )$ is the space of weakly holomorphic modular forms of weight $k$ and level $N$ with character $\chi$ (see Section 2 for definitions). If $f(z) \in \mathcal S(B, k, N, \chi)$, then we have \[ f(z) = q^{B/24} \sum_{n \ge n_0} a_{f} (n) q^n. \] We show that certain forms of this type do not possess linear congruences modulo $2$ or $3$. If $m$ is a positive integer and $B$ is an integer with $6 \nmid B$, then write $m=2^r 3^s m'$ with $(m', 6)=1$, and define a divisor of $m$ by \begin{equation}\label{qmb} Q_{m, B}=\begin{cases} m'\ \ &\text{if}\ \ (B, 6)=1,\\ 2^rm'\ \ &\text{if}\ \ (B, 6)=2,\\ 3^sm'\ \ &\text{if}\ \ (B, 6)=3. \end{cases} \end{equation} \begin{theorem} \label{etathm} Suppose that $\ell=2$ or $\ell=3$. Suppose that $f\in \mathcal S(B, k, N, \chi)$ and that $f$ has a pole at infinity and leading coefficient equal to $1$. 
Suppose that $\ell\nmid BN$ and that the coefficients of $f$ are $\ell$-integral rational numbers. Then for any positive integers $m$ and $t$ with $(Q_{m, B},N)=1$, we have \[ \sum a_{f} (m n + t)q^n \not\equiv 0 \pmod{\ell}. \] \end{theorem} \begin{remark} The analogous statement will hold for forms with algebraic coefficients, where $\ell$ is replaced by any prime ideal over $\ell$. \end{remark} As an application, we consider eta-quotients, which we express in the standard form \begin{equation}\label{etaproddef} f (z) = \prod_{\delta \mid N } \eta ( \delta z )^{r_{\delta}}. \end{equation} Writing \begin{equation}\label{bfdef} B=B_f:=\sum \delta r_{\delta}, \end{equation} we have \begin{equation}\label{etaprodcoeff} f(z) = q^\frac {B}{24}\sum a_f (n) q^n. \end{equation} \begin{corollary} \label{etacor} Suppose that $f(z)$ is an eta-quotient as in \eqref{etaproddef}--\eqref{etaprodcoeff} and that $f$ has a pole at infinity. Suppose that $\ell=2$ or $\ell=3$ and that $\ell\nmid B$. Write $N=\ell^s N'$ with $\ell\nmid N'$. Then for any positive integers $m$ and $t$ with $(Q_{m, B}, N' )=1$, we have \[ \sum a_f (m n + t )q^n \not\equiv 0 \pmod{\ell}. \] \end{corollary} \begin{remark} The hypotheses are satisfied if $\ell\nmid BN$ and $(m, N)=1$. \end{remark} We give some examples involving Corollary \ref{etacor}. \begin{example} A $k$-multipartition of $n$ is a $k$-tuple of partitions $(\pi_1 , \pi_2, \ldots, \pi_k)$ such that $| \pi_1 | + | \pi_2 | + \cdots + | \pi_k | =n$. The generating function for $k$-multipartitions is \[ \sum_{n=0}^{\infty} p_{k} (n) q^{n - k/24} = \eta^{-k} (z). \] Various congruences for $p_k (n)$ have been studied (see, for example, \cite{multi1}, \cite{multi2}, \cite{multi3}). For example, Andrews \cite{multi1} showed that for each prime $p\geq 5$ there are $(p+1)/2$ values of $b$ with $1 \le b \le p$ for which $p_{p-3} (pn+b) \equiv 0 \pmod{p}$. 
Corollary \ref{etacor} shows that if $\ell=2$ or $\ell=3$ and $\ell \nmid k$, then there are no linear congruences for $p_{k} (n)$ modulo $\ell$. \end{example} \begin{example} A cubic partition of $n$ is a bi-partition ($\pi_1$, $\pi_2$) such that $\pi_2$ contains no odd part. For example, the cubic partitions of $3$ are \[ (3, \varnothing ), \quad (2+1, \varnothing) , \quad (1+1+1, \varnothing), \quad (1,2). \] The generating function for these partitions is \[ f_{\operatorname{cu}}(z)=\sum_{n=0}^{\infty} \operatorname{cu}(n) q^{n-3/24} = \eta^{-1} (z) \eta^{-1} (2z). \] In this case, the quantity $B$ from \eqref{bfdef} is a multiple of $3$. So Corollary~\ref{etacor} does not apply for the prime $\ell=3$. In fact, H.-C. Chan \cite{HeiChi} has shown that \[ \operatorname{cu}(3n+2) \equiv 0 \pmod{3}. \] Corollary \ref{etacor} implies that there is no linear congruence for $\operatorname{cu}(n)$ modulo $2$. \end{example} \begin{example} To explain Ramanujan's congruences for $p(n)$, the {\em crank} of a partition was introduced by Andrews and Garvan \cite{AGcrank}. Let $M_{e} (n)$ and $M_{o} (n)$ denote the number of partitions of $n$ with even and odd crank, respectively. As a counterpart to \eqref{fqrank}, we have \[ \sum_{n=0}^{\infty} (M_{e} (n) - M_{o} (n) ) q^{n - 1/24} = \frac{\eta^{3}(z)}{\eta^{2}(2z)}. \] In \cite{CLK}, Choi, Lovejoy and Kang showed that \[ M_{e} (5n+4) - M_{o} (5n+4) \equiv 0 \pmod{5}. \] Here $B=-1$, so Corollary \ref{etacor} guarantees that there are no linear congruences modulo $2$ or $3$. \end{example} \begin{example} Andrews \cite{Frob1} introduced the generalized Frobenius symbol $c\phi_2$ and showed that \[ \sum_{n=0}^{\infty} c\phi_2 (n) q^{n-1/12} = \frac{\eta^5(2z)}{\eta^4 (z)\eta^2(4z)}. 
\] Andrews \cite[Cor 10.1 and Thm 10.2]{Frob1} proved that \[ c\phi_2 ( 2n+1) \equiv 0 \pmod{2} \quad \text{and} \quad c\phi_2 (5n+3) \equiv 0 \pmod{5}, \] and many congruence properties of these symbols have since been investigated (see, for example, \cite{Frob2}, \cite{PauleRadu}, and \cite{Frob3}). Corollary \ref{etacor} shows that there is no linear congruence $c\phi_2 (mn+t)\equiv 0\pmod{3}$ with odd $m$ (it is likely that an adaptation of these methods can be used to remove the restriction on $m$ in this case). \end{example} \begin{example} The assumption that $f$ has a pole at infinity is necessary. The function $\eta(z)=q^\frac1{24}\sum(-1)^k q^\frac{k(3k+1)}2$ provides a simple example. For another example, the generating function for the number of $4$-core partitions of $n$ is given by \[ \sum_{n=0}^{\infty} c_{4} (n) q^{n+15/24} = \frac{\eta^{4} (4z)}{\eta(z)}, \] and M. Hirschhorn and J. Sellers \cite{HS4core} have shown that \[ c_{4} (9n+2) \equiv 0 \pmod{2}. \] \end{example} \begin{example} The assumption that $(Q_{m, B}, N)=1$ is also necessary in general. For example, if we define $a(n)$ by \[ \sum_{n=0}^{\infty} a(n) q^{n-5/24} = \eta^{-1} (5z), \] then we have $a(5n+1) \equiv a(5n+2) \equiv \cdots \equiv a(5n+4) \equiv 0 \pmod{2}$. \end{example} The first author and Boylan \cite{ABparity} proved that if $\ell$ is prime and $f=\sum a_f(n)q^n$ is a weakly holomorphic modular form with $f\not\equiv 0\pmod \ell$, then \begin{equation}\label{ABresult} \# \left\{ n \leq x : a_f (n) \not\equiv 0 \pmod{ \ell } \right\} \gg_{f,K} \frac{\sqrt{x}}{\log x} (\log\log x)^{K} \end{equation} for any positive integer $K$. In each situation where the results described above imply that $\sum a(mn+t)q^n\not\equiv 0\pmod\ell$, the lower bound \eqref{ABresult} applies to the number of non-zero coefficients (this will be clear from the method of proof). 
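To illustrate the quantity $Q_{m,B}$ defined in \eqref{qmb}, consider $m=12=2^2\cdot 3$, so that $r=2$, $s=1$ and $m'=1$ (a worked example added for the reader):

```latex
\[
Q_{12,B}=\begin{cases}
1\ \ &\text{if}\ \ (B, 6)=1\ \ (\text{e.g. } B=-1,\ \text{as in the crank example}),\\
2^2=4\ \ &\text{if}\ \ (B, 6)=2,\\
3\ \ &\text{if}\ \ (B, 6)=3.
\end{cases}
\]
```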
The proofs follow the outline given by Radu \cite{Radu:Crelle}, and require a careful analysis of the integrality properties of the functions $\sum a(mn+t)q^n$ at various cusps. In Section 2, we give some background on modular forms and mock modular forms. In Sections 3 and 4, we prove Theorems \ref{mainthm1} and \ref{mainthm2}. In Section 5, we prove Theorem \ref{etathm} and its corollary. \section{Preliminaries} We recall the definitions of harmonic weak Maass forms and mock modular forms (see, for example, \cite{Bruinier:2004fk}, \cite{Ono:2009} or \cite{Zagier:2009} for details). Given $k\in \frac{1}{2}\mathbb{Z}\setminus \mathbb{Z}$, $z=x+iy$ with $x, y\in \mathbb{R}$, $4\mid N$ and a Dirichlet character $\chi$ modulo $N$, a {\it harmonic weak Maass form of weight $k$ with Nebentypus $\chi$ on $\Gamma_0(N)$} is a smooth function $F:\mathbb{H}\to \mathbb{C}$ satisfying the following: \begin{enumerate} \item For all $ \left(\begin{smallmatrix}a&b\\c&d \end{smallmatrix} \right)\in \Gamma_0(N)$ and all $z \in \mathbb{H}$, we have \begin{displaymath} F \left(\frac{a z +b}{c z +d} \right) = \leg{c}{d}\epsilon_d^{-2k} \chi(d)\,(cz+d)^{k}\ F(z), \end{displaymath} where \begin{equation*} \epsilon_d:=\begin{cases} 1 \ \ \ \ &{\text {\rm if}}\ d\equiv 1\pmod 4,\\ i \ \ \ \ &{\text {\rm if}}\ d\equiv 3\pmod 4. \end{cases} \end{equation*} \item $\Delta_k(F)=0$, where $\Delta_k$ is the weight $k$ hyperbolic Laplacian, given by \begin{equation*}\label{laplacian} \Delta_k := -y^2\left( \frac{\partial^2}{\partial x^2} + \frac{\partial^2}{\partial y^2}\right) + iky\left( \frac{\partial}{\partial x}+i \frac{\partial}{\partial y}\right). \end{equation*} \item The function $F$ has at most linear exponential growth at the cusps of $\Gamma_0(N)$. \end{enumerate} We denote by $H_{k}\left(\Gamma_0(N), \chi\right)$ the space of harmonic weak Maass forms of weight $k$ with Nebentypus $\chi$ on $\Gamma_0(N)$. 
We denote the subspace of weakly holomorphic forms (i.e., those meromorphic forms whose poles are supported at the cusps of $\Gamma_0(N)$) by $M_{k}^!\left(\Gamma_0(N), \chi\right)$, and the space of holomorphic forms by $M_{k}\left(\Gamma_0(N), \chi\right)$ (if $\chi$ is trivial, then we drop it from the notation). Each harmonic weak Maass form $F$ decomposes uniquely as the sum of a holomorphic part $F^{+}$ and a non-holomorphic part $F^{-}$. The holomorphic part, which is known as a \textit{mock modular form}, is a power series in $q$ with at most finitely many negative exponents. Now define \begin{equation}\label{FDEF} F(z)=(F_0(z), F_1(z), F_2(z))^T:=\left(q^{-1/24}f(q),\ 2q^{1/3}\omega(q^{1/2}), \ 2q^{1/3}\omega(-q^{1/2})\right)^T \end{equation} and \[G:=(g_1, g_0, -g_2)^T,\] where the $g_i (z)$ are theta functions defined by \begin{equation}\begin{aligned}\label{TF} g_0(z)&=\sum_{n\in\mathbb{Z}}(-1)^n\left(n+\tfrac13\right)q^{\frac32\left(n+\frac13\right)^2},\\ g_1(z)&=-\sum_{n\in\mathbb{Z}} \left(n+\tfrac16\right)q^{\frac32\left(n+\frac16\right)^2},\\ g_2(z)&=\sum_{n\in\mathbb{Z}} \left(n+\tfrac13\right)q^{\frac32\left(n+\frac13\right)^2}.\\ \end{aligned} \end{equation} Zwegers \cite{Zwegers:2001} showed that \begin{equation}\label{zwegersdef} T(z):=F(z)-2i\sqrt{3}\int_{-\overline z}^{i\infty}\frac{G(\tau)}{(-i(z+\tau))^\frac12}\, d\tau \end{equation} transforms as \begin{equation} \label{htrans}\begin{aligned} T(z+1)=&\begin{pmatrix} \zeta_{24}^{-1}& 0&0\cr 0&0&\zeta_3\cr 0&\zeta_3&0\cr \end{pmatrix}T(z),\\ T(-1/z)=\sqrt{-i z}&\begin{pmatrix} 0& 1&0\cr 1&0&0\cr 0&0&-1\cr \end{pmatrix}T(z).\\ \end{aligned} \end{equation} We also require the incomplete gamma function, given by \[\Gamma(\alpha, x):=\int_x^\infty e^{-t}t^{\alpha-1}\, dt.\] \section{Proof of Theorem \ref{mainthm1}} We work with the function \[M(z)=F_0(z)-2i\sqrt{3}\int_{-\overline z}^{i\infty}\frac{g_1(\tau)}{(-i(z+\tau))^\frac12}\, d\tau. 
\] A computation using \eqref{FDEF}, \eqref{TF} and \eqref{zwegersdef} shows that $M(z)$ has the form \begin{equation*}M(z)=q^{-1/24}\sum a(n)q^n+q^{-1/24}\sum b(n) \Gamma\left(\tfrac12, 4\pi y (n+\tfrac1{24})\right)q^{-n},\end{equation*} where \begin{equation} \label{quadcond} b(n)=0 \qquad \text{unless}\qquad n=\frac{k(3k+1)}2\qquad \text {for some integer $k$}. \end{equation} For ease of notation we will write \begin{equation}M(z)=H(z)+NH(z),\label{Mform} \end{equation} where \begin{equation}H(z)=F_0(z)=q^{-1/24}\sum a(n)q^n,\label{Hform} \end{equation} and $NH(z)$ is the non-holomorphic part. Suppose that $m$ is a positive integer and that $t$ is a non-negative integer. Setting $\zeta_m:=e^{\frac{2\pi i}m}$, we define \begin{equation}\label{mmtdef} M_{m,t} := \frac1m\sum_{\lambda=0}^{m-1}M_{\lambda, m, t} \end{equation} where \begin{equation}\label{mlmtdef} M_{\lambda, m, t} := \zeta_{m}^{-\lambda (t-1/24)} M\( \( \begin{matrix} 1 & \lambda \\ 0 & m \end{matrix} \) z\). \end{equation} Using notation in analogy with \eqref{Mform}, we write \[M_{m,t}=H_{m,t}+NH_{m,t}.\] A calculation gives \begin{equation}\label{hmt} H_{m,t} = q^{ \frac{t-1/24}{m}} \sum a(mn+t) q^n \end{equation} and \begin{equation}\label{gmt} NH_{m,t} = q^{ \frac{t-1/24}{m}} \sum b(mn-t) \Gamma\left(\tfrac12, 4\pi y\cdot(n+\tfrac{1/24-t}m ) \right)q^{-n}. \end{equation} In order to prove Theorem \ref{mainthm1} we must show that for any progression $t\pmod m$ we have \begin{equation}\label{goal} H_{m, t}\not \equiv 0\pmod 3. \end{equation} We call a progression $t\pmod m$ {\it good} if for some $p\mid m$ we have $\leg{1-24t}p=-1$. By \eqref{quadcond} and \eqref{gmt} we see that if $t\pmod m$ is good, then $NH_{m, t}=0$. Suppose that the progression $ t\pmod m$ is not good. In this case we pick a prime $p\geq 5$ with $p\nmid m$ and a quadratic non-residue $x\pmod p$, and we find a solution to the system of congruences \begin{align*} T&\equiv t\pmod m\\ T&\equiv \frac{1-x}{24}\pmod p. 
\end{align*} After replacing $t\pmod m$ by the sub-progression $T\pmod{mp}$, the proof of \eqref{goal} is therefore reduced to the case of good progressions. The next two propositions describe the transformation properties of the forms $M_{m, t}$. Given a positive integer $m$, we define \begin{equation}\label{nmdef} N_m:=\begin{cases} 2m\ \ \ &\text{if}\ \ \ (m, 6)=1,\\ 8m\ \ \ &\text{if}\ \ \ (m, 6)=2,\\ 6m\ \ \ &\text{if}\ \ \ (m, 6)=3,\\ 24m\ \ \ &\text{if}\ \ \ (m, 6)=6.\\ \end{cases} \end{equation} \begin{proposition} \label{modular} Suppose that $t\pmod m$ is good. Then \[M_{m, t}^{24m}=H_{m, t}^{24m} \in M_{12m}^!(\Gamma_1(N_m)).\] \end{proposition} Now if $A=\begin{pmatrix} a & b \\ c & d \end{pmatrix}\in \Gamma_0(N_m)$ has $3\nmid a$ then we define $t_A$ to be any integer with \begin{equation}\label{tadef} t_A \equiv a^2 t + \frac{1-a^2}{24} \pmod{m}. \end{equation} \begin{proposition} \label{Mtransprop} Suppose that $t\pmod m$ is good. Then for every $A=\begin{pmatrix} a & b \\ c & d \end{pmatrix}\in \Gamma_0(N_m)$ with $3\nmid a$ we have \begin{equation}\label{gamma0trans1} M_{m, t}^{24m}\big|_{12m}A = M_{m, t_A}^{24m}. \end{equation} \end{proposition} In each case, the matrices $A$ as above with $3\nmid a$ generate $\Gamma_1(N_m)$. Therefore Proposition~\ref{modular} follows immediately from Proposition~\ref{Mtransprop}. For Proposition~\ref{Mtransprop} we require a transformation law from the work of Bringmann and Ono \cite[p. 251]{Ono:fq}. 
In particular, for $A:=\begin{pmatrix} a & b \\ c & d \end{pmatrix}\in \Gamma_0(2)$ with $c>0$, we have \begin{equation}\label{Mtrans} M\left(\frac{az+b}{cz+d}\right)=w(A)\cdot (cz+d)^\frac12 M(z), \end{equation} where $w(A)$ is the root of unity given by \begin{equation}\label{wabcd} w(A):=i^{-\frac12}\cdot e^{-\pi i s(-d, c)}\cdot (-1)^\frac{c+1+ad}2\cdot e^{2\pi i(-\frac{a+d}{24c}-\frac a 4+\frac{3 d c}8)} \end{equation} and $s(d,c)$ is the Dedekind sum defined by \begin{equation}\label{dedsum} s(d,c) = \sum_{r=1}^{c-1} \left( \frac{r}{c} - \left\lfloor \frac{r}{c} \right\rfloor - \frac{1}{2} \right) \left( \frac{dr}{c} - \left\lfloor \frac{dr}{c} \right\rfloor - \frac{1}{2} \right).\end{equation} For any $A\in {\rm SL}_2(\mathbb{Z})$, we have the following (see, e.g., \cite[p. 247]{Lewis}): \begin{equation}\label{lewis} 12s(-d, c)+\frac{a+d}c\in \mathbb{Z}, \end{equation} and so \begin{equation}\label{wa} w(A)^{24}=1.\end{equation} We also require a transformation law which follows from Lemma~2 of Lewis \cite{Lewis} (we have corrected a sign error in the statement of that lemma). \begin{lemma} \label{LewisLem2} Let $m$ be a positive integer. Then for every $A=\begin{pmatrix} a & b \\ c & d \end{pmatrix}\in \Gamma_0(N_m)$ with $c>0$ and $3 \nmid a$ we have \[ s ( d+c , mc ) = s (d,mc) + \frac{ 1-a^2 }{12m} + \text{even integer}. \] \end{lemma} \begin{proof}[Proof of Proposition~\ref{Mtransprop}] Suppose that $A=\begin{pmatrix} a & b \\ c & d \end{pmatrix}\in \Gamma_0(N_m)$ has $3\nmid a$. We may suppose that $c>0$. 
For each $\lambda\pmod m$, \eqref{mlmtdef} gives \begin{equation}\label{trans1} M_{\lambda, m, t}( Az) = \zeta_{m}^{- \lambda (t -1/24)} M\( \begin{pmatrix} 1 & \lambda \\ 0 & m \end{pmatrix} Az\) = \zeta_{m}^{-\lambda ( t -1/24 )} M \(A_\lambda\begin{pmatrix} 1 & \lambda^{\prime} \\ 0 & m \end{pmatrix} z \), \end{equation} where \begin{equation}\label{alamdef} A_\lambda = \( \begin{matrix} a+ c \lambda & \frac{-\lambda^{\prime} c \lambda - \lambda^{\prime} a + b + d \lambda }{m} \\ mc & d - c \lambda^{\prime} \end{matrix} \), \end{equation} and $\lambda^{\prime} \in \{ 0,1,\ldots,m-1 \}$ is chosen with \begin{equation}\label{lamprimedef} a \lambda^{\prime} \equiv b + d \lambda \pmod{m}. \end{equation} Note that as $\lambda$ runs over the residue classes modulo $m$, so does $\lambda^{\prime}$. Using \eqref{trans1} and recalling the definitions \eqref{mlmtdef} and \eqref{tadef}, we obtain \begin{equation}\label{maz} M_{\lambda, m, t}( Az) =(cz+d)^\frac12 w(A_\lambda) \zeta_{m}^{-\lambda ( t -1/24 )}\zeta_{m}^{\lambda' (t_A -1/24 )}M_{\lambda', m, t_A}(z). \end{equation} We find that \begin{align*} w(A_\lambda) \zeta_{m}&^{-\lambda ( t -1/24 )}\zeta_{m}^{\lambda' (t_A -1/24 )}\\ &= i^{-\frac12}e^{-\pi i s(-d+c\lambda', mc)} \cdot (-1)^\frac{mc+1+(a+c\lambda)(d-c\lambda^{\prime})}2 \cdot e^{2\pi i(-\frac{a+d}{24mc}-\frac{a +c\lambda} 4+\frac{3mc(d-c\lambda^{\prime})}8)}\zeta_{m}^{-\lambda t+\lambda't_A}\\ &=\zeta_1(A, m)e^{-\pi i s(-d+c\lambda', mc)}\cdot (-1)^\frac{-a c\lambda^{\prime}+cd\lambda}2 \cdot e^{2\pi i(-\frac{a+d}{24mc}-\frac{c\lambda} 4-\frac{3mc^2\lambda^{\prime}}8)}\zeta_{m}^{-\lambda t+\lambda't_A}, \end{align*} where $\zeta_1(A,m)$ is a $24$th root of unity which depends only on $A$ and $m$ (and not on $\lambda$). 
We see that \begin{equation}\label{minus1} (-1)^\frac{-a c\lambda^{\prime}+cd\lambda}2 \cdot e^{2\pi i(-\frac{ c\lambda} 4-\frac{3mc^2\lambda^{\prime}}8)}=1 \end{equation} by writing $c=2c_0$ and separating cases according to the parity of $c_0$ (note that if $c_0$ is odd then $m$ is odd). Therefore we have \[w(A_\lambda) \zeta_{m}^{-\lambda ( t -1/24 )} \zeta_{m}^{\lambda' (t_A -1/24 )} =\zeta_1(A, m)e^{-\pi i s(-d+c\lambda', mc)} \cdot e^{2\pi i(-\frac{a+d}{24mc})}\zeta_{m}^{-\lambda t+\lambda't_A}. \] We apply Lemma \ref{LewisLem2} iteratively to the matrices \[\begin{pmatrix} -a& b-ja\cr c&-d+jc\cr \end{pmatrix}\in \Gamma_0(N_m),\ \ \ 1\leq j\leq \lambda^{\prime}-1\] to find that \[ s ( -d + \lambda^{\prime} c , mc) = s (-d, mc ) +\lambda^{\prime} \frac{1-a^2}{12m} + \text{an even integer} . \] Also, since $\begin{pmatrix} a(1-bc) & -b^2c/m\cr mc & d\end{pmatrix}\in {\rm SL}_2(\mathbb{Z})$, we see from \eqref{lewis} that \[12s(-d, mc)+\frac{a+d}{mc}-\frac{ab}m\in \mathbb{Z}.\] It follows from the last three equations that \begin{equation}\label{simp} w(A_\lambda) \zeta_{m}^{-\lambda ( t -1/24 )} \zeta_{m}^{\lambda' (t_A -1/24 )} =\zeta_2(A, m) e^{-2\pi i\lambda^{\prime}\frac{1-a^2}{24m}}\zeta_{m}^{-\lambda t+\lambda't_A}, \end{equation} where $\zeta_2(A,m)$ is a $24m^{\rm th}$ root of unity which depends only on $A$ and $m$. Recalling \eqref{tadef} and the fact that $\lambda \equiv a^2\lambda'-ab \pmod{m}$, we find that \begin{equation}\label{simp2} \zeta_{m}^{-\lambda t+\lambda't_A}= \zeta_m^{abt} \zeta_m^{\lambda'\frac{1-a^2}{24}}. \end{equation} Combining \eqref{simp} and \eqref{simp2}, we conclude that \begin{equation}\label{simp3} w(A_\lambda) \zeta_{m}^{-\lambda ( t -1/24 )} \zeta_{m}^{\lambda' (t_A -1/24 )} =\zeta_3(A, m), \end{equation} where $\zeta_3(A,m)$ is a $24m^{\rm th}$ root of unity which depends only on $A$ and $m$. Proposition~\ref{Mtransprop} follows immediately from \eqref{simp3}, \eqref{maz}, and \eqref{mmtdef}. 
\end{proof} \begin{lemma} \label{reductionlemma} Suppose that $t \pmod {m}$ is good. Write $m=2^s3^r Q$ with $(Q, 6)=1$. If \[\sum a(mn+t) q^n\equiv 0\pmod 3\] then \[\sum a(Qn+t) q^n\equiv 0\pmod 3.\] \end{lemma} \begin{proof} The arithmetic progression $t\pmod Q$ is the disjoint union of the arithmetic progressions \begin{equation}\label{arithprog} t+\ell Q\pmod m, \ \ \ \ 0\leq \ell< 2^s 3^r. \end{equation} By the argument in \cite[Theorem 4.2]{Radu:Crelle}, we see that as $a$ ranges over integers with $(a, 6m)=1$, the quantity \[ t_{A} \equiv ta^2 + \frac{1-a^2}{24} \pmod{m} \] covers each of the progressions in \eqref{arithprog}. Let $\Delta$ be the usual normalized cusp form of level one and weight 12. By Proposition \ref{modular} there is a positive integer $j$ such that \[ M_{m,t}^{24m} \Delta^{j} \in M_{12(m + j)} ( \Gamma_{1} (N_m) ). \] If $A=\begin{pmatrix} a & b \\ c & d \end{pmatrix}\in \Gamma_0(N_m)$ has $3\nmid a$ then by Proposition \ref{Mtransprop} we have \begin{equation}\label{orbits} M_{m,t}^{24m} \Delta^{j} \Big|_{12(m+j)} A = M_{m,t_{A}}^{24m} \Delta^{j}. \end{equation} We require a fact proved by Deligne and Rapoport (see \cite[VII, Corollary 3.12]{Deligne} or \cite[Corollary 5.3]{Radu:Crelle}): If $f\in M_k(\Gamma_1(N))$ has coefficients in $\mathbb{Z}[\zeta_N]$ then the same is true of $f|_k\gamma$ for each $\gamma\in\Gamma_0(N)$. It follows from \eqref{orbits} that if $M_{m, t}\equiv 0\pmod 3$, then for each $t_A$ with $(a, 6m)=1$ we have $M_{m, t_A}\equiv 0\pmod 3$. By \eqref{arithprog} we conclude that $M_{Q, t}\equiv 0 \pmod 3$, as desired. \end{proof} After Lemma \ref{reductionlemma} we are reduced to proving that if $Q$ is a positive integer with $(Q, 6)=1$ and $t\pmod Q$ is good, then \begin{equation}\label{etp} M_{Q, t}\not\equiv 0\pmod 3. \end{equation} By Proposition~\ref{modular} we see that for sufficiently large $j$, we have \begin{equation}\label{fsQt} h:= M_{Q,t}^{24Q} \Delta^{j} \in M_{12(Q+j)} (\Gamma_{1} (2Q ) ). 
\end{equation} We compute the first term in the expansion of $h$ at the cusp $1/2$. \begin{proposition}\label{cusp12} Let $h$ be the modular form defined in \eqref{fsQt}. Then we have \[ h \Big|_{12(Q+j)} \( \begin{matrix} 1 & 0 \\ 2 & 1 \end{matrix} \) = Q^{-12Q}q^{-Q^2 + j } + \cdots. \] \end{proposition} \begin{proof} By \eqref{mmtdef} and \eqref{mlmtdef} we have \begin{align}\label{mqt} M_{Q,t} \( \( \begin{matrix} 1 & 0 \\ 2 & 1 \end{matrix} \) z \) = \frac{1}{Q} \sum_{\lambda=0}^{Q-1} \zeta_{Q}^{ - \lambda ( t-1/24)} M\( \( \begin{matrix} 1 & \lambda \\ 0 & Q \end{matrix} \) \( \begin{matrix} 1 & 0 \\ 2 & 1 \end{matrix} \) z \). \end{align} We find that \[ \begin{pmatrix} 1 & \lambda \\ 0 & Q \end{pmatrix} \begin{pmatrix} 1 & 0 \\ 2 & 1 \end{pmatrix} \\ = C_\lambda \begin{pmatrix} 1 & \lambda^{\prime} \\ 0 & Q/d_{\lambda} \end{pmatrix} \begin{pmatrix} d_{\lambda} & 0 \\ 0 & 1 \end{pmatrix} , \] where \[d_{\lambda}: = ( 1+2 \lambda, Q),\] $\lambda^{\prime} \in \{0,1,2,\ldots,\frac{Q}{d_{\lambda}} - 1 \}$ is uniquely defined by the congruence \[\frac{1+2\lambda}{d_{\lambda}} \lambda^{\prime} \equiv \lambda \pmod{ Q/d_{\lambda}},\] and \[ C_\lambda :=\begin{pmatrix} \frac{1 + 2 \lambda}{d_{\lambda}} & \frac{ - \frac{1+2 \lambda}{d_{\lambda}} \lambda^{\prime} + \lambda}{Q/d_{\lambda}} \\ 2 Q/d_{\lambda} & -2 \lambda^{\prime} + d_{\lambda} \end{pmatrix} \in \Gamma_{0} (2). \] By \eqref{Mtrans}, we obtain \begin{equation}\label{MC} M \(C_\lambda \frac{d_{\lambda} z + \lambda^{\prime}}{Q/d_{\lambda}} \) = w(C_\lambda)\cdot \( d_{\lambda} ( 2 z+1) \)^{1/2} M \( \frac{d_{\lambda} z + \lambda^{\prime}}{Q/d_{\lambda}} \). \end{equation} Since $M=q^{-1/24}+\dots$, we see from \eqref{mqt} and \eqref{MC} that the leading term in the expansion of \begin{equation}\label{mqtexp} (2z+1)^{-1/2}M_{Q,t} \( \begin{pmatrix} 1 & 0 \\ 2 & 1 \end{pmatrix} z \) \end{equation} arises from those $\lambda$ with $d_\lambda=Q$. 
The only such $\lambda $ is $\lambda_0=(Q-1)/2$, in which case we have $\lambda_0'=0$ and $C_{\lambda_0} :=\begin{pmatrix}1& (Q-1)/2 \\ 2 & Q \end{pmatrix}.$ Therefore the leading term in \eqref{mqtexp} is \[\frac1{\sqrt{Q}} w(C_{\lambda_0}) \zeta_Q^{\left(\frac{1-Q}2\right)(t-1/24)}\cdot q^{\frac{-Q}{24}}+\dots.\] Proposition~\ref{cusp12} follows immediately from \eqref{fsQt} and \eqref{wa}. \end{proof} Theorem~\ref{mainthm1} now follows quickly. \begin{proof}[Proof of Theorem \ref{mainthm1}] Deligne and Rapoport (\cite[Corollary 3.13 and \S 4.8]{Deligne} or \cite[\S 12.3.5]{Diamond}) proved that if $f\in M_k(\Gamma_1(N))$ has coefficients in $\mathbb{Z}[\frac1N, \zeta_N]$, then the same is true of its expansion at each cusp. Suppose by way of contradiction that \eqref{etp} is false. Then the modular form $3^{-24Q} h\in M_{12(Q+j)}(\Gamma_1(2 Q))$ has integral coefficients. It follows that the coefficients of \[3^{-24Q} h \Big|_{12(Q+j)} \( \begin{matrix} 1 & 0 \\ 2 & 1 \end{matrix} \)\] lie in $\mathbb{Z}[ \frac{1}{2Q}, \zeta_{2Q} ]$. This contradicts Proposition~\ref{cusp12}, and Theorem~\ref{mainthm1} is proved. \end{proof} \section{Proof of Theorem \ref{mainthm2}} The flow of the argument is similar to that of the last section. We now work with the function \[ \Omega (z) = 2 q^{2/3} \omega (q) -2i\sqrt{3}\int_{-\overline{2z}}^{i\infty}\frac{g_0 (\tau)}{(-i(2z+\tau))^\frac12}\, d\tau.\] A computation shows that \[ \Omega(z)= 2 q^{2/3} \omega( q) + q^{-1/3} \sum d(n) \Gamma \( \tfrac{1}{2} , 4\pi (n +\tfrac{1}{3} ) y \) q^{-n} , \] where \[ d(n)=0 \qquad \text{unless}\qquad n = 3k^2 + 2k\qquad \text {for some integer $k$}. \] To isolate arithmetic progressions, we define \[ \Omega_{m,t} := \frac{1}{m} \sum_{\lambda=0}^{m -1} \zeta_{m}^{-\lambda( t+2/3)} \Omega \( \( \begin{matrix} 1 & \lambda \\ 0 & m \end{matrix} \) z \). 
\] A calculation gives \[ \Omega_{m,t} = q^{\frac{t+2/3}{m}} \sum c(mn+t) q^n + q^{\frac{t+2/3}{m}} \sum d(mn-t-1) \Gamma \( \frac{1}{2}, 4\pi \( n - \frac{t+2/3}{m} \) y \) q^{-n}. \] In this case, we call the arithmetic progression $t \pmod{m}$ {\it good} if \[ \( \frac{-3t-2}{p} \) = -1 \quad\quad\text{for some prime $p \mid m$.} \] If $t\pmod m$ is good, then $\Omega_{m,t}$ is weakly holomorphic. As in the last section, it suffices to prove that when $t\pmod m$ is good we have \[\Omega_{m,t}\not\equiv 0\pmod 3.\] The transformation properties of $\Omega(z)$ are described in work of Andrews \cite[Theorems 2.1 and 2.4]{Andrews:trans}. Using these results with \eqref{htrans}, we find that for $A=\( \begin{matrix} a & b \\ c & d \end{matrix} \) \in \rm{SL}_2 (\mathbb{Z})$ with $c>0$, we have \begin{equation}\label{OmegaTrans} \Omega (Az) = \begin{cases} w_1 (A) (cz +d)^{1/2} \Omega(z) &\text{ if $c$ is even,}\\ w_2 (A)\cdot 2^{-1/2} (cz +d )^{1/2} M (z/2) &\text{ if $d$ is even,} \end{cases} \end{equation} where $w_1 (A)$ and $w_2(A)$ are the roots of unity defined by \begin{equation}\label{omegadefine}\begin{aligned} w_1 (A) &:= (-i)^{1/2} (-1)^{(a-1)/2}e^{-\pi i s(-d, c/2)} e^{2\pi i \( \frac{3ab}{4} - \frac{a+d}{12c} \)},\\ w_2 (A) &:= i^{1/2} (-1)^{(32a-d)/24c} e^{-\pi i s(-d/2, c)} e^{ -\frac{\pi i}{2} ( 2a+b -3 -3ab+3a/c )}. \end{aligned} \end{equation} Note that we have fixed a sign error in \cite[Theorem 2.1]{Andrews:trans}. \begin{proposition} \label{Omegatransprop} Suppose that $t\pmod m$ is good. Let $N_m$ be as defined in $\eqref{nmdef}$. 
For every $A=\begin{pmatrix} a & b \\ c & d \end{pmatrix}\in \Gamma_0(2N_m)$ with $3\nmid a$ we have \begin{equation}\label{gamma0trans1omea} \Omega_{m, t}^{24m}\big|_{12m}A = \Omega_{m, t_A}^{24m}, \end{equation} where \[t_A: = t a^2 + \tfrac23 (a^{2}-1).\] \end{proposition} \begin{proof}[Proof of Proposition \ref{Omegatransprop}] Write \[\Omega_{\lambda,m,t} (z ) = \zeta_{m}^{- \lambda (t +2/3)} \Omega\( \begin{pmatrix} 1 & \lambda \\ 0 & m \end{pmatrix} z\).\] Suppose that $A \in \Gamma_0 (2N_m)$ with $3 \nmid a$. Then \[ \Omega_{\lambda,m,t} (Az ) = \zeta_{m}^{-\lambda(t+2/3)} \Omega \( A_\lambda \( \begin{matrix} 1 & \lambda^{\prime} \\ 0 & m \end{matrix} \) z \), \] where $A_\lambda$ and $\lambda^{\prime}$ are defined as in \eqref{alamdef} and \eqref{lamprimedef}. Using \eqref{OmegaTrans}, we find that \begin{equation}\label{step1} \Omega_{\lambda,m,t} (Az ) = (cz+d)^{\frac{1}{2}} w_1(A_\lambda) \zeta_m^{-\lambda(t+\frac23)}\zeta_m^{\lambda'(t_A+\frac23)}\Omega_{\lambda^{\prime}, m, t_A} (z). \end{equation} Note that $4m \mid c$ and that $ad\equiv 1\pmod{4m}$. Moreover, from Lemma~\ref{LewisLem2} and \eqref{lewis} we have \[ s(-d+\lambda^{\prime} c , mc/2) = s( -d, mc/2) + \lambda^{\prime} \frac{1- a^2 }{6m} + \text{an even integer}, \] and \[ s(-d, mc/2)+\frac{a+ d }{6mc}-\frac{ab}{6m} \in \tfrac{1}{12}\mathbb{Z}. 
\] Therefore, \begin{align*} w_1(A_\lambda)&=(-i)^{1/2} (-1)^{ (a+c\lambda -1)/2} e^{-\pi i s(-d+c \lambda^{\prime} , mc/2)} e^{2\pi i\left(\frac {3 (a+c\lambda)(-\lambda^{\prime} c \lambda - \lambda^{\prime} a + b + d \lambda)}{4m} - \frac{a+d}{12mc }+\frac{\lambda^{\prime}-\lambda}{12m}\right)}\\ &=\omega_1\cdot e^{-\pi i s(-d+c \lambda^{\prime} , mc/2)} e^{2\pi i \left(\frac{ 3 (-\lambda^{\prime} a^2 +\lambda)} {4m} - \frac{a+d}{12mc }+\frac{\lambda^{\prime}-\lambda}{12m}\right)}\\ &= \omega_2\cdot e^{2\pi i \left(\frac{ 3 (-\lambda^{\prime} a^2 +\lambda)} {4m}+\lambda^{\prime}\frac{a^2-1}{12m}+\frac{\lambda^{\prime}-\lambda}{12m}\right)} \end{align*} where $\omega_1$, etc. denote $24m^{\rm th}$ roots of unity which depend only on $A$, $m$, and $t$. Using this together with the fact that $\lambda \equiv a^2 \lambda^{\prime} - ab \pmod{m}$, we obtain \begin{align*} w_1(A_\lambda)\zeta_m^{-\lambda(t+\frac23)}\zeta_m^{\lambda'(t_A+\frac23)}&= \omega_2\cdot\zeta_m^{-\lambda t+\lambda^{\prime}(t_A-\frac23(a^2-1))}\\ &= \omega_3\cdot\zeta_m^{\lambda^{\prime}(t_A-a^2t-\frac23(a^2-1))}\\ &=\omega_3. \end{align*} Proposition~\ref{Omegatransprop} follows immediately from this together with \eqref{step1}. \end{proof} In this case we cannot pull out powers of $2$ from the arithmetic progressions in question. We have \begin{lemma} \label{reductionlemma2} Suppose that $t \pmod {m}$ is good. Write $m=3^r Q$ with $(Q, 3)=1$. If \[\sum c(mn+t) q^n\equiv 0\pmod 3\] then \[\sum c(Qn+t) q^n\equiv 0\pmod 3.\] \end{lemma} \begin{proof} The arithmetic progression $t\pmod Q$ is the disjoint union of the arithmetic progressions \begin{equation}\label{arithprog2} t+\ell Q\pmod m, \ \ \ \ 0\leq \ell< 3^r. \end{equation} By the argument in \cite[Theorem 4.2]{Radu:Crelle}, we see that as $a$ ranges over integers with $3\nmid a$, the quantity \[ t_{A} \equiv ta^2 + \frac{2}{3} (1-a^2) \pmod{m} \] covers each of the progressions in \eqref{arithprog2}. 
By Proposition \ref{Omegatransprop} there is a positive integer $j$ such that \[ \Omega_{m,t}^{24m} \Delta^{j} \in S_{12(m + j)} ( \Gamma_{1} (2N_m) ). \] If $A=\begin{pmatrix} a & b \\ c & d \end{pmatrix}\in \Gamma_0(2 N_m)$ has $3\nmid a$ then by Proposition \ref{Omegatransprop} we have \[ \Omega_{m,t}^{24m} \Delta^{j} \Big|_{12(m+j)} A = \Omega_{m,t_{A}}^{24m} \Delta^{j}. \] If $\Omega_{m, t}\equiv 0\pmod 3$, then the fact recorded after \eqref{orbits} shows that for each $t_A$ with $3\nmid a$ we have $\Omega_{m, t_A}\equiv 0\pmod 3$. By \eqref{arithprog2} we conclude that $\Omega_{Q, t}\equiv 0 \pmod 3$, as desired. \end{proof} By Proposition~\ref{Omegatransprop} we see that for sufficiently large $j$, we have \begin{equation}\label{omegasQt} h_{\Omega}:= \Omega_{Q,t}^{24Q} \Delta^{j} \in M_{12(Q+j)} (\Gamma_{1} (4Q ) ). \end{equation} In this case, we compute the first term in the expansion of $h_{\Omega} \Big| \( \begin{matrix} 1 & 0 \\ 1& 1 \end{matrix}\)$. \begin{proposition}\label{cusp13} Let $h_{\Omega}$ be the modular form defined in \eqref{omegasQt}. Then we have \[ h_{\Omega} \Big|_{12(Q+j)} \( \begin{matrix} 1 & 0 \\ 1 & 1 \end{matrix} \) = (-1)^Q\cdot(2Q)^{-12Q} q^{-\frac{Q^2} 2 + j } + \cdots. \] \end{proposition} \begin{proof} Recall that \begin{align}\label{omegaqt} \Omega_{Q,t} \( \( \begin{matrix} 1 & 0 \\ 1 & 1 \end{matrix} \) z \) = \frac{1}{Q} \sum_{\lambda=0}^{Q-1} \zeta_{Q}^{ - \lambda ( t+2/3)} \Omega \( \( \begin{matrix} 1 & \lambda \\ 0 & Q \end{matrix} \) \( \begin{matrix} 1 & 0 \\ 1 & 1 \end{matrix} \) z \). 
\end{align} We find that \[ \begin{pmatrix} 1 & \lambda \\ 0 & Q \end{pmatrix} \begin{pmatrix} 1 & 0 \\ 1 & 1 \end{pmatrix} = D_\lambda \begin{pmatrix} 1 & \lambda^* \\ 0 & Q/d_{\lambda} \end{pmatrix} \begin{pmatrix} d_{\lambda} & 0 \\ 0 & 1 \end{pmatrix} , \] where \[d_{\lambda} := ( 1+ \lambda, Q),\] $\lambda^{*}$ is chosen to satisfy the congruence \[\frac{1+\lambda}{d_{\lambda}} \lambda^{*} \equiv \lambda \pmod{ Q/d_{\lambda}},\] and \[ D_\lambda :=\begin{pmatrix} \frac{1 + \lambda}{d_{\lambda}} & \frac{ - \frac{1+ \lambda}{d_{\lambda}} \lambda^{*} + \lambda}{Q/d_{\lambda}} \\ Q/d_{\lambda} & - \lambda^{*} + d_{\lambda} \end{pmatrix} \in {\rm SL}_2(\mathbb{Z}). \] Moreover, we may choose the values of $\lambda^*$ in such a way that \[-\lambda^*+d_\lambda\ \ \text{is even whenever $Q/d_\lambda$ is odd.}\] By \eqref{OmegaTrans}, we obtain \begin{equation}\label{OmegaD} \Omega \(D_\lambda \frac{d_{\lambda} z + \lambda^{*}}{Q/d_{\lambda}} \) = \begin{cases} w_1(D_\lambda)\cdot \( d_{\lambda} ( z+1) \)^{1/2} \Omega \( \frac{d_{\lambda} z + \lambda^{*}}{Q/d_{\lambda}} \) &\text{if $Q/d_{\lambda}$ is even,} \\ w_2(D_\lambda)\cdot \( \frac{d_{\lambda}}{2} ( z+1) \)^{1/2} M \( \frac{d_{\lambda} z + \lambda^{*}}{2 Q/d_{\lambda}} \) &\text{if $Q/d_{\lambda}$ is odd.} \end{cases} \end{equation} Since $M=q^{-1/24}+\dots$ and $\Omega = q^{2/3} + \dots$, we see from \eqref{omegaqt} and \eqref{OmegaD} that the leading term in the expansion of \begin{equation}\label{omegaqtexp} (z+1)^{-1/2} \Omega_{Q,t} \( \begin{pmatrix} 1 & 0 \\ 1 & 1 \end{pmatrix} z \) \end{equation} arises from those $\lambda$ with $d_\lambda=Q$. The only such $\lambda$ is $\lambda_0=Q-1$.
We may choose $\lambda_0^*=Q$, so that $D_{\lambda_0} :=\begin{pmatrix} 1& -1 \\ 1 & 0 \end{pmatrix}.$ Therefore the leading term in \eqref{omegaqtexp} is \[\frac1{\sqrt{2Q}} w_2 (D_{\lambda_0})\zeta_Q^{(1-Q)(t+2/3)} e^{\frac{-2\pi iQ}{48}} \cdot q^{\frac{-Q}{48}}+\dots,\] and Proposition~\ref{cusp13} follows from \eqref{omegadefine}. \end{proof} Theorem~\ref{mainthm2} follows by the argument used to prove Theorem~\ref{mainthm1} in the previous section. \section{Proof of Theorem \ref{etathm} and Corollary \ref{etacor}} Recall that \[ \mathcal{S}(B,k,N,\chi) := \left\{ \eta^{B} (z) F(z) : \text{$F(z) \in M_{k}^{!} (\Gamma_0 (N) , \chi )$} \right\}. \] Suppose that $\ell=2$ or $\ell=3$ and that \begin{equation} \label{fdefinition} f(z)= q^{B/24} \sum_{n=n_0}^\infty a_{f} (n) q^n=q^{n_0+B/24}+\dots \in \mathcal{S}(B,k,N,\chi) \end{equation} has rational, $\ell$-integral coefficients and a pole at infinity. Given $m$ and $t$ we define \[ f_{m,t} := \frac{1}{m} \sum_{\lambda=0}^{m-1} f_{\lambda,m,t} := \frac{1}{m} \sum_{\lambda=0}^{m-1} \zeta_{m}^{ - \lambda ( t+ B/24)} f \( \( \begin{matrix} 1 & \lambda \\ 0 & m \end{matrix} \) z \). \] A calculation as in \eqref{hmt} reveals that \[ f_{m,t} = q^{ \frac{t+ B/24}{m}} \sum a_{f} (mn+t) q^n . \] We recall the transformation formula of the eta function (see, e.g., \cite{Lewis}). \begin{lemma} \label{etatrans} Suppose that $A = \(\begin{matrix} a & b \\ c & d \end{matrix} \) \in \Gamma_0 (1)$ has $c>0$. Then \[ \eta ( A z ) = \exp \( \frac{\pi i }{12} \( \frac{ a+d}{c} - 12 s \( d, c \) \) \) \sqrt{-i (cz +d)}\cdot \eta ( z ), \] where $s(d,c)$ is the Dedekind sum defined in \eqref{dedsum}. \end{lemma} Define \begin{equation}\label{nmdef1} N_m:=\begin{cases} m\ \ \ &\text{if}\ \ \ (m, 6)=1,\\ 8m\ \ \ &\text{if}\ \ \ (m, 6)=2,\\ 3m\ \ \ &\text{if}\ \ \ (m, 6)=3,\\ 24m\ \ \ &\text{if}\ \ \ (m, 6)=6 \end{cases} \end{equation} (this differs slightly from the definition of $N_m$ in Section~3).
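As a numerical aside (not needed for the argument), Lemma~\ref{etatrans} can be checked in floating point by truncating the $q$-product for $\eta$. The script below uses the standard definition $s(d,c)=\sum_{k=1}^{c-1}((k/c))((kd/c))$ of the Dedekind sum; the test matrix, base point, and truncation depth are arbitrary choices.

```python
import cmath, math

def sawtooth(x):
    """((x)) = x - floor(x) - 1/2 for non-integral x, and 0 otherwise."""
    if x == math.floor(x):
        return 0.0
    return x - math.floor(x) - 0.5

def dedekind_sum(d, c):
    """s(d, c) = sum_{k=1}^{c-1} ((k/c)) ((kd/c)) for c > 0."""
    return sum(sawtooth(k / c) * sawtooth(k * d / c) for k in range(1, c))

def eta(z, terms=300):
    """Truncated q-product eta(z) = e^{pi i z / 12} prod_{n>=1} (1 - q^n)."""
    q = cmath.exp(2j * cmath.pi * z)
    val = cmath.exp(cmath.pi * 1j * z / 12)
    for n in range(1, terms):
        val *= 1 - q ** n
    return val

# A sample matrix in SL_2(Z) with c > 0 (any such matrix would do).
a, b, c, d = 2, 1, 3, 2          # ad - bc = 1
z = 0.1 + 0.8j                   # a point in the upper half-plane
Az = (a * z + b) / (c * z + d)
eps = cmath.exp((cmath.pi * 1j / 12) * ((a + d) / c - 12 * dedekind_sum(d, c)))
lhs = eta(Az)
rhs = eps * cmath.sqrt(-1j * (c * z + d)) * eta(z)
print(abs(lhs - rhs))            # numerically negligible
```

For this sample matrix the multiplier works out to $e^{\pi i/6}$, since $s(2,3)=-1/18$; the principal branch of the square root is correct here because $c>0$ forces $-i(cz+d)$ into the right half-plane.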
Suppose that $A=\(\begin{matrix} a & b \\ c & d \end{matrix} \) \in \Gamma_0 (N N_m )$ has $(a, 6)=1$, and let $A_\lambda$ and $\lambda^{\prime}$ be as defined in \eqref{alamdef} and \eqref{lamprimedef}. Suppose that $k$ is an integer. Then we have \begin{equation} \begin{aligned}\label{halfcase} f_{\lambda, m, t} ( Az ) &= \zeta_{m}^{- \lambda (t + B/24)} f \( \( \begin{matrix} 1 & \lambda \\ 0 & m \end{matrix} \) A z \) \\ &= \zeta_{m}^{-\lambda ( t + B/24 )} f \( A_{\lambda} \( \begin{matrix} 1 & \lambda^{\prime} \\ 0 & m \end{matrix} \) z\) \\ &=\sqrt{-i (cz+d)}^{B} (cz+d)^{k} \chi (d - c \lambda^{\prime} ) e^{\frac{B \pi i}{12} \( \frac{a+d}{mc} + \frac{\lambda - \lambda^{\prime}}{m} -12s(d-c\lambda^{\prime},mc) \)} \\ &\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\cdot \zeta_{m}^{- \lambda (t + B/24)} f \( \( \begin{matrix} 1 & \lambda^{\prime} \\ 0 & m \end{matrix} \) z \) \\ &= \sqrt{-i (cz+d)}^{B} (cz+d)^{k} \chi (d) e^{\frac{B \pi i}{12} \( \frac{a+d}{mc} -12s(d-c\lambda^{\prime},mc) \)} \zeta_{m}^{- \lambda t + \lambda^{\prime} t_A } f_{\lambda^{\prime}, m, t_A } , \end{aligned}\end{equation} where \begin{equation}\label{tdefnew} t_A \equiv ta^{2} -\frac{B(1-a^2)}{24} \pmod{m}. \end{equation} From Lemma 2 of \cite{Lewis} (correcting the sign error), we find that \[ s ( d - c\lambda^{\prime} , mc) = s(d,mc) - \lambda^{\prime} \frac{1-a^{2}}{12m} +\text{an even integer}, \] and that \[ 12 s (d,mc) - \frac{a+d}{mc} + \frac{ab}{m} \in \mathbb{Z}. \] Since $\lambda \equiv a^2 \lambda^{\prime} - ab \pmod{m}$, \eqref{halfcase} reduces to \[ f_{\lambda, m, t} ( Az ) = \zeta_{24m}^{\Phi(A,B,m,t)} \chi (d) \sqrt{-i (cz+d)}^{B} (cz+d)^{k} f_{\lambda^{\prime}, m, t_A}, \] where $\Phi(A,B,m, t)$ is an integer depending only on $A$, $B$, $m$, and $t$. We conclude that for $A \in \Gamma_0(N N_m)$ with $(a, 6)=1$ we have \begin{equation} \label{etatranscon} ( f_{m,t} (A z) )^{24mN} = (c z + d)^{24mN\(k+ B/2\)} ( f_{m,t_A} (z) )^{24mN}. 
\end{equation} If $k$ is not an integer, then the factor $\chi (d) (cz+d)^{k}$ in \eqref{halfcase} is replaced by \[ \( \frac{ mc}{d-c \lambda^{\prime}} \)\epsilon_{d-c \lambda^{\prime}}^{-2k} \chi (d) (cz + d)^{k}. \] We have $\epsilon_{d-c \lambda^{\prime} } = \epsilon_{d}$ since $c$ is a multiple of $4$. To show that $\leg {mc}{d-c \lambda^{\prime}}$ is independent of $\lambda^{\prime}$, write $mc=2^e p_1\dots p_t$ with odd primes $p_i$. For each $i$ we have $\leg{p_i}{d-c\lambda^{\prime}}=(-1)^{\frac{p_i^2-1}{2}\frac{d^2-1}2}\leg{d}{p_i}$. If $8\mid c$ then $\leg{2^e}{d-c\lambda^{\prime}}=\leg{2^e}{d}$, while if $8\nmid c$ then $m$ is odd, so that $e=2$. Thus, we can conclude \eqref{etatranscon} in this case as well. By \eqref{etatranscon}, there is a positive integer $j$ such that \[ f_{m,t}^{24mN} \Delta^{j} \in S_{24mN(k + B/2)+12j} ( \Gamma_{1} (N N_m) ), \] and if $A=\begin{pmatrix} a & b \\ c & d \end{pmatrix}\in \Gamma_0(N N_m)$ has $(a,6)=1$ then \[ f_{m,t}^{24mN} \Delta^{j} \Big| A = f_{m,t_{A}}^{24mN} \Delta^{j}. \] Now recall the definition \eqref{qmb} of $Q_{m, b}$, and write $Q=Q_{m, B}$ for simplicity. The argument in \cite[Theorem 4.2]{Radu:Crelle} shows that as $A$ ranges over elements of $\Gamma_0(N N_m)$ with $(a,6)=1$, the quantity $t_{A}$ in \eqref{tdefnew} covers each of the progressions \[t+j Q\pmod m, \ \ \ \ 0\leq j< m/Q.\] So if $f_{m, t} \equiv 0 \pmod{\ell}$ we may conclude as before that $f_{Q, t}\equiv 0\pmod\ell$, where $(Q, N)=1$. To obtain a contradiction we calculate the expansion of $f_{Q,t}$ at the cusp $1/N$. 
We have \begin{align*} f_{Q,t} \( \( \begin{matrix} 1 & 0 \\ N & 1 \end{matrix} \) z \) &= \frac{1}{Q} \sum_{\lambda = 0 }^{Q-1} \zeta_{Q}^{-\lambda (t+B/24)} f \( \( \begin{matrix} 1 & \lambda \\ 0 & Q \end{matrix} \) \( \begin{matrix} 1 & 0 \\ N & 1 \end{matrix} \) z \) \\ &=\frac{1}{Q} \sum_{\lambda = 0 }^{Q-1} \zeta_{Q}^{-\lambda (t+B/24)} f \( C^{\prime} \( \begin{matrix} 1 & \lambda^{\prime} \\ 0 & Q/d_{\lambda} \end{matrix} \) \( \begin{matrix} d_{\lambda} & 0 \\ 0 & 1 \end{matrix} \) z \), \end{align*} where $d_{\lambda} = \gcd ( 1+ \lambda N , Q)$, $\lambda^{\prime}$ is a solution of $\frac{1+\lambda N}{d_{\lambda}} \lambda^{\prime} \equiv \lambda \pmod{\frac{Q}{d_{\lambda}}}$, and \[ C^{\prime} = \( \begin{matrix} \frac{1+ \lambda N}{d_{\lambda}} & \frac{ -\( \frac{1+\lambda N}{d_{\lambda}} \) \lambda^{\prime} + \lambda}{Q/d_{\lambda}} \\ NQ / d_{\lambda} & -N\lambda^{\prime} + d_{\lambda} \end{matrix} \)\in \Gamma_0(N). \] Since $f$ has a pole at infinity, we see that the leading term of \[(Nz+1)^{-k - B/2} f_{Q,t}\] arises from the unique $\lambda_0$ with $d_{\lambda_0} =Q$. Since we may take $\lambda^{\prime}=0$ for this $\lambda_0$, the coefficient of this term is \[ \xi Q^{B/2 + k - 1} , \] where $\xi$ is a $24QN^{\rm th}$ root of unity. We have $f_{Q, t}^{24QN}\Delta^j\in S_{24QN(k+B/2)+12j}(\Gamma_1(NN_Q))$. Note that if $\ell\nmid B$ then $\ell\nmid N_Q$ by \eqref{qmb} and \eqref{nmdef1}. Therefore $\ell\nmid NN_Q$, so we obtain a contradiction as before, and Theorem \ref{etathm} is proved.\qed We finish by treating the case of eta-quotients.
\begin{proof}[Proof of Corollary \ref{etacor}] Suppose that $f$ is the $\eta$-quotient \begin{equation*} f(z) = \prod_{\delta \mid N } \eta ( \delta z )^{r_{\delta}} \end{equation*} and recall that \begin{equation}\label{almostdone2} B = \sum_{\delta \mid N} \delta r_{\delta}.\end{equation} If $\ell\mid \delta$ for some $\delta$, then, writing $\delta=\ell^s\delta'$, we replace the factor $\eta(\delta z)^{r_\delta}$ by the factor $\eta(\delta' z)^{\ell^s r_\delta}$. These factors are congruent modulo $\ell$, and the value of $B$ is not affected by this replacement. After this discussion, we may assume that $\ell\nmid N$. Write \[ f (z) = \eta^{B} (z) \frac{ f (z)}{\eta^{B} (z)}. \] and set \begin{equation}\label{almostdone1} k=\frac12\(\sum_{\delta \mid N} r_{\delta} - B\). \end{equation} Suppose first that $\ell=2$. In this case $N$ is odd, so $B$ and $\sum_{\delta\mid N} r_\delta$ have the same parity, and $k$ is an integer. We have \begin{equation}\label{almostdone3} N^2 \left( \sum_{\delta \mid N} \frac{r_{\delta}}{\delta} - B\right) \equiv 0\pmod {24} \end{equation} (to see this, consider the cases $3\nmid N$ and $3\mid N$ separately). In view of \eqref{almostdone2} and \eqref{almostdone3}, a standard criterion \cite{Newman} applies to show that $f(z)/\eta^B(z)\in M_k^!(\Gamma_0(N^2), \chi)$ for some character $\chi$. If $\ell=3$ and $k$ is an integer, then a similar argument shows that $f(z)/\eta^B(z)\in M_k^!(\Gamma_0(N^4), \chi)$ for some $\chi$. Finally, suppose that $k$ is not an integer. Then \eqref{almostdone1} and \eqref{almostdone2} show that $N$ must be even. So we again have $f(z)/\eta^B(z)\in M_k^!(\Gamma_0(N^4), \chi)$ for some $\chi$. In each of the cases, Corollary~\ref{etacor} follows from Theorem~\ref{etathm}. \end{proof} \bibliographystyle{plain}
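The replacement of $\eta(\delta z)^{r_\delta}$ by $\eta(\delta' z)^{\ell^s r_\delta}$ in the proof above rests on the congruence $\prod_{n\geq 1}(1-q^{\ell n})\equiv \prod_{n\geq 1}(1-q^n)^{\ell} \pmod{\ell}$, an instance of the freshman's dream applied factor by factor. As an independent sanity check (not part of the proof), this can be verified on truncated $q$-expansions; the truncation depth $N$ below is an arbitrary choice.

```python
# Check prod(1 - q^{ell*n}) ≡ prod(1 - q^n)^ell (mod ell) for ell = 2, 3,
# on q-series truncated to N coefficients.
N = 60

def mul(a, b):
    """Multiply two integer q-series (coefficient lists), truncated to N terms."""
    c = [0] * N
    for i, ai in enumerate(a):
        if ai:
            for j in range(min(len(b), N - i)):
                c[i + j] += ai * b[j]
    return c

def euler_product(step):
    """Coefficients of prod_{n>=1} (1 - q^{step*n}) up to q^{N-1}."""
    s = [0] * N
    s[0] = 1
    for n in range(step, N, step):
        factor = [0] * N
        factor[0] = 1
        factor[n] = -1
        s = mul(s, factor)
    return s

e1 = euler_product(1)  # prod (1 - q^n): the pentagonal number theorem series
for ell in (2, 3):
    lhs = euler_product(ell)        # E(q^ell)
    rhs = [1] + [0] * (N - 1)
    for _ in range(ell):
        rhs = mul(rhs, e1)          # E(q)^ell
    assert all((x - y) % ell == 0 for x, y in zip(lhs, rhs))
print("congruences verified mod 2 and mod 3")
```

The prefactor bookkeeping is consistent as well: $\eta(\ell^s\delta' z)^{r_\delta}$ and $\eta(\delta' z)^{\ell^s r_\delta}$ both contribute $\ell^s\delta' r_\delta$ to $B=\sum_\delta \delta r_\delta$, as used in the proof.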
\section{Introduction} \noindent Differentiable extensions of functions were considered already in the 1920's. In \cite{J}, V.~Jarn\'ik proved that every real-valued differentiable function defined on a~perfect subset of~$\R$ can be extended to an everywhere differentiable function on~$\R$ (he even proved a~stronger form of this result with preservation of Dini derivatives). This result was independently obtained by G.~Petruska and M.~Laczkovich in \cite{PL} (even with some additional estimates for the derivative of the extended function) and generalized to real-valued differentiable functions defined on arbitrary closed subsets of~$\R$ by J.~Ma\v{r}\'ik in \cite{M}. Extensions of vector-valued functions defined on (not necessarily closed) subsets of~$\R$ that preserve the derivative or even some other local properties (e.g. boundedness, continuity or Lipschitz property) were investigated by A.~Nekvinda and L.~Zaj\'i\v{c}ek in \cite{NZ}. In \cite[Theorem 7]{ALP}, V.~Aversa, M.~Laczkovich and D.~Preiss proved a~result concerning the extendibility to a~real-valued differentiable function on $\rn$. In particular, they proved that given a~function $f$ (defined on some nonempty closed set $F\subset\rn$) and a~derivative of $f$ (with respect to $F$), there exists an everywhere differentiable extension to $\rn$ that preserves the prescribed derivative if and only if this prescribed derivative is a~Baire one function on~$F$. (Recall that a function is {\em Baire one} if it is the point-wise limit of a sequence of continuous functions.) The existence of continuously differentiable extensions of real-valued functions defined on closed subsets of $\rn$ was studied in \cite{W} by H.~Whitney already in the 1930's (even with preservation of higher orders of smoothness). The vector-valued case can be found, e.g., in \cite[Theorem 3.1.14]{Fed}. 
In \cite{KZ}, M.~Koc and L.~Zaj\'i\v{c}ek proved a~result that naturally jointly generalized both the extension result of V.~Aversa, M.~Laczkovich and D.~Preiss \cite{ALP} as well as the $C^1$ case of Whitney's extension theorem for real-valued functions defined on closed subsets of $\rn$ (see, e.g., \cite[\S\,6.5]{EG}). Their result \cite[Theorem~3.1]{KZ} can be roughly described as a~theorem on extendibility to a~differentiable function with preservation of points of continuity of the derivative. We were able to generalize this result further, with the main focus on {\em vector-valued} functions. We added several other new features, for example non-restrictive assumptions allowing an arbitrary function (existing differentiability and continuity points are preserved), the preservation of the point-wise, local and global Lipschitz property, and a~generalization to infinite-dimensional domains. One of the main contributions (extensions of vector-valued Baire one functions) was, due to its different nature and technical difficulty, moved to a~separate paper~\cite{KocKolarB1}. Our main results on differentiable extensions (see Theorem~\ref{thm:infinf} and Theorem~\ref{thm:fininf}) can be jointly formulated in the following way (recall that for $p\in \N\cup\{\infty\}$, $C^p$ denotes the class of $p$-times continuously differentiable functions in the Fr\'echet sense; note that the notion does not change if the Fr\'echet sense is replaced by the G\^ateaux sense): \newcounter{saveenum} \begin{theorem} \label{thm:difext} Let $X$, $Y$ be normed linear spaces, $\theset\subset X$ a~closed set, $f\fcolon \theset\to Y$ an~arbitrary function and $L\fcolon \theset\to\mathcal L(X,Y)$ a~Baire one function. Let $p\in\N\cup\{\infty\}$.
Then there exists a~function $\bar{f}\fcolon X\to Y$ such that \begin{enumerate}[\textup\bgroup (i)\egroup] \item\label{thm:difext:item:ext} $\bar{f}=f$ on $\theset$, \item\label{thm:difext:item:cont} if $a\in \theset$ and $f$ is continuous at $a$ \textup(with respect to $\theset$\textup), then $\bar{f}$ is continuous at $a$, \item\label{thm:difext:item:hoelder} if $a\in \theset$, $\alpha\in (0,1]$ and $f$ is $\alpha$-H\"{o}lder continuous at $a$ \textup(with respect to $\theset$\textup), then $\bar{f}$ is $\alpha$-H\"{o}lder continuous at $a$; in~particular, if $f$ is Lipschitz at $a$ \textup(with respect to $\theset$\textup), then $\bar{f}$ is Lipschitz at $a$, \item\label{thm:difext:item:frechet} if $a\in \theset$ and $L(a)$ is a~relative Fr{\'e}chet derivative of $f$ at $a$ \textup(with respect to $\theset$\textup), then $(\bar{f})^\prime(a)=L(a)$, \item\label{thm:difext:item:contcomp} $\bar{f}$ is continuous on $X\setminus \theset$, \item\label{thm:difext:item:smoothcomp} if $X$ admits $C^p$-smooth partition of unity, then $\bar{f}\in{C^p}(X\setminus \theset,Y)$. 
\setcounter{saveenum}{\value{enumi}} \end{enumerate} Moreover, if $\dim X<\infty$, then \begin{enumerate}[\textup\bgroup (i)\egroup] \setcounter{enumi}{\value{saveenum}} \item\label{thm:difext:item:strict} if $a\in \theset$, $L$ is continuous at $a$ and $L(a)$ is a~relative strict derivative of $f$ at $a$ \textup(with respect to $\theset$\textup), then the Fr{\'e}chet derivative $(\bar{f})^\prime$ is continuous at $a$ with respect to $(X\setminus \theset)\cup\{a\}$ and $L(a)$ is the strict derivative of $\bar{f}$ at $a$ \textup(with respect to $X$\textup), \item\label{thm:difext:item:lip-Loc-Glob} if $a\in \theset$, $R>0$, $L$ is bounded on $B(a,R) \cap \theset$ and $f$ is Lipschitz continuous on $B(a,R) \cap \theset$, then $\bar{f}$ is Lipschitz continuous on $B(a,r)$ for every $r<R$; if $L$ is bounded on $\theset$ and $f$ is Lipschitz continuous on $\theset$, then $\bar{f}$ is Lipschitz continuous on $X$. \end{enumerate} \end{theorem} \begin{remark}\label{rem:ZAthm:diffext} \myParBeforeItems \begin{enumerate}[(a)] \item Statement \itemref{thm:difext:item:smoothcomp} of Theorem~\ref{thm:difext} can be modified as follows: if $X$ admits $\mathcal F$-partition of unity where $\mathcal F$ is a~fixed class of functions on $X$ from Lemma~\ref{l:XbezFjakoDefF}, then $\bar{f}|_{X\setminus F}$ is of class $\mathcal F$ (where we say that $g|_U$ is of class $\mathcal F$, on an open set $U$, if for every $x\in U$, there is a~neighborhood $V$ of $x$ such that $g|_V$ is a~restriction of a~function from $\mathcal F$). \item The assumptions on partitions of unity cannot be removed from (\ref{thm:difext:item:smoothcomp}), see Proposition~\ref{prop:necessary}.
\item\label{rem:ZAthm:diffext:item:C1} In (\ref{thm:difext:item:strict}), if we additionally assume that $L(b)$ is a~relative Fr\'echet derivative of $f$ at $b$ (with respect to $F$) for every $b\in U$ where $U$ is a~relative neighborhood of $a$ in $F$, then $(\bar f)'$ is continuous at $a$ (with respect to $X$). \item The condition $\dim X<\infty$ cannot be removed from \itemref{thm:difext:item:strict} or \itemref{thm:difext:item:lip-Loc-Glob}, see Remark~\ref{rem:finDimNecess} for an example. \end{enumerate} \end{remark} If the Nagata dimension $\dim_N F$ of $F$ is finite (see \cite[Definition~4.26]{BBI} or \cite[p.~13]{BBII}), there is no obvious obstacle for extending with preservation of the Lipschitz property \cite[Theorem~6.26]{BBII}.\footnote{\relax The space $Y$ can be used as the range, see~\cite[Proposition~6.5]{BBII}, whose proof works for normed linear spaces.} Thus we raise a~natural question: \begin{question} Can the condition $\dim X< \infty$ in Theorem~\ref{thm:difext} (\itemref{thm:difext:item:strict} and \itemref{thm:difext:item:lip-Loc-Glob}) be replaced by the condition $\dim_N F < \infty$? \end{question} \begin{remark} Let $F$ be a~closed subset of $\rn$ and let $Y$ be a~normed linear space. The reader might wonder if, for every function $f \fcolon F \to Y$ differentiable at every point of $F$ (with respect to $F$), there is a differentiable extension $\bar f\fcolon \rn \to Y$. The answer depends on the quality of $F$ and on the kind of differentiability under consideration: \myParBeforeItems \begin{enumerate}[a\textup)] \item A condition on $F$ that assures the existence of a~differentiable extension for every differentiable function $f$ on $F$ can be found in \cite[Corollary~4.3]{KZ}. It originates from \cite[Theorem~4~(ii)]{ALP}; in both papers, it was formulated with real-valued functions in mind only but it works in the vector-valued case too.
Indeed, for $F$ satisfying this condition, the relative derivative $L:=f^\prime$ is always a~Baire one function on $F$ by \cite[Proposition~3~(ii), Theorem~4~(ii)]{ALP} (their proofs work for vector-valued functions as well). For $F$ satisfying the condition, the extension $\bar f$ can be obtained by Theorem~\ref{thm:difext}. \item However, this extension does not necessarily exist for a~general set $F$: \cite[Theorem~5]{ALP} gives an example of a~compact set $F\subset \R^2$ and a~(uniquely) differentiable function $f$ on $F$ such that $f'$ is not Baire one on $F$ and $f$ therefore cannot be extended to a~differentiable function on $\R^2$. \item A weaker condition on $F$ (namely that $\Span \Tan (F,x) = \rn$ for every $x\in \der F$) is sufficient if we ask for a~differentiable extension of a~{\em strictly} differentiable function (see \cite[Proposition~4.10]{KZ}). This result for real functions can be generalized to vector-valued functions. For more details, see Proposition~\ref{prop:KZ410}, where the condition $\Span \Tan (F,x) = \rn$ is relaxed to $\Span \Ptg (F,x) = \rn$. \item For a~positive result on $C^1$ extensions of strictly differentiable functions see \cite[Corollary~4.7]{KZ}, which can be extended to vector-valued functions with the use of Theorem~\ref{thm:difext}, otherwise following the proofs from~\cite{KZ}. Again, a~condition has to be imposed on the set $F$, see \cite[Example~4.14]{KZ}. \item Proposition~\ref{prop:KZstrictstrict} contains another result on $C^1$ extensions of strictly differentiable functions, under the assumption of continuity of the derivative and, again, a~condition on the set $F$. \item For more on this topic see \cite{KolarClanekIII}.
\end{enumerate} \end{remark} \begin{remark} Note that all Hilbert spaces and spaces $c_0(\Gamma)$ with arbitrary set $\Gamma$ admit $C^\infty$-smooth partition of unity, all reflexive spaces and spaces with a~separable dual admit $C^1$-smooth partition of unity, and $L_p$ spaces with $p\in[1,\infty)$ admit partition of unity of the same smoothness order as their canonical norms. The existence of a~$C^p$-smooth bump implies the existence of $C^p$-smooth partitions of unity in all separable spaces as well as in all reflexive spaces. We refer the reader to Remark~\ref{rem:part} and Remark~\ref{rem:bumps} for more information concerning the existence of partitions of unity in various spaces. \end{remark} \smallbreak The key ingredient in our proof of Theorem \ref{thm:difext} is our recently developed extension theorem for vector-valued Baire one functions (cf. \cite[Theorem 6]{ALP} and \cite[Theorem 2.4]{KZ}, where special cases of this result were proved): \begin{theorem}\cite[Theorem 1.1]{KocKolarB1}\label{thm:B1ext} Let $(X,\ddd)$ be a~metric space, $F\subset X$ a~closed set, $Z$ a~normed linear space and $L\fcolon F\to Z$ a~Baire one function. Then there exists a~continuous function $A\fcolon (X\setminus F) \to Z$ such that \begin{equation}\label{eq:ALP3} \lim_ {\substack{x\to a\\ x\in X\setminus F}} \left\|A(x) - L(a)\right\|_Z \frac{\dist (x,F)}{\ddd(x,a)} = 0 \tag{NT} \end{equation} for every $a\in \boundary F$, \begin{equation}\label{eq:continuity} \lim_ {\substack{x\to a\\ x\in X\setminus F}} A(x) = L(a) \tag{C} \end{equation} whenever $a\in\boundary F$ and $L$ is continuous at $a$, and \begin{equation}\label{eq:boundedness} \text{$A$ is bounded on $B(a,r) \setminus F$} \tag{B} \end{equation} whenever $a\in F$, $r\in (0,\infty)\cup\{\infty\}$ and $L$ is bounded on $B(a, 12 r) \cap F$. \end{theorem} Besides this introductory section, our paper consists of three more sections.
Section \ref{sec:prelim} contains the definitions of~the notions related to derivatives and partitions of unity as well as an auxiliary proposition about relativizations of partitions of unity to open subsets. Section \ref{sec:infinf} is devoted to extensions of vector-valued functions from closed subsets of {\em infinite} dimensional spaces, whereas Section \ref{sec:fininf} extends the proofs of the previous section to obtain stronger results for {\em finite} dimensional domains. There is also a~technical Appendix~\ref{apen:partition} on partitions of unity and Appendix~\ref{apen:KZ} containing a~small elaboration of a~theme from \cite[Section~4]{KZ}. \section{Basic notions and preliminaries}\label{sec:prelim} \noindent Let $X$ and $Y$ be normed linear spaces. We denote by $\mathcal L(X,Y)$ the set of all bounded linear operators from $X$ to $Y$. For $u\in\mathcal L(X,Y)$, the number $$\left\|u\right\| _{\mathcal L(X,Y)} :=\inf\left\{K>0\setcolon \left\|u(x)\right\|_Y\leq K\left\|x\right\|_X \text{ for every } x\in X\right\}$$ is called the \textit{norm} of the linear operator $u$. The set $\mathcal L(X,Y)$ equipped with the norm $\left\|\cdot\right\|_{\mathcal L(X,Y)}$ forms a~normed linear space, which is complete provided $Y$ is complete. \begin{definition}\label{D:Rel_der} Let $X$ and $Y$ be normed linear spaces, $A\subset X$ an arbitrary set, $f\fcolon A\to Y$ a~function and $a\in A$. \begin{itemize} \item[(i)] A bounded linear operator $\operFa\fcolon X\to Y$ is called a~\textit{relative Fr{\'e}chet derivative of $f$ at $a$} (with respect to $A$) if either $a$ is an isolated point of $A$, or \begin{equation} \lim\limits_{\substack{x\to a\\x\in A}}\frac{\left\|f(x)-f(a)-\operFa(x-a)\right\|_Y}{\left\|x-a\right\|_X}=0.
\end{equation} \end{itemize} \begin{itemize} \item[(ii)] A bounded linear operator $\operSa\fcolon X\to Y$ is called a~\textit{relative strict derivative of $f$ at $a$} (with respect to $A$) if either $a$ is an~isolated point of $A$, or \begin{equation}\label{eq:strict-def} \lim\limits_{\substack{y\to a\\x\to a\\x,y\in A,\,x\neq y}} \frac{\left\|f(y)-f(x)-\operSa (y-x)\right\|_Y}{\left\|y-x\right\|_X}=0 \quad\text{(with $x=a$ or $y=a$ allowed)}. \end{equation} \end{itemize} \begin{itemize} \item[(iii)] We say that $L\fcolon A\to\mathcal L(X,Y)$ is a~\textit{relative Fr{\'e}chet} (resp.\ \textit{strict}) \textit{derivative of $f$} (on $A$) if $L(a)$ is a~relative Fr{\'e}chet (resp.\ strict) derivative of $f$ at $a$ (with respect to $A$) for each $a\in A$. \end{itemize} \end{definition} \begin{remark} \myParBeforeItems \begin{enumerate}[(a)] \item At interior points of $A$, the notion of relative Fr{\'e}chet (resp.\ strict) derivative agrees with the classical notion of the Fr{\'e}chet (resp.\ strict) derivative. In particular, if it exists then it is uniquely determined. \item Classically, Fr{\'e}chet differentiability of functions between normed linear spaces is introduced only for functions defined on open sets. \item A relative strict derivative of $f$ is clearly also a~relative Fr{\'e}chet derivative of $f$. \item If we consider $X=\rn$ and a~function $f\fcolon A\subset\rn\to Y$ in Definition \ref{D:Rel_der}, then we can omit the assumption of boundedness for operators $\operFa$ and $\operSa $, since every linear function defined on a~finite-dimensional normed linear space is bounded automatically. \item If $X=\rn$ then $\operFa$ is called a~{\em relative derivative of $f$ at $a$} or, if $a$ is an interior point of $A$, the~{\em derivative of $f$ at $a$}. \end{enumerate} \end{remark} \begin{definition}\label{D:P-Lip} Let $X$ and $Y$ be normed linear spaces, $A\subset X$ an arbitrary set, $f\fcolon A\to Y$ a~function and $a\in A$. 
If $\alpha\in (0,1]$, we say that \textit{$f$ is $\alpha$-H\"{o}lder continuous at $a$} (with respect to $A$) if either $a$ is an isolated point of $A$, or $$\limsup\limits_{\substack{x\to a\\x\in A}}\frac{\left\|f(x)-f(a)\right\|_Y}{\left\|x-a\right\|^{\alpha}_X}<\infty.$$ We say that \textit{$f$ is Lipschitz at $a$} (with respect to $A$) if it is $1$-H\"{o}lder continuous at $a$ (with respect to $A$), i.e., either $a$ is an isolated point of $A$, or $$\limsup\limits_{\substack{x\to a\\x\in A}}\frac{\left\|f(x)-f(a)\right\|_Y}{\left\|x-a\right\|_X}<\infty.$$ As usual, $f$ is {\em point-wise $\alpha$-H\"older} ({\em point-wise Lipschitz}) if $f$ is $\alpha$-H\"older continuous (Lipschitz) at every $a\in A$. Similarly, $f$ is {\em locally $\alpha$-H\"older} ({\em locally Lipschitz}) if it is $\alpha$-H\"older (Lipschitz) (in the classical sense) in a neighborhood of every $a\in A$. \end{definition} If $U=X$ then the following definitions agree with the usual notions (see \cite[16.1., p.~165]{KM} or \cite[p.~304]{HHZ}). If $U\subset X$ is a general open set, we essentially consider the restrictions to $U$. \begin{definition}\label{def:rozklad} Let $X$ be a~metric space, $\F$ a~class of functions on $X$ and $U\subset X$ an open set. A {\em locally finite partition of unity in $U$} (shortly a~{\em partition of unity in $U$}) is a~collection $\{\psi_\gamma\}_{\gamma\in\Gamma}$ of real-valued functions $\psi_\gamma$ on $X$ such that $\sum_{\gamma\in\Gamma} \psi_\gamma(x) = 1$ for every $x\in U$ and, for every $y\in U$, there is a~neighborhood $V_y$ of $y$ such that all but a~finite number of $\psi_\gamma$ vanish on $V_y$. If $\psi_\gamma \in \F$ for every $\gamma\in\Gamma$, we talk about an {\em $\F$-partition of unity}. If $\F$ is not specified, usually the continuous functions are assumed.
We say that a~(locally finite) partition of unity $\{\psi_\gamma\}_{\gamma\in\Gamma}$ in $U$ is {\em subordinated to} an open cover $\mathcal U$ of $U$ if for every $\gamma\in\Gamma$ there is $U_\gamma\in \mathcal U$ such that $\spt (\psi_\gamma) \subset U_\gamma$, where $\spt (\psi_\gamma) = \overline {\{x\in X\setcolon \psi_\gamma(x) \neq 0\}}$. We say that $U$ {\em admits $\F$-partition of unity} if for every open cover $\mathcal U$ of $U$ there is a~locally finite $\F$-partition of unity $\{\psi_\gamma\}_{\gamma\in\Gamma}$ in $U$ subordinated to $\mathcal U$. \end{definition} \begin{lemma}\label{l:XbezF}\label{l:XbezFjakoDefF} Let $X$ be a~normed linear space. Let $\F$ be \begin{enumerate}[\textup\bgroup (a)\egroup] \item the class of all continuous functions on $X$, or \item the class of all continuous functions on $X$ that are $p_1$-times G\^ateaux differentiable for some $p_1\in\N\cup\{\infty\}$, or \item the class of all $p_2$-times Fr\'echet differentiable functions on $X$ for some $p_2\in\N\cup\{\infty\}$, or \item the class of all $C^{p_3}$-smooth functions on $X$ for some $p_3\in\N\cup\{\infty\}$, or \item the class of all point-wise $\alpha_1$-H\"older continuous functions on $X$ for some $\alpha_1\in (0,1]$, or \item the class of all locally $\alpha_2$-H\"older continuous functions on $X$ for some $\alpha_2\in (0,1]$, in particular \\\vspace\itemsep the class of all locally Lipschitz continuous functions on $X$, or \item the intersection of two or several of the above classes \textup(for some $p_1$, $p_2$, $p_3$, $\alpha_1$, $\alpha_2$\textup). \end{enumerate} Let $\F^+$ be the class of all non-negative functions from $\F$. If $X$ admits $\F$-partition of unity, then every open set $U\subset X$ admits $\F^+$-partition of unity. \end{lemma} \begin{proof} We get immediately that $X$ admits $\F^+$-partition of unity.
Indeed, given a~locally finite partition of unity $\{\psi_\gamma\}_{\gamma\in\Gamma}\subset \F$, we put $\widetilde \psi_\gamma = \psi_\gamma^2 / \sum_{\beta\in\Gamma} \psi_\beta^2$ for every $\gamma\in\Gamma$. Then $\sum_{\gamma\in\Gamma} \widetilde \psi_\gamma = 1$ and, for every $\gamma\in\Gamma$, $\widetilde \psi_\gamma \in \F^+$ and $\spt \widetilde \psi_\gamma = \spt \psi_\gamma$. Let $U\subset X$ be an arbitrary open set and let $\mathcal U$ be an open cover of $U$. Set $F=X\setminus U$. Define \[ d_n = \begin{cases} 1/n, & n \in \N, \\ \infty, & n \le 0. \end{cases} \] Fix $n\in \N $. Let \begin{align*} U_n &= \{ x \in X \setcolon d_n < \dist(x, F) < d_{n-3} \}, \\ F_n &= \{ x \in X \setcolon d_{n-1} \le \dist(x, F) \le d_{n-2} \}, \\ \mathcal U _ n &= \{ U_n \cap G \setcolon G \in \mathcal U \} \cup \{ X \setminus F_n \} . \end{align*} Then $\mathcal U_n$ is an open cover of $X$. Let $\{\phi_{n,\gamma}\}_{\gamma \in \AAAA_n}$ be an $\F^+$-partition of unity subordinated to $\mathcal U_n$. Let $\BBBB_n = \{ \gamma \in \AAAA_n \setcolon \spt \phi_{n,\gamma} \not\subset X\setminus F_n \}$. Then $\{\phi_{n,\gamma}\}_{\gamma \in \BBBB_n}$ is subordinated to $\mathcal U$, $\sum_{\gamma \in \BBBB_n} \phi_{n,\gamma} (x) = 1$ for every $x \in F_n$ and $\spt \phi_{n,\gamma} \subset U_n$ for every $\gamma \in \BBBB_n$. The family $\{ \phi_{n,\gamma} \setcolon n\in \N, \gamma \in \BBBB_n \}$ is subordinated to $\mathcal U$ and locally finite in $U$. Every $x \in U$ belongs to one or two of the sets $F_n$, and to at most three of the sets $U_n$. Thus \begin{equation}\label{eq:wdef} 1\le w(x) := \sum_{ n\in \N, \gamma \in \BBBB_n } \phi_{n,\gamma} (x) \le 3 \end{equation} for every $x \in U$. For every $n\in\N$ and $\gamma\in\BBBB_n$, let $\psi_{n,\gamma} (x) = \phi_{n,\gamma} (x)/ w(x)$ for $x\in U$ and note again that the sum in \eqref{eq:wdef} is finite in a~neighborhood of every point of $U\supset U_n \supset \spt \phi_{n,\gamma}$.
For every $n\in\N$ and $\gamma\in\BBBB_n$, extend $\psi_{n,\gamma}$ by setting $\psi_{n,\gamma}(x) = 0$ for $x\in X\setminus U$. Then $\{ \psi_{n,\gamma} \} _ {n\in \N, \gamma \in \BBBB_n } $ is a~locally finite $\F^+$-partition of unity in $U$. \end{proof} \begin{remark}\label{rem:part} \myParBeforeItems \begin{enumerate}[(a)] \item Every metric space admits partition of unity formed by continuous functions. Moreover, it even admits partition of unity formed by Lipschitz continuous (hence locally Lipschitz continuous, as used in Lemma~\ref{l:XbezFjakoDefF}) functions (see \cite[the proof of Theorem]{Fried}). \item If $X$ is a~WCD Banach space, then $X$ admits partition of unity formed by continuous functions that are G\^ateaux differentiable \cite[Corollary~VIII.3.3]{DGZ}. The class of WCD spaces contains all separable and all reflexive Banach spaces \cite[Example~VI.2.2]{DGZ}. \item There is a~Banach space that is not WCD and admits $C^\infty$-smooth partition of unity, e.g.\ JL space of W.B.~Johnson and J.~Lindenstrauss (see \cite[p.~369]{DGZ}, for the definition of JL space, see \cite{JLspace}). \item If $X^*$ is a~WCG Banach space, then $X$ admits $C^1$-smooth partition of unity \cite[Corollary~VIII.3.11]{DGZ}. This includes all reflexive spaces as well as all spaces with a~separable dual. \item If a~Banach space $X$ admits a~LUR norm whose dual norm is also LUR, then $X$ admits $C^1$-smooth partition of unity \cite[Theorem~VIII.3.12~(i)]{DGZ}. Hence, if $K^{(\omega_1)}=\emptyset$, then $C(K)$ admits $C^1$-smooth partition of unity \cite[Corollary~VIII.3.13]{DGZ}. Note that if $K^{(\omega_0)}=\emptyset$, then $C(K)$ admits even $C^\infty$-smooth partition of unity (see \cite{DGZ-1990}). \item $L_p$ spaces with $p\in[1,\infty)$ admit partition of unity of the same smoothness order as their canonical norms, i.e. 
$C^\infty$-smooth partition of unity for $p$ an even integer, $C^{p-1}$-smooth partition of unity for $p$ an odd integer and, if $p$ is not an~integer, $C^{[p]}$-smooth partition of unity, where $[p]$ denotes the integer part of $p$ (see \cite[Corollary~VIII.3.11]{DGZ} and \cite[Theorem~V.1.1]{DGZ}). \item All Hilbert spaces and spaces $c_0(\Gamma)$ with arbitrary set $\Gamma$ admit $C^\infty$-smooth partition of unity \cite[Theorem~2 and Theorem~3]{T} (see also \cite[16.16]{KM}). \end{enumerate} \end{remark} \begin{remark}\label{rem:bumps} Let $p\in\N\cup\{\infty\}$. \myParBeforeItems \begin{enumerate}[(a)] \item It is an open problem whether every Banach space that admits a~$C^p$-smooth bump must also admit $C^p$-smooth partition of unity (see \cite[p.~370, Problem~VIII.1]{DGZ}, \cite[p.~179]{FM} and \cite[p.~172]{KM}). \item The existence of a~$C^p$-smooth bump implies the existence of $C^p$-smooth partitions of unity for example for separable spaces (see \cite{BF} or \cite[p.~360]{DGZ}) and for reflexive spaces \cite[Theorem VIII.3.2]{DGZ}. More generally, it also holds for Banach spaces whose dual is WCG \cite[16.13(4)]{KM}, for WCD Banach spaces \cite[p.~351, Theorem VIII.3.2]{DGZ} (cf.\ also \cite[53.15 and 16.18]{KM}), which includes reflexive spaces and separable spaces as we already noted, and for duals of Asplund spaces \cite[53.15 and 16.18]{KM}. A result on Banach spaces with PRI and $C^p$-smooth partitions of unity can be found in \cite[Corollary 4]{Haydon}. In particular, Banach spaces with ``nice'' (``separable'') PRI and with a~$C^p$-smooth bump function admit $C^p$-smooth partition of unity, see \cite[Remark 3.3]{GTWZ}, \cite[page 369, lines 26--27]{DGZ} and \cite[16.18]{KM}.
\end{enumerate} \end{remark} \section{Vector-valued functions in infinite dimensional domain}\label{sec:infinf} \begin{theorem}\label{thm:infinf} Let $X$, $Y$ be normed linear spaces, $F\subset X$ a~closed set, $f\fcolon F\to Y$ an~arbitrary function and $L\fcolon F\to\mathcal L(X,Y)$ a~function that is Baire one on $F$. Then there exists a~function $\bar{f}\fcolon X\to Y$ such that \begin{enumerate}[\textup\bgroup (i)\egroup] \item\label{thm:infinf:item:ext} $\bar{f}=f$ on $F$, \item\label{thm:infinf:item:cont} if $a\in F$ and $f$ is continuous at $a$ \textup(with respect to $F$\textup), then $\bar{f}$ is continuous at $a$, \item\label{thm:infinf:item:hoelder} if $a\in F$, $\alpha\in (0,1]$ and $f$ is $\alpha$-H\"{o}lder continuous at $a$ \textup(with respect to $F$\textup), then $\bar{f}$ is $\alpha$-H\"{o}lder continuous at $a$; in particular, if $f$ is Lipschitz at $a$ \textup(with respect to $F$\textup), then $\bar{f}$ is Lipschitz at $a$, \item\label{thm:infinf:item:frechet} if $a\in F$ and $L(a)$ is a~relative Fr{\'e}chet derivative of $f$ at $a$ \textup(with respect to $F$\textup), then $(\bar{f})^\prime(a)=L(a)$, \item\label{thm:infinf:item:contcomp} $\bar{f}$ is continuous on $X\setminus F$, \item\label{thm:infinf:item:smoothcomp} if $X$ admits $\mathcal F$-partition of unity where $\mathcal F$ is a~fixed class of functions on $X$ from Lemma~\ref{l:XbezFjakoDefF}, then $\bar{f}|_{X\setminus F}$ is of class~$\mathcal F$.\footnote{\label{foot:restrclass}\relax To provide a~formal definition for \itemref{thm:infinf:item:smoothcomp}, we say that $g|_U$ is of class $\mathcal F$ (on an open set $U$) if for every $x\in U$, there is a~neighborhood $V$ of $x$ such that $g|_V$ is a~restriction of a~function from $\mathcal F$.} \end{enumerate} \end{theorem} \begin{proof} If $F=\emptyset$, the theorem trivially holds. Further suppose that $F$ is nonempty. For every $x\in X$, we set \begin{equation}\label{r(x)} r(x):=\frac{1}{20} \dist(x,F).
\end{equation} Further, for every $x\in X\setminus F$, we choose any point $\widehat x\in F$ such that \begin{equation}\label{hat} \left\|x-\widehat x\tinyspaceafterwidehat \right\|_X\leq 2\dist(x,F). \end{equation} If (\ref{thm:infinf:item:smoothcomp}) is under consideration, $X$ admits $\mathcal F$-partition of unity. If this is not the case, it admits at least continuous partition of unity (since $X$ is a~metric space) \bledenWSxviOK and we let $\mathcal F$ be the class of continuous functions on $X$. \eledenWSxvi By Lemma~\ref{l:XbezF}, there exists a~non-negative locally finite $\mathcal F$-partition of unity $\{\phi_\gamma\}_{\gamma\in\Gamma}$ on $X\setminus F$ subordinated to the covering $\{B(x,10r(x))\setcolon x\in X\setminus F\}$. So, in particular, \begin{equation}\label{Pr0} \{\phi_\gamma\}_{\gamma\in\Gamma}\subset{\mathcal F}, \end{equation} \begin{equation}\label{Pr1} 0\leq\phi_\gamma {\rm\ for\ every\ }\gamma\in\Gamma, \end{equation} \begin{equation}\label{Pr2} \sum\limits_{\gamma\in\Gamma}\phi_\gamma(x)=1 {\rm\ for\ every\ } x\in X\setminus F \end{equation} and for every $\gamma\in\Gamma$ there is $x_\gamma\in X\setminus F$ such that \begin{equation}\label{Pr3} \spt\phi_\gamma\subset B(x_\gamma,10r(x_\gamma)). \end{equation} For every $x\in X\setminus F$, we denote \begin{equation}\label{Sx1} \ixsetGx :=\{\markTC \gamma\in \Gamma \setcolon B(x,10r(x))\cap B(x_\gamma,10r(x_\gamma))\neq\emptyset\}. \end{equation} Clearly, if $\markTD \gamma\in \Gamma \setminus \ixsetGx$ then $\phi_\gamma(x)=0$ by \eqref{Pr3}. Moreover, if $\markTA \gamma\in \ixsetGx$ \markTD then \[ \left|r(x)-r(x_\gamma)\right| \leq \Lip(r) \left\|x-x_\gamma\right\|_X = \frac{1}{20}\left\|x-x_\gamma\right\|_X \overset{ \eqref{Sx1} }{ \leq } \frac{1}{20}\left(10r(x)+10r(x_\gamma)\right) . \] \markTD Hence \begin{equation}\label{Pr4} \frac{1}{3}\leq\frac{r(x)}{r(x_\gamma)}\leq 3\qquad {\rm{whenever}}\ \markTD \gamma\in \ixsetGx.
\end{equation} \smallbreak Let $A\fcolon (X\setminus F)\to\mathcal L(X,Y)$ be the function constructed in Theorem \ref{thm:B1ext} (with $Z=\mathcal L(X,Y)$). Define $\bar f\fcolon X\to Y$ by \begin{equation}\label{Ext_of_f} \bar f(x):= \begin{cases} \, f(x) & \text{if $x\in F$,}\\ \, \sum\limits_{\gamma\in\Gamma}\phi_\gamma(x)\left[f(\widehat{x_\gamma})+ A(x_\gamma)(x-\widehat{x_\gamma})\right] & \text{if $x\in X\setminus F$}. \end{cases} \end{equation} \smallbreak Obviously, $\bar{f}=f$ on $F$, which proves (\ref{thm:infinf:item:ext}). Since linear mappings are $C^\infty$-smooth and the partition of unity $\{\phi_\gamma\}_{\gamma\in\Gamma}$ is locally finite, we easily conclude using~\eqref{Pr0} that $\bar f | _{X\setminus F}$ is of class $\mathcal F $. Assertions (\ref{thm:infinf:item:smoothcomp}), if under consideration, and (\ref{thm:infinf:item:contcomp}) are therefore fulfilled. \smallbreak Let $a\in F$. For arbitrary $x\in X\setminus F$ and $\markTA \gamma\in \ixsetGx$, by \eqref{r(x)}, \eqref{hat}, \eqref{Sx1} and \eqref{Pr4}, we get \begin{equation}\label{E_1} \left\|x_\gamma-x\right\|_X \leq 10r(x_\gamma)+10r(x) \leq 40r(x) = 2\dist(x,F), \end{equation} and likewise with $x_\gamma$ in the place of~$x$ on the right-hand side \begin{equation} \left\|x_\gamma-x\right\|_X\leq 10r(x_\gamma)+10r(x)\leq 40r(x_\gamma) = 2\dist(x_\gamma,F), \end{equation} \begin{equation} \left\|\,\widehat x-x_\gamma\right\|_X\leq\left\|\,\widehat x-x\right\|_X+\left\|x-x_\gamma\right\|_X \leq 2\dist(x,F)+2\dist(x,F)=4\dist(x,F), \end{equation} $$ \left\|\widehat{x_\gamma}-x_\gamma\right\|_X \leq 2 \dist(x_\gamma, F) \leq 2\left\|\,\widehat x-x_\gamma\right\|_X \leq 8\dist(x,F),$$ \begin{equation}\label{E_3} \left\|\widehat{x_\gamma}-\widehat x\tinyspaceafterwidehat \right\|_X\leq\left\|\widehat{x_\gamma}-x_\gamma\right\|_X +\left\|x_\gamma-\widehat x
\tinyspaceafterwidehat \right\|_X \leq 8\dist(x,F)+4\dist(x,F)=12\dist(x,F), \end{equation} \begin{equation}\label{E_4} \left\|\widehat{x_\gamma}-x\right\|_X\leq\left\|\widehat{x_\gamma}-x_\gamma\right\|_X +\left\|x_\gamma-x\right\|_X \leq 8\dist(x,F)+2\dist(x,F)=10\dist(x,F), \end{equation} and likewise \begin{equation}\label{E_4.1} \left\|\widehat{x_\gamma}-x\right\|_X\leq\left\|\widehat{x_\gamma}-x_\gamma\right\|_X +\left\|x_\gamma-x\right\|_X \leq 2\dist(x_\gamma,F)+2\dist(x_\gamma,F)=4\dist(x_\gamma,F). \end{equation} Since $\dist(x,F)\leq\left\|x-a\right\|_X$, by \eqref{hat}, \eqref{E_1} and \eqref{E_4}, we obtain \begin{equation}\label{E_1_a} \left\|x_\gamma-a\right\|_X\leq\left\|x_\gamma-x\right\|_X+\left\|x-a\right\|_X \leq 3\left\|x-a\right\|_X, \end{equation} \begin{equation}\label{E_2_a} \left\|\widehat{x_\gamma}-a\right\|_X\leq\left\|\widehat{x_\gamma}-x\right\|_X +\left\|x-a\right\|_X\leq 11\left\|x-a\right\|_X, \end{equation} \begin{equation}\label{E_3_a} \left\|\,\widehat x-a\right\|_X\leq\left\|\,\widehat x-x\right\|_X+\left\|x-a\right\|_X \leq 3\left\|x-a\right\|_X. \end{equation} \smallbreak Since assertions (\ref{thm:infinf:item:cont}), (\ref{thm:infinf:item:hoelder}) and (\ref{thm:infinf:item:frechet}) are clearly satisfied for $a\in\interior(F)$, we will further assume that $a\in\boundary F$.
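As a~sanity check of formula \eqref{Ext_of_f} (an~illustration only, not needed below), consider the simplest case $F=\{a\}$: then necessarily $\widehat{x_\gamma}=a$ for every $\gamma\in\Gamma$, and if moreover one may take $A\equiv L(a)$ (for instance when $L$ is constant), formula \eqref{Ext_of_f} collapses by \eqref{Pr2} to the affine extension \[ \bar f(x)=\sum_{\gamma\in\Gamma}\phi_\gamma(x)\left[f(a)+L(a)(x-a)\right]=f(a)+L(a)(x-a), \qquad x\in X\setminus F, \] which is continuous on $X$ and has Fr\'echet derivative $L(a)$ everywhere.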
If $x\in X\setminus F$, by \eqref{Pr1}, \eqref{Pr2}, \eqref{Pr3}, \eqref{Sx1} and \eqref{Ext_of_f}, we obtain \begin{align} \left\|\bar f(x)-\bar f(a)\right\|_Y &=\left\|\sum\limits_{\gamma\in\Gamma}\phi_\gamma(x)\left[f(\widehat{x_\gamma}) +A(x_\gamma)(x-\widehat{x_\gamma})-f(a)\right]\right\|_Y\nonumber\\ &=\left\|\sum\limits_{\gamma\in\Gamma}\phi_\gamma(x)\left[f(\widehat{x_\gamma})-f(a) +L(a)(x-\widehat{x_\gamma})+(A(x_\gamma)-L(a))(x-\widehat{x_\gamma})\right]\right\|_Y\nonumber\\ &\leq\sum\limits_{\markTA \gamma\in \ixsetGx}\phi_\gamma(x)\left\|f(\widehat{x_\gamma})-f(a)\right\|_Y +\sum\limits_{\markTA \gamma\in \ixsetGx}\phi_\gamma(x)\left\|L(a)\right\|_{\mathcal L(X,Y)} \left\|x-\widehat{x_\gamma}\right\|_X\label{Est}\\ & \qquad+\sum\limits_{\markTA \gamma\in \ixsetGx}\phi_\gamma(x)\left\|A(x_\gamma)-L(a)\right\|_{\mathcal L(X,Y)} \dist(x_\gamma,F)\frac{\left\|x-\widehat{x_\gamma}\right\|_X}{\dist(x_\gamma,F)}.\nonumber \end{align} \smallbreak First suppose that $f$ is continuous at $a$ (with respect to $F$) and fix $\eps_1>0$. There exists $\delta_1>0$ such that \begin{equation}\label{Cont} \left\|f(z)-f(a)\right\|_Y\leq\eps_1\qquad \text{for\ every\ }z\in F, \ \left\|z-a\right\|_X<\delta_1. \end{equation} By \eqref{eq:ALP3} from Theorem~\ref{thm:B1ext}, there exists $\delta_2>0$ such that \begin{equation}\label{Bd_beh1} \left\|A(t)-L(a)\right\|_{\mathcal L(X,Y)}\dist(t,F)<\eps_1 \qquad \text{for\ every\ }t\in X\setminus F,\ \left\|t-a\right\|_X<\delta_2. \end{equation} Let $x\in X\setminus F$ be arbitrary with $\left\|x-a\right\|_X<\min\left(\eps_1,\frac{\delta_1}{11},\frac{\delta_2}{3}\right)$. Then we deduce from \eqref{E_1_a} and \eqref{E_2_a} that $\left\|x_\gamma-a\right\|_X<\delta_2$ and $\left\|\widehat{x_\gamma}-a\right\|_X<\delta_1$ for every $\markTA \gamma\in \ixsetGx$. 
Thus by \eqref{Pr1}, \eqref{Pr2}, \eqref{E_4}, \eqref{E_4.1}, \eqref{Est}, \eqref{Cont}, \eqref{Bd_beh1} and $\dist(x,F)\leq\left\|x-a\right\|_X$, we obtain \begin{align} \left\|\bar f(x)-\bar f(a)\right\|_Y&\leq\eps_1 +10\left\|L(a)\right\|_{\mathcal L(X,Y)}\eps_1+4\,\eps_1 =\left(5+10\left\|L(a)\right\|_{\mathcal L(X,Y)}\right)\eps_1.\nonumber \end{align} Since $\eps_1>0$ was arbitrary, $\bar f$ is continuous at $a$ and thus (\ref{thm:infinf:item:cont}) is proved. \smallbreak Next, suppose that $\alpha\in (0,1]$ and $f$ is $\alpha$-H\"older continuous at $a$ (with respect to $F$). Then there exist $K>0$ and $\delta_3>0$ such that \begin{equation}\label{Hoelder} \left\|f(z)-f(a)\right\|_Y\leq K\left\|z-a\right\|^\alpha_X\qquad \text{for\ every\ }z\in F, \ \left\|z-a\right\|_X<\delta_3. \end{equation} By \eqref{eq:ALP3} from Theorem~\ref{thm:B1ext}, there exists $\delta_4>0$ such that \begin{equation}\label{Bd_beh2} \left\|A(t)-L(a)\right\|_{\mathcal L(X,Y)}\dist(t,F)<\left\|t-a\right\|_X \qquad \text{for\ every\ }t\in X\setminus F,\ \left\|t-a\right\|_X<\delta_4. \end{equation} Let $x\in X\setminus F$ be such that $\left\|x-a\right\|_X<\min\left(\frac{\delta_3}{11},\frac{\delta_4}{3},1\right)$. Then, for every $\markTA \gamma\in \ixsetGx$, using \eqref{E_1_a} and \eqref{E_2_a} we get $\left\|x_\gamma-a\right\|_X<\delta_4$ and $\left\|\widehat{x_\gamma}-a\right\|_X<\delta_3$.
Similarly as above, by \eqref{Pr1}, \eqref{Pr2}, \eqref{E_4}, \eqref{E_4.1}, \eqref{E_1_a}, \eqref{E_2_a}, \eqref{Est}, \eqref{Hoelder}, \eqref{Bd_beh2} and $\dist(x,F)\leq\left\|x-a\right\|_X$, we get \begin{align} \left\|\bar f(x)-\bar f(a)\right\|_Y&\leq K\sum\limits_{\markTA \gamma\in \ixsetGx}\phi_\gamma(x)\left\|\widehat{x_\gamma}-a\right\|^{\alpha}_X +10\left\|L(a)\right\|_{\mathcal L(X,Y)}\left\|x-a\right\|_X\\ & \qquad +\ 4\sum\limits_{\markTA \gamma\in \ixsetGx}\phi_\gamma(x)\left\|x_\gamma-a\right\|_X\nonumber\\ &\leq\left(11^{\alpha}K+10\left\|L(a)\right\|_{\mathcal L(X,Y)}+12\right)\left\|x-a\right\|^{\alpha}_X,\nonumber \end{align} since $\left\|x-a\right\|_X\leq\left\|x-a\right\|^{\alpha}_X$ as $\left\|x-a\right\|_X<1$ and $\alpha\in (0,1]$. Hence $\bar f$ is $\alpha$-H\"older continuous at~$a$ and (\ref{thm:infinf:item:hoelder}) is proved. \smallbreak Finally, we prove (\ref{thm:infinf:item:frechet}). Fix $\eps_2>0$. Since $L(a)$ is a~Fr\'echet derivative of $f$ at $a$ (with respect to $F$), there exists $\delta_5>0$ such that \begin{equation}\label{Der} \left\|f(z)-f(a)-L(a)(z-a)\right\|_Y\leq\eps_2\left\|z-a\right\|_X\qquad \text{for\ every\ } z\in F,\ \left\|z-a\right\|_X<\delta_5. \end{equation} By \eqref{eq:ALP3} from Theorem \ref{thm:B1ext}, there exists $\delta_{6}>0$ such that \begin{equation}\label{Bd_beh} \left\|A(t)-L(a)\right\|_{\mathcal L(X,Y)}\frac{\dist(t,F)}{\left\|t-a\right\|_X}<\eps_2\qquad \text{for\ every\ }t\in X\setminus F,\ \left\|t-a\right\|_X<\delta_{6}. \end{equation} Let $x\in X\setminus F$ be arbitrary satisfying $\left\|x-a\right\|_X<\min\left( \frac{\delta_5}{11} , \frac{\delta_{6}}{3} \right) $. Then, for every $\markTA \gamma\in \ixsetGx$, we get $\left\|x_\gamma-a\right\|_X<\delta_{6}$ and $\left\|\widehat{x_\gamma}-a\right\|_X<\delta_5$ by \eqref{E_1_a} and \eqref{E_2_a}. 
Thus by \eqref{Pr1}, \eqref{Pr2}, \eqref{Pr3}, \eqref{Sx1}, \eqref{Ext_of_f}, \eqref{E_4.1}, \eqref{E_1_a}, \eqref{E_2_a}, \eqref{Der} and \eqref{Bd_beh}, we obtain \begin{align*} \left\|\bar f(x)-\bar f(a)-L(a)(x-a)\right\|_Y & =\left\|\sum\limits_{\gamma\in\Gamma}\phi_\gamma(x) \left[f(\widehat{x_\gamma})+A(x_\gamma)(x-\widehat{x_\gamma})-f(a) -L(a)(x-a)\right]\right\|_Y\nonumber\\ \qquad\qquad&=\left\|\sum\limits_{\gamma\in\Gamma} \phi_\gamma(x)\left[f(\widehat{x_\gamma})-f(a)-L(a)(\widehat{x_\gamma}-a) +(A(x_\gamma)-L(a))(x-\widehat{x_\gamma})\right]\right\|_Y\nonumber\\ &\leq\sum\limits_{\markTA \gamma\in \ixsetGx}\phi_\gamma(x) \left\|f(\widehat{x_\gamma})-f(a)-L(a)(\widehat{x_\gamma}-a)\right\|_Y\nonumber\\ &\qquad+\sum\limits_{\markTA \gamma\in \ixsetGx}\phi_\gamma(x)\left\|A(x_\gamma)-L(a)\right\|_{\mathcal L(X,Y)} \left\|x-\widehat{x_\gamma}\right\|_X\nonumber\\ &\leq\sum\limits_{\markTA \gamma\in \ixsetGx}\phi_\gamma(x)\,\eps_2\left \|\widehat{x_\gamma}-a\right\|_X\nonumber\\ &\qquad+\sum\limits_{\markTA \gamma\in \ixsetGx}\phi_\gamma(x)\left\|A(x_\gamma)-L(a)\right\|_{\mathcal L(X,Y)} \frac{\dist(x_\gamma,F)}{\left\|x_\gamma-a\right\|_X}\frac{\left\|x-\widehat{x_\gamma}\right\|_X} {\dist(x_\gamma,F)}\left\|x_\gamma-a\right\|_X\nonumber\\ &\leq11\,\eps_2\left\|x-a\right\|_X+12\,\eps_2\left\|x-a\right\|_X=23\,\eps_2\left\|x-a\right\|_X. \end{align*} Since $\eps_2>0$ was arbitrary, we finally get $$\lim\limits_{\substack{ x\to a \\ x\in X\setminus F }}\frac{\left\|\bar f(x)-\bar f(a)-L(a)(x-a)\right\|_Y}{\left\|x-a\right\|_X}=0.$$ Since $L(a)$ is a~Fr\'echet derivative of $\bar f$ at $a$ with respect to $F$, we deduce $(\bar{f})^{\prime}(a)=L(a)$, which proves (\ref{thm:infinf:item:frechet}). \end{proof} \bledenWSxviOK By a~straightforward application of Theorem~\ref{thm:infinf}, we obtain the following generalization of \cite[Theorem~7]{ALP} for infinite-dimensional domains and vector-valued functions. 
\eledenWSxvi \begin{corollary}\label{ALP_do_Y} Let $X$ be a~normed linear space that admits Fr\'echet differentiable partition of unity, $F\subset X$ a~nonempty closed set, $Y$~a~normed linear space, $f\fcolon F\to Y$ an arbitrary function and $L\fcolon F\to\mathcal L(X,Y)$ a~relative Fr\'echet derivative of $f$ (with respect to $F$). Then $L$ is Baire one on $F$ if and only if there exists a~function $\bar{f}\fcolon X\to Y$ such that $\bar{f}$ extends $f$, $\bar{f}$ is Fr\'echet differentiable everywhere on $X$ and $(\bar{f})^{\prime}=L$ on~$F$. \end{corollary} The following proposition shows that the assumption on partitions of unity cannot be removed from (\ref{thm:infinf:item:smoothcomp}). The remaining statements of Theorem~\ref{thm:infinf} require only continuous partitions of unity which are available in all metric spaces. \begin{proposition}\label{prop:necessary} Let $X$ be a~normed linear space and $\widetilde{p}\in\N\cup\{\infty\}$. The following statements are equivalent: \begin{enumerate}[\textup\bgroup (a)\egroup] \item\label{prop:necessary:item:partition} The space $X$ admits $C^{\widetilde{p}}$-smooth partition of unity or partition of unity formed by continuous functions $\widetilde{p}$-times differentiable in Fr{\'e}chet or G\^ateaux sense. \MKlistopadxvb \item\label{prop:necessary:item:vector-full} For every normed linear space $Y$, a~nonempty closed set $F\subset X$, a~function $f\fcolon F\to Y$ and a~Baire one function $L\fcolon F\to\mathcal L(X,Y)$, there exists a~function $\bar{f}\fcolon X\to Y$ that satisfies the conclusions of Theorem~\ref{thm:infinf} including \blistopadxvH the respective conclusion of (\ref{thm:difext:item:smoothcomp}) with $p=\widetilde{p}$. 
\elistopadxvH \MKlistopadxve \item\label{prop:necessary:item:scalar-minimal} Given any nonempty closed set $F\subset X$ and a~Fr\'echet smooth (or even locally constant) function $f\fcolon F\to \R$, there exists a~function $\bar{f}\fcolon X\to \R$ that satisfies at least the following properties from Theorem~\ref{thm:infinf}: (\ref{thm:infinf:item:ext}), (\ref{thm:infinf:item:cont}) and the respective conclusion of (\ref{thm:difext:item:smoothcomp}) with $p=\widetilde{p}$. \end{enumerate} \end{proposition} \begin{proof} The implication (\ref{prop:necessary:item:vector-full}) $\Rightarrow$ (\ref{prop:necessary:item:scalar-minimal}) is obvious and (\ref{prop:necessary:item:partition}) $\Rightarrow$ (\ref{prop:necessary:item:vector-full}) follows by Theorem~\ref{thm:infinf}. The third implication (\ref{prop:necessary:item:scalar-minimal}) $\Rightarrow$ (\ref{prop:necessary:item:partition}) follows by \cite[Lemma VIII.3.6, (ii) $\Rightarrow$ (i)]{DGZ} (or, more precisely, the proof of it, since the argument does not use the completeness of $X$) as soon as we show that given sets $A\subset W \subset X$, where $A$ is closed and $W$ open, there exists a~$C^{\widetilde{p}}$-smooth function $h \fcolon X \to [0,1]$ such that $A \subset h^{-1} (0,\infty) \subset W$. To do so, assume $A$ and $W$ are as indicated. Set $B=X\setminus W$ and $F=A\cup B$. Let $L(x)=0\in X^*$ for $x\in F$ and $f(x)=1$ for $x \in A$, $f(x)=0$ for $x\in B$. By~(\ref{prop:necessary:item:scalar-minimal}), there exists a~function $\bar f$ that satisfies conclusions (\ref{thm:infinf:item:ext}), (\ref{thm:infinf:item:cont}) and (\ref{thm:infinf:item:smoothcomp}) of Theorem~\ref{thm:infinf} (with $p=\widetilde{p}$). This extension $\bar f$ is not necessarily $\widetilde{p}$-times continuously differentiable on the boundary of $F$.
However, $h(x) := \varphi(\bar f(x))$ satisfies all required properties if $\varphi$ is a~suitable smooth function (e.g., $\varphi \fcolon \R \to [0,1]$ with $\varphi = 0$ on $(-\infty, 1/4]$ and $\varphi=1$ on $[3/4, \infty)$); note that $h^\prime$ vanishes in a~neighborhood of the boundary of $F$ by (\ref{thm:infinf:item:ext}) and (\ref{thm:infinf:item:cont}). \end{proof} \section{Vector-valued functions in finite dimensional domain}\label{sec:fininf} \noindent \bledenxvipoWSOK In this section, the domain space is \eledenxvipoWS the Euclidean space $\rn$ ($n\in\N$). The norm on $\rn$ is denoted by $\left|\cdot\right|$. We identify $\rn$ with its dual space $(\rn)^*$ of all linear functionals on $\rn$. It will be convenient to use the following {\em tensor product} notation. If $\psi\in X^{*}$ and $y\in Y$, then $(y\otimes\psi)(u):=\psi(u)\,y$ for every $u\in X$. Note that $y\otimes\psi\in\mathcal L(X,Y)$. In particular, if $\phi\fcolon \rn\to\R$ is differentiable at $x\in\rn$ and $y\in Y$, then $y\otimes{\phi^\prime(x)}\in\mathcal L(\rn,Y)$ and $(y\otimes{\phi^\prime(x)})(u)=(\phi^\prime(x))(u)\,y\in Y$ for every $u\in\rn$. \bledenxvipoWSOK Hence $y\otimes \phi^\prime(x)$ is the derivative of the vector-valued function $t\mapsto \phi(t)\, y$ at $x$. \eledenxvipoWS The following theorem generalizes the main extension result from \cite{KZ} to the case of vector-valued functions \bledenxvipoWSOK (see~Corollary~\ref{cor:KZ_do_Y}, \eledenxvipoWS compare with \cite[Theorem~3.1]{KZ}). \begin{theorem}\label{thm:fininf} Let $F\subset\rn$ be a~closed set, $Y$~a~normed linear space, $f\fcolon F\to Y$ an~arbitrary function and $L\fcolon F\to\mathcal L(\rn,Y)$ a~function that is Baire one on $F$.
Then there exists a~function $\bar{f}\fcolon \rn\to Y$ such that \begin{enumerate}[\textup\bgroup (i)\egroup] \item\label{thm:fininf:item:ext} $\bar{f}=f$ on $F$, \item\label{thm:fininf:item:cont} if $a\in F$ and $f$ is continuous at $a$ \textup(with respect to $F$\textup), then $\bar{f}$ is continuous at $a$, \item\label{thm:fininf:item:hoelder} if $a\in F$, $\alpha\in (0,1]$ and $f$ is $\alpha$-H\"{o}lder continuous at $a$ \textup(with respect to $F$\textup), then $\bar{f}$ is $\alpha$-H\"{o}lder continuous at $a$; in particular, if $f$ is Lipschitz at $a$ \textup(with respect to $F$\textup), then $\bar{f}$ is Lipschitz at $a$, \item\label{thm:fininf:item:frechet} if $a\in F$ and $L(a)$ is a~relative Fr{\'e}chet derivative of $f$ at $a$ \textup(with respect to $F$\textup), then $(\bar{f})^\prime(a)=L(a)$, \item\label{thm:fininf:item:infsmoothcomp} $\bar{f} | _ {\rn \setminus F} \in C^\infty(\rn\setminus F,Y)$, \item\label{thm:fininf:item:strict} if $a\in F$, $L$ is continuous at $a$ and $L(a)$ is a~relative strict derivative of $f$ at $a$ \textup(with respect to $F$\textup), then the Fr{\'e}chet derivative $(\bar{f})^\prime$ is continuous at $a$ with respect to $(\rn\setminus F)\cup\{a\}$ and $L(a)$ is the strict derivative of $\bar{f}$ at $a$ \textup(with respect to $\rn$\textup), \item\label{thm:fininf:item:lip-Loc-Glob} \bledenxvipoWSOK if $a\in F$, $R>0$, $L$ is bounded on $B(a,R) \cap F$ and $f$ is Lipschitz continuous on $B(a,R) \cap F$, then $\bar{f}$ is Lipschitz continuous on $B(a,r)$ for every $r<R$; if $L$ is bounded on $F$ and $f$ is Lipschitz continuous on $F$, then $\bar{f}$ is Lipschitz continuous on $\rn$. \eledenxvipoWS \end{enumerate} \end{theorem} The strategy of the proof is analogous to the one used in the proof of Theorem \ref{thm:infinf}. Assertions (\ref{thm:fininf:item:ext})-(\ref{thm:fininf:item:infsmoothcomp}) follow directly from Theorem~\ref{thm:infinf} as $\rn$ admits $C^\infty$-smooth partition of unity.
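(Recall the standard construction underlying $C^\infty$-smooth partitions of unity on $\rn$, stated here for the reader's convenience: the bump \[ b(x)=\begin{cases} \exp\left(\frac{1}{|x|^2-1}\right), & |x|<1,\\ 0, & |x|\ge 1, \end{cases} \] is $C^\infty$-smooth on $\rn$, positive exactly on the open unit ball, and vanishes together with all its derivatives on the unit sphere; rescaled and translated copies of $b$ subordinated to a~given cover can then be normalized by their sum, as in the proof of Lemma~\ref{l:XbezF}.)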
To ensure (\ref{thm:fininf:item:strict})-(\ref{thm:fininf:item:lip-Loc-Glob}), we need a~special $C^\infty$-smooth partition of unity in $\rn\setminus F$ that meets several additional requirements analogous to those used in proofs of \cite[Theorem~3.1]{KZ} and the $C^1$ case of Whitney's extension theorem in \cite{EG}, namely \eqref{P1} and \eqref{P7} below. Since we decided to include the preservation of the global Lipschitz continuity (see~\itemref{thm:fininf:item:lip-Loc-Glob}), we had to introduce a~slight change compared to \cite{KZ} and \cite{EG}. \begin{lemma}\label{l:specPart} \bledenWSxviOK There are $C_1, C_2 >1$ depending only on the dimension $n\in \N$ with the following property: Let $F\subset\rn$ be a~nonempty closed set. There \markTD exist $\{x_j\}_{j\in \N}\subset\rn\setminus F$ and $\{\phi_j\}_{j\in\N}\subset C^{\infty}(\rn\setminus F,\R)$ \eledenWSxvi such that, letting \begin{equation} \label{Sx} \ixsetSx :=\{\markTC j \in \N \setcolon B(x,10r(x))\cap B(x_j,10r(x_j))\neq\emptyset\} \end{equation} and \begin{equation} \label{eq:rbezmin} r(x):=\frac{1}{20}\dist(x,F), \end{equation} we have, for every $j\in\N$ and $x\in\rn\setminus F$, \begin{equation}\label{P1} \card(\ixsetSx ) \leq \blistopadxvH C_1, \elistopadxvH \end{equation} \begin{equation}\label{P2} \frac{1}{3}\leq\frac{r(x)}{r(x_j)}\leq 3\qquad {\text{if}}\ \markTD j\in \ixsetSx , \end{equation} \begin{equation}\label{P3} 0\leq\phi_j \end{equation} \begin{equation}\label{P4} \spt\phi_j\subset B(x_j,10r(x_j)), \end{equation} \begin{equation}\label{P5} \sum\limits_{j\in \N}\phi_j(x)=1, \end{equation} \begin{equation}\label{P6} \sum\limits_{j\in \N}{\phi_j}^{\prime}(x)=0 \end{equation} and \begin{equation}\label{P7} |{\phi_j}^{\prime}(x)| \leq \frac{ \blistopadxvH C_2 \elistopadxvH }{r(x)}. \end{equation} \end{lemma} \blistopadxvH The proof of Lemma~\ref{l:specPart} is standard. 
It can be derived from a~very similar statement that is proven in \cite[pp.~245--247]{EG} and summarized in \cite[Step~1 on p.~1031]{KZ}. Statements in the same spirit can also be found in \cite{SingularIntegrals} and \cite[Theorem~2.2]{MdeGuzman}. For the sake of completeness, we prove the lemma in \bunordruhapodruheschXVIOK Appendix~\ref{apen:partition}. \eunordruhapodruheschXVI \begin{proof}[Proof of Theorem~\ref{thm:fininf}.] If $F$ is empty, the theorem trivially holds. Further suppose that $F$ is nonempty. Let $C_1, C_2>1$, $\{x_j\}_{j\in \N}\subset\rn\setminus F$, $\{\phi_j\}_{j\in \N}\subset C^{\infty}(\rn\setminus F,\,\R)$, $\ixsetSx $ and $r(x)$ be as in Lemma~\ref{l:specPart}. For every $x\in\rn\setminus F$, we choose any point $\widehat x\in F$ such that \begin{equation}\label{Dist} \left|x-\widehat x\tinyspaceafterwidehat \right|=\dist(x,F). \end{equation} \smallbreak Let $A\fcolon (\rn\setminus F)\to\mathcal L(\rn,Y)$ be the function constructed in Theorem \ref{thm:B1ext} (with $X=\rn$ and $Z=\mathcal L(\rn,Y)$). Define $\bar f\fcolon \rn\to Y$ by \begin{equation}\label{Ext_of_f*} \bar f(x):= \begin{cases} \, f(x)&\text{if $x\in F$},\\ \, \sum\limits_{j\in \N}\phi_j(x)\left[f(\widehat{x_j}) +A(x_j)(x-\widehat{x_j})\right] &\text{if $x\in\rn\setminus F$}. \end{cases} \end{equation} \smallbreak As the formula for the extended function $\bar{f}$ is the same one as in the proof of Theorem~\ref{thm:infinf} and the partition of unity $\{\phi_j\}_{j\in \N}$ in $\rn\setminus F$ is only a~special case of the partition of unity $\{\phi_\gamma\}_{\gamma\in \Gamma}$ used in the proof of Theorem~\ref{thm:infinf}, assertions (\ref{thm:fininf:item:ext})-(\ref{thm:fininf:item:infsmoothcomp}) follow immediately by applying the proof of Theorem~\ref{thm:infinf} for the special case when $X=\rn$. \medbreak \MKlistopadxvb It remains to prove assertions (\ref{thm:fininf:item:strict})-(\ref{thm:fininf:item:lip-Loc-Glob}). 
We \blistopadxvH need \elistopadxvH some auxiliary estimates and computations. \MKlistopadxve Let $a\in F$. For arbitrary $x\in\rn\setminus F$ and $\markTA j\in \ixsetSx $, by \eqref{P2}, \eqref{Sx}, \eqref{eq:rbezmin} and \eqref{Dist}, we get \begin{equation}\label{Es_1} |x_j-x|\leq 10r(x_j)+10r(x)\leq 40r(x) = 2\dist(x,F), \end{equation} and likewise with $x_j$ in the place of~$x$ on the right-hand side \begin{equation} |x_j-x|\leq 10r(x_j)+10r(x)\leq 40r(x_j) = 2\dist(x_j,F), \end{equation} \begin{equation} |\,\widehat x-x_j|\leq|\,\widehat x-x|+|x-x_j| \leq\dist(x,F)+2\dist(x,F)=3\dist(x,F), \end{equation} $$ |\widehat{x_j}-x_j| = \dist(x_j, F) \leq|\,\widehat x-x_j|\leq 3\dist(x,F), $$ \begin{equation}\label{Es_3} |\widehat{x_j}-\widehat x\tinyspaceafterwidehat |\leq|\widehat{x_j}-x_j| +|x_j-\widehat x\tinyspaceafterwidehat |\leq 3\dist(x,F)+3\dist(x,F)=6\dist(x,F), \end{equation} \begin{equation}\label{Es_4} |\widehat{x_j}-x|\leq|\widehat{x_j}-x_j|+|x_j-x| \leq 3\dist(x,F)+2\dist(x,F)=5\dist(x,F) . \end{equation} Since $\dist(x,F)\leq|x-a|$, by \eqref{Dist}, \eqref{Es_1} and \eqref{Es_4}, we obtain \begin{equation}\label{Es_1_a} |x_j-a|\leq|x_j-x|+|x-a|\leq 3|x-a|, \end{equation} \begin{equation}\label{Es_2_a} |\widehat{x_j}-a|\leq|\widehat{x_j}-x|+|x-a|\leq 6|x-a|, \end{equation} \begin{equation}\label{Es_3_a} |\,\widehat x-a|\leq|\,\widehat x-x|+|x-a|\leq 2|x-a|.
\end{equation} For $x\in\rn\setminus F$, differentiating $\bar{f}$ at $x$, by \eqref{P1}, \eqref{P4}, \eqref{P5}, \eqref{P6}, \eqref{Sx} and \eqref{Ext_of_f*}, we get \begin{align} \nonumber (\bar{f}\,)^\prime(x) &= \sum\limits_{\markTA j\in \ixsetSx } \phi_j(x)A(x_j)+\sum\limits_{\markTA j\in \ixsetSx } \left[f(\widehat{x_j})+A(x_j)(x-\widehat{x_j})\right] \otimes{\phi_j}^\prime(x)\\ &=\sum\limits_{\markTA j\in \ixsetSx }\phi_j(x)L(a)+ \sum\limits_{\markTA j\in \ixsetSx }\phi_j(x)\left[A(x_j)-L(a)\right] \nonumber\\ &\qquad+\sum\limits_{\markTA j\in \ixsetSx } \left[f(\,\widehat x\tinyspaceafterwidehat )-L(a)(\,\widehat x-x)\right]\otimes{\phi_j}^\prime(x) \nonumber\\ &\qquad+\sum\limits_{\markTA j\in \ixsetSx } \left[f(\widehat{x_j})-f(\,\widehat x\tinyspaceafterwidehat )-L(a) (\widehat{x_j}-\widehat x\tinyspaceafterwidehat )\right]\otimes{\phi_j}^\prime(x) \nonumber\\ &\qquad+\sum\limits_{\markTA j\in \ixsetSx } \left[(A(x_j)-L(a))(x-\widehat{x_j})\right]\otimes{\phi_j}^\prime(x) \nonumber\\ &=L(a)+\sum\limits_{\markTA j\in \ixsetSx }\phi_j(x)\left[A(x_j)-L(a)\right] \label{eq:Dfdole} \\ &\qquad+\sum\limits_{\markTA j\in \ixsetSx } \left[f(\widehat{x_j})-f(\,\widehat x\tinyspaceafterwidehat )-L(a) (\widehat{x_j}-\widehat x\tinyspaceafterwidehat )\right]\otimes{\phi_j}^\prime(x) \nonumber\\ &\qquad+\sum\limits_{\markTA j\in \ixsetSx } \left[(A(x_j)-L(a))(x-\widehat{x_j})\right]\otimes{\phi_j}^\prime(x). \nonumber \end{align} \blistopadxvH Now, we direct our attention to assertions \itemref{thm:fininf:item:strict} and \itemref{thm:fininf:item:lip-Loc-Glob}. \begin{claim}\label{claim:strictAndLip} Let the following be defined as above: $F\subset \rn$, $Y$ a~normed linear space, $L\fcolon F\to \mathcal L(\rn,Y)$, $f\fcolon F\to Y$, $\bar f\fcolon \rn \to Y$ and $A\fcolon ( \rn\setminus F ) \to \mathcal L(\rn,Y)$.
Suppose that $a\in F$, $r_1, r_2\in (0,\infty)\cup\{\infty\}$ and $K_1, K_2 \ge 0$ satisfy \begin{align}\label{eq:AnearLa} \left\|A(t)-L(a)\right\|_{\mathcal L(\rn,Y)} & \le K_1 && \text{for\ every\ }t\in\rn\setminus F,\ |t-a|< r_1 \\\noalign{\noindent and} \label{eq:LipOrStrict} \left\|f(z)-f(y)-L(a)(z-y)\right\|_Y & \leq K_2|z-y| && \text{for\ every\ } y,\,z\in F,\ \max\left(|y-a|,|z-a|\right)<r_2 . \end{align} For $x, y \in \rn$, denote \begin{equation}\label{eq:Exy-def} E_{xy}: = \left\| \bar f(y) - \bar f(x) - L(a)(y-x) \right\|_Y = \sup_{\substack{T\in Y^*\\ \left\|T\right\|_{Y^*}\leq 1}} \left| T\left( \bar f(y) - \bar f(x) - L(a)(y-x) \right)\right| . \end{equation} Let $r_3=\min\left( r_1/3, r_2/6 \right)$ and $ K_3= (1+5\cdot 20 C_1 C_2) K_1 + 6\cdot 20 C_1 C_2 K_2 $, where $C_1$, $C_2$ are the constants from Lemma~\ref{l:specPart}. Then \begin{align}\label{eq:DfCont} \left\|(\bar{f}\,)^\prime(x)-L(a)\right\|_{\mathcal L(\rn,Y)} \le K_3 \qquad & \text{ for all $x\in\rn\setminus F$ such that $ |x-a| < r_3 $ } \\\noalign{\noindent and} \label{eq:Exy-est} E_{xy} \le 33 K_3 \, \left | y - x \right | \qquad & \text{ for all $x,\,y \in\rn$ such that $ \max\left(|x-a|,|y-a|\right) < r_3/2 $. } \end{align} \end{claim} \elistopadxvH Postponing the proof of Claim~\ref{claim:strictAndLip}, we now proceed to the proof of assertion (\ref{thm:fininf:item:strict}). As its conclusion clearly holds for $a\in\interior(F)$, we can further assume that $a\in\boundary F$. \blistopadxvH Fix $\eps_1>0$. Note that $L$ is assumed to be continuous at $a$ (with respect to $F$). By \eqref{eq:continuity} from Theorem \ref{thm:B1ext}, there exists $ r_1 > 0 $ such that (cf.~\eqref{eq:AnearLa}) \begin{equation}\label{Cont_of_A} \left\|A(t)-L(a)\right\|_{\mathcal L(\rn,Y)}<\eps_1\qquad \text{for\ every\ }t\in\rn\setminus F,\ |t-a|< r_1. 
\end{equation} Since we assume that $L(a)$ is a~strict derivative of $f$ at $a$ (with respect to $F$), there exists $ r_2 > 0 $ such that (cf.~\eqref{eq:LipOrStrict}) \begin{equation} \left\|f(z)-f(y)-L(a)(z-y)\right\|_Y\leq\eps_1|z-y|\qquad \text{for\ every\ } y,\,z\in F,\ \max\left(|y-a|,|z-a|\right)<r_2. \end{equation} By \eqref{eq:DfCont} from Claim~\ref{claim:strictAndLip} applied with $K_1=K_2=\varepsilon_1$, we get $r_3>0$ such that \begin{align}\label{eq:DfCont-applied} \left\|(\bar{f}\,)^\prime(x)-L(a)\right\|_{\mathcal L(\rn,Y)} &\le K_3 \qquad & & \text{ for all $x\in\rn\setminus F$ such that $ |x-a| < r_3 $, } \end{align} with $K_3= \left[1+220C_1C_2\right]\eps_1 $. Since $\eps_1>0$ was arbitrary and $(\bar{f})^\prime(a)=L(a)$ (note that we already proved~\itemref{thm:fininf:item:frechet}), we get that $(\bar{f})^\prime$ is continuous at $a$ with respect to $(\rn\setminus F)\cup\{a\}$. Likewise, the estimate of $E_{xy}$ provided by \eqref{eq:Exy-est} shows that $L(a)$ is the strict derivative of $\bar f$ at $a$. Hence, the proof of assertion (\ref{thm:fininf:item:strict}) is finished. \smallbreak To prove assertion \itemref{thm:fininf:item:lip-Loc-Glob}, we show that if $a\in F$, $r\in (0,\infty)\cup\{\infty\}$, $L$ is bounded on $B(a,72r) \cap F$ and $f$ is Lipschitz continuous on $B(a,12r) \cap F$, then $\bar{f}$ is Lipschitz continuous on $B(a,r)$. Both statements of~\itemref{thm:fininf:item:lip-Loc-Glob} then obviously follow either by a~standard compactness argument or using the case $r=\infty$. Assume that $a\in F$, $r\in (0,\infty)\cup\{\infty\}$, $L$ is bounded on $B(a,72r) \cap F$ and $f$ is Lipschitz continuous on $B(a,12r) \cap F$. Let $K_0$ denote the Lipschitz constant of $f$. Then there is $C_0>0$ such that $\|A\|_{\mathcal L(\rn,Y)} \le C_0$ on $B(a,6r)\cap (\rn\setminus F)$ since $A$ was obtained from Theorem~\ref{thm:B1ext} (cf.~\eqref{eq:boundedness}).
Thus we have~\eqref{eq:AnearLa} with $K_1= C_0 + \left\|L(a)\right\|_{\mathcal L(\rn,Y)}$ and $r_1=6 r$. Using the Lipschitz property of $f$, we obtain~\eqref{eq:LipOrStrict} with $K_2=K_0+\left\|L(a)\right\|_{\mathcal L(\rn,Y)}$ and $r_2=12r$. An application of Claim~\ref{claim:strictAndLip}, namely of~\eqref{eq:Exy-est}, gives \[ \left \| \bar f(y) - \bar f(x) \right\| _ Y \le \left( 33 K_3 + \left\| L(a) \right\|_{\mathcal L(\rn,Y)} \right) \, \left | y - x \right| \] for every $x,y \in \rn$ such that $\max( \left|x-a\right|, \left|y-a\right|) < r_3/2 = r$, which is the required Lipschitz property of~$\bar f$, cf.~\itemref{thm:fininf:item:lip-Loc-Glob}. This concludes the proof of Theorem~\ref{thm:fininf} except that we still have to show that Claim~\ref{claim:strictAndLip} holds true. \end{proof} \begin{proof}[Proof of Claim~\ref{claim:strictAndLip}] Let also the other symbols be defined as above (that is, $x_j$, $\phi_j$ ($j\in \N$), $\ixsetSx$ and $r(x)$ ($x\in \rn\setminus \theset$) are as in Lemma~\ref{l:specPart}, $\widehat x$ as in~\eqref{Dist} etc.). Let $x\in\rn\setminus F$ and $ |x-a| < r_3 := \min\left(\frac{r_1}{3},\frac{r_2}{6}\right) $. Then, for every $j\in \ixsetSx $, using \eqref{Es_1_a}, \eqref{Es_2_a} and \eqref{Es_3_a}, we get $|x_j-a|<r_1$ and $ \max ( |\widehat{x_j}-a|, |\,\widehat x-a| ) <r_2 $.
By \eqref{eq:Dfdole}, \begin{align} \left\| (\bar{f}\,)^\prime(x)-L(a) \right\|_{\mathcal L(\rn,Y)} &\leq\sum\limits_{j\in \ixsetSx }\phi_j(x)\left\|A(x_j)-L(a)\right\|_{\mathcal L(\rn,Y)} \nonumber\\ &\qquad+\sum\limits_{j\in \ixsetSx } \left\|f(\widehat{x_j})-f(\,\widehat x\,)-L(a)(\widehat{x_j}-\widehat x\,)\right\|_Y \left|{\phi_j}^\prime(x)\right|\nonumber\\ &\qquad+\sum\limits_{j\in \ixsetSx } \left\|A(x_j)-L(a)\right\|_{\mathcal L(\rn,Y)}|x-\widehat{x_j}| \left|{\phi_j}^\prime(x)\right|\nonumber . \end{align} Estimating the first term by~\eqref{eq:AnearLa} together with~\eqref{P3} and \eqref{P5}, the second one by~\eqref{eq:LipOrStrict} with~\eqref{P1}, \eqref{P7} and \eqref{Es_3}, and the third one by~\eqref{eq:AnearLa} with~\eqref{P1}, \eqref{P7} and \eqref{Es_4}, we get \begin{align} \left\| (\bar{f}\,)^\prime(x)-L(a) \right\|_{\mathcal L(\rn,Y)} &\leq K_1 + 6\dist(x,F)\,\frac{20 C_1 C_2}{\dist(x,F)}\, K_2 + 5\dist(x,F)\,\frac{20 C_1 C_2}{\dist(x,F)}\, K_1 \nonumber \\ &\leq (1+5\cdot 20 C_1 C_2) K_1 + 6\cdot 20 C_1 C_2 K_2 = K_3 . \label{eq:DfContProved} \end{align} Thus we obtained \eqref{eq:DfCont}. \medbreak Next, we want to prove~\eqref{eq:Exy-est}, the estimate of $E_{xy}$. If assertion (\ref{thm:fininf:item:cont}) of Theorem~\ref{thm:fininf} were applicable at every point $a\in F$, this could have been done easily using the continuity of $(\bar f)'$ at $a\in F$ (the mean value theorem would also be used on parts of the segment $L_{xy}$ together with the estimate $E_{xy} \le E_{xu} + E_{uv} + E_{vy}$, analogously to the arguments that follow), but we can deal with the general case as well.
From~\eqref{eq:AnearLa}, we have \begin{equation}\label{eq:Abounded} \left\|A(x)\right\|_{\mathcal{L}(\rn,Y)} \le M \end{equation} whenever $\left|x-a\right| < r_1$, where $ M = \left\| L(a) \right\|_{\mathcal{L}(\rn,Y)} + K_1 $. Note that $K_2 \le K_3$ by the definition of $K_3$, since $C_1, C_2 > 1$. Fix $x,y\in\rn$ such that $\max\left(|x-a|,|y-a|\right)< r_3/2$. We will show that \[ E_{xy} \le 33 K_3 \left| y - x \right| . \] As this inequality trivially holds for $x=y$, we will further suppose that $x\neq y$. \smallbreak Let $L_{xy}$ denote the (closed) segment connecting $x$ and $y$. We will distinguish several possible cases. If $L_{xy} \subset\rn\setminus F$ and $T\in Y^*$ with $\|T\|_{Y^*}\leq 1$, then there exists $\xi_T \in L_{xy}$ such that \[ T\left( \bar f(y) - \bar f(x) \right)= \left((T(\bar f\,))'(\xi_T)\right) (y-x) . \] By \eqref{eq:DfContProved}, we simply get \begin{equation}\label{eq:ExyVDoplnkuF} E_{xy} \le \sup_{\substack{T\in Y^*\\ \left\|T\right\|_{Y^*}\leq 1}} \left\| T\right\|_{Y^*} \left\| (\bar f\,)'(\xi_T) - L(a) \right\|_{\mathcal{L}(\rn,Y)} \left| y - x \right| \leq K_3 \left| y - x \right|. \end{equation} If $x, y \in F$, we have $E_{xy} \leq K_2 \left| y - x \right|$ by \eqref{eq:LipOrStrict}. In the remaining cases, $L_{xy} \cap F \neq \emptyset$ and one or both points $x$, $y$ lie in $\rn\setminus F$. If $x,y\in\rn\setminus F$, then the segment $L_{xy}$ can be divided into two or three segments as follows: \begin{enumerate}[1.] \item $L_{xu}$ with $u\in F$ and $L_{xu} \setminus \{u\} \subset \rn\setminus F$; \item $L_{uv}$ with $u, v \in F$, which might possibly be degenerate ($v=u$); \item $L_{vy}$ with $v\in F$ and $L_{vy} \setminus \{v\} \subset \rn\setminus F$. \end{enumerate}
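For a concrete toy set $F$, the entry and exit points $u$, $v$ of the segment $L_{xy}$ into $F$ can be computed explicitly. The sketch below is purely illustrative and takes $F$ to be a closed disc (our own choice, not part of the proof); it finds the parameters of the first and last intersection of the segment with $F$:

```python
import math

import numpy as np

def ball_entry_exit(x, y, radius=1.0):
    """First and last parameters t in [0, 1] at which x + t*(y - x) lies in
    the closed ball F = {|z| <= radius}; None if the segment misses F."""
    d = y - x
    a = d @ d
    b = 2.0 * (x @ d)
    c = x @ x - radius ** 2
    disc = b * b - 4.0 * a * c
    if disc < 0:
        return None  # the supporting line misses F entirely
    t1 = (-b - math.sqrt(disc)) / (2.0 * a)
    t2 = (-b + math.sqrt(disc)) / (2.0 * a)
    t_in, t_out = max(t1, 0.0), min(t2, 1.0)
    return (t_in, t_out) if t_in <= t_out else None

x = np.array([-2.0, 0.0])
y = np.array([2.0, 0.0])
t_u, t_v = ball_entry_exit(x, y)
u = x + t_u * (y - x)  # entry point: L_xu \ {u} lies outside F
v = x + t_v * (y - x)  # exit point:  L_vy \ {v} lies outside F
print(u, v)  # [-1. 0.] [1. 0.]
```

Here $L_{xu}$ and $L_{vy}$ are the outer segments of the case analysis above, and $L_{uv}$ connects the two computed points inside $F$.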
\iftrue \begin{figure}[hbt] \begin{center} \def(-4.3,0){(-4.3,0)} \def(8.2,0){(8.2,0)} \begin{tikzpicture} \draw (-4.3,0) node[below left]{$x$} -- (8.2,0) node[below left]{$y$}; \fill (-4.3,0) circle (2pt); \fill (8.2,0) circle (2pt); \draw (0,-2) coordinate (a1) -- (2,2) coordinate (a2) -- (2.7, -0.8) -- (3,2) coordinate (b1) -- (4,-2) coordinate (b2); \path (2, -2) node[above] {$F$}; \draw[thick] (1,0) node[above left]{$u$} circle (2pt); \draw (0.1,0) node[above left]{$\bar u$} circle (2pt); \foreach \x/\y in {0.3/0.1, 0.3/-0.3, -0.2/-0.2 } { \fill (\x,\y) circle (1.5pt); \fill ($ (a1)!(\x,\y)!(a2) $) circle(1.5pt); } \path (0.7,-0.6) node [below right ]{$\widehat{x_j}$}; \path (-0.1,-0.2) node [below left ]{$ x_j$}; \draw[thick] (3.5,0) node[above left]{$v$} circle (2pt); \draw (5,0) node[above right]{$\bar v$} circle (2pt); \foreach \x/\y in {5.1/-0.2, 4.8/-0.15 } { \fill (\x,\y) circle (1.2pt); \fill ($ (b1)!(\x,\y)!(b2) $) circle(1.2pt); } \path (5.1,-0.2) node [below right ]{$x_k$}; \path (3.65,-0.6) node [below left ]{$\widehat{x_k}$}; \end{tikzpicture} \caption{The case $x,y\in \mathbb R^n\setminus F$ with $L_{xy}$ intersecting $F$. Published with permission of \copyright\ Jan Kol\'a\v{r} 2016. All Rights Reserved.\relax } \label{fig:segmentLxy} \end{center} \end{figure} \fi \smallbreak The reader might welcome an informal remark, that we will not use the estimate of $E_{uv}$ (which could be obtained immediately from \eqref{eq:LipOrStrict}), but replace it by a~convex combination of estimates of $E_{\widehat{x_j}, \widehat{x_k}}$ with $\widehat{x_j}$, $\widehat{x_k}$ related to the definition of $\bar f(\bar u)$, $\bar f(\bar v)$, where $\bar u,\bar v \in \rn\setminus F$ are points approximating $u$, $v$ (see Figure~\ref{fig:segmentLxy}). This way we do not need the continuity of $\bar f$ at points $u,v\in F$. We omit the case $x\in F$, $y\in \rn\setminus F$ since it is analogous to the case that follows. 
If \begin{equation}\label{eq:caseX} x\in \rn\setminus F \text{ and } y\in F \end{equation} then $L_{xy}$ divides into two segments $L_{xu}$ and $L_{uv}$ as above with $v=y$ (we can consider $L_{vy}$ as degenerate). We use a~convex combination of estimates of $E_{\widehat{x_j}, y}$ (again provided by \eqref{eq:LipOrStrict}). Apart from that, this case is similar to the most complex case $x,y\in \rn\setminus F$ and therefore we will not fully treat both of them. (Formally, the case \eqref{eq:caseX} can be treated together with the case $x,y\in \rn\setminus F$ if we extend our notation as follows: Let $\phi_0(z) = 1$ if $z\in F$ and $\phi_0(z) = 0$ if $z\in \rn\setminus F$. Let $x_0= y$, $\widehat{x_0}= y$ and $A(x_0) = 0$. Then $\{ \phi_j \} _ {j\in \N\cup \{0\} }$ is a~partition of unity and \eqref{Ext_of_f*} remains true, with unchanged values of $\bar f$, if the sum is extended to include $j=0$. Moreover, the second line of \eqref{Ext_of_f*} then gives the correct value of $\bar f(v)$ even though we have $v=y\in F$. We also define $\ixsetSxzero = \{0\}$.) \smallbreak Let us concentrate on the case $x,y\in \rn\setminus F$. Let \begin{equation}\label{eq:defm} m:= \left| y-x \right| \min( K_3 / M , 1 / 4 ) . \end{equation} We choose a~point $\bar u \in L_{xu} \setminus \{u\}$ with $\left| \bar u - u \right| < m$ and likewise $\bar v \in L_{vy} \setminus \{v\}$ with $\left| \bar v - v \right| < m$. (For the case \eqref{eq:caseX} we let $\bar v=v=y$.) Since $L_{x\bar u} \subset \rn\setminus F$, we already estimated in~\eqref{eq:ExyVDoplnkuF} that \begin{equation}\label{eq:Exu} E_{x\bar u} \leq K_3 \left| \bar u - x \right| \le K_3 \left| y - x \right| . \end{equation} Likewise, \begin{equation}\label{eq:Evy} E_{\bar vy} \leq K_3 \left| y - \bar v \right| \le K_3 \left| y - x \right| .
\end{equation} By \eqref{Es_4}, we have \begin{align} \label{eq:hatxj-baru} \left|\widehat{x_j} - \bar u\right| \le 5\dist (\bar u, F) & \le 5 \left| \bar u - u \right| < 5 m \\ \text{and}\quad \label{eq:hatxk-barv} \left|\widehat{x_k} - \bar v\right| & \le 5 \left| \bar v - v \right| < 5 m \end{align} whenever $\markTA j \in \ixsetSbaru$ and $\markTA k \in \ixsetSbarv$, in which case therefore also \begin{equation}\label{eq:hatxk-hatxj--barv-baru} \Bigl| \left(\widehat{x_k} - \widehat{x_j} \right) - \left( \bar v - \bar u \right) \Bigr| \le \left|\widehat{x_k} - \bar v\right| + \left|\widehat{x_j} - \bar u\right| \le 10 m . \end{equation} Since $\bar u \in L_{xy} \subset B(a,r_3/2)$, clearly $ \left| \bar u - a \right | < r_3/2 $, and from \eqref{Es_1}, we get $ \left| x_j - a \right| \blistopadxvH \le \left| x_j - \bar u \right| + \left| \bar u - a \vphantom{ x_j } \right | \le 2 \dist(\bar u, F) + \left| \bar u - a \right | \le 3 \left| \bar u - a \right | < 3 r_3/2 \elistopadxvH \le r_1 $ whenever $\markTA j \in \ixsetSbaru$. The values of $\bar f(\bar u)$ (and similarly also of $\bar f(\bar v)$) are defined by \eqref{Ext_of_f*} where $\phi_j(\bar u)$ can be nonzero only when $\markTA j \in \ixsetSbaru$. Using \eqref{Ext_of_f*}, the triangle inequality, \eqref{P3}, \eqref{P5}, \eqref{eq:Abounded} and \eqref{eq:hatxj-baru}, we obtain \[ \Bigl\| \bar f(\bar u) - \sum_j \phi_j(\bar u) f(\widehat{x_j}) \Bigr\| _Y \leq \sum_j \phi_j(\bar u) \left\|A(x_j)\right\|_{\mathcal{L}(\rn,Y)} \left| \bar u - \widehat{x_j}\right| \le 5 M m \le 5K_3 \left| y - x \right| . \] Likewise, \[ \Bigl\| \bar f(\bar v) - \sum_k \phi_k(\bar v) f(\widehat{x_k}) \Bigr\| _Y \le 5 M m \le 5K_3 \left| y - x \right| . 
\] Using the identities $\phi_j=\phi_j\sum_k \phi_k$ and $\phi_k = \phi_k \sum_j \phi_j $, we can write \begin{equation}\label{eq:esti1} \Bigl\| \bar f(\bar v) - \bar f (\bar u) - \sum_j \sum_{k} \phi_j(\bar u) \phi_k(\bar v) \left( f(\widehat{x_k}) - f(\widehat{x_j}) \right) \Bigr\| _Y \le 10K_3 \left| y-x \right| . \end{equation} Since $\widehat{x_j}, \widehat{x_k} \in F$, we get by \eqref{eq:LipOrStrict}, $K_2 \le K_3$, \eqref{eq:hatxk-hatxj--barv-baru} and \eqref{eq:defm} \[ \left\| f(\widehat{x_k}) - f(\widehat{x_j}) - L(a) (\widehat{x_k}- \widehat{x_j}) \right\| _Y \leq K_2 \left| \widehat{x_k} - \widehat{x_j} \right| \le K_3\,( 10 m + \left| \bar u - \bar v \right| ) \le 11 K_3 \left| y - x \right| \] whenever $j \in \ixsetSbaru$ and $k \in \ixsetSbarv$. Hence, again by \eqref{eq:hatxk-hatxj--barv-baru}, we obtain (see also \eqref{eq:defm} and note that $\left\|L(a) \right\|_{\mathcal{L}(\rn,Y)}\le M$) \[ \left\| f(\widehat{x_k}) - f(\widehat{x_j}) - L(a) (\bar v - \bar u ) \right\| _Y \leq 11 K_3 \left| y - x \right| + 10 m \left\|L(a) \right\|_{\mathcal{L}(\rn,Y)} \le 21 K_3 \left| y - x \right| , \] which combines with \eqref{eq:esti1} and $ \sum_j \! \sum_{k} \phi_j(\bar u) \phi_k(\bar v) = 1 $ to give \begin{equation} \Bigl\| \bar f(\bar v) - \bar f (\bar u) - L(a) (\bar v - \bar u ) \Bigr\| _Y \le 31 K_3 \left| y - x \right| . \end{equation} So \begin{equation}\label{eq:strict-posledni-v-dukaze} E_{xy} \le 33 K_3 \left| y - x \right| , \end{equation} which concludes the proof of Claim~\ref{claim:strictAndLip}. \end{proof} The following corollary provides a~vector-valued version of \cite[Theorem~3.1]{KZ}.
\begin{corollary}\label{cor:KZ_do_Y} Let $F\subset\rn$ be a~nonempty closed set, $Y$ a~normed linear space, $f\fcolon F\to Y$ an~arbitrary function and $L\fcolon F\to\mathcal L(\rn,Y)$ a~relative Fr{\'e}chet derivative of $f$ (on $F$) such that $L$ is Baire one on $F$. Then there exists a~function $\bar{f}\fcolon \rn\to Y$ such that \begin{enumerate}[\textup\bgroup (i)\egroup] \item $\bar{f}$ is Fr{\'e}chet differentiable on $\rn$, \item ${\bar{f}}=f$ and $(\bar{f})^\prime=L$ on $F$, \item if $a\in F$, $L$ is continuous at $a$ and $L(a)$ is a~relative strict derivative of $f$ at $a$ (with respect to $F$), then the Fr{\'e}chet derivative $(\bar{f})^\prime$ is continuous at $a$, \item ${\bar{f}}\in\mathcal C^{\infty}\left(\rn\setminus F,Y\right)$. \end{enumerate} \end{corollary} \begin{remark} The previous corollary easily implies the $C^1$ case of Whitney's extension theorem for vector-valued functions (see, e.g., \cite[Theorem~3.1.14]{Fed}). Indeed, assuming that the assumptions of Whitney's theorem are fulfilled, it is sufficient to show that $L(a)$ is a~strict derivative of $f$ at~$a$ for every $a\in F$ (which involves a~straightforward and easy computation only, cf.~\cite[Remark~3.2]{KZ}) and then to apply Corollary~\ref{cor:KZ_do_Y}. \end{remark} \let\pleaseDoSpellCheck( \begin{remark} \myParBeforeItems \begin{enumerate}[(a)] \item In (\ref{thm:fininf:item:strict}) of Theorem \ref{thm:fininf}, we cannot expect the Fr{\'e}chet derivative $(\bar{f})^\prime$ to be continuous at~$a$ with respect to the whole space $\rn$ unless appropriate assumptions are added (cf.\ Remark~\ref{rem:ZAthm:diffext}\itemref{rem:ZAthm:diffext:item:C1}). Indeed, consider $n=2$, $F=[0,1]\times\{0\}$, $f\fcolon F\to\R$ given by $f(x,0)=x^7\left|\sin\frac{1}{x}\right|$ for $x\in(0,1]$ and $f(0,0)=0$, $L=0$ and $a=(0,0)$. Note that $L(a)$ is a relative {\em strict} derivative of $f$ at $a$. 
If we extend $f$ according to Theorem \ref{thm:fininf}, then the extended function $\bar{f}$ is not Fr{\'e}chet differentiable in any neighborhood of $a$, since neither $f$ nor $\bar{f}$ has a~Fr{\'e}chet derivative at those points of~$F$ at which $\sin\frac{1}{x}$ changes its sign. However, the Fr{\'e}chet derivative $(\bar{f})^\prime$ is continuous at $a$ with respect to $(\R^2\setminus F)\cup\{a\}$ as Theorem \ref{thm:fininf}\itemref{thm:fininf:item:strict} states. \smallbreak \item\label{poznamka46itemB} In Theorem~\ref{thm:fininf}\itemref{thm:fininf:item:strict}, neither the continuity of $(\bar f)'$ at $a$ nor the conclusion that $L(a)$ is the strict derivative of~$\bar f$ at~$a$ (even with respect to $(\rn \setminus F) \cup \{a\}$) can be obtained when we remove the assumption that $L(a)$ is a~relative {\em strict} derivative of $f$ at $a$ (with respect to $F$). Indeed, consider $F=\{0\}\cup\{\frac{1}{n}\setcolon n\in\N\}\subset \R $ and let $f\fcolon F\to\R$ be given by $f(\frac{1}{n})=\frac{(-1)^n}{n^2}$ for $n\in\N$ and $f(0)=0$. Let $L(x)=0$ ($x\in F$) and $a=0$. Obviously, $L(a)=0$ is a~relative derivative of $f$ at $a$ with respect to $F$. For $n\in\N$, the distance between $x_n := \frac{1}{n}$ and $x_{n+1}$ is less than $\frac{1}{n^2}$ and the absolute increment of $f$ between these two points is greater than $\frac{1}{n^2}$. Applying Theorem \ref{thm:fininf}, we obtain the extended function $\bar{f}$ on~$\R$ that is continuous at every $x_n$ due to condition \itemref{thm:fininf:item:cont}. By the mean value theorem, for every $n\in \N$, the absolute value of the derivative of $\bar{f}$ at some point of the interval $(x_{n+1}, x_n)$ is greater than $1$. Therefore $(\bar{f})^\prime$ cannot be continuous at~$a$ with respect to $(\R\setminus F) \cup \{a\}$. Also, $L(a)=0$ cannot be a~strict derivative of~$\bar f$ at~$a$ (even with respect to~$(\R\setminus F) \cup \{a\}$).
\smallbreak \item Likewise, neither of the two conclusions of Theorem~\ref{thm:fininf}\itemref{thm:fininf:item:strict}\footnote{\relax Namely that $(\bar f)'$ is continuous at $a$ or that $L(a)$ is the strict derivative of~$\bar f$ at~$a$ (even only with respect to $(\rn \setminus F) \cup \{a\}$). } can be obtained without assuming that $L$ is continuous at $a$ with respect to $F$. Consider the same set $F$ as in \itemref{poznamka46itemB} together with $a=0$, $f=0$ on $F$, $L(0)=0$ and $L(\frac 1n )=(-1)^n$ for $n\in \N$. Applying Theorem \ref{thm:fininf}, we obtain the extended function $\bar{f}$ on~$\R$ that has $(-1)^n$ as its derivative at the isolated point $\frac 1n$ due to condition \itemref{thm:fininf:item:frechet}. Hence $\bar{f}$ is continuous at~$\frac 1n$. By the mean value theorem, for every $n\in \N$, the absolute value of the derivative of~$\bar{f}$ at some point close to $\frac 1n$ is greater than $\frac 12$. Note that $(\bar f)'(0)=L(0)=0$. Therefore $(\bar{f})^\prime$ cannot be continuous at~$a$ with respect to $(\R\setminus F) \cup \{a\}$. Also, $L(a)=0$ cannot be a~strict derivative of~$\bar f$ at~$a$ (even with respect to~$(\R\setminus F) \cup \{a\}$). \end{enumerate} \end{remark} \begin{remark}\label{rem:finDimNecess} If statements \itemref{thm:fininf:item:strict} or \itemref{thm:fininf:item:lip-Loc-Glob} are required in Theorem~\ref{thm:fininf}, the assumption $F\subset \rn$ cannot be generalized to $F\subset X$, replacing $\rn$ by an (infinite-dimensional) Banach space $X$. In other words, the condition $\dim X<\infty$ cannot be removed from Theorem~\ref{thm:difext} \itemref{thm:difext:item:strict}\itemref{thm:difext:item:lip-Loc-Glob}. Indeed, let $p\in [1,2)$, $X=L_p(0,1)$, $Y=l_2$, $e\in X$ with $\left\|e\right\|_X =1$.
By \cite[Theorem~3]{JLext}, for every integer $n > 10$, there is a~finite set $F_n\subset X$ and a~function $f_n \fcolon F_n \to Y$, such that $\Lip f > c_n \Lip f_n$ for every $f\fcolon X\to Y$ that extends $f_n$, where $c_n \to \infty$. The actual value of $c_n=\tau\cdot (\log n / \log \log n)^{1/p-1/2} $ (for some $\tau > 0$) is not important for our purposes. By translating and scaling down the set, and by scaling the values of $f_n$ we can assure that $F_n\subset B_X(2^{-n} e, 2^{-2n-1})$, $f_n(F_n) \subset B_{Y}(0, 2^{-2n-1})$ and $\Lip f_n = c_n^{-1/2}$. Then the property of $f_n$ is that it has no extension $\bar f_n\fcolon X\to Y$ with $\Lip \bar f_n \le d_n := c_n \, \Lip f_n = c_n^{1/2}$. Note that $c_n^{-1/2} \to 0$ and $d_n\to \infty$. Let $F=\{0\} \cup \bigcup_{ n > 10 } F_n$ and define $f\fcolon F\to Y$ by $f(0)=0$, $f|_{F_n}=f_n$. The scaling was chosen so that $0\in \mathcal L(X,Y)$ is a~relative strict derivative of $f$ (with respect to~$F$) at $a:=0 \in X$. Since every $F_n$ is finite, $0$ is the only accumulation point of $F$. Let $L(x)=0$ for $x\in F$. Consider $\bar f$ that is an extension of $f$ as in the theorem. If $(\bar f)'$ is continuous at $a$ with respect to $(X\setminus F) \cup \{ a \}$ as in~\itemref{thm:fininf:item:strict}, we obtain a~contradiction. First, we see that $(\bar f)'$ is actually continuous with respect to $X$ (note that the derivative is continuous at $a =0 $ with respect to $F$, since it equals $L$ on~$F$ because every $x\in\theset \setminus\{a\}$ is an isolated point of $\theset$). Consequently, $\bar f$ is Lipschitz in $B_X(0,2r)$ for some $r>0$. Let $\pi$ be the radial projection onto $\overline {B_X(0,r)}$, $\pi(x) = x $ for $x\in B_X(0,r)$ and $\pi(x) = r x / \left\| x \right\| $ for $x\in X\setminus B_X(0,r)$. 
Then $\Lip \pi \le 2$, see e.g.\ \cite[Remark~4]{Maligranda}, hence the mapping $g(x) = \bar f(\pi(x))$ is Lipschitz, and for $n$ sufficiently large, $g$ is a~$d_n$-Lipschitz extension of $f_n$, which is a~contradiction. Likewise, if $L(a)$ is the strict derivative of $\bar f$ at $a$ (with respect to $X$) as in~\itemref{thm:fininf:item:strict} then $\bar f$ is Lipschitz in $B_X(0,2r)$ for some $r>0$ and we obtain a~contradiction. The same example also provides a~contradiction with~\itemref{thm:fininf:item:lip-Loc-Glob}. \end{remark}
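The bound $\Lip \pi \le 2$ for the radial projection used above can be probed numerically. The sketch below samples random pairs in $\mathbb{R}^3$ (our own illustrative choice of dimension, radius and sample size); note that in the Euclidean norm the radial projection onto a ball coincides with the nonexpansive metric projection, so the observed ratio stays below $1$, while $2$ is the bound valid in a general normed space:

```python
import numpy as np

rng = np.random.default_rng(0)
r = 1.0

def radial_projection(x, r):
    """pi(x) = x inside the closed ball B(0, r), and r*x/|x| outside it."""
    norm = np.linalg.norm(x)
    return x if norm <= r else r * x / norm

# Sample random pairs and record the worst ratio |pi(x)-pi(y)| / |x-y|.
worst_ratio = 0.0
for _ in range(10_000):
    x = rng.normal(size=3) * 2.0
    y = rng.normal(size=3) * 2.0
    d = np.linalg.norm(x - y)
    if d < 1e-12:
        continue
    dp = np.linalg.norm(radial_projection(x, r) - radial_projection(y, r))
    worst_ratio = max(worst_ratio, dp / d)

print(worst_ratio)  # below 1 in the Euclidean case; 2 is the general bound
```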
\section{Introduction} Recently, Bose-Einstein condensates (BEC) with dipole-dipole interactions (DDI) have received considerable attention both experimentally and theoretically \cite{Baranov, Pfau, Carr, Pupillo2012}. Dipolar quantum gases, in stark contrast to dilute gases with isotropic interparticle interactions, offer fascinating prospects for exploring ultracold gases and novel many-body quantum phases with atomic interactions that are long-range and spatially anisotropic. Impressive efforts have been directed towards the ground-state properties and elementary excitations of such dipolar systems at both zero and finite temperatures \cite{Santos, Santos1, Dell, Eberlein, Bon1, Bon, Corm, He, Biss, lime, Boudj, Boudj1, Boudj2}. Three-body interactions (TBI) are ubiquitous and play an important role in a wide variety of interesting physical phenomena, yielding new physics and many surprises not encountered in systems dominated by two-body interactions. Recently, many experimental and theoretical techniques have been proposed to observe and realize TBI in ultracold Bose gases \cite{Ham, Will, Daly, Petrov}. For instance, inelastic three-body processes, including observations of Efimov quantum states and atom loss from recombination, have also been reported in Refs.~\cite{Eff, Eff1, Bed, Kra, Brut}. Few-body forces also induce unconventional many-body effects, for example in quantum Hall problems \cite{Grei} and in the transition from the weak- to strong-pairing Abelian phase \cite{Moor, Read}. In 2002, Bulgac \cite{Bulg} predicted that both weakly interacting Bose and Fermi gases with attractive two-body and large repulsive three-body interactions may form droplets. Dasgupta \cite{Dasg} showed that, when the two-body interactions are attractive, the presence of the TBI leads to a nonreversible BCS-BEC crossover.
Furthermore, many proposals dealing with the effects of effective TBI on ultracold bosonic atoms in an optical lattice or a superlattice are reported in \cite{Daly1, Mazz, Singh, Mahm}. Moreover, the TBI in dilute Bose gases may considerably modify the collective excitations at both zero and finite temperatures \cite{Abdul, Hamid, Chen}, as well as the transition temperature, the condensate depletion and the stability of a BEC in one (1D)- and two (2D)-dimensional trapping geometries \cite{Peng, Mash}. However, little attention has been paid to the effects of TBI on dipolar BECs. For instance, it has been argued that TBI play a crucial role in the stabilization of the supersolid state in 2D dipolar bosons \cite{Petrov1} and in the quantum droplet state in a 3D BEC with strong DDI \cite{Pfau1, Kui, Blakie, BoudjDp}. Our main aim here is to study the effects of the TBI on weakly interacting dipolar Bose gases in a pancake trap at finite temperature. To this end, we employ the full Hartree-Fock-Bogoliubov (HFB) approximation. This approach, which takes into account the pair anomalous correlations, has been extensively utilized to describe the properties of both homogeneous and trapped BECs with contact interactions \cite{Hut}. We will show in particular how the interplay between the DDI, the TBI and temperature can enhance the density profiles, the condensed fraction and the collective modes of the system. The rest of the paper is organized as follows. In Sec.~\ref{3BT}, we introduce the full HFB formalism for dipolar BECs with TBI. We also discuss the issues encountered in our model and present their resolution. Section \ref{NR} is devoted to presenting and discussing our numerical results. Our conclusions are drawn in Sec.~\ref{Conc}.
\section{Three-body model for dipolar bosons} \label{3BT} Consider a dipolar BEC with repulsive contact two-body interactions and TBI, confined in a pancake-shaped trap with the dipole moments of the particles oriented perpendicular to the plane. It is straightforward to check that the condensate wavefunction $\Phi ({\bf r})=\langle \hat\psi ({\bf r})\rangle$, with $\hat\psi ({\bf r})$ being the Bose field operator, satisfies the generalized Gross-Pitaevskii (GP) equation \cite{BoudjDp} \begin{widetext} \begin{align}\label{EMP} i\hbar \frac{\partial \Phi ({\bf r},t)} {\partial t} &=\left \{ h^{sp}+ g_2 \bigg[n_c({\bf r},t) +2 \tilde n({\bf r},t) \bigg]+ \frac{g_3}{2} \bigg [n_c^2({\bf r},t)+ 6n_c ({\bf r},t)\tilde n({\bf r},t) + \tilde m^*({\bf r},t) \Phi^2({\bf r},t) \bigg ] \right \} \Phi ({\bf r},t) \\ &+\left [ g_2 \tilde m ({\bf r},t) +\frac{3 g_3}{2} \tilde m({\bf r},t) n_c ({\bf r},t) \right] \Phi^*({\bf r},t) \nonumber \\ &+\int d{\bf r'} V_d({\bf r}-{\bf r'}) \bigg [ n ({\bf r'},t) \Phi({\bf r},t)+ \tilde n ({\bf r},{\bf r'},t)\Phi({\bf r'},t) +\tilde m ({\bf r},{\bf r'},t)\Phi^*({\bf r'},t) \bigg ], \nonumber \end{align} \end{widetext} where $h^{sp} =-\hbar^2 \Delta/2m +U({\bf r})$ is the single-particle Hamiltonian, $m$ is the particle mass, $U({\bf r})= m \omega_{\rho}^2 (\rho^2+\lambda^2 z^2)/2$ is the trapping potential with $\rho^2=x^2+y^2$, and $\lambda=\omega_{z}/\omega_{\rho}$ is the ratio between the trapping frequencies in the axial and radial directions. The two-body coupling constant is defined by $g_2=4\pi \hbar^2 a/m$, with $a$ being the $s$-wave scattering length, which can be adjusted using a magnetic Feshbach resonance. The three-body coupling constant $g_3$ is in general a complex number, with $Im(g_3)$ describing the three-body recombination loss and $Re(g_3)$ accounting for the three-body scattering parameter. In the present paper, we do not consider the three-body recombination terms, i.e.
$Im(g_3)=0$, so the system is stable, which is consistent with recent experiments \cite{Kra, Pfau1}. The DDI potential is $V_d({\bf r}) = C_{dd} (1-3\cos^2\theta) / (4\pi r^3)$, where $C_{dd} ={\cal M}_0 {\cal M}^2 (= d^2/\epsilon_0)$ is the magnetic (electric) dipolar interaction strength, and $\theta$ is the angle between the relative position of the particles ${\bf r}$ and the direction of the dipole. The condensed and noncondensed densities are defined, respectively, as $n_c({\bf r})=|\Phi({\bf r})|^2$ and $\tilde n ({\bf r})= \langle \hat {\bar\psi}^\dagger ({\bf r}) \hat {\bar\psi} ({\bf r}) \rangle $, and $n({\bf r})=n_c({\bf r})+\tilde n ({\bf r})$ is the total density. The terms $\tilde n ({\bf r, r'})$ and $\tilde m ({\bf r, r'})$ are, respectively, the normal and the anomalous one-body density matrices, which account for the dipole exchange interaction between the condensate and the noncondensate. Equation (\ref{EMP}) describes the coupled dynamics of the condensed and noncondensed components. For $g_3=0$, it recovers the generalized nonlocal finite-temperature GP equation with two-body interactions. For $\tilde m =0$, Eq.(\ref{EMP}) reduces to the HFB-Popov equation \cite{Bon, Corm, BoudjDp}, which is a gapless theory. For $\tilde m=\tilde n =0$, it reduces to the standard GP equation that describes dipolar Bose gases only at zero temperature. We now linearize Eq.(\ref{EMP}) around a static solution $\Phi_0$ using the parameterization $\Phi({\bf r},t)=[\Phi_0({\bf r})+\delta \Phi({\bf r},t) ] e^{-i\mu t/\hbar}$, where $\delta \Phi = \sum_k [u_k ({\bf r}) e^{-i \varepsilon_k t/\hbar}+ v_k({\bf r}) e^{i \varepsilon_k t/\hbar}] $ and $\varepsilon_k$ is the Bogoliubov excitation energy.
The quasi-particle amplitudes $ u_k({\bf r}), v_k({\bf r}) $ satisfy the generalized nonlocal Bogoliubov-de-Gennes (BdG) equations \cite{BoudjDp}: \begin{widetext} \begin{align} \varepsilon_k u_k ({\bf r}) &= \hat {\cal L} u_k ({\bf r})+ \hat {\cal M} v_k ({\bf r}) + \int d{\bf r'} V_d({\bf r}-{\bf r'}) n ({\bf r},{\bf r'}) u_k ({\bf r'}) + \int d {\bf r'} V_d({\bf r}-{\bf r'}) \bar m ({\bf r},{\bf r'}) v_k ({\bf r'}), \label{BdG1} \\ -\varepsilon_k v_k ({\bf r}) &= \hat {\cal L} v_k ({\bf r})+ \hat {\cal M} u_k ({\bf r}) + \int d{\bf r'} V_d({\bf r}-{\bf r'}) n ({\bf r},{\bf r'}) v_k ({\bf r'}) + \int d {\bf r'} V_d({\bf r}-{\bf r'}) \bar m ({\bf r},{\bf r'}) u_k ({\bf r'}), \label{BdG2} \end{align} \end{widetext} where $\hat {\cal L}=h^{sp}+ 2g_2n ({\bf r})+ 3g_3 [n_c^2 ({\bf r}) +4 n_c ({\bf r}) \tilde n ({\bf r})+\tilde m^* ({\bf r}) \Phi^2({\bf r})+\tilde m ({\bf r}) \Phi^{*2}({\bf r})]/2 + \int d{\bf r'} V_d({\bf r}-{\bf r'}) n ({\bf r'})-\mu$, $\hat {\cal M}=g_2 [\Phi_0^2({\bf r})+ \tilde m ({\bf r})]+g_3[ n_c^2({\bf r})+3\Phi_0^2({\bf r}) \tilde n ({\bf r})+3\Phi_0^2({\bf r}) \tilde m ({\bf r}) ]$, $n ({\bf r},{\bf r'})= \Phi_0^*({\bf r'}) \Phi_0({\bf r})+ \tilde n ({\bf r},{\bf r'})$ and $\bar m ({\bf r},{\bf r'})= \Phi_0({\bf r'}) \Phi_0({\bf r}) +\tilde m ({\bf r},{\bf r'})$. \\ Equations (\ref{BdG1}) and (\ref{BdG2}) describe the collective excitations of the system. 
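The anisotropy of the DDI kernel $V_d$ entering the nonlocal integrals of the BdG equations can be checked directly. The sketch below (with an arbitrary unit coupling $C_{dd}=1$, our own choice for illustration) evaluates $V_d$ and confirms that the head-to-tail configuration ($\theta=0$) is attractive while the side-by-side configuration ($\theta=\pi/2$) is repulsive:

```python
import math

def dipole_potential(r, theta, c_dd=1.0):
    """DDI kernel V_d(r) = C_dd * (1 - 3*cos^2(theta)) / (4*pi*r^3)."""
    return c_dd * (1.0 - 3.0 * math.cos(theta) ** 2) / (4.0 * math.pi * r ** 3)

head_to_tail = dipole_potential(1.0, 0.0)           # dipoles aligned along r
side_by_side = dipole_potential(1.0, math.pi / 2)   # dipoles perpendicular to r

print(head_to_tail < 0, side_by_side > 0)  # True True: attractive vs repulsive
```

The kernel also vanishes at the "magic angle" $\cos^2\theta = 1/3$ and falls off as $1/r^3$, which is the long-range character emphasized in the text.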
The normal and the anomalous one-body density matrices can be obtained employing the transformation $\hat {\bar\psi}=\sum_k [u_k ({\bf r}) \hat b_k+ v_k^*({\bf r}) \hat b_k^\dagger] $ \begin{align} \tilde n ({\bf r, r'})&= \sum_k \bigg\{ \left[u_k^*({\bf r'}) u_k ({\bf r})+v_k({\bf r'})v_k^*({\bf r}) \right] N_k({\bf r}) \label {HFB1} \\ &+v_k({\bf r'})v_k^*({\bf r})\bigg\}, \nonumber \\ \tilde m ({\bf r, r'})&= -\sum_k \bigg\{ \left[u_k({\bf r'}) v^*_k ({\bf r})+u_k({\bf r}) v_k^*({\bf r'}) \right] N_k({\bf r}) \label {HFB2} \\ &+u_k({\bf r'})v_k^*({\bf r})\bigg\}, \nonumber \end{align} where $N_k=\langle \hat b_k^{\dagger} \hat b_k\rangle=[\exp(\varepsilon_k/T)-1]^{-1}$ are the occupation numbers of the excitations. The noncondensed and anomalous densities are simply obtained by setting $ \tilde n ({\bf r})=\tilde n ({\bf r, r})$ and $ \tilde m ({\bf r})=\tilde m ({\bf r, r})$, respectively, in Eqs.(\ref{HFB1}) and (\ref{HFB2}). From now on we assume that $\tilde n ({\bf r},{\bf r'})=\tilde m ({\bf r},{\bf r'})=0$ for ${\bf r} \neq {\bf r'}$ \cite{Bon,Bon1}. It is worth stressing that the omission of the long-range exchange term does not preclude the stability of the system \cite {Bon, He, Biss, Zhan, Bail, Tick, BoudjDp}. As is well known, the full HFB theory suffers from some hindrances, notably the appearance of an unphysical gap in the excitation spectrum and the divergence of the anomalous density. In fact, this violation of the conservation laws in the HFB theory is due to the inclusion of the anomalous density, which in general leads to a double counting of the interaction effects. The common way to circumvent this problem is to neglect $\tilde m$ in the above equations, which restores the symmetry and hence leads to a gapless theory; but this is nothing else than the Popov approximation.
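As a concrete illustration of Eqs.(\ref{HFB1}) and (\ref{HFB2}) at ${\bf r}={\bf r'}$, the following sketch evaluates the noncondensed and anomalous densities at a single grid point from a given set of Bogoliubov amplitudes $u_k$, $v_k$ (assumed real here) and energies $\varepsilon_k$, with $k_B=1$; the mode data are hypothetical placeholders, not output of the HFB solver.

```python
import math

def bose(eps, T):
    # Bose occupation N_k = 1/(exp(eps/T) - 1), with k_B = 1.
    return 1.0 / math.expm1(eps / T)

def noncondensed_density(u, v, eps, T):
    # tilde n(r) = sum_k (|u_k|^2 + |v_k|^2) N_k + |v_k|^2  [Eq. (HFB1) at r = r']
    return sum((uk**2 + vk**2) * bose(ek, T) + vk**2
               for uk, vk, ek in zip(u, v, eps))

def anomalous_density(u, v, eps, T):
    # tilde m(r) = -sum_k [2 u_k v_k N_k + u_k v_k]  [Eq. (HFB2) at r = r', real modes]
    return -sum(2.0 * uk * vk * bose(ek, T) + uk * vk
                for uk, vk, ek in zip(u, v, eps))
```

At low temperature the $N_k$ are exponentially small, so $\tilde m$ is dominated by the zero-temperature term $-\sum_k u_k v_k$, consistent with the statement that the anomalous density arises from interactions rather than thermal occupation.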
To go consistently beyond the Popov theory, one should renormalize the coupling constant taking into account many-body corrections for scattering between the condensed atoms on one hand and the condensed and thermal atoms on the other. Following the procedure outlined in Refs \cite{Davis, Morgan, Boudj8, Boudj9, Boudjbook} we obtain \begin{align} \label{Renor} &g_2 |\Phi|^2\Phi+g_2\tilde{m}\Phi^*+\frac{3g_3 }{2} n_c\tilde{m}\Phi^*\\ & =g_2 \bigg[1+\frac{\tilde {m}(1+3g_3n_c/g_2) }{\Phi ^2}\bigg] |\Phi|^2\Phi \nonumber\\ &= g_R |\Phi|^2\Phi \nonumber. \end{align} This spatially dependent effective interaction $g_R$ is essentially equivalent to the many-body $T$-matrix \cite{Hut}. A detailed derivation of $g_R$, including the term $g_3$, will be given elsewhere. It is easy to check that substituting (\ref{Renor}) into the HFB equations reinstates the gaplessness of the spectrum and the convergence of the anomalous density. \section{Numerical results} \label{NR} For numerical purposes, it is useful to cast Eqs.(\ref{EMP})-(\ref{Renor}) in dimensionless form. We introduce the following dimensionless parameters: the relative strength $\epsilon_{dd}=C_{dd}/3g_2 $ ($\epsilon_{dd}=0.16$ for Cr atoms), which describes the interplay between the DDI and short-range interactions, and $\bar g_3 =g_3 n_c/g_2$, which describes the ratio between the TBI and the two-body interactions. Throughout the paper, we express lengths and energies in terms of the transverse harmonic oscillator length $l_0=\sqrt{\hbar/m \omega_{\rho}}$ and the trap energy $\hbar \omega_{\rho}$, respectively. \begin{figure} \includegraphics[scale=0.45]{ND.eps} \includegraphics[scale=0.45]{AD.eps} \caption { (Color online) Density profiles of the noncondensed (a) and anomalous (b) components for several values of the three-body interactions $\bar g_3$ at $T/T_c=0.2$. Solid line: $\bar g_3=0.05$, dashed line: $\bar g_3=0.055$ and dotted line: $\bar g_3=0.06$.
Parameters are: $N=10^5$ ${}^{52}$Cr atoms, $\lambda=4$, and scattering length $a=20 a_0$. Here the contact interactions can be tuned using a Feshbach resonance, making the relative dipolar strength larger.} \label{dens} \end{figure} Figure \ref{dens} shows that the noncondensed and the anomalous densities increase with the TBI, which in turn reduces the condensed density. A careful observation of the same figure reveals that $\tilde m$ is larger than $\tilde n$ at low temperature, which is natural since the anomalous density itself arises from and grows with interactions \cite{Boudj2011, Boudj2012}. When the temperature approaches the transition, one can expect that $\tilde m$ vanishes, similar to the case of a BEC with a pure contact interaction \cite{Boudj2011, Boudj2012}. \begin{figure}[htb] \includegraphics[scale=0.8]{condF.eps} \caption {Condensate fraction as a function of reduced temperature $T/T_c^0$ (where $T_c^0$ is the ideal gas critical temperature). Solid line: full HFB with $\bar g_3=0.05$, dashed line: full HFB with $\bar g_3=0$, circles: HFB-Popov approximation without TBI, and dotted line: ideal gas $N_c/N=1-(T/T_c^0)^3$. Parameters are the same as in Fig.\ref{dens}.} \label{Frac} \end{figure} In Fig.\ref{Frac} we compare our prediction for the condensed fraction $N_c/N$ with the HFB-Popov theoretical treatment and the noninteracting gas. As is clearly seen, our results deviate from those of the previous approximations due to the effects of the TBI: both the condensed fraction and the transition temperature decrease with increasing TBI. \begin{figure}[ htb] \includegraphics[scale=0.8]{CMD.eps} \caption { (Color online) Lowest mode frequencies of ${}^{52}$Cr BEC as a function of temperature. Triangles: our predictions, circles: HFB-Popov approximation without TBI. For each angular momentum number $m=0,1,2$, we plot only the lowest mode.
Parameters are the same as in Fig.\ref{dens}.} \label{Exc} \end{figure} Before leaving this section, let us unveil the role of the TBI in the collective excitations. According to Fig.\ref{Exc}, our results deviate from the HFB-Popov results (which have also been found in \cite{Bon}) at higher temperatures for the $m=0$ and $2$ excitations. The reason for this downward shift, which grows as the temperature approaches $T_c$, is the inclusion of both anomalous pair correlations and TBI. A similar behavior holds in the case of a BEC with short-range interactions (see e.g. \cite{Hut}). Figure \ref{Exc} also shows that both the full HFB and the HFB-Popov produce a small deviation of the Kohn mode from $\omega/\omega_{\rho}=1$ at higher temperatures. One possibility to fix this problem is the inclusion of the dynamics of the noncondensed and the anomalous components; a suitable formalism to explore such dynamics is the time-dependent HFB theory \cite{Boudj8, Boudj9, Boudjbook, Boudj2011, Boudj2012}. \section{Conclusion} \label{Conc} In conclusion, we have investigated in depth the properties of a dipolar Bose gas confined by a cylindrically symmetric harmonic trapping potential in the presence of TBI at finite temperature. The numerical simulation of the full HFB model shows that the condensed fraction and the transition temperature are reduced by the TBI. Effects of the TBI and temperature on the collective modes of the system are also highlighted. We found that the full HFB approach in the presence of the TBI reproduces the HFB-Popov results of \cite{Bon} only at low temperature, while the two approaches diverge from each other as the temperature approaches $T_c$. One can expect the same behavior to persist in the case of density-oscillating ground states, known as biconcave states, predicted in Ref \cite{Bon1}. \section*{Acknowledgments} We are grateful to Dmitry Petrov and Axel Pelster for the careful reading of the manuscript and helpful comments.
\section{Introduction} \label{sec:introduction} In the case of events concerning several communities, event-centric information available in different languages can reflect community-specific bias \cite{Rogers:2013}. As events unfold, it therefore becomes particularly important to analyze language-specific differences across the local event representations to better understand the community-specific perception of the events, the propagation of event-centric information across languages and communities, as well as its impact. Wikipedia, with its large multicultural user community and editions in over 290 languages, is a rich source of multilingual information with respect to ongoing and past events. Information about ongoing events of global importance can evolve quickly in different language editions as the event unfolds \cite{Fetahu:2015}, making Wikipedia an important and up-to-date source for cross-cultural studies. The potentially large amount of multilingual information to be analyzed, the event dynamics and the language barrier limit the analysis methods, in particular with respect to the features accessible for the analysis, making cross-cultural studies of ongoing events particularly challenging. Although existing tools facilitate automatic cross-lingual article comparisons in Wikipedia with respect to selected features, including for example cross-lingual text alignment for article pairs by MultiWiki \cite{Gottschalk2017}, a comparison of the topics covered by the articles by Omnipedia \cite{Bao:2012} and a computation of article similarity in Manypedia \cite{Massa:2012}, there is still significant room for further developments in this area. In this abstract we present preliminary results of a case study concerning an analysis of cross-lingual event-centric information in Wikipedia, exemplified through the Brexit referendum in June 2016.
In this study, we observed an interdisciplinary, multicultural group of information professionals (communication design and sociology researchers) who performed the task of cross-cultural analysis of the Brexit representation in the multilingual Wikipedia in June 2016. In the study, six participants worked for 14 hours together as a team, resulting in a total of 84 person-hours spent. \section{Objectives \& Methodology} \label{sec:challenges} The key study objective was to \textit{observe which methods and features can be efficiently used to perform cross-lingual analytics on multilingual Wikipedia}. As a first step, we introduced the participants to the topic of Brexit in Wikipedia and presented them with several example tools that can assist Wikipedia analytics in general. We asked the participants to: 1) define their own research questions and analysis methods related to Brexit and its international perception from the social and cultural perspectives; and 2) conduct an analysis of the Brexit-related articles across different Wikipedia language editions with respect to these questions. We focus the discussion on two Brexit-related articles selected by the participants: the “United Kingdom European Union membership referendum, 2016” (\textit{referendum}) article and the “United Kingdom withdrawal from the European Union” (\textit{withdrawal}) article. \begin{figure*} \centering \includegraphics[width=0.9\textwidth]{images/timeline01c_top.pdf} \caption{Timeline of the withdrawal article showing the edit frequency of the English withdrawal article over time and the creation of the article editions in other languages. } \label{fig:timeline} \end{figure*} \section{Observations} \label{sec:feature_selection} \subsection{Content Analysis} Due to the language barrier, the participants focused on the content features involving less text, e.g. the tables of contents and images.
\textbf{Textual features}: Most Wikipedia articles contain a table of contents (TOC) that is arranged individually in each language edition. As the effort to read through whole Wikipedia articles -- especially when the reader is not proficient in the particular language -- was estimated to be too high, the participants decided to focus on the TOCs as an approximation of textual analysis. The participants arranged the TOC entries from the English, German, French and Italian article revisions by their frequency across languages. This TOC comparison has shown that the articles differed in many aspects: while the German article focused on the referendum results in the different regions, the English Wikipedia covered many more aspects, especially the economic and societal implications of the withdrawal. \textbf{Multimedia features}: Wikipedia articles can contain images that can be shared across different language editions. It became evident that images containing the map of the UK and the referendum ballot paper were shared most frequently across languages. Smaller deviations were observed for several languages; e.g. the German article contained many photos of the politicians. \vspace{-1em} \subsection{Temporal Analysis} The description of ongoing events may vary substantially over time. \textbf{Textual features:} The TOC of a Wikipedia article is subject to change over time. To observe this behavior, the participants extracted the TOC three times per day from the 22nd to the 24th of June 2016 (with the 23rd of June being the referendum date). The TOCs of two of these points in time made visible that the French version did not have the referendum results on the 23rd of June, as opposed to the other language versions under consideration. Within the following day, the English TOC stayed nearly unchanged, in contrast to the other languages, where especially the German article showed a major increase of new sections.
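The TOC arrangement by cross-language frequency described above can be sketched as follows; the section titles are illustrative placeholders (already mapped to English), not the actual 2016 TOCs.

```python
from collections import Counter

# Hypothetical TOC section lists per language edition (illustrative only).
tocs = {
    "en": ["Background", "Campaign", "Results", "Economic effects", "Reactions"],
    "de": ["Background", "Results", "Results by region", "Reactions"],
    "fr": ["Background", "Campaign", "Reactions"],
    "it": ["Background", "Results", "Reactions"],
}

def rank_toc_entries(tocs):
    """Rank TOC entries by the number of language editions that contain them."""
    counts = Counter(entry for toc in tocs.values() for entry in toc)
    return counts.most_common()
```

Entries shared by all editions surface at the top of the ranking, while language-specific sections (here, the regional results in the German edition) drop to the bottom, which is exactly the kind of divergence the participants looked for.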
The analysis on the withdrawal article revealed that the German version contained detailed historical and legislative entries while the French version focused on the referendum's impact and the public opinion. \textbf{Edit-based features:} A valuable temporal aspect with respect to the cross-lingual information propagation comes from the editing process by the Wikipedia editors and the origin of the first revision per language. To explore this feature, the lists of all language editions for the referendum ($48$ language editions) and the withdrawal articles ($59$ language editions) have been analyzed using the Wikipedia Edits Scraper and IP Localizer tool. This way, the participants gained the edit histories of $107$ article language editions. They manually created two annotations for the initial revision of each article: “Creation date”, i.e. the date of the first edit, and “Origin”, i.e. the reference to other language editions that were utilized to create the respective edition. A visualization presented in Figure \ref{fig:timeline} has been created by the participants and illustrates the English articles’ development including the number of edits and the creation of the other language editions. Article editions directly translated from other languages are marked and important events related to Brexit are added to the timeline. Edit peaks correlate with the sub-events in particular during the referendum held on the 23rd of June 2016. \subsection{Network \& Controversy Analysis} The interlinking between Wikipedia articles across languages can provide useful insights into the coherence of the language editions. \textbf{Category-based features:} In Wikipedia, editors can assign categories like “Politics” or “EU-skepticism in the UK” to the articles. 
To gain more insights into the perception of Brexit within the network of Wikipedia articles, the participants analyzed this categorization in all available language editions. The majority of the categories was related to basic concepts like “Referendum”. The Scottish and the English Wikipedia were rather disconnected from the network. In the withdrawal articles, “Eurocentrism” was identified as a main category connecting languages. \textbf{Links-based features:} Each Wikipedia article can contain links to external sources that the participants extracted and compared using the MultiWiki tool. For most of the language editions, the overlap of the linked web pages was rather low, and just in some cases it reached higher values: the English and German withdrawal articles shared $17.32\%$ of links, mainly related to news pages like the BBC or The Guardian. \textbf{Edit-based features:} To observe controversies, the participants reviewed the English discussion pages of the articles using their TOCs. A major finding facilitated by this feature was a discussion among the Wikipedia editors on the question of which article the search term “Brexit” should link to. Having initially redirected to the referendum article, the term led to the withdrawal article a few days after the actual referendum. \section{Conclusion} \label{sec:conclusion} As our case study illustrates, certain types of cross-lingual analysis on Wikipedia relying mostly on non-textual features (e.g. links, images and category structures) can be performed efficiently, manually or by using existing tools, and can already provide interesting insights. In contrast, the analysis of textual content and discussion pages appears to be limited by the language barrier. \footnotesize{ \section{Acknowledgment} Partially funded by ALEXANDRIA (ERC 339233) and Data4UrbanMobility (German Federal Ministry of Education and Research (BMBF) 02K15A040). \bibliographystyle{ACM-Reference-Format}
\section{Introduction} The dynamic mode decomposition (DMD) algorithm is a powerful and versatile data-driven approach proposed by \cite{Schmid2010}, ideally suited to analyze complex high-dimensional geophysical flows in terms of recurrent or quasi-periodic modes. The DMD is indeed related to Koopman theory \cite{Rowley2009}, which states that observables of a Hamiltonian system can always be described via a linear transformation. The original DMD algorithm has much in common with the algorithm presented in \cite{Saad1980}. For practical applications and real data analysis, several follow-up algorithms have been proposed. To name a few: the optimized DMD \cite{Chen2012}, the optimal mode decomposition \cite{wynn2013}, the exact DMD \cite{Tu2014}, the Hankel DMD \cite{Arbabi2017}, the sparsity-promoting DMD \cite{Kusaba2020}, the multi-resolution DMD \cite{Kutz2015}, the extended DMD \cite{Williams2015}, DMD with control \cite{Proctor2016}, total least squares DMD \cite{Hemati2017}, dynamic distribution decomposition \cite{TaylorKing2020}, etc. These DMD algorithms are motivated by different reasons, but a key overall objective is to provide the most precise numerical approximation of the Koopman operator. When the system is ergodic and measure-preserving, this is indeed equivalent to having a precise description of the spectrum $\nu$ of the Koopman operator restricted to $\mathcal{H}_f$ and a precise mapping between $\mathcal{H}_f$ and $L^2(S^1,d\nu)$ (where $\mathcal{H}_f$ is the linear subspace generated by a single observable $f$ and $S^1$ is the unit complex circle; see section 2 for detailed definitions of these spaces). The authors of \cite{Korda2018} proved the convergence in the strong operator topology of the extended DMD algorithm, provided a complete orthogonal basis of the space of square-integrable observables.
In \cite{Arbabi2017} the convergence of the Hankel DMD algorithm is proved for the finite-dimensional case, which corresponds to a finite truncation of the discrete part of the spectrum. The Christoffel-Darboux kernel is exploited in \cite{KORDA2020599} to directly identify the discrete component and the absolutely continuous component of the spectrum. Note that DMD algorithms are not the only way to approximate the Koopman operator. In a series of papers (\cite{WilliamsRowleyKevrekidis2015}, \cite{Das2017}, \cite{Das2018}, \cite{Giannakis2018} and \cite{Giannakis2020}), the approximation of the Koopman operator is performed by kernel methods. Recently, \cite{Giannakis2020} showed the convergence of kernel methods for any measure-preserving ergodic dynamical system whose invariant measure is supported on a compact manifold. A related issue for stochastic dynamics can be found in \cite{Nuske2021}. In this manuscript, we define a Hilbert space $\mathcal{H}_f$ and the associated Koopman-like time-shift operator $\mathcal{K}$ for each given time series whose autocovariances exist. When the time series is given by an observable $f$ on an ergodic dynamical system with finite invariant measure, $\mathcal{H}_f$ coincides with the closure of the infinite-dimensional Krylov subspace generated by the classical Koopman operator and $f$, and $\mathcal{K}$ coincides with the classical Koopman operator on $\mathcal{H}_f$. We argue and prove that for any observable $f$ such that all the time-delayed autocovariances exist, when the parameters of the autocovariance matrix $C_{NM}(f)$ go to infinity in the right order, the leading eigenvalues renormalized by the dimension of $C_{NM}(f)$ converge to the energy of $f$ that is represented by the eigenvectors of $\mathcal{K}$. All other renormalized eigenvalues converge to 0 uniformly.
Beyond its theoretical interest, the main theorem directly suggests a practical algorithm to explicitly identify the Koopman eigenfrequencies together with the associated energy from a given time series. As a by-product, it also shows that the leading temporal empirical orthogonal functions calculated by the singular spectrum analysis (SSA, \cite{Ghil2002AdvancedSM}) method are indeed represented by the eigenfrequencies of $\mathcal{K}$. Similarly, this theorem also sheds light on the theoretical foundations of data-adaptive harmonic decomposition (DAHD, \cite{Kondrashov2020DataadaptiveHA}) and the Hankel alternative view of Koopman analysis (HAVOK, \cite{Brunton2017ChaosAA}), since all these methods are based either on the trajectory matrix or on $C_{NM}(f)$. The paper is organized as follows. In section 2, we present our main result and the necessary mathematical background. We also discuss how the main theorem provides theoretical support to SSA, DAHD, and HAVOK. In section 3, we give the details of the proof of the main result. In section 4, we present the detailed algorithm and compare it with another numerical method based on Yosida's mean ergodic theorem (\cite{Yosida1995}). In section 5 we present numerical results on two simple low-dimensional measure-preserving ergodic dynamical systems and on interpolated ocean sea-surface height data. Section 6 concludes this study and gives some perspectives. The code and data that reproduce all the numerical results can be accessed at https://doi.org/10.5281/zenodo.5585970. \section{Preliminaries and the main result} Given a continuous-time dynamical system \[ X_t = \Phi_t(X_0), \] and an observable $f(X_t)$, we have a time series $\{f(X_t): t=0,\Delta t,2\Delta t,...\}$. We assume that the time-delayed autocovariance of $f$ exists for all $l\in\mathbb{N}$: \begin{align} \rho_{l\Delta t} = \lim_{n\to\infty}\frac{1}{n}\sum_{k=0}^nf(X_{k\Delta t})\bar{f}(X_{(k+l)\Delta t}).
\end{align} To avoid misinterpretation, we use $F_0 = \{f(X_0), f(X_{\Delta t}),...\}$ to denote the time series associated to $f$ and a given (fixed) orbit of the dynamical system. For $l\in\mathbb{N}$, we define $F_l$ to be the time-shifted time series $\{f(X_{l\Delta t}), f(X_{(l+1)\Delta t}),...\}$. For any $a,b\in\mathbb{C}$ and any two time series associated to the same dynamics \begin{align} G = \{g_0,g_1,...\}\nonumber \\ H = \{h_0,h_1,...\}\nonumber, \end{align} we define \begin{align} aG + bH = \{ag_0+bh_0, ag_1+bh_1, ...\}. \nonumber \end{align} Let \begin{align} \tilde{\mathcal{H}}_f = \{\sum_{i=1}^n c_i F_{l_i}: n\geq 1, c_i\in\mathbb{C}, l_i\in\mathbb{N}\}. \end{align} Then $\tilde{\mathcal{H}}_f$ is a linear space. Now we define the Koopman-like (or time-shift) operator on $\tilde{\mathcal{H}}_f$. We start with, for any $l\in\mathbb{N}$, \begin{align} \mathcal{K}^{\Delta t}F_l = F_{l+1}, \end{align} and then generalize the action of $\mathcal{K}^{\Delta t}$ to the whole of $\tilde{\mathcal{H}}_f$. It is not hard to show that $\mathcal{K}^{\Delta t}: \tilde{\mathcal{H}}_f\to \tilde{\mathcal{H}}_f$ is well-defined. The existence of $\rho_s$ allows us to define an inner product on $\tilde{\mathcal{H}}_f$ by \begin{align} \inp{h}{g} = \lim_{n\to\infty}\frac{1}{n}\sum_{k=0}^{n-1}h_k\bar{g}_k, \end{align} where $h,g\in\tilde{\mathcal{H}}_f$. For any $l_1,l_2\in\mathbb{N}$, it is obvious that \begin{align} \rho_{l_1\Delta t} = \lim_{n\to\infty}\frac{1}{n}\sum_{k=0}^nf(X_{(k+l_2)\Delta t})\bar{f}(X_{(k+l_1+l_2)\Delta t}). \end{align} Hence $\mathcal{K}^{\Delta t}$ preserves the inner product on $\tilde{\mathcal{H}}_f$ and is therefore continuous. Let $\mathcal{H}_f$ be the completion of $\tilde{\mathcal{H}}_f$; $\mathcal{H}_f$ is a Hilbert space. Since $\mathcal{K}^{\Delta t}$ preserves the inner product of $\tilde{\mathcal{H}}_f$, it can be extended, by continuity, to an isometric operator that acts on $\mathcal{H}_f$.
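A finite-sample version of the autocovariance that defines this inner product can be sketched as follows (a truncation of the $n\to\infty$ limit, not the paper's own code):

```python
import cmath

def autocovariance(series, lag):
    """Empirical rho_l = (1/n) * sum_k f(k) * conj(f(k + lag)),
    truncating the limit that defines <F_0, F_lag>."""
    n = len(series) - lag
    return sum(series[k] * series[k + lag].conjugate() for k in range(n)) / n

# For a pure harmonic f(k) = e^{i w k}, the summand is constant and
# rho_l equals e^{-i w l} exactly, for every sample size.
signal = [cmath.exp(1j * 0.7 * k) for k in range(500)]
```

This is consistent with $\mathcal{K}$ acting on such a harmonic as multiplication by $e^{i\omega}$: shifting the series by $l$ steps multiplies the inner product with $F_0$ by $e^{-i\omega l}$.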
For the sake of simplicity we keep the same notation for the extended operator; $\mathcal{K}^{\Delta t}$, whose domain is now $\mathcal{H}_f$, is the Koopman-like (or time-shift) operator we study in this paper. \begin{remark} The classical Koopman operator is defined to act on some function space on the whole phase space of a dynamical system, i.e. \begin{align} \mathcal{K}^{\text{cl}}g (x) = g(\Phi(x)), \end{align} where $x\in X$ is a state and $\Phi$ is the discrete-time flow of the dynamical system. In our setting, the definition of $\mathcal{K}^{\Delta t}$ and $\mathcal{H}_f$ relies purely on the time series; the dynamical system remains hidden behind it. It is possible that the time series reveals only partial properties of the dynamical system. When the discrete-time dynamical system is ergodic and has a finite invariant measure $\mu$, the Birkhoff ergodic theorem guarantees that $\mathcal{H}_f$ is isomorphic to the closure of the Krylov subspace $\overline{\text{Span}\{f,\mathcal{K}^{\text{cl}}f,...\}}\subset L^2(X,d\mu)$ as Hilbert spaces. The isomorphism is given by \begin{align} \phi: \sum_{i=1}^nc_iF_{l_i} \rightarrow \sum_{i=1}^nc_i(\mathcal{K}^{\text{cl}})^{l_i}f. \end{align} It is not hard to see that $\mathcal{K}^{\Delta t}$ acts on $\mathcal{H}_f$ in a similar way as $\mathcal{K}^{\text{cl}}$ acts on the space of observables $L^2(X,d\mu)$, i.e. for any $h\in \mathcal{H}_f$, \begin{align} \phi(\mathcal{K}h) = \mathcal{K}^{\text{cl}}(\phi(h)). \end{align} The elements in $\tilde{\mathcal{H}}_f$ can always be represented as time series. But in general we cannot assert that every element in $\mathcal{H}_f$ can be represented as a time series. In particular, we do not know whether the eigenvectors $v_i\in\mathcal{H}_f$ of $\mathcal{K}$ can be represented as a time series of the form $\{1,\xi_i, \xi_i^2,...\}$. Nor, in general, can we identify $\mathcal{H}_f$ with some space of functions on the phase space.
Hence, without these assumptions, the time-shift operator and the Koopman operator cannot be strictly related. Ergodicity together with a finite invariant measure is a stronger assumption than the existence of the autocovariances. As long as the autocovariances exist, the quantities mentioned in the main result are mathematically well-defined. Therefore we do not restrict ourselves to the case where the system is ergodic and has a finite invariant measure, but keep the generality in the definition of $\mathcal{H}_f$ and $\mathcal{K}$. The physical meaning of $\mathcal{H}_f$ and $\mathcal{K}$ in the general case needs to be studied further. For readers who are interested in the classical Koopman operator $\mathcal{K}^{\text{cl}}$, we point out that $\mathcal{K}$ indeed coincides with $\mathcal{K}^{\text{cl}}$ when the system is ergodic and has a finite invariant measure. \end{remark} If $\{f(X_t): t\geq 0\}$ is a continuous-time time series, we assume that \begin{align} \rho_s = \lim_{T\to\infty}\frac{1}{T}\int_{0}^Tf(X_t)\bar{f}(X_{t+s})dt \end{align} exists for every $s\geq 0$. Similar to the discrete-time case, we define \begin{align} F_{s} = \{f(X_t):t\geq s\} \end{align} to be the continuous time series that starts from $f(X_s)$. We can define the linear space \begin{align} \tilde{\mathcal{H}}^{\text{cont}}_f = \{\displaystyle\sum_{i=1}^n c_iF_{s_i}: c_i\in\mathbb{C}, s_i\geq 0, n\geq 1\}, \end{align} with inner product \begin{align} \inp{h}{g} = \lim_{T\to\infty}\frac{1}{T}\int_{0}^Th(t)\bar{g}(t)dt. \end{align} Let $\mathcal{H}^{\text{cont}}_f$ be the completion of $\tilde{\mathcal{H}}^{\text{cont}}_f$. $\mathcal{K}^s$ is an isometry on $\mathcal{H}^{\text{cont}}_f$ for all $s\geq 0$. We are interested in the eigenvalues and eigenvectors of $\mathcal{K}^{\Delta t}$ (for the discrete-time case) and $\mathcal{K}^s$ (for the continuous-time case). A natural question is whether the eigenfrequencies of $\mathcal{K}^{\Delta t}$ and $\mathcal{K}^{s}$ are the same.
We have the following result. \begin{prop}\label{prop:cont K = disc K} Let $\{f(X_t):t\geq 0\}$ be a continuous-time process for which $\rho_s$ exists for all $s\geq 0$. Assume that the curve $\mathcal{K}^s: [0,\infty)\to\mathcal{H}_f^{\text{cont}}$ is continuous in $s$. Let $\Delta t>0$ be a time step. Assume that \begin{align} &\lim_{T\to\infty}\frac{1}{T}\int_{0}^Tf(X_t)\bar{f}(X_{t+s})dt\nonumber \\ =&\lim_{T\to\infty}\frac{\Delta t}{T}\sum_{\mathbb{N}\ni n=0}^{T/\Delta t}f(X_{n\Delta t})\bar{f}(X_{n\Delta t+s}),\label{eq: inner product cont=disc} \end{align} then $\mathcal{H}_f \hookrightarrow \mathcal{H}^{\text{cont}}_f$. Let $q$ be an eigenfrequency of the discrete-time operator $\mathcal{K}^{\Delta t}$, i.e. there exists $h\in \mathcal{H}_f \hookrightarrow \mathcal{H}^{\text{cont}}_f$ so that $\mathcal{K}^{\Delta t}h = e^{iq}h$. Then there exist an integer $k$ and $h_k\in\mathcal{H}^{\text{cont}}_f$ so that \begin{align} \mathcal{K}^sh_k = e^{i\frac{q+2k\pi}{\Delta t}s}h_k \end{align} for all $s\geq 0$. \end{prop} The proof of this proposition is given in the appendix. Proposition \ref{prop:cont K = disc K} guarantees that for every eigenfrequency $q$ of $\mathcal{K}^{\Delta t}$, there always exists an eigenfrequency of $\mathcal{K}^s$, for all $s\geq 0$, which reduces to $q$ at the discrete time step. Our discussion of continuous-time time series stops here. Hereinafter we always assume that the time series $\{f(X_t)\}$ is discrete in time. For simplicity the time series is denoted by $\{f(0), f(1),...\}$, and we use $f$ to denote the whole time series generated by $f$ as an element of $\mathcal{H}_f$. We use the notation $\mathcal{K}$ in place of $\mathcal{K}^{\Delta t}$. Recall that the time-shift operator $\mathcal{K}$ is an isometry on the Hilbert space $\mathcal{H}_f$ of time series associated to the observable $f$. \begin{remark} Due to the Birkhoff ergodic theorem, the autocovariances exist when the dynamical system is ergodic and has a finite invariant measure.
\end{remark} The following theorem provides a very useful result to decompose any isometric operator, and the Hilbert space on which it acts, into a unitary and a non-unitary part. \begin{theorem}[Wold decomposition]\label{thm:Wold} Let $\mathcal{H}$ be a Hilbert space and $\mathcal{V}$ any isometry of $\mathcal{H}$. Then we have an orthogonal decomposition $\mathcal{H} = \mathcal{H}_{NU}\bigoplus\mathcal{H}_U$, and $\mathcal{V} = \mathcal{V}_{NU}\bigoplus\mathcal{V}_U$, such that $\mathcal{H}_{NU} = \displaystyle\bigoplus_{s\in I}\mathcal{H}_{s}$ for some index set $I$, and $\mathcal{V}_{NU}$ acts on each $\mathcal{H}_s$ as a unilateral shift, i.e. $\mathcal{V}_{NU}(v_0,v_1,...) = (0, v_0,v_1,...)$. $\mathcal{V}_U$ acts on $\mathcal{H}_U$ and is unitary. $\mathcal{H}_{NU}$ is called the completely non-unitary part of $\mathcal{H}$, as it does not contain closed subspaces of $\mathcal{H}$ on which $\mathcal{V}$ acts as a unitary operator. \end{theorem} Wold's theorem is a particular case of the Sz\H{o}kefalvi-Nagy--Foia\c{s} theorem for contraction operators. \begin{theorem}[Sz\H{o}kefalvi-Nagy--Foia\c{s}] Let $\mathcal{T}$ be a contraction operator (i.e. $\|\mathcal{T}\| \leq 1$) on a Hilbert space $\mathcal{H}$; then \[ \mathcal{H}_U := \displaystyle\cap_{k\geq 0}(\mathrm{fix}(\mathcal{T}^k\mathcal{T}^{*k})\cap\mathrm{fix}(\mathcal{T}^{*k}\mathcal{T}^k)) \] is the largest space among all closed $\mathcal{T}$-invariant and $\mathcal{T}^*$-invariant subspaces of $\mathcal{H}$ on which $\mathcal{T}$ restricts to a unitary operator. The orthogonal complement $\mathcal{H}_U^\perp =\mathcal{H}_{NU} $ is the completely non-unitary part of $\mathcal{H}$. Here $\mathrm{fix}(A)$ refers to the subspace spanned by all the invariant vectors of the operator $A$. \end{theorem} Theorem \ref{thm:Wold} implies an orthogonal decomposition $f = f_{NU} + f_U$. Note that $\mathcal{H}_{f,NU}$ and $\mathcal{H}_{f,U}$ are invariant under $\mathcal{K}$ and $\mathcal{H}_{f,NU}\perp\mathcal{H}_{f,U}$.
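A minimal example (ours, not from the cited references) may help fix ideas: on $\ell^2(\mathbb{N})\oplus L^2(S^1)$, consider the isometry

```latex
\mathcal{V} = S \oplus M_z, \qquad
S(v_0,v_1,v_2,\dots) = (0,v_0,v_1,\dots), \qquad
(M_z g)(z) = z\, g(z).
```

Here $S$ is the unilateral shift, the canonical completely non-unitary isometry (it has no closed invariant subspace on which it restricts to a unitary), while $M_z$, multiplication by $z$ on $L^2(S^1)$, is unitary; the Wold decomposition of $\mathcal{V}$ is exactly this direct sum.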
Then the fact that $\mathcal{H}_f$ is generated by $f$ implies that $\mathcal{H}_{f,U} = \overline{\text{Span}_{\mathbb{C}}\{f_U,\mathcal{K}f_U,...\}}$. Note that for any eigenvector $h\in\mathcal{H}_f$ of $\mathcal{K}$ associated to an eigenvalue $\lambda$, we have an orthogonal decomposition $h = h_{f,U} + h_{f,{NU}}$, where $h_{f,U}\in\mathcal{H}_{f,U}$ and $h_{f,{NU}}\in\mathcal{H}_{f,{NU}}$. Then $\mathcal{K}h = \lambda h = \mathcal{K}h_{f,U} + \mathcal{K}h_{f,{NU}}$. Because $\mathcal{H}_{f,U}$ and $\mathcal{H}_{f,{NU}}$ are invariant subspaces, $h = \mathcal{K}h_{f,U}/\lambda + \mathcal{K}h_{f,{NU}}/\lambda$ is an orthogonal decomposition with $\mathcal{K}h_{f,U}/\lambda\in\mathcal{H}_{f,U}$ and $\mathcal{K}h_{f,{NU}}/\lambda\in\mathcal{H}_{f,{NU}}$, implying that $h_{f,U}$ and $h_{f,NU}$ are both $\mathcal{K}$-eigenvectors (or zero) for the same eigenvalue. $h_{f,{NU}}$ must be zero because $\mathcal{K}$ acts on $\mathcal{H}_{f,{NU}}$ as a unilateral shift, which has no eigenvectors. Hence every $\mathcal{K}$-eigenvector lies inside $\mathcal{H}_{f,U}$. \begin{definition}\label{def:f-cyc} A Hilbert space $\mathcal{H}$ with a unitary operator $U$ is called $f$-cyclic if $\mathcal{H} = \overline{\text{Span}_{\mathbb{C}}\{f,Uf,...\}}$ for some $f\in\mathcal{H}$. \end{definition} \begin{theorem}[Spectral theorem for unitary operators]\label{thm:unitary spectral} Let $\mathcal{H}$ be a Hilbert space and $U$ a unitary operator on $\mathcal{H}$. Assume that $\mathcal{H}$ is $f$-cyclic for some $f\in\mathcal{H}$. Then there exist a finite measure $\nu_f$ on the unit circle $S^1\subset\mathbb{C}$ and an isomorphism \begin{align} \phi: \mathcal{H} &\rightarrow L^2(S^1,d\nu_f) \end{align} such that $\phi\circ U\circ\phi^{-1} (g)(z) = zg(z)$ for any $g\in L^2(S^1,d\nu_f)$ and any $z\in S^1$. In particular, $\phi(f) = 1$. \end{theorem} See Lemma 5.4 in \cite{spectraltheoryBorthwick2020} for a proof.
Note that Lemma 5.4 in \cite{spectraltheoryBorthwick2020} assumes that $\mathcal{H} = \overline{\text{Span}_{\mathbb{C}}\{...,U^{-1}f,f,Uf,...\}}$, which is a weaker assumption than $\mathcal{H}$ being $f$-cyclic in the sense of Definition \ref{def:f-cyc}. Therefore Theorem \ref{thm:unitary spectral} applies to $\mathcal{H}_{f,U}$ and $\mathcal{K}$. In general, the spectral measure $\nu$ consists of a discrete component, a singular-continuous component, and an absolutely continuous component (with respect to the Lebesgue measure): $\nu = \nu_d + \nu_{sc} + \nu_{ac}$. The three components are mutually singular, so any $g\in L^2(S^1,d\nu)$ can be written as $g = g_{d} + g_{sc} + g_{ac}$, where the three parts are supported on the pairwise disjoint supports of $\nu_d$, $\nu_{sc}$ and $\nu_{ac}$, respectively. Together with Theorem \ref{thm:Wold}, this yields the orthogonal decomposition $\mathcal{H}_f = \mathcal{H}_{f,d}\oplus\mathcal{H}_{f,NU}\oplus\mathcal{H}_{f,sc} \oplus \mathcal{H}_{f,ac}$, and \begin{align} f = f_{NU} + f_{d} + f_{sc} + f_{ac}.\label{eq:f_decomp} \end{align} It is easy to see that $\mathcal{H}_{f,d},\mathcal{H}_{f,NU},\mathcal{H}_{f,sc}$ and $\mathcal{H}_{f,ac}$ are invariant subspaces of $\mathcal{K}$. In particular, the discrete part $\nu_d$ is a finite or countable sum of Dirac measures $\nu_d= \displaystyle\sum_i |a_i|^2\delta_{\xi_i}$, where $\xi_i\in S^1$ is the support of $\delta_{\xi_i}$. Hence we can write \begin{align} \phi(f_d) = \sum_{i} \mathds{1}_{\{\xi_i\}}, \end{align} where $\mathds{1}_{\{\xi_i\}}(z) = 1$ if $z=\xi_i$ and $0$ otherwise. As such, for every $\xi_i\in\text{Supp}(\nu_d)$, $\phi^{-1}(\mathds{1}_{\{\xi_i\}})$ is an eigenvector of $\mathcal{K}$. On the other hand, let $h\in \mathcal{H}_{f,U}$ be an eigenvector of $\mathcal{K}$, i.e. $\mathcal{K}h = \xi h$ for some $\xi\in S^1$. Then $\|\xi\phi(h) - z\phi(h)\|^2 = 0$ in $L^2(S^1,d\nu)$.
Let $A = \{z:z\phi(h)(z)\neq \xi\phi(h)(z)\}$; then $\nu(|\mathds{1}_{A}\phi(h)|^2) = 0$, meaning that $\mathds{1}_A\phi(h) = 0$ in $L^2(S^1,d\nu)$. Hence $\phi(h) = \mathds{1}_{\{\xi\}}\phi(h)$, and $\nu(\{\xi\}) > 0$. This shows that there is a one-to-one correspondence between the support of the discrete measure $\nu_d$ and the $\mathcal{K}$-eigenvectors inside $\mathcal{H}_{f,U}$. In particular, all eigenvalues of $\mathcal{K}$ are simple. Let $v_i$ be the corresponding normalized $\mathcal{K}$-eigenvectors; then we have \begin{align} f = \sum_{i}a_iv_i + f_{sc} + f_{ac} + f_{NU}.\label{eq:f decompose} \end{align} Our goal is to evaluate $|a_i|^2$; the numerical tool is the Gram matrix $G_{NM}(f)$, whose $(i,j)$-entry is \begin{align} G_{NM,ij}(f) = \frac{1}{M}\sum_{t=0}^Mf(i+t)\bar{f}(j+t). \end{align} We also define the autocovariance matrix \begin{align} C_{N}(f) :=& \begin{pmatrix} \rho_0 & \rho_1 & \cdots & \rho_{N} \\ \bar{\rho}_{1} & \rho_0 & \cdots & \rho_{N-1}\\ \vdots & & \ddots & \vdots \\ \bar{\rho}_{N} & \bar{\rho}_{N-1} & \cdots & \rho_0 \end{pmatrix}\nonumber \\ =& \begin{pmatrix} \inp{f}{f} & \inp{f}{\mathcal{K}f} & \cdots & \inp{f}{\mathcal{K}^Nf} \\ \inp{\mathcal{K}f}{f} & \inp{\mathcal{K}f}{\mathcal{K}f} & \cdots & \inp{\mathcal{K}f}{\mathcal{K}^{N}f}\\ \vdots & & \ddots & \vdots \\ \inp{\mathcal{K}^Nf}{f} & \inp{\mathcal{K}^Nf}{\mathcal{K}f} & \cdots & \inp{\mathcal{K}^Nf}{\mathcal{K}^{N}f} \end{pmatrix}\nonumber \end{align} It is obvious that for any $N\in\mathbb{N}$, \begin{align} \lim_{M\to\infty}G_{NM}(f) = C_{N}(f). \end{align} Recall that $f = f_d + f_{NU} + f_{sc} + f_{ac}$ and that these four components are pairwise orthogonal. Hence \begin{align} C_{N}(f) = C_{N}(f_d) + C_{N}(f_{NU}) + C_{N}(f_{sc}) + C_{N}(f_{ac}). \end{align} Let $d_{NM,1} \geq d_{NM,2} \geq \dots \geq 0$ be the eigenvalues of $G_{NM}(f)$, and let $d_{N,1}\geq d_{N,2} \geq ...\geq 0$ be the eigenvalues of $C_{N}(f)$. It is clear that \begin{align} \lim_{M\to\infty}d_{NM,i} = d_{N,i}. \end{align} Our main result states the following: \begin{theorem}[Main result]\label{thm: main result gram} Assume that the autocovariance $\rho_s$ exists for all $s\in\mathbb{N}$. Let $\{v_i\}$ be the unit-length eigenvectors of the time-shift operator. Let $f \in \mathcal{H}_f$, the Hilbert space of observable time series, with $f = \displaystyle\sum_{i=1}^{\infty}a_iv_i + f_{sc} + f_{ac} + f_{NU}$, where $f_{sc}$ and $f_{ac}$ are the components of $f$ associated to the singular-continuous and absolutely continuous spectrum, and $f_{NU}$ is the component of $f$ in the completely non-unitary subspace (i.e. the direct sum of unilateral shift spaces). Assume that $|a_1| \geq |a_2| \geq \dots \geq 0$. Then for any $i$: \begin{align} \lim_{N\rightarrow\infty}\lim_{M\rightarrow\infty}\frac{d_{NM,i}}{N} = |a_i|^2.\label{eq:dnm_a} \end{align} \end{theorem} For a given observable $f$, the trajectory matrix is defined as \begin{align} A_{NM}(f) &= \begin{pmatrix} f(0) & f(1) & \dots & f(M) \\ f(1) & f(2) & \dots & f(M+1) \\ \vdots & & & \vdots \\ f(N) & f(N+1) & \dots & f(N+M) \end{pmatrix}.\label{A_NM} \end{align} Then it is clear that \begin{align} G_{NM}(f) = \frac{1}{M}A_{NM}(f)A_{NM}(f)^*, \end{align} where $A^*$ refers to the conjugate transpose of $A$. Let $\delta_{NM,1}\geq \delta_{NM,2}\geq \dots \geq 0$ be the singular values of $A_{NM}(f)$. Then directly we have $\delta_{NM,i} = \sqrt{Md_{NM,i}}$. \begin{corollary}[Trajectory matrix version]\label{cor:main_traj} Assume $\rho_s$ exists for all $s\in\mathbb{N}$. Then \begin{align} \lim_{N\to\infty}\lim_{M\to\infty}\frac{\delta_{NM,i}^2}{NM} = |a_i|^2.\label{eq:cnm_a} \end{align} \end{corollary} \begin{remark} The Gramian matrix is used by singular spectrum analysis methods (\cite{Ghil2002AdvancedSM}) to construct temporal modes of a given time series.
The eigenfunctions of $G_{NM}$ are called temporal empirical orthogonal functions (EOFs). Theorem \ref{thm: main result gram} implies that the leading temporal EOFs correspond to theoretical eigenfrequencies of $\mathcal{K}$. Similarly, the data-adaptive harmonic decomposition (DAHD, \cite{Kondrashov2020DataadaptiveHA}) and the Hankel alternative view of Koopman analysis (HAVOK, \cite{Brunton2017ChaosAA}) are based on the Gramian matrix and the trajectory matrix, respectively. The main theorem and its corollary directly provide a way to identify which features extracted by SSA, DAHD, or HAVOK are related to eigenfrequencies of $\mathcal{K}$, and which are not. \end{remark} The main result can be summarized as the following abstract theorem on isometric operators of a Hilbert space. To the best of our knowledge, this result has not appeared in the previous literature. \begin{theorem}[Main result in pure mathematical form] Let $\mathcal{H}$ be a Hilbert space and $\mathcal{K}$ an isometry of $\mathcal{H}$. The inner product in $\mathcal{H}$ is denoted by $\inp{ }{ }$. Let $f\in\mathcal{H}$ be any vector. Let \begin{align} C_{N}(f) = \begin{pmatrix} \inp{f}{f} & \inp{f}{\mathcal{K}f} & \cdots & \inp{f}{\mathcal{K}^Nf} \\ \inp{\mathcal{K}f}{f} & \inp{\mathcal{K}f}{\mathcal{K}f} & \cdots & \inp{\mathcal{K}f}{\mathcal{K}^Nf}\\ \vdots & & \ddots & \vdots\\ \inp{\mathcal{K}^Nf}{f} & \inp{\mathcal{K}^Nf}{\mathcal{K}f} & \cdots & \inp{\mathcal{K}^Nf}{\mathcal{K}^Nf} \end{pmatrix}. \end{align} Let $v_i$ be the eigenvectors of $\mathcal{K}$ in $\mathcal{H}$, and let \begin{align} f = \sum_{i}a_iv_i + f^{\perp} \end{align} be the decomposition of $f$, where $f^{\perp}$ is perpendicular to the subspace spanned by all the eigenvectors of $\mathcal{K}$. There may be uncountably many $v_i$ in $\mathcal{H}$, but only countably many are included in the summation. Let $d_{N,1}\geq d_{N,2}\geq \cdots \geq 0$ be the eigenvalues of $C_{N}$, and assume that $|a_1|\geq |a_2|\geq \cdots \geq 0$. Then \begin{align} \lim_{N\to\infty}\frac{d_{N,i}}{N} = |a_i|^2. \end{align} \end{theorem} \section{Proof of the main theorem} We first present several lemmas which are independent of the language of Koopman theory. \begin{lemma}\label{lem:contr} Let $\mathcal{T}$ be a contraction on a Hilbert space $\mathcal{H}$. Then for every $f, g \in \mathcal{H}_{NU}$, \[ \lim_{n\to \infty}\langle \mathcal{T}^n f, g\rangle_{\mathcal{H}} =0. \] \end{lemma} \begin{proof}[Proof of lemma \ref{lem:contr}] For every $f\in \mathcal{H}$, the sequence $(\|\mathcal{T}^n f\|)_{n\in \mathbb{N}}$ is decreasing, thus convergent. For any $k\in\mathbb{N}$, we have \begin{align*} &\|\mathcal{T}^{*k} \mathcal{T}^k \mathcal{T}^n f - \mathcal{T}^n f \|^2 = \|\mathcal{T}^{*k} \mathcal{T}^k \mathcal{T}^n f\|^2 - \nonumber \\ &2 \operatorname{Re}\left\langle \mathcal{T}^{*k} \mathcal{T}^k \mathcal{T}^n f, \mathcal{T}^n f\right\rangle + \|\mathcal{T}^n f \|^2\\ =& \|\mathcal{T}^{*k}\mathcal{T}^{n+k} f\|^2 -2 \|\mathcal{T}^{n+k} f\|^2 + \|\mathcal{T}^n f \|^2\\ \leq& \|\mathcal{T}^{n+k} f\|^2 -2 \|\mathcal{T}^{n+k} f\|^2 + \|\mathcal{T}^n f \|^2 \\ =& \|\mathcal{T}^n f \|^2 - \|\mathcal{T}^{n+k} f\|^2 \to 0 \text{ as } n\to \infty. \end{align*} Hence $\langle (I - \mathcal{T}^{*k}\mathcal{T}^k)\mathcal{T}^n f, g \rangle_{\mathcal H} \to 0$ for every $f,g \in {\mathcal H}$ as $n\to \infty$; therefore, \[ \langle \mathcal{T}^n f, g\rangle_{\mathcal H} \to 0 \text{ for every } g\in \mathrm{ran} (I - \mathcal{T}^{*k}\mathcal{T}^k) \text{ as } n\to \infty. \] The same argument for $\mathcal{T}^{*}$ yields \[ \langle \mathcal{T}^n f, g\rangle_{\mathcal H}= \langle f, \mathcal{T}^{*n} g\rangle_{\mathcal H} \to 0 \text{ for every } g\in \mathrm{ran} (I - \mathcal{T}^k\mathcal{T}^{*k})\] as $n\to \infty$.
We obtain that $\langle \mathcal{T}^n f, g\rangle_{\mathcal H}\to 0$ as $n\to \infty$ for every \begin{align*} f,g &\in \sum_{k\geq 1}(\mathrm{ran} (I - \mathcal{T}^{*k}\mathcal{T}^k) + \mathrm{ran} (I - \mathcal{T}^k\mathcal{T}^{*k}))\\ &= \left(\cap_{k\geq 1} \left(\mathrm{ran} (I - \mathcal{T}^{*k}\mathcal{T}^k)^{\perp} \cap \mathrm{ran} (I - \mathcal{T}^k\mathcal{T}^{*k})^{\perp}\right)\right)^{\perp}\\ &= \left(\cap_{k\geq 1} (\mathrm{fix} (\mathcal{T}^{*k}\mathcal{T}^k) \cap \mathrm{fix} (\mathcal{T}^k \mathcal{T}^{*k})) \right)^{\perp} ={\mathcal H}_U^{\perp} = {\mathcal H}_{NU}. \end{align*} \end{proof} \begin{lemma}\label{lem:shift_product} Let $\mathcal{H}$ be a Hilbert space over $\mathbb{C}$, of finite or infinite dimension, and let $a_0,a_1,...\in\mathcal{H}$ be such that $\displaystyle\sum_i\|a_i\|^2 < \infty$. Then \begin{align} \lim_{k\rightarrow\infty}\sum_{i}|\inp{a_{i}}{a_{i+k}}| = 0. \end{align} \end{lemma} \begin{proof}[Proof of Lemma \ref{lem:shift_product}] Without loss of generality, we may assume that $\displaystyle\sum_{i}\|a_i\|^2 = 1$. For any $\epsilon > 0$, there exists $N$ such that $\displaystyle\sum_{i\geq N}\|a_i\|^2 \leq \epsilon/2$. Further, there exists $M>N$ such that for any $i\leq N$ with $a_i\neq 0$ and any $j>M$, $\|a_j\| < \frac{\epsilon}{2}\|a_i\|$. Now for any $k>M$, \begin{align} &\sum_{i}|\inp{a_i}{a_{i+k}}| = \sum_{i\leq N}|\inp{a_i}{a_{i+k}}| + \sum_{i>N}|\inp{a_i}{a_{i+k}}| \\ \leq& \sum_{i\leq N}\|a_i\|\|a_{i+k}\| + \frac{1}{2}\sum_{i>N}(\|a_i\|^2 + \|a_{i+k}\|^2)\nonumber \\ \leq& \frac{\epsilon}{2}\sum_{i\leq N}\|a_i\|^2 + \sum_{i>N}\|a_i\|^2\nonumber \\ \leq& \frac{\epsilon}{2} + \frac{\epsilon}{2} = \epsilon.\nonumber \end{align} \end{proof} \begin{lemma}[The weakly mixing property]\label{lem:weakly mixing} Let $\nu$ be a continuous measure on $S^1$.
Then for any $f,g\in L^2(S^1,d\nu)$, with $z$ denoting the coordinate function on $S^1$, \begin{align} \lim_{N\to\infty}\frac{1}{N}\sum_{i=0}^N|\nu(z^if\bar{g})|^2 = 0. \end{align} \end{lemma} \begin{proof} The proof of this mixing theorem can be found on page 39 of \cite{Halmos1956}. An alternative proof is given in the appendix. \end{proof} \begin{lemma}\label{lem:nu_d1} Let $L>0$ and $\xi_1,...,\xi_{L} \in S^1$ with $\xi_i\neq \xi_j$ for $i\neq j$, and let $c_1\geq...\geq c_L > 0$. Then \begin{align} \lim_{N\rightarrow\infty}\max_{\sum_{i=0}^N|\alpha_i|^2 = 1}\frac{1}{N}\sum_{k=1}^Lc_k^2\Big|\sum_{i=0}^N\alpha_i\xi_k^i\Big|^2 = c_1^2. \label{eq:nu_d equiv} \end{align} \end{lemma} \begin{proof}[Proof of lemma \ref{lem:nu_d1}] Let \begin{align} E_{N} &= \frac{1}{\sqrt{N+1}}\begin{pmatrix} 1 & 1 & \dots & 1 \\ \xi_1 & \xi_2 & \dots & \xi_L \\ \vdots & & & \vdots \\ \xi_1^N & \xi_2^N & \dots & \xi_L^N \end{pmatrix}\nonumber \\ &= (\Phi_{N,1},...,\Phi_{N,L}),\\ \text{and } C &= \text{diag}(c_1,...,c_L). \end{align} Then $\|\Phi_{N,k}\| = 1$ and, for $k\neq l$, $\inp{\Phi_{N,k}}{\Phi_{N,l}} = \frac{1 - (\xi_k\xi_l^{-1})^{N+1}}{(N+1)(1-\xi_k\xi_l^{-1})}\rightarrow 0$ as $N\rightarrow \infty$. Applying the Gram--Schmidt process to $E_N$, we get a matrix $\tilde{E}_N = E_NS_N$ whose columns are pairwise orthogonal and of unit length. $S_N\rightarrow I_L$ as $N\rightarrow\infty$, where $I_L$ denotes the identity matrix; hence $\displaystyle\lim_{N\rightarrow\infty}S_NC = C$. As a consequence, the leading singular value $d_{N,1}$ of $E_NC$ converges to $c_1$. On the other hand, for any $\alpha = (\alpha_0,...,\alpha_N)\in\mathbb{C}^{N+1}$ with $\|\alpha\|=1$, direct computation shows that \begin{align} \|\alpha^{\top} E_NC\|^2 = \frac{1}{N+1}\sum_{k=1}^Lc_k^2\Big|\sum_{i=0}^N\alpha_i\xi_k^i\Big|^2. \end{align} Hence \begin{align} \max_{\|\alpha\| = 1}\frac{1}{N+1}\sum_{k=1}^Lc_k^2\Big|\sum_{i=0}^N\alpha_i\xi_k^i\Big|^2 = \|E_NC\|^2 = d_{N,1}^2\rightarrow c_1^2. \end{align} \end{proof} Now we can start to prove the main result.
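As a numerical sanity check of Lemma \ref{lem:nu_d1} (an illustration only, not part of the proof), one can verify that the leading singular value of $E_NC$ approaches $c_1$ for an illustrative choice of distinct unimodular frequencies and weights; the function name and the values of \texttt{freqs} and \texttt{weights} below are our own choices:

```python
import numpy as np

# Sketch of Lemma (eq:nu_d equiv): for distinct points xi_k on S^1 with
# weights c_1 >= ... >= c_L > 0, the leading singular value of E_N C
# converges to c_1 as N grows.  Frequencies/weights are illustrative.
def leading_singular_value(N, freqs, weights):
    xi = np.exp(2j * np.pi * np.asarray(freqs))   # points on the unit circle
    powers = np.arange(N + 1)[:, None]            # exponents 0..N
    E = xi[None, :] ** powers / np.sqrt(N + 1)    # normalized Vandermonde E_N
    C = np.diag(weights)
    return np.linalg.svd(E @ C, compute_uv=False)[0]

freqs = [0.10, 0.23, 0.37]
weights = [1.0, 0.7, 0.4]
s1 = leading_singular_value(2000, freqs, weights)
# s1 is close to c_1 = 1.0 for large N
```

For well-separated frequencies the columns of $E_N$ become nearly orthonormal, so the convergence is already visible at moderate $N$.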
Recall that $f = f_d + f_{sc} + f_{ac} + f_{NU}$ and $C_{N}(f) = C_{N}(f_{NU}) + C_{N}(f_d) + C_{N}(f_{ac}) + C_{N}(f_{sc})$. For any positive semi-definite matrix $A$, we define $\|A\| = \displaystyle\max_{v\neq 0}\frac{v^{\top}A\bar{v}}{\|v\|^2}$; the maximal eigenvalue $d_1$ of $A$ equals $\|A\|$. If Theorem \ref{thm: main result gram} holds for $i=1$, we can deduce Theorem \ref{thm: main result gram} for all $i$ recursively, by removing $a_iv_i$ from $f$ at each step. Since \begin{align} \lim_{M\to\infty}G_{NM}(f) = C_{N}(f), \end{align} it is thus sufficient to prove that \begin{align} &\lim_{N\rightarrow\infty}\frac{\|C_{N}(f_{NU})\|}{N} = \lim_{N\rightarrow\infty}\frac{\|C_{N}(f_{ac})\|}{N} \nonumber \\ =& \lim_{N\rightarrow\infty}\frac{\|C_{N}(f_{sc})\|}{N} = 0,\label{eq:ANM/NM s sc ac} \end{align} and that \begin{align} \lim_{N\rightarrow\infty}\frac{\|C_{N}(f_{d})\|}{N} = |a_1|^2.\label{eq:ANM/NM d} \end{align} Now fix $N$. For any $g\in \mathcal{H}_f$, \begin{align} \|C_{N}(g)\| &=\max_{\alpha\neq 0}\frac{\|\sum_{i=0}^N\alpha_i\mathcal{K}^ig\|^2}{|\alpha_0|^2+...+|\alpha_N|^2}. \end{align} Hence Eqs.\eqref{eq:ANM/NM s sc ac} and \eqref{eq:ANM/NM d} are equivalent to the following: \begin{align} \lim_{N\rightarrow\infty}\max_{\alpha\neq 0}\frac{\|\sum_{i=0}^N\alpha_i\mathcal{K}^ig\|^2}{N(|\alpha_0|^2+...+|\alpha_N|^2)} = \begin{cases} |a_1|^2 \!\!\!\!\!\!&\text{if $g=f_d$, } \\ 0 &\text{if $g = f_{sc}$, $f_{ac}$ or $f_{NU}$}.\label{eq:alpha X} \end{cases} \end{align} The case $g=f_{NU}$ can be proved quickly: \begin{prop}[The case for $f_{NU}$]\label{prop:f_s case} \begin{align} \lim_{N\rightarrow\infty}\max_{\alpha\neq 0}\frac{\|\sum_{i=0}^N\alpha_i\mathcal{K}^if_{NU}\|^2}{N(|\alpha_0|^2+...+|\alpha_N|^2)} = 0. \end{align} \end{prop} \begin{proof}[Proof of proposition \ref{prop:f_s case}] Without loss of generality, we may assume that $\|\alpha\|=1$.
Since $f_{NU}\in\mathcal{H}_{NU} = \bigoplus_{s\in I}\mathcal{H}_{s}$, the operator $\mathcal{K}$ acts on $\mathcal{H}_{NU}$ as a direct sum of unilateral shifts. For $k\geq 0$, let $c_{k} = |\inp{\mathcal{K}^if_{NU}}{\mathcal{K}^{i+k}f_{NU}}| = |\inp{f_{NU}}{\mathcal{K}^{k}f_{NU}}|$, which does not depend on $i$ since $\mathcal{K}$ is an isometry. Lemma \ref{lem:contr} implies that $\displaystyle\lim_{k\rightarrow\infty} c_{k} = 0$. Therefore, for any $\epsilon>0$ there exists $M_{\epsilon}$ such that $c_{k} \leq \epsilon/4$ for any $k>M_{\epsilon}$; note also that $c_k\leq\|f_{NU}\|^2$ by the Cauchy--Schwarz inequality. Now for any $N>4(M_{\epsilon}+1)\|f_{NU}\|^2/\epsilon$ and any $\alpha$ with $|\alpha_0|^2 + ...+|\alpha_N|^2 = 1$, \begin{align} &\|\sum_{i=0}^N\alpha_i\mathcal{K}^if_{NU}\|^2 = \sum_{i,j}\alpha_i\bar{\alpha}_j\inp{\mathcal{K}^if_{NU}}{\mathcal{K}^jf_{NU}}\nonumber \\ \leq&2\sum_{k=0}^N\sum_{i=0}^{N-k}|\alpha_i||\alpha_{i+k}|c_k \leq\sum_{k=0}^N\sum_{i=0}^{N-k}(|\alpha_i|^2 + |\alpha_{i+k}|^2)c_k\nonumber \\ \leq&2\sum_{k=0}^Nc_k = 2\sum_{k=0}^{M_{\epsilon}}c_k + 2\sum_{k=M_{\epsilon}+1}^{N}c_k\nonumber \\ \leq& 2(M_\epsilon+1)\|f_{NU}\|^2 + (N-M_\epsilon)\frac{\epsilon}{2} \leq \frac{N\epsilon}{2} + \frac{N\epsilon}{2} = N\epsilon. \end{align} \end{proof} Recall the notation of Theorem \ref{thm:unitary spectral}: for any $g\in\mathcal{H}_{f,U}$, \begin{align} &\|\sum_{i=0}^N\alpha_i\mathcal{K}^ig\|_{\mathcal{H}_f}^2 = \|\sum_{i=0}^N\alpha_iz^i\phi(g)\|_{L^2(S^1,d\nu_f)}^2\nonumber \\ =& \int_{S^1}|\sum_{i=0}^N\alpha_iz^i|^2|\phi(g)(z)|^2d\nu_f(z). \end{align} This proves the following: \begin{prop}\label{prop:ANM-S1} For any $g \in \mathcal{H}_{f,U}$, \begin{align} &\lim_{N\rightarrow\infty}\frac{\|C_{N}(g)\|}{N}\nonumber \\ =&\lim_{N\rightarrow\infty}\max_{\|\alpha\|=1}\frac{1}{N}\int_{S^1}|\sum_{i=0}^N\alpha_iz^i|^2|\phi(g)(z)|^2d\nu_f(z). \end{align} \end{prop} To prove Eq.\eqref{eq:alpha X} for $g=f_d, f_{sc}$ and $f_{ac}$, we start with the following lemma.
\begin{lemma}\label{lem:nu_d2} Let $f,\mathcal{H}_{f,U}, \nu_f, \phi$ be as in Theorem \ref{thm:unitary spectral}. For simplicity, we denote $\nu_f$ by $\nu$, with $\nu = \nu_d+\nu_{sc} + \nu_{ac}$. Let $\nu_{d,1}$ be a purely discrete finite measure on $S^1$ such that $\{\xi_1,...,\xi_L\}=\text{Supp}(\nu_{d,1})\subset\text{Supp}(\nu_d)$ and $\nu_{d,1}(\{\xi_i\}) = \nu_{d}(\{\xi_i\})$ for any $1\leq i\leq L$. Let $c_k = \sqrt{\nu_d(\{\xi_k\})}$, let $f_k = \phi^{-1}(\mathds{1}_{\{\xi_k\}})$, and set $h = \displaystyle\sum_{k=1}^Lf_k$. Let $d_{N,1}(h)$ be the leading eigenvalue of $C_{N}(h)$. Then \begin{align} \lim_{N\rightarrow\infty}\frac{d_{N,1}(h)}{N} = \max_{k}c_k^2. \label{eq:nu_d normal} \end{align} \end{lemma} \begin{proof}[Proof of lemma \ref{lem:nu_d2}] According to proposition \ref{prop:ANM-S1}, \begin{align} &\lim_{N\rightarrow\infty}\frac{d_{N,1}(h)}{N} = \lim_{N\rightarrow\infty}\frac{\|C_{N}(h)\|}{N}\nonumber\\ =&\lim_{N\rightarrow\infty}\max_{\|\alpha\|=1}\frac{1}{N}\int_{S^1}|\sum_{i=0}^N\alpha_iz^i|^2|\phi(h)(z)|^2d\nu_f(z)\nonumber \\ =&\lim_{N\rightarrow\infty}\max_{\|\alpha\|=1}\frac{1}{N}\int_{S^1}|\sum_{i=0}^N\alpha_iz^i|^2d\nu_{d,1}(z)\nonumber \\ =&\lim_{N\rightarrow\infty}\max_{\|\alpha\|=1}\frac{1}{N}\sum_{k=1}^Lc_k^2|\sum_{i=0}^N\alpha_i\xi_{k}^i|^2. \end{align} Then lemma \ref{lem:nu_d1} implies the claim. \end{proof} \begin{prop}[The case for $f_d$ ]\label{prop:f_d} Eq.\eqref{eq:ANM/NM d} holds. \end{prop} \begin{proof}[Proof of proposition \ref{prop:f_d}] For any $\epsilon>0$, we choose a truncation $\nu_d = \nu_{d,1} + \nu_{d,\epsilon}$ such that $\nu_{d,\epsilon}(S^1) < \epsilon$, $|\text{Supp}(\nu_{d,1})| < \infty$, and $\nu_{d,1}(\{\xi_k\}) = \nu_d(\{\xi_k\})$ whenever $\xi_{k}\in\text{Supp}(\nu_{d,1})$. This decomposition of the measure also induces an orthogonal decomposition $L^2(S^1,d\nu_d) = L^2(S^1,d\nu_{d,1}) \bigoplus L^2(S^1,d\nu_{d,\epsilon})$, and these two components are invariant under $\mathcal{K}$. When $\epsilon$ is small enough, $|a_1|^2 = \max_{\xi}\nu_{d,1}(\{\xi\})$. Apply lemma \ref{lem:nu_d2} to $\nu_{d,1}$, and let $h$ be defined as in lemma \ref{lem:nu_d2}. Then $f_d = h+f_{\epsilon}$, with $\phi(f_\epsilon)$ supported on $\text{Supp}(\nu_{d,\epsilon})$, so \begin{align} C_{N}(f_d) = C_{N}(h) + C_{N}(f_\epsilon), \end{align} and \begin{align} \lim_{N\rightarrow\infty}\frac{\|C_{N}(h)\|}{N} = c_1^2 = \max_{\xi}\nu_d(\{\xi\}) = |a_1|^2.\label{eq:f_d 1} \end{align} Note also that, by the Cauchy--Schwarz inequality, $|\sum_{i=0}^N\alpha_iz^i|^2 \leq N+1$ for $\|\alpha\|=1$, hence \begin{align} &\lim_{N\rightarrow\infty}\frac{\|C_{N}(f_\epsilon)\|}{N} \nonumber \\ =& \lim_{N\rightarrow\infty}\max_{\|\alpha\|=1}\int_{S^1}\frac{|\sum_{i=0}^N\alpha_iz^i|^2}{N}d\nu_{d,\epsilon} \leq \lim_{N\rightarrow\infty}\frac{N+1}{N}\nu_{d,\epsilon}(S^1) < 2\epsilon.\label{eq:f_d 2} \end{align} Eqs.\eqref{eq:f_d 1} and \eqref{eq:f_d 2} imply Eq.\eqref{eq:ANM/NM d} by letting $\epsilon\rightarrow 0$. \end{proof} \begin{prop}\label{prop:f_c} Let $\nu_c$ be a continuous finite measure on $S^1$. Then \begin{align} \lim_{N\rightarrow\infty}\max_{\|\alpha\|=1}\int_{S^1}\frac{1}{N}|\sum_{i=0}^N\alpha_iz^i|^2d\nu_c = 0. \end{align} \end{prop} \begin{proof}[Proof of proposition \ref{prop:f_c}] Let $c_k = |\nu_c(z^k)|$. Applying lemma \ref{lem:weakly mixing} with $f=g=1$ yields \begin{align} \lim_{N\to\infty}\frac{1}{N}\sum_{k=0}^N c_k^2 = 0. \end{align} Therefore, for $\|\alpha\| = 1$, \begin{align} &\int_{S^1}\frac{1}{N}|\sum_{i=0}^N\alpha_iz^i|^2d\nu_c \leq\frac{2}{N}\sum_{k=0}^N\sum_{i=0}^{N-k}|\alpha_{i+k}\bar{\alpha}_{i}|c_k \nonumber \\ \leq& \frac{1}{N}\sum_{k=0}^N\sum_{i=0}^{N-k}(|\alpha_{i+k}|^2 + |\alpha_{i}|^2)c_k\leq \frac{1}{N}\sum_{k=0}^N2c_k\nonumber \\ \leq&\frac{2}{N}\sqrt{(N+1)\sum_{k=0}^Nc_k^2} = 2\sqrt{\frac{N+1}{N}\frac{1}{N}\sum_{k=0}^Nc_k^2}\to 0 \end{align} as $ N\to\infty.$ \end{proof} \begin{corollary}[The case for $f_{sc}$ and $f_{ac}$]\label{cor:f_c} Eq.\eqref{eq:ANM/NM s sc ac} holds for $f_{sc}$ and $f_{ac}$.
\end{corollary} \begin{proof} This is a direct consequence of propositions \ref{prop:ANM-S1} and \ref{prop:f_c}. \end{proof} \section{Algorithm and Discussion} A direct application of the main theorem is to determine whether a given finite data set is sufficient for recovering the $i$-th $\mathcal{K}$-eigenfrequency from the Gramian matrix. For this purpose, we provide the following algorithm. \begin{itemize} \item Given a time series of data $\{f(t)\}_{0\leq t\leq T}$, where $t,T$ are non-negative integers representing the iteration number, choose $N_k$ and $M_{k,j}$, $1\leq j \leq l_k$, such that $N_k+M_{k,j}\leq T$ and $M_{k,1}< M_{k,2}< ...< M_{k,l_k}\gg N_k$. \item For each $N_k, M_{k,j}$, compute the renormalized eigenvalues of $G_{N_kM_{k,j}}$, denoted by $\sigma_{k,j,i} = \frac{d_{N_kM_{k,j},i}}{N_k}$. \item Given $i$, check for each $N_k$ whether $\sigma_{k,j,i}$ converges as $j$ increases. If for some $k$ it does not converge, the $i$-th $\mathcal{K}$-eigenfrequency is not well represented by this data set. \item Given $i$, if $\sigma_{k,j,i}$ converges for all $k$, then check whether $\sigma_{k,l_k,i}$ converges as $k$ increases. If $\sigma_{k,l_k,i}$ converges to some nonzero number, then the energy of the $i$-th $\mathcal{K}$-eigenfrequency is well represented by this data set. Otherwise, the $i$-th $\mathcal{K}$-eigenfrequency is not well represented by this data set. \end{itemize} \begin{remark}[Identification of the $\mathcal{K}$-eigenfrequencies] Assume that convergence holds, and choose $G_{NM}$ so that $d_{NM,i}/N$ is close enough to $|a_i|^2$ for sufficiently many $i$. Assume that $|a_{k-1}| > |a_k| = |a_{k+1}| = ... = |a_{k+L}|>|a_{k+L+1}|$; here $L$ must be finite because $\|f\|^2 \geq \displaystyle\sum_{i}|a_i|^2$. Let $\xi_k,...,\xi_{k+L}$ be the corresponding theoretical $\mathcal{K}$-eigenfrequencies. Our goal is to identify $\xi_{k},...,\xi_{k+L}$. Let $\eta_{k+i} = (1,\xi_{k+i},\xi_{k+i}^2,...,\xi_{k+i}^N)$, and let $\{v_{NM,k},v_{NM,k+1},...,v_{NM,k+L}\}$ be the corresponding eigenvectors of $G_{NM}$. Then each of $v_{NM,k},...,v_{NM,k+L}$ is approximately a linear combination of $\eta_{k},...,\eta_{k+L}$, and $\xi_{k+i}$ can be identified by Fourier analysis. In the following two cases, the eigenfrequency can also be approximated by counting the local maxima of $v_{NM,i}$: \begin{itemize} \item Case 1: $f$ is a real-valued observable, and for each $v_i$ there is no eigenvector except the conjugate of $v_i$ with the same energy as $v_i$; \item Case 2: for each $v_i$, there is no other eigenvector with the same energy as $v_i$. \end{itemize} \end{remark} \subsection{Implication to Hankel DMD} In \cite{Arbabi2017}, a Hankel DMD algorithm was proposed, and the authors showed that $d_{N,i}$ can be used to identify Koopman and non-Koopman eigenfunctions for fixed $N$, under the conditions that 1) the Hilbert space $\mathcal{H}_f$ is finite dimensional, and 2) $N$ is larger than the dimension of $\mathcal{H}_f$. More precisely, they showed that $d_{N,i} > 0$ if and only if $d_{N,i}$ corresponds to a Koopman eigenfunction. However, this assumption is too strong even for the case where $f(x)$ is the observation of the first component of the 3-dimensional Lorenz system. When the dimension of $\mathcal{H}_f$ is infinite, their method unfortunately fails, because $d_{N,i}$ can be positive even if there is no Koopman eigenfunction. Theorem \ref{thm: main result gram} can therefore be thought of as a completion of the method proposed in \cite{Arbabi2017} under a much weaker assumption, obtained by letting $N\to\infty$.
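As a concrete illustration of the renormalized eigenvalues $d_{NM,i}/N$ used in the algorithm above, the following sketch computes them for a synthetic signal, a unit-amplitude cosine plus white noise, so that $|a_1|^2 = |a_2|^2 = 1/4$, while the noise carries no discrete spectral component. The signal, the function name, and the parameter choices are ours, not taken from the cited references:

```python
import numpy as np

def gram_eigenvalues(f, N, M):
    # Trajectory matrix A_NM: row i holds f(i), ..., f(i+M)
    A = np.array([f[i:i + M + 1] for i in range(N + 1)])
    G = (A @ A.conj().T) / M            # Gram matrix G_NM
    return np.linalg.eigvalsh(G)[::-1]  # eigenvalues, descending order

rng = np.random.default_rng(0)
T = 50_000
t = np.arange(T)
# unit-amplitude cosine: |a_1|^2 = |a_2|^2 = 1/4; the white noise has
# no discrete spectrum, so its contribution to d_i/N vanishes as N grows
f = np.cos(2 * np.pi * 0.05 * t) + 0.5 * rng.standard_normal(T)

for N in (25, 50, 100):
    d = gram_eigenvalues(f, N, M=T - N - 1)
    print(N, d[:3] / N)  # leading pair approaches 0.25, the rest decay
```

The leading pair of renormalized eigenvalues stabilizes near $1/4$ (the eigenfrequencies come in a conjugate pair since $f$ is real), while the remaining ones decay like $1/N$, which is exactly the convergent/non-convergent behavior the diagnostic looks for.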
\subsection{Comparison with Yosida's formula} Yosida's mean ergodic theorem \cite{Yosida1995} provides a formula to calculate $a_\omega$, the coefficient of the Koopman eigenfunction of frequency $\omega$ in Eq.\eqref{eq:f decompose}: \begin{align} a_{\omega} = \lim_{T\to\infty}\frac{1}{T}\sum_{t=0}^{T-1}\exp(-2\pi i\omega t)f(t), \label{eq:Yosida} \end{align} and $a_{\omega} = 0$ if $\omega$ is not a Koopman eigenfrequency. Under the assumption of ergodicity and a finite invariant measure, this formula can be proved by combining Theorem \ref{thm:Wold}, lemma \ref{lem:shift_product}, and the von Neumann ergodic theorem. In the more general situation where only the existence of autocovariances is assumed, we do not know whether the limit in Eq.\eqref{eq:Yosida} always exists, nor whether the output of Yosida's formula is strictly related to Koopman theory. This formula was first introduced to the fluid dynamics community by \cite{Mezi2004ComparisonOS, Mezi2005SpectralPO}. Eq.\eqref{eq:Yosida} is easy to compute for finite $T$ and a given $\omega$. When the Koopman eigenfrequencies are unknown, one can still numerically identify some of them by evaluating Eq.\eqref{eq:Yosida} for all $\omega\in\{k\Delta\omega: k = 1,...,n\}$ and locating the peak values. On the other hand, from the theoretical point of view, when the system is ergodic and has a finite invariant measure, our result allows us to identify the Koopman eigenfunctions without prior knowledge of the Koopman eigenfrequencies. \section{Numerical experiments} \subsection{Lorenz 63 system} To first test the theorem-based methodology, we consider the Lorenz 63 system, integrated with the fourth-order Runge--Kutta scheme and $\Delta t = 0.01$. As already mentioned in \cite{Das2017}, this system is ergodic and has a finite invariant measure. Hence the autocovariance always exists, $\mathcal{H}_f$ can be identified with a subspace of $L^2(X,d\mu)$, and $\mathcal{K}$ coincides with the classical Koopman operator on $\mathcal{H}_f$. Due to its weakly mixing nature, the only Koopman eigenfunction of Lorenz 63 is the constant function, which has frequency 0. Let $f(t) = x - \bar{x}$, where $x$ is the first component of the Lorenz system and $\bar{x}$ is the temporal mean of $x$. We use $E_{H,1}(N,M)$ to denote the leading renormalized singular value $\frac{\delta_{NM,1}^2}{NM}$. The decomposition $f = \displaystyle\sum_{i}a_{i}v_i + f_{NU} + f_{sc} + f_{ac}$ then reduces to $f= f_{NU} + f_{sc} + f_{ac}$. As expected, Fig.\ref{fig:L63} does not display any tendency of $E_{H,1}(N,M_{\text{max}})$ to converge to a nonzero value as $N\to\infty$. \begin{figure} \centering \includegraphics[width = 0.5\textwidth]{Figure_1_36_2.png} \caption{Results for the Lorenz 63 system. The behavior of $E_{H,1}(N,M)$ agrees with the theoretical fact that $E_{H,1}(N,M_\text{max})$ does not converge to any nonzero value as $N\to\infty$. } \label{fig:L63} \end{figure} \subsection{A simple 4-dimensional system} Following one of the numerical examples in \cite{Das2017}, we next consider a coupled system $(X,T, \mu)$ consisting of the discrete-time Lorenz system $(X_{63}, T_{63}, \mu_{63})$ and a rotation on the unit circle $(S^1,T_1, \mu_1)$, i.e. $X = X_{63}\times S^1$, $T = T_{63}\times T_1$ and $\mu = \mu_{63}\times\mu_1$. It is outlined in \cite{Das2017} that $\mu$ is an invariant measure; moreover, the Lorenz system has no non-trivial Koopman eigenfrequency, and $(X,T,\mu)$ is ergodic. Therefore the autocovariance exists and $\mathcal{K}$ coincides with the classical Koopman operator. We choose the rotation $T_1$ to have period $p = \pi/5$ and define the observable \begin{align} f(x,y,z,\xi)=\sin(\xi+x/10).
\end{align} For simplicity, we also use $f(t)$ to denote $f(x(t),y(t),z(t),\xi(t))$. Then $f=\displaystyle\sum_{i}a_iv_i + f_{sc} + f_{ac} + f_{NU}$ as in Eq.\eqref{eq:f decompose}. As anticipated by our main theorem, the renormalized singular values of the trajectory matrix should converge to the same quantity as the one calculated by Eq.\eqref{eq:Yosida}. Hence it is worth making a numerical comparison between $|a_{\omega}|^2$ obtained from Yosida's formula and from the singular values of $A_{NM}$. The integration time step for the Lorenz system is $\Delta t = 0.01$, with the fourth-order Runge--Kutta scheme. The frequency $\omega$ we investigate is exactly the inverse of the period of $(S^1,T_1)$, i.e. $\omega = 5\Delta t/\pi$ in Eq.\eqref{eq:Yosida}. We use $E_{Y}(T)$ to denote the value of $|a_{\omega}|^2$ computed by Eq.\eqref{eq:Yosida}, and $E_{H,i}(N,M)$ to denote the value of $\frac{\delta_{NM,i}^2}{NM}$ computed from the singular value decomposition of $A_{NM}$. Note that $t,T,N,M$ are all integers referring to numbers of time steps rather than physical time. \begin{figure} \centering \includegraphics[width = 0.5\textwidth]{Figure_1_33_2.png} \caption{The numerical results of $E_Y(T)$ and $E_{H,i}(N,M)$ for $(X_{63}\times S^1, T_{63}\times T_1, \mu_{63}\times \mu_1)$.} \label{fig:L63+S1} \end{figure} Fig.\ref{fig:L63+S1} shows the numerical results of $E_{Y}(T)$ and $E_{H,i}(N,M)$. The value $E_Y(T)$ does not converge to $0$, showing that $\omega$ is indeed a Koopman eigenfrequency. We also computed $\|f\|^2\approx\frac{1}{T}\displaystyle\sum_{t=1}^T|f(t)|^2 \approx 0.5002$ and $E_{Y}(2\times 10^5)\approx 0.1303$, meaning that the fraction of energy in $f$ represented by the Koopman eigenvector $v_{\omega}$ is about $26\%$.
Note that $E_H(10^3,2\times 10^5)$ is close to $E_{Y}(2\times 10^5)$, meaning that the leading singular value of the \YZ{trajectory} matrix indeed corresponds to the eigenfrequency $\omega$. $E_{H,1}(10^3,M)$ and $E_{H,2}(10^3,M)$ seem to converge to the same value. This is because Koopman \YZ{eigenfrequencies} always exist in pairs, i.e. $\exp(2\pi i\omega)\in\text{Supp}(\nu_d)\iff \exp(-2\pi i\omega)\in\text{Supp}(\nu_d)$. Since the observable $f$ is real, the coefficients satisfy $a_{\omega} = \bar{a}_{-\omega}$. Therefore, the total fraction of energy in $f$ that is represented by signals of period $p=\frac{\pi}{5}$ is about $52\%$. \subsection{AVISO (DUACS) interpolated ocean topography data (1993-2019)} As a final illustration, we consider sea surface height (SSH) estimates. The AVISO gridded products have provided globally interpolated SSH since 1993, the year after the launch of the first satellite altimeter TOPEX/Poseidon. The SSH is interpolated daily at a grid resolution of $0.25^\circ\times 0.25^{\circ}$. In this subsection, we use the main theorem to assess the potential of Koopman analysis for this dataset. \YZ{The} assumption \YZ{of the main theorem} is the \YZ{existence of the autocovariance}, which implies that the system should be stationary. We thus process the data by removing the overall constant rising tendency of SSH at each grid point over the decades (see for instance Fig.2 in \cite{Cazenave2010}). \YZ{We cannot assert that the whole Earth system, which includes the solid Earth, the atmosphere, the ocean, all celestial bodies, but also the biology and living animals, etc., is ergodic and has an invariant measure. Hence we cannot claim that the quantities $a_i$, $v_i$, etc. are associated with the classical Koopman operator. But, as we already stated, all these quantities are well-defined mathematically as long as the autocovariances exist.
Moreover, in order to apply Yosida's formula, we assume that the eigenvectors $v_i\in\mathcal{H}_f$ can be represented as a time series of the form $\{1, e^{i\omega}, e^{2i\omega},...\}$ for some $\omega\in [0,2\pi]$. In this case, the output of Yosida's formula $a_{\omega}$ must be $a_i$.} We then normalize the SSH at each grid point so that the data have zero mean and unit variance. We first apply Yosida's formula (Eq.\ref{eq:Yosida}) to the global data to compute $|a_{\omega}|$ at every grid point, where $\omega = 1/365.25$, i.e. the eigenvalue is $\exp(2\pi i /365.25)$. This quantity is computed for January 1, 1998, i.e. $f(1)$ refers to the SSH on Jan. 1, 1998. Note that in theory, i.e. \YZ{assuming the autocovariance exists} and the data set is large enough, this quantity does not depend on time. Since the data now have unit variance, $|a_{\omega}|^2$ can be interpreted as the fraction of energy in the SSH that is represented by the \YZ{$\mathcal{K}-$eigenfrequency} $\omega$. Similarly, since the SSH values are real, $|a_\omega| = |a_{\YZ{-\omega}}|$, and $|a_{\omega}|^2 + |a_{\YZ{-\omega}}|^2 = 2|a_{\omega}|^2$ represents the fraction of energy represented by the yearly signal. \begin{figure} \centering \includegraphics[width=0.5\textwidth]{19980101_period_365_25_koopman.jpg} \caption{ The $|a_{\omega}|$ value at each grid point computed at Jan. 1, 1998 using Eq.\eqref{eq:Yosida} } \label{fig:global_yearly} \end{figure} Fig.\ref{fig:global_yearly} shows that more than $0.5^2+0.5^2 = 50\%$ of the energy in the Pacific Ocean north of the equator (for instance at ($114.875^{\circ}$W, $6.125^{\circ}$N)) is represented by the yearly signal. Constructing the \YZ{trajectory} matrix for the SSH at ($114.875^{\circ}$W, $6.125^{\circ}$N), i.e. choosing $f = \text{SSH}(114.875^{\circ}$W,$6.125^{\circ}$N$)$, we can then compare $E_{Y}(T)$ and $E_{H,i}(N,M)$, for $i=0,1,2,3$, $N = p,3p,6p$, $M = 3p,6p,20p$, with $p = 1$ year $= 365.25$ days.
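A per-grid-point sketch of this preprocessing and of the yearly-frequency projection (the linear detrending and the function name are our choices, not necessarily those used for the figures):

```python
import numpy as np

def yearly_energy_fraction(ssh, period=365.25):
    """Remove the constant rising tendency (here: a linear fit), normalize
    to zero mean and unit variance, then estimate |a_omega|^2 at the
    yearly frequency omega = 1/period with a Yosida-type average."""
    t = np.arange(len(ssh), dtype=float)
    slope, intercept = np.polyfit(t, ssh, 1)       # overall rising tendency
    f = ssh - (slope * t + intercept)
    f = (f - f.mean()) / f.std()                   # zero mean, unit variance
    a = np.mean(f * np.exp(-2j * np.pi * t / period))
    return np.abs(a) ** 2                          # one of the +/- omega pair
```

Because the SSH is real, the value returned for $+\omega$ is matched by an equal contribution at $-\omega$, so the yearly fraction of energy is twice this number.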
Fig.\ref{fig:SSH 1,2} shows that the first two renormalized singular values appear to converge to the fraction of energy represented by the \YZ{$\mathcal{K}-$eigenfrequencies} $\omega$ and $\YZ{-\omega}$. However, the third and fourth renormalized singular values do not show a sign of convergence. As shown in Fig.\ref{fig:SSH 3,4}, this is likely due to the overall limited length of the present-day data set relative to the high dimensional \YZ{state space of the} dynamical \YZ{system} at stake. \begin{figure} \centering \includegraphics[width = 0.5\textwidth]{eig01.png} \caption{The numerical results of $E_Y(T)$ and $E_{H,i}(N,M)$ (i=0,1) for AVISO interpolated SSH at ($114.875^{\circ}$W, 6.125$^{\circ}$N). It shows that the first and the second singular values have possibly converged.} \label{fig:SSH 1,2} \end{figure} \begin{figure} \centering \includegraphics[width = 0.5\textwidth]{eig23.png} \caption{The numerical results of $E_Y(T)$ and $E_{H,i}(N,M)$ (i=2,3) for AVISO interpolated SSH at ($114.875^{\circ}$W, 6.125$^{\circ}$N). It shows that the third and the fourth singular values have not yet converged.} \label{fig:SSH 3,4} \end{figure} \section{Conclusion} The main objective of this study is to provide a rigorous and practical method to identify Koopman \YZ{eigenfrequencies} for discrete-time ergodic and (finite) measure-preserving dynamical systems. \YZ{In the more general situation where we only assume that the time series has well-defined autocovariances, we define the Hilbert space $\mathcal{H}_f$ purely based on the time series and the time-shift operator $\mathcal{K}$ that acts on $\mathcal{H}_f$. When the system is ergodic and has a finite invariant measure, $\mathcal{H}_f$ can be identified with the closure of the Krylov subspace generated by an observable $f$, and the time-shift operator $\mathcal{K}$ coincides with the classical Koopman operator on the observable space $\mathcal{H}_f$}.
This work follows the result in \cite{Arbabi2017}, but further extends the applicability of Hankel-DMD. It provides a theorem-based practical way to help assess the results of the decomposition in terms of Koopman \YZ{eigenfrequencies}. \YZ{It shows that} the leading temporal EOFs, which are calculated from the eigendecomposition of the Gramian matrix, are \YZ{indeed due to intrinsic eigenfrequencies}. The main theorem provides a partial theoretical foundation for several existing empirical methods, including SSA, DAHD, and HAVOK. The main result shows that the discrete spectrum \YZ{of $\mathcal{K}$} can be characterized by the singular values (respectively, eigenvalues) of the \YZ{trajectory} (respectively, Gramian) matrix. It remains to study whether the continuous spectrum can also be characterized by these matrices. The numerical illustrations demonstrate the applicability of the theorem-based methodology for low dimensional systems. Yet, using sea surface height observables to probe a very high dimensional planetary dynamical system, it is also apparent that one major difficulty in applying the main theorem may be the length of the data set. A heuristic solution is to combine the observables at different grid points, and/or to consider multiple observables, e.g. sea surface temperature. We reserve these investigations for future studies. \section*{Acknowledgement} The authors acknowledge the support of the ERC EU project 856408-STUOD, the support of the ANR Melody project, the support from the China Scholarship Council, and the support from the National Natural Science Foundation of China (Grant No. 42030406).
2107.02050
\section{Acknowledgement \label{sec:acknowledgement}} We thank Jade Powell for pointing us to the supernova repository. The authors gratefully acknowledge the support of the NSF for the provision of computational resources. The work of GP is partially supported by the research grant number 2017W4HA7S ``NAT-NET: Neutrino and Astroparticle Theory Network'' under the program PRIN 2017 funded by the Italian Ministero dell'Istruzione, dell'Universit\`a e della Ricerca (MIUR). \section{Conclusion \label{sec:conclusion}} We have discussed a new multi-messenger strategy with GWs and {LENs} to catch signals from core-collapse supernovae. The strategy involves several LEN detectors as well as GW detectors. We considered different emission models, both for GWs and for LENs, resulting from recent numerical simulations. We performed a coherent set of injections, taking into account also the different backgrounds characterizing the detectors, and we analyzed the detection efficiency of the global network for the different signals and for several detector configurations. We showed that in general a multi-messenger approach can give better sensitivity to otherwise statistically insignificant signals. Additionally, we improved the neutrino analysis sector by introducing a new parameter $\xi$ that changes the estimation of the {FAR} as well as the significance value for event clusters in neutrino detectors. Thanks to this 2-parameter method, we have shown that we can obtain a promising improvement in the FARs of recovered injections without misidentifying noise. This 2-parameter method increases the detection horizon of current-generation neutrino detectors. This approach can also be easily applied in an online system such as SNEWS2.0, which will gain in terms of safe alerts for the electromagnetic community that would otherwise be lost.
{Moreover, the multimessenger campaign between GWs and LENs will profit from this new method in terms of the global efficiency gain. Since the efficiency of the LEN analysis is somewhat higher than that of the GW analysis, we may in the future detect a coincidence among several LEN detectors. In this case, we can perform a targeted search for GWs and still profit from this new method.} \section{Introduction\label{sec:intro}} Core-collapse Supernovae (CCSNe) are perfect astrophysical targets for multimessenger astronomy \cite{pagliaroli_PRL,leonor}. Indeed, the large amount of energy produced by the stellar collapse, $\sim 10^{53}$ erg, is expected to be released as low-energy neutrinos (LENs) { with average energy around 10 MeV}, gravitational waves (GWs), and multi-wavelength electromagnetic emissions. The first neutrino detection from a CCSN in a nearby galaxy, SN1987A, observed by Kamiokande-II \cite{hirata1987}, IMB \cite{Bionta1987}, and Baksan \cite{alexeyev}, proved that CCSNe can produce a large number of MeV neutrinos, which are in the sensitivity range of our detectors. Currently, there are several neutrino detectors in operation, such as Super-Kamiokande \cite{superK} (Super-K), LVD \cite{lvd_det}, KamLAND \cite{kamland}, and IceCube \cite{icecube}, which are sensitive to a LEN burst out to distances at least at the edge of the Milky Way and beyond. These detectors are also involved in a joint prompt search for a CCSN neutrino burst via the SuperNova Early Warning System (SNEWS) \cite{Antonioli2004,Al_Kharusi_2021} to provide fast alerts to the electromagnetic community. The joint observation of the first binary neutron star merger \cite{Abbott2017} started the promising era of multimessenger astronomy with advanced GW detectors.
The GW search is currently being carried out by the advanced detectors working as a network: the two 4-km-long LIGO \cite{ligo} detectors in Hanford and Livingston, USA, and the 3-km-long Virgo \cite{virgo} detector in Cascina, Italy. These advanced detectors have already performed three observing runs (O1, O2, O3) from 2015 to 2020. Moreover, the KAGRA detector \cite{kagra} in Kamioka, Japan, joined the hunt for GWs at the end of O3, with a sensitivity, at this beginning stage, comparable to the other detectors. GW signals are also expected from CCSN events through several different physical processes \cite{Ott_2009,Abdikamalov:2020jzn,powell2020,Szczepanczyk}. Thus, these astrophysical objects are ideal targets for a multimessenger search via GWs and LENs. In this paper, we investigate the best way of combining GW and LEN data to hunt CCSNe in order to improve our efficiency and detection horizon. Here, based on our previous investigations \cite{halimtesi,halim2019}, we describe our strategy and we test its power with simulated signals injected in a time-coherent way, both in GW and LEN data. There have been several studies on multimessenger searches combining gravitational waves and other messengers, including searches with high energy neutrinos { (TeV energy)} \cite{Aso_2008,multimess,Adrian2013,dipalma2014} and with gamma-ray bursts \cite{gwgrbabbott,Ashton_2018}. However, the joint analysis strategy combining LENs and GWs to hunt for CCSNe has not so far been studied thoroughly. In the strategy described here, we use the \texttt{coherent WaveBurst} (cWB) pipeline \cite{Klimenko_2004,Klimenko2008,dragotesi,Necula_2012} to analyze simulated GW data. This pipeline is a model-agnostic algorithm for the search of GW transients. cWB is open to a wide class of GW sources; it was the pipeline providing the first alert of the arrival of the first GW signal, GW150914 \cite{gw150914}, and it is used for the search of GWs from CCSNe \cite{em_gw_ccsne}.
In parallel, we simulate the time series of expected LEN signal and background event rates for several neutrino detectors and then analyze the network of simulated LEN data to hunt for astrophysical neutrinos\footnote{Note that we employ no detailed detector simulation for the neutrino detectors.}. This strategy for the neutrino network analysis profits from a new approach, already introduced in \cite{halim2019}, to increase the burst detection sensitivity of neutrino detectors. In this paper, we then implement a new time-coincidence analysis between the two messengers, following the flow chart shown in Fig.~\ref{fig:GWnu_scheme}. Data from the different messengers are analyzed separately and then combined by a coincidence analysis to produce a list of possible GW-LEN signals. The described strategies could, in principle, be used in online astrophysical alert networks, such as SNEWS, or in offline analyses. \begin{figure}[!ht] \centering \includegraphics[width=.7\linewidth]{figures/GWnu_scheme.png} \caption{The schematic view of the multimessenger GW-LEN strategy proposed in this paper.} \label{fig:GWnu_scheme} \end{figure}% The article is organised as follows. In Section \ref{sec:sources}, we discuss the emission models for each messenger. Then, in Section \ref{sec:science}, we present a discussion of the data and of the analysis strategy. Finally, we implement the strategy on simulated data and show the results in Section \ref{sec:results}. \section{Astrophysical Sources of GW \& LEN\label{sec:sources}} Sources that can produce copious LENs include CCSNe (for a review see \cite{jankarev}), failed supernovae (see more in \cite{O_Connor_2015}), as well as quark novae. All of these sources share several parameters: the total energy in neutrinos can be of the order of $10^{53}$ ergs, the neutrino burst lasts on the order of 10 seconds, and the average energy of the neutrinos is $O(10)$ MeV. Here, we focus our work on the CCSN search.
\subsection{Gravitational Wave Emission} GWs are produced by a quadrupole mass moment rapidly changing in time\footnote{A general explanation of GWs can be found, e.g., in \cite{maggiore2007gravitational}.}. The signal is characterised by the temporal evolution of its amplitude ($h_+$ \& $h_\times$). The (putative) GW bursts from CCSNe are modelled by a very broad class of models. Depending on the progenitor scenario (whether nonrotating or highly spinning), the signals may be very different (see reviews such as \cite{Ott_2009}). For highly spinning progenitors, during core collapse the stellar core is spun up by a factor of 1000 compared with the progenitor core rotation, thanks to the conservation of angular momentum. If the pre-collapse core has a spin period of $\sim$ 1 second, then the proto neutron star, after the collapse, will have a spin period of the order of milliseconds. This rapidly spinning, millisecond-period compact remnant has a rotational energy of about $10^{52}$ erg. GW signals can be produced through the magnetorotational hydrodynamic mechanism \cite{gossan2015,Powell2016}. The GW signal from this mechanism is dominated by the bounce and the subsequent ringdown of the proto neutron star. The strong centrifugal deformation (oblateness) of the inner core, due to the rapidly rotating precollapse core, gives a large time-varying quadrupole mass moment. For these rapidly rotating CCSNe, the peak of the detected strain amplitude ($h_+$ \& $h_\times$) is of the order of $\sim10^{-21}-10^{-20}$ at a distance $D\sim10$ kpc. The energy produced in GWs, $\epsilon_\mathrm{GW}$, is of the order of $10^{-10}-10^{-8}M_\odot c^2$. Moreover, the emission is narrowband, with most of the emitted power lying in the $500-800$ Hz range. The timescale of this GW burst is around $\sim10$ ms. An example $h_+$ evolution of a simulated GW signal from this mechanism can be seen in the top panel of Figure 1 in \cite{Powell2016}.
The non-spinning $\nu$-driven CCSNe, which are expected to be more common, instead produce GWs from turbulent convection and the Standing Accretion Shock Instability (SASI) \cite{blondin2003}. In this CCSN type, the postbounce core is unstable to convection. A negative entropy gradient immediately forms behind the stalled shock and prompt convection sets in, giving rise to a GW burst. Moreover, as the postbounce evolution proceeds, $\nu$-heating sets up a negative entropy gradient in the gain layer behind the shock, leading to the \textit{$\nu$-driven convection}. Simultaneously, $\nu$-diffusion establishes a negative lepton gradient in the mantle of the proto neutron star, giving rise to \textit{proto neutron star convection}. The frequency content in the neutrino mechanism is robust, but the phase is stochastic (due to turbulence). The signal is long-lasting, of about $\sim0.3-2$ seconds. The GW strain amplitude is about 1 order of magnitude weaker than that of the rapidly spinning case, $h_+$ (and $h_\times$)$\approx10^{-22}$ at the distance $D=10$ kpc. The energy in gravitational waves $\epsilon_\mathrm{GW}$ is of the order of $\sim10^{-11}-10^{-9}M_\odot c^2$. An example of the $h_+$ evolution from a simulated GW signal from this mechanism is shown in the bottom panel of Figure 1 in \cite{Powell2016}. All in all, we can say that we have several models that are quite broad for this kind of source. Thus, matched-filtering based analysis pipelines that are used to search for GWs from black-hole/neutron-star binaries\footnote{Matched filtering uses a bank of templates from some robust model of a signal and then cross-correlates the templates with the data. This method is popular in GW searches for binary mergers of compact objects. Notable pipelines are \texttt{PyCBC} \cite{alex_nitz_2020} and \texttt{GstLAL} \cite{chad_hanna2020}.} cannot be optimal for our problem.
In fact, there has been a study on the optically targeted search for this GW signal with the Advanced LIGO-Virgo detectors using the \texttt{cWB} pipeline \cite{em_gw_ccsne}. The study shows an efficiency of the order of 50\% at a distance of 5 kpc. Taking into account that the rate of CCSNe is of the order of 3 events per century in our galaxy ($O(20)$ kpc) itself \cite{Cappellaro2000,diehl2006,maggiore2018}, this is indeed quite a limited distance. Nevertheless, a multimessenger strategy may expand the detection horizon via coincidence analysis, by lowering the background distribution curve and therefore increasing the significance of coincidences. This may give us a new strategy to reevaluate sub-threshold candidates in GW data. \subsection{Low-Energy Neutrino Emission} \label{subsec:len_emission} Supernova progenitors already produce high-luminosity MeV neutrinos from the thermal emission of the advanced burning phases, for example Si burning, even before the collapse phase. These pre-supernova neutrinos could possibly be detected \cite{ODRZYWOLEK2004303,Kato_2015,Asakura2016,yoshida} by ultrapure liquid scintillator detectors such as Borexino \cite{borex}, KamLAND \cite{kamland}, JUNO \cite{juno}, as well as DUNE \cite{dune} (or even Super-Kamiokande \cite{superK} with the help of Gadolinium). The main channel to detect these low-energy CCSN neutrinos is the inverse beta decay occurring in neutrino detectors, \begin{equation} \bar{\nu}_e+p\rightarrow n + e^+. \label{eq:ibd} \end{equation} The KamLAND collaboration has shown that their detector can detect these pre-supernova neutrinos from a star with a mass of $25M_\odot$ at a distance of less than 690 pc with $3\sigma$ significance before the supernova, depending on the neutrino mass ordering and background levels \cite{Asakura2016}.
During the Fe-core collapse (see \cite{pagliarolitesi,jankarev,Muller2019,Vitagliano2019} for more details on the processes), electron neutrinos are produced by electron captures on heavy nuclei and on the few free protons. Then, neutrino trapping occurs when the core density is of the order of $10^{12}$ g/cm$^3$. Around the time of trapping, the $\nu_e$ luminosity reaches $\sim10^{53}$ erg/s with a mean energy of $O(10)$ MeV. The trapping causes a small dip in the luminosity, because neutrinos then come only from a narrow region around the newly formed neutrinosphere. The neutrino emission increases rapidly again after the core bounce, as the newly formed shock propagates into regions of sufficiently low density and reaches the neutrinosphere, thanks to shock heating and the low optical depth. The dominant production is still $\nu_e$ due to the high $Y_e$ in the shock. The luminosity of this burst of neutrinos can reach $\sim3.5\times10^{53}$ erg/s, known as the neutronization burst. A Mton water Cherenkov detector like Hyper-Kamiokande \cite{hyperK} could observe this burst, which would act as a standard candle with a distance measurement error of $\sim5\%$. Moreover, the nonobservation of this peak would support the normal mass hierarchy of neutrinos, while its observation would be consistent with the inverted hierarchy (see more in \cite{kachelriess2005}). Around ten milliseconds after the bounce, the CCSN develops an accretion shock lying at $\sim100-200$ km from the center. The neutrino emission for the electron and heavy flavors is quite different in this phase. For all flavors during this phase, there is a diffusive flux toward the proto neutron star surface region, driven by the temperature gradient and the neutrino chemical potential. For the electron flavor, the emission is due not only to the diffusion of thermal energy from the proto neutron star core, but also to the accretion.
During the accretion phase, $10-20\%$ of the $\nu_e-\bar{\nu}_e$ energy is converted to thermal energy and can revive the shock. The timescale of this phase is $t\sim0.5$ s. Simulations show that the mean energy of the electron antineutrinos $\langle E_{\bar{\nu}_e}\rangle$ is roughly proportional to the proto neutron star mass during the accretion. During the cooling phase, the electron flavor and heavy flavor luminosities become relatively similar. The luminosity of all flavors decreases roughly exponentially, with a decay timescale of seconds. A fraction of the infalling material still accretes onto the proto neutron star until the explosion, and the proto neutron star evolves into a neutron star. Most of the CCSN energy, accounting for $\sim90\%$, is emitted in this cooling phase in the form of all neutrino and antineutrino species with high luminosity. The primary production of neutrinos in this phase proceeds via neutral-current processes (for all neutrino and antineutrino species). \section{Messengers from Core-collapse Supernovae} \label{sec:sources} Several known astrophysical sources are expected to emit GWs and LENs. In this work, we consider transient sources causing both an $O(10)$-ms GW burst and an impulsive $O(10)$-s emission of $O(10)$-MeV LENs. These phenomena are expected to come from CCSNe \cite{jankarev} and ``failed'' SNe \cite{OConnor:2010moj}, which are our main focus in this article. { GW and LEN signals are both sensitive to the initial conditions of the CCSN simulation, such as progenitor mass, rotation, etc. Therefore, a coherent combined GW-LEN analysis should be performed by considering GW and LEN signals resulting from the same numerical simulation. Unfortunately, at present there are no numerical simulations providing a successful CCSN explosion and both signals.
In particular, several simulations provide both signals for the first half of a second, up to the explosion, which is not enough to correctly estimate the neutrino emission, which lasts $O(10)$ s. In this paper, we did our best to relate the GW and neutrino signals coming from different available simulations with similar progenitor masses.} \subsection{Gravitational Wave Emission}\label{sec:GWemission} We consider the GW signals resulting from the new 3D neutrino-radiation hydrodynamics CCSN simulations of Radice {\it et al.} \cite{radice} (abbreviated as ``Rad''), obtained for three different zero age main sequence (ZAMS) masses {($9\,M_\odot$, $13\,M_\odot$, and $25\, M_\odot$)} in order to take into account both low-mass progenitors with successful explosions and high-mass progenitors with failed explosions and black-hole formation. The total GW energy radiated in the different cases spans from a few $10^{-11}\,M_\odot c^2$ for the lowest-mass progenitor to a few $10^{-9}\,M_\odot c^2$ for the $25 M_\odot$ progenitor (see Figure 4 of Radice {\it et al.} \cite{radice}). Moreover, we also take into account models with rapid rotation and high magnetic field. In particular, we adopt GW waveforms from two different papers, namely the Dimmelmeier model \cite{dimmelmeier2008} (abbreviated as ``Dim'') and the Scheidegger model \cite{Scheidegger:2010en} (abbreviated as ``Sch''). In this case, we use three different models from each paper, all with the same ZAMS mass of $15\,M_\odot$. These models produce much stronger gravitational waves. For this mechanism to work, the stellar progenitors must have strong rotation and magnetic fields, which are believed to be less likely with respect to the neutrino-radiation mechanism \cite{em_gw_ccsne,jankarev,woosley}. However, we cannot rule out their existence, because we have not yet detected any CCSN GWs with any of the possible models. The amplitude evolutions of the GWs are reported in
Fig.~2 of \cite{dimmelmeier2008} for the Dim model and in Fig.~3 of \cite{Scheidegger:2010en} for the Sch model. The total GW energy radiated in the different cases spans from a fraction of $10^{-9}\,M_\odot c^2$ to around $10^{-7}\,M_\odot c^2$. The details of these models can be seen in Tab.~\ref{tab:gw_models}. The adopted GW models are intended to cover as much as possible the uncertainty band of the theoretical predictions, with the lower limit represented by the GW signal of the Rad model and the upper limit by those of the Dim and Sch models. \begin{table}[t] \caption{Waveforms from CCSN simulations used in this work. We report in the columns: emission type and reference, waveform identifier, waveform abbreviation in this manuscript, progenitor mass, angle-averaged root-sum-squared strain $h_\mathrm{rss}$, frequency at which the GW energy spectrum peaks, and emitted GW energy.} \label{tab:gw_models} \centering {\renewcommand{\arraystretch}{1.2} \begin{tabular}{|c | c | c | c | c | c | c |} \hline Waveform & Waveform & Abbr.
& Mass & $h_\mathrm{rss}\,@10\, \mathrm{kpc}$ & $f_\mathrm{peak}$ & $E_\mathrm{GW}$\\ Family & Identifier & & $M_\odot$ & $\mathrm{\left[10^{-22}\,\frac{1}{\sqrt{Hz}}\right]}$ & $\mathrm{[Hz]}$ & $[10^{-9}\,M_\odot c^2]$ \\ \hline \hline Radice \protect\cite{radice} & s25 & Rad25 & {25} & 0.141 & 1132 & 28 \\ 3D simulation; & s13 & Rad13 & {13} & 0.061 & 1364 & 5.9 \\ $h_+$ \& $h_\times$; (Rad) & s9 & Rad9 & 9 & 0.031 & 460 & 0.16 \\ \hline Dimmelmeier \protect\cite{dimmelmeier2008} & dim1-s15A2O05ls & Dim1 & 15 & 1.052 & 770 & 7.685 \\ 2D simulation; & dim2-s15A2O09ls & Dim2 & 15 & 1.803 & 754 & 27.880\\ $h_+$ only; (Dim) & dim3-s15A3O15ls & Dim3 & 15 & 2.690 & 237 & 1.380 \\ \hline Scheidegger \protect\cite{Scheidegger:2010en} & sch1-R1E1CA$_L$ & Sch1 & 15 & 0.129 & 1155 & 0.104 \\ 3D simulation; & sch2-R3E1AC$_L$ & Sch2 & 15 & 5.144 & 466 & 214 \\ $h_+$ \& $h_\times$; (Sch) & sch3-R4E1FC$_L$ & Sch3 & 15 & 5.796 & 698 & 342 \\ \hline \end{tabular} } \end{table} \subsection{Low-energy Neutrino Emission}\label{sec:NUemission} Concerning the LEN emission, we consider the signals resulting from the numerical simulations of H{\"u}depohl without collective oscillations \cite{hudepohl}. In particular, we adopt the time-dependent neutrino luminosities and average energies obtained for a progenitor of $11.2 M_\odot$. The simulation provides all flavors of neutrino fluxes, differential in energy and time, for the first 7.5 seconds of the neutrino emission; however, in order to cover at least the first 10 seconds of the signal, we also considered an analytical extension of these fluxes. The average neutrino energies from before collapse up to the simulated $0.5$ s after bounce are $\langle E_{\nu_e}\rangle=13$ MeV, $\langle E_{\bar{\nu}_e}\rangle=15$ MeV and $\langle E_{\nu_x}\rangle=14.6$ MeV, see Table 3.4 of Ref. \cite{hudepohl}.
In addition, we also adopt a parametric model for the neutrino emission, as described in Pagliaroli {\it et al.} \cite{pagliaroli2009}. This model provides the best-fit emission from SN1987A data and is characterized by a total energy radiated in neutrinos of $\mathcal{E}=3\times 10^{53}$ erg, and average energies of $\langle E_{\nu_e}\rangle=9$ MeV, $\langle E_{\bar{\nu}_e}\rangle=12$ MeV and $\langle E_{\nu_x}\rangle=16$ MeV. The temporal structure we adopt for this signal is described by: \begin{equation} F(t,\tau_1,\tau_2) = (1-e^{-{t / {\tau_1}}})e^{-{t / {\tau_2}}}, \label{eq:pagliaroli_model} \end{equation} where the parameters that govern the emission are $\tau_1$ and $\tau_2$. They represent the rise and decay timescales of the neutrino signal. Their best-fit values using SN1987A data \cite{pagliaroli_ccsn} are $\sim 0.1$ s and $\sim 1$ s, respectively. In order to simulate the clusters of supernova neutrino events, we consider only the main interaction channel for water and scintillator, i.e. the inverse beta decay (IBD) $\bar\nu_e+p \rightarrow n+e^+$. We assume standard MSW neutrino oscillations to estimate the $\bar{\nu}_e$ flux $\Phi_{\bar{\nu}_e}$ at the detectors. This flux is an admixture of the unoscillated flavor fluxes at the source, i.e. $\Phi_{\bar{\nu}_e}=P\cdot\Phi_{\bar{\nu}_e}+(1-P)\Phi_{\bar{\nu}_x}$, where $x$ indicates the non-electronic flavours and $P$ is the survival probability of the $\bar{\nu}_e$. Depending on the neutrino mass hierarchy, this probability can be $P\simeq 0$ for Inverted Hierarchy (IH) or $P\simeq0.7$ for Normal Hierarchy (NH). The expected number of IBD events for the different models and detectors considered in our work is reported in Tab.~\ref{tab:nu_models} for a CCSN located at a reference distance of $10$~kpc.
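As a hedged sketch of the two ingredients just described, the temporal profile of Eq.~\eqref{eq:pagliaroli_model} and the MSW admixture can be written as follows (function names are ours; the survival probabilities are the values quoted above):

```python
import numpy as np

def temporal_profile(t, tau1=0.1, tau2=1.0):
    """Parametric profile F(t, tau1, tau2) = (1 - exp(-t/tau1)) * exp(-t/tau2)
    of Eq. (eq:pagliaroli_model); SN1987A best-fit timescales as defaults."""
    return (1.0 - np.exp(-t / tau1)) * np.exp(-t / tau2)

def nuebar_flux_at_detector(phi_nuebar, phi_nux, hierarchy="IH"):
    """MSW-oscillated electron-antineutrino flux at the detector:
    Phi = P*Phi_nuebar + (1-P)*Phi_nux, with survival probability
    P ~ 0 (inverted hierarchy) or P ~ 0.7 (normal hierarchy)."""
    P = 0.0 if hierarchy == "IH" else 0.7
    return P * phi_nuebar + (1.0 - P) * phi_nux
```

The profile vanishes at $t=0$, rises on the timescale $\tau_1$ and decays on the timescale $\tau_2$, matching the qualitative shape described above.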
\begin{table*}[t] \caption{{Number of IBD events expected for a CCSN exploding at 10 kpc from us, for the different neutrino models adopted and the considered detectors} (Super-K \protect\cite{superK}, LVD \protect\cite{lvd_det}, and KamLAND \protect\cite{kamland}). In parentheses we report the assumed energy threshold ($E_\mathrm{thr}$).} \label{tab:nu_models} \centering {\renewcommand{\arraystretch}{1.2} \begin{tabular}{|c | c | c | c | c |} \hline Model & Progenitor & Super-K & LVD & KamLAND \\ (identifier) & Mass & ($E_\mathrm{thr}=6.5$ MeV) & ($E_\mathrm{thr}=7$ MeV) &($E_\mathrm{thr}=1$ MeV) \\ \hline \hline Pagliaroli \protect\cite{pagliaroli2009} & $25\, M_\odot$ & 4120 & 224 & 255\\ (SN1987A) &&&&\\ \hline H{\"u}depohl \protect\cite{hudepohl} & $11.2\, M_\odot$ & 2620 & 142 & 154 \\ (Hud) &&&&\\ \hline \end{tabular} } \end{table*} \section{Data and Analysis\label{sec:science}} In this section, we discuss the data and the analysis used in our work for GWs as well as LENs. We also present a possible strategy for a combined multimessenger search. In the following, we assume a conservative global false alarm rate (FAR) of 1/1000 years, {which is reflected in} a specific cut on the FAR of the two sub-networks of LEN detectors and GW detectors; see Sec.~\ref{FAR} for a deeper discussion. \subsection{Gravitational Wave Analysis}\label{sec:gw_analysis} The GW analysis has been performed with the cWB\footnote{cWB home page, \url{https://gwburst.gitlab.io/}; \\ public repositories, \url{https://gitlab.com/gwburst/public}\\ documentation, \url{https://gwburst.gitlab.io/documentation/latest/html/index.html}.} algorithm, a pipeline that has been widely used within the LIGO and Virgo collaborations, applied to the data of first- and second-generation detectors, in particular for the triggered search for CCSNe \cite{SNTargeted2016,em_gw_ccsne}.
Moreover, cWB does not need any GW waveform templates; it simply combines in a coherent way the excess energy extracted from the data of the involved GW interferometers. A maximum likelihood analysis identifies the GW candidates and estimates their parameters (such as time, frequency, and amplitude). The candidates' detection confidence is assessed by comparing the detection statistic $\rho$ with a distribution calculated from the background obtained with a time-shift procedure \cite{Klimenko:2015ypf, Drago:2020kic}. To build the GW data set for this work, we simulate Gaussian detector noise with a spectral sensitivity based on the expected \cite{Abbott:2020qfu} Advanced LIGO and Advanced Virgo detectors \cite{ligo, virgo}. About 16 days of data have been simulated, and time shifts have been performed to reach a background livetime of $\sim$ 20 years. Waveforms from the emission models described in Section \ref{sec:GWemission} have been generated at discrete distances of 5, 15, 20, 50, 60, and 700 kpc, each with a different incoming sky direction chosen according to the presence of possible sources. For the lower distances (5, 15, and 20 kpc) we considered a Galactic model following \cite{marektesi}, whereas for the larger distances we considered fixed directions in the sky: the Large and Small Magellanic Clouds at 50 and 60 kpc, respectively, and the Andromeda location at 700 kpc. These distances have been used for the multimessenger analysis with LENs, whereas intermediate distances between 60 and 700 kpc are also considered just to complete the efficiency curve\footnote{For distances between 60 and 700 kpc, we still considered the Andromeda direction, even though no known astronomical objects are present in that distance range.}. The injection rate is about one every 100 seconds, in order to maintain enough time separation between two consecutive waveforms.
To ensure sufficient statistics, for each distance and considered model we inject $\sim 2500$ different realizations over all sky directions. GW candidates are passed to the multimessenger analysis after applying a $\mathrm{FAR_{GW}}$ threshold of $864$ per day, which has been set to reach the required combined FAR of 1/1000 years. Efficiency curves in Fig.~\ref{fig:gw_1987_60} represent the ratio of the number of recovered injections with $\mathrm{FAR_{GW}}<864$ per day to the $\sim 2500$ total ones performed for each distance.
\begin{figure}[!ht]
\centering
\includegraphics[width=.7\linewidth]{Dataforfigures/Eff_gw_tot}
\caption{Efficiency curve of the GW sub-network Advanced LIGO--Advanced Virgo for the different GW emission models (see Tab. \protect\ref{tab:gw_models}), considering a FAR threshold of $864$/day.}
\label{fig:gw_1987_60}
\end{figure}%
\subsection{Neutrino Analysis: A New Approach to Expand the Neutrino Detection Horizon}
\label{subsec:neutrino}
In the standard LEN analysis to search for CCSNe \cite{Agafonova2014,Ikeda2007,Abe2016}, a time-series data set from a detector is binned in a sliding time window of $w=20$ seconds. The group of events inside each window is defined as a \textit{cluster}, and the number of events in the cluster is called the multiplicity $m$.
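The sliding-window clustering described above can be sketched as follows (an illustrative Python implementation under the stated 20 s window with 10 s overlap; function names are ours):

```python
import numpy as np

def cluster_multiplicities(event_times, window=20.0, stride=10.0):
    """Slide a `window`-second bin in steps of `stride` seconds over a
    time-ordered event list (consecutive windows overlap by
    window - stride = 10 s) and return the multiplicity m of each
    window's cluster as (window_start, m) pairs."""
    t = np.sort(np.asarray(event_times, dtype=float))
    if t.size == 0:
        return []
    out = []
    start = t[0]
    while start <= t[-1]:
        # count the events falling inside the current window
        m = int(np.count_nonzero((t >= start) & (t < start + window)))
        out.append((start, m))
        start += stride
    return out
```

Because consecutive windows overlap, an event group near a window boundary is never split across two disjoint bins, which is the boundary problem the overlap is designed to avoid.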
The multiplicity distribution due to background-only events is expected to follow a Poisson distribution, and the significance of the $i$-th cluster is correlated with its \textit{imitation frequency} ($f^\mathrm{im}$), defined as,
\begin{equation} f^\mathrm{im}_i (m_i)=N\times \sum_{k=m_i}^\infty P(k), \label{eq:fim_prob} \end{equation}
where the Poisson term, $P(k)$, represents the probability that a cluster of multiplicity $k$ is produced by the background and is defined as,
\begin{equation} P(k)=\frac{(f_\mathrm{bkg}w)^k e^{-f_\mathrm{bkg}w}}{k!}, \label{eq:poiss_pdf} \end{equation}
and $N=8640$ is the total number of windows in one day, taking into account that, in order to eliminate boundary problems, consecutive windows overlap by $10$ s. This imitation frequency is equivalent to the FAR in the GW analysis. Based on our previous work \cite{Casentini2018} on exploiting the temporal behavior of LEN signals from CCSNe\footnote{Recent developments based on this analysis approach are also discussed in \cite{Mattiazzi:2021zcb}.}, we characterize each cluster by a novel parameter, defined as $\xi_i\equiv \frac{m_i}{\Delta t_i}$, where $\Delta t_i$ is the duration of the $i$-th cluster, i.e., the time elapsing from the first to the last event in the cluster. In our analysis, this duration can reach a maximum value of 20 seconds, the bin time window size itself. Moreover, we consider only clusters with $m_i\geq2$, so that $\xi_i\geq0.1$. Previous results \cite{Casentini2018} show that by performing an additional cut in $\xi$ it is possible to further disentangle the simulated astrophysical signals from the background.
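The 1-parameter imitation frequency of Eq.~\ref{eq:fim_prob} translates directly into code (a minimal Python sketch; function names are ours). With the KamLAND background rate quoted below ($f_\mathrm{bkg}\simeq0.015$ Hz), the smallest multiplicity giving $f^\mathrm{im}\leq 1/100$ years is $m=8$, consistent with the single-detector discussion in Sec.~\ref{sec:single_det}:

```python
import math

N_WINDOWS_PER_DAY = 8640  # 86400 s / 10 s step between window starts

def poisson_tail(m, mu):
    """P(k >= m) for a Poisson distribution with mean mu."""
    # complement of the CDF summed up to m - 1
    return 1.0 - sum(math.exp(-mu) * mu**k / math.factorial(k)
                     for k in range(m))

def imitation_frequency(m, f_bkg, window=20.0):
    """1-parameter imitation frequency of Eq. (fim_prob): expected number
    of background clusters per day with multiplicity >= m, for a Poisson
    background rate f_bkg [Hz] in a `window`-second bin."""
    return N_WINDOWS_PER_DAY * poisson_tail(m, f_bkg * window)

# KamLAND-like background (~0.015 Hz): the smallest m with f_im below
# 1/100 yr (~2.7e-5 per day) turns out to be m = 8
```
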
However, in this work we further investigate the possibility of using the $\xi$ parameter to define a new, modified \textbf{2-parameter} ($m$ and $\xi$) imitation frequency for each cluster, called $F^\mathrm{im}$, which can be calculated as follows:
\begin{equation} F^\mathrm{im}_i (m_i,\xi_i)= N \times \sum_{k=m_i}^\infty P(k,\xi_i), \label{eq:newfim_0} \end{equation}
where the term $P(k,\xi_i)$ represents the joint probability that a cluster with a given multiplicity $k$ \textbf{and} a specific value of $\xi_i$ is produced by the background. It is convenient to rewrite the joint probability as $P(k,\xi)=P(\xi | k)P(k)$, where $P(\xi | k)$ is the conditional probability of a cluster having a specific $\xi$ value given that the cluster has multiplicity $k$. This conditional probability can be derived for each detector by taking into account the distribution of the $\xi$ values expected for clusters due only to background \cite{Casentini2018}. As a leading example, we show in Figure \ref{fig:pdf} this distribution for the Super-K detector in the form of the probability density function (PDF).
\begin{figure}[!ht]
\centering
\includegraphics[width=.7\linewidth]{figures/SuperK_PDF.png}
\caption{Probability density functions for background plus signal clusters as functions of the $\xi$ parameter and for different distances in the case of the Super-K detector. The black solid line shows the PDF for pure background clusters. Data are taken from \protect\cite{Casentini2018}.}
\label{fig:pdf}
\end{figure}
In particular, in Figure \ref{fig:pdf} the black solid line represents the normalized probability for background clusters to have a specific value of $\xi$, i.e., $\mathrm{PDF}(\xi | k)$. This $\mathrm{PDF}$ is related to the conditional probability through $P(\xi_i | k)= \int_{\xi=\xi_i}^\infty \mathrm{PDF}(\xi\geq\xi_i|k)\, d\xi$. Equation \ref{eq:newfim_0} can then be rewritten as (see App. \ref{app} and cf. Sec.~7.1.
of \cite{halimtesi} for more detail),
\begin{equation} F^\mathrm{im}_i(m_i,\xi_i)= N \times \sum_{k=m_i}^\infty P(k) \int_{\xi=\xi_i}^\infty \mathrm{PDF}(\xi\geq\xi_i|k) d\xi. \label{eq:newfim_1} \end{equation}
We show in App.~\ref{app} that the new imitation frequency converges to the standard one of Equation \ref{eq:fim_prob} for pure background clusters, while it gives a much smaller value than the Poisson expectation for signal clusters (with larger $\xi$). To build the LEN data set for this work, we simulate about 10 years of background data for each neutrino detector, assuming the following background frequencies $f_\mathrm{bkg}$: $0.012$ Hz for Super-K \cite{Abe2016}, $0.015$ Hz for KamLAND \cite{kam_bg}, and $0.028$ Hz for LVD \cite{Agafonova2014}. Background distributions as a function of the $\xi$ parameter are obtained from pure background data. The CCSN simulated signals from all the emission models described in Sec.~\ref{sec:sources} are injected into the neutrino background data and \textit{coherently} into the GW background data, i.e., for each model and each source distance, the GW and LEN signals are injected so that the starting points of the two signals are coincident in time, also taking into account the temporal delay due to the different positions of the detectors. Neutrino clusters are considered signal candidates if their $f^\mathrm{im}\leq 1/\mathrm{day}$\footnote{For the single-detector threshold, we still use the 1-parameter $f^\mathrm{im}$ described in Equation \ref{eq:fim_prob}.}, which has been set in order to reach the global FAR of $1/1000$ years.
\begin{figure}[!ht]
\centering
\includegraphics[width=.7\linewidth]{Dataforfigures/Efficiency.png}
\caption{The efficiency curves of neutrino detectors for the Hud (continuous lines) and SN1987A emission model (dashed lines).
Clusters are selected with an imitation frequency threshold of 1/day.}
\label{fig:Efficiency}
\end{figure}%
The LEN efficiency curves are shown in Figure \ref{fig:Efficiency} for all the detectors and emission models considered. They are obtained by simulating signals at different distances and injecting them, at a fixed rate, into each detector's background. Each cluster of expected neutrino events is generated through a Monte Carlo simulation and then injected into the background of each detector. After the injection, we group the output data set into clusters using the window $w$ of 20 seconds, following the procedure described in the previous sections. Finally, we select clusters with $f^\mathrm{im}\leq 1/\mathrm{day}$. In the case of a network of neutrino detectors, the expected signals from the same CCSN are injected into the different detectors' data sets by also taking into account the expected time of flight between the detectors; we call this method coherent injection. We inject signals at a rate of 1 per day. The efficiency for each CCSN distance $D$ is defined as,
\begin{equation} \eta(D) = \frac{N_\mathrm{r,s}(D)}{N_\mathrm{inj,s}(D)}. \label{eq:det_eff} \end{equation}
Here $N_\mathrm{r,s}$ is the number of recovered signals after the selection in $f^\mathrm{im}$, while $N_\mathrm{inj,s}$ is the total number of injected signals.
\subsection{Multimessenger Analysis}
\label{FAR}
As discussed in Sec.~\ref{sec:intro}, the ultimate goal of our analysis (green boxes in Fig.~\ref{fig:GWnu_scheme}) is to perform a multimessenger analysis combining both neutrino triggers and GW triggers. This is done by performing a temporal-coincidence analysis between these two trigger lists. Joint coincidences found between LEN and GW triggers are defined as ``CCSN candidates''. In order to assess the statistical significance of such candidates, we need to combine the FAR of GW triggers with that of LEN triggers.
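The temporal-coincidence step between the two trigger lists can be sketched as follows (an illustrative Python routine, not the cWB or SNEWS implementation; a nonzero time-slide `shift` gives an estimate of the accidental-coincidence background):

```python
import numpy as np

def count_coincidences(t_gw, t_nu, w_c=10.0, shift=0.0):
    """Count GW triggers with at least one LEN trigger within w_c seconds,
    i.e. pairs with |t_gw - (t_nu + shift)| <= w_c.  A nonzero `shift`
    (time slide) destroys true coincidences and so estimates the
    accidental background."""
    t_gw = np.sort(np.asarray(t_gw, dtype=float))
    t_nu = np.sort(np.asarray(t_nu, dtype=float)) + shift
    # for each GW trigger, check the nearest neutrino trigger on each side
    idx = np.searchsorted(t_nu, t_gw)
    n = 0
    for t, i in zip(t_gw, idx):
        gaps = []
        if i > 0:
            gaps.append(abs(t - t_nu[i - 1]))
        if i < t_nu.size:
            gaps.append(abs(t - t_nu[i]))
        if gaps and min(gaps) <= w_c:
            n += 1
    return n
```
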
The $\mathrm{FAR_{GW}}$ is obtained by applying the time-shifting method described in Sec.~\ref{sec:gw_analysis}. The $\mathrm{FAR_{\nu}}$ associated with a neutrino trigger is obtained by following the product method of $\mathrm{Nd}$-fold detector coincidence introduced in SNEWS \cite{Antonioli2004}, i.e.,
\begin{equation} \mathrm{FAR_{\nu}}=\mathrm{Nd}\times w_{\nu}^{\mathrm{Nd}-1}\prod_{i=1}^{\mathrm{Nd}}F^\mathrm{im}_i, \label{eq:farLEN} \end{equation}
where $\mathrm{Nd}$ is the number of neutrino detectors combined, $w_{\nu}$ is the time window used to look for coincidences in the neutrino sector, and $F^\mathrm{im}_i$ is the imitation frequency of the clusters obtained with the 2-parameter method described in Sec.~\ref{subsec:neutrino}. Finally, the multimessenger $\mathrm{FAR_{glob}}$ associated with ``CCSN candidates'' is\footnote{More discussion on the choice of coincidence analysis can be found in \cite{halimtesi}.},
\begin{equation} \mathrm{FAR_{glob}}=\mathrm{Net}\times w_c^{\mathrm{Net}-1}\prod_{X=1}^\mathrm{Net}\mathrm{FAR}_X, \label{eq:jointfar} \end{equation}
where $\mathrm{Net}$ is the number of sub-networks, $w_c$ is the coincidence window between GW and LEN signals, and $\mathrm{FAR}_X$ is the \textit{false-alarm rate} of the sub-network $X=\{\mathrm{\nu,\,GW}\}$. Furthermore, it is also straightforward to write the \textit{false-alarm probability} in terms of Poisson statistics as
\begin{equation} \mathrm{FAP}=1-e^{-\mathrm{FAR}\times\mathrm{livetime}}, \label{eq:jointfap} \end{equation}
where $\mathrm{livetime}$ is the common observing time in the considered network. Using Equations \ref{eq:jointfar} and \ref{eq:jointfap}, we can compare the performance of our 2-parameter method (Eq. \ref{eq:newfim_1}) with the standard 1-parameter method (Eq. \ref{eq:fim_prob}) in the context of the multimessenger analysis.
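Equations \ref{eq:farLEN}--\ref{eq:jointfap} translate directly into code (a minimal sketch, with all rates expressed in 1/day and windows in seconds; function names are ours):

```python
import math

def far_nu(fims_per_day, w_nu=10.0):
    """Network neutrino FAR (Eq. farLEN): Nd-fold coincidence of clusters
    with per-detector imitation frequencies fims_per_day [1/day], within
    a w_nu-second window.  Returns a rate in 1/day."""
    nd = len(fims_per_day)
    w_days = w_nu / 86400.0
    return nd * w_days**(nd - 1) * math.prod(fims_per_day)

def far_glob(far_gw, far_nu_val, w_c=10.0):
    """Global FAR (Eq. jointfar) for the Net = 2 sub-networks {nu, GW},
    with input rates in 1/day and a w_c-second coincidence window."""
    w_days = w_c / 86400.0
    return 2 * w_days * far_gw * far_nu_val

def fap(far_per_day, livetime_days):
    """False-alarm probability (Eq. jointfap) over a given livetime."""
    return 1.0 - math.exp(-far_per_day * livetime_days)
```

For a single detector ($\mathrm{Nd}=1$) the network FAR reduces to the cluster's own imitation frequency, and for small $\mathrm{FAR}\times\mathrm{livetime}$ the FAP is well approximated by that product.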
This performance can be described in terms of efficiency values, i.e., the ratio between the number of surviving candidates (after all the cuts/thresholds) due to injections and the total number of injections performed. As anticipated at the beginning of this section, we apply a threshold on $\mathrm{FAR_{glob}}$ of $1/1000$ years, in order to be very conservative, and a window $w_c=10$ seconds to accommodate all the scenarios. To reach the $\mathrm{FAR_{glob}}$ threshold, we impose the same requirements\footnote{Note that this requirement may change in subsequent SNEWS updates.} as SNEWS \cite{Antonioli2004} in the neutrino sub-network, namely $\mathrm{FAR_\nu} \leq 1/100\,\mathrm{years}$ and a temporal window $w_{\nu}=10$ seconds, so that the required threshold for the GW sector is $\mathrm{FAR_{GW}}\leq 864$ per day. Let us stress that the two windows for the search for coincidences in the LEN sector and in the global network could in principle be very different. We define a ``detection'' in a network when the FAP corresponds to a significance of at least $5\sigma$\footnote{$5\sigma$ corresponds to a FAP of $\approx 5.7\times 10^{-7}$.}.
\section{Results \label{sec:results}}
In this section, we discuss the results obtained following the procedure of the previous section. We start with the single-detector neutrino analysis results, then move on to the sub-network of neutrino detectors, and finally to the global network GW-LEN analysis.
\subsection{Improving the LEN detection capability}\label{sec:single_det}
We apply our method to analyze simulated single-detector data for KamLAND, LVD, and Super-K, taking into account both neutrino emission models described in Section \ref{sec:NUemission}. In order to quantify the improvement of the 2-parameter method over the standard one, we discuss in the following, as a leading example, the case of the KamLAND detector.
Let us consider a CCSN occurring at 60 kpc with the neutrino signal following the SN1987A model (see the first row of Tab.~\ref{tab:nu_models}). After simulating 10 years of KamLAND background data, we randomly inject these simulated signals at a rate of 1 per day (3650 in total). All clusters reconstructed by the analysis are plotted in Fig.~\ref{fig:kamland_60kpc} in the $\xi$ vs multiplicity plane. { Each blue cross in this plot represents one cluster of events generated by one injected CCSN signal. The cluster multiplicity of the injections can differ even though the CCSN distance is fixed at 60 kpc, because the Monte Carlo simulation includes the statistical Poisson fluctuations of the IBD events; moreover, the number of background events inside the 20-second window also fluctuates. For each cluster we estimate the associated imitation frequency (or $\mathrm{FAR}_\nu$\footnote{As previously stated in Sec. \ref{subsec:neutrino}, the imitation frequency can be considered as the $\mathrm{FAR}$. Thus, in this case, $\mathrm{FAR}_\nu$ is the imitation frequency for KamLAND data, either $f^\mathrm{im}$ or $F^\mathrm{im}$ depending on the context.}). In the standard 1-parameter analysis, this imitation frequency is related one-to-one to the cluster multiplicity through Poisson statistics. Thus, in order to fulfill the SNEWS requirement of $\mathrm{FAR_\nu}\leq1/100$ years, a cluster's multiplicity must be at least 8.
}
\begin{figure}[!ht]
\centering
\includegraphics[width=.7\linewidth]{Dataforfigures/kamland_60kpc.png}
\caption{The $\xi$-multiplicity map for KamLAND (as a leading example) with the simulated backgrounds (yellow triangles) and injections (blue crosses) with the SN1987A emission model at 60 kpc.}
\label{fig:kamland_60kpc}
\end{figure}%
{ In other words, the cluster should lie in the green area of Fig.~\ref{fig:kamland_60kpc}, and all the injected clusters with multiplicity $<8$ are lost by the standard 1-parameter analysis. This lower limit on the multiplicity translates into a maximum KamLAND horizon of $\simeq 65$ kpc for the emission model based on SN1987A; indeed, the average multiplicity expected for a CCSN at this distance is $\langle m_i\rangle=8$. With our 2-parameter method, the imitation frequency of each cluster is also a function of the $\xi$ value, following Eq. \ref{eq:newfim_0}, and we can determine the pairs of $\xi$ and multiplicity values needed to reach the required $\mathrm{FAR_\nu}$. In particular, the red line in Fig. \ref{fig:kamland_60kpc} corresponds to $\mathrm{FAR_{\nu}}=1/100$ years for the 2-parameter method, i.e., the threshold corresponding to the current SNEWS requirement. All the clusters above this red line fulfill the $\mathrm{FAR_{\nu}}<1/100$ years requirement, and this can happen even for multiplicities lower than 8, given a sufficiently high $\xi$ value. The red area in Fig.~\ref{fig:kamland_60kpc} represents the improvement area, i.e., clusters which pass the FAR threshold for the 2-parameter method but \textbf{not} for the 1-parameter one. In addition, all simulated background clusters (yellow triangles) lie well below the red threshold line. The result of Fig. \ref{fig:kamland_60kpc} can be interpreted as an increase in efficiency, as quantified in Tab.~\ref{tab:kam_single}. The efficiency for KamLAND to identify a signal at 60 kpc improves from $73\%$ with the 1-parameter method to $83\%$ with the 2-parameter method.
Moreover, this result also implies that we are expanding the detection horizon of the detector.}
\begin{table}[!ht]
\caption{Efficiency ($\eta$) comparison between the 1-parameter and 2-parameter methods for the single detector KamLAND at 60 kpc, for $\mathrm{FAR_{\nu}}<1/100$ years and the SN1987A model.}
\label{tab:kam_single}
\centering
{\renewcommand{\arraystretch}{1.2}
\begin{tabular}{|c | c | c | c |}
\hline
Noise & Noise & $\eta_\mathrm{1param}$ & $\eta_\mathrm{2param}$ \\
(total) & $\left[<1/100\,\mathrm{yr}\right]$ & $\left[<1/100\,\mathrm{yr}\right]$ & $\left[<1/100\,\mathrm{yr}\right]$ \\
\hline \hline
75198 & 0/75198 & \cellcolor{magenta!30}2665/3654=\textbf{72.9\%} & \cellcolor{yellow!70}3026/3654=\textbf{82.8\%} \\
\hline
\end{tabular}
}
\end{table}
The results obtained for KamLAND in this specific case are representative of all the scenarios we investigated. Similar improvements are obtained for different emission models and the various neutrino detectors; details and figures for the Super-K detector analysis are in App.~\ref{appC}.
\subsection{The sub-network of LEN detectors}
In this section, we extend the analysis to the sub-network of LEN detectors. As stated previously, we consider several neutrino detectors in this work, from which we can construct sub-networks of pair configurations. Our aim is to show the impact of the 2-parameter method in this specific sector of the analysis. Thus, we discuss the combined analysis of the KamLAND and LVD detectors, given that their efficiency curves are very similar; see Fig. \ref{fig:Efficiency}. Let us consider the neutrino signal from the Hud model and CCSNe occurring at 5, 15, 20, 50, 60, and 700 kpc from us. Injections are performed in a coherent way (Section \ref{subsec:neutrino}) in both data sets, and coincidences in time within $w_\nu=10$ s are considered as potential CCSN signals. In this sub-network of neutrino detectors, we put a threshold of $5\sigma$ on $\mathrm{FAP}_\nu$ (Eq.
\ref{eq:jointfap}). We compare in Fig. \ref{fig:efficiecny_double} the efficiencies at this threshold for the 1-parameter method (orange line) and the 2-parameter one (green line); for the points of interest, we show more details of this comparison at 50 and 60 kpc in Tab.~\ref{tab:kam_lvd_eff}. The efficiency to identify these signals at a distance of 50 kpc with $\mathrm{FAR_\nu}\le 1/100$ years is $12\%$ and $26\%$ for LVD and KamLAND, respectively. However, if the detectors work together looking for time coincidences within $w_\nu$, the number of recovered signals above the same statistical threshold grows to $\sim 43\%$ when adopting the standard SNEWS requirement for the FAR estimation (the 1-parameter method). Finally, when the $\xi$ value is also taken into account (2-parameter method), this efficiency grows to $\sim 55\%$. Following the same logic, for a CCSN at 60 kpc, the fraction of signals with $\mathrm{FAR_\nu}\le 1/100$ years is only $3\%$ and $7\%$ for LVD and KamLAND, respectively. When they work as a network, this efficiency increases to $18\%$ with the standard FAR estimation and to $26\%$ with the new method proposed in this work.
\begin{figure}[!ht]
\centering
\includegraphics[width=.7\linewidth]{Dataforfigures/efficiecny_double_5sig_tesi.png}
\caption{The efficiency of the KamLAND-LVD analysis for the Hud model when a threshold of $\mathrm{FAP_\nu}\geq5\sigma$ is applied to the network. The orange line is obtained with the old 1-parameter method, while the green line shows the improvement of the new 2-parameter method presented in this paper.}
\label{fig:efficiecny_double}
\end{figure}%
\begin{table}
\caption{Efficiency ($\eta$) comparison between the 1-parameter and 2-parameter methods for the analysis of KamLAND-LVD with the Hud neutrino model and for $\mathrm{FAP_\nu}>5\sigma$.}
\label{tab:kam_lvd_eff}
\centering
{\renewcommand{\arraystretch}{1.2}
\begin{tabular}{|c | c | c |}
\hline
Distance $\mathrm{[kpc]}$ & $\eta_\mathrm{1param}$ & $\eta_\mathrm{2param}$ \\
& $\left[>5 \sigma \right]$ & $\left[> 5 \sigma\right]$ \\
\hline \hline
50 & \cellcolor{magenta!30}\textbf{47/108=43.5\%} & \cellcolor{yellow!70}\textbf{59/108=54.6\%} \\
\hline
60 & \cellcolor{magenta!30}\textbf{19/107=17.8\%} & \cellcolor{yellow!70}\textbf{28/107=26.2\%} \\
\hline
\end{tabular}
}
\end{table}
It is important to clarify that the efficiency curves in Fig.~\ref{fig:efficiecny_double} cannot be directly compared with the ones reported in Fig.~\ref{fig:Efficiency} for the single-detector cases, because the requirement applied in terms of $\mathrm{FAR_{\nu}}$ is different. To help the reader compare the results, we briefly report the results also for the SN1987A emission model and a CCSN at $60$ kpc detected with the LVD-KamLAND network. The efficiency increases from $85\%$ to $93\%$, and these numbers can be directly compared with the ones in Tab. \ref{tab:kam_single} for the single-detector case.
\subsection{The global network of GW-LEN detectors}
After discussing the joint-neutrino analysis, we now discuss the global analysis combining neutrinos and gravitational waves in a single network. { In the global GW-LEN network, we look for temporal coincidences within $w_c=10$ seconds among GW triggers and LEN clusters. We assess their statistical significance following the approach discussed in Sec.~\ref{FAR}. In this paper, we would like to emphasize the power of combining GWs and LENs in a situation where both sub-networks of detectors gain from combining data. In other words, we highlight the case in which the detection efficiency of both the LEN and GW detectors is below $100\%$, to see the improvement in both directions. To do this, we need to combine detectors with similar detection efficiencies at the same CCSN distance.
As reported in the previous sections, the horizon of the GW network depends entirely on the assumed GW emission model; see Fig. \ref{fig:gw_1987_60}. In particular, for the model called Dim2 in Tab. \ref{tab:gw_models}, the GW detection horizon is compatible with that of the LVD and KamLAND neutrino detectors and coincides with the Large Magellanic Cloud. For this reason, we highlight the results for the global network LIGO-Virgo, LVD, and KamLAND.} Results for different GW models are reported in App.~\ref{appB}. In this section, we require a significance of $5\sigma$ to claim a real GW-$\nu$ detection. In particular, we show, for all the networks and emission models considered, the $\mathrm{FAR_{glob}}$ defined in Eq. \ref{eq:jointfar} of the ``CCSN candidates'', estimated following the standard 1-parameter method (we call this $\mathrm{FAR_{old}}$), versus the same quantity obtained with the new 2-parameter method (called $\mathrm{FAR_{new}}$).
\begin{figure}[!ht]
\centering
\includegraphics[width=.7\linewidth]{Dataforfigures/dim2_KAM1987_60.png}
\caption{{The $\mathrm{FAR_{glob}}$ of GW-$\nu$ candidates obtained with the 2-parameter method ($\mathrm{FAR_{new}}$) vs the 1-parameter one ($\mathrm{FAR_{old}}$), considering KamLAND (SN1987A model) and HLV (Dim2 model) and a CCSN at 60 kpc.}}
\label{fig:gw_1987_60_dou}
\end{figure}%
We now discuss the case of the KamLAND detector working together with the HLV (Hanford, Livingston, Virgo) GW network. In Fig.~\ref{fig:gw_1987_60_dou} we compare $\mathrm{FAR_{old}}$ with $\mathrm{FAR_{new}}$ for the case of a CCSN occurring at $60$ kpc, with a neutrino emission compatible with SN1987A and the GW emission Dim2. The magenta dashed line corresponding to $5\sigma$ significance is also plotted. The green area is the region where the 1-parameter method produces $5\sigma$-significant clusters, while the red area is the improvement region produced by the 2-parameter method.
The blue data points in the red zone are CCSN signals detected only with the new procedure. As reported in the first line of Tab.~\ref{tab:dimkamlvd}, the 2-parameter method gives us an additional $\sim12\%$ of signals that would otherwise be lost by the 1-parameter method.
\begin{figure}[!ht]
\centering
\includegraphics[width=.7\linewidth]{Dataforfigures/dim2_1987_rev_60.png}
\caption{{The $\mathrm{FAR_{glob}}$ of GW-$\nu$ candidates obtained with the 2-parameter method ($\mathrm{FAR_{new}}$) vs the 1-parameter one ($\mathrm{FAR_{old}}$), considering KamLAND-LVD (SN1987A model) and HLV (Dim2 model) for a CCSN at 60 kpc.}}
\label{fig:gw_1987_60_tri}
\end{figure}%
\begin{table}
\caption{Efficiency ($\eta$) comparison of the 1-parameter and our 2-parameter method for Figures \protect\ref{fig:gw_1987_60_dou} and \protect\ref{fig:gw_1987_60_tri}. The first column indicates the specific network of detectors considered and the adopted emission models. The second column shows results after imposing the threshold on the FAR of the GW data ($<864$/day). The third and last columns report the fraction of signals with a significance greater than $5\sigma$ (efficiency) with the 1-parameter and 2-parameter methods.}
\label{tab:dimkamlvd}
\centering
{\renewcommand{\arraystretch}{1.2}
\begin{tabular}{|c | c | c | c |}
\hline
Network $\&$ Type & Recovered & $\eta_\mathrm{1param}$ & $\eta_\mathrm{2param}$ \\
of Injections & $\mathrm{FAR_{GW}}<864/\mathrm{d}$ & $\left[>5\sigma\right]$ & $\left[>5\sigma\right]$ \\
\hline \hline
HLV-KAM & 784/2346= & 554/784= & 650/784= \\
(Dim2-SN1987A) & 33.4\% & \cellcolor{magenta!30}\textbf{70.7\%} & \cellcolor{yellow!70}\textbf{82.9\%} \\
\hline
HLV-KAM-LVD & 784/2346= & 776/784= & 784/784= \\
(Dim2-SN1987A) & 33.4\% & \cellcolor{magenta!30}\textbf{99.0\%} & \cellcolor{yellow!70}\textbf{100\%} \\
\hline
\end{tabular}
}
\end{table}
In other words, 2346 GW injections are performed and analyzed with the cWB pipeline, and 784 of them show $\mathrm{FAR_{GW}}<864$/day. These GW triggers are then used to look for temporal coincidences with the list of neutrino clusters, requiring a preliminary $\mathrm{FAR_{glob}}<1/1000$ years. We eventually chose a $5\sigma$ threshold in order to compare the results with Ch.~8 of Ref.~\cite{halimtesi}. Among the candidate GW-LEN coincidences, 554 have a statistical significance of $5\sigma$ ($\sim 71\%$ of the GW triggers) when the standard 1-parameter method is used. By applying the new 2-parameter method, we gain almost 100 more detected signals (650 vs 554), increasing the fraction of GW triggers to $\sim 83\%$. Following the same approach, we extend this result by also considering the LVD detector inside the neutrino sub-network. Results in this case are reported in Fig.~\ref{fig:gw_1987_60_tri} and in the second row of Tab.~\ref{tab:dimkamlvd}. In this case, the improvement of the 2-parameter method over the standard one is less evident. The reason is that the efficiency saturates to its maximum value of $33.4\%$, i.e., all the GW triggers are in coincidence with a neutrino candidate whose significance is $>5\sigma$.
For the sake of completeness, we also discuss a similar case adopting the Hud model for a CCSN occurring at 60 kpc. The comparison of $\mathrm{FAR_{new}}$ with $\mathrm{FAR_{old}}$ can be seen in Fig.~\ref{fig:dim2_no_60}. The efficiency comparison for this analysis is detailed in Tab.~\ref{tab:dimkamlvd_hud}, where we obtain a $\sim7\%$ improvement with our 2-parameter method. The gain in FAR is on average a factor of $O(10^3)$ between the 1-parameter and the 2-parameter methods for both emission models.
\begin{figure}[!ht]
\centering
\includegraphics[width=.7\linewidth]{Dataforfigures/dim2_no_Hud_60.png}
\caption{{The $\mathrm{FAR_{glob}}$ comparison of our 2-parameter method ($\mathrm{FAR_{new}}$) vs the 1-parameter one ($\mathrm{FAR_{old}}$) for the coincidence analysis between the joint neutrino KamLAND-LVD (Hud model) and GW injections (Dim2 model) at 60 kpc.}}
\label{fig:dim2_no_60}
\end{figure}%
\begin{table}
\caption{Efficiency ($\eta$) comparison of the 1-parameter and our 2-parameter method for Figure \protect\ref{fig:dim2_no_60}. The columns are analogous to Table \protect\ref{tab:dimkamlvd}.}
\label{tab:dimkamlvd_hud}
\centering
{\renewcommand{\arraystretch}{1.2}
\begin{tabular}{|c | c | c | c |}
\hline
Network $\&$ Type & Recovered & $\eta_\mathrm{1param}$ & $\eta_\mathrm{2param}$ \\
of Injections & $\mathrm{FAR_{GW}}<864/\mathrm{d}$ & $\left[>5\sigma\right]$ & $\left[>5\sigma\right]$ \\
\hline \hline
HLV-KAM-LVD & 784/2346= & 710/784= & 764/784= \\
(Dim2-Hud) & 33.4\% & \cellcolor{magenta!30}\textbf{90.6\%} & \cellcolor{yellow!70}\textbf{97.5\%} \\
\hline
\end{tabular}
}
\end{table}
Finally, let us summarize the results presented and their interpretation in terms of detection efficiency. In Fig.~\ref{fig:gw_1987_60} we can see that the GW network HLV, applying a threshold $\mathrm{FAR_{GW}}\le 864$/day, recovers about $33\%$ of the injected signals for a CCSN distance of 60 kpc and the Dim2 GW emission model.
Moreover, these recovered GW triggers are far from statistically significant; indeed, by requiring a $5\sigma$ threshold, the HLV detection efficiency drops to zero. Correspondingly, for the neutrino network LVD+KamLAND, the fraction of signals recovered at $\mathrm{FAP}_\nu>5\sigma$ is about $26\%$ and $85\%$ for the Hud and SN1987A models, respectively. We note that once the thresholds are set for the two sub-networks, the lower of the two efficiencies represents the upper limit for the global network. By working as a global network and by using our method, the global detection efficiency of the GW-$\nu$ network grows to $\sim 33\%$. In the case of a weaker neutrino emission, such as that of the Hud model, the detection efficiency of the GW-$\nu$ network reaches the value of $33.4\%\cdot97.5\%=32.6\%$.
\section{Derivation of equation \ref{eq:newfim_1}}
\label{app}
The conditional probability in equation \ref{eq:newfim_1} involves the normalized background distribution of $\xi$ for a given multiplicity $k$, $\mathrm{PDF}(\xi|k)=N_k f(\xi)$, where $f(\xi)$ is the unnormalized distribution and the minimum possible value of $\xi$ at multiplicity $k$ is $\xi_\mathrm{min}=k/w$. The normalization condition can be stated as,
\begin{equation}
\begin{split}
1&\equiv\int_{\xi=\xi_\mathrm{min}}^\infty \mathrm{PDF}(\xi\geq\xi_\mathrm{min}|k)\, d\xi\\
&=\int_{\xi_\mathrm{min}}^\infty N_kf(\xi)\,d\xi\\
&=\int_{k/w}^\infty N_kf(\xi)\,d\xi\\
&=N_k \int_{k/w}^\infty f(\xi)\,d\xi.
\end{split}
\label{eq:pdf_func}
\end{equation}
From equation \ref{eq:pdf_func}, the normalization factor $N_k$ can be written as,
\begin{equation}
N_k=\frac{1}{\int_{k/w}^\infty f(\xi)\,d\xi}.
\end{equation} The conditional probability (integral) in equation \ref{eq:newfim_1} can be written as, \begin{equation} \begin{split} P(\xi | k) & = \int_{\xi_i}^\infty N_k f(\xi) d\xi\\ & =1-\int_{k/w}^{\xi_i} N_k f(\xi) d\xi\\ & = 1- N_k\int_{k/w}^{\xi_i} f(\xi) d\xi\\ &= 1-\frac{\int_{k/w}^{\xi_i} f(\xi) d\xi}{\int_{k/w}^\infty f(\xi)d\xi}, \label{eq:cond_pro} \end{split} \end{equation} where the integration limits always satisfy $\xi_i\geq k/w$, with $w$ the maximum duration, which is the window (or bin width) itself. All in all, combining equations \ref{eq:newfim_1} and \ref{eq:cond_pro}, the new imitation frequency becomes, \begin{equation} \begin{split} F^\mathrm{im}_i & (w,m_i,\xi_i) = 8640\times \sum_{k=m_i}^\infty P(k)\left[1-N_k\int_{k/w}^{\xi_i}f(\xi)d\xi\right]\\ = & f^\mathrm{im}_i(w,m_i) - 8640\times\sum\limits_{k=m_i}^\infty P(k)N_{k}\int_{k/w}^{\xi_i} f(\xi)d\xi\\ = & f^\mathrm{im}_i(w,m_i)-8640\times \sum\limits_{k=m_i}^{m_i+n;\,n\leq (w\cdot\xi_i-m_i)} P(k)N_{k}\\ &\times\int_{k/w}^{\xi_i} f(\xi)d\xi. \label{eq:newfim_final} \end{split} \end{equation} Let us test this formula. Intuitively, when we have a pure background cluster with multiplicity $m_\mathrm{bkg}$ and $\xi_\mathrm{bkg}=m_\mathrm{bkg}/w$, the new imitation frequency should be very similar to the old one, namely, \begin{equation} F^\mathrm{im}_\mathrm{bkg}(w,m_\mathrm{bkg},\xi_\mathrm{bkg})\simeq f^\mathrm{im}_\mathrm{bkg}(w,m_\mathrm{bkg}), \label{eq:test_min} \end{equation} while when we have a very strong signal, with $m_\mathrm{strong}$ and $\xi_\mathrm{strong}$, \begin{equation} F^\mathrm{im}_\mathrm{strong}(w,m_\mathrm{strong},\xi_\mathrm{strong})\ll f^\mathrm{im}_\mathrm{strong}(w,m_\mathrm{strong}). \label{eq:test_max} \end{equation} We can prove both relations.
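The two limiting relations above can also be checked numerically. The following is a minimal sketch under illustrative assumptions that are not part of the text: a Poisson multiplicity distribution $P(k)$ and an exponential height density $f(\xi)=\lambda e^{-\lambda\xi}$, for which $N_k\int_{k/w}^{\xi_i}f(\xi)\,d\xi = 1-e^{-\lambda(\xi_i-k/w)}$ in closed form.

```python
import math

PREFACTOR = 8640  # same prefactor as in the imitation-frequency formula

def poisson_pmf(k, mu):
    # P(k): Poisson multiplicity distribution (illustrative assumption)
    return math.exp(-mu) * mu ** k / math.factorial(k)

def f_im(w, m, mu, kmax=150):
    # old 1-parameter imitation frequency: 8640 * sum_{k >= m} P(k)
    return PREFACTOR * sum(poisson_pmf(k, mu) for k in range(m, kmax))

def F_im(w, m, xi, mu, lam=1.0, kmax=150):
    # new 2-parameter imitation frequency with f(xi) = lam * exp(-lam * xi):
    # N_k * int_{k/w}^{xi} f = 1 - exp(-lam * (xi - k/w)) whenever xi >= k/w,
    # while the subtracted term vanishes for k/w > xi
    total = 0.0
    for k in range(m, kmax):
        cut = 1.0 - math.exp(-lam * (xi - k / w)) if xi >= k / w else 0.0
        total += poisson_pmf(k, mu) * (1.0 - cut)
    return PREFACTOR * total

w, m, mu = 1.0, 5, 2.0
print(F_im(w, m, m / w, mu), f_im(w, m, mu))  # background-like cluster
print(F_im(w, m, 50.0, mu), f_im(w, m, mu))   # strong cluster
```

For a background-like cluster ($\xi_i=m_i/w$) the two frequencies coincide, while for a strong cluster the 2-parameter value is many orders of magnitude smaller, in agreement with the two relations above.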
First, suppose that we found a cluster with $\xi_i=\xi_\mathrm{min}=m_i/w$, meaning that $n=0$ in equation \ref{eq:newfim_final}; thus, our new imitation frequency reduces to the old one, \begin{equation} \begin{split} F^\mathrm{im}_i(w,& m_i,\xi_\mathrm{min}=m_i/w)= \left[f^\mathrm{im}_i-8640 \times\sum\limits_{k=m_i}^{m_i+0} P(k)N_{k}\right.\\ &\left.\times\int_{k/w}^{m_i/w} f(\xi)d\xi \right]\\ =& \left[f^\mathrm{im}_i-8640 \times P(m_i)N_{m_i}\cancelto{\boxed{=0}}{\int_{m_i/w}^{m_i/w} f(\xi)d\xi} \right]\\ = & f^\mathrm{im}_i. \end{split} \end{equation} Meanwhile, when $\xi$ is large, $N_{k}\int_{k/w}^{\xi} f(\xi)d\xi$ approaches its upper bound $1$, and, \begin{equation} \begin{split} F^\mathrm{im}_i(w,m_i,&\xi_\mathrm{large})= f^\mathrm{im}_i(w,m_i)-8640 \times\sum\limits_{k=m_i}^{m_i+n} P(k)N_{k}\\ &\times\int_{k/w}^{\xi_\mathrm{large}} f(\xi)d\xi \\ \simeq &f^\mathrm{im}_i(w,m_i)-8640\times\sum\limits_{k=m_i}^{m_i+n} P(k)\\ =&8640\times\sum_{k=m_i}^\infty P(k)-8640\times\sum\limits_{k=m_i}^{m_i+n} P(k)\\ = & 8640\times \left[\left(1-\sum_{k=0}^{m_i-1} P(k)\right)-\sum\limits_{k=m_i}^{m_i+n} P(k)\right]\\ =& 8640\times \left(1-\sum_{k=0}^{m_i+n} P(k)\right)\\ =&8640\times \left(\sum_{k=m_i+n+1}^\infty P(k) \right)=f^\mathrm{im}_i(w,m_i+n+1), \end{split} \end{equation} and when $n\gg1$, the relation $f^\mathrm{im}(w,m_i+n+1)\ll f^\mathrm{im}(w,m_i)$ holds; therefore, \begin{equation} \begin{split} F^\mathrm{im}_i(w,m_i,\xi_\mathrm{large})&\simeq f^\mathrm{im}(w,m_i+n+1)\\ &\ll f^\mathrm{im}(w,m_i). \end{split} \end{equation} Thus, we have proven the relations \ref{eq:test_min} and \ref{eq:test_max}.$\QEDA$ \section{More GW models} \label{appB} Here, we perform our study for the other GW models, namely Dim1, Dim3, Sch1, Sch2, and Sch3. All the FAR comparisons for these models are shown in Fig.~\ref{fig:jointfig}. Dim1 and Sch1 are quite weak models, so very few injections are recovered.
Meanwhile, Dim2 (in Figure \ref{fig:dim2_no_60}), Dim3, Sch2, and Sch3 are quite strong models, so we have more recovered injections after the triple coincidence analysis, and indeed more of them pass the $5\sigma$ significance threshold. \begin{figure}[!ht] \begin{minipage}[b]{0.5\linewidth} \centering \includegraphics[width=\linewidth]{Dataforfigures/dim1_no_Hud_60.png} \subcaption{Dim1} \end{minipage} \begin{minipage}[b]{0.5\linewidth} \centering \includegraphics[width=\linewidth]{Dataforfigures/dim3_no_Hud_60.png} \subcaption{Dim3} \end{minipage} \begin{minipage}[b]{0.5\linewidth} \centering \includegraphics[width=\linewidth]{Dataforfigures/sch1_no_Hud_60.png} \subcaption{Sch1} \end{minipage} \begin{minipage}[b]{0.5\linewidth} \centering \includegraphics[width=\linewidth]{Dataforfigures/sch2_no_Hud_60.png} \subcaption{Sch2} \end{minipage} \centerline{ \begin{minipage}[b]{0.5\linewidth} \centering \includegraphics[width=\linewidth]{Dataforfigures/sch3_no_Hud_60.png} \subcaption{Sch3} \end{minipage} } \caption{\textbf{The $\mathrm{FAR_{glob}}$ comparison of our 2-parameter and 1-parameter method for the triple-coincidence analysis (KamLAND-LVD-GW) with injections at 60 kpc. We compare various GW models (see Table \ref{tab:gw_models}).}} \label{fig:jointfig} \end{figure} \begin{table*} \caption{Efficiency ($\eta$) comparison of the 1-parameter and our 2-parameter method for Figure \protect\ref{fig:jointfig}. The columns are analogous to Table \protect\ref{tab:dimkamlvd}.
\label{tab:varkamlvd}} \centering {\renewcommand{\arraystretch}{1.2} \begin{tabular}{|c | c | c | c |} \hline Type \& Number & Recovered & $\eta_\mathrm{1param}$ & $\eta_\mathrm{2param}$ \\ of Injections & $\mathrm{FAR_{GW}}<864/\mathrm{d}$ & $\left[>5\sigma\right]$ & $\left[>5\sigma\right]$ \\ \hline \hline Dim1-KAM-LVD & 46.5\% & \cellcolor{magenta!30}\textbf{37.2\%} & \cellcolor{yellow!70}\textbf{44.2\%} \\ = 86 & = 40/86 & = 32/86 & = 38/86 \\ \hline Dim3-KAM-LVD & 83.3\% & \cellcolor{magenta!30}\textbf{75.8\%} & \cellcolor{yellow!70}\textbf{81.5\%} \\ = 1386 & = 1154/1386 & = 1051/1386 & = 1130/1386 \\ \hline Sch1-KAM-LVD & 39.1\% & \cellcolor{magenta!30}\textbf{30.4\%} & \cellcolor{yellow!70}\textbf{34.8\%} \\ = 23& = 9/23 & = 7/23 & = 8/23 \\ \hline Sch2-KAM-LVD & 99.3\% & \cellcolor{magenta!30}\textbf{99.0\%} & \cellcolor{yellow!70}\textbf{99.2\%} \\ = 2329& = 2312/2329 & = 2305/2329 & = 2310/2329 \\ \hline Sch3-KAM-LVD & 99.8\% & \cellcolor{magenta!30}\textbf{99.6\%} & \cellcolor{yellow!70}\textbf{99.7\%} \\ = 2398& = 2393/2398 & = 2388/2398 & = 2391/2398 \\ \hline \end{tabular} } \end{table*} \section{Single Detector Super-K} \label{appC} For the Super-K case, we perform an analysis similar to that in Sec.~\ref{sec:single_det}. In this case, the Super-K detector is sensitive beyond the Small Magellanic Cloud. In order to show the improvement of our method, we focus on a distance of 250 kpc. We provide the $\xi$ vs multiplicity plot in Fig.~\ref{fig:superk_250pc}, where the gain region is highlighted. The efficiency comparison between the 1-parameter and the 2-parameter method for this distance is given in Tab.~\ref{tab:suk_single}. There are not many interesting targets at a distance of 250 kpc; nevertheless, this exercise serves as a proof of concept of our method for the Super-K detector. Our method could in fact play a role for a future detector like Hyper-Kamiokande \cite{hyperK} in detecting CCSNe in the Andromeda galaxy and beyond.
\begin{figure}[!ht] \centering \includegraphics[width=.7\linewidth]{Dataforfigures/superk_250kpc.png} \caption{The $\xi$-multiplicity map for Super-K (as a leading example) with the simulated background and injections with the SN1987A emission model at 250 kpc.} \label{fig:superk_250pc} \end{figure}% \begin{table}[!ht] \caption{Efficiency ($\eta$) comparison between the 1-parameter and 2-parameter method for the single detector Super-K at $D=250$ kpc, for $\mathrm{FAR_{\nu}}\leq1/100\,\mathrm{[year^{-1}]}$.} \label{tab:suk_single} \centering {\renewcommand{\arraystretch}{1.2} \begin{tabular}{|c | c | c | c |} \hline $D$ & Noise & $\eta_\mathrm{1param}$ & $\eta_\mathrm{2param}$ \\ [kpc]& $\left[<1/100\,\mathrm{yr}\right]$ & $\left[<1/100\,\mathrm{yr}\right]$ & $\left[<1/100\,\mathrm{yr}\right]$ \\ \hline \hline \cellcolor{green!30}250 & 0/49200 & \cellcolor{magenta!30}2575/3645=\textbf{70.6\%} & \cellcolor{yellow!70}3117/3645=\textbf{85.5\%} \\ \hline \end{tabular} } \end{table}
\section{Introduction} Yu. Manin~\cite[\S 14]{Manin} introduced the notion of $R$-equivalence for points of algebraic varieties over a field. This notion has been used extensively in the study of reductive algebraic groups, e.g.~\cite{CTS1,CTS2,Gi2,ACP}. In the present paper, we propose a generalized definition of $R$-equivalence that is applicable to arbitrary schemes over an affine base and allows us to extend several of the above-mentioned results to reductive group schemes in the sense of~\cite{SGA3}. Among reductive groups, two classes play a fundamental role: the tori and the semisimple simply connected isotropic groups. In these two cases the $R$-equivalence class group $G(k)/R$ of a reductive group $G$ over a field $k$ is already known to coincide with the value of a certain functor defined on the category of all commutative $k$-algebras, and even on all commutative rings $B$ over which $G$ is defined. Namely, if $G=T$ is a $k$-torus and \begin{equation} 1\to F\to P\to T\to 1 \end{equation} is a flasque resolution of $T$, then $T(k)/R$ coincides with the first Galois (or \'etale) cohomology group $H^1_{\acute et}(k,F)$~\cite{CTS1}, and $H^1_{\acute et}(-,F)$ is a functor of the above kind. If $G$ is a simply connected absolutely almost simple $k$-group having a proper parabolic $k$-subgroup, then $G(k)/R$ coincides with the Whitehead group of $G$, which is the subject of the Kneser--Tits problem, and with the group of $\mathbf{A}^1$-equivalence classes of $k$-points. Recall that the Whitehead group of $G$ over $k$ is defined as the quotient of $G(k)$ by the subgroup generated by the $k$-points of the unipotent radicals of all proper parabolic $k$-subgroups of $G$. In the setting of reductive groups over rings, the Whitehead group is also called a non-stable $K_1$-functor, which is defined as follows. Let $B$ be a ring.
If $\mathfrak{G}$ is a reductive $B$--group scheme equipped with a parabolic $B$--subgroup $\mathfrak{P}$ with unipotent radical $R_u(\mathfrak{P})$, we define the elementary subgroup $E^\mathfrak{P}(B)$ of $\mathfrak{G}(B)$ to be the subgroup generated by $R_u(\mathfrak{P})(B)$ and $R_u(\mathfrak{P}^{-})(B)$, where $\mathfrak{P}^{-}$ is an opposite $B$--parabolic to $\mathfrak{P}$. We define the non-stable $K_1$-functor $K_1^{\mathfrak{G}, \mathfrak{P}}(B)= \mathfrak{G}(B)/E^\mathfrak{P}(B)$, also called the Whitehead coset. We say that $\mathfrak{G}$ has $B$-rank $\ge n$ if every normal semisimple $B$-subgroup of $\mathfrak{G}$ contains $(\mathbf{G}_{m,B})^n$. If $B$ is semilocal and $\mathfrak{P}$ is minimal, or if the $B$-rank of $\mathfrak{G}$ is $\ge 2$ and $\mathfrak{P}$ is strictly proper (i.e. $\mathfrak{P}$ intersects properly every semisimple normal subgroup of $\mathfrak{G}$), then $E^\mathfrak{P}(B)$ is a normal subgroup independent of the specific choice of $\mathfrak{P}$~\cite[Exp. XXVI]{SGA3},~\cite{PS}. A related, more universal construction is the first Karoubi--Villamayor $K$-functor, or the group of $\mathbf{A}^1$-equivalence classes, denoted here by $\mathfrak{G}(B)/{A^1}$, where ${A^1}\mathfrak{G}(B)$ is the (normal) subgroup of $\mathfrak{G}(B)$ generated by the elements $g(0)\, g(1)^{-1}$ for $g$ running over $\mathfrak{G}(B[t])$. If $G$ is semisimple simply connected over a field $k$ and equipped with a strictly proper parabolic $k$--subgroup $P$, we know that the natural maps $$ K_1^{G, P}(k) \to G(k)/{A^1} \to G(k)/R $$ are bijective~\cite{Gi2}. In the present paper, we investigate to which extent such a result holds over a ring $B$, especially in the semilocal case and in the regular case. Our first task is to extend the notion of $R$-equivalence from rational points of algebraic varieties to integral points of a $B$-scheme in such a way that it is functorial with respect to ring homomorphisms.
This is the matter of Section \ref{sec_R_eq}; an advantage of our definition of $R$-equivalence is its nice functoriality with respect to fibrations. In the subsequent sections we study the properties of $R$-equivalence for reductive group schemes. For tori over regular rings, the Colliot-Th\'el\`ene--Sansuc computation of $R$--equivalence extends verbatim to the ring setting, see \S \ref{subsec_tori}. For non-toral reductive groups we obtain several results under the assumption that $B$ is an equicharacteristic semilocal regular domain. Namely, we show in Theorem~\ref{thm_main} that for a semisimple simply connected $B$--group $\mathfrak{G}$ of $B$-rank $\geq 2$, the maps $$ K_1^{\mathfrak{G}, \mathfrak{P}}(B) \to \mathfrak{G}(B)/{A^1} \to \mathfrak{G}(B)/R $$ are isomorphisms; if the $B$-rank of $\mathfrak{G}$ is only $\ge 1$, then the second map is an isomorphism (Theorem~\ref{thm_KV_R}). In particular, this provides several new cases where $E^\mathfrak{P}(B)=\mathfrak{G}(B)$ holds (Cor. \ref{cor_main}). Let $K$ be the fraction field of $B$. Another main result is the surjectivity of the map $\mathfrak{G}(B)/R \to \mathfrak{G}(K)/R$, assuming either that $\mathfrak{G}$ is a reductive group of $B$-rank $\ge 1$, or that $\mathfrak{G}$ has no parabolic subgroups over the residue fields of $B$ (Theorem \ref{thm:surj}). If $\mathfrak{G}$ is simply connected semisimple of $B$-rank $\ge 1$, then this map is an isomorphism (Theorem~\ref{thm_KV_R}). This statement was previously known (for $\mathfrak{G}(B)/{A^1}$ instead of $\mathfrak{G}(B)/R$) in the case where $\mathfrak{G}$ is defined over an infinite perfect subfield of $B$ and is of classical type, see~\cite[Corollary 4.3.6]{AHW} and~\cite[Example 2.3]{Mo-book}.
As a corollary, we conclude that if $\mathfrak{G}$ is a $B$-torus or a simply connected semisimple $B$-group of $B$-rank $\ge 1$, then $\mathfrak{G}$ is retract rational over $B$ if and only if $\mathfrak{G}_K$ is retract rational over $K$ (Proposition~\ref{prop_retract_torus} and Theorem~\ref{thm_vanish}). In particular, if $X$ is a connected smooth scheme over a field $k$, and $\mathfrak{G}$ is a reductive $X$-group scheme belonging to one of the two classes mentioned above, then $\mathfrak{G}$ being retract rational at the generic point of $X$ implies that all fibers $\mathfrak{G}_x$, $x\in X$, are retract rational. This is reminiscent of the recent results on the rationality of fibers of smooth proper schemes over smooth curves~\cite{KoTsch,NiSh}. The assumption that $B$ is equicharacteristic arises from the fact that we use a geometric construction developed by I. Panin for the proof of the Serre--Grothendieck conjecture for equicharacteristic semilocal regular rings~\cite[Theorem 2.5]{Pa} (see also~\cite{PaStV,FP}). Recently, K. \v{C}esnavi\v{c}ius partially generalized this construction to semilocal regular rings which are essentially smooth over a discrete valuation ring and proved the Serre--Grothendieck conjecture for quasi-split reductive groups over such rings in the unramified case~\cite{Ces-GS}. We expect that in the future this approach will yield a similar extension of our results. \medskip Another motivation for the present work was to deal with the specialization problem for $R$-equivalence~\cite[6.1]{CT},~\cite{Ko},~\cite{Gi1}. Let $A$ be a henselian local domain with residue field $k$ and fraction field $K$. Let $\mathfrak{G}$ be a reductive $A$--group scheme and denote by $G= \mathfrak{G} \times_A k$ its closed fiber. We address the questions of whether there exist a natural specialization homomorphism $\mathfrak{G}(K) /R \to G(k)/R$ and a lifting map $G(k)/R \to \mathfrak{G}(K) /R$.
It makes sense to approach these questions using the generalized $R$-equivalence for $\mathfrak{G}$, since we may investigate whether the maps in the diagram $$ \xymatrix{ G(k)/R &\ar[l] \mathfrak{G}(A)/R \ar[r] & \mathfrak{G}(K)/R } $$ are injective/surjective/bijective. In general, the only a priori evidence is the surjectivity of $\mathfrak{G}(A)/R \to G(k)/R$, which follows from the surjectivity of $\mathfrak{G}(A) \to G(k)$ (Hensel's lemma). We prove that if $\mathfrak{G}$ is a torus or a simply connected semisimple group scheme equipped with a strictly proper parabolic $A$-subgroup, then the map $\mathfrak{G}(A)/R \to G(k)/R$ is an isomorphism (Proposition~\ref{prop_torus2} and Theorem~\ref{thm:hens-isotr}). In the case where $A$ is a henselian regular local ring containing a field $k_0$, we prove that for any reductive $\mathfrak{G}$ there are two isomorphisms $$ G(k)/R \buildrel\sim\over\la \mathfrak{G}(A)/R \buildrel\sim\over\lgr \mathfrak{G}(K)/R $$ and in particular there is a well-defined specialization (resp.\ lifting) homomorphism (Theorem~\ref{thm:hens-inj}). Note that the recent results on the local-global principles over semi-global fields~\cite{CTHHKPS} crucially use the existence of an (independently constructed) specialization map for two-dimensional rings. \smallskip \bigskip \noindent{\bf Acknowledgments}. We thank K. \v{C}esnavi\v{c}ius for valuable comments, in particular about fields of representatives for henselian rings. We thank D. Izquierdo for useful conversations. \bigskip \noindent{\bf Notations and conventions.} We mainly use the terminology and notation of Grothendieck--Dieudonn\'e \cite[\S 9.4 and 9.6]{EGA1}, which agrees with that of Demazure--Grothendieck used in \cite[Exp. I.4]{SGA3}. Let $S$ be a scheme and let $\mathcal E$ be a quasi-coherent sheaf over $S$. For each morphism $f:T \to S$, we denote by $\mathcal E_{T}=f^*(\mathcal E)$ the inverse image of $\mathcal E$ by the morphism $f$.
Recall that the $S$--scheme $\mathbf{V}(\mathcal E)=\mathop{\rm Spec}\nolimits\bigl( \mathrm{Sym}^\bullet(\mathcal E)\bigr)$ is affine over $S$ and represents the $S$--functor $T \mapsto \mathop{\rm Hom}\nolimits_{\mathcal{O}_T}(\mathcal E_{T}, \mathcal{O}_T)$ \cite[9.4.9]{EGA1}. We assume now that $\mathcal E$ is locally free of finite rank and denote by $\mathcal E^\vee$ its dual. In this case the affine $S$--scheme $\mathbf{V}(\mathcal E)$ is of finite presentation (ibid, 9.4.11); also the $S$--functor $T \mapsto H^0(T, \mathcal E_{(T)})= \mathop{\rm Hom}\nolimits_{\mathcal{O}_T}(\mathcal{O}_T, \mathcal E_{T} )$ is representable by the affine $S$--scheme $\mathbf{V}(\mathcal E^\vee)$, which is also denoted by $\mathbf{W}(\mathcal E)$ \cite[I.4.6]{SGA3}. For scheme morphisms $Y \to X \to S$, we denote by $\prod\limits_{X/S}(Y/X)$ the $S$--functor defined by $$ \Bigl( \prod\limits_{X/S}(Y/X) \Bigr)(T)= Y(X \times_S T) $$ for each $S$--scheme $T$. Recall that if $\prod\limits_{X/S}(Y/X)$ is representable by an $S$-scheme, this scheme is called the Weil restriction of $Y$ to $S$. If $\mathfrak{G}$ is an $S$--group scheme locally of finite presentation, we denote by $H^1(S, \mathfrak{G})$ the set of isomorphism classes of sheaf $\mathfrak{G}$--torsors for the fppf topology. \section{$R$-equivalence for schemes}\label{sec_R_eq} \subsection{Definition} Let $B$ be a (unital, commutative) ring. We denote by $\Sigma$ the multiplicative subset of polynomials $P \in B[t]$ satisfying $P(0), P(1) \in B^\times$. Note that the evaluations at $0$ and $1$ extend from $B[t]$ to the localization $B[t]_\Sigma$. Let $\mathcal F$ be a $B$-functor in sets. We say that two points $x_0,x_1 \in \mathcal F(B)$ are \emph{directly $R$--equivalent} if there exists $x \in \mathcal F\bigl( B[t]_\Sigma \bigr)$ such that $x_0=x(0)$ and $x_1=x(1)$. The $R$-equivalence on $\mathcal F(B)$ is the equivalence relation generated by this elementary relation.
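As a simple illustration of this definition (our example, not taken from the text; the environment name below is assumed to follow the paper's \texttt{sremarks}-style conventions), any two units of $B$ are directly $R$-equivalent as points of $\mathbb{G}_{m,B}$:

```latex
\begin{sexample}
Let $\goth X=\mathbb{G}_{m,B}$ and let $x_0, x_1 \in B^\times=\goth X(B)$.
The polynomial $x(t)=(1-t)\,x_0+t\,x_1$ satisfies $x(0)=x_0 \in B^\times$
and $x(1)=x_1 \in B^\times$, hence $x(t) \in \Sigma$ and $x(t)$ becomes a
unit of $B[t]_\Sigma$, that is, a point of $\goth X(B[t]_\Sigma)$. Thus
$x_0$ and $x_1$ are directly $R$-equivalent and $\mathbb{G}_m(B)/R=\bullet$.
\end{sexample}
```

The same linear-interpolation trick reappears below for vector group schemes.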
\smallskip \begin{sremarks}\label{rem_def}{\rm (a) If $B$ is a field, then $B[t]_\Sigma$ is the semilocalization of $B[t]$ at $0$ and $1$, so that the definition agrees with the classical one. \smallskip \noindent (b) If $B$ is a semilocal ring with maximal ideals $\goth m_1,\dots, \goth m_r$, then $B[t]_\Sigma$ is the semilocalization of $B[t]$ at the maximal ideals $\goth m_1 B[t]+ t B[t]$, $\goth m_1 B[t]+ (t-1) B[t]$, \dots, $\goth m_r B[t]+ t B[t]$, $\goth m_r B[t]+ (t-1) B[t]$. In particular $B[t]_\Sigma$ is a semilocal ring. \smallskip \noindent (c) The most important case is the $B$--functor of points $h_\goth X$ of a $B$--scheme $\goth X$. In this case we write $\goth X(B) /R$ for $h_\goth X(B)/R$. \smallskip \noindent (d) If the $B$--functor $\mathcal F$ is locally of finite presentation (that is, it commutes with filtered direct limits), then two points $x_0,x_1 \in \mathcal F(B)$ are directly $R$--equivalent if there exist a polynomial $P \in B[t]$ and $x \in \mathcal F\bigl( B[t, \frac{1}{P}] \bigr)$ such that $P(0), P(1) \in B^\times$, $x_0=x(0)$ and $x_1=x(1)$. This applies in particular to the case of $h_\goth X$ for a $B$--scheme $\goth X$ locally of finite presentation. } \end{sremarks} The important point is functoriality. If $B \to C$ is a morphism of rings, then the map $\mathcal F(B) \to \mathcal F(C)$ induces a map $\mathcal F(B)/R \to \mathcal F(C)/R$. We also have the product compatibility $(\goth X \times_B \goth Y)(B)/R \buildrel\sim\over\lgr \goth X(B)/R \times \goth Y(B)/R$ for $B$-schemes $\goth X, \goth Y$. If $\mathfrak{G}$ is a $B$--group scheme (or, more generally, a $B$-functor in groups), then $R$--equivalence is compatible with left/right translations by $\mathfrak{G}(B)$; moreover, the subset $R\mathfrak{G}(B)$ of elements of $\mathfrak{G}(B)$ which are $R$-equivalent to $1$ is a normal subgroup.
It follows that the set $\mathfrak{G}(B)/R \cong \mathfrak{G}(B)/R\mathfrak{G}(B)$ is equipped with a natural group structure. \subsection{Elementary properties} We start with the homotopy property. \begin{slemma} \label{lem_homotopy} Let $\mathcal F$ be a $B$--functor. \smallskip \noindent (1) The map $\mathcal F(B)/R \to \mathcal F(B[u])/R$ is bijective. \smallskip \noindent (2) Assume that $\mathcal F$ is a $B$--functor in groups. Then two points of $\mathcal F(B)$ which are $R$--equivalent are directly $R$--equivalent. \end{slemma} \begin{proof} (1) The specialization at $0$ provides a splitting of $B \to B[u]$, so that the map $\mathcal F(B)/R \to \mathcal F(B[u])/R$ is split injective. It is then enough to establish the surjectivity. Let $f \in \mathcal F(B[u])$. We put $x(u,t)=f(ut) \in \mathcal F(B[u,t])$, so that $x(u,0) = f(0)_{B[u]}$ and $x(u,1) =f$. In other words, $f$ is directly $R$-equivalent to $f(0)_{B[u]}$, and we conclude that the map is surjective. \smallskip \noindent (2) We put $\mathcal B=B[t]_\Sigma$ and are given two elements $f, f' \in \mathcal F(B)$ which are $R$-equivalent. By induction on the length of a chain connecting $f$ and $f'$, we can assume that there exists $f_1 \in \mathcal F(B)$ which is directly $R$-equivalent to both $f$ and $f'=f_2$. Also, by translation we can assume that $f=1$. There exist $g(t), h(t) \in \mathcal F(\mathcal B)$ such that $g(0)=1$, $g(1)=f_1=h(0)$ and $h(1)= f_2$. We put $f(t)= g(t)^{-1} \, h(1-t) \in \mathcal F(\mathcal B)$. Then $f(0)=f_2$ and $f(1)=1$, so that $f=1$ and $f'=f_2$ are directly $R$-equivalent, as desired. \end{proof} \begin{slemma} \label{lem_limit} Let $\mathcal F$ be a $B$--functor locally of finite presentation and consider a direct limit $B_\infty= \limind_{\lambda \in \Lambda} B_\lambda$ of $B$--rings. Then the map $\limind_{\lambda \in \Lambda} \mathcal F(B_\lambda)/R \to \mathcal F(B_\infty)/R$ is bijective. \end{slemma} \begin{slemma} \label{lem_weil} Let $C$ be a locally free $B$--algebra of degree $d$.
Let $\mathcal E$ be a $C$--functor and consider the $B$--functor $\mathcal F= \prod_{C/B}\mathcal E$ defined by $\mathcal F(B')= \mathcal E(C \otimes_B B')$ for each $B$--algebra $B'$. Then the morphism $\mathcal F(B)/ R \to \mathcal E(C)/R$ is an isomorphism. \end{slemma} \begin{proof} We distinguish the multiplicative subsets $\Sigma_B$ and $\Sigma_C$. The map $B[t]_{\Sigma_B} \otimes_B C \to C[t]_{\Sigma_C}$ induces a map $\mathcal F(B[t]_{\Sigma_B})= \mathcal E(B[t]_{\Sigma_B} \otimes_B C) \to \mathcal E(C[t]_{\Sigma_C})$. We then get a morphism $\mathcal F(B)/ R \to \mathcal E(C)/R$. We claim that $B[t]_{\Sigma_B} \otimes_B C \to C[t]_{\Sigma_C}$ is an isomorphism. Since $C$ is locally free over $B$ of degree $d$, we can consider the norm map $N: C \to B$ as defined in \cite[Tag 0BD2, 31.17.6]{St}. It is well-known that there exists a polynomial map $N': C \to B$ such that $N(c)= c \, N'(c)$ for each $c \in C$. Given $Q(t) \in C[t]$ such that $Q(0), Q(1) \in C^\times$, the polynomial $P(t)=N_{C/B}(Q(t))$ belongs to $\Sigma_B$, so that $Q(t)$ divides $P(t)$. It follows that we have an isomorphism $C[t]_{\Sigma_B} \to C[t]_{\Sigma_C}$. Since $B[t]_{\Sigma_B} \otimes_{B[t]} {C[t]} \buildrel\sim\over\lgr C[t]_{\Sigma_B}$ \cite[Tag 00DK, 9.11.15]{St}, we conclude that the map $B[t]_{\Sigma_B} \otimes_B C \to C[t]_{\Sigma_C}$ is an isomorphism. As a counterpart, we get that the map $\mathcal F(B)/ R \to \mathcal E(C)/R$ is an isomorphism. \end{proof} \begin{slemma} \label{lem_sorite1} Let $\goth X$ be a $B$-scheme. \noindent (1) Assume that $\goth X=\mathop{\rm Spec}\nolimits(B[\goth X])$ is affine and let $\mathfrak{U}=\goth X_f$ be a principal open subset of $\goth X$, where $f \in B[\goth X]$. If two points $x_0, x_1 \in \mathfrak{U}(B)$ are directly $R$-equivalent in $\goth X(B)$, then they are directly $R$-equivalent in $\mathfrak{U}(B)$. \smallskip \noindent (2) Assume that $B$ is semilocal. Let $\mathfrak{U}$ be an open $B$--subscheme of $\goth X$.
If two points $x_0, x_1 \in \mathfrak{U}(B)$ are directly $R$-equivalent in $\goth X(B)$, then they are directly $R$-equivalent in $\mathfrak{U}(B)$. \smallskip \noindent (3) Let $\mathfrak{G}$ be a $B$--group scheme and let $\mathfrak{U}$ be an open $B$--subscheme of $\mathfrak{G}$. If $\mathfrak{U}$ is a principal open subset or if $B$ is semilocal, then the map $\mathfrak{U}(B)/R \to \mathfrak{G}(B)/R$ is injective. \end{slemma} Note that (3) was known in the field case under an assumption of unirationality \cite[Prop. 11]{CTS1}. \begin{proof} (1) Let $x_0,x_1 \in \mathfrak{U}(B)$ and let $x(t) \in \goth X(B[t]_\Sigma)$ be such that $x(0)=x_0$ and $x(1)=x_1$. We consider the polynomial $P(t)=f(x(t)) \in B[t]_\Sigma$. Since $P(0)= f(x(0)) \in B^\times$ and $P(1)= f(x(1)) \in B^\times$, it follows that $P \in \Sigma$, hence $x(t) \in \mathfrak{U}(B[t]_\Sigma)$. Thus $x_0$ and $x_1$ are directly $R$--equivalent in $\mathfrak{U}(B)$. \smallskip \noindent (2) Let $x(t) \in \goth X(B[t]_\Sigma)$ be such that $x(0)=x_0$ and $x(1)=x_1$. Since $B[t]_\Sigma$ is a semilocal ring and the closed points of $\mathop{\rm Spec}\nolimits(B[t]_\Sigma)$ map to points of $\mathfrak{U}$, it follows that $x(t) \in \mathfrak{U}(B[t]_\Sigma)$. \smallskip \noindent (3) This follows from the fact that two points of $\mathfrak{G}(B)$ are $R$-equivalent if and only if they are directly $R$-equivalent, according to Lemma \ref{lem_homotopy}.(2). \end{proof} \begin{slemma} \label{lem_sorite2} \noindent (1) Let $\mathcal{L}$ be a finitely generated locally free $B$--module and consider the associated vector group scheme $\text{\rm \bf W}(\mathcal{L})$. Let $\mathfrak{U} \subset \text{\rm \bf W}(\mathcal{L})$ be an open subset of the affine space $\text{\rm \bf W}(\mathcal{L})$. We assume that $\mathfrak{U}$ is a principal open subset or that $B$ is semilocal. Then any two points of $\mathfrak{U}(B)$ are directly $R$-equivalent.
In particular, if $\mathfrak{U}(B) \not = \emptyset$, we have $\mathfrak{U}(B)/R= \bullet$. \smallskip \noindent (2) Let $\mathfrak{G}$ be an affine $B$--scheme of finite presentation such that $H^1(B,\mathfrak{G})=1$, $H^1(B[t]_\Sigma,\mathfrak{G})=1$ and $\mathfrak{G}(B)/R=1$. Let $f: \goth Y \to \goth X$ be a morphism of $B$--schemes which is a $\mathfrak{G}$--torsor. Then the map $\goth Y(B)/R \to \goth X(B)/R$ is bijective. \smallskip \noindent (3) In (2), assume that $\mathfrak{G}$ arises by successive extensions of vector group schemes. Then the map $\goth Y(B)/R \to \goth X(B)/R$ is bijective. \smallskip \noindent (4) In (2), assume that $\mathfrak{G}$ is a split $B$--torus and that $\mathop{\rm Pic}\nolimits(B)= \mathop{\rm Pic}\nolimits(B[t]_\Sigma)=0$. Then the map $\goth Y(B)/R \to \goth X(B)/R$ is bijective. \smallskip \noindent (5) Assume that $B$ is semilocal, that $\goth T$ is a quasitrivial $B$--torus and that $f: \goth Y \to \goth X$ is a morphism of $B$--schemes which is a $\goth T$--torsor. Then the map $\goth Y(B)/R \to \goth X(B)/R$ is bijective. \end{slemma} \begin{proof} (1) According to Lemma \ref{lem_sorite1}.(3), it is enough to show that two points of $\text{\rm \bf W}(\mathcal{L})(B)=\mathcal{L}$ are $R$--equivalent. Let $x_0,x_1 \in \mathcal{L}$ and consider $x(t)= (1-t)x_0 + t x_1 \in \mathcal{L} \otimes_B B[t] \subset \mathcal{L} \otimes_B B[t]_\Sigma = \text{\rm \bf W}(\mathcal{L})( B[t]_\Sigma )$. Since $x(0)=x_0$ and $x(1)=x_1$, we conclude that $x_0$ and $x_1$ are directly $R$--equivalent. \smallskip \noindent (2) Since $H^1( B, \mathfrak{G})=1$, the map $\goth Y(B)\to \goth X(B)$ is surjective, and a fortiori the map $\goth Y(B)/R \to \goth X(B)/R$ is onto. For the injectivity, it is enough to prove that two points $y_0, y_1 \in \goth Y(B)$ whose images $x_0, x_1 \in \goth X(B)$ are directly $R$--equivalent are themselves $R$-equivalent.
Our assumption is that there exists $x(t) \in \goth X(B[t]_\Sigma)$ such that $x(0)=x_0$ and $x(1)=x_1$. Since $H^1( B[t]_\Sigma, \mathfrak{G})=1$ by assumption, we can lift $x(t)$ to some element $y(t) \in \goth Y(B[t]_\Sigma)$. Then $y_0=y(0)\, g_0$ and $y_1=y(1)\, g_1$ for (unique) elements $g_0, g_1$ of $\mathfrak{G}(B)$. Since $\mathfrak{G}(B)/R=1$, the elements $g_0$ and $g_1$ are $R$--equivalent to $1$, which enables us to conclude that $y_0$ and $y_1$ are $R$--equivalent. \smallskip \noindent (3) By induction we can assume that $\mathfrak{G}$ is a vector group scheme. In this case $H^1(C, \mathfrak{G})=1$ for each $B$--ring $C$ and $\mathfrak{G}(C)/R=1$ according to (1). Hence part (2) of the statement applies. \smallskip \noindent (4) By induction on the rank we can assume that $\mathfrak{G}=\mathbb{G}_m$. We have $H^1(B,\mathbb{G}_m)=\mathop{\rm Pic}\nolimits(B)=0$ and similarly for $H^1(B[t]_\Sigma,\mathbb{G}_m)$. Finally, we have $\mathbb{G}_m(B)/R=1$ according to (1), so that part (2) of the statement applies. \smallskip \noindent (5) follows from similar vanishing properties. \end{proof} \subsection{Retract rationality and $R$-equivalence} It is well-known that over a field there is a close relation between the retract rationality of algebraic varieties and the triviality of their $R$-equivalence class groups; see e.g. the survey~\cite{CT}. We extend Saltman's definition \cite{Sa} of retract rationality over fields to the setting of pointed $B$--schemes. \begin{sdefinition} Let $(\goth X,x)$ be a pointed $B$--scheme. We say that $(\goth X,x)$ is \begin{enumerate} \item \emph{$B$--rational} if $(\goth X,x)$ admits an open $B$-subscheme $(\mathfrak{U},x)$ such that $(\mathfrak{U},x)$ is $B$--isomorphic to an affine open subscheme of $(\mathbf{A}^N_B,0)$. \item \emph{stably $B$--rational} if $(\goth X,x)$ admits an open $B$-subscheme $(\mathfrak{U},x)$ such that \break $(\mathfrak{U} \times_B \mathbf{A}^d_B, (x,0))$ is $B$--rational for some $d \geq 0$.
\item \emph{retract $B$--rational} if $(\goth X,x)$ admits an open $B$-subscheme $(\mathfrak{U},x)$ such that $(\mathfrak{U},x)$ is a $B$--retract of an open subset of some $(\mathbf{A}^N_B,0)$. \end{enumerate} \end{sdefinition} \begin{slemma} \label{lem_retract} Assume that $B$ is semilocal with residue fields $\kappa_1$, \dots, $\kappa_c$. Let $(\mathfrak{U},x)$ be a pointed $B$-scheme which is a retract of an open subset $(\goth V,0)$ of some $(\mathbf{A}^N_B,0)$. \smallskip (1) There is an affine open $(\mathfrak{U}',x)$ of $(\mathfrak{U},x)$ which is a retract of an affine open subset $(\goth V',0)$ of some $(\mathbf{A}^N_B,0)$. \smallskip (2) We have $\mathfrak{U}(B)/R=\bullet$. \smallskip (3) The map $\mathfrak{U}(B) \to \mathfrak{U}(\kappa_1) \times \dots \times \mathfrak{U}(\kappa_c)$ is onto. \end{slemma} \begin{proof} (1) Since the map $\mathfrak{U} \to \goth V$ is a closed immersion, it is enough to deal with the case of $(\goth V,0)$. According to \cite[Tag 01ZU]{St}, there exists an open affine subset $\goth V'$ of $\goth V$ containing $0_{\kappa_1}, \dots, 0_{\kappa_c}$. Then the zero section $\mathop{\rm Spec}\nolimits(B) \to \goth V$ factorizes through $\goth V'$, so that $(\goth V',0)$ does the job. \smallskip \noindent (2) We have $\goth V(B) \not = \emptyset$ and $\goth V(B)/R=\bullet$ according to Lemma \ref{lem_sorite2}.(1). On the other hand, the functoriality of $R$-equivalence provides a section of the map $\mathfrak{U}(B)/R \to \goth V(B)/R$. We conclude that $\mathfrak{U}(B)/R=\bullet$. \smallskip \noindent (3) Once again we can work with an open subset $\goth V$ of $\mathbf{A}^N_B$. Let $(v_i)_{i=1,\dots, c}$ be an element of $\goth V(\kappa_1) \times \dots \times \goth V(\kappa_c)$. There exists $f \in B[t_1,\dots, t_N]$ such that $\goth V_f = \mathbf{A}^N_{B,f} \subseteq \goth V$ and $v_i \in \goth V_f(\kappa_i)$ for $i=1,\dots, c$.
We can deal with $\goth V_f$ and observe that $$ \goth V_f(B)=\bigl\{ b \in B^N \, \mid \, f(b) \in B^\times \bigr\} $$ maps onto $\prod\limits_{i=1,\dots, c} \, \goth V_f(\kappa_i)$. \end{proof} \begin{sdefinition} We say that a $B$--scheme $\goth X$ satisfies the \emph{lifting property} if for each semilocal $B$--ring $C$, the map $$ \goth X(C) \to \prod\limits_{\goth m \in \mathop{\rm max}\nolimits(C)} \goth X(C/\goth m) $$ is onto, where $\mathop{\rm max}\nolimits(C)$ denotes the maximal spectrum of $C$. \end{sdefinition} We extend Saltman's criterion of retract rationality \cite[th. 3.9]{Sa}. \begin{sproposition} \label{prop_retract} We assume that $B$ is semilocal with residue fields $\kappa_1$, \dots, $\kappa_c$. Let $(\goth X,x)$ be a pointed affine finitely presented integral $B$-scheme. Then the following assertions are equivalent: \smallskip $(i)$ $(\goth X,x)$ is retract $B$-rational; \smallskip $(ii)$ $(\goth X,x)$ admits an open $B$-subscheme $(\goth V,x)$ which satisfies the lifting property. \end{sproposition} \begin{sremarks}\label{rem_integral}{\rm (a) Note that the assumption $\goth X(B) \not = \emptyset$ implies that $B$ is an integral ring. \smallskip (b) Assume that $B$ is an integral ring with field of fractions $K$. Let $\goth Y$ be a flat affine $B$--scheme such that $\goth Y_K$ is integral. Then $B[\goth Y]$ injects in $K[\goth Y]$ so that $\goth Y$ is integral. In particular, if $\mathfrak{G}$ is a smooth affine $B$-group scheme such that $\mathfrak{G}_K$ is connected, then $\mathfrak{G}$ is integral. } \end{sremarks} \begin{proof} Let $\goth m_1, \dots, \goth m_c$ be the maximal ideals of $B$ and put $\kappa_i=B/\goth m_i$ for $i=1,\dots,c$. \smallskip \noindent $(i) \Longrightarrow (ii)$. By definition $(\goth X,x)$ admits an open $B$-subscheme $(\mathfrak{U},x)$ such that $(\mathfrak{U},x)$ is a $B$--retract of an open subset of some $\mathbf{A}^N_B$.
We take $\goth V=\mathfrak{U}$; it satisfies the lifting property according to Lemma \ref{lem_retract}.(3). \smallskip \noindent $(ii) \Longrightarrow (i)$. Up to replacing $\goth X$ by $\goth V$, we can assume that $\goth X$ satisfies the lifting property. Furthermore an argument as in Lemma \ref{lem_retract}.(1) allows us to assume that $\goth X$ is affine. We denote by $x_i \in \goth X(\kappa_i)$ the image of $x$. We write $B[\goth X]= B[t_1,\dots, t_N]/ \mathcal P$ for a prime ideal $\mathcal P$ of $B[t_1,\dots, t_N]$. We denote by $\eta: \mathop{\rm Spec}\nolimits(\kappa(\goth X)) \to \mathbf{A}^N_B$ the generic point of $\goth X$. We consider the semilocalization $C$ of $B[t_1,\dots, t_N]$ at the points $\eta , x_1, \dots, x_c$ of $\mathbf{A}^N_B$. Our assumption implies that the map $$ \goth X(C) \to \goth X( \kappa(\goth X)) \times \goth X(\kappa_1) \times \dots \times \goth X(\kappa_c) $$ is onto. Let $y \in \goth X(C)$ be a lifting of $(\eta, x_1,\dots, x_c)$. Then $y$ extends to a principal neighborhood $B[t_1,\dots, t_N]_f$ of $(\eta, x_1,\dots, x_c)$, i.e. there is a $B$-map $\phi: (\mathbf{A}^N_B)_{f} \to \goth X$ which satisfies $\phi(\eta)=\eta$ and $\phi(x_i)=x_i$ for $i=1,\dots,c$. The composite $\goth X_{f} \to (\mathbf{A}^N_B)_{f} \xrightarrow{\phi} \goth X$ fixes the points $\eta$, $x_1,\dots, x_c$. There exists then a function $g \in B[t_1,\dots, t_N]$ such that $g(\eta) \in \kappa(\goth X)^\times$, $g(x_i) \in \kappa_i^\times$ for $i=1,\dots,c$ and the restriction $$ \goth X_{fg} \to (\mathbf{A}^N_B)_{fg} \xrightarrow{\phi} \goth X $$ is the canonical map. Thus the open subset $(\goth X_{fg},x)$ of $(\goth X,x)$ is a $B$-retract of the principal open subscheme $(\mathbf{A}^N_B)_{fg}$ of $\mathbf{A}^N_B$. \end{proof} \begin{sremarks}{\rm (a) Under the assumptions of the proposition, it follows that the retract rationality property is of birational nature (with respect to our base point).
Furthermore, inspection of the proof shows that if $(\goth X,x)$ is $B$--retract rational, we can take $\goth V$ to be a principal open subset of $\goth X$ and it is a $B$--retract of a principal open subset of $\mathbf{A}^N_B$. \smallskip \noindent (b) The direct implication $(i) \Longrightarrow (ii)$ does not require $\goth X$ to be integral. } \end{sremarks} \begin{sexample}\label{ex_scheme_tori}{\rm Let $B$ be a semilocal ring having infinite residue fields $\kappa_1, \dots, \kappa_c$. Let $\mathfrak{G}$ be a reductive $B$--group scheme and let $\goth T$ be a maximal $B$-torus of $\mathfrak{G}$ (such a torus exists according to Grothendieck's theorem \cite[XIV.3.20 and footnote]{SGA3}). Let $\goth X=\mathfrak{G}/\goth N_\mathfrak{G}(\goth T)$ be its $B$--scheme of maximal tori. We claim that $\goth X$ satisfies the lifting property so that, if $B$ is integral, $(\goth X, \bullet)$ is retract rational over $B$ according to Proposition~\ref{prop_retract}. It is enough to show that the map $\goth X(B) \to \prod_{i=1,\dots, c} \goth X(\kappa_i)$ is onto. Let $T_i$ be a maximal $\kappa_i$-torus of $\mathfrak{G}_{\kappa_i}$ for $i=1,\dots, c$. Since $\kappa_i$ is infinite, there exists $X_i \in \mathop{\rm Lie}\nolimits(T_i)(\kappa_i) \subset \mathop{\rm Lie}\nolimits(\mathfrak{G})(\kappa_i)$ such that $T_i= C_{\mathfrak{G}_{\kappa_i}}(X_i)$ \cite[XIV.5.1]{SGA3}. We pick a lift $X \in \mathop{\rm Lie}\nolimits(\mathfrak{G})(B)$ of the $X_i$'s. Then $\goth T= C_\mathfrak{G}(X)$ is a maximal $B$--torus of $\mathfrak{G}$ which lifts the $T_i$'s. By inspection of the argument we can actually assume only that $\sharp \kappa_i \geq \mathop{\rm dim}\nolimits_{\kappa_i}( \mathfrak{G}_{\kappa_i})$ by using~\cite[Thm. 1]{Barnes}. } \end{sexample} \begin{sproposition}\label{prop_vanish} We assume that $B$ is semilocal.
Let $\mathfrak{G}$ be a $B$--group scheme with connected geometric fibers such that \smallskip $(i)$ $(\mathfrak{G},1)$ is retract $B$--rational; \smallskip $(ii)$ $\mathfrak{G}(\kappa)$ is dense in $\mathfrak{G}_\kappa$ for each residue field $\kappa$ of a maximal ideal of $B$. \smallskip \noindent Then $\mathfrak{G}(B)/R=1$. \end{sproposition} Note that $(ii)$ is satisfied if $\mathfrak{G}$ is reductive and if $B$ has infinite residue fields. \begin{proof} Let $\goth m_1, \dots, \goth m_c$ be the maximal ideals of $B$ and put $\kappa_i=B/\goth m_i$ for $i=1,\dots, c$. The algebraic groups $\mathfrak{G}_{\kappa_i}$ are then retract rational and $\mathfrak{G}(\kappa_i)$ is dense in $\mathfrak{G}_{\kappa_i}$ for $i=1,\dots, c$. Let $(\mathfrak{U}, 1)$ be an open subset of $(\mathfrak{G},1)$ which is a $B$--retract of some open of $\mathbf{A}^N_B$. Since $\mathfrak{U}(B)$ maps onto $\mathfrak{U}(\kappa_1) \times \dots \times \mathfrak{U}(\kappa_c)$ (Lemma \ref{lem_retract}.(3)) and since $\mathfrak{U}(\kappa_i)$ is dense in $\mathfrak{G}_{\kappa_i}$ (by assumption $(ii)$, $\mathfrak{U}_{\kappa_i}$ being open in $\mathfrak{G}_{\kappa_i}$), it follows that $\mathfrak{U}(B)$ is $B$-dense in $\mathfrak{G}$. In particular, there exist $u_1, \dots, u_s \in \mathfrak{U}(B)$ such that $\mathfrak{G}= u_1 \mathfrak{U} \cup \dots \cup u_s \mathfrak{U}$ so that $\mathfrak{U}(B)$ generates $\mathfrak{G}(B)$ as a group. Lemma \ref{lem_retract}.(2) shows that $\mathfrak{U}(B)/R=1$. We conclude that $\mathfrak{G}(B)/R=1$. \end{proof} \begin{sremark}\label{rem_quasitrivial} {\rm Proposition \ref{prop_vanish} applies to quasitrivial tori so that it is coherent with Lemma \ref{lem_weil}. } \end{sremark} \section{$R$-equivalence for reductive groups} \subsection{$R$-equivalence as a birational invariant} The following statement generalizes \cite[Prop. 11]{CTS1}. \begin{sproposition} \label{prop_dominant} Assume that $B$ is semilocal with infinite residue fields.
Let $\mathfrak{G}$ be a $B$--group scheme and let $f: (\goth V, v_0) \to (\mathfrak{G},1)$ be a $B$--morphism of pointed $B$--schemes such that $(\goth V,v_0)$ is an open subset of some $(\mathbf{A}^n_B, 0)$ and such that $f_{B/\goth m}$ is dominant for each maximal ideal $\goth m$ of $B$. Let $(\mathfrak{U}, 1)$ be an open neighborhood of $(\mathfrak{G},1)$. \smallskip \noindent (1) We have $f( \goth V(B)) \, . \, \mathfrak{U}(B) = \mathfrak{G}(B)$. \smallskip \noindent (2) The map $\mathfrak{U}(B)/R \to \mathfrak{G}(B)/R$ is bijective. \end{sproposition} \begin{proof} Let $\goth m_1, \dots, \goth m_c$ be the maximal ideals of $B$. \smallskip \noindent (1) From the proof of \cite[Prop. 11]{CTS1}, we have $f( \goth V(B/\goth m_i)) \, . \, \mathfrak{U}(B/\goth m_i) = \mathfrak{G}(B/\goth m_i)$ for $i=1,\dots,c$. We are given $g \in \mathfrak{G}(B)$ and denote by $g_i$ its reduction to $\mathfrak{G}(B/\goth m_i)$ for $i=1,\dots,c$; then $g_i= f(v_i) \, u_i$ for some $v_i \in \goth V(B/\goth m_i)$ and $u_i \in \mathfrak{U}(B/\goth m_i)$. Let $v \in \goth V(B)$ be a common lift of the elements $v_i$; then $f(v)^{-1}g \in \mathfrak{G}(B)$ belongs to $\mathfrak{U}(B)$. \smallskip \noindent (2) The surjectivity follows from (1) and the fact $\goth V(B)/R=\bullet$ established in Lemma \ref{lem_sorite2}.(1). On the other hand, the injectivity has been proven in Lemma \ref{lem_sorite1}.(3). \end{proof} We say that a $B$-group scheme $G$ is $B$-linear if for some $N\ge 1$ there is a closed embedding of $B$-group schemes $G\to\GL_{N,B}$. \begin{slemma} \label{lem_unirational} Assume that $B$ is semilocal with infinite residue fields $\kappa_1, \dots , \kappa_c$. Let $\mathfrak{G}$ be a reductive $B$--group scheme.
\smallskip (1) There exist maximal $B$--tori $\goth T_1, \dots, \goth T_n$ of $\mathfrak{G}$ such that the product map \break $\psi: \goth T_1 \times \dots \times \goth T_n \to \mathfrak{G}$ satisfies the following property: $\psi_{\kappa_j}$ is smooth at the origin for each $j=1,\dots,c$. Furthermore, the submodules $\mathop{\rm Lie}\nolimits(\goth T_i)(B)$ together generate $\mathop{\rm Lie}\nolimits(\mathfrak{G})(B)$ as a $B$--module. \smallskip (2) Assume furthermore that $\mathfrak{G}$ is $B$-linear. Then there exists a quasi--trivial $B$--torus $\mathfrak{Q}$ and a $B$--morphism of pointed $B$--schemes $f: (\mathfrak{Q}, 1) \to (\mathfrak{G},1)$ such that $f_{\kappa_j}$ is smooth at the origin for each $j=1,\dots,c$. \end{slemma} \begin{proof} (1) We start with the case of an infinite field $k$ and of a reductive $k$--group $G$. We know that $G(k)$ is Zariski dense in $G$. Let $T$ be a maximal $k$--torus of $G$ and let $1=g_1, g_2, \dots, g_n$ be elements of $G(k)$ such that $\mathop{\rm Lie}\nolimits(G)(k)$ is generated by the $^{g_i}\!\mathop{\rm Lie}\nolimits(T)(k)$'s. We consider the map of $k$--schemes \[\xymatrix@1{ \gamma: T^n & \to & G \\ (t_1,\dots, t_n) & \mapsto & {^{g_1}\!t_1} \dots {^{g_n}\!t_n} . }\] Its differential at $1$ is \[\xymatrix@1{ d\gamma_{1_k}: \mathop{\rm Lie}\nolimits(T)(k)^n & \to & \mathop{\rm Lie}\nolimits(G)(k) \\ (X_1,\dots, X_n) & \mapsto & {^{g_1}\!X_1} + \dots + {^{g_n}\!X_n} }\] which is onto by construction. We put $T_i= {^{g_i}\!T}$ for $i=1,\dots,n$ and observe that the product map $\psi: T_1 \times_k \dots \times_k T_n \to G$ is smooth at $1$. In this construction we are free to add more factors. In the general case, we fix $n$ large enough and maximal $\kappa_j$--tori $T_{1,j}$, \dots , $T_{n,j}$ such that the product map $\psi_j: T_{1,j} \times_{\kappa_j} \dots \times_{\kappa_j} T_{n,j} \to \mathfrak{G}_{\kappa_j}$ is smooth at $1$ for $j=1, \dots, c$.
Example \ref{ex_scheme_tori} shows that there exists a maximal $B$--torus $\goth T_{i}$ which lifts the $T_{i,j}$'s for $i=1,\dots,n$. Then the product map $\psi: \goth T_1 \times_B \dots \times_B \goth T_n \to \mathfrak{G}$ satisfies the desired requirements. Nakayama's lemma implies that the $\mathop{\rm Lie}\nolimits(\goth T_i)(B)$'s generate $\mathop{\rm Lie}\nolimits(\mathfrak{G})(B)$ as a $B$--module. \smallskip \noindent (2) We assume that $\mathfrak{G}$ is linear so that the $\goth T_i$'s are isotrivial according to \cite[Cor. 5.1]{Gi21}. Then by~\cite[Prop. 1.3]{CTS2} there exist flasque resolutions $1 \to \goth S_i \to \mathfrak{Q}_i \xrightarrow{q_i} \goth T_i \to 1$ of $\goth T_i$ where $\mathfrak{Q}_i$ is a quasi-trivial $B$--torus and $\goth S_i$ is a flasque $B$-torus for $i=1,\dots,n$. We consider the map \[\xymatrix@1{ f: \mathfrak{Q}_1 \times_B \dots \times_B \mathfrak{Q}_n & \to & \mathfrak{G} \\ (v_1,\dots, v_n) & \mapsto & q_1(v_1) \dots q_n(v_n) . }\] Since the $q_i$'s are smooth, $f=\psi \circ (q_1,\dots, q_n)$ satisfies the desired requirements. \end{proof} Putting together Lemma \ref{lem_unirational} and Proposition \ref{prop_dominant} leads to the following fact. \begin{scorollary} \label{cor_retract_red} Assume that $B$ is semilocal with infinite residue fields. Let $\mathfrak{G}$ be a $B$-linear reductive $B$--group scheme and let $(\mathfrak{U}, 1)$ be an arbitrary open subset of $(\mathfrak{G},1)$. \smallskip \noindent (1) Let $f: (\mathfrak{Q}, 1) \to (\mathfrak{G},1)$ be the morphism constructed in Lemma \ref{lem_unirational}.(2). Then $f( \mathfrak{Q}(B)) \, . \, \mathfrak{U}(B) = \mathfrak{G}(B)$. \smallskip \noindent (2) The map $\mathfrak{U}(B)/R \to \mathfrak{G}(B)/R$ is bijective. \end{scorollary} \subsection{The case of tori}\label{subsec_tori} Let $B$ be a commutative ring such that the connected components of $\mathop{\rm Spec}\nolimits(B)$ are open (e.g. $B$ is Noetherian or semilocal). Let $\goth T$ be an isotrivial $B$--torus.
According to \cite[Prop. 1.3]{CTS2}, there exists a flasque resolution $$ 1 \to \goth S \to \mathfrak{Q} \xrightarrow{\pi} \goth T \to 1, $$ that is an exact sequence of $B$--tori where $\mathfrak{Q}$ is a quasitrivial $B$--torus and $\goth S$ is a flasque $B$--torus. The following statement generalizes the corresponding result over fields due to Colliot-Th\'el\`ene and Sansuc \cite[Thm. 3.1]{CTS1}. \begin{sproposition} \label{prop_torus1} Assume additionally that $B$ is a regular integral domain. We have $\pi(\mathfrak{Q}(B)) = R \goth T(B)$ and the characteristic map $\goth T(B) \to H^1(B,\goth S)$ induces an isomorphism $$ \goth T(B)/R \buildrel\sim\over\lgr \ker\bigl( H^1(B,\goth S) \to H^1(B,\mathfrak{Q}) \bigr). $$ In particular, if $B$ is a regular semilocal domain, we have an isomorphism $\goth T(B)/R \buildrel\sim\over\lgr H^1(B,\goth S)$. \end{sproposition} \begin{proof} According to Lemma \ref{lem_sorite2}.(1) we have $\mathbb{G}_m(B)/R=1$. Then Lemma \ref{lem_weil} shows that $R\mathfrak{Q}(B)=\mathfrak{Q}(B)$. Hence the inclusion $\pi(\mathfrak{Q}(B)) \subseteq R \goth T(B)$. For the converse, it is enough to show that a point $x \in \goth T(B)$ which is directly $R$--equivalent to $1$ belongs to $\pi\bigl( \mathfrak{Q}(B) \bigr)$. By definition, there exist a polynomial $P\in B[t]$ such that $P(0), P(1) \in B^\times$ and an element $x(t) \in \goth T\bigl( B[t,1/P] \bigr)$ satisfying $x(0)=1$ and $x(1)=x$. We consider the obstruction $\delta(x(t)) \in H^1(B[t,1/P], \goth S)$. Since $\goth S$ is flasque and $B$ is a regular domain, the map $$ H^1(B, \goth S) \to H^1(B[t,1/P], \goth S) $$ is onto by~\cite[Cor. 2.6]{CTS2}. Hence $\delta(x(t))$ is the image of a class $c \in H^1(B, \goth S)$; evaluating at $t=0$ yields $c= \delta(x(0))=1$, so that $\delta(x(t))=1$ and $x(t)$ belongs to the image of $\pi: \mathfrak{Q}\bigl( B[t,1/P] \bigr) \to \goth T\bigl( B[t,1/P] \bigr)$. Thus $x=x(1)$ belongs to $\pi\bigl( \mathfrak{Q}(B) \bigr)$, as desired.
\end{proof} \smallskip Using the above result, we extend Colliot-Th\'el\`ene and Sansuc's criterion of retract rationality, see~\cite[Prop. 7.4]{CTS2} and~\cite[Prop. 3.3]{M96}. \smallskip \begin{sproposition}\label{prop_retract_torus} Let $B$ be a semilocal ring and let $\goth T$ be an isotrivial $B$--torus. Let $1 \to \goth S \to \mathfrak{Q} \xrightarrow{\pi} \goth T \to 1$ be a flasque resolution. \smallskip \noindent (1) We consider the following assertions: \smallskip $(i)$ $\goth S$ is an invertible $B$-torus (i.e. a direct summand of a quasitrivial $B$--torus); \smallskip $(ii)$ there exists an open subset $(\mathfrak{U},1)$ of $(\goth T,1)$ such that $\pi^{-1}(\mathfrak{U}) \cong \goth S \times_B \mathfrak{U}$; \smallskip $(iii)$ the pointed $B$-scheme $(\goth T, 1)$ is retract rational; \smallskip $(iv)$ $\goth T$ is $R$--trivial on semilocal rings, that is, $\goth T(C)/R=1$ for each semilocal $B$-ring $C$; \smallskip $(iv')$ $\goth T$ is $R$--trivial on fields, that is, $\goth T(F)/R=1$ for each $B$-field $F$; \smallskip $(v)$ $\goth T$ satisfies the lifting property. \smallskip \noindent Then we have the implications $(i) \Longrightarrow (ii) \Longrightarrow (iii) \Longrightarrow (iv) \Longrightarrow (iv') \Longrightarrow (v)$. Furthermore if $B$ is integral, we have the equivalences $(iii) \Longleftrightarrow (iv) \Longleftrightarrow (iv') \Longleftrightarrow (v)$. \smallskip \noindent (2) We assume furthermore that $B$ is a normal domain with fraction field $K$. We consider the following assertions: \smallskip $(vi)$ $\goth S_K$ is an invertible $K$-torus; \smallskip $(vii)$ $\goth T_K$ is $R$--trivial on semilocal rings, that is, $\goth T(A)/R=1$ for each semilocal $K$-ring $A$; \smallskip $(vii')$ $\goth T_K$ is $R$--trivial on fields, that is, $\goth T(F)/R=1$ for each $K$--field $F$; \smallskip $(viii)$ the pointed $K$-scheme $(\goth T_K, 1)$ is retract rational.
\smallskip \noindent Then the assertions $(i)$, $(ii)$, $(iii)$, $(iv)$, $(iv')$, $(v)$, $(vi)$, $(vii)$, $(vii')$ and $(viii)$ are equivalent. \end{sproposition} \begin{proof} Let $\goth m_1, \dots, \goth m_c$ be the maximal ideals of $B$ and put $\kappa_i=B/\goth m_i$ for $i=1, \dots, c$. \smallskip \noindent (1) $(i) \Longrightarrow (ii)$. Let $C$ be the semilocal ring of $\goth T$ at the points $1_{\kappa_1}, \dots, 1_{\kappa_c}$ of $\goth T$. Since $\goth S$ is invertible, we have $H^1(C, \goth S)=1$. In particular the $\goth S$--torsor $\pi: \mathfrak{Q} \to \goth T$ admits a splitting $s: \mathop{\rm Spec}\nolimits(C) \to \mathfrak{Q}$. It follows that there exists an open neighborhood $(\mathfrak{U}, 1)$ of $(\goth T,1)$ such that the $\goth S$--torsor $\pi: \mathfrak{Q} \to \goth T$ admits a splitting $s: \mathfrak{U} \to \mathfrak{Q}$. \smallskip \noindent $(ii) \Longrightarrow (iii)$. We are given an open neighborhood $(\mathfrak{U},1)$ of $(\goth T,1)$ such that $\pi^{-1}(\mathfrak{U}) \cong \goth S \times_B \mathfrak{U}$. Thus $(\mathfrak{U},1)$ is a retract of $\pi^{-1}(\mathfrak{U})$ which is open in some affine $B$--space, so that $(\goth T,1)$ is retract rational. \smallskip \noindent $(iii) \Longrightarrow (iv)$. Let $\kappa_1, \dots, \kappa_c$ be the residue fields of the maximal ideals of $B$. If all $\kappa_i$'s are infinite, Proposition \ref{prop_vanish} shows that $\goth T$ is $R$--trivial. For the general case, it is enough to show that $\goth T(B)/R=1$. We may assume that $\kappa_1, \dots, \kappa_b$ are finite fields and that $\kappa_{b+1}, \dots , \kappa_c$ are infinite. The maps $\mathfrak{Q}(\kappa_i) \to \goth T(\kappa_i)$ are onto for $i=1,\dots, b$ (by Lang's theorem, $H^1(\kappa_i, \goth S)=1$ since $\kappa_i$ is finite and $\goth S_{\kappa_i}$ is connected) so that there exists a finite subset $\Theta$ of $\mathfrak{Q}(B)$ mapping onto $\prod_{i=1, \dots , b} \goth T(\kappa_i)$. Let $(\mathfrak{U}, 1)$ be an open subset of $(\goth T,1)$ which is a $B$--retract of some open of $(\mathbf{A}^N_B,0)$.
Reasoning as in the proof of Proposition \ref{prop_vanish}, we observe that $\goth T$ is covered by the translates $u \, \mathfrak{U}$ for $u \in \mathfrak{U}(B) \, \pi(\Theta)$, so that $\goth T(B)$ is generated by $\mathfrak{U}(B)$ and $\pi(\Theta)$; since $\mathfrak{U}(B)/R=1$ and $\pi(\Theta) \subseteq \pi(\mathfrak{Q}(B)) \subseteq R\goth T(B)$, we get $\goth T(B)/R=1$. \smallskip \noindent $(iv) \Longrightarrow (iv')$. Obvious. \smallskip \noindent $(iv') \Longrightarrow (v)$. We assume that $\goth T$ is $R$--trivial on fields. It is enough to show that $\goth T(B)$ maps onto $\goth T(\kappa_1) \times \dots \times \goth T(\kappa_c)$. We consider the commutative diagram $$ \xymatrix@C=20pt{ \mathfrak{Q}(B) \ar[r] \ar[d] & \goth T(B) \ar[d] \\ \prod_i \mathfrak{Q}(\kappa_i) \ar[r] & \prod_i \goth T(\kappa_i) . } $$ The left vertical map is onto since $\mathfrak{Q}$ satisfies the lifting property and the bottom horizontal map is onto by Proposition~\ref{prop_torus1}, since $\goth T(\kappa_i)/R=1$. Thus the right vertical map is onto. \smallskip Finally, if $B$ is integral, then $\goth T$ is an integral scheme according to Remark \ref{rem_integral}.(b). Proposition \ref{prop_retract} shows that $(v)$ implies $(iii)$. \smallskip \noindent (2) Let $B'$ be a connected Galois cover of $B$ which splits $\goth T$ and $\goth S$. Let $\Gamma$ be its Galois group. Then $B'$ is a normal ring and its fraction field $K'$ is a Galois extension of $K$ with group $\Gamma$. According to \cite[Lemme 2, (vi)]{CTS1}, $(i)$ (resp.\ $(vi)$) is equivalent to saying that the $\Gamma$--module $\widehat \goth S(B')$ (resp.\ $\widehat \goth S(K')$) is invertible. Since $\widehat \goth S(B')=\widehat \goth S(K')$ we get the equivalence $(i) \Longleftrightarrow (vi)$. The statement over fields~\cite[Prop. 7.4]{CTS2} provides the equivalences $(vi) \Longleftrightarrow (vii') \Longleftrightarrow (viii)$.
Taking into account the first part of the Proposition and the obvious implications, we have the following picture {\small $$ \xymatrix@C=10pt{ (i) \ar@2{<->}[d] & \Longrightarrow & (ii) \ar@2{->}[d] & \Longrightarrow & (iii) \ar@2{->}[d] & \Longleftrightarrow & (iv) & \Longleftrightarrow & (iv') & \Longleftrightarrow & (v) \\ (vi) & \Longrightarrow & (vii) & \Longrightarrow & (vii') & \Longleftrightarrow & (viii) & \Longleftrightarrow & (vi)} $$ } Thus the assertions $(i)$, $(ii)$, $(iii)$, $(iv)$, $(iv')$, $(v)$, $(vi)$, $(vii)$, $(vii')$ and $(viii)$ are equivalent. \end{proof} \subsection{Parabolic reduction} Let $B$ be a ring and let $\mathfrak{G}$ be a reductive $B$--group scheme. Let $\mathfrak{P}$ be a parabolic $B$--subgroup of $\mathfrak{G}$ together with an opposite parabolic $B$--subgroup $\mathfrak{P}^{-}$. We know that $\mathfrak{L}= \mathfrak{P} \times_\mathfrak{G} \mathfrak{P}^{-}$ is a Levi subgroup of $\mathfrak{P}$. We consider the big Bruhat cell $$ \Omega:=R_u(\mathfrak{P}^{-}) \times_B \mathfrak{L} \times_B R_u(\mathfrak{P}) \subseteq \mathfrak{G}. $$ \begin{slemma} \label{lem_parabolic0} We have $$ \mathfrak{L}(B)/R \buildrel\sim\over\lgr \mathfrak{P}(B)/R \hookrightarrow \mathfrak{G}(B)/R. $$ \end{slemma} \begin{proof} The left isomorphism follows from Lemma \ref{lem_sorite2}.(3). According to Lemma \ref{lem_cell}, the big cell $\Omega$ is a principal open subset of $\mathfrak{G}$ so $ \Omega(B)/R$ injects in $\mathfrak{G}(B)/R$ according to Lemma \ref{lem_sorite1}.(1). Since $R_u(\mathfrak{P})$ and $R_u(\mathfrak{P}^{-})$ are extensions of vector group schemes, the map $\Omega(B)/R \to \mathfrak{P}(B)/R$ is bijective by Lemma \ref{lem_sorite2}.(3), hence $\mathfrak{P}(B)/R$ injects into $\mathfrak{G}(B)/R$. \end{proof} According to \cite[Th. 7.3.1]{Gi4}, there exists a homomorphism $\lambda: \mathbb{G}_{m,B} \to \mathfrak{G}$ such that $\mathfrak{P}=\mathfrak{P}_\mathfrak{G}(\lambda)$ and $\mathfrak{L}=\goth Z_\mathfrak{G}(\lambda)$.
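\begin{sexample}{\rm (An orienting special case, standard and not used in the sequel.) For $\mathfrak{G}=\mathrm{SL}_{2,B}$ with $\mathfrak{P}$ the Borel subgroup of upper triangular matrices and $\mathfrak{P}^{-}$ the lower triangular Borel, $\mathfrak{L}$ is the diagonal torus $\mathbb{G}_m$ and the multiplication map identifies the big cell with the principal open locus where the $(1,1)$ entry is invertible: $$ \begin{pmatrix} 1 & 0 \\ u & 1 \end{pmatrix} \begin{pmatrix} a & 0 \\ 0 & a^{-1} \end{pmatrix} \begin{pmatrix} 1 & v \\ 0 & 1 \end{pmatrix} = \begin{pmatrix} a & av \\ ua & uav+a^{-1} \end{pmatrix}, \qquad \Omega = \bigl\{ g \in \mathrm{SL}_2 \, \mid \, g_{11} \in \mathbb{G}_m \bigr\}. $$ Here one can take $\lambda(t)=\mathop{\rm diag}\nolimits(t,t^{-1})$, for which $\mathfrak{P}=\mathfrak{P}_\mathfrak{G}(\lambda)$ and $\mathfrak{L}=\goth Z_\mathfrak{G}(\lambda)$. } \end{sexample}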
\begin{slemma}\label{lem_parabolic} We assume that there exists a central split $B$--subtorus $\goth S$ of $\mathfrak{L}$ through which $\lambda$ factors. Assume that $\mathfrak{G}(B)$ is generated by $\mathfrak{P}(B)$ and $\mathfrak{P}^{-}(B)$ and that $\mathop{\rm Pic}\nolimits(B)= \mathop{\rm Pic}\nolimits(B[t]_\Sigma)=0$. Then we have isomorphisms $$ \xymatrix@C=30pt{ \mathfrak{G}(B)/R & \ar[l]_\sim \mathfrak{L}(B)/R \ar[r]^{\sim\quad} & \bigl( \mathfrak{L}/\goth S\bigr)(B)/R . } $$ \end{slemma} \begin{proof} The assumption implies that the image of $\mathfrak{L}(B)$ generates $\mathfrak{G}(B)/R$ so that the injective map $\mathfrak{L}(B)/R \to \mathfrak{G}(B)/R$ is onto, hence an isomorphism. The right-hand isomorphism follows from Lemma \ref{lem_sorite2}.(4). \end{proof} \begin{scorollary}\label{cor_parabolic} We assume that $B$ is semilocal and connected. Let $\goth S$ be the central maximal split $B$--subtorus of $\mathfrak{L}$ (as defined in \cite[XXVI.7.1]{SGA3}). Then we have isomorphisms $$ \xymatrix@C=30pt{ \mathfrak{G}(B)/R & \ar[l]_\sim \mathfrak{L}(B)/R \ar[r]^{\sim\quad} & \bigl( \mathfrak{L}/\goth S\bigr)(B)/R . } $$ \end{scorollary} \begin{proof} The second condition of Lemma \ref{lem_parabolic} is satisfied according to \cite[XXVI.5.2]{SGA3}. Also we have $\mathop{\rm Pic}\nolimits(B)= \mathop{\rm Pic}\nolimits(B[t]_\Sigma)=0$ since $B$ and $B[t]_\Sigma$ are semilocal rings. Thus Lemma \ref{lem_parabolic} applies. \end{proof} \section{$\mathbf{A}^1$-equivalence and non-stable $K_1$-functors} \subsection{$\mathbf{A}^1$-equivalence} Let $B$ be an arbitrary (unital, commutative) ring. Let $\mathcal F$ be a $B$-functor in sets. We say that two points $x_0,x_1 \in \mathcal F(B)$ are \emph{directly $\mathbf{A}^1$--equivalent} if there exists $x \in \mathcal F\bigl( B[t]\bigr)$ such that $x_0=x(0)$ and $x_1=x(1)$. The (naive) $\mathbf{A}^1$-equivalence on $\mathcal F(B)$ is the equivalence relation generated by this relation. Let $G$ be a $B$--group scheme.
We denote the equivalence class of $1\in G(B)$ by $\r0G(B)$ and the group of $\mathbf{A}^1$-equivalence classes by $$ G(B)/{A^1}=G(B)/\r0G(B). $$ This group is functorial in $B$, and the functor $G(-)/{A^1}$ on the category of $B$-schemes is sometimes called the first Karoubi-Villamayor $K$-theory functor corresponding to $G$, and denoted by $KV_1^G(B)$~\cite{J,AHW}. Clearly, for any ring $B$ we have a canonical surjection $$ G(B)/{A^1} \twoheadrightarrow G(B)/R. $$ The analog of Lemma~\ref{lem_homotopy} is true for $\mathbf{A}^1$-equivalence. In particular, two points $g_0,g_1 \in G(B)$ are $\mathbf{A}^1$-equivalent if and only if they are directly $\mathbf{A}^1$--equivalent. \subsection{Patching pairs and $\mathbf{A}^1$-equivalence} Let $R \to R'$ be a morphism of rings and let $f \in R$. We say that $(R \to R',f)$ is a {\it patching pair} if $R'$ is flat over $R$ and $R/fR \buildrel\sim\over\lgr R'/fR'$. The other equivalent terminology is to say that \begin{equation}\label{eq:patching} \xymatrix{ R \ar[d] \ar[r] & R_f \ar[d] \\ R' \ar[r] & R'_f } \end{equation} is a {\it patching diagram}. In this case, there is an equivalence of categories between the category of $R$-modules and the category of glueing data $(M',M_1,\alpha_1)$ where $M'$ is an $R'$--module, $M_1$ an $R_f$--module and $\alpha_1: M' \otimes_{R'} R'_f \buildrel\sim\over\lgr M_1 \otimes_{R_f} R'_f$ \cite[Tag 05ES]{St}. Note that this notion of a patching diagram is less restrictive than the one used by Colliot-Th\'el\`ene and Ojanguren in~\cite[\S 1]{CTO}. \begin{sexamples}\label{ex_patch}{\rm (a) (Zariski patching) Let $g \in R$ such that $R=fR+gR$. Then \break $(R \to R_g,f)$ is a patching pair. \smallskip \noindent (b) Assume that $R$ is noetherian. If $\widehat R=\limproj R/f^nR$, then $(R \to \widehat R, f)$ is a patching pair according to \cite[Tags 00MB, 05GG]{St}.
\smallskip \noindent (c) Assume that $R=k[[x_1,\ldots,x_n]]$ is a ring of formal power series over a field and let $h$ be a monic Weierstrass polynomial of $R[x]$ of degree $\geq 1$. Then $(R[x] \to R[[x]],h)$ is a patching pair, see \cite[page 803]{BR}. } \end{sexamples} We recall that $(R \to R',f)$ is a {\it glueing pair} if $R/f^n R \buildrel\sim\over\lgr R'/ f^n R'$ for each $n \geq 1$ and if the sequence \begin{equation}\label{complex} 0 \to R \to R_f \oplus R' \xrightarrow{\gamma} R'_f \to 0 \end{equation} is exact where $\gamma(x,y)=x -y$ \cite[Tag 01FQ]{St}. \smallskip \begin{sexamples} {\rm \noindent (a) A patching pair is a glueing pair: we have $R/f^n R \buildrel\sim\over\lgr R'/f^n R'$ for all $n \geq 1$ \cite[Tag 05E9]{St} and the complex \eqref{complex} is exact at $0$, $R$ and $R_f \oplus R'$ \cite[Tag 05EK]{St}. Since the map $\gamma$ is surjective, the complex is exact. \smallskip \noindent (b) If $f$ is a non zero divisor in $R$ and $\widehat R=\limproj R/f^nR$, then $(R \to \widehat R, f)$ is a glueing pair \cite[Tag 0BNS]{St}. } \end{sexamples} If $(R \to R',f)$ is a glueing pair, the Beauville-Laszlo theorem provides an equivalence of categories between the category of flat $R$-modules and the category of glueing data $(M',M_1,\alpha_1)$ where $M'$ is a flat $R'$--module, $M_1$ a flat $R_f$--module and $\alpha_1: M' \otimes_{R'} R'_f \buildrel\sim\over\lgr M_1 \otimes_{R_f} R'_f$~\cite[Tags 0BP2, 0BP7 and 0BNX]{St}. In particular we can patch torsors under an affine flat $R$--group scheme $G$ in this setting: base change induces an equivalence from the category of $G$-torsors to that of triples $(T,T', \iota)$ where $T$ is a $G$-torsor over $\mathop{\rm Spec}\nolimits(R_f)$, $T'$ a $G$--torsor over $\mathop{\rm Spec}\nolimits(R')$ and $\iota: T \times_{R_f} R'_f \buildrel\sim\over\lgr T' \times_{R'} R'_f$ an isomorphism of $G$--torsors over $\mathop{\rm Spec}\nolimits(R'_f)$, see~\cite[lemma 2.2.10]{BC}.
This is a generalization of~\cite[proposition 2.6]{CTO}. More specifically, there is an exact sequence of pointed sets \begin{equation}\label{eq:patch-torsors} 1 \to G(R') \backslash G(R'_f) / G(R_f) \to H^1(R,G) \to H^1(R' ,G) \times H^1(R_f,G). \end{equation} This sequence can be used to relate the $\mathbf{A}^1$-equivalence on $G$ with local triviality of $G$-torsors. \begin{slemma}\label{lem_KV} Let $G$ be a flat $B$-linear $B$--group scheme. Let $h \in B$. \medskip \noindent (1) Let $(B \to A,h)$ be a glueing pair and assume that \begin{equation}\label{eq:torsor-ker-cond} \ker\bigl( H^1(B[x], G) \to H^1(B_h[x], G) \bigr)=1. \end{equation} Then we have $\r0G( A_h) = \r0G(A) \, \r0G(B_h)$ and the map \begin{equation}\label{eq:BB-square23} \ker\bigl( G(B)/{A^1} \to G(B_h)/{A^1} \bigr) \, \to \, \ker\bigl( G(A)/{A^1} \to G(A_h)/{A^1} \bigr) \end{equation} is surjective. \smallskip \noindent (2) Assume that $h$ is a non zero divisor in $B$. Let $\widehat B= \limproj_{n \geq 0} \, B/h^{n+1}B$ be the completion. Then we have the inclusion $\r0G(\widehat B_h) \, \subseteq \, \r0G(\widehat B) \, \r0G(B_h)$ and the map \begin{equation}\label{eq:BB-square25} \ker\bigl( G(B)/{A^1} \to G(B_h)/{A^1} \bigr) \, \to \, \ker\bigl( G(\widehat B)/{A^1} \to G\bigl( \widehat B_h \bigr)/{A^1} \bigr) \end{equation} is surjective. Assuming furthermore that $G( \widehat B_h) = G(\widehat B) \, \r0G(\widehat B_h)$, we have $G(B_h) = G( B) \, \r0G(B_h)$. \end{slemma} \begin{proof}(1) Since $(B[x] \to A[x],h)$ is a glueing pair, we have an exact sequence of pointed sets $$ 1 \to G(B_h[x]) \backslash G(A_h[x]) / G(A[x]) \to H^1(B[x] ,G) \to H^1(B_h[x] ,G) \times H^1(A[x],G). $$ Our assumption provides a decomposition $G(A_h[x]) = G(A[x]) \, G(B_h[x])$, and a fortiori a decomposition $G(A_h) = G(A) \, G(B_h)$. Let $x \in \r0G( A_h)$. Then there exists $g \in G(A_h[x])$ such that $g(0)=1$ and $g(1)=x$. We can then decompose $g= g_1 g_2$ with $g_1 \in G(A[x])$, $g_2 \in G(B_h[x])$.
Since $1= g_1(0) g_2(0)$ we can assume that $g_1(0)=1$ and $g_2(0)=1$. It follows that $x \in \r0G(A) \, \r0G(B_h)$. This establishes the equality $\r0G(A_h) = \r0G(A) \, \r0G(B_h)$. To show the surjectivity of the map \eqref{eq:BB-square23}, we are given $[x] \in G(A)/{A^1}$ and $[y] \in G(B_h)/{A^1}$ such that $[x]= [y] \in G(A_h)/{A^1}$. The preceding identity allows us to assume that $x=y \in G(A_h)$. Since $(B \to A,h)$ is a glueing pair, $x,y$ define a point $g \in G(B)$. \smallskip \noindent (2) This is the special case $A=\widehat B$. The last fact is a straightforward consequence. \end{proof} The condition~\eqref{eq:torsor-ker-cond} in Lemma~\ref{lem_KV} is not easy to check in general. Later on we will discuss a case where it is known to hold as a corollary of the work of Panin on the Serre--Grothendieck conjecture~\cite{Pa,Pa20}. However, Moser obtained the following unconditional result in the special case of Example~\ref{ex_patch} (a). \begin{slemma}\label{lem_moser} (Moser, \cite[lemma 3.5.5]{Moser}, see also \cite[lemma 3.2.2]{AHW}) Let $G$ be a finitely presented $B$-group scheme which is $B$-linear. \smallskip \noindent (1) Let $f_0,f_1 \in B$ be such that $Bf_0+Bf_1=B$. Let $g \in G(B_{f_0 f_1}[T ])$ be an element such that $g(0) = 1$. Then there exists a decomposition $g = h_0^{-1} \, h_1$ with $h_i \in G(B_{f_i}[T ])$ and $h_i(0) = 1$ for $i=0,1$. \smallskip \noindent (2) The sequence of pointed sets $$ \xymatrix@C=20pt{ G(B)/{A^1} \ar[r] & G(B_{f_0})/{A^1} \times G(B_{f_1})/{A^1} \ar@<2pt>[r]\ar@<-2pt>[r] & G(B_{f_0f_1})/{A^1} } $$ is exact at the middle term. \end{slemma} \begin{proof} (1) The original reference treats the case where $B$ is noetherian; the general case holds by the usual noetherian approximation trick. \smallskip \noindent (2) Let $[g_0] \in G(B_{f_0})/{A^1}$ and let $[g_1] \in G(B_{f_1})/{A^1}$ be such that $[g_0]=[g_1] \in G(B_{f_0f_1})/{A^1}$.
Then there exists $g \in G(B_{f_0f_1}[T])$ such that $g_0 \, g_1^{-1}= g(1) \in G(B_{f_0f_1})$ and $g(0)=1$. By (1) we write $g = h_0^{-1} \, h_1$ with $h_i \in G(B_{f_i}[T ])$ and $h_i(0) = 1$ for $i=0,1$, so that $g_0 \, g_1^{-1} = h_0^{-1}(1) \, h_1(1)$. Since $[h_i(1) g_i]= [g_i] \in G(B_{f_i})/{A^1}$, we can replace $g_i$ by $h_i(1) g_i$ and then deal with the case $g_0 = g_1 \in G(B_{f_0f_1})$. This defines a unique element $m \in G(B)$ such that $[m]= [g_i] \in G(B_{f_i})/{A^1}$. \end{proof} \begin{sremark}{\rm By induction we get the following generalization. Let $f_1, \dots, f_c \in B$ be such that $Bf_1+ \dots + Bf_c=B$ and put $f=f_1\dots f_c$. Let $g \in G(B_{f}[T ])$ be an element such that $g(0) = 1$. Then there exists a decomposition $g = h_1 \dots h_c$ with $h_i \in G(B_{f_i}[T ])$ and $h_i(0) = 1$ for $i=1, \dots, c$. It follows that the image of $G(B)/{A^1}$ in $\prod_{i=1,\dots,c} G(B_{f_i})/{A^1}$ consists of tuples of elements having the same image in $G(B_f)/{A^1}$. } \end{sremark} Since Lemma~\ref{lem_moser} does not presuppose any results about $G$-torsors, Moser was able to use it to establish a local-global principle for torsors~\cite[3.5.1]{Moser} generalizing Quillen's local-global principle for finitely presented modules~\cite[Theorem 1]{Q}. In our context, we combine Lemma~\ref{lem_moser} with a theorem of Colliot-Th\'el\`ene and Ojanguren to obtain the following result. \begin{sproposition} \label{prop_moser} Let $k$ be an infinite field and let $G$ be an affine $k$--algebraic group. Let $A$ be the local ring at a prime ideal of a polynomial algebra $k[t_1, \dots, t_d]$. Then the homomorphism $$ G( A)/{A^1} \to G\bigl( k(t_1, \dots, t_d) \bigr)/{A^1} $$ is injective. \end{sproposition} \begin{proof} Our plan is to use the method of Colliot-Th\'el\`ene and Ojanguren \cite[\S 1]{CTO}, as abstracted in Appendix \ref{appendix_cto}. We consider the $k$--functor in groups $B \mapsto F(B)=G( B)/{A^1}$.
The claim follows from Proposition~\ref{prop_cto} once properties $\bf P_1$, $\bf P_2$ and $\bf P'_3$ are checked for the $k$--functor $F$. The property $\bf P_1$ is clear, since $G$ is finitely presented over $k$. Let $L$ be a $k$--field and let $d \geq 0$ be an integer. We have $F(L)= F\bigl( L[t_1,\dots, t_d] \bigr)$, and $F(L)$ injects in $ F\bigl( L(t_1,\dots, t_d) \bigr)$, since every nonzero polynomial over the infinite field $L$ takes an invertible value. Property $\bf P_2$ is established. On the other hand, Lemma \ref{lem_moser}.(2) establishes the surjectivity of the map $$ \ker\bigl( G(B)/{A^1} \to G(B_{f_0})/{A^1} \bigr) \to \ker\bigl( G(B_{f_1})/{A^1} \to G(B_{f_0f_1})/{A^1} \bigr) $$ whenever $Bf_0+Bf_1=B$, so that the Zariski patching property $\bf P'_3$ holds for the functor $F$. \end{proof} \begin{sremark}{\rm The extension to the finite field case is established in Corollary \ref{cor:polynomial_local}. } \end{sremark} \subsection{Non stable $K_1$-functor} Let $\mathfrak{G}$ be a reductive group scheme over our base ring $B$. Let $\mathfrak{P}$ be a strictly proper parabolic subgroup of $\mathfrak{G}$. Let $\mathfrak{P}^{-}$ be an opposite $B$--parabolic subgroup scheme of $\mathfrak{G}$, and denote by $E_\mathfrak{P}(B)$ the subgroup of $\mathfrak{G}(B)$ generated by $R_u(\mathfrak{P})(B)$ and $R_u(\mathfrak{P}^{-})(B)$ (it does not depend on the choice of $\mathfrak{P}^-$ by~\cite[XXVI.1.8]{SGA3}). We consider the Whitehead coset $$ K_1^{\mathfrak{G},\mathfrak{P}}(B)=\mathfrak{G}(B)/ E_\mathfrak{P}(B). $$ As a functor on the category of commutative $B$-algebras, $K_1^{\mathfrak{G},\mathfrak{P}}(-)$ is also called the non-stable (or unstable) $K_1$-functor associated to $\mathfrak{G}$ and $\mathfrak{P}$. Recall that if $B$ is semilocal, then the functor $C\mapsto E_\mathfrak{P}(C)$ on the category of commutative $B$-algebras $C$ does not depend on the choice of a strictly proper parabolic $B$-subgroup $\mathfrak{P}$, see~\cite[XXVI.5]{SGA3} and~\cite[th. 2.1.(1)]{S1}.
In particular, in this case $E_\mathfrak{P}(B)$ is a normal subgroup of $\mathfrak{G}(B)$. For an arbitrary ring $B$, the same holds if $\mathfrak{G}$ satisfies the condition (E) below, see~\cite{PS}. In these two cases we will occasionally write $K_1^\mathfrak{G}(C)$ instead of $K_1^{\mathfrak{G},\mathfrak{P}}(C)$, omitting the specific strictly proper parabolic $B$-subgroup. \medskip \noindent{\it Condition} (E). For any maximal ideal $\goth m$ of $B$, all irreducible components of the relative root system of $\mathfrak{G}_{B_{\goth m}}$ in the sense of~\cite[XXVI.7]{SGA3} are of rank at least 2. \medskip Note that the condition (E) is satisfied if $\mathfrak{G}$ has $B$-rank $\ge 2$, since in this case all $\mathfrak{G}_{B_{\goth m}}$ also have $B_{\goth m}$-rank $\ge 2$. Since the radicals $R_u(\mathfrak{P})$ and $R_u(\mathfrak{P}^{-})$ are successive extensions of vector group schemes~\cite[XXVI.2.1]{SGA3}, Lemma \ref{lem_sorite2}.(1) implies that $E_\mathfrak{P}(B) \subseteq {A^1} \mathfrak{G}(B) \subseteq \mathfrak{G}(B)$. We then get surjective maps \[ K_1^{\mathfrak{G},\mathfrak{P}}(B) \twoheadrightarrow \mathfrak{G}(B)/{A^1} \twoheadrightarrow \mathfrak{G}(B)/R. \] \subsection{Comparison of $K_1^{\mathfrak{G}}$, $\mathbf{A}^1$-equivalence and $R$-equivalence} \begin{slemma}\label{lem_W_trivial2} We consider the following assertions: \smallskip $(i)$ The map $K_1^{\mathfrak{G},\mathfrak{P}}(B) \to K_1^{\mathfrak{G},\mathfrak{P}}(B[u])$ is bijective; \smallskip $(ii)$ $\mathfrak{G}(B[u])= \mathfrak{G}(B) \, E_\mathfrak{P}(B[u])$; \smallskip $(iii)$ The map $K_1^{\mathfrak{G},\mathfrak{P}}(B) \to \mathfrak{G}(B)/{A^1}$ is bijective. \smallskip \noindent Then we have the implications $(i) \Longleftrightarrow (ii) \Longrightarrow (iii)$. Furthermore if $(iii)$ holds, we have that $E_\mathfrak{P}(B)={A^1}\mathfrak{G}(B)$; in particular $E_\mathfrak{P}(B)$ is a normal subgroup of $\mathfrak{G}(B)$ which does not depend on $\mathfrak{P}$.
\end{slemma} \begin{proof} $(i) \Longleftrightarrow (ii)$. The map $K_1^{\mathfrak{G},\mathfrak{P}}(B) \to K_1^{\mathfrak{G},\mathfrak{P}}(B[u])$ is always injective, since it has a left inverse induced by $u\mapsto 0$. Clearly, this map is surjective if and only if we have the decomposition $\mathfrak{G}(B[u])= \mathfrak{G}(B) \, E_\mathfrak{P}(B[u])$. \smallskip \noindent $(ii) \Longrightarrow (iii)$. The map $K_1^{\mathfrak{G},\mathfrak{P}}(B) \to \mathfrak{G}(B)/{A^1}$ is surjective. Let $g_0, g_1 \in \mathfrak{G}(B)$ be elements mapping to the same element of $\mathfrak{G}(B)/{A^1}$. There exists $g(t) \in \mathfrak{G}(B[t])$ such that $g(0)=g_0$ and $g(1)=g_1$. Our assumption implies that $g(t)= g \, h(t)$ with $g \in \mathfrak{G}(B)$ and $h(t) \in E_\mathfrak{P}(B[t])$. It follows that $g_i= g \, h(i)$ for $i=0,1$ with $h(i) \in E_\mathfrak{P}(B)$. We get that $g_0= g \, h(0)= (g \, h(1)) \, (h(1)^{-1} \, h(0)) \in g_1 \, E_\mathfrak{P}(B)$. Thus $g_0, g_1$ have the same image in $K_1^{\mathfrak{G},\mathfrak{P}}(B)$. \end{proof} \begin{sremarks}\label{rem_invariance}{\rm (a) Assume that $\mathfrak{G}$ satisfies condition $(E)$. In this case, homotopy invariance reduces to the case of the ring $B_{\goth m}$ for each maximal ideal $\goth m$ of $B$ according to a generalization of the Suslin local-global principle \cite[lemma 17]{PS}. \smallskip \noindent (b) If $B$ is a regular ring containing a field $k$, and $\mathfrak{G}$ satisfies $(E)$, then we know that $K_1^{\mathfrak{G}}(B) \buildrel\sim\over\lgr K_1^{\mathfrak{G}}(B[u])$ by~\cite[th. 1.1]{St20}. \smallskip \noindent (c) Let us provide a counterexample to $K_1^{\mathfrak{G}}(B) \buildrel\sim\over\lgr K_1^{\mathfrak{G}}(B[u])$ in the non-regular case. Given a field $k$ of characteristic zero, we consider the domain $B=k[x^2, x^3] \subset k[x]$. We claim that $K_1^{\SL_n}(B) \subsetneq K_1^{\SL_n}(B[u])$ for $n\gg 0$, so that $1=K_1^{\SL_n}(B_{\goth m}) \subsetneq K_1^{\SL_n}(B_{\goth m}[u])$ for some maximal ideal $\goth m$ of $B$.
For $n \gg 0$, we have $K_1^{\SL_n}(B)= \SK_1(B)$ and $K_1^{\SL_n}(B[u])= \SK_1(B[u])$. Inspection of the proof of Krusemeyer's computation of $\SK_1(B)$ \cite[prop. 12.1]{Kr} provides functorial maps $\Omega^1_A \to \SK_1(A \otimes_k B)$ for a $k$--algebra $A$. We then get a commutative diagram of maps $$ \xymatrix@C=30pt{ \Omega^1_k \ar[d] \ar[r]^\sim & \SK_1(B) \ar[d] \\ \Omega^1_{k[u]} \ar[d] \ar[r] & \SK_1(B[u]) \ar[d] \\ \Omega^1_{k(u)} \ar[r]^\sim & \SK_1(B_{k(u)}) } $$ where the top and the bottom horizontal maps are isomorphisms \cite[prop. 12.1]{Kr}. Since $\Omega^1_k \subsetneq \Omega^1_{k[u]}$, a diagram chase yields that $\SK_1(B) \subsetneq \SK_1(B[u])$. Since $K_1^{\SL_n}(B_{\goth m})=1$, this example also shows that the condition $(iii)$ of Lemma~\ref{lem_W_trivial2} does not imply $(i)$. \noindent (d) In the case of regular rings, the condition $(iii)$ of Lemma~\ref{lem_W_trivial2} may hold while $(i)$ does not, if $\mathfrak{G}$ does not satisfy (E). Let $k$ be a field. Let $\mathfrak{P}$ be the standard parabolic subgroup of $\SL_2$ consisting of upper triangular matrices. Then one has $\SL_2(k[x])=E_\mathfrak{P}(k[x])$. Consequently, $K_1^{\SL_2,\mathfrak{P}}(k[x])=1$, and hence $\SL_2(k[x])/{A^1}=1$, so $(iii)$ holds. On the other hand, $K_1^{\SL_2,\mathfrak{P}}(k[x,u])\neq 1$~\cite{C}, so $(i)$ does not hold. } \end{sremarks} \begin{slemma}\label{lem_W_trivial3} We consider the following assertions: \smallskip $(i)$ The map $K_1^{\mathfrak{G},\mathfrak{P}}(B) \to K_1^{\mathfrak{G},\mathfrak{P}}(B[u]_\Sigma)$ is bijective; \smallskip $(ii)$ $\mathfrak{G}(B[u]_\Sigma)= \mathfrak{G}(B) \, E_\mathfrak{P}(B[u]_\Sigma)$; \smallskip $(iii)$ The map $K_1^{\mathfrak{G},\mathfrak{P}}(B) \to \mathfrak{G}(B)/R$ is bijective. \smallskip \noindent Then we have the implications $(i) \Longleftrightarrow (ii) \Longrightarrow (iii)$.
Furthermore if $(iii)$ holds, we have that $E_\mathfrak{P}(B)=R\mathfrak{G}(B)$; in particular $E_\mathfrak{P}(B)$ is a normal subgroup of $\mathfrak{G}(B)$ which does not depend on $\mathfrak{P}$. \end{slemma} \begin{proof} The proof is similar to that of Lemma \ref{lem_W_trivial2}. \end{proof} \section{Passage to the field of fractions} \begin{slemma}\label{lem_monic} Let $B$ be a regular ring containing a field, and let $G$ be a reductive group over $B$ having a strictly proper parabolic $B$-subgroup. Let $f\in B[x]$ be a monic polynomial. Then the natural map of \'etale cohomology sets $H^1_{\acute et}(B[x],G)\to H^1_{\acute et}(B[x]_f,G)$ has trivial kernel. \end{slemma} \begin{proof} Clearly, we can assume that $B$ is a domain. Let $K$ be the field of fractions of $B$. By~\cite[Lemma 5.4]{St20}, for any maximal ideal $m$ of $B$ the map $H^1_{\acute et}(B_m[x],G)\to H^1_{\acute et}(K[x],G)$ has trivial kernel. Furthermore, the map $H^1_{\acute et}(K[x],G)\to H^1_{\acute et}(K(x),G)$ has trivial kernel by~\cite[Proposition 2.2]{CTO}. Then for any monic polynomial $f$ the map $H^1_{\acute et}(B_m[x],G)\to H^1_{\acute et}(B_m[x]_{f},G)$ has trivial kernel. Since $B$ is regular, by~\cite[Corollary 3.2]{Thomason} $G$ is $B$-linear. Then the claim holds by~\cite[Lemma 4.2]{St20}. \end{proof} At the opposite extreme we have the following fact. \begin{slemma} \label{lem_dec_aniso} Let $B$ be a Noetherian commutative ring, and let $G$ be a $B$-linear reductive $B$-group. We assume that $G_{B/m}$ is anisotropic for each maximal ideal $m$ of $B$. Let $f\in B[x]$ be a monic polynomial. Then the natural map of \'etale cohomology sets $H^1_{\acute et}(B[x],G)\to H^1_{\acute et}(B[x]_f,G)$ has trivial kernel. \end{slemma} \begin{proof} Assume first that $B$ is semilocal. Let $\xi=[E]\in H^1_{\acute et}(B[x],G)$ be an element of the kernel.
We extend $E$ to a $G$-bundle $\hat E$ on $\mathbb{P}^1_B$ by patching it to the trivial $G$-bundle over $\mathbb{P}^1_B \setminus\{f=0\}$. We denote by $\hat\xi$ its class; since $f$ is monic, we have $\hat\xi|_\infty=*$. Let $m_1,\dots, m_c$ be the maximal ideals of $B$ and put $k_i=B/m_i$. Since $G_{k_i}$ is anisotropic, by~\cite[Th. 3.8 (b)]{Gi02} the class $\hat\xi_{k_i}$ is trivial. Next we apply \cite[lemma 5.2.1]{Ces-GS} and get that $\hat\xi$ belongs to the image of $H^1_{\acute et}(B,G) \to H^1_{\acute et}(\mathbb{P}^1_B,G)$. Since $\hat\xi|_\infty=*$, we conclude that $\hat\xi=*$. Thus $E$ is a trivial $G$--torsor over $B[x]$. If $B$ is not necessarily semilocal, the claim reduces to the maximal localizations of $B$ by applying the local-global principle~\cite[Lemma 4.2]{St20}. \end{proof} \begin{sremarks}{\rm \noindent (a) The rigidity property for $\mathbb{P}^1_B$-torsors under reductive groups was proved in~\cite[Th. 1]{R78} and~\cite[Prop. 9.6]{PaStV} under the assumption that $B$ is semilocal and contains a field (i.e. is equicharacteristic). Tsybyshev~\cite[Theorem 1]{Tsy} was able to prove it assuming only that $B$ is reduced and $\mathop{\rm Pic}\nolimits(B)=0$. \v{C}esnavi\v{c}ius~\cite{Ces-GS} observed that one can remove the condition that $B$ contains a field by using Alper's theorem stating that $\GL_N/G$ is affine for any $B$~\cite[cor. 9.7.7]{A}. The idea to use~\cite[Th. 3.8 (b)]{Gi02} for anisotropic groups appeared in~\cite[p. 178]{FP} and in~\cite[th. 1 and remark 2.1.(iii) on the anisotropic case]{F}. Fedorov also introduced the use of affine Grassmannians to treat the case of not necessarily semilocal $B$ and anisotropic $G$~\cite[Theorem 5]{F}. \smallskip \noindent (b) Let $G_0$ be the underlying Chevalley $B$--group scheme of $G$. The condition of linearity on $G$ is satisfied if the $\text{\rm{Out}}(G_0)_B$--torsor $\mathrm{Isomext}(G_{0}, G)$ is isotrivial, see \cite[prop.
3.2]{M2}; this reference then provides a representation such that $\GL_n/G$ is affine, so there is no need to appeal to Alper's result in this case. This includes the semisimple case and the case when $B$ is a normal ring, due to Thomason \cite[Corollary 3.2]{Thomason}. \smallskip \noindent (c) The claim of Lemma~\ref{lem_dec_aniso} does not hold if $G$ is anisotropic over $B$ and isotropic over $B/m$, even if $B$ is regular local and $G$ is simply connected~\cite[Corollary 2.3]{F}. } \end{sremarks} \begin{stheorem}\label{thm:surj} Let $B$ be a regular semilocal domain that contains a field $k$, and let $K$ be the fraction field of $B$. Let $\mathfrak{G}$ be a reductive $B$-group scheme. (1) Assume that either $\mathfrak{G}$ contains a strictly proper parabolic $B$-subgroup, or $\mathfrak{G}$ is anisotropic over $B/m$ for all maximal ideals $m$ of $B$. Then the map $$ \mathfrak{G}(B)/R\to \mathfrak{G}(K)/R $$ is surjective. (2) Assume that $\mathfrak{G}$ contains a strictly proper parabolic $B$-subgroup. Then the map $$\mathfrak{G}(B)/{A^1} \to \mathfrak{G}(K)/{A^1} $$ is injective. \end{stheorem} \begin{proof} We can assume without loss of generality that $k$ is a finite field or $\mathbb{Q}$. Then the embedding $k\to B$ is geometrically regular, since $k$ is perfect~\cite[(28.M), (28.N)]{Mats}. Hence by Popescu's theorem~\cite{Po90,Swan} $B$ is a filtered direct limit of smooth $k$-algebras. Since the group scheme $\mathfrak{G}$ is finitely presented over $B$, and the functors $\mathfrak{G}(-)/R$ and $\mathfrak{G}(-)/{A^1}$ commute with filtered direct limits, we can assume that $\mathfrak{G}$ is defined over a smooth $k$-domain $C$, and that $B=C_S$ is the localization of $C$ at the multiplicative set $S$ complementary to the union of a finite set of prime ideals $p_i$ of $C$.
Moreover, since parabolic subgroups of $\mathfrak{G}$ are also finitely presented, depending on the assumption on $\mathfrak{G}$ we can ensure that $\mathfrak{G}$ contains a strictly proper parabolic subgroup over $C$, or that $\mathfrak{G}$ is anisotropic over $C_{p_i}/p_iC_{p_i}$ for all $p_i$'s. (1) We need to show that $\mathfrak{G}(B)/R\to \mathfrak{G}(K)/R$ is surjective, where $K$ is the common fraction field of $B$ and $C$. Clearly, it is enough to show the same for the localization of $C$ at the complement of the union of maximal ideals $m_i\supseteq p_i$ (note that if $\mathfrak{G}$ is anisotropic over $C_{p_i}/p_iC_{p_i}$, then it is automatically anisotropic over $C/m_i$). Hence we can assume that $B$ is a localization of $C$ at a union of a finite set of maximal ideals. On top of that, in order to show that $\mathfrak{G}(B)/R\to \mathfrak{G}(K)/R$ is surjective, it is enough to show that for any $f\in\bigcap_i m_i$ and any $g\in \mathfrak{G}(C_f)$ the image of $g$ in $\mathfrak{G}(K)$ belongs to $\mathfrak{G}(B)\cdot R\mathfrak{G}(K)$. We apply Panin's theorem \cite[th. 2.5]{Pa}. This provides a monic polynomial $h \in B[t]$, an inclusion of rings $B[t] \subset A$, a homomorphism $\phi: A \to B$ and a commutative diagram \begin{equation}\label{eq:panin-diag} \xymatrix@C=30pt{ B[t] \ar[d] \ar[r] & A \ar[d] & \ar[l]_{u} C \ar[d]\\ B[t]_h \ar[r] & A_h & C_f \ar[l]_{v}. } \end{equation} such that \smallskip (i) the left hand square is an elementary distinguished Nisnevich square in the category of smooth $B$-schemes in the sense of \cite[3.1.3]{MV}; \smallskip (ii) the composite $C \xrightarrow{u} A \xrightarrow{\phi} B$ is the canonical localization homomorphism; \smallskip (iii) the map $B[t] \to A \xrightarrow{\phi} B$ is the evaluation at $0$; \smallskip (iv) $h(1) \in B^\times$; \smallskip (v) there is an $A$--group scheme isomorphism $\Phi: \mathfrak{G}_B \times_B A \buildrel\sim\over\lgr \mathfrak{G} \times^u_C A$.
\medskip By inspection of the construction, $A$ is finite \'etale over $B[t]$ and $h(t) = N_{A/B[t]}(u(f))= u(f) a$ with $a \in A$. Property (4) of \cite[theorem 3.4]{PaStV} states that the map $\phi: A \to B$ extends to a map $A_a \to B$, so that $\phi(a) \in B^\times$. We compute \begin{eqnarray} \nonumber h(0) &=& \phi(h) \qquad [\mbox{property \enskip} (iii)]\\ \nonumber &=& \phi(u(f)) \, \phi(a) \\ \nonumber &=& f \, \phi(a) \qquad [\mbox{property \enskip} (ii)]; \end{eqnarray} it follows that $h(0)$ is a non-zero element of $B$. In particular $\phi$ extends to a map $\phi_h: A_h \to B_{h(0)}$. \noindent Since $(B[t]\to A,h)$ is a glueing pair, we have an exact sequence of pointed sets $$ 1 \to \mathfrak{G}(B[t]_h) \backslash \mathfrak{G}(A_h) / \mathfrak{G}(A) \to H^1(B[t] ,\mathfrak{G}) \to H^1(B[t]_h ,\mathfrak{G}) \times H^1(A,\mathfrak{G}). $$ Our assumptions on $\mathfrak{G}$ imply that the map $H^1(B[t],\mathfrak{G}) \to H^1(B[t]_h,\mathfrak{G})$ has trivial kernel. Indeed, if $\mathfrak{G}$ contains a strictly proper parabolic subgroup over $B$, this follows from Lemma~\ref{lem_monic}. If $\mathfrak{G}$ is anisotropic modulo every maximal ideal of $B$, then the same follows from Lemma~\ref{lem_dec_aniso}, taking into account that $B$ is regular and hence by~\cite[Corollary 3.2]{Thomason} $\mathfrak{G}$ is $B$-linear. Therefore we have $\mathfrak{G}(A_h)=\mathfrak{G}(B[t]_h)\mathfrak{G}(A)$. Set $\widetilde g=\Phi^{-1}(v_*(g)) \in \mathfrak{G}(A_h)$. Then $\widetilde g=b\cdot c$, where $b\in \mathfrak{G}(B[t]_h)$ and $c\in\mathfrak{G}(A)$. Note that by (iii) we have $\phi(h)=h(0)$. We have $\phi_h(\widetilde g)=\phi_h(v_*(g))=g\in \mathfrak{G}(B_{h(0)})$ by (ii). It follows that $g=\phi_h(b)\cdot\phi_h(c)$. Clearly we have $\phi_h(c)\in\mathfrak{G}(B)\subseteq \mathfrak{G}(B_{h(0)})$. We claim that $\phi_h(b)\in\mathfrak{G}(B)\cdot R\mathfrak{G}(B_{h(0)})$.
Indeed, we have $\phi_h(b)=b|_{t=0}$ by (iii), and since $h(1)\in B^\times$, we have $b|_{t=1}\in\mathfrak{G}(B)$. Then the image of $b$ in $\mathfrak{G}\bigl(B_{h(0)}[t]_h \bigr)$ provides an $R$-equivalence between $\phi_h(b)$ and an element of $\mathfrak{G}(B)$. Summing up, the image of $g\in\mathfrak{G}(C_f)$ under the composition $\mathfrak{G}(C_f)\xrightarrow{v}\mathfrak{G}(A_h)\xrightarrow{\phi_h}\mathfrak{G}(B_{h(0)})$ belongs to $\mathfrak{G}(B)\cdot R\mathfrak{G}(B_{h(0)})$. It follows that the image of $g$ in $\mathfrak{G}(K)$ belongs to $\mathfrak{G}(B)\cdot R\mathfrak{G}(K)$. (2) Let $[g] \in \ker\bigl( \mathfrak{G}(B)/{A^1} \to \mathfrak{G}(K)/{A^1} \bigr)$. Up to shrinking $X=\mathop{\rm Spec}\nolimits(C)$, we can assume that $g \in \mathfrak{G}(C)$. Then there exists $f \in C$ such that $[g] \in \ker\bigl( \mathfrak{G}(C)/{A^1} \to \mathfrak{G}(C_f)/{A^1} \bigr)$. As in (1), we apply Panin's theorem \cite[th. 2.5]{Pa} and obtain a diagram~\eqref{eq:panin-diag} satisfying the properties (i)--(v). But this time we set $\widetilde g=\Phi^{-1}(u_*(g)) \in \mathfrak{G}(A)$ and we have $[\widetilde g] \in \ker\bigl( \mathfrak{G}(A)/{A^1} \to \mathfrak{G}(A_h)/{A^1} \bigr)$. According to Lemma \ref{lem_monic}, the map $H^1(B[t][x],\mathfrak{G}) \to H^1(B[t]_h[x],\mathfrak{G})$ has trivial kernel, so that Lemma \ref{lem_KV}.(1) shows that the map \begin{equation}\label{eq:BB-square27} \ker\bigl( \mathfrak{G}(B[t])/{A^1} \to \mathfrak{G}(B[t]_h)/{A^1} \bigr) \, \to \, \ker\bigl( \mathfrak{G}(A)/{A^1} \to \mathfrak{G}(A_h)/{A^1} \bigr) \end{equation} is surjective. Since $\mathfrak{G}(B)/{A^1}= \mathfrak{G}(B[t])/{A^1}$ and $h(1)\in B^\times$, we deduce that $$ \ker\bigl( \mathfrak{G}(A)/{A^1} \to \mathfrak{G}(A_h)/{A^1} \bigr)=1. $$ We have $[\widetilde g] =1 \in \mathfrak{G}(A)/{A^1}$ and get $[u_*(g)]=1 \in \mathfrak{G}(A)/{A^1}$. By applying $\phi_*$, the property (ii) yields $[g]=1 \in \mathfrak{G}(B)/{A^1}$.
\end{proof} \begin{scorollary}\label{cor:polynomial_local} Let $k$ be a field and let $G$ be an affine $k$--algebraic group. Let $A$ be the local ring at a prime ideal of a polynomial algebra $k[t_1, \dots, t_d]$. Then the homomorphism $$ G( A)/{A^1} \to G\bigl( k(t_1, \dots, t_d) \bigr)/{A^1} $$ is injective. \end{scorollary} \begin{proof} If $k$ is infinite, this is the claim of Proposition~\ref{prop_moser}. Assume that $k$ is finite. Let $G_{red}$ denote the reduced affine algebraic $k$-scheme corresponding to $G$. Since $k$ is perfect, $G_{red}$ is a smooth algebraic $k$-subgroup of $G$~\cite[Prop. 1.26, Cor. 1.39]{Milne}. Since $A$ is reduced, $G(A)=G_{red}(A)$ and $G(A[u])=G_{red}(A[u])$; therefore $G(A)/{A^1}=G_{red}(A)/{A^1}$, and hence we can assume that $G$ is smooth from the start. Let $G^\circ$ be the connected component of the identity $e\in G(k)$. Let $\pi_0(G)$ be the finite \'etale $k$-scheme of connected components of $G$. Then $G^\circ$ is a smooth geometrically connected algebraic $k$-subgroup of $G$, the fiber of the natural map $G\to\pi_0(G)$ at the image of $e$~\cite[Prop. 1.31, 1.34]{Milne}. Since $\pi_0(G)$ is $k$-finite, we have $\pi_0(G)(A[u])=\pi_0(G)(A)$, and hence $\pi_0(G)(A)/{A^1}=\pi_0(G)(A)$ injects into $\pi_0(G)(K)/{A^1}=\pi_0(G)(K)$, where $K=k(t_1,\ldots,t_d)$. Therefore, in order to prove the claim for $G$, it is enough to prove it for $G^\circ$. Hence we can assume that $G$ is smooth and connected. Let $U$ be the unipotent radical of $G$ over $k$, i.e. the largest smooth connected unipotent normal $k$-subgroup of $G$. Since $k$ is perfect, the group $U$ is $k$-split, i.e. it admits a subnormal series each of whose successive quotients is isomorphic to $\mathbb{G}_a$~\cite[14.63]{Milne}. Therefore $U(A)/{A^1}=1$ and $H^1(R,U)=1$ for every $k$-algebra $R$. Also, since $k$ is perfect, $G/U$ is a reductive algebraic $k$-group~\cite[Prop. 19.11]{Milne}.
By Lang's theorem, $G/U$ is quasi-split, and therefore either $G/U$ is a $k$-torus, or it contains a strictly proper parabolic $k$-subgroup and then satisfies Theorem~\ref{thm:surj} (2). In both cases the map $(G/U)(A)/{A^1}\to (G/U)(K)/{A^1}$ is injective. Now let $g\in G(A)$ map into $\r0G(K)\subseteq G(K)$. By the previous argument, there is $h(u)\in (G/U)(A[u])$ such that $h(0)=1$ and $h(1)$ is the image of $g$ in $(G/U)(A)$. Since $H^1(A,U)=H^1(A[u],U)=1$, there is $g(u)\in G(A[u])$ lifting $h(u)$, so that $g(0)\in U(A)$ and $g(1)g^{-1}\in U(A)$. Since $U(A)\subseteq \r0G(A)$, we conclude that $g\in\r0G(A)$, as required. \end{proof} \section{The case of simply connected semisimple isotropic groups} \subsection{Coincidence of equivalence relations} We address the following question. \begin{squestion} \label{question_iso} Assume that $B$ is regular semilocal and that $\mathfrak{G}$ is semisimple simply connected and strictly isotropic, i.e. equipped with a strictly proper parabolic $B$-subgroup.\\ Is the map $K_1^{\mathfrak{G},\mathfrak{P}}(B) \, \to \, \mathfrak{G}(B)/R$ an isomorphism?\\ \noindent Is the map $\mathfrak{G}(B)/{A^1} \, \to \, \mathfrak{G}(B)/R$ an isomorphism? \end{squestion} The answer is known to be positive in both cases if $B$ is a field. This is implied by the Margaux--Soul\'e isomorphism \cite[Th. 3.10]{M1} combined with~\cite[Th. 7.2]{Gi2}. \begin{stheorem} \label{thm_KV_R} Assume that $B$ is a semilocal regular domain containing a field $k$ and denote by $K$ its fraction field. Let $\mathfrak{G}$ be a semisimple simply connected $B$-group having a strictly proper parabolic $B$-subgroup. Then we have a commutative square of isomorphisms $$ \xymatrix@C=30pt{ \mathfrak{G}(B)/{A^1} \ar[d]_\wr \ar[r]^{\ \sim} & \mathfrak{G}(B)/R \ar[d]_\wr \\ \mathfrak{G}(K)/{A^1} \ar[r]^{\ \sim} & \mathfrak{G}(K)/R } $$ \end{stheorem} \begin{proof} The bottom horizontal arrow of the square is an isomorphism by the Margaux--Soul\'e theorem~\cite[Th. 3.10]{M1} combined with~\cite[Th.
7.2]{Gi2}. On the other hand, the left vertical map is injective by Theorem \ref{thm:surj} (2). Then the top horizontal arrow is also injective. Since it is surjective by definition, it is an isomorphism. The right vertical arrow is surjective by Theorem \ref{thm:surj} (1). Hence the vertical arrows are also isomorphisms. \end{proof} \begin{sremark}{\rm The above result does not extend to anisotropic groups. For example, let $k$ be an infinite field and let $G$ be a wound linear algebraic group, i.e. one that does not contain any subgroup isomorphic to $\mathbb{G}_a$ or $\mathbb{G}_m$. Then by~\cite[Corollary 3.8]{GF} we have $G(k[x])=G(k)$ and, consequently, $G(k)/{A^1}=G(k)$. This applies in particular to the case of an anisotropic reductive $k$--group $G$. On the other hand, the $R$-equivalence class group of $G$ may even be trivial, e.g. if $G$ is a semisimple anisotropic group of rank $\le 2$. Indeed, in this case every element of $G(k)$ is $R$-equivalent to a semisimple regular element, and all maximal tori of $G$ are of rank $\le 2$ and hence rational. } \end{sremark} In the same vein, we can establish the following fact. \begin{scorollary} Let $k$ be a field and let $G$ be a semisimple simply connected $k$--group strictly of $k$-rank $\geq 1$. Let $A$ be the localization of $k[x_1,\dots, x_d]$ at a prime ideal. Then we have a commutative square of isomorphisms $$ \xymatrix@C=30pt{ G(k)/{A^1} \ar[d]_\wr \ar[r]^\sim & G(k)/R \ar[d]_\wr \\ G(A)/{A^1} \ar[r]^{\sim} & G(A)/R. } $$ \end{scorollary} \begin{proof} By~\cite[th. 5.8]{Gi2} there is an isomorphism $G(k)/{A^1} \buildrel\sim\over\lgr G\bigl(k(x_1,\dots, x_d) \bigr)/{A^1}$. Then the claim follows from Theorem~\ref{thm_KV_R}. \end{proof} \begin{stheorem} \label{thm_main} Assume that $B$ is a semilocal regular domain containing a field $k$ and that $\mathfrak{G}$ is a semisimple simply connected $B$-group strictly of $B$--rank $\geq 2$. Then the map $K_1^\mathfrak{G}(B) \to \mathfrak{G}(B)/R$ is an isomorphism.
\end{stheorem} \begin{proof} Let $K$ be the fraction field of $B$. We consider the commutative diagram $$ \xymatrix@C=30pt{ K_1^{\mathfrak{G}}(B) \ar[d] \ar@{->>}[r] & \mathfrak{G}(B)/R \ar[d] \\ K_1^{\mathfrak{G}}(K) \ar[r]^{\sim} & \mathfrak{G}(K)/R } $$ where the bottom isomorphism is \cite[th. 7.2]{Gi2}. On the other hand, the left vertical map is injective \cite[th. 1.2]{St20}. By a diagram chase, the top horizontal map is an isomorphism. \end{proof} \subsection{The retract rational case} We now consider the vanishing of Whitehead cosets. \begin{slemma}\label{lem_W_trivial} We assume that the base ring $B$ is a semilocal domain. Let $\mathfrak{G}$ be a reductive $B$--group scheme having a strictly proper $B$--parabolic subgroup $\mathfrak{P}$. We consider the following assertions: \smallskip $(i)$ $K_1^{\mathfrak{G},\mathfrak{P}}(F)=1$ for every $B$-field $F$; \smallskip $(ii)$ $\mathfrak{G}$ satisfies the lifting property; \smallskip $(iii)$ $(\mathfrak{G},1)$ is a retract rational $B$--scheme. \smallskip \noindent Then the following implications $(i) \Longrightarrow (ii) \Longrightarrow (iii)$ hold. \end{slemma} \begin{proof} $(i) \Longrightarrow (ii)$. Let $C$ be a semilocal $B$-ring with residue fields $F_1,\dots, F_s$. We have to show that the map $\mathfrak{G}(C) \to \prod_{i=1,\dots,s} \mathfrak{G}(F_i)$ is onto. We are given an element $(g_1, \dots, g_s) \in \prod_{i=1,\dots,s} \mathfrak{G}(F_i)$. Our assumption implies that there exists a positive integer $d$ such that $$ g_i = u_{i,1} \, v_{i,1} \, u_{i,2} \, v_{i,2} \dots u_{i,d} \, v_{i,d} $$ with $u_{i,j} \in R_u(\mathfrak{P})( F_i)$ (resp.\, $v_{i,j} \in R_u(\mathfrak{P}^{-})( F_i)$) for $i=1,\dots, s$ and $j=1,\dots ,d$. Since $R_u(\mathfrak{P})(C) \to \prod_{i=1,\dots,s} R_u(\mathfrak{P})( F_i)$ is onto (and similarly for $R_u(\mathfrak{P}^{-})$), we can lift each $(u_{i,j})_{i=1,\dots, s}$ to some $u_j \in R_u(\mathfrak{P})(C)$ (resp.\, $(v_{i,j})_{i=1,\dots, s}$ to $v_j \in R_u(\mathfrak{P}^{-})(C)$).
Thus the product $u_{1} \, v_{1} \, u_{2} \, v_{2} \dots u_{d} \, v_{d}$ lifts the $g_i$'s. \smallskip \noindent $(ii) \Longrightarrow (iii)$. This follows from Proposition \ref{prop_retract}. \end{proof} \begin{sproposition} \label{prop_W_trivial} Assume that $B$ is a semilocal domain and that $\mathfrak{G}$ is a semisimple simply connected $B$-group having a strictly proper parabolic $B$-subgroup. Let $K$ be the fraction field of $B$. Then the following assertions are equivalent: \smallskip $(i)$ $\mathfrak{G}$ satisfies the lifting property; \smallskip $(ii)$ $(\mathfrak{G},1)$ is a retract rational $B$--scheme; \smallskip $(iii)$ $\mathfrak{G}$ is $R$--trivial on semilocal rings, that is, $\mathfrak{G}(C)/R=1$ for each semilocal $B$-ring $C$; \smallskip $(iv)$ $\mathfrak{G}(F)/R=1$ for each $B$-field $F$. \end{sproposition} \begin{proof} Let $\mathfrak{P}$ be a strictly proper parabolic subgroup scheme of $\mathfrak{G}$. \smallskip \noindent $(i) \Longrightarrow (ii)$. We assume that $\mathfrak{G}$ satisfies the lifting property. Then Proposition \ref{prop_retract}, $(ii) \Longrightarrow (i)$, shows that $\mathfrak{G}$ is retract rational over $B$. \smallskip \noindent $(ii) \Longrightarrow (iii)$. If all residue fields are infinite, this is Proposition \ref{prop_vanish}. For the general case, it is enough to show that $\mathfrak{G}(B)/R=1$. Let $\kappa_1, \dots, \kappa_c$ denote the residue fields of $B$; we may assume that $\kappa_1, \dots, \kappa_b$ are finite and that $\kappa_{b+1}, \dots , \kappa_c$ are infinite. Let $(\mathfrak{U}, 1)$ be an open subset of $(\mathfrak{G},1)$ which is a $B$--retract of some open of $(\mathbf{A}^N_B,0)$. We know that $E_\mathfrak{P}(\kappa_i)= \mathfrak{G}(\kappa_i)$ for $i=1,\dots, b$ \cite[1.1.2]{T}. We consider the open $B$--subscheme $\goth V= \mathfrak{U} \, E_\mathfrak{P}(B)$ of $\mathfrak{G}$.
Since $E_\mathfrak{P}(\kappa_i)$ is dense in $\mathfrak{G}_{\kappa_i}$ for $i=b+1, \dots,c$, we have $\goth V_{\kappa_i}= \mathfrak{G}_{\kappa_i}$ for $i=b+1, \dots, c$. Since the map $E_\mathfrak{P}(B) \to \prod_{i=1,\dots, b} E_\mathfrak{P}(\kappa_i)$ is onto, we have $\goth V(B)= \mathfrak{G}(B)$. Lemma \ref{lem_sorite2}.(1) shows that $\mathfrak{U}(B)/R=1$ so that $\goth V(B)/R=1$. Thus $\mathfrak{G}(B)/R=1$. \smallskip \noindent $(iii) \Longrightarrow (iv)$. Obvious. \smallskip \noindent $(iv) \Longrightarrow (i)$. By $(iv)$ we have $\mathfrak{G}(F)/R=1$ for any $B$-field $F$, and hence $K_1^{\mathfrak{G},\mathfrak{P}}(F)=1$ for every $B$-field $F$ according to the Margaux--Soul\'e isomorphism \cite[Th. 3.10]{M1}. Lemma \ref{lem_W_trivial}, $(i) \Longrightarrow (ii)$, implies that $\mathfrak{G}$ satisfies the lifting property for any semilocal $B$--algebra $C$. \end{proof} This can be refined in the regular case. \begin{stheorem} \label{thm_vanish} Assume that $B$ is a semilocal regular domain containing a field $k$ and that $\mathfrak{G}$ is a semisimple simply connected $B$-group having a strictly proper parabolic $B$-subgroup. Let $K$ be the fraction field of $B$. Then the following assertions are equivalent: \smallskip $(i)$ $\mathfrak{G}$ satisfies the lifting property; \smallskip $(i')$ $\mathfrak{G}$ satisfies the lifting property for each $B$-ring $C$ which is a semilocal regular domain and such that $B$ embeds in $C$; \smallskip $(ii)$ $(\mathfrak{G},1)$ is a retract rational $B$--scheme; \smallskip $(iii)$ $\mathfrak{G}$ is $R$--trivial, that is, $\mathfrak{G}(C)/R=1$ for each semilocal $B$-ring $C$; \smallskip $(iii')$ $\mathfrak{G}(C)/{A^1}=1$ for each $B$-ring $C$ which is a semilocal regular domain; \smallskip $(iv)$ $\mathfrak{G}(F)/R=1$ for each $B$-field $F$; \smallskip $(v)$ $\mathfrak{G}_K$ is a retract rational $K$-variety.
\smallskip If, moreover, $\mathfrak{G}$ is strictly of $B$-rank $\ge 2$, then the above statements are also equivalent to the following: $(iii'')$ $K_1^{\mathfrak{G}}(C)=1$ for each $B$-ring $C$ which is a semilocal regular domain. \end{stheorem} \begin{proof} Let $\mathfrak{P}$ be a strictly proper parabolic $B$--subgroup scheme of $\mathfrak{G}$. We detail only the additional facts from Proposition \ref{prop_W_trivial} which provides already the equivalences $(i) \Longleftrightarrow (ii) \Longleftrightarrow (iii) \Longleftrightarrow (iv)$. \smallskip \noindent $(i) \Longrightarrow (i')$. Obvious. \smallskip \noindent $(i') \Longrightarrow (ii)$. In the proof of Proposition \ref{prop_retract}, $(ii) \Longrightarrow (i)$, we apply the lifting to a semilocalization of $B[t_1, \dots, t_n]$, which is a semilocal regular domain containing $B$. So the proof of Proposition \ref{prop_W_trivial}, $(i) \Longleftrightarrow (iii)$, works, so that $(\mathfrak{G}, 1)$ is retract $B$--rational. \smallskip \noindent $(iii) \Longrightarrow (iii')$. By~\cite[Th. 3.10]{M1} combined with~\cite[Th. 7.2]{Gi2} we have $\mathfrak{G}(F)/{A^1}=\mathfrak{G}(F)/R$ for each $B$--field $F$. Then the claim follows by Theorem~\ref{thm_KV_R}. \smallskip \noindent $(iii') \Longrightarrow (iv)$. Obvious. \smallskip \noindent $(iv) \Longrightarrow (v)$. The assumption implies that the semisimple simply connected strictly isotropic $K$--group $G= \mathfrak{G}_K$ satisfies $G(E)/R=1$ for all $K$--fields $E$. According to \cite[cor. 5.10]{Gi2}, $G$ is a retract $K$--rational variety. \smallskip \noindent $(v)\Longrightarrow (i')$. Let $C$ be a semilocal regular domain which contains $B$. It is clear from the proof of the implication $(i) \Longrightarrow (ii)$ of Lemma~\ref{lem_W_trivial} that it is enough to show that $K_1^\mathfrak{G}(F)=1$ for every residue field $F$ of $C$. Let $\hat C$ be the completion of the localization of $C$ at the prime ideal corresponding to $F$.
Then $\hat C$ is a regular local ring, and the fraction field $\hat K$ of $\hat C$ is an extension of $K$. Since $\mathfrak{G}_K$ is retract rational, we have $\mathfrak{G}(\hat K)/R=1$. Then $\mathfrak{G}(\hat C)/R=1$ by Theorem~\ref{thm_KV_R}. Since $\mathfrak{G}$ is affine and smooth, the map $\mathfrak{G}(\hat C)\to\mathfrak{G}(F)$ is surjective~\cite[th. I.8]{Gruson}. Hence $\mathfrak{G}(\hat C)/R\to \mathfrak{G}(F)/R$ is surjective and $\mathfrak{G}(F)/R=1$. According to \cite[Th. 7.2]{Gi2}, we have $K_1^\mathfrak{G}(F)=\mathfrak{G}(F)/R$. \smallskip We assume now that $\mathfrak{G}$ is strictly of $B$-rank $\ge 2$. \noindent $(iii)\Longrightarrow (iii'')$. This follows from Theorem~\ref{thm_main}. \smallskip \noindent $(iii'')\Longrightarrow (iii')$. Obvious. \end{proof} Since a positive answer to the Kneser-Tits problem over fields is known in a number of cases, we get the following concrete result. \begin{scorollary} \label{cor_main} Assume that $B$ is a connected semilocal ring containing a field $k$. We assume that $\mathfrak{G}$ is a semisimple simply connected isotropic $B$-group and that $\mathfrak{G}_K$ is absolutely almost $K$--simple. Then $\mathfrak{G}(B)/{A^1}=1$ in the following cases: \begin{enumerate} \item $\mathfrak{G}$ is quasi-split; \item the components of the anisotropic kernel of $\mathfrak{G}$ are of rank $\leq 2$; \item $\mathfrak{G}= \SL_{m}(A)$ where $m \geq 2$ and $A$ is an Azumaya $B$--algebra of squarefree index; \item $\mathfrak{G}$ is of type $B_n$ or $C_n$; \item $\mathfrak{G}=\Spin(q)$ for a regular quadratic form $q$ which is even dimensional (and isotropic); \item $\mathfrak{G}_K=\Spin(A,h)$ where $A$ is an Azumaya $B$--algebra of degree $2$ or $4$ equipped with an orthogonal involution of the first kind and $h$ is an isotropic regular hermitian form.
\item $\mathfrak{G}$ is of type $^{3,6}D_4$ or $^1E_6$; \item $\mathfrak{G}$ is of type $^2E_6$ with one of the following Tits indices $$ \begin{array}{lll} \mbox{\rm a)} \quad \begin{picture}(80,30) \put(00,02){\line(1,0){25}} \put(65,02){\oval(80,10)[l]} \put(0,02){\circle*{3}} \put(25,02){\circle*{3}} \put(65,-3){\circle*{3}} \put(65,7){\circle*{3}} \put(45,7){\circle*{3}} \put(45,-3){\circle*{3}} \put(0,02){\circle{10}} \put(25,02){\circle{10}} \put(-5,12){$\alpha_2$} \put(20,12){$\alpha_4$} \put(40,12){$\alpha_3$} \put(62,12){$\alpha_1$} \put(62,-13){$\alpha_6$} \put(40,-13){$\alpha_5$} \end{picture} & \mbox{\qquad \rm b)} \quad \begin{picture}(80,30) \put(00,02){\line(1,0){25}} \put(65,02){\oval(80,10)[l]} \put(0,02){\circle*{3}} \put(25,02){\circle*{3}} \put(65,-3){\circle*{3}} \put(65,7){\circle*{3}} \put(45,7){\circle*{3}} \put(45,-3){\circle*{3}} \put(65,02){\oval(10,20)} \put(0,02){\circle{10}} \put(-5,11){$\alpha_2$} \put(20,11){$\alpha_4$} \put(40,11){$\alpha_3$} \put(62,17){$\alpha_1$} \put(62,-15){$\alpha_6$} \put(40,-13){$\alpha_5$} \end{picture} \mbox{\qquad \rm c)} \quad \begin{picture}(80,30) \put(00,02){\line(1,0){25}} \put(65,02){\oval(80,10)[l]} \put(0,02){\circle*{3}} \put(25,02){\circle*{3}} \put(65,-3){\circle*{3}} \put(65,7){\circle*{3}} \put(45,7){\circle*{3}} \put(45,-3){\circle*{3}} \put(65,02){\oval(10,20)} \put(-5,11){$\alpha_2$} \put(20,11){$\alpha_4$} \put(40,11){$\alpha_3$} \put(62,17){$\alpha_1$} \put(62,-15){$\alpha_6$} \put(40,-13){$\alpha_5$} \end{picture} \end{array} $$ \vskip3mm \noindent where for the last case we assume that $6 \in k^\times$.
\bigskip \item $\mathfrak{G}$ is of type $E_7$ with one of the following Tits indices $$ \begin{array}{ll} \mbox{\rm a)} \qquad \begin{picture}(105,45) \put(00,02){\line(1,0){100}} \put(60,02){\line(0,1){20}} \put(00,02){\circle*{3}} \put(20,02){\circle*{3}} \put(40,02){\circle*{3}} \put(60,02){\circle*{3}} \put(80,02){\circle*{3}} \put(100,02){\circle*{3}} \put(60,22){\circle*{3}} \put(20,02){\circle{10}} \put(60,02){\circle{10}} \put(80,02){\circle{10}} \put(100,02){\circle{10}} \put(-5,-11){$\alpha_7$} \put(15,-11){$\alpha_6$} \put(35,-11){$\alpha_5$} \put(55,-11){$\alpha_4$} \put(75,-11){$\alpha_3$} \put(95,-11){$\alpha_1$} \put(55,30){$\alpha_2$} \end{picture} & \mbox{\qquad \rm b)} \qquad \begin{picture}(125,45) \put(00,02){\line(1,0){100}} \put(60,02){\line(0,1){20}} \put(00,02){\circle*{3}} \put(20,02){\circle*{3}} \put(40,02){\circle*{3}} \put(60,02){\circle*{3}} \put(80,02){\circle*{3}} \put(100,02){\circle*{3}} \put(60,22){\circle*{3}} \put(00,02){\circle{10}} \put(20,02){\circle{10}} \put(100,02){\circle{10}} \put(-5,-11){$\alpha_7$} \put(15,-11){$\alpha_6$} \put(35,-11){$\alpha_5$} \put(55,-11){$\alpha_4$} \put(75,-11){$\alpha_3$} \put(95,-11){$\alpha_1$} \put(55,30){$\alpha_2$} \end{picture} \end{array} $$ $$ \begin{array}{l} \mbox{\qquad \rm c)} \qquad \begin{picture}(125,45) \put(00,02){\line(1,0){100}} \put(60,02){\line(0,1){20}} \put(00,02){\circle*{3}} \put(20,02){\circle*{3}} \put(40,02){\circle*{3}} \put(60,02){\circle*{3}} \put(80,02){\circle*{3}} \put(100,02){\circle*{3}} \put(60,22){\circle*{3}} \put(00,02){\circle{10}} \put(20,02){\circle{10}} \put(-5,-11){$\alpha_7$} \put(15,-11){$\alpha_6$} \put(35,-11){$\alpha_5$} \put(55,-11){$\alpha_4$} \put(75,-11){$\alpha_3$} \put(95,-11){$\alpha_1$} \put(55,30){$\alpha_2$} \end{picture} \end{array} $$ \medskip \item $\mathfrak{G}$ is of type $E_8$ with the following Tits indices \vskip-10mm $$ \begin{array}{l} \begin{picture}(125,55) \put(00,02){\line(1,0){120}} \put(80,02){\line(0,1){20}} 
\put(00,02){\circle*{3}} \put(20,02){\circle*{3}} \put(40,02){\circle*{3}} \put(60,02){\circle*{3}} \put(80,02){\circle*{3}} \put(100,02){\circle*{3}} \put(120,02){\circle*{3}} \put(80,22){\circle*{3}} \put(00,02){\circle{10}} \put(20,02){\circle{10}} \put(40,02){\circle{10}} \put(120,02){\circle{10}} \put(-5,-11){$\alpha_8$} \put(15,-11){$\alpha_7$} \put(35,-11){$\alpha_6$} \put(55,-11){$\alpha_5$} \put(75,-11){$\alpha_4$} \put(95,-11){$\alpha_3$} \put(115,-11){$\alpha_1$} \put(75,30){$\alpha_2$} \end{picture} \end{array} $$ $$ \begin{array}{l} \begin{picture}(125,55) \put(00,02){\line(1,0){120}} \put(80,02){\line(0,1){20}} \put(00,02){\circle*{3}} \put(20,02){\circle*{3}} \put(60,02){\circle*{3}} \put(80,02){\circle*{3}} \put(100,02){\circle*{3}} \put(120,02){\circle*{3}} \put(80,22){\circle*{3}} \put(00,02){\circle{10}} \put(40,02){\circle*{3}} \put(120,02){\circle{10}} \put(-5,-11){$\alpha_8$} \put(15,-11){$\alpha_7$} \put(35,-11){$\alpha_6$} \put(55,-11){$\alpha_5$} \put(75,-11){$\alpha_4$} \put(95,-11){$\alpha_3$} \put(115,-11){$\alpha_1$} \put(75,30){$\alpha_2$} \end{picture} \end{array} $$ $$ \begin{array}{l} \begin{picture}(125,55) \put(00,02){\line(1,0){120}} \put(80,02){\line(0,1){20}} \put(00,02){\circle*{3}} \put(20,02){\circle*{3}} \put(20,02){\circle{10}} \put(60,02){\circle*{3}} \put(80,02){\circle*{3}} \put(100,02){\circle*{3}} \put(120,02){\circle*{3}} \put(80,22){\circle*{3}} \put(00,02){\circle{10}} \put(40,02){\circle*{3}} \put(-5,-11){$\alpha_8$} \put(15,-11){$\alpha_7$} \put(35,-11){$\alpha_6$} \put(55,-11){$\alpha_5$} \put(75,-11){$\alpha_4$} \put(95,-11){$\alpha_3$} \put(115,-11){$\alpha_1$} \put(75,30){$\alpha_2$} \end{picture} \end{array} $$ \end{enumerate} \bigskip \smallskip \noindent If furthermore $\mathfrak{G}$ is strictly of $B$--rank $\geq 2$, then $K_1^\mathfrak{G}(B)=1$. 
\end{scorollary} \medskip \begin{proof} We assume firstly that the $k$--algebra $B$ is a regular domain so that the statement is a case-by-case application of Theorem \ref{thm_vanish}, $(v) \Longrightarrow (iii')$ or $(iv) \Longrightarrow (iii')$. Almost all results of retract rationality over $K$ are quoted in \cite[th. 6.1]{Gi2}, except for the following cases. \smallskip \noindent{\it Third outer $E_6$ case, i.e. $E_{6,1}^{29}$.} This is due to Garibaldi, see \cite[Th. 6.2]{Gi2}. \smallskip \noindent{\it Second $E_8$ case, i.e. $E_{8,2}^{66}$.} This is a result by Parimala-Tignol-Weiss \cite[\S 3]{PTW}. \smallskip \noindent{\it Third $E_7$ (resp.\, $E_8$) case, i.e. $E_{7,1}^{78}$ (resp.\ $E_{8,2}^{78}$).} The $R$--triviality over fields is a result by Alsaody-Chernousov-Pianzola \cite[Th. 8.1]{ACP} and by Thakur \cite[Th. 4.2 and Cor. 4.3]{Thakur} independently. \smallskip To deduce the general case where $B$ is not necessarily regular, we use Hoobler's trick, see~\cite[proof of theorem 2]{Ho} or~\cite[p. 109]{Ka}. There exists a henselian pair $(C,I)$ such that $C/I=B$ and $C= \limind C_\alpha$ where each $C_\alpha$ is a semilocalization of an affine $k$--space. We are given a minimal $B$--parabolic subgroup $\mathfrak{P}$ of $\mathfrak{G}$. Denote by $G_0$ the split Chevalley $k$--form of $\mathfrak{G}$. Then $(\mathfrak{G},\mathfrak{P})$ is a form of $(G_0,P_0)$ for a suitable parabolic $k$--subgroup $P_0$ of $G_0$. Since $\text{\rm{Aut}}(G_0,P_0)$ is a smooth affine $k$--group~\cite[lemme 5.1.2]{Gi4}, the map $$H^1(C,\text{\rm{Aut}}(G_0,P_0)) \to H^1(C/I,\text{\rm{Aut}}(G_0,P_0)) $$ is bijective \cite[th. 1]{Strano}. This implies that there exists a couple $(\widetilde \mathfrak{G}, \widetilde \mathfrak{P})$ over $C$ such that $(\widetilde \mathfrak{G}, \widetilde \mathfrak{P}) \times_C B = (\mathfrak{G}, \mathfrak{P})$. Since $\widetilde \mathfrak{G}$ is smooth over $C$, the map ${\widetilde \mathfrak{G}}(C) \to \mathfrak{G}(B)$ is onto and so is ${\widetilde \mathfrak{G}}(C)/{A^1} \to \mathfrak{G}(B)/{A^1}$.
On the other hand, we have $H^1(C,\text{\rm{Aut}}(G_0,P_0))= \limind H^1(C_\alpha,\text{\rm{Aut}}(G_0,P_0))$ \cite[VI$_B$.10.16]{SGA3}. It follows that there exists $\alpha_0$ and a couple $(\mathfrak{G}_{\alpha_0}, \mathfrak{P}_{\alpha_0})$ such that $(\mathfrak{G}_{\alpha_0}, \mathfrak{P}_{\alpha_0}) \times_{C_{\alpha_0}} C= (\widetilde \mathfrak{G}, \widetilde \mathfrak{P})$. We have $\widetilde \mathfrak{G}(C)= \limind_{\alpha \geq \alpha_0} \mathfrak{G}_{\alpha_0}(C_\alpha)$. But $\mathfrak{G}_{\alpha_0}(C_\alpha)/{A^1}=1$ by the regular case treated above. Since $\limind_{\alpha \geq \alpha_0} \mathfrak{G}_{\alpha_0}(C_\alpha) / {A^1} \to \widetilde \mathfrak{G}(C) / {A^1} $ is onto, we conclude that $\widetilde \mathfrak{G}(C)/{A^1}=1$. If furthermore $\mathfrak{G}$ is strictly of $B$--rank $\geq 2$, then we have similarly $K_1^{\mathfrak{G}_{\alpha_0}}(C_\alpha) =1$ from the regular case and a composite of surjective maps $\limind_{\alpha \geq \alpha_0} K_1^{\mathfrak{G}_{\alpha_0}}(C_\alpha) \twoheadrightarrow K_1^{\widetilde \mathfrak{G}}(C) \twoheadrightarrow K_1^\mathfrak{G}(B)$. Thus $K_1^\mathfrak{G}(B)=1$. \end{proof} \section{Behaviour for henselian pairs} \smallskip We address the following question with respect to a henselian pair $(B,I)$ \cite[15.11]{St}; this concerns, for example, the case of a nilpotent ideal. \begin{squestion} \label{question_henselian} Let $\mathfrak{G}$ be a reductive $B$--group scheme. Is the map $\mathfrak{G}(B)/R \to \mathfrak{G}(B/I)/R$ an isomorphism? \end{squestion} Note that since $\mathfrak{G}$ is affine and smooth over $B$, the map $\mathfrak{G}(B)\to\mathfrak{G}(B/I)$ is surjective~\cite[th. I.8]{Gruson}, and hence the map of $R$-equivalence class groups is surjective. \subsection{The torus case} \begin{slemma}\label{lem_tor_isotrivial} Assume that $B/I$ is a normal domain. Let $\goth T$ be a $B$-torus. Then $\goth T$ is isotrivial.
\end{slemma} \begin{proof} Since $B/I$ is a normal domain, $\goth T \times_B B/I$ is isotrivial \cite[X.5.16]{SGA3}, that is, there exists a finite \'etale cover $C_0$ of $B/I$ such that $\goth T_{C_0} \cong \mathbb{G}_{m,C_0}^r$. Since $(B,I)$ is a henselian couple, $C_0$ lifts to a finite \'etale cover $C$ of $B$ \cite[Tag 09ZL]{St} and furthermore $(C,I C)$ is a henselian pair ({\it ibid}, Tag 09XK). According to \cite[cor. 3.1.3.(b)]{Ces-GS}, the isomorphism $\goth T_{C_0} \cong \mathbb{G}_{m,C_0}^r$ lifts to an isomorphism $\goth T_{C} \cong \mathbb{G}_{m,C}^r$ so that $\goth T \times_B C$ is split. Thus $\goth T$ is isotrivial. \end{proof} A first piece of evidence for Question \ref{question_henselian} is the following fact. \begin{slemma}\label{lem_torus_pair} Let $\goth T$ be a $B$-torus. Assume that $B/I$ is a regular domain. Then the map $\goth T(B)/R \to \goth T(B/I)/R$ is an isomorphism. \end{slemma} \begin{proof} The $B$--torus $\goth T$ is isotrivial according to Lemma \ref{lem_tor_isotrivial}. Let $1 \to \goth S \to \mathfrak{Q} \xrightarrow{\pi} \goth T \to 1$ be a flasque resolution. We have a commutative diagram of exact sequences $$ \xymatrix@C=20pt{ 0 \ar[r] & \goth T(B)/\pi(\mathfrak{Q}(B)) \ar[r] \ar[d] & H^1(B,\goth S) \ar[r] \ar[d] & H^1(B,\mathfrak{Q}) \ar[d] \\ 0 \ar[r] & \goth T(B/I)/\pi(\mathfrak{Q}(B/I)) \ar[r] & H^1(B/I,\goth S) \ar[r] & H^1(B/I,\mathfrak{Q}) \\ } $$ According to \cite[th. 1]{Strano}, the maps $H^1(B,\goth S) \to H^1(B/I,\goth S)$ and $H^1(B,\mathfrak{Q}) \to H^1(B/I,\mathfrak{Q})$ are isomorphisms. By diagram chase we conclude that the map $\goth T(B)/\pi(\mathfrak{Q}(B)) \to \goth T(B/I)/\pi(\mathfrak{Q}(B/I))$ is an isomorphism. According to Lemma \ref{lem_sorite2}.(1) we have $\mathbb{G}_m(B)/R=1$. Then Lemma \ref{lem_weil} shows that $R\mathfrak{Q}(B)=\mathfrak{Q}(B)$, hence the inclusion $\pi(\mathfrak{Q}(B)) \subseteq R \goth T(B)$. It follows that we obtain a surjection $\goth T(B)/\pi(\mathfrak{Q}(B))\to \goth T(B)/R$.
According to Proposition \ref{prop_torus1}, we have an isomorphism $\goth T(B/I)/\pi(\mathfrak{Q}(B/I)) \buildrel\sim\over\lgr \goth T(B/I)/R$. Then the isomorphism $\goth T(B)/\pi(\mathfrak{Q}(B)) \to \goth T(B/I)/\pi(\mathfrak{Q}(B/I))=\goth T(B/I)/R$ factors through $\goth T(B)/R$, and we conclude that $\goth T(B)/\pi(\mathfrak{Q}(B))=\goth T(B)/R$ as well. \end{proof} \subsection{A generalization} Using the case of tori, we obtain the following partial result for $R$-equivalence of arbitrary reductive groups. We do this by generalizing an argument of Raghunathan~\cite[\S 1]{R}. \begin{slemma} \label{lem_red_tor} We assume that $B/I$ is a regular domain. Let $\mathfrak{G}$ be a reductive $B$--group scheme admitting $B$-subtori $\goth T_1, \dots, \goth T_n$ such that $\mathop{\rm Lie}\nolimits(\mathfrak{G})(B)$ is generated as a $B$--module by the $\mathop{\rm Lie}\nolimits(\goth T_i)(B)$'s. Then $\ker\bigl(\mathfrak{G}(B) \to \mathfrak{G}(B/I) \bigr) \subseteq R\mathfrak{G}(B)$. \end{slemma} \begin{proof} We consider the map of $B$--schemes \[\xymatrix@1{ f: \goth T_1 \times_B \dots \times_B \goth T_n & \to & \mathfrak{G} \\ (t_1,\dots, t_n) & \mapsto & t_1 \dots t_n. }\] For each maximal ideal $\goth m$ of $B$, the differential at $1_{B/\goth m}$ is \[\xymatrix@1{ df_{1_{B/\goth m}}: \mathop{\rm Lie}\nolimits(\goth T_1)(B/\goth m) \oplus \dots \oplus \mathop{\rm Lie}\nolimits(\goth T_n)(B/\goth m) & \to & \mathop{\rm Lie}\nolimits(\mathfrak{G})(B/\goth m) \\ (X_1,\dots, X_n) & \mapsto & {X_1}+ \dots + {X_n} }\] which is onto by assumption. It follows that the map $f$ is smooth at $1_{B/\goth m}$ for each maximal ideal $\goth m$. The Jacobian criterion shows that $f$ is smooth in a neighborhood of the unit section of $\goth T_1 \times_B \dots \times_B \goth T_n$. Hensel's lemma \cite[th. I.8]{Gruson} (see also \cite[prop.
3.1.1]{Ces-GS}) shows that the induced map $$ \ker\bigl( (\goth T_1 \times_B \dots \times_B\goth T_n)(B) \to (\goth T_1 \times_B \dots \times_B\goth T_n)(B/I) \bigr) \to \ker\bigl( \mathfrak{G}(B) \to \mathfrak{G}(B/I) \bigr) $$ is surjective. The torus case (Lemma \ref{lem_torus_pair}) shows that $\ker\bigl( \goth T_i(B) \to \goth T_i(B/I) \bigr) \subseteq R\goth T_i(B)$ for $i=1,\dots,n$. Thus $\ker\bigl( \mathfrak{G}(B) \to \mathfrak{G}(B/I) \bigr) \subseteq R\mathfrak{G}(B)$. \end{proof} Together with Lemma \ref{lem_unirational}.(1), we get the following fact. \begin{scorollary}\label{cor_red_tor} Let $R$ be a semilocal ring with infinite residue fields and let $\mathfrak{G}$ be a reductive $R$--group scheme assumed $R$-linear. Let $(B,J)$ be a henselian pair where $B$ is an $R$--algebra such that $B/J$ is a regular domain. Then $\ker\bigl( \mathfrak{G}(B) \to \mathfrak{G}(B/J) \bigr) \subseteq R\mathfrak{G}(B)$. \end{scorollary} \subsection{The semisimple case} We continue with the henselian pair $(B,I)$. One piece of evidence is the case of the group $\SL_N(\mathcal A)$ for an Azumaya $B$--algebra $\mathcal A$ of degree invertible in $B$ and $N \gg 0$, since Hazrat has proven that the map $\SK_1(\mathcal A) \to \SK_1(\mathcal A/I)$ is an isomorphism when $B$ is semilocal~\cite{Hazrat}. First we make a variation on \cite[\S 3.4]{GPS}. \begin{slemma}\label{lem_root2} Let $F$ be a field and let $G$ be a reductive $F$-group. Let $P$ be a strictly proper parabolic $F$--subgroup and let $P^{-}$ be an opposite parabolic subgroup to $P$. We put $U= \mathrm{rad}_u(P)$ and $U^{-}=\mathrm{rad}_u(P^{-})$. We consider the following commutative diagram \[ \xymatrix@C=30pt{ 0 \ar[r]& \mathop{\rm Lie}\nolimits(G) \ar[r] & G(F[\epsilon]) \ar[r] & G(F) \ar[r] & 1 \\ & & E_P(F[\epsilon]) \ar[r] \incl[u] & E_P(F) \ar[r] \incl[u] & 1 } \] and define $V_P= \ker( E_P(F[\epsilon]) \to E_P(F) ) \subseteq \mathop{\rm Lie}\nolimits(G)$.
\smallskip (1) The $F$--subspace $V_P$ is an ideal of $\mathop{\rm Lie}\nolimits(G)$ which is $G(F)$-stable. We have \break $V_P= E_P(F) .\mathop{\rm Lie}\nolimits(U)+ E_P(F) .\mathop{\rm Lie}\nolimits(U^{-})$. \smallskip (2) If $G$ is semisimple simply connected, we have $V_P= \mathop{\rm Lie}\nolimits(G)$. \end{slemma} \begin{proof}(1) It follows from~\cite[XXVI.5.1]{SGA3} that $E_P(F[\epsilon])$ (resp.\ $E_P(F)$) is a normal subgroup of $G(F[\epsilon])$ (resp.\ $G(F)$). This implies that $V_P$ is an ideal of $\mathop{\rm Lie}\nolimits(G)$ which is furthermore $G(F)$--stable. Since $\mathop{\rm Lie}\nolimits(U), \mathop{\rm Lie}\nolimits(U^{-})$ are contained in $V_P$, it follows that $E_P(F) .\mathop{\rm Lie}\nolimits(U)+ E_P(F). \mathop{\rm Lie}\nolimits(U^{-}) \subseteq V_P$. Conversely, we are given an element $v \in V_P$. It is of the form $v= u_1 u_2 \dots u_{2n}$ with $u_{2i+1} \in U(F[\epsilon])$ and $u_{2i} \in U^{-}(F[\epsilon])$. We have a decomposition $v= v_1 (g_2 v_2 g_2^{-1}) \dots (g_{2n} v_{2n} g_{2n}^{-1})$ with $v_{2i+1} \in \mathop{\rm Lie}\nolimits(U)$, $v_{2i} \in \mathop{\rm Lie}\nolimits(U^{-})$ and $g_1, \dots, g_{2n} \in E_P(F)$. This shows that $v$ belongs to $E_P(F) .\mathop{\rm Lie}\nolimits(U)+ E_P(F) .\mathop{\rm Lie}\nolimits(U^{-})$. \smallskip \noindent (2) Without loss of generality we can assume that $G$ is absolutely almost $F$--simple. If $F$ is infinite, we have that $E_P(F) .\mathop{\rm Lie}\nolimits(U)=\mathop{\rm Lie}\nolimits(G)$ according to \cite[lemma 3.3.(3)]{GPS} so a fortiori $V_P=\mathop{\rm Lie}\nolimits(G)$. We can then assume that $F$ is finite so that $G$ is quasi-split. We then have $E_P(F) = G(F)$ according to \cite[1.1.2]{T}. If $G$ is split, the statement is \cite[lemma 3.3.(1)]{GPS}. It remains to deal with the quasi-split non-split case; then $G$ is of outer type $A$, $D$ or $E_6$. In particular, all geometric roots have the same length and $G$ is not of type $A_1$.
Furthermore we may assume that $P$ is a Borel subgroup $B$ of $G$ (according to \cite[remark after Theorem 1]{PS}). Let $T$ be a maximal torus of $B$; we recall the decomposition $\mathop{\rm Lie}\nolimits(G)= \mathop{\rm Lie}\nolimits(T) \oplus \mathop{\rm Lie}\nolimits(U) \oplus \mathop{\rm Lie}\nolimits(U^{-})$. We consider the ideal $V_P \otimes_F F_s$ of $\mathop{\rm Lie}\nolimits(G) \otimes_F F_s$. According to \cite[prop. 2.6.a]{Hg}, $V_P \otimes_F F_s$ is central or contains $\mathop{\rm Lie}\nolimits(T) \otimes_F F_s$. Since $V_P$ is not central, we get that $\mathop{\rm Lie}\nolimits(T) \subseteq V_P$. Since $\mathop{\rm Lie}\nolimits(U), \mathop{\rm Lie}\nolimits(U^{-}) \subseteq V_P$, we conclude that $V_P=\mathop{\rm Lie}\nolimits(G)$. \end{proof} \begin{sproposition}\label{prop:hens-couple} Let $R$ be a semilocal ring and let $\mathfrak{G}$ be a semisimple group scheme over $R$, such that its simply connected cover morphism $f: \mathfrak{G}^{sc} \to \mathfrak{G}$ is smooth. We assume that $\mathfrak{G}$ has a strictly proper parabolic $R$-subgroup $\mathfrak{P}$. Let $(B,I)$ be a henselian couple where $B$ is an $R$--algebra. Then the map $K_1^{\mathfrak{G},\mathfrak{P}}(B)\to K_1^{\mathfrak{G},\mathfrak{P}}(B/I)$ is an isomorphism. \end{sproposition} \begin{proof} Let $\mathfrak{P}^{-}$ be a parabolic $R$-subgroup of $\mathfrak{G}$, opposite to $\mathfrak{P}$. Let $\mathfrak{U}= \mathrm{rad}_u(\mathfrak{P})$, $\mathfrak{U}^{-}= \mathrm{rad}_u(\mathfrak{P}^{-})$. Since $\mathfrak{G}$ is affine smooth, the map $\mathfrak{G}(B) \to \mathfrak{G}(B/I)$ is surjective according to the generalization of Hensel's lemma to henselian couples \cite[th. I.8]{Gruson} (see also \cite[prop. 3.1.1]{Ces-GS}), hence $K_1^{\mathfrak{G},\mathfrak{P}}(B)\to K_1^{\mathfrak{G},\mathfrak{P}}(B/I)$ is onto.
To show that it is injective, it is enough to prove that $\ker(\mathfrak{G}(B)\to \mathfrak{G}(B/I)) \le E_{\mathfrak{P}}(B)$, since $E_{\mathfrak{P}}(B)$ surjects onto $E_\mathfrak{P}(B/I)$. Combining the lifting method of \cite[lemma 3.5]{GPS} and Lemma \ref{lem_root2}, there exist $g_1, \dots, g_{2m} \in E_{\mathfrak{P}}(R)$ such that the product map $$ h: (\mathfrak{U} \times \mathfrak{U}^{-})^m \to \mathfrak{G}, \enskip (u_1,\dots, u_{2m}) \mapsto {^{g_1}\!u_1} \dots {^{g_{2m}}\!u_{2m}} $$ is smooth at $(1,\dots,1)_{\kappa_i}$ for each residue field $\kappa_i$ of $R$. Then $h$ is smooth in a neighborhood of the origin of $(\mathfrak{U} \times \mathfrak{U}^{-})^m$. Hensel's lemma yields $\ker(\mathfrak{G}(B)\to \mathfrak{G}(B/I))\le E_{\mathfrak{P}}(B)$. \end{proof} \section{Specialization for $R$--equivalence} \subsection{The case of tori} Let $A$ be a henselian local ring with maximal ideal $m$ and residue field $k$. We remind the reader that a torus over a henselian local ring is isotrivial; this follows from the equivalence between the category of $A$-tori and that of $A/m$-tori \cite[X.4.1]{SGA3} and from the fact that a torus over a field is isotrivial. \begin{sproposition} \label{prop_torus2} Let $A$ be a local ring with residue field $k$. Let $\goth T$ be an $A$-torus and put $T= \goth T \times_A k$. Then \smallskip \noindent (1) If $A$ is henselian, then the natural map $\goth T(A)/R \to T(k)/R$ is an isomorphism. In particular we have $\ker\bigl( \goth T(A) \to T(k) \bigr) \subseteq R\goth T(A)$. \smallskip \noindent (2) If $A$ is regular and $K$ denotes the fraction field of $A$, then the natural map $\goth T(A)/R \longrightarrow \goth T(K)/R$ is an isomorphism. \end{sproposition} \begin{proof} \noindent (1) Let $m$ be the maximal ideal of $A$. Since $A$ is henselian, $(A,m)$ is a henselian pair. Since $A/m=k$ is a regular domain, Lemma~\ref{lem_torus_pair} shows that $\goth T(A)/R \to T(k)/R$ is an isomorphism.
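\smallskip For the reader's convenience, we recall the standard notion (due to Colliot-Th\'el\`ene and Sansuc) that is used in the next step: a flasque resolution of an isotrivial torus $\goth T$ is a short exact sequence of tori
$$ 1 \longrightarrow \goth S \longrightarrow \mathfrak{Q} \stackrel{\pi}{\longrightarrow} \goth T \longrightarrow 1 $$
in which $\mathfrak{Q}$ is quasi-trivial, i.e.\ a finite product of Weil restrictions of $\mathbb{G}_m$ along finite \'etale extensions of the base, and $\goth S$ is flasque; every isotrivial torus admits such a resolution.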
\smallskip \noindent (2) We consider a flasque resolution $$ 1 \to \goth S \to \mathfrak{Q} \xrightarrow{\pi} \goth T \to 1. $$ According to Proposition \ref{prop_torus1}, we have isomorphisms $\goth T(A)/ \pi( \mathfrak{Q}(A)) \buildrel\sim\over\lgr \goth T(A)/R \buildrel\sim\over\lgr H^1(A,\goth S)$ and $\goth T(K)/ \pi( \mathfrak{Q}(K)) \buildrel\sim\over\lgr \goth T(K)/R \buildrel\sim\over\lgr H^1(K,\goth S)$. Since $\goth S$ is flasque, the restriction map $H^1(A,\goth S) \to H^1(K,\goth S)$ is surjective \cite[Th. 2.2]{CTS2} and is injective ({\it ibid}, Th. 4.1). Thus the map $\goth T(A)/R \to \goth T(K)/R$ is an isomorphism. \end{proof} \begin{scorollary} \label{cor_torus2} We assume that the henselian local ring $A$ is regular noetherian with residue field $k$ and fraction field $K$. For any $A$-torus $\goth T$ we have two isomorphisms $$ T(k)/R \xleftarrow{\ \sim\ } \goth T(A)/R \xrightarrow{\ \sim\ } \goth T(K)/R. $$ \end{scorollary} \subsection{Reduction to the anisotropic case}\label{subsec_aniso} We come back to the setting of the introduction where $A$ is a henselian local domain with residue field $k$ and fraction field $K$. Let $\mathfrak{G}$ be a reductive $A$--group scheme. Let $\mathfrak{P}$ be a parabolic $A$--subgroup of $\mathfrak{G}$ and let $\mathfrak{L}$ be a Levi subgroup of $\mathfrak{P}$. We know that $\mathfrak{L}= Z_\mathfrak{G}(\goth S)$ where $\goth S$ is the maximal central $A$--split subtorus of $\mathfrak{L}$ \cite[XXVI]{SGA3}. We put $G=\mathfrak{G}\times_A k$, $P= \mathfrak{P} \times_A k$ and define similarly $L$ and $S$.
According to Corollary~\ref{cor_parabolic}, we have the following commutative diagram where horizontal maps are isomorphisms \begin{equation}\label{diag_big} \xymatrix{ \mathfrak{G}(K)/R & \ar[l]_\sim \mathfrak{L}(K)/R \ar[r]^{\sim\quad} & \bigl( \mathfrak{L}/\goth S\bigr)(K)/R \\ \mathfrak{G}(A)/R \ar[d] \ar[u] & \ar[l]_\sim \mathfrak{L}(A)/R \ar[r]^{\sim\quad} \ar[d] \ar[u]& \ar[d] \ar[u] \bigl( \mathfrak{L}/\goth S\bigr)(A)/R \\ G(k)/R & \ar[l]_\sim L(k)/R \ar[r]^{\sim\quad} & \bigl( L/S\bigr)(k)/R . } \end{equation} By diagram chase, we get the following facts. \begin{slemma} \label{lem_aniso} (1) If $\bigl(\mathfrak{L}/\goth S\bigr)(A) /R \to \bigl(L/S\bigr)(k) /R$ is injective, then $\mathfrak{G}(A)/R \to G(k)/R$ and $\mathfrak{L}(A)/R \to L(k)/R$ are isomorphisms. \smallskip \noindent (2) If $\bigl(\mathfrak{L}/\goth S\bigr)(A) /R \to \bigl(\mathfrak{L}/\goth S\bigr)(K) /R$ is injective (resp.\ surjective, resp.\ isomorphism), then $\mathfrak{G}(A)/R \to \mathfrak{G}(K)/R$ is injective (resp.\ surjective, resp.\ an isomorphism) and the map $\mathfrak{L}(A)/R \to \mathfrak{L}(K)/R$ is injective (resp.\ surjective, resp.\ an isomorphism). \end{slemma} \begin{proof} Since $\mathfrak{G}$ and $\mathfrak{L}$ are smooth $A$-schemes and $A$ is henselian, the maps $\mathfrak{G}(A)/R\to G(k)/R$ and $\mathfrak{L}(A)/R \to L(k)/R$ are surjective. The rest follows from Corollary~\ref{cor_parabolic}. \end{proof} It follows that the specialization problem reduces to the case of $\mathfrak{L}$ and even to $\mathfrak{L}/\goth S$. In particular, if $\mathfrak{P}$ is minimal, then $\mathfrak{L}/\goth S$ is anisotropic. \subsection{The lifting map} \begin{slemma}\label{lem:hens-local-RG} Let $A$ be a henselian local ring with residue field $k$ and let $\mathfrak{G}$ be a reductive $A$-group. Then $\ker\bigl( \mathfrak{G}(A) \to G(k) \bigr) \subseteq R\mathfrak{G}(A)$. \end{slemma} \begin{proof} If $k$ is infinite, the claim follows from Corollary~\ref{cor_red_tor}. 
If $k$ is finite, then $\mathfrak{G}$ is quasi-split by~\cite[XXVI.7.15]{SGA3} combined with Lang's theorem. Then one has $\mathfrak{G}(A)/R=\mathfrak{G}(k)/R=1$ by Gauss decomposition~\cite[XXVI.5.1]{SGA3} combined with the fact that quasi-split tori over $A$ and $k$ are $R$-trivial. \end{proof} The above lemma shows that the map $\mathfrak{G}(A) \to \mathfrak{G}(A)/R$ factors through $G(k)$, i.e. defines a surjective homomorphism $\phi: G(k) \to \mathfrak{G}(A)/R$. One way to prove that the map $\mathfrak{G}(A)/R \to G(k)/R$ is an isomorphism would be to show that $\phi$ factors through $G(k)/R$, that is, to complete the following diagram \begin{equation}\label{eq_problem} \xymatrix{ G(k) \ar[r]^\phi \ar[d] & \mathfrak{G}(A)/R \ar[r] & 1. \\ G(k)/R \ar@{.>}[ur]. } \end{equation} The dotted map is called (when it exists) \emph{the lifting map}. In what follows we prove the existence of the lifting map in two different cases. \begin{sproposition}\label{prop:hens-section} Let $A$ be a henselian local ring with residue field $k$, and let $\mathfrak{G}$ be a reductive group over $A$. Assume that $A$ is equicharacteristic, i.e. $A$ contains a field. Then $\mathfrak{G}(A)/R\to G(k)/R$ is an isomorphism. \end{sproposition} \begin{proof} By Lemma~\ref{lem:hens-section} $A$ is a filtered direct limit of henselian local rings $A_i$ such that the map from $A_i$ to its residue field admits a section. Since $\mathfrak{G}$ is finitely presented over $A$, and the functor $\mathfrak{G}(-)/R$ commutes with filtered direct limits by Lemma~\ref{lem_limit}, we can assume from the start that $A\to k$ admits a section. We have $\ker(\mathfrak{G}(A)\to G(k))\subseteq R\mathfrak{G}(A)$ by Lemma~\ref{lem:hens-local-RG}. Since $A\to k$ admits a section, the map $R\mathfrak{G}(A)\to RG(k)$ is surjective. These two statements together imply that $\mathfrak{G}(A)/R\to G(k)/R$ is injective: if $g \in \mathfrak{G}(A)$ maps into $RG(k)$, we may choose $r \in R\mathfrak{G}(A)$ with the same image in $G(k)$, so that $gr^{-1} \in \ker(\mathfrak{G}(A)\to G(k)) \subseteq R\mathfrak{G}(A)$ and hence $g \in R\mathfrak{G}(A)$. The surjectivity is obvious.
\end{proof} \begin{stheorem}\label{thm:hens-isotr} Let $A$ be a henselian local ring with residue field $k$, let $\mathfrak{G}$ be a semisimple group scheme over $A$, such that its simply connected cover morphism $f: \mathfrak{G}^{sc} \to \mathfrak{G}$ is smooth. We assume that $\mathfrak{G}$ has a strictly proper parabolic subgroup $\mathfrak{P}$. \smallskip \noindent (1) The map $K_1^{\mathfrak{G},\mathfrak{P}}(A)\to K_1^{G,P}(k)$ is an isomorphism. \smallskip \noindent (2) If $\mathfrak{G}= \mathfrak{G}^{sc}$, we have a square of isomorphisms $$ \xymatrix{ K_1^{\mathfrak{G},\mathfrak{P}}(A)\ar[d]^\wr \ar[r]^\sim & K_1^{G,P}(k) \ar[d]^\wr \\ \mathfrak{G}(A)/R \ar[r]^\sim & G(k)/R . } $$ \smallskip \noindent (3) Assume furthermore that $A$ is a domain with fraction field $K$. There is a natural lifting map $K_1^{G,P}(k) \to K_1^{\mathfrak{G},\mathfrak{P}}(A)\to K_1^{\mathfrak{G},\mathfrak{P}}(K)$. \end{stheorem} \begin{proof} (1) This is a special case of Proposition \ref{prop:hens-couple}. \smallskip \noindent (2) If $\mathfrak{G}= \mathfrak{G}^{sc}$, we have the following commutative diagram $$ \xymatrix{ K_1^{\mathfrak{G},\mathfrak{P}}(A)\ar[d] \ar[r]^\sim & K_1^{G,P}(k) \ar[d]^\wr \\ \mathfrak{G}(A)/R \ar[r] & G(k)/R } $$ where the right vertical isomorphism is \cite[th. 7.2]{Gi2}. Since the left vertical map is onto, a diagram chase shows that all maps are isomorphisms. \smallskip \noindent (3) This follows from (1): the lifting map is the composite of the inverse of the isomorphism $K_1^{\mathfrak{G},\mathfrak{P}}(A) \buildrel\sim\over\lgr K_1^{G,P}(k)$ with the natural map $K_1^{\mathfrak{G},\mathfrak{P}}(A) \to K_1^{\mathfrak{G},\mathfrak{P}}(K)$. \end{proof} \smallskip \subsection{The case of DVRs}\label{subsec_DVR} Assume that $A$ is a henselian DVR and $\mathfrak{G}$ is a reductive group over $A$. We remind the reader of the existence of a specialization map $$ \varphi: \mathfrak{G}(K)/R \to G(k)/R $$ which is characterized by the property $\varphi([g])= [\overline{g}]$ for all $g \in \mathfrak{G}(A)$ \cite[th. 0.2]{Gi1}.
In other words we have a commutative diagram \begin{equation}\label{eq_DVR} \xymatrix{ \mathfrak{G}(A)/R \ar[r] \ar[d] & \mathfrak{G}(K)/R \ar[dl]^{\varphi} \\ G(k)/R . } \end{equation} \noindent This is based on the existence of a specialization map $\goth X(A)/R \to \goth X(k)/R$ for a projective $A$--scheme $\goth X$ due to Koll\'ar \cite{Ko} and Madore \cite{Ma}, see also \cite[th. 6.1]{CT}. \begin{sremark}\label{rem_char2}{\rm The quoted reference~\cite[th. 0.2]{Gi1} requires the assumption that $k$ is not of characteristic $2$. This assumption occurs only in the de Concini--Procesi construction of the wonderful compactification of an adjoint semisimple $A$-group scheme. It is folklore that one can get rid of this assumption by a refinement of \cite[th. 3.13]{CS}. By descent, the relevant case is that of adjoint Chevalley groups over $\mathbb{Z}$, which is used for example in \cite{STBT}. Note also that in the field case, there is a construction of the wonderful compactification in \cite[\S 6.1]{BK}. } \end{sremark} \begin{sremark}\label{rem_CT}{\rm The existence of the specialization map in the reductive case over a DVR has been established by another method, involving simpler compactifications, by Colliot-Th\'el\`ene, Harbater, Hartmann, Krashen, Parimala, and Suresh \cite[Th. A.10]{CTHHKPS}. } \end{sremark} \begin{slemma}\label{lem_dvr2} Let $A$ be a henselian DVR. For any reductive group $\mathfrak{G}$ over $A$ the map $\mathfrak{G}(A)/R\to\mathfrak{G}(K)/R$ is surjective. \end{slemma} \begin{proof} \noindent{\it First case: $G=\mathfrak{G}_k$ is irreducible (that is, $G$ is the only parabolic $k$--subgroup of $G$).} Let $S$ be the maximal central split subtorus of $\mathfrak{G}_k$. It lifts to a central split subtorus $\goth S$ of $\mathfrak{G}$ \cite[XI]{SGA3}. Since $G/S$ is anisotropic, we have $(\mathfrak{G}/\goth S)(A)=(\mathfrak{G}/\goth S)(K)$~\cite{BT2}.
Hilbert's Theorem 90 yields $\mathfrak{G}(A)/\goth S(A)=\mathfrak{G}(K)/\goth S(K)$, hence a decomposition $\mathfrak{G}(K) = \goth S(K) \, \mathfrak{G}(A)$. Since $R\goth S(K)= \goth S(K)$, we conclude that $\mathfrak{G}(K) = \mathfrak{G}(A) \, R\mathfrak{G}(K)$. \smallskip \noindent{\it General case.} Let $\mathfrak{P}$ be a minimal parabolic $A$--subgroup of $\mathfrak{G}$. Let $\mathfrak{P}^{-}$ be an opposite parabolic $A$--subgroup scheme to $\mathfrak{P}$. Then the Levi subgroup $\mathfrak{L}=\mathfrak{P} \cap \mathfrak{P}^{-}$ is such that $L =\mathfrak{L}_k$ is irreducible. Let $\goth S$ be the maximal central split subtorus of $\mathfrak{L}$. The first case shows that $(\mathfrak{L}/\goth S)(A)/R\to (\mathfrak{L}/\goth S)(K)/R$ is surjective. By Lemma~\ref{lem_aniso} this implies the surjectivity of $\mathfrak{G}(A)/R\to\mathfrak{G}(K)/R$. \end{proof} \begin{sproposition}\label{prop:hens-dvr-isotr} Let $A$ be a henselian DVR. Let $k$ be the residue field of $A$ and let $K$ be the fraction field of $A$. Let $\mathfrak{G}$ be a semisimple simply connected $A$--group scheme having a strictly proper parabolic $A$-subgroup $\mathfrak{P}$. Then we have the following commutative diagram of isomorphisms \begin{equation}\label{diag_hDVR} \xymatrix{ K_1^{\mathfrak{G},\mathfrak{P}}(k)\ar[d]^\wr & \ar[l]_\sim K_1^{\mathfrak{G},\mathfrak{P}}(A) \ar[r]^{\sim}\ar[d]^{\wr} & K_1^{\mathfrak{G},\mathfrak{P}}(K)\ar[d]^\wr \\ \mathfrak{G}(k)/R & \ar[l]_\sim \mathfrak{G}(A)/R \ar[r]^{\sim}& \mathfrak{G}(K)/R \\ } \end{equation} \end{sproposition} \begin{proof} By~Theorem~\ref{thm:hens-isotr} we have that $\mathfrak{G}(A)/R\to \mathfrak{G}(k)/R$ is an isomorphism. Then it follows from the existence of the specialization map~\eqref{eq_DVR} that $\mathfrak{G}(A)/R\to \mathfrak{G}(K)/R$ is injective. By Lemma~\ref{lem_dvr2} the map $\mathfrak{G}(A)/R\to \mathfrak{G}(K)/R$ is surjective.
Consider the commutative diagram $$ \xymatrix{ K_1^{\mathfrak{G},\mathfrak{P}}(A) \ar[r] \ar[d] & \mathfrak{G}(A)/R \ar[d] \ar[r]& 1\\ K_1^{\mathfrak{G},\mathfrak{P}}(k) \ar[r] & \mathfrak{G}(k)/R .\\ } $$ The bottom horizontal map is an isomorphism by \cite{Gi2}, as used several times above, and the left vertical map is an isomorphism in view of Proposition \ref{prop:hens-couple}. It follows that $K_1^{\mathfrak{G},\mathfrak{P}}(A) \to \mathfrak{G}(A)/R$ is injective, and hence an isomorphism. The remaining isomorphisms follow immediately. \end{proof} \begin{sremark}{\rm The surjectivity of the map $K_1^{\mathfrak{G},\mathfrak{P}}(A)\to K_1^{\mathfrak{G},\mathfrak{P}}(K)$ was previously proved in~\cite[lemme 4.5.1]{Gi2}. Note that it does not hold for $A=k[[t]]$ and $\mathfrak{G}=\GL_n$ or $\PGL_n$, so it seems specific to the semisimple simply connected case. } \end{sremark} \subsection{Specialization in the equicharacteristic case}\label{subsec_const} Assume that $A$ is a complete regular local ring containing a field $k_0$ and let $K$ be its fraction field. According to \cite[$_1$.19.6.4]{EGA4}, $A$ is $k_0$-isomorphic (non-canonically) to a formal series ring $k[[t_1,\dots ,t_d]]$, where $k$ is the residue field of $A$. Let $\mathfrak{G}$ be a reductive $A$-group scheme. There exists a unique reductive $k$--group $G$ such that $\mathfrak{G} \times_A k[[t_1,\dots ,t_d]] \cong G \times_k k[[t_1,\dots ,t_d]]$ (see the proof of Corollary~\ref{cor:cons} below). Since the fraction field $K= k((t_1,\dots, t_d))$ of $A$ is a (proper) subfield of the iterated Laurent power series field $k((t_1)) \dots ((t_d))$, and $$ G(k)/R\to G\bigl( k((t_1)) \dots ((t_d))\bigr)/R $$ is an isomorphism~\cite[cor. 0.3]{Gi1}, we can define a specialization map $G(K)/R\to G(k)/R$ inductively, $$ sp: G(K)/R \to G\bigl( k((t_1))\dots((t_d)) \bigr)/R \to G\bigl( k((t_1))\dots((t_{d-1})) \bigr)/R \to \dots \to G(k)/R.
$$ However, it is not clear a priori that this map is independent of the choice of coordinates $t_1,\ldots,t_d$. The following theorem solves this problem. \begin{stheorem}\label{thm:const} Let $k$ be an arbitrary field. Then for any reductive group $G$ over $k$ and any $d\ge 1$ the natural maps $$ G(k)/R \to G\bigl(k[[t_1,\dots, t_d]]\bigr)/R \to G\bigl(k((t_1,\dots, t_d))\bigr)/R $$ are isomorphisms. \end{stheorem} \begin{proof} We set $A=k[[t_1,\dots, t_d]]$ and $K=k((t_1,\dots, t_d))$. By Proposition~\ref{prop:hens-section} we have the isomorphism $G(k)/R \buildrel\sim\over\lgr G(A)/R$. Theorem \ref{thm:surj} shows that $G(A)/R \to G(K)/R$ is onto. It remains to prove that the surjective map $G(k)/R \to G(K)/R$ is injective. We know that the one-dimensional case holds, i.e. the map $G(k)/R \to G\bigl( k((t)) \bigr)/R$ is an isomorphism \cite[cor. 0.3]{Gi1}. Using the embedding $$K=k((t_1,\dots, t_d)) \hookrightarrow k((t_1))\dots((t_d))$$ and iterating the one-dimensional case, we see that the composite map $G(k)/R \to G(K)/R \to G\bigl(k((t_1))\dots((t_d))\bigr)/R$ is an isomorphism, whence $G(k)/R \to G(K)/R$ is injective. \end{proof} \begin{scorollary}\label{cor:cons} Let $(A,m)$ be a complete regular local ring containing a field $k_0$. Let $k$ be the residue field of $A$, and let $K$ be the fraction field of $A$. Let $\mathfrak{G}$ be a reductive group scheme over $A$. Then we have two isomorphisms $$ \mathfrak{G}(k)/R \xleftarrow{\ \sim\ } \mathfrak{G}(A)/R \xrightarrow{\ \sim\ } \mathfrak{G}(K)/R. $$ \end{scorollary} \begin{proof} According to \cite[$_1$.19.6.4]{EGA4}, $A$ is $k_0$-isomorphic (non-canonically) to a formal series ring $k[[t_1,\dots ,t_d]]$, where $k$ is the residue field of $A$. The group $\mathfrak{G}$ is the twisted $A$--form of a Chevalley reductive group $\mathbb{Z}$--scheme $\mathfrak{G}_0$ by an $\text{\rm{Aut}}(\mathfrak{G}_0)$--torsor $\mathfrak{E}$. Let $G=\mathfrak{G}\times_A k$ be the restriction of $\mathfrak{G}$ via the residue homomorphism $A\to k$.
Since $H^1(\widehat{A}, \text{\rm{Aut}}(\mathfrak{G}_0)_k) \buildrel\sim\over\lgr H^1(k, \text{\rm{Aut}}(\mathfrak{G}_0)_k)$ \cite[XXIV.8.1]{SGA3}, it follows that $\mathfrak{G}$ is isomorphic to ${G \times_k k[[t_1,\dots, t_d]]}$. Then we can apply Theorem~\ref{thm:const}. \end{proof} \begin{stheorem}\label{thm:hens-inj} Let $A$ be a henselian regular local ring containing a field $k_0$. Let $k$ be the residue field of $A$ and let $K$ be the fraction field of $A$. Let $\mathfrak{G}$ be a reductive group scheme over $A$. Then we have two isomorphisms $$ \mathfrak{G}(k)/R \xleftarrow{\ \sim\ } \mathfrak{G}(A)/R \xrightarrow{\ \sim\ } \mathfrak{G}(K)/R. $$ In particular, we have a well-defined specialization map $sp:\mathfrak{G}(K)/R \to \mathfrak{G}(k)/R$ and it is an isomorphism. \end{stheorem} \begin{proof} By Lemma~\ref{lem:hens-section} $A$ is a filtered direct limit of henselian regular local rings $A_i$ such that each $A_i$ contains a field and the map from $A_i$ to its residue field admits a section. Since the group scheme $\mathfrak{G}$ and its parabolic subgroups are finitely presented over $A$, and the functor $\mathfrak{G}(-)/R$ commutes with filtered direct limits, we can assume from the start that $A\to k$ admits a section. Since $A$ is henselian, we have a bijection $H^1(A, \text{\rm{Aut}}(\mathfrak{G}_0)_k) \buildrel\sim\over\lgr H^1(k, \text{\rm{Aut}}(\mathfrak{G}_0)_k)$ \cite[XXIV.8.1]{SGA3}. Since $A\to k$ has a section, it follows that $\mathfrak{G}$ is isomorphic to $\mathfrak{G}_k \times_k A$. Clearly, $\mathfrak{G}$ is isotropic if and only if $\mathfrak{G}_k$ is isotropic. Then by Proposition~\ref{prop:hens-section} $G(A)/R\to G(k)/R$ is an isomorphism and by Theorem~\ref{thm:surj} $\mathfrak{G}(A)/R\to\mathfrak{G}(K)/R$ is surjective. Let $\hat A$ be the completion of $A$ at the maximal ideal and let $\hat K$ be its fraction field. Then $\hat A$ is a complete regular local ring containing $k_0$ and $k$ is its residue field. 
By Corollary~\ref{cor:cons} the maps $G(\hat A)/R\to G(k)/R$ and $G(\hat A)/R\to G(\hat K)/R$ are isomorphisms. Hence $G(A)/R\to G(\hat A)/R$ is an isomorphism, and consequently $G(A)/R\to G(K)/R$ is injective. \end{proof} \begin{scorollary} Let $B$ be a regular local ring containing a field $k_0$, let $L$ be the fraction field of $B$ and let $l$ be the residue field of $B$. Let $\mathfrak{G}$ be a reductive group scheme over $B$. There is a well-defined specialization homomorphism $\mathfrak{G}(L)/R\to \mathfrak{G}(l)/R$. \end{scorollary} \begin{proof} Let $\hat B$ denote the completion of $B$ with respect to the maximal ideal, and let $\hat L$ denote the fraction field of $\hat B$. By Theorem~\ref{thm:hens-inj} the natural maps $\mathfrak{G}(\hat B)/R\to \mathfrak{G}(l)/R$ and $\mathfrak{G}(\hat B)/R\to \mathfrak{G}(\hat L)/R$ are isomorphisms. The specialization map is then induced by these isomorphisms together with the map $\mathfrak{G}(L)/R\to\mathfrak{G}(\hat L)/R$. \end{proof} \begin{sremark}\label{rem_CT2}{\rm Colliot-Th\'el\`ene, Harbater, Hartmann, Krashen, Parimala, and Suresh have constructed a specialization homomorphism for arbitrary regular local rings of dimension 2~\cite[Prop. A.12]{CTHHKPS}. } \end{sremark} \section{Appendices} \subsection{The big Bruhat cell is a principal open subscheme.}\label{appendix_cell} For split groups and Borel subgroups, this statement goes back to Chevalley, see \cite[lemma 4.5]{Bo}. \begin{slemma}\label{lem_cell} Let $B$ be a ring and let $\mathfrak{G}$ be a reductive group $B$-scheme equipped with a pair of opposite parabolic $B$--subgroups $\mathfrak{P}^{\pm}$. Then the big cell $\Omega$ of $\mathfrak{G}$ attached to $\mathfrak{P}$ and $\mathfrak{P}^{-}$ is a principal open subscheme of $\mathfrak{G}$. More precisely, there exists $f \in B[\mathfrak{G}]$ such that $\Omega= \mathfrak{G}_f$ and $f$ can be chosen $\text{\rm{Aut}}(\mathfrak{G},\mathfrak{P}, \mathfrak{P}^{-})$--invariant.
\end{slemma} \begin{proof} Without loss of generality, we can assume that $\mathfrak{G}$ is adjoint. We can assume that $B$ is noetherian and connected so that $(\mathfrak{G}, \mathfrak{P}, \mathfrak{P}^{-})$ is a $B$--form of $(\mathfrak{G}_0, \mathfrak{P}_0, \mathfrak{P}_0^{-})_B$ where $\mathfrak{G}_0$ is an adjoint Chevalley $\mathbb{Z}$--group scheme equipped with opposite parabolic $\mathbb{Z}$--group subschemes $(\mathfrak{P}_0, \mathfrak{P}_0^{-})$ related to the Chevalley pinning. Then $(\mathfrak{G}, \mathfrak{P}, \mathfrak{P}^{-})$ is the twist of $(\mathfrak{G}_0, \mathfrak{P}_0, \mathfrak{P}_0^{-})_B$ by an $\text{\rm{Aut}}(\mathfrak{G}_0,\mathfrak{P}_0, \mathfrak{P}^{-}_0)$-torsor so that the statement boils down to the split case over $\mathbb{Z}$. We consider the Levi subgroup $\mathfrak{L}_0 = \mathfrak{P}_0^+ \cap \mathfrak{P}_0^{-}$ so that $\text{\rm{Aut}}(\mathfrak{G}_0,\mathfrak{P}_0, \mathfrak{P}^{-}_0) = \text{\rm{Aut}}(\mathfrak{G}_0, \mathfrak{P}_0, \mathfrak{L}_0)$ is the semi-direct product of $\mathfrak{L}_{0}$ and a finite constant $\mathbb{Z}$--group scheme $\Gamma$ \cite[lemme 5.1.2]{Gi4}. According to \cite[3.8.2.(a)]{BT2} there is a function $f_0 \in \mathbb{Z}[\mathfrak{G}_0]$ such that $\mathbb{Z}[\Omega_0]=\mathbb{Z}[\mathfrak{G}_0]_{f_0}$ and satisfying $f_0(1)=1$. We claim that $f_0$ is $\mathfrak{L}_0$-invariant with respect to the adjoint action. We denote by $\Lambda= \mathop{\rm Hom}\nolimits_{\overline{\mathbb{Q}}-gr} (\mathfrak{L}_{\overline{\mathbb{Q}}}, \mathbb{G}_m)$ the lattice of characters and remind the reader of the Rosenlicht decomposition \cite[th. 3]{Ro} $$ H^0(\mathfrak{L}_{\overline{\mathbb{Q}}}, \mathbb{G}_m) = \overline{\mathbb{Q}}^\times \oplus \Lambda $$ which shows that $\Lambda = \bigl\{ f \in H^0(\mathfrak{L}_{\overline{\mathbb{Q}}}, \mathbb{G}_m) \, \mid \, f(1)=1 \bigr\}$. We observe that the induced action (by the adjoint action) of $\mathfrak{L}_0(\overline{\mathbb{Q}})$ on $\Lambda$ is trivial.
It follows that the map \[ \phi: \mathfrak{L}_0(\overline{\mathbb{Q}}) \to \overline{\mathbb{Q}}[\mathfrak{L}_0]^\times \to \Lambda, \enskip x \mapsto {^x\!f}_0 \, f_0^{-1} \] is a group homomorphism. Since $\mathfrak{L}_{0, \overline{\mathbb{Q}}}$ is generated by its maximal tori, we have $\mathfrak{L}_0(\overline{\mathbb{Q}}) = \langle \mathfrak{L}_0(\overline{\mathbb{Q}})^n \rangle$ for all $n \geq 1$, so the image of $\phi$ is contained in $n\Lambda$ for all $n \geq 1$. Since $\Lambda$ is a free abelian group of finite rank, we get that $\phi$ is zero, and this establishes the above claim. Taking the product of the $\Gamma$-conjugates of $f_0$ permits us to assume that $f_0$ is $\text{\rm{Aut}}(\mathfrak{G}_0, \mathfrak{P}_0, \mathfrak{L}_0)$-invariant. By descent, $f_0$ then gives rise to $f \in B[\mathfrak{G}]$ such that $\Omega= \mathfrak{G}_{f}$. \end{proof} \subsection{The Colliot-Th\'el\`ene--Ojanguren method for functors in pointed sets.} \label{appendix_cto} In this section we summarize the classic injectivity theorem of Colliot-Th\'el\`ene and Ojanguren~\cite[th. 1.1]{CTO}. Our goal is to make explicit the fact that a certain intermediate step in the proof of this theorem holds under weaker assumptions than the theorem itself. Let $k$ be an infinite field and let $R \mapsto F(R)$ be a covariant functor on the category of $k$--algebras (commutative, unital) with values in pointed sets. We consider the following properties: \medskip $({\bf P}_1)$ The functor $F$ commutes with filtered direct limits of $k$-algebras having flat transition morphisms. \medskip $({\bf P}_2)$ For each $k$--field $E$ and for each $n \geq 1$, the map $$ F\bigl( E[t_1, \dots, t_n] \bigr) \to F\bigl( E(t_1, \dots, t_n) \bigr) $$ has trivial kernel. \medskip $({\bf P}_3)$ (Patching property) For each finite type flat inclusion $A \hookrightarrow B$ of noetherian integral $k$--algebras and each non-zero element $f \in A$ such that $A/fA \buildrel\sim\over\lgr B/fB$, the map $$ \mathop{\rm Ker}\nolimits\bigl( F(A) \to F(A_f)\bigr) \to \mathop{\rm Ker}\nolimits\bigl( F(B) \to F(B_f)\bigr) $$ is onto.
\medskip One may consider the following weaker property. \medskip $({\bf P}'_3)$ (Zariski patching) For each noetherian integral $k$--algebra $A$ and for each decomposition $A= Af + Ag$ with $f$ non-zero, the map $$ \mathop{\rm Ker}\nolimits\bigl( F(A) \to F(A_f)\bigr) \to \mathop{\rm Ker}\nolimits\bigl( F(A_g) \to F(A_{fg} )\bigr) $$ is onto. \medskip We have ${\bf P}_3 \Longrightarrow {\bf P}'_3$ by taking $B=A_g$: indeed $B_f= A_{fg}$, and since $A=Af+Ag$ the element $g$ is invertible modulo $f$, so that $A/fA \buildrel\sim\over\lgr B/fB$. The following theorem was proved by Colliot-Th\'el\`ene and Ojanguren. \begin{stheorem} \label{thm_cto} \cite[th. 1.1]{CTO} We assume that $F$ satisfies ${\bf P}_1$, ${\bf P}_2$ and ${\bf P}_3$. Let $A$ be a local ring of a smooth $L$--algebra $C$ where $L$ is a $k$--field. Denote by $K$ the fraction field of $A$. Then the map $ F\bigl( A \bigr) \to F\bigl( K\bigr) $ has trivial kernel. \end{stheorem} The proof of this theorem relies on the following result. \begin{sproposition}\label{prop_cto}~\cite[prop. 1.5]{CTO} We assume that $F$ satisfies ${\bf P}_1$, ${\bf P}_2$ and ${\bf P}'_3$. Let $A$ be the local ring at a prime ideal of a polynomial algebra $L[t_1, \dots, t_d]$ where $L$ is a $k$--field. Denote by $K$ the fraction field of $A$. Then for each integer $n \geq 0$, the map $$ F\bigl( A[x_1, \dots, x_n] \bigr) \to F\bigl( K(x_1, \dots, x_n)\bigr) $$ has trivial kernel. \end{sproposition} \begin{proof} The original statement of~\cite[prop. 1.5]{CTO} assumes that $F$ satisfies ${\bf P}_1$, ${\bf P}_2$ and ${\bf P}_3$, and that $A$ is a localization of $L[t_1, \dots, t_d]$ at a maximal ideal. Inspection of the proof shows that instead of property ${\bf P}_3$, only the Zariski patching property ${\bf P}'_3$ was used. Furthermore, since every prime ideal of $L[t_1, \dots, t_d]$ is an intersection of maximal ideals, and $F$ satisfies ${\bf P}_1$, the case where $A$ is a localization at a prime ideal follows from the case of localizations at maximal ideals~\cite[p. 101, Premi\`ere r\'eduction]{CTO}.
\end{proof} \bigskip \subsection{Fields of representatives for henselian regular local rings} The following fact was brought to our attention by K. \v{C}esnavi\v{c}ius. \begin{slemma}\label{lem:hens-section} Let $A$ be a henselian local ring containing a field $k_0$. Then $A$ is a filtered direct limit of henselian local rings $A_i$ such that the map from $A_i$ to its residue field admits a section. If $A$ is moreover regular, then the henselian local rings $A_i$ can be chosen regular as well. \end{slemma} \begin{proof} Clearly, we can assume that $k_0$ is a finite field or $\mathbb{Q}$ without loss of generality. The local ring $A$ is a filtered direct limit of local rings $C_i$ that are localizations of finitely generated $k_0$-algebras contained in $A$. Since $A$ is henselian, we can replace each $C_i$ by its henselization $A_i=(C_i)^h$. Let $k_i=A_i/m_i$ be the residue field of $A_i$. Then $k_i$ is a finitely generated field extension of $k_0$. We claim that $A_i\to k_i$ admits a section. Indeed, since $k_0$ is perfect, it follows that $k_i$ is separably generated over $k_0$, that is, $k_i$ is a finite separable extension of a purely transcendental field extension $L=k_0(t_1,\ldots,t_n)$ of $k_0$ of finite transcendence degree~\cite[II, \S 13, Theorem 31]{ZaSa-I}. Choose arbitrary lifts $a_1,\ldots,a_n$ of $t_1,\ldots,t_n$ to $A_i$. Then $k_0(a_1,\ldots,a_n)\cong L$ is a subfield of $A_i$ that lifts $L$. By the primitive element theorem, $k_i=L[b]\cong L[t]/(P(t))$ where $P$ is a separable $L$--polynomial. Seeing now $P$ as an $A_i$--polynomial, Hensel's lemma shows that $P(t)$ has a root $a \in A_i$ which lifts $b \in k_i$. We then define an $L$--map $k_i=L[b] \to A_i$ by mapping $b$ to $a$. The composite map $k_i \to A_i \to k_i$ is the identity, as desired. If $A$ is a regular henselian local ring, note that the embedding $k_0\to A$ is geometrically regular, since $k_0$ is perfect~\cite[(28.M), (28.N)]{Mats}.
By Popescu's theorem~\cite{Po90,Swan}, $A$ is then a filtered direct limit of localizations $C_i$ of smooth $k_0$-algebras, and the corresponding henselizations $A_i$ are also regular. \end{proof}
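To illustrate the lifting argument in the above proof on a toy example (our own illustration, not taken from the references), take $k_0=\mathbb{Q}$ and let $C=\mathbb{Q}[x]_{(x^2-2)}$ be the localization of $\mathbb{Q}[x]$ at the maximal ideal $(x^2-2)$, whose residue field is $k=\mathbb{Q}(\sqrt{2})$. Here $L=\mathbb{Q}$, $k=L[b]$ with $b=\sqrt{2}$, and $P(t)=t^2-2$. The map $C\to k$ admits no section, since the fraction field $\mathbb{Q}(x)$ of $C$ contains no square root of $2$. Over the henselization $A=C^h$, however, the separable polynomial $P$ has the simple root $\overline{x}=\sqrt{2}$ modulo the maximal ideal, so Hensel's lemma produces $a\in A$ with $a^2=2$ lifting $\sqrt{2}$, and $\sqrt{2}\mapsto a$ defines the desired section $k\to A$.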
gr-qc/9611030
\section{Introduction} The recent progress in non-perturbative quantum gravity using Ashtekar's formulation of general relativity is due, in part, to the application to gravity of techniques used for studying Yang-Mills theory non-perturbatively \cite{Ash}. The constraints of Euclidean or Lorentzian general relativity are appealing in this formulation because they are polynomials of order at most four in the basic variables. An example of the progress is a complete non-perturbative quantization \cite{ALMMT} of the Husain-Kucha\v r model. This model consists of a four-dimensional generally covariant $SU(2)$ gauge theory, which happens to be perturbatively non-renormalizable, and has a phase space like that of general relativity, except that the Hamiltonian constraint vanishes identically \cite{HK}. More recently, a possibly complete non-perturbative quantization of general relativity has been given by Thiemann \cite{TT}. An important aspect of the approach is the use of the $SU(2)$ invariant Wilson loops as elementary classical variables of the theory. There is an uncountable number of such elementary variables, since they are labelled by the (inequivalent) loops around which the holonomies are evaluated. In order to control this configuration space, it is given as the projective limit of finite dimensional spaces associated with a finite number of inequivalent loops. The aim of this paper is a better understanding of the reduced phase space of Euclidean general relativity in the Ashtekar variables. As a first step in this direction, we give a complete classification of all $SU(2)$ and diffeomorphism invariant local quantities. At the same time, this corresponds to a complete classification of the local conservation laws of the Husain-Kucha\v r model. The characterization ``local'' comes from the fact that we work in the context of jet-spaces, which provide an appealing (countable) projective family for analytic sections.
During the analysis, we are naturally led to a special covariant derivative, given by the $SU(2)$ covariant derivative, where the spacetime indices are converted into $su(2)$ indices using the inverse triads. A first result, of considerable interest in itself, is that the constraints can be rewritten in a purely geometrical way. The diffeomorphism constraint corresponds to the vanishing of the trace of the curvature of this covariant derivative, and the Gauss constraint corresponds to the vanishing of the trace of its torsion. The Hamiltonian constraint corresponds to an additional algebraic restriction on the curvature. The paper is organized as follows. In the next section, we fix the notations and define the models. We then introduce the special covariant derivative, and give the covariant representation of the constraints and their algebra. The following two sections are devoted to explaining how this representation may be arrived at in a constructive way. In the third section we give some ideas about jet-spaces as applied to $SU(2)$ Yang-Mills theory. A detailed analysis of the orbit space and Wilson loops in this context serves both as a warm up before discussing the more complicated case of diffeomorphism invariance, and as a possible bridge for comparison with the quantization using Wilson loops as fundamental variables. We first present a change of coordinates which allows the separation of coordinates purely along the gauge orbits from coordinates containing the gauge invariant information. We then show how the local information contained in the Wilson loops can be expressed in terms of these latter coordinates. In the fourth section, we apply these ideas to the diffeomorphism and Gauss constraints of Euclidean general relativity in Ashtekar's variables to obtain the geometrical representation of the constraint surface. 
The classification of all the local conservation laws of the Husain-Kucha\v r model \cite{HK}, or equivalently, all local diffeomorphism and $SU(2)$ invariant quantities is done in section 5 and a corresponding appendix, containing the local BRST cohomology of the model. Finally, we consider models with the Hamiltonian constraint function as the integrand of an action. In three dimensions such an action is topological in the sense that its field equations require both the curvature and the torsion of the covariant derivative to vanish. In four dimensions, the covariant field equations give an identically vanishing Hamiltonian constraint. \section{Geometrical representation of constraints} \setcounter{equation}{0} Let us first fix the conventions. The indices $i,j,k,\dots$ denote the $SU(2)$ indices which are raised and lowered with the Euclidean metric, while the indices $a,b,c,\dots$ denote three dimensional space indices. Let $\tilde \eta^{abc}$ be the alternating symbol in space, $\tilde e = {1\over 3!}\tilde \eta^{abc}e_a^i e_b^j e_c^k\epsilon_{ijk}$ the determinant of the triad $e^i_a$, $\tilde E^a_i=\tilde e e^a_i$ the density weighted inverse triad and $A^i_a$ the $SU(2)$ connection. The generator of $SU(2)$ rotations is denoted by $\delta_i$, so for any $SU(2)$ vector $\omega^j$, $\delta_i \omega^j=\epsilon^{j}_{\ il}\omega^l$. The $SU(2)$ covariant derivative is defined by $D_a=\partial_a+A^i_a \delta_i$. The corresponding curvature is $F^i_{ab}=\partial_{[a} A^i_{b]}+\epsilon^i_{jk}A^j_a A^k_b$, where the square brackets denote antisymmetrization without the factor ${1\over 2}$. Let us also introduce, for later purposes \begin{equation} T^i_{ab} := D_{[a}e^i_{b]}.
\end{equation} The constraints of Euclidean general relativity in Ashtekar's variables are \begin{eqnarray} \tilde G_i &\equiv& -D_a \tilde E^a_i=0\label{gauss}\\ \tilde H_a &\equiv& \partial_{[a}A^i_{b]}\tilde E^b_i-A^i_a\partial_b\tilde E^b_i=0\label{diffeo}\\ \tilde{\tilde C} &\equiv& F^i_{ab}\tilde E^a_j \tilde E^b_k{\epsilon_{i}}^{jk}=0 \label{ham}. \end{eqnarray} One often replaces the diffeomorphism constraint by the vector constraint \begin{eqnarray} \tilde V_a\equiv F^i_{ab}\tilde E^b_i=\tilde H_a-A^i_a \tilde G_i, \end{eqnarray} which is an intermediate step in our redefinition of the constraint surface. Let \begin{eqnarray} F^i_{\ jk}&=&F^i_{ab}e^a_je^b_k, \ \ \ \ F_i=F^j_{\ ij},\ \ \ \ F=\epsilon_{ijk}F^{ijk}\nonumber \\ T^i_{\ jk}&=&T^i_{ab}e^a_je^b_k,\ \ \ \ T_i=T^j_{\ ij},\ \ \ \ T=\epsilon_{ijk}T^{ijk}. \end{eqnarray} Consider the covariant derivative \begin{equation} D_i=e^a_i D_a. \end{equation} Its curvature $F^i_{\ jk}$ and torsion $T^i_{\ jk}$ are given by \begin{eqnarray} [D_i,D_j]=F^k_{\ ij}\delta_k-T^k_{\ ij}D_k. \end{eqnarray} The Bianchi identities following from $[D_k,[D_i,D_j]]+{\rm cyclic}\ (k,i,j)=0$ are \begin{eqnarray} D_k F^j_{\ mn}-F^j_{\ ki}T^i_{\ mn}+{\rm cyclic}\ (k,m,n)&=&0\label{b1}\\ D_k T^j_{\ mn}-T^j_{\ ki}T^i_{\ mn}+ \epsilon^{j}_{\ ki}F^i_{\ mn}+ {\rm cyclic}\ (k,m,n)&=&0. \label{b2} \end{eqnarray} The constraint surface defined by the equations (\ref{gauss})-(\ref{ham}) may equivalently be represented by the equations \begin{eqnarray} T_i=0\label{y1}\\ F_i=0\label{y2}\\ F=0\label{y3}. \end{eqnarray} Indeed, the first equation is just the Gauss constraint divided by $\tilde e$, the second equation is the vector constraint divided by $\tilde e$ and contracted with $e^a_i$, while the last equation is the Hamiltonian constraint divided by $(\tilde e)^2$.
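To make the first of these identifications explicit, one may use $\tilde E^a_i={1\over 2}\tilde\eta^{abc}\epsilon_{ijk}e^j_b e^k_c$ together with $\tilde\eta^{abc}=\tilde e\,\epsilon^{lmn}e^a_l e^b_m e^c_n$ (a short derivation we add for the reader; the overall sign depends on the orientation conventions fixed above). Then
$$
D_a \tilde E^a_i={1\over 2}\tilde\eta^{abc}\epsilon_{ijk}T^j_{ab}e^k_c
={1\over 2}\tilde e\,\epsilon^{lmn}\epsilon_{ijn}T^j_{\ lm}
={1\over 2}\tilde e\,(T^j_{\ ij}-T^j_{\ ji})=\tilde e\, T_i,
$$
so that $\tilde G_i=-D_a\tilde E^a_i$ vanishes precisely when $T_i$ does.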
Let $\vec\lambda(x),\vec\mu(x)$ be space dependent $SU(2)$ vectors with $[\vec\lambda,\vec\mu]^i=\epsilon^i_{jk} \lambda^j\mu^k$, let $\rho(x),\sigma(x)$ be space dependent scalars and let the smeared versions of the constraints be defined by \begin{eqnarray} {\cal T}[\vec\lambda]=\int d^3x\ \tilde e T_i\lambda^i\label{gs2}\\ {\cal F}[\vec\mu]=\int d^3x\ \tilde e F_i\mu^i\label{v1}\\ {\cal C}[\rho]=\int d^3x\ \tilde e F\rho. \end{eqnarray} A direct computation using the first of the Bianchi identities (\ref{b1}) gives for the constraint algebra \begin{eqnarray} \{{\cal T}[\vec\lambda],{\cal T}[\vec\mu]\} &=&{\cal T}[{[\vec\lambda,\vec\mu]}]\\ \{{\cal T}[\vec\lambda],{\cal F}[\vec\mu]\}&=&{\cal F} [{[\vec\lambda,\vec\mu]}]\\ \{ {\cal T}[\vec\lambda],{\cal C}[\rho] \}&=& 0\\ \{{\cal F}[\vec\lambda],{\cal F}[\vec\mu]\}&=& {\cal T}[-\vec F_{jk}\lambda^j\mu^k+{3\over 2}F_i(\lambda^i\vec\mu-\mu^i\vec\lambda)] + {\cal F}[\vec T_{jk}\lambda^j\mu^k]\nonumber\\ &\equiv&{\cal T}[-\vec F_{jk}\lambda^j\mu^k]+{\cal F}[\vec T_{jk}\lambda^j\mu^k-{3\over 2}T_i(\lambda^i\vec\mu-\mu^i\vec\lambda)]\\ \{{\cal C}[\rho],{\cal F}[\vec\lambda]\}&=&{\cal T}[{1\over 2}F\rho\vec\lambda+2\rho{{\vec \epsilon}_{i}}^m{F^i}_{km}\lambda^k+ 2\rho[\vec\lambda,\vec F]]\nonumber \\ & & +{\cal F}[-{1\over 2}\rho T\vec\lambda+2[\vec D\rho,\vec\lambda]+\rho \vec T_{ij}{\epsilon^{ij}}_k\lambda^k]\\ \{{\cal C}[\rho],{\cal C}[\sigma]\}&=&{\cal C}[-4F^i(\rho D_i\sigma-\sigma D_i\rho)]\nonumber\\ &\equiv&{\cal F}[-4F(\rho\vec D\sigma-\sigma\vec D\rho)] \end{eqnarray} Note that the algebra of the modified vector constraints contains structure functions, but that these relations contain no derivatives of the smearing functions. This is contrary to what happens for the usual representation (\ref{diffeo}). All the structure functions are $SU(2)$ tensors and contain no space indices.
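For comparison, recall the standard behaviour of the usual diffeomorphism constraint (a well-known fact which we restate here with smearing conventions of our own): smeared with shift vector fields as $\tilde H[\vec N]=\int d^3x\ N^a\tilde H_a$, it closes on the Lie bracket of the shifts,
$$
\{\tilde H[\vec N],\tilde H[\vec M]\}=\tilde H\bigl[[\vec N,\vec M]\bigr],
\qquad [\vec N,\vec M]^a=N^b\partial_b M^a-M^b\partial_b N^a,
$$
so that derivatives of the smearing fields do appear on the right-hand side, in contrast with the geometrical representation above.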
\section{Orbit space of $SU(2)$ Yang-Mills theory in the jet-bundle approach and Wilson loops.\label{2}} \setcounter{equation}{0} \subsection{Gauge orbits} Let us take for simplicity Euclidean space ${\bf R}^3$ as the base space of the trivial principal bundle $\pi:{\bf R}^3\times SU(2)\longrightarrow {\bf R}^3$. An analytic connection $A^i_a$ is an $su(2)$-valued one-form on ${\bf R}^3$, which can be represented by giving all its partial derivatives at a point $x_0$. Let us denote by $V^k$ the space with coordinates \begin{equation} (A_a^i,\ \partial_{b_1}A_a^i,\cdots,\partial_{b_k}\cdots\partial_{b_1}A_a^i). \label{oldc} \end{equation} Using a multi-index notation, denote coordinates on $V^k$ collectively by $\partial_B A^i_a$, where the order $|B|$ of the multi-index is at most $k$. The bundle $\pi:{\bf R}^3\times V^k \longrightarrow {\bf R}^3$ is called the $k$-th order jet-bundle and denoted by $J^k$. A point $\tau$ in $J^k$ has coordinates \begin{equation} \tau = (x,\ A_a^i,\ \partial_{b_1}A_a^i,\cdots,\partial_{b_k}\cdots\partial_{b_1}A_a^i). \end{equation} The spaces $V^k$, and the bundles $J^k$, for $k\in {\bf N}$, form a projective family. (For more details see for example \cite{Saunders}.) A local function $f$ of the connection is by definition a smooth, space dependent function, which depends only on a finite number of derivatives of the connection. Hence it belongs to $C^\infty (J^k)$ for some $k$; $f = f(\tau)$. Gauge transformations of the connection are characterized by giving a group element $g(x_0)$ at every point $x_0$. If $\tau_i$ are the Pauli matrices, then $T_j=-{i\over 2}\tau_j$ are generators of $SU(2)$, and gauge transformations act on $A=A^i_aT_idx^a$ as $A_g=g^{-1}Ag+g^{-1}dg$. If $g$ is of the form $g={\rm exp}(\epsilon^iT_i)$ with space dependent $\epsilon^i$, the corresponding infinitesimal gauge transformations are $\delta_\epsilon A^i_a=D_a \epsilon^i$.
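As a simple illustration (our own example, in the Abelian analogue where the covariant derivative reduces to an ordinary derivative), consider a $U(1)$ connection: then $\delta_\epsilon A_a=\partial_a\epsilon$, and the induced action on the jet coordinates is
$$
\delta_\epsilon\bigl(\partial_{b_1}\cdots\partial_{b_l}A_a\bigr)
=\partial_{b_1}\cdots\partial_{b_l}\partial_a\epsilon ,
$$
which is totally symmetric in the indices $(b_1\cdots b_l\, a)$. Hence the combinations built from $F_{ab}=\partial_{[a}A_{b]}$ and its derivatives are gauge invariant, while the totally symmetrized coordinates $\partial_{(b_l}\cdots\partial_{b_1}A_{a)}$ are shifted by an arbitrary symmetric tensor. This is precisely the split that the change of coordinates of the next subsection makes explicit, with the appropriate modifications, in the non-Abelian case.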
The total derivative $d_a$ of a function $f(\tau)$ is \begin{eqnarray} d_a f = \partial_a f + \partial_{Ba} A^i_c {\partial f \over\partial (\partial_B A^i_c)} \ . \label{p} \end{eqnarray} Under an infinitesimal gauge transformation $\delta_\epsilon A_a^i = D_a\epsilon^i$, a function $f(\tau)$ changes according to \begin{equation} \delta_\epsilon f = D_a\epsilon^i {\partial f\over \partial A_a^i} + \cdots + \partial_{c_1}\cdots\partial_{c_k}(D_a\epsilon^i) {\partial f\over \partial(\partial_{c_1}\cdots\partial_{c_k} A_a^i) }. \end{equation} Thus in the jet-bundle $J^k$, infinitesimal gauge transformations are generated by the family of vector fields \begin{eqnarray} \vec X_\epsilon=\partial_B(D_a\epsilon^i) {\partial \over\partial (\partial_B A^i_a)}\label{vf}, \end{eqnarray} which are tangent to the fibers $V^k$, and are parametrized by the functions $\epsilon^i$. The vector fields on $J^k$ form a module over the algebra of local functions $C^\infty(J^k)$, and a generating set for the above family is obtained from (\ref{vf}) by choosing the following values for the functions $\epsilon^i$ and their derivatives at the point $x_0$: \begin{eqnarray} \epsilon^i= \delta^i_j,\ \partial_a \epsilon^i=0,\cdots, \partial_{a_1\cdots a_{k+1}}\epsilon^i=0 & \leftrightarrow & \vec X_j\nonumber\\ \epsilon^i=0,\ \partial_a \epsilon^i= \delta^b_a\delta^i_j,\cdots,\partial_{a_1\cdots a_{k+1}} \epsilon^i=0 & \leftrightarrow & \vec X^b_j\nonumber\\ &\vdots & \nonumber\\ \epsilon^i=0,\ \partial_a\epsilon^i=0,\cdots, \partial_{a_1\cdots a_{k+1}} \epsilon^i=\delta^{(b_1\cdots b_{k+1})}_{(a_1 \cdots a_{k+1})} \delta^i_j & \leftrightarrow & \vec X^{(b_1\cdots b_{k+1})}_j.
\label{cv} \end{eqnarray} In other words, an element of the family of vector fields (\ref{vf}) is obtained by fixing a point \begin{equation} (x,\ \epsilon^i,\ \partial_{a_1}\epsilon^i,\cdots,\partial_{a_1}\cdots \partial_{a_{k+1}}\epsilon^i) \label{jk+1} \end{equation} in the jet-bundle $J^{\prime k+1}$ associated with the sections of the bundle $\pi:{\bf R}^3\times su(2)\longrightarrow {\bf R}^3$. The above generating set corresponds to fixing the points defined by the vectors tangent to each of the coordinate lines in the fibre $V^{\prime k+1}$ of $J^{\prime k+1}$. The vector fields \begin{eqnarray} \vec X^b_j,\cdots, \vec X^{(b_1\dots b_{k+1})}_j \label{vf1} \end{eqnarray} are in involution. The commutation rules for the entire set $\vec X_j$, $\vec X^b_j,\cdots$, $\vec X^{(b_1\cdots b_{k+1})}_j$ are summarized in the relation \begin{eqnarray} [\vec X_\epsilon,\vec X_\eta]=\partial_{B}[D_a(\epsilon^i_{\ jk}\epsilon^j\eta^k)] {\partial \over\partial (\partial_{B}A^i_a)}. \label{cr} \end{eqnarray} The involution property is deduced from this by choosing the canonical values (\ref{cv}) for $\epsilon^j$, $\eta^k$ and their derivatives. By the Frobenius theorem, the set of vector fields $\vec X_j$, $\vec X^b_j,\dots$, $\vec X^{(b_1\dots b_{k+1})}_j$ is integrable, and hence tangent to finite-dimensional integral submanifolds of the fibers $V^k$. These submanifolds are just the gauge orbits ${\cal G}^k$. The collection of maximal-dimensional gauge orbits defines a foliation of the fibers $V^k$; the gauge orbits are the leaves of this foliation. \subsection{Orbit space} Let us now investigate the linear independence of the vector fields (\ref{vf1}) in order to study the structure of the space of orbits $V^k/{\cal G}^k$.
Consider the following functions on $V^k$ : \begin{eqnarray} & & A_a^i,\ \partial_{(b_1}A^i_{a)},\cdots ,\ \partial_{(b_k}\cdots \partial_{b_1} A^i_{a)},\label{y5}\\ & & F^i_{b_1a},\ D_{(b_2}F^i_{b_1)a},\cdots ,\ D_{(b_k} \cdots D_{b_2} F^i_{b_1)a}.\label{x5} \end{eqnarray} These functions can be taken as new coordinates on $V^k$ \cite{DVHTV,Tor}. In the Abelian case, this change of coordinates corresponds exactly to separating the old coordinates $\partial_{b_k}\cdots\partial_{b_1}A_a$ (\ref{oldc}) into pieces symmetrized and anti-symmetrized on $a$ and $b_i$, for any $1\le i\le k$. In the non-Abelian case the basic idea is the same, although the details are different due to the commutator in $F_{ab}^i$. In the new coordinates, the family of vector fields (\ref{vf}) is \begin{eqnarray} \vec X_\epsilon &=& \sum_{l=0}^k\ [\partial_{(b_l\cdots b_1}D_{a)} \epsilon^i {\partial\over\partial (\partial_{(b_l\cdots b_1}A^i_{a)})} \nonumber\\ & &+\epsilon^i_{\ jk}D_{(b_{l}}\cdots D_{b_2} F^j_{b_1) a} \epsilon^k{\partial \over\partial (D_{(b_{l}}\cdots D_{b_2} F^i_{b_1)a})}].
\end{eqnarray} It is then straightforward to check that a generating set equivalent to (\ref{cv}) is obtained by making the following choice for the gauge parameters $\epsilon^i$ and their derivatives : \begin{eqnarray} \epsilon^i= \delta^i_j,\ D_a \epsilon^i=0,\ \dots,\ \partial_{(a_1\dots a_{k}}D_{a_{k+1})}\epsilon^i=0 & \leftrightarrow & \vec Y_j\nonumber\cr \epsilon^i=0,\ D_a \epsilon^i= \delta^b_a\delta^i_j,\ \dots,\ \partial_{(a_1\dots a_{k}}D_{a_{k+1})} \epsilon^i=0 & \leftrightarrow & {\partial\over\partial A^j_b}\nonumber\cr &\vdots &\nonumber\cr \epsilon^i=0,\ D_a \epsilon^i=0,\ \dots,\ \partial_{(a_1\dots a_{k}}D_{a_{k+1})} \epsilon^i=\delta^{(b_1\dots b_{k+1})}_{(a_1 \dots a_{k+1})} \delta^i_j & \leftrightarrow & {\partial\over\partial(\partial_{(b_1\dots b_{k}} A^j_{b_{k+1})})}, \end{eqnarray} where \begin{eqnarray} \vec Y_k=\sum_{l=0}^k\ [\epsilon^i_{\ jk}D_{(b_{l}}\cdots D_{b_2}F^j_{b_1) a}{\partial \over\partial (D_{(b_{l}}\cdots D_{b_2}F^i_{b_1)a})}]. \end{eqnarray} This new choice of values for the gauge parameters corresponds to the following situation. We have the Whitney sum bundle $J^k\oplus J^{\prime k+1}$, whose fiber consists of $V^k\oplus V^{\prime k+1}$. In this direct sum, the new choice of gauge parameters corresponds to a change of coordinates in the second factor, from the old coordinates (\ref{jk+1}) to the new ones \begin{equation} (x,\ \epsilon^i,\ D_{a_1}\epsilon^i,\cdots,\ \partial_{(a_1}\cdots \partial_{a_k}D_{a_{k+1})}\epsilon^i ). \end{equation} The associated generating set for the gauge orbits is obtained by fixing, in the second factor of the sum, those points which are determined by the vectors tangent to the new coordinate lines. The question of linear independence is now reduced to the investigation of the linear independence of the three vector fields $\vec Y_j$, since the other vector fields, being tangent to different coordinate lines, are obviously independent.
Alternatively, one sees that the coordinates (\ref{y5}) are coordinates purely along the gauge orbits, while the remaining coordinates transform under the adjoint action of the group. This is reminiscent of what happens if one considers holonomies around closed loops as the basic variables of the theory, which also transform under the adjoint action. This analogy will be made more precise in the last part of this section. As an example consider the space $V^1$. The coordinates on $V^1$ are \begin{equation} (A_a^i,\ \partial_{(b_1}A^i_{a)},\ F^i_{{b_1}a} ). \end{equation} In two spacetime dimensions the three vector fields $\vec Y_k$ are \begin{equation} \vec Y_k=\epsilon^i_{\ jk}F^j_{01} {\partial\over\partial F^i_{01}}. \end{equation} These are the field-space analogs of the usual angular momentum generators (for one particle) in three dimensions, $\epsilon_{ij}^{\ \ k}x^j\partial_k$. At the origin $\vec F_{01}=0$ all three vector fields vanish, while for $\vec F_{01}\ne 0$ there is one linear relation between them. Their orbits are the $2$-dimensional spheres centered at the origin in ${\bf R}^3$ with coordinates $F^i_{01}$. In more than two spacetime dimensions, or for $V^k$ with $k>1$, the three vector fields $\vec Y_k$ are of the form \begin{equation} \vec Y_k=\epsilon^i_{\ jk}x^j_S {\partial \over\partial x^i_S}, \end{equation} where we have used $x^i_S$ to denote the coordinates $D_{(b_{l}}\cdots D_{b_2}F^i_{b_1)a}$. The range $N$ of the index $S=1,2,\cdots,N$ depends on the spacetime dimension and $k$. Thus $\vec Y_k$ looks like the sum of the angular momentum generators of $N$ particles: $\vec Y_k = \vec Y_k^{(1)}+\cdots+ \vec Y_k^{(N)}$. In the generic situation, the $N$ particles will not all lie on a line through the origin, and therefore the orbits of the three $\vec Y_k$ will be three-dimensional.
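As a simple illustration of this adjoint structure (a check we include for clarity, not part of the original argument), the elementary monomials built with the invariant tensors are indeed annihilated by the $\vec Y_k$; for instance,

```latex
\vec Y_k\,\big(\delta_{ij}\,x^i_S x^j_{S'}\big)
 = \epsilon^i_{\ lk}\,x^l_S\,\delta_{ij}\,x^j_{S'}
 + \epsilon^j_{\ lk}\,x^l_{S'}\,\delta_{ij}\,x^i_S
 = \big(\epsilon_{jlk}+\epsilon_{ljk}\big)\,x^l_S x^j_{S'} = 0 ,
```

by the antisymmetry of $\epsilon_{ijk}$, and similarly for the cubic monomial $\epsilon_{ijl}\,x^i_S x^j_{S'} x^l_{S''}$.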
\subsection{Wilson loops} Any gauge invariant polynomial or formal power series on $J^k$ can be written as a power series in the $x^i_S$, where all the internal indices are tied up with the invariant tensors $\delta_{ij}$ and $\epsilon_{ijk}$. This follows from an analysis of the BRST cohomology (see for instance \cite{DVHTV}). On the other hand, it is well known that Wilson loops are non-local gauge invariant objects, and that their knowledge, for all loops, fixes the gauge potentials up to a gauge transformation \cite{Giles}. The object of the following is to show that analytic Wilson loops can be written as a formal power series of invariant monomials in the coordinates $x^i_S$. First of all, it is straightforward to see that holonomies can be described as a formal power series on $J^\infty$. Consider a path $\gamma$ in $\bf R^3$, with base point $x_0$. Divide $\gamma$ into $n+1$ segments given by displacement vectors $\Delta x^a_k$, $0\le k\le n$. Then the discretized holonomy is \begin{eqnarray} H^D_\gamma[A] &=& [1-A_a^i(x_0)T_i\Delta x_0^a]\ [1-A_a^i(x_1)T_i\Delta x_1^a] \cdots [1-A_a^i(x_n)T_i\Delta x_n^a] \nonumber \\ &=& \prod_{k=0}^n\ [1-A_a^i(x_k)T_i \Delta x_k^a], \label{disw} \end{eqnarray} with the continuum limit given by \begin{equation} H_\gamma[A] = \lim_{\Delta x^a\to 0 \atop n\to \infty} H^D_\gamma[A]. \end{equation} To rewrite $H^D_\gamma[A]$ as a polynomial on $J^n$, we have to express each $A_a^i(x_k)$ as a function of derivatives of $A_a^i$ evaluated at the base point $x_0$. The answer is simply \begin{eqnarray} A_a^i(x_1) &=& A_a^i(x_0) + (\partial_{b_1}A_a^i)(x_0)\Delta x_0^{b_1} \nonumber \\ A_a^i(x_2) &=& A_a^i(x_0) + (\partial_{b_1}A_a^i)(x_0)\ (\Delta x_0^{b_1}+ \Delta x_1^{b_1}) + (\partial_{b_2}\partial_{b_1}A_a^i)(x_0)\Delta x_0^{b_1}\Delta x_1^{b_2} \nonumber \\ {\rm etc.} && \end{eqnarray} Each term in (\ref{disw}) may now be rewritten in the new coordinates (\ref{y5})-(\ref{x5}).
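Before turning to the general argument, note that in the Abelian case, where path ordering is trivial, the result can be seen directly (a sketch under the Abelian simplification): for a closed loop $\partial\Sigma$ the ordinary Stokes theorem gives

```latex
H_{\partial\Sigma}[A]
 = \exp\Big(-\oint_{\partial\Sigma}A_a\,dx^a\Big)
 = \exp\Big(-{1\over 2}\int_\Sigma F_{ab}\,dx^a\wedge dx^b\Big) ,
```

so that, after Taylor expanding $F_{ab}$ around the base point, only the field strength and its derivatives, i.e.\ the coordinates (\ref{x5}), appear, while the coordinates (\ref{y5}) drop out.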
In the continuum limit, we get a formal power series on $J^\infty$. We now want to show that, for closed loops, the coordinates (\ref{y5}) do not appear and that, if one takes the trace to obtain a gauge invariant functional, the Wilson loop, the remaining coordinates (\ref{x5}) are contracted on their internal indices with invariant tensors. This can be deduced as follows. The gauge invariance of the Wilson loop implies the gauge invariance of its discretized version, which can be described, as seen above, as a polynomial on $J^n$. Since this polynomial is gauge invariant, it must be an invariant polynomial in the $x^i_S$ \cite{DVHTV}. Alternatively, we can give the following constructive proof. Let us adopt the conventions of the non-Abelian Stokes theorem \cite{Ar}, i.e., take a surface $\Sigma$ in ${\bf R}^3$ defined by analytic functions $x^a=f^a(s,t)$ with $0\leq s,t\leq 1$ and $f^{\prime a}=\partial f^a/ \partial s$, $\dot f^a=\partial f^a/\partial t$. Let $h(s,t)$ be the holonomy along the curve $f^a(s^\prime,t)$, $0\leq s^\prime\leq s$ at fixed $t$ and $g(s,t)$ the holonomy along the curve $f^a(s,t^\prime)$, $0\leq t^\prime\leq t$ at fixed $s$. Note that in our conventions, the holonomy around a path $\gamma$ is defined by $H_\gamma={\rm Pexp}(\int_\gamma -A^i_aT_idx^a)$. Following \cite{Ar}, we divide the square $[0,1]\times[0,1]$ into $nm$ rectangles with sides ${1\over n}, {1\over m}$. The holonomy around the boundary $\partial\Sigma$ is given by \begin{equation} H(\partial \Sigma)=\lim_{n,m\rightarrow\infty}H_{n,m}(\partial \Sigma)\label{f} \end{equation} with \begin{equation} H_{n,m}(\partial\Sigma)={\cal P}_{s,t}\prod_{l,k=0}^{n-1,m-1}Sp(l,k) \label{f1}.
\end{equation} In this equation ${\cal P}_{s,t}$ denotes the ordering which puts a matrix with the large value of the first argument to the right and, for identical first arguments it puts the one with the smaller second argument to the right, while $Sp(l,k)$ is the holonomy around the spoon loop with bowl based at $f^a(l,k)$ : \begin{eqnarray} Sp(l,k) &\equiv& h^{-1}({l\over n},0)\ g^{-1}({l\over n}, {k\over m})\ [I+{1\over nm}T_i F^i_{ab}f^{\prime a}\dot f^b(f({l\over n},{k\over m})) \nonumber \\ && + o({1\over nm})]\ g({l\over n},{k\over m})\ h({l\over n},0). \end{eqnarray} Following the reasoning in \cite{Loos}, we find that this holonomy reduces to \begin{eqnarray} Sp(l,k) &=& h^{-1}({l\over n},0)\ g^{-1}({l\over n},{k-1\over m})\ [I+{1\over nm}T_i f^{\prime a}\dot f^b(f({l\over n},{k\over m})) \nonumber \\ & & \{1+{1\over m}\dot f^c D_c\}\ F^i_{ab}(f({l\over n},{k-1\over m})) \nonumber \\ & & + o({1\over nm})]\ g({l\over n},{k-1\over m})\ h({l\over n},0) \nonumber \\ &=& h^{-1}({l\over n},0)\ [I+{1\over nm}T_i f^{\prime a}\dot f^b(f({l\over n},{k\over m})) \{ 1+{1\over m}\dot f^{c_k} D_{c_k}\} \nonumber \\ & & \cdots\{1+{1\over m}\dot f^{c_1} D_{c_1} \}F^i_{ab}(f({l\over n},0))+o({1\over nm})]\ h({l\over n},0) \nonumber \\ &=& [I+{1\over nm}T_i f^{\prime a}\dot f^b(f({l\over n},{k\over m})) \{ 1+{1\over n}f^{\prime b_l} D_{b_l}\}\cdots \{1+{1\over n} f^{\prime b_1} D_{b_1}\} \nonumber \\ & &\{ 1+{1\over m}\dot f^{c_k} D_{c_k}\}\cdots\{1+{1\over m} \dot f^{c_1} D_{c_1}\} F^i_{ab} (f(0,0))+o({1\over nm})] \nonumber \\ &=& [I+{1\over nm}T_i f^{\prime a}\dot f^b(f({l\over n},{k\over m})) \{{\rm exp}({l\over n} f^{\prime c} D_c)\} \nonumber \\ & & \{{\rm exp}({k\over m}\dot f^c D_c)\}F^i_{ab}(f(0,0)) +o({1\over nm})]. 
\end{eqnarray} Inserting this result into formula (\ref{f1}), and using the fact that ${\rm Tr} (T_{i_1}\dots T_{i_k})$ is an invariant tensor under the adjoint action of $su(2)$ (it is a linear combination of $\delta_{ij}$, $\epsilon_{ijk}$ and their contractions), we find indeed that the Wilson loop ${\rm Tr} H(\partial\Sigma)$ can be written as a power series depending on the field strengths and all their symmetrized covariant derivatives $D_{(b_k}\cdots D_{b_{2}}F^i_{b_1)a}$, $k=1,\cdots,\infty$, evaluated at the base point of the loop. There are contractions on the group indices with invariant $su(2)$ tensors, and on the spatial indices with coefficients characterizing the loop $\partial\Sigma$. It also follows from this derivation that, at every finite level of approximation, the Wilson loop can be described as a local function, depending on invariant monomials in the $x^i_S$, and it is only when one takes the continuum limit $n, m\rightarrow \infty$ that it becomes a function involving an infinite number of derivatives. \section{Construction of the geometrical representation of the constraint surface} \setcounter{equation}{0} So far we have considered the jet space associated with $SU(2)$ Yang-Mills theory and introduced an alternative set of coordinates on this space. This set of coordinates was defined in such a way as to isolate pure gauge directions. In this section we describe a similar change of coordinates on the jet space of Euclidean canonical general relativity in Ashtekar's variables. This is again designed to isolate pure gauge directions, for the gauge orbits generated by the kinematical constraints. This leads to the geometrical representation of the constraint surface. The general strategy and theorems on how to do this are explained in Ref. \cite{B}, and have already been used in the context of Lorentzian tetrad gravity in Ref. \cite{BBH3}. Similar ideas for gravity in Ashtekar's variables in the Lagrangian approach have been discussed in Ref.
\cite{Mor}. The field content of the theory is given by the $SU(2)$ connection $A^i_a$ and the dreibein $e^i_a$. In addition, there are the gauge parameters for the Gauss and diffeomorphism constraints $\eta^i$ and $\eta^a$. We must consider the jet-space of all these fields. Since we will not consider the gauge orbits generated by the Hamiltonian constraint, we do not concern ourselves with the associated gauge parameter. The smeared constraints \begin{eqnarray} \int d^3x\ (\tilde G_i\eta^i+\tilde H_a\eta^a)\label{sm} \end{eqnarray} generate the gauge transformations \begin{eqnarray} \gamma A^i_a &=& D_a\eta^i+ L_\eta A^i_a \nonumber \\ \gamma e^i_a &=& -\eta^k\delta_ke^i_a + L_\eta e^i_a\label{gtr} \end{eqnarray} where the Lie derivative $L_\eta$ is given by $L_\eta A^i_a=\eta^c\partial_c A^i_a + A^i_c \partial_a\eta^c $, and similarly for $e^i_a$. The gauge parameters $\eta^i,\eta^a$ are taken to be commuting in this section, but in the BRST context, they are replaced by anticommuting ``ghosts''. Following Sec. 3 of \cite{BBH3}, we consider the set of coordinates \begin{eqnarray} &&\partial_{(a_l}\cdots\partial_{a_1}A^i_{b)},\ \ \ \partial_{(a_l}\cdots\partial_{a_1}e^i_{b)}\label{x1}\\ &&D_{(i_{l}}\cdots D_{i_2}F^k_{\ i_1)j},\ \ \ D_{(i_{l}}\cdots D_{i_2}T^k_{\ i_1)j}\label{x2}\\ &&\hat C^i=\eta^i+\eta^a A^i_a,\ \ \ \hat\xi^i=e^i_a\eta^a,\label{x3}\\ &&\partial_{(a_l}\cdots\partial_{a_2}K^i_{a_1)},\ \ \ \partial_{(a_l}\cdots\partial_{a_2}L^i_{a_1)},\label{x4} \end{eqnarray} where $l=0,\dots,k$. The $K^i_a$ and $L^i_a$ are gauge parameters replacing the derivatives of $\eta^i$ and $\eta^a$, and are defined by the combinations appearing on the r.h.s.\ of the gauge transformations (\ref{gtr}): $K_a^i\equiv \gamma e_a^i$, $L_a^i\equiv \gamma A_a^i$.
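Note that the change from the parameters $(\eta^i,\eta^a)$ to the combinations (\ref{x3}) is invertible wherever the dreibein is: denoting the inverse triad by $e^a_i$, one has (an elementary consequence of (\ref{x3}), recorded here for later use)

```latex
\eta^a = e^a_i\,\hat\xi^i ,\qquad
\eta^i = \hat C^i - A^i_a\,e^a_j\,\hat\xi^j ,
```

so that $(\hat C^i,\hat\xi^i)$ may indeed be used as coordinates in place of $(\eta^i,\eta^a)$.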
By following the same reasoning as in the previous section, that is, rewriting the vector fields generating the gauge transformations in the new coordinate system, and then showing that a generating set is obtained by giving canonical values to the combinations of gauge parameters (\ref{x3}) and (\ref{x4}), we can see that the coordinates (\ref{x1}) are purely along the gauge orbits. Indeed, giving canonical values to the parameters (\ref{x4}), one finds that the generating set contains the vector fields tangent to these coordinate lines. An alternative way to see this is the following: If, for example, $f=f(\partial_{(a} e^i_{b)}, \partial_{(a} A^i_{b)})$, then \begin{equation} f + \gamma f = f + {\partial f \over \partial(\partial_{(a}e^i_{b)})} \partial_{(a}K_{b)}^i + {\partial f\over \partial(\partial_{(a}A^i_{b)}) } \partial_{(a}L_{b)}^i = f( \partial_{(a}e^i_{b)}+ \partial_{(a}K_{b)}^i, \partial_{(a}A^i_{b)} + \partial_{(a}L_{b)}^i), \end{equation} for infinitesimal transformations, which shows that the gauge transformations are just translations by (\ref{x4}) along the coordinate lines of $\partial_{(a} A^i_{b)}$ and $\partial_{(a} e^i_{b)}$. The coordinates (\ref{x2}) are therefore the only ones that are partly transversal to the gauge orbits. Denoting these collectively by ${\cal T}^r$, we see that they transform among themselves with the parameters (\ref{x3}) alone, according to \begin{eqnarray} \gamma {\cal T}^r=-\hat C^k\delta_k {\cal T}^r + \hat \xi^k D_k {\cal T}^r. \end{eqnarray} If one now expresses the sum of the smeared constraints (\ref{sm}) in the new coordinate system, one finds the expression \begin{eqnarray} \int d^3x \tilde e (F_i \hat \xi^i-T_i\hat C^i). \end{eqnarray} This implies the geometric representation (\ref{y1})-(\ref{y2}) of the constraint surface in terms of the ${\cal T}^r$ alone, in agreement with the general theorem of \cite{B}.
It is well known that first class constraints play a double role, the first as generators of gauge transformations, the second as the restrictions which give physically acceptable initial data. Having considered the first aspect, we now turn to the constraints (\ref{y1})-(\ref{y3}) as restrictions. To do this explicitly, it is necessary to further split the coordinates ${\cal T}^r$. We decompose the dual $\epsilon^{ijl}F^k_{\ ij}$ of $F^k_{\ ij}$ into a trace-free symmetric part, a trace, and an antisymmetric part, \begin{equation} F^k_{\ ij} = \epsilon_{ijl} F_T^{(kl)} + {1\over 6}\epsilon_{ij}^{\ \ k} F + {1\over 2} \epsilon_{ijl}\epsilon^{klm} F_m, \label{decom} \end{equation} where $F_T^{(kl)}={1\over 2}\epsilon^{ij(k}F_{ij}^{l)}-{1\over 6}\delta^{kl}F$. From this decomposition it is clear that the only non-gauge and non-constraint coordinates on the jet space are the first and second terms in (\ref{decom}), their corresponding symmetrized derivatives, together with analogous coordinates from the identical decomposition of $T^k_{\ ij}$. Note also that in this decomposition, the third term is just the diffeomorphism constraint. As we will see in the next section, these remaining coordinates turn out to be useful in classifying spatial-diffeomorphism and Gauss invariant observables. \section{Classification of local conservation laws \label{la}} \setcounter{equation}{0} Consider the four-dimensional generally covariant $SU(2)$ gauge field theory with action \begin{equation} S= \int {\rm Tr} (e \wedge e \wedge F). \label{hkact} \end{equation} This action is identical in form to that for general relativity except for the gauge group, which is $SU(2)$ instead of $SL(2,{\bf C})$. The Hamiltonian description has an identically vanishing first class Hamiltonian \cite{HK}, and two first class constraints, which are the Gauss and the diffeomorphism constraints (\ref{gauss})-(\ref{diffeo}), or equivalently, their covariant versions (\ref{y1})-(\ref{y2}).
The geometrical coordinates presented in the last section are therefore very useful in discussing the local conservation laws of this model. Local conserved currents $\tilde{j}^a$ are vector densities constructed from local functions of the fields and their derivatives which satisfy $d_a \tilde{j}^a = 0$ when the equations of motion hold, and where $d_a$ is defined as in (\ref{p}) above, but includes all the fields in the theory. The dual description is in terms of horizontal forms, which are defined to be forms on spacetime, or on space, with coefficients that are local functions. On spacetime, local functions also involve time derivatives, whereas on space, they involve only spatial derivatives. The horizontal exterior derivative is defined by $d\equiv dx^a d_a$, where the index $a$ goes from 0 to 3 for spacetime, or from 1 to 3 for space. Thus, in $n$ spacetime dimensions, we define the $(n-1)$-form $j_{ a_1\cdots a_{n-1} }:= \epsilon_{a_1\cdots a_n} \tilde{j}^{a_n}$, and the conservation equation becomes $dj=0$ when the equations of motion are satisfied. Let $\Sigma$ denote the surface defined by the equations of motion and their derivatives, and $\tilde \Sigma$ the surface defined by the constraints and their spatial derivatives. Then, for theories in $n$ spacetime dimensions, the vector space of local conservation laws is the space of equivalence classes of horizontal $(n-1)$-forms $j$ on spacetime which satisfy $dj= 0$ on $\Sigma$, where two such forms are considered equivalent if they differ on $\Sigma$ by the horizontal exterior derivative of a horizontal $(n-2)$-form $k$: $j\sim j+dk$ on $\Sigma$. In what follows, and in the Appendix, we consider only ``dynamical'' conservation laws and cohomology groups, and not ``topological'' ones. The latter come from the non-triviality of the triad manifold ($\tilde e\neq 0$). Following \cite{BBH3}, one can easily generalize the subsequent considerations to include these additional conservation laws and cohomology groups.
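The equivalence of the two descriptions is a one-line computation (recorded here for convenience; the overall sign depends on the orientation conventions): writing $d^nx=dx^1\wedge\cdots\wedge dx^n$,

```latex
dj = {1\over (n-1)!}\,d_c j_{a_1\cdots a_{n-1}}\,
     dx^c\wedge dx^{a_1}\wedge\cdots\wedge dx^{a_{n-1}}
   = (-1)^{n-1}\,(d_a\tilde{j}^a)\,d^nx ,
```

so $d_a\tilde{j}^a=0$ on $\Sigma$ holds if and only if $dj=0$ on $\Sigma$.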
One can prove (see Appendix) that the vector space of local conservation laws of a diffeomorphism invariant gauge field theory with vanishing Hamiltonian is isomorphic to the direct sum of the following two vector spaces: (i) the vector space of conservation laws in space associated with $\tilde{\Sigma}$, and (ii) the vector space of equivalence classes of horizontal $(n-1)$-forms in space which are invariant on $\tilde \Sigma$ under the transformations generated by the constraints, up to exact $(n-1)$-forms in space; the equivalence relation sets two such forms to be equal if they differ on $\tilde{\Sigma}$ by the horizontal exterior derivative $d$ of a horizontal $(n-2)$-form in space. Let us call this last space ${\cal O}$. In the present case, one can prove (Appendix) that the former space is trivial. To describe ${\cal O}$, we use the decomposition (\ref{decom}): The non-gauge coordinates (\ref{x2}) decompose into the sets \begin{eqnarray} && D_{(i_l}\cdots D_{i_1}\epsilon_{\ i)j}^kF,\ D_{(i_l}\cdots D_{i_1}\epsilon_{\ i)j}^kT\\ && D_{(i_l}\dots D_{i_1}\epsilon_{i)jl}\ F^{(kl)}_T, \ D_{(i_l}\dots D_{i_1}\epsilon_{i)jl}\ T^{(kl)}_T,\\ && D_{(i_l}\dots D_{i_1}\epsilon_{i)jl}\epsilon^{lkm}F_m, \ D_{(i_l}\dots D_{i_1}\epsilon_{i)jl}\epsilon^{lkm}T_m \label{4.3}. \end{eqnarray} The first two sets, denoted collectively by ${\cal T}^{\prime r}$, do not vanish on the constraint surface, while the last obviously does. Consider functions $f({\cal T}')$ satisfying $\delta_i f({\cal T}')=0$. Denote by $L({\cal T}')$ the equivalence classes of all such functions under the equivalence relation $f \sim f + D_iM^i$, where $M^i({\cal T}')$ transforms like a vector under $SU(2)$ transformations. With these notations, it follows from the Appendix that ${\cal O}$ is described by linear combinations of the Chern-Simons functional \begin{eqnarray} \int \ {\rm Tr}(A\,dA+{2\over 3}A^3) \end{eqnarray} and the functionals \begin{eqnarray} \int d^3x\ \tilde e L({\cal T}^\prime).
\end{eqnarray} \section{Models from the Hamiltonian constraint function} \setcounter{equation}{0} Consider the Hamiltonian constraint function in the three-dimensional action \begin{eqnarray} S[A_a^i,e^b_j]=\int d^3x\ \tilde eF.\label{top} \end{eqnarray} The corresponding field equations are \begin{eqnarray} {\delta S\over \delta A_a^i(x)}\equiv\tilde e\epsilon_i^{\ jk}(-T_ke^a_j-T^l_{jk}e^a_l)(x)=0\\ {\delta S\over \delta e^a_i(x)}\equiv\tilde e(-e^i_a F+2\epsilon_l^{\ ik}F^l_{ak})(x)=0. \end{eqnarray} Contracting both equations with $e^i_a$ yields $F=0=T$. Inserting the definitions of $F^i_{\ jk}$ and $T^i_{\ jk}$ in terms of their duals gives the final result $F^i_{jk}=0=T^i_{jk}$. This means that the field equations following from (\ref{top}) require all the local gauge invariant quantities that can be built out of the connection $A_a^i$ and the triad $e^i_a$ to vanish. It is in this sense that the action (\ref{top}) plays the same role for the theory based on the $A^i_a$ and $e^i_a$ with the covariant derivative $D_i$ as the Chern-Simons action plays for the theory based on $A^i_a$ alone with the covariant derivative $D_a$. The theory given by (\ref{top}) is in fact three-dimensional (Euclidean) Einstein gravity, better known in the form \begin{eqnarray} S[A_a^i,e_b^j]=\int {\rm Tr} (e \wedge F). \end{eqnarray} One can also write down a four-dimensional action involving a function similar to the Hamiltonian constraint. Consider the following action made from an $SU(2)$ connection $A_\mu^i$, dreibein $e^{\mu i}$, and a scalar density $\tilde{\Phi}$ : \begin{equation} S= \int d^4x\ \tilde{\Phi} e^{\mu i} e^{\nu j}F_{\mu\nu}^k \epsilon_{ijk}. \end{equation} The spacetime metric $g^{\mu\nu}=e^\mu_ie^{\nu i}$ is degenerate, with degeneracy direction given by the 1-form $V_\mu = \tilde{\Phi}\epsilon_{\mu\nu\alpha\beta}\epsilon_{ijk}e^{\nu i} e^{\alpha j}e^{\beta k}$; $V_\mu g^{\mu\nu}=0$. 
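The degeneracy of $g^{\mu\nu}$ can be checked in one line (we spell it out since it will not be used elsewhere): contracting $V_\mu$ with any component of the triad gives

```latex
V_\mu e^{\mu l}
 = \tilde{\Phi}\,\epsilon_{ijk}\,\epsilon_{\mu\nu\alpha\beta}\,
   e^{\mu l}e^{\nu i}e^{\alpha j}e^{\beta k} = 0 ,
```

because $\epsilon_{\mu\nu\alpha\beta}e^{\mu l}e^{\nu i}e^{\alpha j}e^{\beta k}$ is totally antisymmetric in the four internal indices $(l,i,j,k)$, which range over only three values. Hence $V_\mu g^{\mu\nu}=V_\mu e^{\mu}_i e^{\nu i}=0$.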
The field equations are \begin{eqnarray} e^{\mu i} e^{\nu j}F_{\mu\nu}^k \epsilon_{ijk} &=& 0, \nonumber \\ \epsilon_{ijk} D_\mu (\tilde{\Phi} e^{\mu i} e^{\nu j}) &=& 0, \nonumber \\ \tilde{\Phi} e^{\nu j}F_{\mu\nu}^k \epsilon_{ijk} &=& 0. \end{eqnarray} It is clear that the first equation, which is like a Hamiltonian constraint, is identically satisfied as a consequence of the third equation. We suppose that $\tilde \Phi$ is different from zero everywhere. The dynamics is therefore determined entirely by the latter two equations. Both these equations have spatial projections which are the constraints of the theory. The standard 3+1 decomposition of the action reveals that the constraints are in fact just the Gauss and spatial-diffeomorphism constraints (\ref{gauss})-(\ref{diffeo}). Indeed, the 3+1 form of the action is \begin{equation} S = \int dt \int d^3x\ [ \Pi^a_i \dot{A}_a^i + A_0^i D_a \Pi^{ai} + \tilde{\Phi} e^{ai}e^{bj} F_{ab}^k \epsilon_{ijk} ] \end{equation} where $\Pi^a_k = 2\tilde{\Phi} e^{0i}e^{aj}\epsilon_{ijk}$. We can rewrite $\tilde{\Phi} e^{ai}e^{bj}\epsilon_{ijk}$ entirely in terms of $\Pi^a_k$ and a Lagrange multiplier as follows: \begin{eqnarray} {1\over 2\tilde{\Phi}} (e^{bl} e^0_l) \Pi^a_k &=& e^{bl} e^0_l e^0_i e^a_j \epsilon^{ijk} \nonumber \\ &=& e^{bl} ( e^T_{il} + { \delta_{il}\over 3}\ e^{0m}e^0_m) e^a_j \epsilon^{ijk} \nonumber \\ &=& e^{bl} e^T_{il} e^a_j \epsilon^{ijk} + {1\over 3 } e^{0m}e^0_m e^b_i e^a_j \epsilon^{ijk}, \end{eqnarray} where $e^T_{il}$ is the symmetric trace-free part of $e^0_l e^0_i$. So finally \begin{equation} e^a_i e^b_j \epsilon^{ijk} = -{3 e^{bl} e^0_l \over 2\tilde{\Phi}e^{0m}e^0_m} \Pi^{ak} + {3 \over e^{0m}e^0_m} e^{bl} e^T_{il} e^a_j \epsilon^{ijk}.
\end{equation} Substituting this into the 3+1 action, the second piece contracted with $F_{ab}^k$ vanishes, and we get \begin{equation} S = \int dt \int d^3x\ [ \Pi^a_i \dot{A}_a^i + A_0^i D_a \Pi^{ai} - N^b \Pi^a_k F_{ab}^k ], \end{equation} with the shift function $N^a$ defined by \begin{equation} N^a = {3 e^{al} e^0_l \over 2 e^{0m}e^0_m }. \end{equation} The Hamiltonian constraint vanishes identically, a fact which is already clear from the first field equation. This theory is therefore locally equivalent to (\ref{hkact}). \section{Conclusion} At the price of not using the canonical momenta given by the density weighted cotriad alone, but working instead with both the triads and the cotriads, we have shown that there is a natural covariant derivative acting on $su(2)$ tensors in canonical Euclidean general relativity in Ashtekar's variables. The appealing feature of the associated tensor calculus is that the constraints become algebraic restrictions on the torsion and the curvature of this covariant derivative. This gives the canonical formulation a geometrical flavor analogous to the one of the original Lagrangian Einstein equations. Furthermore, all $SU(2)$ and diffeomorphism invariant integrated local quantities can be classified. Their integrands are shown to be given either by the Chern-Simons Lagrangian or by the invariant volume element times $SU(2)$ invariant functions of covariant derivatives of the curvature and the torsion. This classification is achieved through the computation of the BRST cohomology of the Husain-Kucha\v r model. 
These results help to address the following questions of \cite{HK}~: (i)~On the basis of our classification of local observables, one can look for a complete set of observables which are in involution to decide, on the one hand, if the model is integrable and, on the other hand, to try to quantize the model in a more traditional way, to be compared with the loop quantization of \cite{ALMMT}~; (ii)~A complete computation of the local BRST cohomology including the Hamiltonian constraint would clearly show the difference the inclusion of this constraint makes on the level of local integrated observables. In fact it is not really necessary to do the computation, since we can use the results of \cite{Torre}, which state (in the context of metric gravity) that there are no local gauge invariant observables. Consequently, the inclusion of the Hamiltonian constraint as a generator for gauge symmetries is responsible for removing all local observables. \section*{Acknowledgements} G. B. would like to thank Friedemann Brandt for useful discussions and the Fonds National Belge de la Recherche Scientifique for financial support. The work of V. H. was partly supported by NSF grant PHY 93-96246, the Eberly Research Funds of the Pennsylvania State University, and the Natural Science and Engineering Research Council of Canada. \section*{Appendix: Local BRST cohomology} \renewcommand{\theequation}{A.\arabic{equation}} \setcounter{equation}{0} In this Appendix we give the computation of the local BRST cohomology associated with the theory given by the action (\ref{hkact}). The analysis follows closely that of \cite{BBH3}, where the local BRST cohomology of the Einstein-Yang-Mills theory is analysed. Let us introduce, besides the diffeomorphism and the $SU(2)$ ghosts $\eta^a$ and $\eta^i$ of (\ref{gtr}), their canonically conjugate ghost momenta ${\cal P}_a$ and ${\cal P}_i$.
The BRST charge \cite{BFV} of the model (\ref{hkact}) is given by \begin{eqnarray} \Omega=\int d^3x\ ( \tilde G_i\eta^i+\tilde H_a\eta^a-{1\over 2} {\cal P}_k \epsilon^k_{\ ij}\eta^i\eta^j+{\cal P}_i\eta^a\partial_a\eta^i +{\cal P}_b\eta^a\partial_a\eta^b). \end{eqnarray} The nilpotent BRST transformations $s_\omega$ of the fields are generated by taking the Poisson bracket of the fields with $\Omega$. As in \cite{BBH3}, it can then be verified that a new coordinate system for the jet-bundles associated to the fields $A^i_a, e^i_a$, the ghosts $\eta^i,\eta^a$ and the ghost momenta ${\cal P}_i,{\cal P}_a$ is given by the coordinates (\ref{x1}) collectively denoted by $U^s$ and their BRST variations $V^t=s_\omega U^t$, the ${\cal T}^r$, the $\hat C^i, \hat \xi^i$ and the ${\cal T}^*_s\equiv D_{(i_l}\cdots D_{i_1)}\hat C^*_i, D_{(i_l}\cdots D_{i_1)}\hat \xi^*_i$, $l=0,1,\cdots$ with $\hat C^*_i={1\over \tilde e}{\cal P}_i$ and $\hat \xi^*_i={1\over \tilde e}e^a_i({\cal P}_a-A^k_a{\cal P}_k)$. The BRST transformations in the new coordinate system act by convention from the right and are given by \begin{eqnarray} s_\omega U^t&=&V^t,\ s_\omega V^t=0 \\ s_\omega{\cal T}^r&=&-\delta_k {\cal T}^r\hat C^k+D_k {\cal T}^r\hat \xi^k \\ s_\omega \hat C^i&=&{1\over 2}\epsilon^i_{jk}\hat C^j\hat C^k-\hat F^i, s_\omega \hat \xi^i=-\delta_k\hat \xi^i\hat C^k-\hat T^i \\ s_\omega D_{(i_l}\dots D_{i_1)}\hat C^*_i&=&D_{(i_l}\dots D_{i_1)}T_i -\delta_k D_{(i_l}\dots D_{i_1)}\hat C^*_i\hat C^k \nonumber \\ && + D_k D_{(i_l}\cdots D_{i_1)}\hat C^*_i \hat \xi^k,\label{b5} \\ s_\omega D_{(i_l}\cdots D_{i_1)}\hat \xi^*_i&=&-D_{(i_l}\dots D_{i_1)}F_i -\delta_k D_{(i_l}\cdots D_{i_1)}\hat \xi^*_i\hat C^k \nonumber \\ && + D_k D_{(i_l}\cdots D_{i_1)}\hat \xi^*_i \hat \xi^k,\label{b6} \end{eqnarray} with $\hat F^i=(1/2) F^i_{jk}\hat \xi^j\hat \xi^k$ and $\hat T^i=(1/2) T^i_{jk}\hat \xi^j\hat \xi^k$. 
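As a partial consistency check of these transformation rules (restricted to the pure Gauss sector, i.e.\ dropping the $\hat\xi$-dependent term $\hat F^i$; the intermediate signs depend on the right-action convention), nilpotency of $s_\omega$ on $\hat C^i$ reduces to the Jacobi identity for the structure constants:

```latex
s_\omega^2\,\hat C^i
 = \epsilon^i_{\ jk}\,\hat C^j\,(s_\omega\hat C^k)
 = {1\over 2}\,\epsilon^i_{\ jk}\,\epsilon^k_{\ lm}\,
   \hat C^j\hat C^l\hat C^m = 0 ,
```

since $\hat C^j\hat C^l\hat C^m$ is totally antisymmetric in $(j,l,m)$ and the corresponding antisymmetrization of $\epsilon^i_{\ jk}\epsilon^k_{\ lm}$ vanishes by the Jacobi identity.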
In order to compute the BRST cohomology, we follow closely the reasoning of \cite{BBH3}, section 7. Apart from global considerations, the generators $U^t,V^t$ belong to the contractible part of the algebra and can be forgotten in the rest of the considerations. For the remaining generators, one first decomposes the cocycles, the coboundaries and the BRST differential according to the number of $\hat \xi^i$'s. The BRST differential decomposes into $s_\omega=s_0+s_1+s_2$. The part $s_0$ can be written as $s_0=\delta+\gamma_{\cal G}$, where the Koszul-Tate differential $\delta$ is defined by the first lines of (\ref{b5}) and (\ref{b6}) alone, and $\gamma_{\cal G}$ is defined by $\gamma_{\cal G}Y=-\delta_k Y \hat C^k$ for $Y\in\{\hat\xi^i, {\cal T}^r, {\cal T}^*_s\}$ and $\gamma_{\cal G}\hat C^i={1\over 2}\epsilon^i_{\ jk}\hat C^j\hat C^k$. The part $s_1$ is given by $s_1 {\cal T}^r=D_k {\cal T}^r\hat \xi^k$, $s_1 {\cal T}^*_s=D_k {\cal T}^*_s\hat \xi^k$ and $s_1\hat \xi^i=-\hat T^i$. Finally, the part $s_2$ acts as $s_2 \hat C^i=-\hat F^i$; on all other generators the $s_i$'s, $i=0,1,2$, vanish. The anticommutation relations between the $s_i$'s are the same as those in \cite{BBH3}, where for the proof of Eq. (7.23) of \cite{BBH3}, one has to use the Jacobi identities (\ref{b1}) and (\ref{b2}). Lemma 1 of \cite{BBH3} then stays true with $\omega_i(\hat C)$ either given by a constant, or by $-{2\over 3}{\rm Str} \hat C^3$, where Str denotes the symmetrized trace of the matrices. We will now analyze equations (7.29), (7.30) of \cite{BBH3} directly and not follow entirely Appendix E of that paper, because our theory does not fulfill the normality assumption needed in that approach. By using the decomposition of the variables ${\cal T}^r$ defined in section \ref{la}, we can first assume, because of (7.30) of \cite{BBH3}, that the invariant $\alpha^i_l$ only depends on the ${\cal T}^{\prime r}$'s and the $\hat \xi$'s. 
Because $s_1$ commutes with the operator counting the generators (\ref{4.3}), we can then take the equalities (7.29), (7.30) of \cite{BBH3} to be strong equalities and assume that $\beta^i_{l-1}$ is invariant and also only depends on the ${\cal T}^{\prime r}$'s and the $\hat \xi$'s. The equations then become $s\alpha^i_l=0$ and $\alpha^i_l= s\beta^i_{l-1}$. From the descent equations argument of section 6 in \cite{BBH3}, one concludes that, if $l<3$, $\alpha^i_l=s\gamma^i$ for some $\gamma^i$ depending on ${\cal T}^r, \hat \xi, \hat C^i$. We can now use Appendix E of \cite{BBH3} starting from equation (E.4). It is at this stage, because we use Appendix C of \cite{BBH3}, that we have to assume that our local functions depend polynomially on the ${\cal T}^r$ variables. Because we are in three dimensions and there are no abelian factors, we conclude that equation (E.21) of \cite{BBH3} holds with $P(\hat F)=0=q^*=G^*$ and a dependence on ${\cal T}^\prime$ rather than ${\cal T}$. The same is true for equation (7.34) of \cite{BBH3}. Let $\hat \theta={1\over 3!}\epsilon_{ijk}\hat \xi^i\hat \xi^j \hat \xi^k$. Let $q=-{2\over 3}\ {\rm Str} \hat C^3 +{\rm Str} \hat C\hat F $. The final result is that the BRST cohomology $H^*(s_\omega)$ of the model is described by \begin{eqnarray} \hat \theta \left(L_1({\cal T}^\prime)+L_2({\cal T}^\prime)\ {\rm Str} \hat C^3\right)+rq+s_\omega\beta,\qquad r\in {\bf R}. \end{eqnarray} All the BRST cohomology is thus concentrated in ghost numbers $3$ and $6$. The local BRST cohomology in space $H^{*,*}(s_\omega|d)$ is obtained from $H^*(s_\omega)$ by replacing $\eta^a$ by $\eta^a+dx^a$ \cite{B,BBH3}. 
Hence these groups can be described by \begin{eqnarray} && H^{0,3}(s_\omega|d): d^3x\ \tilde e\ L_1({\cal T}^\prime)+r^\prime {\rm Str} (AF-{1\over 3}A^3), \\ && H^{3,3}(s_\omega|d): d^3x\ \tilde e\ L_2({\cal T}^\prime)\ {\rm Str}C^3, \\ && H^{1,2}(s_\omega|d): dx^a\wedge dx^b\,{\partial \over\partial \eta^a}{\partial \over\partial \eta^b}\ q, \\ && H^{2,1}(s_\omega|d): dx^a\,{\partial \over\partial \eta^a}\ q, \\ && H^{3,0}(s_\omega|d): q, \end{eqnarray} where the solutions involving $L_1({\cal T}^\prime),L_2({\cal T}^\prime)$ are trivial if they are given by $D_iM^i({\cal T}^\prime)$. Note that there is no cohomology in ghost number $-1$ and form degree 3, which, by using the isomorphism of this group with the non-trivial conservation laws of the constraint surface \cite{BBH}, excludes the latter. It then follows from the relation between the local Hamiltonian BRST cohomology groups and the local Lagrangian BRST cohomology groups \cite{JMP}, and from the fact that the first class Hamiltonian is zero, that the Lagrangian local BRST cohomology groups can be entirely described by the local Hamiltonian BRST cohomology groups; in particular, $H^{-1,4}(s|d)$ in spacetime, with $s$ the BRST differential associated to the minimal solution of the Batalin-Vilkovisky master equation \cite{BV} for the Husain-Kucha\v r model, is isomorphic to $H^{0,3}(s_\omega|d)$. Using again the reasoning of \cite{BBH}, this last space also describes the local conservation laws of the latter model.
cond-mat/9611158
\section{Introduction} \footnotetext{Dedicated to Prof.~W.~G\"otze on the occasion of his 60th birthday.} As is well known, superconductivity only exists at sufficiently low temperatures $T$ and small external magnetic fields $H$. The resulting $H-T$ boundary for the normal to superconducting transition is determined by the Ginzburg parameter \te{\kappa=\lambda/\xi}. For type II superconductors with \te{\kappa>1/\sqrt{2}} one obtains a lower (\te{H_{c1}}) and an upper (\te{H_{c2}}) critical field which -- for bulk samples -- are universal functions of temperature \cite{deGennes,Tinkham}. However, it was realized long ago by Saint-James and de Gennes \cite{SolGennes,SaintJamesDeGennes} that in the presence of a surface, these results are changed considerably. Regarding \te{H_{c1}}, there is a surface barrier for the entrance of the first flux quantum. Thus the field up to which the sample stays in the Meissner phase may be much larger than the thermodynamic \te{H_{c1}} \cite{SolGennes}. In the case of the upper critical field, superconductivity in a bounded sample persists even in the range \te{H_{c2}<H<H_{c3}=1.69H_{c2}} provided the external field is parallel to the surface \cite{SaintJamesDeGennes}. In this regime only a thin sheet at the sample boundary of the order of the zero field coherence length \te{\xi} is superconducting. Quite generally, in samples whose size is of the order of the $T=0$ coherence length, one expects that the $H-T$ phase boundary will strongly depend on the detailed form of the sample, reflecting the possible eigenmodes for the complex superconducting order parameter $\psi ({\bf r})$ in the given geometry. Experimentally this was recently studied by Moshchalkov {\it et al.}~\cite{Moshchalkov}, who investigated the temperature dependence of the upper critical field of small mesoscopic aluminium samples with typical sizes of $1\,\mu$m or less. The observed $H-T$ phase boundary turned out to exhibit very peculiar size effects. 
Specifically, it was found that Aharonov-Bohm like oscillations in the critical field were present even in a simply connected geometry, similar to the Little-Parks oscillations found long ago \cite{LittleParks} in thin-walled superconducting cylinders. It is the purpose of this work to investigate size effects in the critical fields of small superconductors on the basis of a Ginzburg-Landau (GL) theory. We will find that such a description apparently remains valid down to system sizes of only a few coherence lengths. By a careful solution of the boundary value problem for the linearized GL-equation near $T_c$, we are able to quantitatively describe the observed structure in the upper critical field of a small disc. In the limit where many flux quanta are threading the sample, the upper critical field is in fact a surface critical field, reproducing the standard \te{H_{c3}} value of a semi-infinite geometry. Moreover we determine a generalized lower critical field for mesoscopic discs and rings. An interesting point is that the eigenvalue spectrum, which determines the suppression of the critical temperature $T_c (H)$ as a function of magnetic field, is rather different from the case of electron levels in a quantum dot, because of the different boundary conditions. \section{Critical fields of superconducting discs and rings} Let us consider a small disc with radius $R$ and thickness $d$ in an external magnetic field \te{{\bf H}=H{\bf e}_z}, which is perpendicular to the sample surface at \te{z=\pm d/2}. Near the normal to superconducting transition the change in the free energy with respect to the normal state can be expressed in terms of a GL-functional of the complex superconducting order parameter \te{\psi ({\bf r})} \cite{deGennes,Tinkham} \begin{eqnarray} \label{frengl} F[\psi] \mbox{$\; = \;$} & F_{n} + \int \limits_V \left \{ \frac{\hbar^2}{4\mu} \left | \left (\nabla \mbox{$\; - \;$} \frac{2ie}{\hbar c}\mbox{$\mathbf{ A}$} \right ) 
\psi({\bf r}) \right |^2 \right .\\ &\left. \mbox{$\; + \;$} a|\psi({\bf r})|^2 \mbox{$\; + \;$} \frac{b}{2}|\psi({\bf r})|^4 \mbox{$\; + \;$} \frac{{\bf B}^2}{8\pi} \right \} \,d^3r \; . \nonumber\end{eqnarray} Here \te{{\bf B}=\nabla \times \mbox{$\mathbf{ A}$}} is the magnetic field in the sample with volume $V$, \te{a=a' (T-T_c)/T_c} and $b$ are the standard GL-coefficients and $\mu$ the effective electron mass \cite{deGennes,Tinkham}. In principle there are also surface contributions to the free energy functional (\ref{frengl}) which may be important for mesoscopic samples with a large surface to volume ratio. In our treatment below such contributions are neglected, which is justified only a posteriori. In the vicinity of the transition the order parameter and the screening currents are small. To lowest order we may therefore neglect the quartic term in $F[\psi]$ and replace the magnetic field by the external one. The most probable configuration of the order parameter which follows from the mean field equation \te{\delta F[\psi]/\delta \psi^* =0} is then determined by the eigenvalue problem \begin{equation} - \frac{\hbar^2}{4\mu} \left ( \nabla - \frac{2ie}{\hbar c}\mbox{$\mathbf{ A}$} \right )^2 \psi \mbox{$\; = \;$} \mbox{$\; - \;$} a \psi \label{glg} \label{glgav} \end{equation} for a particle with charge $2e$ in an external magnetic field (\te{e<0}). Assuming that the sample is embedded in an insulating medium, the relevant boundary condition is that of vanishing current normal to the sample surface \te{\partial V}. In covariant form the corresponding Neumann boundary condition is \cite{deGennes} \begin{equation} \left . {\bf n} \cdot \left (\nabla - \frac{2\, i \, e}{\hbar \, c} \mbox{$\mathbf{ A}$} \right ) \psi \right \arrowvert_{\partial V} = \; 0 , \label{randbed} \end{equation} where \te{\bf n} is a unit vector normal to the sample surface. 
In order to determine the $H-T$ phase boundary, we must find the lowest eigenvalue \te{E_0 (H)} associated with a nonzero order parameter \te{\mbox{$\psi (\mathbf{ r})$} \neq 0}. From \te{E_0 (H)} the transition from the normal to the superconducting state is determined by \e{aeh}{-a= a' \frac{T_c - T_c (H)}{T_c}= E_0 (H).} Here \te{T_c} is the (mean field) transition temperature of the infinite system with zero field. Since \te{E_0 (H) \geq E_0 (0)} quite generally \cite{Simon}, the transition temperature at finite field is always smaller than or equal to that at \te{H=0}. In order to treat the case of discs or rings, it is convenient to introduce cylindrical coordinates \te{(\rho, \phi, z)}. In the appropriate gauge \te{\mbox{$\mathbf{ A}$} = H \rho \,{\bf e}_{\phi}/2} the solution of the Schr\"odinger equation (\ref{glg}) can then be written as \e{ansatzpsisymm}{\psi = {\cal R} (\rho) \; e^{i m \phi} \; e^{i k_{\nu} z} .} Here \te{m \in \mbox{$Z\!\!\! Z$}} is the angular momentum quantum number and \te{k_{\nu}= \nu \pi/d} with \te{\nu \in \mbox{$I\!\!N_0$}} the discrete wavevector for motion in the $z$-direction. Since the lowest eigenvalue always has \te{\nu=0}, we will omit the $z$-dependence and the associated quantum number $\nu$ in the following. Introducing a dimensionless variable \te{\zeta= \rho^2/2l^2_H} with \te{l_H= (\hbar c / 2 |e| H)^{1/2}} the magnetic length for charge \te{2e}, the differential equation for \te{{\cal R}(\zeta)} can be reduced to Kummer's confluent hypergeometric equation \cite{Abramowitz} \e{dglxi}{\zeta \frac{\partial^2 w}{\partial \zeta^2} \mbox{$\; + \;$} \left(|m|+1 \mbox{$\; - \;$} \zeta \right ) \frac{\partial w}{\partial \zeta} \mbox{$\; - \;$} \alpha \; w \; \mbox{$\; = \;$} \; 0 } by the substitution \te{{\cal R} (\zeta ) \mbox{$\; = \;$} e^{- \frac{\zeta}{2}} \; \zeta^{\frac{|m|}{2}} \; w (\zeta)}. 
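As a quick numerical cross-check (ours, not part of the paper; it assumes SciPy is available), one can verify that the confluent hypergeometric function indeed solves Kummer's equation above, using the standard derivative identity $\frac{d}{d\zeta}\,{}_1F_1(a,b,\zeta)=\frac{a}{b}\,{}_1F_1(a+1,b+1,\zeta)$:

```python
from scipy.special import hyp1f1

# Numerical cross-check (ours; assumes SciPy): w = 1F1(alpha, |m|+1, zeta)
# solves Kummer's equation  zeta*w'' + (|m|+1 - zeta)*w' - alpha*w = 0.
# First and second derivatives follow from the identity
# d/dz 1F1(a,b,z) = (a/b) 1F1(a+1,b+1,z).
def kummer_residual(alpha, m_abs, zeta):
    b = m_abs + 1
    w = hyp1f1(alpha, b, zeta)
    dw = (alpha / b) * hyp1f1(alpha + 1, b + 1, zeta)
    d2w = alpha * (alpha + 1) / (b * (b + 1)) * hyp1f1(alpha + 2, b + 2, zeta)
    return zeta * d2w + (b - zeta) * dw - alpha * w

# The residual vanishes to machine precision for arbitrary parameters.
for args in [(0.3, 0, 1.5), (-1.2, 2, 4.0), (0.45, 1, 0.7)]:
    assert abs(kummer_residual(*args)) < 1e-10
```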
The dimensionless parameter $\alpha$ is directly related to the eigenvalue \te{E(H)} by \e{alvh}{\alpha = - \frac{E(H)}{\hbar \omega_c} \mbox{$\; + \;$} \frac{1}{2} (|m|+m+1)} with \te{\omega_c= |e|H/\mu c} the standard cyclotron frequency. Using \te{-E_0 (H)=a} and \te{a'=\hbar^2/4\mu \xi^2(0)} with \te{\xi (0)} the zero temperature GL-coherence length, the maximum value of $\alpha$ -- which always has \te{m \leq 0} -- determines the magnetic field shift of the transition temperature by \e{dt}{\frac{T_c - T_c (H)}{T_c} = \left [ \frac{1}{2} - \alpha_{max} (H) \right ]\frac{4 \tilde{\Phi}}{\Phi_0}} with \te{\tilde{\Phi}= \pi \xi^2(0)H} and \te{\Phi_0= hc/2|e|} the superconducting flux quantum. In an infinite sample the ground state is the lowest Landau level with \te{E_0^{\infty}= \hbar \omega_c /2}, i.e.~\te{\alpha^{\infty}=0}. The phase boundary is then given by \e{dtbulk}{\frac{T_c - T_c (H)}{T_c}= 2 \frac{\tilde{\Phi}}{\Phi_0},} which is equivalent to the standard relation \te{H=H_{c2} (T)= \Phi_0/2 \pi \xi^2 (T)} \cite{deGennes,Tinkham}. For the finite system, the spectrum of eigenvalues follows from the Neumann boundary condition (\ref{randbed}) at the inner \te{(R_i)} and outer \te{(R)} radius of the ring. The general solution of (\ref{dglxi}) is a linear combination of Kummer functions \cite{Abramowitz}. In the case of a disc geometry only \e{soldisc}{w_1 (\zeta) = \Phi (\alpha,|m|+1,\zeta)= {}_1F_1(\alpha,|m|+1,\zeta)} is allowed since the second linearly independent solution diverges at the origin. Using standard recursion relations for \te{{}_1F_1}, it is straightforward to show that the boundary condition at \te{\rho=R}, which simply reads \te{d{\cal R}/d\rho=0}, since \te{\mbox{$\mathbf{ A}$} \cdot {\bf n}=0}, leads to \ea{randbednum}{\nonumber(|m|+1-\alpha)\, \Phi({\alpha-1},{|m|+1},{\zeta_R})& }{randbedz}{\mbox{$\; - \;$} \Phi ({\alpha},{|m|+1},{\zeta_R}) \mbox{$\; + \;$} \alpha \, \Phi({\alpha+1},{|m|+1},{\zeta_R})&\, = 0 } with \te{\zeta_R=R^2/2l^2_H}. 
For each given \te{m} equation (\ref{randbednum}) determines a discrete series of eigenvalues \te{\alpha_{n m} (H)}, \te{n\in \mbox{$I\!\!N_0$}}, which are \underline{de}creasing with increasing \te{n}. They obey \te{\alpha_{nm}\leq 1/2} and are continuous in \te{\zeta_R= \Phi / \Phi_0} \cite{Benoist} which is just the external flux \te{\Phi= \pi R^2 H} through the area of the disc in units of the flux quantum. Analytical results for the spectrum can be obtained in the low field limit \te{\Phi \rightarrow 0}. In this limit it is straightforward to treat the general case of a ring with \te{\sigma= R_i/R \leq 1}. Standard second order perturbation theory in the magnetic field then leads to a shift in the transition temperature which is given by \e{dtzero}{\frac{T_c - T_c (H)}{T_c}= \frac{1}{2} (1 + \sigma^2) \frac{\tilde{\Phi} \Phi}{\Phi_0^2} + ...\; .} The corrections to this result are of order \te{\Phi^4}, since the ground state energy is even in \te{\Phi}. For a very thin ring with \te{\sigma \rightarrow 1^-} this agrees with the low field limit of the Little-Parks result \cite{LittleParks} \e{lipa}{\frac{T_c - T_c (H)}{T_c}= \frac{\xi^2(0)}{R^2} \min_{m \in \mbox{$Z\!\!\! Z$}} \left | m - \frac{\Phi}{\Phi_0} \right |^2,} as expected. For general magnetic fields the phase boundary can only be obtained numerically. To this end we have directly solved the transcendental equation (\ref{randbednum}) which allows us to determine the spectrum without any discretization error. The energy levels are thus obtained with arbitrary accuracy, in contrast to previous work by Saint-James \cite{SaintJames} or by Nakamura and Thomas \cite{Nakamura} who consider Dirichlet boundary conditions. The results are shown in Fig.~1, where the dimensionless eigenvalues \te{\frac{4 \mu R^2}{\hbar^2} E_{n m}} for \te{n=0} and \te{m=2,1,0,-1,...,-10} are plotted as functions of \te{\Phi /\Phi_0}. 
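As an illustration (a sketch, not the authors' code; it assumes SciPy is available), the transcendental equation (\ref{randbednum}) can be solved with off-the-shelf routines: for each $m$ one locates the largest root $\alpha_{0m}\leq 1/2$, and the first cusp $\Phi^{(1)}$ is the flux at which the $m=0$ and $m=-1$ levels cross (cf.~table 1):

```python
# Numerical sketch (ours, not the authors' code; assumes SciPy) of how
# the spectrum follows from the boundary condition: for each m <= 0 the
# largest root alpha_{0m} <= 1/2 is found by a scan plus bisection, and
# the first cusp Phi^(1) is where the m = 0 and m = -1 levels cross.
import numpy as np
from scipy.optimize import brentq
from scipy.special import hyp1f1


def boundary_fn(alpha, m_abs, z_R):
    """Left-hand side of the Neumann boundary condition at zeta_R = z_R."""
    b = m_abs + 1
    return ((b - alpha) * hyp1f1(alpha - 1, b, z_R)
            - hyp1f1(alpha, b, z_R)
            + alpha * hyp1f1(alpha + 1, b, z_R))


def alpha_max(m_abs, z_R):
    """Largest root alpha_{0m} of the boundary condition (alpha <= 1/2)."""
    grid = np.linspace(0.5, -4.0, 600)
    vals = [boundary_fn(a, m_abs, z_R) for a in grid]
    for i in range(len(grid) - 1):
        if vals[i] * vals[i + 1] < 0:
            return brentq(boundary_fn, grid[i + 1], grid[i],
                          args=(m_abs, z_R))
    raise RuntimeError("no root found")


# Flux (in units of Phi_0) at which the m = 0 and m = -1 levels cross:
phi1 = brentq(lambda z: alpha_max(0, z) - alpha_max(1, z), 1.0, 3.0)
print(phi1)  # close to 1.9238, cf. Phi^(1) in table 1
```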
\bi{tb}{1.1\linewidth}{Dimensionless energy eigenvalues of a disc as a function of the external magnetic flux.} Evidently the lowest eigenvalue exhibits an oscillatory behaviour with cusps at values \te{\Phi^{(j)}, \, j=1,2,...} where the magnetic quantum number of the lowest eigenstate jumps by one unit. The dimensionless distances between successive cusps \e{defdelta}{\Delta_j= \frac{\Phi^{(j)} \mbox{$\; - \;$} \Phi^{(j-1)}}{\Phi_0}\qquad (\Phi^{(0)}=0)} are given in table 1 with an accuracy corresponding to the last given digit. \vspace{0.2cm} \begin{tabular}{|llc|cll|} \hline $\Phi^{(1)}\phantom{1}=$ & $\phantom{1}1.923765$&$\Phi_0$ & $\qquad$ & $\Delta_1=$ & $1.923765$ \\ \hline $\Phi^{(2)}\phantom{1}=$ & $\phantom{1}3.392344$&$\Phi_0$ & $\qquad$ & $\Delta_2=$ & $1.468579$ \\ \hline $\Phi^{(3)}\phantom{1}=$ & $\phantom{1}4.747920$&$\Phi_0$ & $\qquad$ & $\Delta_3=$ & $1.355676$ \\ \hline $\Phi^{(4)}\phantom{1}=$ & $\phantom{1}6.045882$&$\Phi_0$ & $\qquad$ & $\Delta_4=$ & $1.297962$ \\ \hline $\Phi^{(5)}\phantom{1}=$ & $\phantom{1}7.3068$&$\Phi_0$ & $\qquad$ & $\Delta_5=$ & $1.2609$ \\ \hline $\Phi^{(6)}\phantom{1}=$ & $\phantom{1}8.5423$&$\Phi_0$ & $\qquad$ & $\Delta_6=$ & $1.2355$ \\ \hline $\Phi^{(7)}\phantom{1}=$ & $\phantom{1}9.7584$&$\Phi_0$ & $\qquad$ & $\Delta_7=$ & $1.2161$ \\ \hline $\Phi^{(8)}\phantom{1}=$ & $10.9591$&$\Phi_0$ & $\qquad$ & $\Delta_8=$ & $1.2007$ \\ \hline $\Phi^{(9)}\phantom{1}=$ & $12.1477$&$\Phi_0$ & $\qquad$ & $\Delta_9=$ & $1.1886$ \\ \hline $\Phi^{(10)}=$ & $13.3255$&$\Phi_0$ & $\qquad$ & $\Delta_{10}=$ & $1.1778$ \\ \hline \end{tabular} TABLE 1 \vspace{0.2cm} Experimentally the oscillatory behaviour of the ground state energy is directly reflected in the $H-T$ phase boundary. For the case of a disc discussed here, this was actually first observed by Buisson {\it et al.}~\cite{Buisson}. 
In their experiment, however, the presence of two gold contacts led to a boundary condition which is different from (\ref{randbed}) over part of the sample boundary. While the oscillations were still present, a detailed comparison with theory was difficult (for instance the first cusp was observed at \te{\Phi\approx 2.5 \Phi_0} compared to \te{\Phi = 1.924 \Phi_0} in the pure Neumann case). The more recent experiments of Moshchalkov {\it et al.}~\cite{Moshchalkov}, however, are in very good agreement with the theoretical predictions. This may be seen from a comparison with the measured deviation of the temperature shift \te{\Delta T_c=T_c \mbox{$\; - \;$} T_c (H)} from the average linear behaviour which is shown in Fig.~2. \bi{mv}{\linewidth}{Flux dependence of the oscillatory part of the temperature shift. The experimental data (squares) is taken from \protect\cite{Moshchalkov}, the solid line is the theoretical prediction based on equation (\protect\ref{dt}).} Here we have used the experimental value \te{\xi(0)=1 \mu}, and a disc area which is only 2.7 \% smaller than the area of the almost rectangular sample used in the experiment. It is important to note that the periods \te{\Delta_j} decrease monotonically from \te{\Delta_1=1.924} to \te{\Delta_{\infty}=1} (see table 1 and below) in contrast to an anomalous first period \te{\Delta_1\approx 1.8} and constant successive ones \te{\Delta_2\approx\Delta_3 \approx\Delta_4 \approx 1.3} which were quoted by Moshchalkov {\it et al.}~\cite{Moshchalkov}. The field at which the ground state changes from \te{m=0} to \te{m=-1} allows us to extract a lower critical field \e{hcedisc}{H_{c1}^{\mbox{disc}} = 1.92376 \frac{\Phi_0}{\pi R^2}} for a mesoscopic system with size \te{R} of order \te{\xi (0)}. Here \te{H_{c1}} is defined via the condition that for \te{H<H_{c1}} the sample tries to screen out the applied flux, whereas for \te{H>H_{c1}} the free energy is minimized by accepting one flux quantum. 
It is interesting to compare this with Fetter's theory of flux penetration in a superconducting disc \cite{Fetter}, which is based on calculating the self energy of a vortex. In the limit where the disc radius \te{R} is much smaller than the effective thin film penetration depth \te{\lambda_{2d}= \lambda^2/d}, it turns out that it is energetically favourable for a vortex to enter if \te{H>H_{c1}} with \cite{Fetter} \e{hcefetter}{H_{c1} = \frac{\Phi_0}{\pi R^2} \ln \frac{R}{r_c} \qquad \lambda_{2d} \gg R \gg r_c.} Here \te{r_c\approx \xi (0)} is the core radius, which is always assumed to be much smaller than $R$. Obviously for samples whose size is of the order of the coherence length \te{\xi(0)}, the expression (\ref{hcefetter}) is no longer applicable. In this limit the approximation that the order parameter is constant beyond \te{r_c} becomes invalid. As found above, the lower critical field is then replaced by our result (\ref{hcedisc}), with a crossover at about \te{R \approx 7 \xi (0) }. Here it is important that for \te{ R \approx \xi (0)}, linearized GL-theory is sufficient to calculate \te{H_{c1}}, because it is the sample boundary which limits the magnitude of the order parameter instead of the quartic term as usual. Finally, consider a ring with inner radius \te{R_i \gg r_c}. Then the lower critical field is simply determined by the condition that half a flux quantum is applied, i.e. \e{hcering}{H_{c1}^{\mbox{ring}} = \frac{1}{2} \frac{\Phi_0}{\pi R^2}.} Indeed this follows from the quantization of the fluxoid \cite{Tinkham}, and is valid irrespective of the thickness of the ring. Comparing (\ref{hcering}) with the result (\ref{hcedisc}) for a disc, we find that \te{H_{c1}} in the latter case is almost four times larger. Qualitatively this is due to the additional condensation energy in the center of the disc which is required for a vortex to enter. As a second point let us discuss the behaviour at \te{\Phi \gg \Phi_0}, where many flux quanta have entered. 
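The quoted crossover scale follows from a one-line arithmetic check (ours, not from the paper): equating (\ref{hcefetter}) with (\ref{hcedisc}) gives $\ln(R/r_c)=1.92376$, i.e.

```python
import math

# Arithmetic check (ours): equating Fetter's H_c1 = (Phi_0/pi R^2) ln(R/r_c)
# with the mesoscopic result 1.92376 Phi_0/(pi R^2) gives ln(R/r_c) = 1.92376.
R_over_rc = math.exp(1.92376)
print(round(R_over_rc, 2))  # about 6.85, consistent with R ~ 7 xi(0)
```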
In this limit the ground state has angular momentum \te{|m|\gg 1}. The associated eigenfunction is thus concentrated near the disc boundary. It is then obvious that our upper critical field for \te{\Phi \gg \Phi_0} is in fact a surface critical field. If this is correct, it should asymptotically approach the value obtained by Saint-James and de Gennes \cite{SaintJamesDeGennes} for a surface with a radius of curvature large compared to the coherence length. This can be verified by considering the special values \te{\Phi_m, \,m=1,2,...} in Fig.~1, where the tangent to \te{E_{0 |m|} (\Phi)} goes through the origin (i.e.~we are considering successive approximations to the envelope). These values are given in table 2 together with the corresponding values of \te{\alpha_{max}}. \vspace{0.2cm} \begin{tabular}{|r|r|r|r|} \hline & & & \\ $m$ & $\Phi / \Phi_0$ & $\alpha_{max}$ & $\frac{T_c - T_c(H)}{T_c}\, / \,\frac{\tilde{\Phi}}{\Phi_0}$ \\ \hline 1 & 2.44 & 0.28761 & 0.849 \\ 2 & 3.92 & 0.26561 & 0.937 \\ 3 & 5.28 & 0.25514 & 0.979 \\ 4 & 6.56 & 0.24872 & 1.005 \\ 5 & 7.82 & 0.24426 & 1.023 \\ 6 & 9.09 & 0.24091 & 1.036 \\ 7 & 10.28 & 0.23839 & 1.046 \\ 8 & 11.46 & 0.23638 & 1.054 \\ 9 & 12.68 & 0.23449 & 1.062 \\ 10 & 13.86 & 0.23300 & 1.067 \\ 100 & 110.23 & 0.21395 & 1.144 \\ 200 & 214.72 & 0.21130 & 1.154 \\ 1000 & 1033.76 & 0.20778 & 1.169 \\ 10000 & 10109.33 & 0.20583 & 1.177 \\ \hline \end{tabular} TABLE 2 \vspace{0.2cm} It is obvious that \te{\alpha_{max}} converges to a limiting value 0.2058... . Using (\ref{dt}) the associated transition temperature is then given by \e{dtinf}{\frac{T_c - T_c (H)}{T_c}=1.177\frac{\tilde{\Phi}}{\Phi_0} .} As expected this is completely equivalent to the well known result \te{H=H_{c3} (T) = 1.695 H_{c2} (T)} for the surface critical field \cite{SaintJamesDeGennes}. 
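The equivalence can be checked by simple arithmetic (our check, using only numbers quoted above): inserting the limiting value $\alpha_{max}=0.20583$ from table 2 into (\ref{dt}) reproduces both the slope in (\ref{dtinf}) and the standard surface-field ratio:

```python
# Arithmetic consistency check using only numbers quoted in the text
# (our check, not from the paper).
alpha_lim = 0.20583                  # limiting alpha_max from table 2
slope = 4 * (0.5 - alpha_lim)        # prefactor of Phi~/Phi_0 in eq. (dt)
ratio = 2.0 / slope                  # bulk slope in eq. (dtbulk) is 2
assert abs(slope - 1.177) < 1e-3     # matches eq. (dtinf)
assert abs(ratio - 1.695) < 1e-2     # matches H_c3 = 1.695 H_c2
```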
The coefficient \te{1 - 2 \alpha_{max}^m < 1} is in fact just the ratio between the ground state energy in the disc and the energy \te{\hbar \omega_c/2} of the lowest Landau level in an infinite sample. Edge states centered near the sample boundary thus have a \underline{lower} energy than bulk levels. Note that this behaviour is just the opposite of the case with Dirichlet boundary conditions (relevant e.g.~for edge states in the Quantum Hall Effect), where edge states are \underline{above} the corresponding bulk Landau levels \cite{Buisson}. Finally let us discuss the behaviour of the periods \te{\Delta_j} of the ground state oscillations for large flux \te{\Phi \gg \Phi_0}. Due to the factor \te{\zeta^{|m|/2}} the order parameter for increasing magnetic quantum number \te{|m|\gg 1} is more and more concentrated near the sample boundary, but is practically zero in the interior of the sample. The simply connected disc thus effectively behaves like a ring with a normal core of size \te{R-c_1 l_H}, where \te{c_1} is a constant \cite{Buisson}. The periodicity observed in \te{E_0 (H)} is then simply determined by the condition that one additional flux quantum enters the area of the normal core, i.e. \e{flcond}{\Delta_j (\Phi\gg\Phi_0) = 1 \mbox{$\; + \;$} 2 c_1 \frac{l_H}{R} \rightarrow 1.} In fact this field dependence was observed in the experiments by Buisson et al.~already for \te{\Phi/\Phi_0>5} \cite{Buisson}. In the asymptotic limit, which is however only reached for \te{\Phi/\Phi_0>10^3} (see table 2), the coefficient \te{c_1} can be obtained analytically as \te{2c_1=\sqrt{0.59}\approx 0.76} \cite{SaintJames}. \section{Discussion} Using linearized GL-theory we have calculated the nucleation field of a small superconducting disc with a radius which is of the order of the coherence length \te{\xi (0)}. 
The good agreement with the experimentally observed $H-T$ phase boundary suggests that the macroscopic GL-description remains valid in this regime, which is not obvious a priori. A surprising feature of our results is that Aharonov-Bohm like oscillations are present even in a simply connected sample. The physical origin of this effect is that already the entrance of a single flux quantum effectively makes the sample a multiply connected one. In the limit \te{\Phi \gg \Phi_0} the disc behaves like a thin-walled ring, leading to oscillations in \te{T_c (H)} which are completely equivalent to the well known Little-Parks experiment. It is interesting to note that these effects depend crucially on the Neumann boundary conditions. In fact the equivalent eigenvalue spectrum with Dirichlet boundary conditions, which was studied by Nakamura and Thomas \cite{Nakamura}, does not exhibit any oscillations in the ground state energy \te{E_0 (\Phi)}. It is an interesting future problem to investigate similar effects in the fluctuation diamagnetism \cite{Schmidt} or to extend the calculations above to more complicated geometries. This would make it possible to study eigenvalue spectra for systems with classical chaotic dynamics \cite{Ullmo,Oppen} without the complications due to electron-electron interactions which are unavoidable in non-superconducting mesoscopic systems.
1712.07080
\section{Introduction} Prototype quantum processors based on superconducting qubits \cite{blais_quantum-information_2007,devoret_superconducting_2013}, and in particular transmon-related qubits, have become a leading platform for experimentally testing ideas in quantum information processing. The creation of a 10-qubit entangled state with transmon qubits has recently been demonstrated \cite{song_10-qubit_2017}; this followed earlier demonstrations of 3-qubit and 5-qubit entanglement \cite{dicarlo_preparation_2010,neeley_generation_2010,barends_superconducting_2014} with superconducting qubits. There are near-term plans to construct machines capable of entangling up to 49 qubits \cite{martinis_quantum_2017}, and it is expected that a processor with 49 qubits on a 2D lattice may, if the gate fidelities and qubit decoherence meet certain conditions, result in a demonstration of quantum computational supremacy \cite{boixo_characterizing_2016,harrow_quantum_2017}. Studying the coherence of many-qubit entangled states can provide insight into the nature of the noise to which the qubits are exposed. The properties of the noise in a quantum computer can have profound implications for the error correction required to operate the computer fault-tolerantly \cite{knill_threshold_1996,aharonov_fault-tolerant_2006,aharonov_fault-tolerant_2008,reichardt_error-detection-based_2006,aliferis_level_2007,ng_fault-tolerant_2009,preskill_sufficient_2013,hutter_breakdown_2014,paz-silva_multiqubit_2017}. The strength and type of noise is also relevant in determining if a particular processor is performing a sampling task that is hard to simulate classically \cite{harrow_quantum_2017}. GHZ states \cite{greenberger_going_1989,greenberger_multiparticle_1993} are canonical multi-particle entangled states in quantum information. 
They have been studied for their connection with the foundations of quantum mechanics and entanglement, but are also of interest in metrology \cite{toth_multipartite_2012,chaves_noisy_2013}. In this paper we study the decoherence of GHZ states in a 16-qubit superconducting processor; our choice of GHZ states is motivated both by the widespread appearance of GHZ states in the quantum information literature, and because a very similar study to the present one has been conducted with trapped-ion qubits, by Monz et al. \cite{monz_14-qubit_2011}, and we would like to allow easy comparison with their results. \section{Methods} We investigate the coherence of $N$-qubit Greenberger-Horne-Zeilinger (GHZ) states of the form $\ket{\psi}=\frac{1}{\sqrt{2}} \left( \ket{0 \ldots 0}+\ket{1 \ldots 1} \right)$ on the ibmqx5 16-qubit processor from IBM \cite{noauthor_ibmqx5:_2017}. We used a very similar procedure to evaluate the coherence to that described in Ref.~\cite{monz_14-qubit_2011}, which we outline below. We perform, for each $N$, a set of experiments that together allows us to quantify how quickly the $N$-qubit GHZ state decoheres. The quantity we measure is the coherence $C$ of the GHZ state as a function of a delay time $\tau$ between state generation and a parity measurement that depends on the coherence. By the coherence $C(N,\tau)$ of the GHZ state, we mean specifically the following: the GHZ state under consideration can be represented by a density matrix $\rho^{(N,\tau)}$, and $C(N,\tau)$ is defined as the sum of the amplitudes of the far-off-diagonal elements $\rho^{(N,\tau)}_{11 \cdots 1,00 \cdots 0}$ and $\rho^{(N,\tau)}_{00 \cdots 0,11 \cdots 1}$, i.e., $C(N,\tau) \coloneqq \left| \rho^{(N,\tau)}_{11 \cdots 1,00 \cdots 0} \right| + \left| \rho^{(N,\tau)}_{00 \cdots 0,11 \cdots 1} \right|$. In particular, for each $N \in \left\{ 1,2,\ldots,8 \right\}$, we run a circuit which itself has two parameters: a delay time $\tau$, and an analysis angle $\phi$. 
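To make the definition of $C(N,\tau)$ concrete, here is a small NumPy illustration (ours, not the authors' code) that reads off $C$ from the far-off-diagonal elements of a density matrix; in the computational basis, index $0$ corresponds to $\ket{0\cdots0}$ and index $2^N-1$ to $\ket{1\cdots1}$:

```python
import numpy as np

# Illustration (ours, not the authors' code) of the coherence C defined
# above: the sum of the amplitudes of the two far-off-diagonal elements
# of the density matrix; index 0 is |0...0> and index -1 is |1...1>.
def ghz_coherence(rho):
    return abs(rho[-1, 0]) + abs(rho[0, -1])

N = 3
psi = np.zeros(2 ** N, dtype=complex)
psi[0] = psi[-1] = 1 / np.sqrt(2)                  # ideal GHZ state
rho_pure = np.outer(psi, psi.conj())
assert np.isclose(ghz_coherence(rho_pure), 1.0)    # ideal GHZ: C = 1

rho_dephased = np.diag(np.diag(rho_pure))          # coherences destroyed
assert np.isclose(ghz_coherence(rho_dephased), 0.0)
```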
The circuit consists of four stages: (i) generate an $N$-qubit GHZ state of the form $\left( \ket{0 \dots 0}+\ket{1 \dots 1} \right)/\sqrt{2}$; (ii) introduce a delay $\tau$; (iii) rotate each qubit using the single-qubit unitary operator $U(\phi)$; and (iv) measure each qubit in the computational basis $\left\{ \ket{0}, \ket{1} \right\}$. This circuit is run with varying $\tau$; for each $\tau$, it is run with $\phi$ ranging from $0$ to $\pi$; and for each combination of $N$, $\tau$, and $\phi$, the circuit is run multiple times to obtain sufficiently low statistical errors in the measurement results. We aim to access information about the coherence $C(N,\tau)$ of the GHZ states; one convenient approach to measuring the coherence is to measure the amplitude of parity oscillations \cite{leibfried_creation_2005,monz_14-qubit_2011,sackett_experimental_2000}. Each qubit of a generated GHZ state is rotated by \begin{equation} U(\phi)=\cos\left({\frac{\pi}{4}}\right) I + i \sin\left({\frac{\pi}{4}}\right) \begin{pmatrix}0 & e^{-i\phi}\\ e^{i\phi} & 0\end{pmatrix}. \end{equation} These rotations induce oscillations in a measurable quantity called the parity $P \coloneqq P_\textrm{even} - P_\textrm{odd}$ as the phase $\phi$ is varied. Here $P_\textrm{even/odd}$ correspond to the probabilities of finding the measured bitstring with an even/odd number of 1's. The amplitude of these oscillations is a direct measurement of the coherence $C(N,\tau)$ for a GHZ state with given number of qubits $N$ and a delay since generation $\tau$. We investigate the coherence of each GHZ state as a function of time by varying the delay between creation and coherence measurement. In the experimental device under consideration, the observed coherence decay is exponential, and can be characterized by a coherence time parameter $T_2^{( N )}$, which we obtain by fitting an exponential function $\propto \exp\left( -t / T_2^{( N )} \right)$ to the observed $C(N,\tau)$ data. 
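The connection between parity oscillations and the coherence can be made explicit with a small statevector simulation (a sketch assuming NumPy, not the experimental analysis code): for an ideal GHZ state and the $U(\phi)$ defined above, the parity oscillates as $\cos\bigl(N(\phi-\pi/2)\bigr)$ with unit amplitude, and decoherence multiplies this amplitude by $C(N,\tau)$:

```python
import numpy as np

# Statevector sketch (ours, not the experimental analysis code) of the
# parity signal: prepare an ideal N-qubit GHZ state, rotate every qubit
# by U(phi), and evaluate the parity P = <Z x Z x ... x Z>.
def parity(N, phi):
    psi = np.zeros(2 ** N, dtype=complex)
    psi[0] = psi[-1] = 1 / np.sqrt(2)                   # GHZ state
    U = (np.cos(np.pi / 4) * np.eye(2)
         + 1j * np.sin(np.pi / 4) * np.array([[0, np.exp(-1j * phi)],
                                              [np.exp(1j * phi), 0]]))
    for q in range(N):                                  # apply U to qubit q
        op = np.array([[1.0]])
        for k in range(N):
            op = np.kron(op, U if k == q else np.eye(2))
        psi = op @ psi
    P = np.array([[1.0]])
    for _ in range(N):                                  # Z x Z x ... x Z
        P = np.kron(P, np.diag([1.0, -1.0]))
    return float(np.real(psi.conj() @ P @ psi))

# Ideal state: unit-amplitude oscillation with period 2*pi/N.
phis = np.linspace(0, np.pi, 49)
vals = [parity(3, p) for p in phis]
assert np.allclose(vals, np.cos(3 * (phis - np.pi / 2)))
```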
This $T_2^{( N )}$ value is then compared with that of a single qubit; if the dominant noise source affecting the qubits is not correlated spatially, then one expects $T_2^{( N=1 )} / T_2^{( N )}=N$. This can be interpreted as the coherence time of an $N$-qubit GHZ state decreasing as $1/N$ with the number of qubits, or equivalently as the decoherence rate increasing linearly with the number of qubits. \begin{figure} \centering \includegraphics[width=0.5\textwidth]{ibmqx5.pdf} \caption{\label{fig:connectivity}The layout of the ibmqx5 device, with the numbering of qubits used in this study. Lines show the direction in which CNOT gates are allowed between pairs of qubits, where $a \to b$ means that a CNOT gate with Qubit $a$ as control and Qubit $b$ as target can be performed.} \end{figure} We now describe some specifics about implementing the protocol described above on the ibmqx5 device. The ibmqx5 \cite{noauthor_ibmqx5:_2017} is a 16-qubit device in which every qubit can interact with at least two nearest neighbors via Controlled-NOT (CNOT) gates. The qubits are arranged in a $2 \times 8$ rectangular lattice, with connectivity as shown in Fig.~\ref{fig:connectivity}. The connections between qubits have directionality: $a \to b$ means that only a CNOT with Qubit $a$ as control and Qubit $b$ as target is supported. To circumvent this limitation, one can apply Hadamard gates to the control and target qubits before and after a CNOT gate in order to switch its direction. This architecture allows in principle for the generation of a 16-qubit GHZ state. However, the finite gate fidelities in the device limit the size $N$ of GHZ state that can be meaningfully prepared and analyzed in practice. To maximize the initial fidelity, we explored different gate paths in an attempt to minimize the number of gates used to prepare the GHZ state.
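To make the origin of the scaling $T_2^{(N=1)}/T_2^{(N)}=N$ explicit: assuming each qubit dephases independently, so that its off-diagonal coherence decays as $e^{-\tau/T_2^{(1)}}$, the far-off-diagonal GHZ element acquires one such factor per qubit,

```latex
C(N,\tau) = C(N,0) \prod_{k=1}^{N} e^{-\tau/T_2^{(1)}}
          = C(N,0)\, e^{-N\tau/T_2^{(1)}}
\quad\Longrightarrow\quad
T_2^{(N)} = \frac{T_2^{(1)}}{N}.
```

Correlated (collective) dephasing breaks this product structure and can instead produce the quadratic, superdecoherent scaling discussed below.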
We used the QISKit SDK \cite{noauthor_qiskit-sdk-py:_2017} and the OpenQASM language for programming and running our circuits. \begin{figure} \centering \includegraphics[width=0.5\textwidth]{Circuit.pdf} \caption{\label{fig:circuit} Circuit used for the generation and analysis of GHZ states. The specific realization here is for the case $N=5$ qubits. The circuit shows only a single identity gate for simplicity; in practice the delay $\tau$ is implemented via the application of multiple fixed-time identity gates. } \end{figure} Fig.~\ref{fig:circuit} shows the circuit we used for generating and analyzing the 5-qubit GHZ state. For the $N=5$ case, we used the physical qubits $\{1,2,3,4,5\}$, and owing to the directionality restrictions shown in Fig.~\ref{fig:connectivity}, this is a case in which the state generation required some additional gates beyond those in a canonical GHZ preparation circuit. The GHZ state is generated using a combination of Hadamard and CNOT gates. Because of the direction of the connection between the 4th and 5th physical qubits, we are forced to include extra Hadamard gates to flip the control and target of the CNOT gate. A delay $\tau$ is realized via the application of multiple identity operations. Each identity operation has the same duration as a single-qubit rotation gate ($80~\textrm{ns}$) and is followed by a $10$-ns buffer time, which corresponds to a total delay of $90~\textrm{ns}$ per identity operation. The $U$ rotation needs to be implemented using the standard gates provided by the IBM API. We use the ibmqx5 standard unitary operator $U_3(\theta, \lambda,\phi_{U_3})$, which is defined as follows: \begin{equation} U_3(\theta, \lambda,\phi_{U_3}) \coloneqq \begin{pmatrix} \cos{\frac{\theta}{2}} & -e^{i \lambda} \sin{\frac{\theta}{2}}\\ e^{i\phi_{U_3}} \sin{\frac{\theta}{2}} & e^{i(\lambda+\phi_{U_3})} \cos{\frac{\theta}{2}}\end{pmatrix}.
\end{equation} When $\theta=\pi/2$, $\lambda=-\phi-\pi/2$, and $\phi_{U_3}=-\lambda$, then $U_3(\theta, \lambda,\phi_{U_3}) = U(\phi)$, which is the desired rotation. Finally, we measure the qubits. We choose which of the 16 available physical qubits to use as follows. For GHZ states with $N \in \left\{1,2,\ldots,6\right\}$, the qubits that comprise the GHZ states begin at qubit 1 and follow the numbering of the device up to $N$. For the $N=7,8$ GHZ states, we choose the physical qubits that maximize the coherence $C(N,\tau=0)$. Specifically, we used the chain $\left(4,13,12,11,10,9,8 \right)$ of physical qubits for $N=7$ and $\left(3,4,13,12,11,10,9,8 \right)$ for $N=8$. \section{Results and Discussion} Using the methods described above, we experimentally measure the parity oscillations as a function of the phase of the rotation $\phi$, first with no delay ($\tau=0$) between generation and measurement. A sinusoid is fit to the data for each $N$. This is shown in Fig.~\ref{fig:subplot}. The amplitude of the fitted sinusoid corresponds to the coherence $C(N,0)$, since in this case the delay is zero. The coherence has a maximum value of $1$. Each point in the figure was obtained by performing $n=1000$ runs of the circuit and averaging the results. In order to accurately fit the amplitudes of the parity oscillations, we sampled $4 N +1$ points for each $N$-qubit GHZ state. The period of the oscillations, $T(N)$, decreases with $N$ as $T(N)=2\pi/N$. We estimate the statistical error of each point as a dispersion around the mean, based on the mean parity $P_\textrm{even/odd}$ values and the number of data samples per point $n$, as $\delta P_\textrm{even/odd} \coloneqq \sqrt{P_\textrm{even/odd}(1-P_\textrm{even/odd})/n}$.
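The stated parameter substitution for $U_3$ can be checked numerically. The sketch below (plain Python with the standard library, not the IBM API) constructs both matrices and verifies that they coincide for an arbitrary angle:

```python
import cmath
import math

def U(phi):
    """Analysis rotation from the text: cos(pi/4) I + i sin(pi/4) sigma_phi."""
    c, s = math.cos(math.pi / 4), math.sin(math.pi / 4)
    return [[c, 1j * s * cmath.exp(-1j * phi)],
            [1j * s * cmath.exp(1j * phi), c]]

def U3(theta, lam, phi_u3):
    """ibmqx5 standard unitary, in the parameter order used in the text."""
    c, s = math.cos(theta / 2), math.sin(theta / 2)
    return [[c, -cmath.exp(1j * lam) * s],
            [cmath.exp(1j * phi_u3) * s, cmath.exp(1j * (lam + phi_u3)) * c]]

# Check the substitution: theta = pi/2, lam = -phi - pi/2, phi_u3 = -lam.
phi = 0.7  # arbitrary test angle
lam = -phi - math.pi / 2
a, b = U(phi), U3(math.pi / 2, lam, -lam)
assert all(abs(a[i][j] - b[i][j]) < 1e-12 for i in range(2) for j in range(2))
```

The agreement is exact (up to floating-point precision), since the substitution maps each matrix element of $U_3$ onto the corresponding element of $U(\phi)$.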
\begin{figure} \centering \includegraphics[width=0.5\textwidth]{subplot.pdf} \caption{\label{fig:subplot} Parity oscillations for each of the $N=1,\ldots,8$-qubit GHZ states generated on the ibmqx5, with delay $\tau = 0$. A sinusoid with fixed frequency, but free amplitude and phase variables, was fit to the data points for each $N$. } \end{figure} For every $N$, we expect the parity $P$ to equal $0$ when $\phi=0$. However, for several values of $N$ we observed a shift in the oscillations, such that $P=0$ for values of $\phi \neq 0$. \begin{figure} \centering \includegraphics[width=0.45\textwidth]{amp_decay.pdf} \caption{\label{fig:amp_decay} Decay of the amplitude of the parity oscillations, which is the initial coherence $C(N,0)$, as a function of the size $N$ of each GHZ state. The dashed black line is a linear fit.} \end{figure} The coherence of each GHZ state with no delay, $C(N,0)$, as obtained from the oscillation amplitudes in Fig.~\ref{fig:subplot}, is plotted in Fig.~\ref{fig:amp_decay} as a function of $N$, and decreases approximately linearly with $N$ (we obtain a linear fit $C(N,0) \approx 0.88-0.12(N-1)=1-0.12 N$). Deviations from the linear trend can be partially attributed to the differences in fidelity for gates applied to different physical qubits, as well as to the fact that for some $N$, more than $N$ gates were used to generate the GHZ states due to the need to reverse the control and target qubits on some CNOT gates. We attribute the linear decrease of the initial coherence of the GHZ states to the linear number of quantum gates used to generate these states. In this paper we studied GHZ states up to size $N=8$, even though the ibmqx5 chip has an architecture that could theoretically allow the generation of GHZ states as large as $N=16$. The reason for this is our inability to obtain clear parity oscillations for the $N=9$ case, even after trying different combinations of physical qubits.
The measured parity values no longer fit well to a sinusoid (see Appendix A), and so the initial (zero-delay) coherence $C(N=9,0)$ cannot be reliably measured. This inhibits meaningful assessment of how the coherence for states with $N \ge 9$ changes with added delay. \begin{figure} \centering \includegraphics[width=0.5\textwidth]{exponential_decay.pdf} \caption{\label{fig:exponential_decay} (a.) Coherence decay for GHZ states with different numbers of qubits as a function of the delay $\tau$. The points are from measured data and the dashed lines correspond to exponential fits for each $N$. The error bars correspond to the errors of the fits to the parity oscillations for each $N$ and $\tau$. (b.) Same as above, except now the coherence $C(N,\tau)$ is plotted as a normalized quantity with respect to the zero-delay coherence $C(N,0)$. This allows for easier visual comparison of the decay rates for each $N$.} \end{figure} For each $N=1,\ldots,8$, we measure how the coherence $C(N,\tau)$ decreases as a function of time by varying the delay $\tau$ between the generation and rotation of the GHZ states. The coherence reductions are manifest as reductions in the amplitude of the measured parity oscillations as a function of $\tau$; for each $\tau$, we fit sinusoids to the parity oscillations and extract the fitted amplitudes, as we did for the case of $\tau = 0$ in Fig.~\ref{fig:subplot}. The decay of $C(N,\tau)$ as a function of $\tau$, for each $N$, is shown in Fig.~\ref{fig:exponential_decay}(a). For each $N$, we fit an exponential decay function to the measured $C(N,\tau)$ data points: $c_N^\mathrm{init} \exp\left( -\tau/T_2^{(N)} \right)$, where $c_N^\mathrm{init}$ is the fit parameter that characterizes the initial GHZ state coherence, and $T_2^{(N)}$ is the fitted coherence time for the $N$-qubit state.
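As an illustration of this fitting step, a minimal log-linear least-squares fit recovers $c_N^\mathrm{init}$ and $T_2^{(N)}$ from noiseless synthetic data. This is a simplified stand-in for the exponential fits described above (a full nonlinear fit with error weighting would be used on real, noisy data):

```python
import math

def fit_exponential(taus, coherences):
    """Least-squares fit of C(tau) = c_init * exp(-tau / T2) via linear
    regression on log C(tau). Returns (c_init, T2)."""
    ys = [math.log(c) for c in coherences]
    n = len(taus)
    mx, my = sum(taus) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(taus, ys))
             / sum((x - mx) ** 2 for x in taus))
    intercept = my - slope * mx
    return math.exp(intercept), -1.0 / slope

# Synthetic data with c_init = 0.9 and T2 = 50 (arbitrary time units):
taus = [0, 10, 20, 30, 40]
data = [0.9 * math.exp(-t / 50) for t in taus]
c0, t2 = fit_exponential(taus, data)
print(round(c0, 3), round(t2, 1))  # -> 0.9 50.0
```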
Fig.~\ref{fig:exponential_decay}(b) also shows the decay of coherence with $\tau$, but normalizes the coherence to $1$ at $\tau=0$ (removing the difference in initial coherence between the GHZ states of different size $N$), so that the monotonic increase in decay rates with $N$ can be seen more easily by eye. The delay ranges measured for the different numbers of qubits were limited either by our ability to clearly measure the coherence at a given $\tau$, or (for $N=1,2,3$) were chosen once we had measured sufficiently many data points to obtain a reliable $T_2^{(N)}$ fit. \begin{figure} \centering \includegraphics[width=0.4\textwidth]{ratio_comp_error_power.pdf} \caption{\label{fig:ratio_error} The measured decoherence rate $1/T_2^{(N)}$ of each $N$-qubit GHZ state as a function of $N$, normalized by the decoherence rate for a single qubit, $1/T_2^{(N=1)}$.} \end{figure} The scaling of the $N$-qubit GHZ decoherence rate $1/T_2^{(N)}$, normalized by the single-qubit decoherence rate $1/T_2^{(N=1)}$, is shown as a function of $N$ in Fig.~\ref{fig:ratio_error}. We have fitted three different functions of $N$ to the data: (i.) a linear function ($\beta N + \alpha$), (ii.) a quadratic function with the linear term set to zero ($\gamma N^2 + \alpha$), and (iii.) a quadratic function with the constant term set to zero ($\gamma N^2 + \beta N$). If the sources of decoherence for each qubit are independent, then the expected scaling in Fig.~\ref{fig:ratio_error} is linear. In contrast, if the system exhibits superdecoherence due to non-zero correlation in the noise, then a quadratic scaling may be expected. Fit (i.) has coefficient of determination $R^2 = 0.996$, and the $99\%$ confidence interval for the linear coefficient $\beta$ is $[0.968, 1.148]$. Fit (ii.) has $R^2 = 0.983$, and the $99\%$ confidence interval for the quadratic coefficient $\gamma$ is $[0.113, 0.160]$. Fit (iii.)
has $R^2 = 0.998$, and the $99\%$ confidence intervals are $[0.561, 1.103]$ and $[-0.007, 0.075]$ for the linear ($\beta$) and quadratic ($\gamma$) coefficients respectively. While we cannot completely rule out that the true scaling of $T_2^{(N=1)}/T_2^{(N)}$ is non-linear, we note that the data presented in Fig.~\ref{fig:ratio_error} are well fitted by a linear function, and the confidence interval of the slope is consistent with the linear scaling $T_2^{(N=1)}/T_2^{(N)}=N$. We note that Fig.~2(b) in Ref.~\cite{monz_14-qubit_2011}, which describes the increase in decoherence rate as a function of $N$ for a trapped-ion experiment, is directly comparable to Fig.~\ref{fig:ratio_error} in this paper, since the data were arrived at using nearly identical procedures. \section{Conclusions} In conclusion, we have analyzed the decay in coherence of GHZ states with up to 8 qubits in the ibmqx5 quantum computer. We find a linear increase in decoherence rate with the number of qubits, namely $T_2^{(N=1)}/T_2^{(N)}=N$. The work of Monz et al. \cite{monz_14-qubit_2011} showed that GHZ states exhibited superdecoherence in a quantum processor comprised of $^{40}$Ca$^+$ ions, where each qubit was encoded in the electronic states $S_{1/2}$ and $D_{5/2}$ of a single ion. Since these states are sensitive to magnetic fields, fluctuations in the current in the Helmholtz coils (which form part of the experimental apparatus) lead to correlated dephasing noise for all the qubits. The use of magnetic-field-insensitive (``clock'') states, or decoherence-free subspaces, can mitigate the deleterious effects of magnetic-field noise in trapped-ion processors \cite{langer_long-lived_2005}. However, our results provide evidence that superconducting processors constructed from single-junction transmons do not need further engineering to avoid superdecoherence (at least in the design and scale of the ibmqx5 system).
This conclusion is consistent with current understanding of the known dominant sources of decoherence for IBM's transmon qubits \cite{gambetta_building_2017}, which are thought to act independently on each qubit. \section{Acknowledgments} We gratefully acknowledge the IBM-Q team for providing us with access to their 16-qubit platform. The views expressed are those of the authors and do not reflect the official policy or position of IBM or the IBM Quantum Experience team. We thank T.~Monz and J.\,I.~Adame for helpful discussions, and J.\,I.~Adame for a thorough reading of a draft of this paper.
\section{Introduction} The Interstellar Boundary Explorer (IBEX) has been continually observing the interaction of the heliosphere with the surrounding interstellar medium since 2009 January \citep{mcc09}. The scientific payload consists of two energetic neutral atom (ENA) imagers, IBEX-Lo \citep{fus09} and IBEX-Hi \citep{fun09}. This study expands previous studies of energy spectra of ENAs observed at low energies \citep{fus14,gal14,gal16} in terms of time (2009--2016) and spatial coverage (the entire downwind hemisphere is included in the analysis and interpretation). The ENAs analyzed here are believed to originate predominantly from the shocked solar wind and pickup protons beyond the termination shock. The ENAs thus allow us to sample the source plasma populations over a vast region of the heliosphere not accessible to in-situ measurements. The ultimate goal of this study is to better understand the heliosphere, its boundaries, and the properties of plasma populations in the heliosheath. \citet{mcc13,zir16} investigated the Port Tail and Starboard Tail lobes in IBEX-Hi maps of ENA intensities: at solar wind energies or higher, regions of depleted ENA emissions appear in two lobes $30^{\circ}-100^{\circ}$ apart from the nominal downwind direction. Will a similar structure appear at lower ENA energies? Observations of neutral hydrogen column densities via Lyman-$\alpha$ absorption \citep{woo14} indicate that the heliotail cannot be deflected by more than 20$^{\circ}$ with respect to the nominal downwind direction ($\lambda_{ecl}=76^{\circ}$, $\beta_{ecl}=-5^{\circ}$) defined by the interstellar flow \citep{mcc15}. We will demonstrate in this paper that the low energy ENAs around 0.2 keV sampled with IBEX-Lo must be taken into account to understand the complex geometry of the heliosheath in the downwind direction. The parent ions of this energy range dominate the plasma pressure. 
With the energy spectra of ENAs measured over the full energy range of IBEX, we will be able to derive observational constraints on the geometry, cooling lengths, and neutral density of the heliosheath, albeit at a very coarse spatial resolution. We present the data set and how we corrected ENA spectra with the corresponding uncertainties from IBEX-Lo observations (Sections \ref{sec:data} and \ref{sec:errorbars}). We then summarize (in Section \ref{sec:results}) the results of the spectra, discuss the implications of our results on the downwind hemisphere of the heliosphere (Section \ref{sec:discussions}) and conclude the paper with a summary and outlook (Section \ref{sec:conclusions}). \section{Dataset}\label{sec:data} IBEX-Lo is a single pixel ENA camera \citep{fus09}. Neutral atoms enter the instrument through a collimator, which defines the nearly conical field-of-view of $6.5^{\circ}$ full width half maximum. A fraction of the incident ENAs then scatter off a charge conversion surface as negative ions. These ions pass through an electrostatic energy analyzer and are accelerated into a time-of-flight mass spectrometer, which features a triple coincidence detection scheme. Apart from negatively ionized ENAs, IBEX-Lo can also detect H$^{-}$ and O$^{-}$ sputtered off the conversion surface by interstellar neutrals (ISN) and ENAs of solar wind energy or higher \citep{par16}. IBEX-Lo measures ENAs at 8 different energy passbands with central energies at 0.015, 0.029, 0.055, 0.11, 0.209, 0.439, 0.872, and 1.821 keV \citep{fus09}. The observation times for this study include the first 8 years of IBEX-Lo triple coincidence data of hydrogen ENAs, corresponding to 16 seasonal maps from 2009 January until 2016 October. 
Ram maps (in which IBEX, in its orbit around Earth, moves toward the emission source) of the downwind hemisphere are created from measurements acquired over April--October each year, while anti-ram downwind maps are created from measurements during October--April. The ram and anti-ram distinction is important to this study because the proper motion of the spacecraft is not negligible compared to the ENA velocity in the inertial frame. Ram observations therefore have a significantly better signal-to-background ratio than anti-ram observations, which results in smaller error bars, as discussed in Section \ref{sec:errorbars}. The dataset includes only the times with the lowest background levels; measurements affected by high electron background were excluded with the method described earlier by \citet{gal16}. Two basic limitations of the dataset must be kept in mind. First, IBEX spent the four months from July until mid-October every year inside the bow shock of Earth's magnetosphere. As a consequence, no observations with sufficiently low background level were obtained during that period. This causes a data gap over ecliptic longitudes from 0$^{\circ}$ to 120$^{\circ}$, covering most of the downwind hemisphere in the ram direction. Second, observations after 2012 July exhibited a lower signal-to-noise ratio: the post-acceleration of IBEX-Lo had to be reduced in 2012 July, and the background caused by the terrestrial magnetosphere and the solar wind was elevated from 2012--2016 during the maximum of solar activity \citep{gal15}. We assumed that the energy of detected hydrogen ENAs at 1 au heliocentric distance is the same as their energy at their place of origin in the heliosheath.
This assumption is justified during solar minimum (2009--2011) because the energy ENAs lose to solar radiation pressure while travelling from the plasma source to IBEX is nearly compensated by the energy they gain from solar gravity, even for low energies of tens of eV \citep{bzo08}. During solar maximum conditions (2012--2015), a 22 eV hydrogen ENA emitted far away from the Sun reaches 1 au at an apparent energy of 15 eV (center of the lowest energy bin of IBEX-Lo), and a 36 eV ENA is decelerated to 29 eV (corresponding to the center of energy bin 2) \citep{bzo08}. For higher energies, the differences are even smaller. We neglected these energy shifts in this study because we detected no consistent increase of ENA intensity at 15 and 29 eV when we compared ENA maps from the first 4 years with the maps from 2013--2016. We also verified for 2009 (solar minimum conditions) and 2014 (solar maximum conditions) that no significant bias in corrected ENA intensities occurred at a spatial resolution of $24^{\circ}\times 24^{\circ}$ if we replaced our default correction and mapping algorithm by a different algorithm based on simulation runs with the Warsaw test particle model \citep{sok15}. In the latter approach, we corrected the ENA measurements for the proper motion of IBEX, solar gravity, and the solar radiation pressure. As in \citet{gal16}, we used data from a single season and a single energy bin and constructed the map of differential intensities of heliospheric hydrogen ENAs (in units of cm$^{-2}$ sr$^{-1}$ s$^{-1}$ keV$^{-1}$) at 100 au heliocentric distance in the inertial reference frame with respect to the Sun at a spatial resolution of $6^{\circ}\times6^{\circ}$. The ENA intensities were first corrected for average sputtering contributions to the ENA measurements and for the ubiquitous background (values as stated in \citet{gal15} for 2009--2012, values for 2013--2016 as stated in Table \ref{tab:background_postPACchange}).
After this subtraction, the remaining ENA intensities were corrected for the energy-dependent survival probability of ENAs (see Appendices in \citet{gal16,mcc17}) and for the proper motion of the spacecraft relative to the Sun as described by \citet{gal16}. We also compensated for the cylindrical distortion in our map projection for viewing directions at ecliptic latitudes beyond $\pm60^{\circ}$ (see Fig.~\ref{fig:map}). Using the corrected ENA intensity maps, we then calculated the median ENA intensity inside macropixels of size $24^{\circ}\times24^{\circ}$, constructed from arrays of four $6^{\circ}$ pixels in latitude and four $6^{\circ}$ pixels in longitude. This mesh of macropixels is shown in Fig.~\ref{fig:map}, overlaid on the uncorrected ENA maps measured at 0.029 keV (top panel) and 0.872 keV (bottom panel). At solar wind energies, the ENA intensity map is dominated by the ENA Ribbon \citep{mcc14,sch14}; at low energies, it is dominated by the primary and secondary populations of ISN helium and hydrogen \citep{moe12,sau13,kub14,mcc15}. The macropixels cover the entire downwind hemisphere, except the polar pixels (studied by \citet{rei16}), with the edges situated at ecliptic longitudes of 120$^{\circ}$, 96$^{\circ}$, 72$^{\circ}$, 48$^{\circ}$, 24$^{\circ}$, 0$^{\circ}$, 336$^{\circ}$, and 312$^{\circ}$ and over the ecliptic latitude range $-84^{\circ}$ to $+84^{\circ}$. The construction of the macropixels was determined by balancing the coverage of as much of the downwind hemisphere as possible with equal-sized regions, retaining sufficient spatial resolution to identify variability across and between large emission structures, and achieving a sufficient signal-to-noise ratio per region for statistically significant results. The last criterion meant that we wanted at least $N=10$ ENA counts per season per macropixel at each energy bin (refer to Section \ref{sec:errorbars} for an explanation).
If fewer than four of the sixteen $6^{\circ}\times 6^{\circ}$ pixels had a valid intensity, for instance because of data acquisition gaps, the macropixel for that season and energy was omitted from the analysis. Zero intensity, on the other hand, was accepted; such cases occurred at low energies where the measured count rates did not exceed the average background count rates. These pixels are colored black in the top panel of Fig.~\ref{fig:map}. Zero-count pixels reflect the threshold sensitivity of the instrument; these results should not be interpreted as the absence of ENA emission. We account for this in our error analysis, described later. Unless we study the evolution with time, the ENA intensity per macropixel is the median over all available seasonal median values, implying 8 ram and anti-ram observations during 2009--2016 for ecliptic longitudes $0^{\circ}$ to $312^{\circ}$ and 8 anti-ram observations during 2009--2016 for $120^{\circ}$ to $0^{\circ}$. A median ENA intensity per macropixel must be based on at least three independent seasonal values; otherwise, that macropixel is omitted from the analysis. For interpretation of the results, we grouped the macropixels into the four larger regions shown in the upper panel of Fig.~\ref{fig:map}. The value attributed to such a region is the median intensity calculated over all its macropixels. Contrary to previous studies \citep{gal14,gal16}, we did not a priori exclude any pixels with anomalously high count rates. Such a cut-off would exclude most of the intense ISN inflow of helium and hydrogen and would also eliminate some point-like background sources. However, any cut-off criterion for this study would be arbitrary, and the ENA results might be biased toward low intensities as well as toward overly optimistic uncertainties, since the latter were computed from the observed variability of intensity.
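The macropixel selection rules just described can be summarized in a short sketch (the function names and the use of None for missing pixels are our own conventions, chosen purely for illustration):

```python
import statistics

def macropixel_median(pixel_intensities, min_valid=4):
    """Median intensity over the 16 single pixels of one macropixel for one
    season. Pixels without a valid measurement are passed as None; zero
    intensity counts as valid. Returns None (macropixel omitted) if fewer
    than min_valid pixels are valid."""
    valid = [v for v in pixel_intensities if v is not None]
    if len(valid) < min_valid:
        return None
    return statistics.median(valid)

def multi_season_median(seasonal_medians, min_seasons=3):
    """Median over seasons; requires at least min_seasons valid seasonal
    values, otherwise the macropixel is omitted from the analysis."""
    valid = [v for v in seasonal_medians if v is not None]
    if len(valid) < min_seasons:
        return None
    return statistics.median(valid)

# Four valid pixels (including a zero) suffice; three do not.
season = macropixel_median([1.0, 2.0, 0.0, 4.0] + [None] * 12)  # -> 1.5
dropped = macropixel_median([1.0, 2.0, 0.0] + [None] * 13)      # -> None
```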
We thus accepted all pixels and verified that the new median ENA spectrum is identical within error bars to the downwind spectrum in \citet{gal16} for the same observation period of 2009--2012. A separation of the globally distributed heliospheric ENA emission and the ISN atoms is currently not feasible because we do not have models of the extended populations of ISN helium and hydrogen in all IBEX-Lo energy bins. The Warm Breeze model by \citet{kub16}, e.g., assumes a simplistic physical scenario. A more sophisticated model of the Warm Breeze, originating in the outer heliosheath due to charge exchange collisions between interstellar He and He$^{+}$ \citep{bzo17}, is only available for energy bin 2 (around 29 eV). Fortunately, the ISN inflow hardly affects the results, as will be shown in the results section. This non-interference is enabled by our choice of observation direction, which is restricted to hydrogen ENAs from the downwind hemisphere, i.e., looking away from the ISN inflow direction (see Fig.~\ref{fig:map}). We also investigated signal-to-noise filters as employed by \citet{par16} to exclude single pixels from uncorrected seasonal maps. However, at low energy most individual pixels have a low signal-to-noise ratio and the absolute counts per pixel are on the order of 1. As a result, a signal-to-noise ratio filter was either ineffective or resulted in many excluded macropixels in the corrected maps. Finally, we investigated the comparison of raw counts in individual macropixels across different seasons to identify and possibly exclude anomalous pixels before ENA intensity correction. However, this was found to be inappropriate because the survival probability at low energies may change by more than 30\% within one year. We therefore must correct the ENA intensities for survival probability and the spacecraft motion before we can meaningfully construct and compare macropixel averages, variability, and outliers. 
\section{Uncertainties and error bars}\label{sec:errorbars} The corrected ENA intensity of IBEX-Lo measurements is affected both by statistical uncertainty due to the small number of counts collected during one season and by systematic errors introduced by background sources (counts caused by signals other than heliospheric hydrogen ENAs) and by calibration errors. The first group of uncertainties can be reduced if we average over a larger region and/or over more seasons of observations. We quantified the various uncertainties associated with the ENA intensity per macropixel in the following way: \begin{enumerate} \item The pixel-by-pixel standard deviation inside a single macropixel over one season could be due to real small-scale spatial variability or due to low count statistics. The latter is more plausible, as the observed standard deviation agrees with the relative uncertainty $1/\sqrt{N}$ expected from a Poisson distribution, whereby $N$ equals the number of counts per pixel. This uncertainty increases markedly at lower energies where fewer counts are available. The ratios of the pixel-by-pixel standard deviation versus median ENA intensity (for the case of an ENA signal distinguishable against background for ram observations) calculate to 1, 1.2, 0.8, 0.5, 0.3, and 0.3 for the energy bins at 0.055, 0.11, 0.209, 0.439, 0.872, and 1.821 keV. The most limiting case at 0.11 keV, with an average of 0.7 counts (!) per entire season per $6^{\circ}\times 6^{\circ}$ pixel, explains why we organized the measurements into macropixels containing 16 single pixels. This way, we achieved a statistical uncertainty $1/\sqrt{N}$ $<$ 30\% at all energies for a discernible ENA signal.
\item The single count limit must be considered for averaged measurements that include multiple zero-count pixels, for which both error propagation and empirical variability no longer apply. Introducing an artificial map with background count rate plus 1 count per season ($\approx 0.0015$ cnts s$^{-1}$) everywhere, we assessed the single count limit in corrected ENA intensities after correcting for survival probability and the Compton-Getting effect: Table \ref{tab:singlecountlimits} lists these lower limits imposed by the single count limit. In the worst case (lowest energy, anti-ram, ecliptic plane), no ENA intensity below $10^{6}$ cm$^{-2}$ sr$^{-1}$ s$^{-1}$ keV$^{-1}$ can be distinguished against the background level, even after combining the counts from 16 single pixels! For higher energies and for regions outside the ecliptic plane (weaker Compton-Getting effect), the single count limit becomes irrelevant compared to other uncertainties. Generally, the single count limit at low energies for a single season agrees with the pixel-to-pixel standard deviation presented in point 1. Both uncertainties originate from low count statistics. \item To estimate the error of the ENA intensity within a macropixel over all different seasons, we relied on the empirical variability between the medians from season to season. As previously stated, a minimum of three different seasons was required to calculate a meaningful spread. Since the seasonal median values are usually not normally distributed (and some are zero), we calculated the 16\% and 84\% quantiles of seasonal values instead of the simple standard deviation. These quantiles represent our best estimate for the $1\sigma$ lower and upper error bars. They approach the classic standard deviation when the distribution of seasonal values approaches a normal distribution. The different seasons constitute statistically independent measurements.
Nevertheless, we did not divide the error bar estimated from the season-to-season differences by the square root of the number of seasons, because the variability may be due to systematic errors (e.g., background sources or time variations) and not only due to statistical errors. The resulting error bars of the averaged signal thus represent the full encountered variability and are rather conservative. The medians of the spatial spreads per macropixel reflect the statistical uncertainty of a single season (see point 1). That uncertainty becomes negligible compared to the systematic errors when we average over all available seasons. \item Finally, the absolute calibration uncertainty is $\pm30$\% for a given energy bin. This is irrelevant for the analysis and interpretation of maps of a single energy bin. However, quantitative comparison of intensities across multiple energies requires inclusion of this uncertainty. It will be considered the minimum uncertainty for entries in an energy spectrum. \end{enumerate} The default error bars associated with the ENA intensity per macropixel in the following Results section will be the $\pm 1\sigma$ empirical error bars across different seasons, unless we study temporal evolution over individual seasons. In that case, the variability per macropixel or per region will serve as an error estimate. Figure \ref{fig:ram_vs_antiram} illustrates the default error bars. For this figure we sampled the spectrum at the macropixel centered at $\lambda_{ecl}=336^{\circ}\dots312^{\circ}$ and $\beta_{ecl}=36^{\circ}\dots60^{\circ}$, which was covered both with ram and anti-ram observations. This example also demonstrates the challenge of using anti-ram observations at low energies. \section{Results}\label{sec:results} Let us first study the spatial distribution and energy spectra of the heliospheric ENAs averaged over the entire observation time before discussing potential temporal trends and contributions by ISN at the lowest energies.
\subsection{Average ENA intensities over all 8 years of measurements} The left columns of Figures \ref{fig:macropixelmap_1to4} and \ref{fig:macropixelmap_5to8} show the macropixel maps (defined by the macropixel mesh of Fig.~\ref{fig:map}) of corrected ENA intensities, averaged over all 16 available seasons. The right columns show the relative uncertainty ($\sigma_j/j$) of each macropixel; if the intensity is zero, the ratio $\sigma_j/j$ is set to 1. Figure \ref{fig:macropixelmap_1to4} features the lower energies (energy bins 1 to 4), while Fig.~\ref{fig:macropixelmap_5to8} shows the higher energies (energy bins 5 to 8). Even at this coarse resolution, the ENA Ribbon shows up to the North at solar wind energies (Fig.~\ref{fig:macropixelmap_5to8}, energy bins 7 and 8). At intermediate energies (0.1, 0.2, and 0.4 keV) a few pixels of high ENA intensity occur in the ecliptic, but they are interspersed with other pixels of low ENA intensity. The only stable spatial feature at 0.1 to 0.9 keV is the very low ENA intensity from the South pole region. At even lower energies (bins 1--3 from 0.015 to 0.055 keV), the downwind hemisphere appears uniformly dim in ENA emission, with few macropixels having an ENA intensity significantly exceeding the background level. The exceptions are situated towards the poles because of better statistics \citep{rei16} and because the correction factors due to the Compton-Getting effect and the survival probability are notably smaller than at low latitudes. From these maps we calculated the power-law exponent or spectral index $\gamma$ of the ENA intensity spectrum above the roll-over energy for each macropixel: \begin{equation} j(E) = j_0 (E/E_0)^{-\gamma} \label{eq:gamma} \end{equation} The resulting map of spectral indices between energy bins 7 and 8 (0.9 to 1.8 keV, corresponding to solar wind energy) is shown in Fig.~\ref{fig:gamma}.
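The spectral index $\gamma$ of Equation \ref{eq:gamma} between two adjacent energy bins follows directly from the two intensities. A minimal sketch (the function name is ours):

```python
import math

def spectral_index(j1, e1, j2, e2):
    """Power-law index gamma in j(E) = j0*(E/E0)**(-gamma),
    estimated from intensities j1, j2 measured at energies e1 < e2."""
    return -math.log(j2 / j1) / math.log(e2 / e1)

# Between energy bins 7 and 8 (0.9 and 1.8 keV): an intensity that
# halves per energy doubling corresponds to gamma = 1.
```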
The spectral index is ordered according to ecliptic latitude for these energies (in our case $\gamma = 0.3+\cos^{0.8}(\beta)$), whereas at energies below 0.9 keV the spectral index does not notably vary with latitude. At the roll-over energy (typically 0.1 keV, see below), the sign of the spectral index changes and the ENA intensity starts decreasing with decreasing energy. The dependence of the spectral index on latitude at solar wind energy confirms IBEX-Hi observations by \citet{des15}, who found a similar latitudinal ordering both for upwind and for downwind regions at high ENA energies. In contrast to the data presented here, \citet{des15,zir17} observed this latitudinal ordering only above 2 keV. The latitudinal ordering of the energy spectrum probably reflects the production mechanism of the corresponding ENAs: ENAs with energies of 1 keV or above are likely neutralized solar wind, whereas lower energy ENAs originate from other sources in the heliosheath that have lost the latitudinal ordering of the solar wind speed. For ENA energies below 0.1 keV, the trace-back times of the parent ions (see the following subsection) are so long that ENAs measured at IBEX contain a mixture of ENAs generated during solar maximum and solar minimum conditions. At these lower energies, the spectral index of ENA intensities, averaged over all macropixels, is $1.3\pm0.2$ between 0.4 and 0.9 keV, $2.1\pm0.5$ between 0.2 and 0.4 keV, $1.4\pm0.9$ between 0.1 and 0.2 keV, and $0.2\pm0.7$ between 0.055 and 0.11 keV, with no notable dependence on latitude or longitude. This extends to the whole downwind hemisphere the statement in our previous paper \citep{gal16} that the energy spectrum of ENAs from a few specific downwind regions turns over around 100 eV. Unfortunately, we cannot directly compare this roll-over to upwind directions: there, the inflow of ISN hides the much weaker signal of heliospheric ENAs below 130 eV.
For the directions towards Voyager 1 and Voyager 2 (upwind hemisphere outside ecliptic plane), the observed ENA spectra appear to be higher than for the downwind hemisphere at low energies (see Fig. 5 in \citet{gal16}). This implies additional sources of heliospheric ENAs at energies below 100 eV from the upwind hemisphere, with $j \approx 10^{4}$ cm$^{-2}$ sr$^{-1}$ s$^{-1}$ keV$^{-1}$ at 29 and 15 eV \citep{gal16}. However, the lower limit of these ENA intensities is zero and the emission intensity is below the instrument threshold, so we cannot rule out that a similar roll-over around 100 eV also applies to heliospheric ENAs from the upwind hemisphere. From here onwards, let us reduce the amount of data to be interpreted. Discussing the spectral shape and temporal trends of all 49 macropixels independently is not only impractical but also pointless: The uncertainty of the spectral index for a single macropixel (caused by the uncertainties of ENA intensity at the neighboring energies) is similar to the fitted value itself even for solar wind energies. Guided by Fig.~\ref{fig:gamma} we thus organized the downwind macropixels into just four large regions (see upper panel of Fig.~\ref{fig:map}): ``North'' ($\lambda=120^{\circ} \dots 0^{\circ},\, \beta=36^{\circ} \dots 84^{\circ}$, 10 macropixels), ``South'' ($\lambda=120^{\circ} \dots 0^{\circ},\, \beta=-84^{\circ} \dots -36^{\circ}$, 10 macropixels), and the two regions in the ecliptic plane: ``Central Tail'' ($\lambda=120^{\circ} \dots 24^{\circ},\, \beta=-36^{\circ} \dots +36^{\circ}$, 12 macropixels) and ``Port Lobe'' ($\lambda=24^{\circ} \dots 312^{\circ},\, \beta=-36^{\circ} \dots +36^{\circ}$, 9 macropixels). We separate the Central Tail from the Port Lobe region because the latter may also contain ISN hydrogen and helium in the two lowest energy bins. For the same reason, the four macropixels to the north and south of the Port Lobe region were excluded from the North and South regions. 
The nautical term ``Port Tail lobe'' for the area between $50^{\circ}$ and $0^{\circ}$ at the flank of the heliotail was introduced by \citet{mcc13} to discuss the shape of the heliotail from IBEX-Hi observations. Heliospheric ENAs from the opposite region, the starboard lobe at $120^{\circ}$--$150^{\circ}$, could not be distinguished at low energies against the inflow of ISN (see upper panel in Fig.~\ref{fig:map}). We wanted to expand the energy range of the ENA intensity spectrum for the discussion of plasma pressure in the heliosheath (see Section \ref{sec:discussions}). We therefore synthesized and added IBEX-Hi spectra from the most recent Data Release 10 \citep{mcc17}, which represents 7-year averages corrected for the Compton-Getting effect and survival probability. The energy range for which IBEX-Hi data are evaluated spans $\sim0.5$ to 6 keV with central energies at 0.71, 1.11, 1.74, 2.73, and 4.29 keV \citep{fun09}. Between 0.5 and 2.5 keV, the energy ranges of the two IBEX instruments thus overlap. We derived median values and $1\sigma$ variability from the IBEX-Hi Data Release 10 over the four sky regions the same way as for IBEX-Lo data. Figure \ref{fig:energyspectra} shows the energy spectra of the four regions; the values are the medians over single macropixels. The error bars indicate the 16\% and 84\% quantiles of the included macropixel values or a 30\% relative error, whichever is larger. The four energy spectra are compared with the median IBEX-Hi energy spectra sampled over the same four regions and with the previously published ``Downwind'' spectrum over the first 4 years of IBEX-Lo data (black ``x'' symbols; \citealt{gal16}). The bold colored symbols are IBEX-Lo values, the thin colored symbols are IBEX-Hi values; blue triangles up denote North, red triangles down denote South, green asterisks denote Central Tail, and orange circles denote the Port Lobe.
The spectrum from \citet{gal16} was a composite of four smaller areas at the Central Tail and at the northern and southern flanks of the heliosheath ($\lambda=360^{\circ} \dots 312^{\circ}$), evaluated over the first 4 years of IBEX-Lo observations. It matches the 7-year downwind ENA spectra but obviously blurs regional differences. Another point to note in this figure is that the spectral index of ENA intensities derived with IBEX-Hi around 1 keV is steeper than for IBEX-Lo. This bend may be real, as ASPERA-3\&4 observations revealed a knee in the energy spectrum of heliospheric ENAs at $0.83\pm0.12$ keV \citep{gal13}. Table \ref{tab:energyspectra} lists the IBEX-Lo energy spectra illustrated in Fig.~\ref{fig:energyspectra}. \subsection{Temporal trends} If we want to compare the ENA signal from individual years to detect potential trends with time, we can do so only for the Port Lobe, where ram observations were available. The signal-to-noise ratio of anti-ram observations for one single season proved insufficient (see Section \ref{sec:errorbars}). Moreover, some pixels were not sampled in every year. Figure~\ref{fig:timeseries} shows the time-series of ENA intensities measured in the Port Lobe (upper panel) and the solar activity (lower panel). Only the 8 seasons of ram observations (April--October) were included. Two temporal trends can be discerned in this figure: In the lowest two energy bins, the intensity drops to background levels in the years 2011--2013 before recovering in 2014--2016. This is probably ISN hydrogen responding to solar activity (see the discussion in Section \ref{sec:isn}). The other notable change occurs in the years 2014 to 2016 at intermediate energies from 0.05 to 0.2 keV. All other temporal variability remains inside the error bars.
The North, South, and Central Tail regions could only be observed in the anti-ram configuration; the error bars are therefore generally too large to discern changes between single seasons for energies below 0.1 keV. At 0.11 keV, on the other hand, the last two of the 8 seasons (October--April 2014/2015 and 2015/2016) significantly exceed the median values also in those three regions. Likewise, the four seasons from 2013--2016 feature significantly higher ENA intensities than those from 2009--2012 in all three regions at 0.209 keV. The apparent dip of ENA intensity around 1 keV in 2013 (purple and blue curves in Fig.~\ref{fig:timeseries}) agrees with contemporaneous IBEX-Hi \citep{mcc17} and INCA observations \citep{dia17} of the downwind hemisphere. However, the ENA intensities measured with IBEX-Lo at 0.439, 0.872, and 1.821 keV in general do not change significantly over the 8 years in any of the four regions. Figure~\ref{fig:timeseries} is representative of the entire downwind hemisphere in this respect. The absence of significant year-to-year variations in solar wind energy ENAs (0.4 to 2.5 keV) is consistent with the findings of all previous studies of IBEX-Lo time-series \citep{fus14,gal14,rei16}. IBEX-Hi detected intensity variations with the solar cycle on the order of 30\% or less around 1 keV \citep{mcc17}. Such variations usually cannot be distinguished with IBEX-Lo because of the poorer signal-to-noise ratio \citep{gal14}. If the observed increase in ENA intensity had occurred everywhere between June 2012 and January 2013, we might suspect an instrumental effect, as the post-acceleration bias in IBEX-Lo changed in that period (see Section \ref{sec:data}). However, since in several cases the intensity increases only in June 2014 or January 2015, and since the intensities at higher energies do not rise at all in later years, a natural cause seems more plausible.
If indeed more ENAs at intermediate energies were produced in the heliosheath at some time during the solar cycle, we must first derive the trace-back time of the detected ENAs. Because the temporal variation is likely correlated with the solar cycle, we define the trace-back time as the sum of the travel time of the source plasma (the solar wind) from the Sun to the heliosheath, where it is converted into ENAs, and the travel time of the ENAs back to the inner heliosphere, where they are detected at IBEX. Following the notation by \citet{rei12,rei16}, \begin{equation} t_{tb}(E) = \frac{d_{TS}}{v_{sw}}+\frac{l_{IHS}/2}{v_{ms}}+\frac{d_{TS}+l_{IHS}/2}{v_{\textup{\tiny{ENA}}}(E)}, \label{eq:traceback} \end{equation} where $t_{tb}(E)$ is the trace-back time for ENAs of energy $E$ originating in the inner heliosheath, $d_{TS}$ is the distance to the termination shock, $l_{IHS}$ is the thickness of the inner heliosheath, $v_{sw}$ is the solar wind proton speed, $v_{ms}$ is the average magnetosonic speed in the inner heliosheath, and $v_{\textup{\tiny{ENA}}}(E)$ is the speed of an ENA of energy $E$ observed at IBEX. \citet{rei16} estimated $d_{TS}\approx130$ au and $l_{IHS} \approx 210$ au for the North pole, $d_{TS}\approx110$ au and $l_{IHS} \approx 160$ au for the South pole, and $v_{sw}=690$ km s$^{-1}$ for the polar regions. We assigned $l_{IHS}=280$ au and $d_{TS}$ = 110 au to the Central Tail and $l_{IHS}=180$ au and $d_{TS}$ = 120 au to the Port Lobe (see the next Section \ref{sec:discussions}). The magnetosonic speed in the inner heliosheath \citep{rei12}, \begin{equation} \sqrt{v_s^2+v_A^2}\approx \sqrt{\frac{5}{3}\frac{P}{\rho}}\approx 430 \textup{ km s}^{-1}, \end{equation} does not change with latitude if a similar plasma pressure (0.12 pPa = 1.2 pdyn cm$^{-2}$) and proton density (640 m$^{-3}$) is assumed for all downwind directions. Towards the Central Tail and Port Lobe, the solar wind speed inside the termination shock is assumed to be 440 km s$^{-1}$ \citep{wha98}.
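Equation \ref{eq:traceback} can be evaluated directly with the distances and speeds quoted above. The following is a minimal numerical sketch (conversion constants and helper names are ours; the corrections for radiation pressure and gravity mentioned in the next paragraph are neglected here):

```python
import math

AU_KM = 1.496e8        # 1 au in km
SEC_PER_YEAR = 3.156e7

def v_ena(e_kev):
    """Hydrogen ENA speed in km/s: v = sqrt(2E/m_p) ~ 437.7*sqrt(E/keV)."""
    return 437.7 * math.sqrt(e_kev)

def traceback_time_yr(e_kev, d_ts, l_ihs, v_sw, v_ms=430.0):
    """Trace-back time (years), Eq. (traceback): solar wind to the
    termination shock, half-way through the inner heliosheath at the
    magnetosonic speed, and the ENA travelling back to the observer.
    Distances in au, speeds in km/s."""
    t = (d_ts / v_sw
         + (l_ihs / 2.0) / v_ms
         + (d_ts + l_ihs / 2.0) / v_ena(e_kev))
    return t * AU_KM / SEC_PER_YEAR

# Central Tail values from the text: d_TS = 110 au, l_IHS = 280 au,
# v_sw = 440 km/s; at 0.11 keV this gives on the order of a decade.
```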
With these speeds and distances, Equation \ref{eq:traceback} yields the trace-back times shown in Table~\ref{tab:trace-back} for the 5 cases we need to distinguish. The effects of solar radiation pressure and gravity on the travel time of low-energy ENAs were considered. The uncertainties stated in Table~\ref{tab:trace-back} are introduced by the radiation pressure changing with solar activity. Two things should be noted from this table of trace-back times. First, trace-back dates for ENAs sampled at 15 eV (the lowest energy bin) from the anti-ram direction in the ecliptic plane (Central Tail or Port Lobe) differ by several years depending on whether they were sampled during solar minimum (2009) or solar maximum conditions (2014). The incoming ENAs at these low energies are mixed together over an entire solar cycle, so the concept of a trace-back time characteristic for a given year and region in the sky cannot distinguish between solar wind emitted during solar maximum and solar minimum. Second, time-series of ram and anti-ram measurements at low energies in the ecliptic plane cannot be directly compared because their corrected energies, and thus trace-back times, are grossly different. This is another reason why we show in Figs.~\ref{fig:timeseries} and \ref{fig:traceback} only the ram measurements from the Port Lobe region. The upper panel in Fig.~\ref{fig:traceback} shows the time-series of ENA intensities measured at 0.11 (dashed lines) and 0.209 keV (dashed-dotted lines) for all four downwind regions versus trace-back date. Orange symbols denote the Port Lobe (ram observations only); blue symbols denote North, red symbols denote South, and green symbols denote Central Tail (only anti-ram observations available). The lower panel of Fig.~\ref{fig:traceback} illustrates the solar activity via the monthly mean sunspot number \citep{wdc08}.
The heliospheric ENA production seems to be anti-correlated with solar activity in the North, South, and Port Lobe regions of the downwind hemisphere. A simple regression analysis reveals that for 7 out of 8 time-series in Fig.~\ref{fig:traceback} the ENA intensity significantly (confidence levels between 2 and 6 sigma) increases with time from trace-back dates 1999--2003 (the previous solar maximum) until the end of the series at 2006--2010 (solar minimum). The time-series for 0.1 keV at the Central Tail does not follow this trend. Towards the North, the ENA increase set in at a trace-back date of 2005; towards the South and Port Lobe, the ENA increase followed somewhat later in 2006--2007. From the anti-correlation of ENA intensity and solar activity, we predict that the ENA intensities at 0.2 keV will remain high until the middle of 2017 before decreasing again, when the trace-back date corresponds to increasing solar activity. The decrease of the 0.1 keV ENA intensity should follow one to two years later. No conclusion can be reached yet with respect to the time variability of the Central Tail: the ENA intensities at 0.1 keV (green dashed line) linked to a trace-back date before 1999 show no tendency to increase with low solar activity. At 0.4 and 0.9 keV, only 1 out of the 8 time-series from the downwind regions exhibits a slope significantly different from zero. In all other cases, a constant fits all 8 seasonal values within the respective error bars. The variations of ENA intensity with time are generally less pronounced at solar wind energies than for 0.08--0.3 keV and for 3--6 keV (see \citet{mcc17} about the temporal evolution over the full sky and \citet{rei16} regarding the polar regions). With 8 years of observations available, both IBEX-Lo and IBEX-Hi can be used to track the imprint of the solar cycle on the heliosheath plasma.
We cannot yet tell which physical process causes the anti-correlation between solar activity and the production of 0.1--0.2 keV ENAs in the heliosheath. But we have seen that the temporal evolution of ENA intensities looks similar for all directions in the downwind hemisphere, with the possible exception of the Central Tail. Via the trace-back time in Equation \ref{eq:traceback}, this implies that the region of ENA production lies at a similar distance towards the poles and towards the flanks of the heliotail. The estimate for the Central Tail may differ from our assumption. In Section \ref{sec:discussions} we will motivate these assumptions by deriving the heliosheath thickness from the total plasma pressure averaged over time. Once the IBEX-Lo time-series cover an entire solar cycle, we can use an updated version of Fig.~\ref{fig:traceback} to optimize the trace-back times and from there the travel distance of ENAs from the downwind hemisphere. \subsection{Interstellar neutral hydrogen observed at the lowest energy bins}\label{sec:isn} The ISN inflow was seen to extend to longitudes $>300^{\circ}$ in the ecliptic plane in the overview figure of energy bin 1 (Fig.~\ref{fig:map}). Looking at the temporal evolution in Fig.~\ref{fig:timeseries} for the Port Lobe, we recognize the ISN signal in energy bin 1 in 2009--2011; it then vanishes in 2012, only to reappear in 2014. Remember that those ENA measurements include only the ram observations. The anti-ram measurements of the same sky direction usually yield no detectable ENA signal (see the black macropixels at 15 eV and 29 eV in Fig.~\ref{fig:macropixelmap_1to4}). Heliospheric ENAs of 15 eV with trace-back times of decades obviously cannot produce an intensity change within two years, so we conclude that this is indeed the outermost part of the seasonal ISN inflow.
The energy of ISN hydrogen is too low to produce any signal above energy bin 2, whereas ISN helium produces a strong signal in IBEX-Lo in all 4 energy bins below 150 eV \citep{sau13}. This can be easily understood from the energy of the ISN particles entering IBEX-Lo. Whereas ISN helium has a maximum relative energy of 130 eV \citep{gal15} for ram measurements, ISN hydrogen will at most have the same velocity as ISN helium but, with one quarter of the mass, will reach only 33 eV. This is close to the central energy of bin 2 at 29 eV. Any non-gravitational influence on the ISN hydrogen trajectory, such as solar radiation pressure, will tend to further slow down hydrogen with respect to helium. As the signal temporarily disappears only in energy bins 1 and 2 but not in energy bins 3 and 4 in Fig.~\ref{fig:timeseries}, we conclude that this signal is mostly ISN hydrogen. In the ecliptic plane it extends to an apparent direction of $336^{\circ}\pm6^{\circ}$, which is even wider than the Warm Breeze of the secondary helium \citep{kub16}. The latter blends into the background around $300^{\circ}\pm6^{\circ}$, judging from count rate maps in analogy to Fig.~\ref{fig:map} for 0.055 keV. The variation of the ISN hydrogen signal from 2009 to 2016 in Fig.~\ref{fig:timeseries} is probably caused by the survival probability for neutral H reaching the inner solar system varying with the solar cycle. \citet{sau13} found that this hypothesis explained the IBEX-Lo observations of ISN hydrogen during the first part of the solar cycle. The rapid recovery of the ISN hydrogen in 2014 is puzzling, however. Based on the solar activity (see Fig.~\ref{fig:timeseries}) we would have expected the signal to recover only in 2016. Possibly, the model used to estimate hydrogen ENA losses due to re-ionization close to 1 au is not accurate enough at these very low energies.
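The mass argument above is simple kinematics: at equal speed, kinetic energy scales linearly with mass, so ISN hydrogen (one quarter of the helium mass) reaches at most a quarter of the 130 eV helium energy. A one-line sketch (the function name is ours):

```python
def max_isn_h_energy_ev(e_he_ev=130.0):
    """Upper limit on the ISN hydrogen energy for ram measurements:
    same speed as ISN helium, but m_H = m_He/4, hence E_H = E_He/4."""
    return e_he_ev / 4.0

# 130 eV / 4 = 32.5 eV, i.e. roughly the 33 eV quoted in the text,
# close to the 29 eV central energy of bin 2.
```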
We cannot verify if the heliospheric ENAs are affected the same way, i.e., whether they also rise rapidly in 2014 in all other downwind regions at 15 eV and 29 eV. At these energies, the anti-ram observations do not allow for a meaningful seasonal analysis. We will devote a future study to this ISN hydrogen signal and how it expands and diminishes over a full solar cycle. For the discussion of heliospheric ENAs in the following section we will disregard the ISN contribution to the Port Lobe below 0.055~keV. \section{Implications for the heliosheath in the downwind direction}\label{sec:discussions} Throughout this discussion we assume that the observed ENAs originated exclusively in the inner heliosheath. Contributions from the outer heliosheath are unlikely because of the increasing heliosheath thickness in downwind direction. The ENA intensities observed with IBEX-Lo from Voyager 1 and Voyager 2 direction in the upwind hemisphere can be reproduced without a contribution of ENAs from the outer heliosheath \citep{gal16}. It is therefore unlikely that ENAs from the outer heliosheath contribute notably to the ENA intensity in the downwind hemisphere. The ENA measurements provide insight into the integrated plasma pressure over the line-of-sight thickness of the source plasma population from which the ENAs are emitted. We repeat the plasma pressure calculation presented by \citet{sch11} (also see \citet{fus12,sch14,rei12,rei16,gal16}) for the new ENA energy spectra averaged over the four regions. 
Because this is based on observations, the integration is done step-wise over each of the energy bins of the instrument; the reference frame of the plasma pressure is heliocentric, i.e., not moving with the plasma bulk flow speed $u_R$ \citep{sch11}: \begin{equation} \Delta P \times l = \frac{2 \pi m^{2}}{3n_{H}} \frac{\Delta E}{E} \frac{j_{\textup{\tiny{ENA}}}}{\sigma(E)} \frac{(v_{\textup{\tiny{ENA}}} + u_R)^4}{v_{\textup{\tiny{ENA}}}} \label{eq:pressure1} \end{equation} \noindent The measured intensity $j_{\textup{\tiny{ENA}}}$ of neutralized hydrogen at a given energy thus translates into the product $\Delta P \times l$ of the pressure of the parent ion population in the heliosheath that is the ENA emission source and the thickness of this ion population source region along the instrument line-of-sight. This pressure includes only the internal pressure of the moving plasma; there is no ram pressure term contributing to the balance for the downwind hemisphere. Equation \ref{eq:pressure1} states that the product of pressure times ENA emission thickness can be derived from observations, but to obtain the two values separately, further assumptions are needed. The equation can be rewritten in the notation of \citet{fus12} as the product of a stationary pressure (the internal pressure in the inertial reference frame with $u_{R}=0$) times a correction factor for the plasma bulk flow velocity with respect to the heliocentric rest frame: \begin{equation} \Delta P \times l = \frac{4 \pi m_{H}}{3 n_{H}} \frac{v_{\textup{\tiny{ENA}}} j_{\textup{\tiny{ENA}}}(E_0)}{\sigma(E_0)} \int^{E_0+\Delta E/2}_{E_0-\Delta E/2} dE \left(\frac{E}{E_0}\right)^{-\gamma} \, c_f \label{eq:pressurebalance} \end{equation} \begin{equation} c_f = \frac{(v_{\textup{\tiny{ENA}}} + u_R)^4}{v_{\textup{\tiny{ENA}}}^4}.
\label{eq:correctionfactor} \end{equation} In Equation \ref{eq:pressurebalance}, $\Delta E$ denotes the width of the respective energy bin and $\gamma$ is the spectral index (see Equation \ref{eq:gamma}). For the typical radial velocity of solar wind in the downwind hemisphere of the inner heliosheath, we assumed $u_R=140$ km s$^{-1}$ everywhere, as measured by Voyager 2 \citep{wha99,sch11} and similar to the range of 100--150 km s$^{-1}$ assumed by \citet{zir16} over the first 100 au beyond the termination shock. More precisely, $u_{R}$ would be a function of heliolatitude, with faster plasma speeds -- up to 225 km s$^{-1}$ -- occurring towards the poles \citep{rei16}. Such a latitudinal dependence could be formulated if we assumed a shock jump of 2.5 everywhere \citep{sch11}. But for this discussion let us assume a constant plasma bulk flow speed for the entire downwind hemisphere. As in previous IBEX-related papers on pressure in the heliosheath, let us first assume a constant density of neutral hydrogen everywhere in the inner heliosheath with $n_H=0.1$ cm$^{-3}$ \citep{sch11,glo15}. We will re-assess this assumption at the end of this section. The charge-exchange cross section $\sigma(E_{0})$ between protons and neutral hydrogen is taken from \citet{lin05}, decreasing from $4\times10^{-15}$ to $2\times10^{-15}$ cm$^{2}$ as the ENA energy increases from 0.015 to 1.821 keV. We applied Eq.~\ref{eq:pressurebalance} to the average ENA intensities in Figure~\ref{fig:energyspectra} and Table~\ref{tab:energyspectra} to calculate the stationary and dynamic pressures for all regions. In the stationary case, the ENAs at solar wind energies would dominate the total pressure balance. For the following discussions and illustrations, however, we will concentrate on the dynamic pressure because the plasma is flowing away from the Sun.
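The bulk-flow correction factor of Equation \ref{eq:correctionfactor} grows rapidly towards low ENA energies, which is why the lowest energy bins are so sensitive to the assumed $u_R$. A minimal sketch (helper names are ours):

```python
import math

def v_ena(e_kev):
    """Hydrogen ENA speed in km/s: v = sqrt(2E/m_p) ~ 437.7*sqrt(E/keV)."""
    return 437.7 * math.sqrt(e_kev)

def c_f(e_kev, u_r=140.0):
    """Correction factor c_f = ((v_ENA + u_R)/v_ENA)**4 from
    Eq. (correctionfactor), with the radial bulk flow speed u_r in km/s."""
    v = v_ena(e_kev)
    return ((v + u_r) / v) ** 4
```

At 15 eV (0.015 keV) with $u_R=140$ km s$^{-1}$ this evaluates to $c_f \approx 170$, whereas at 0.9 keV the factor is only about 3.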
\subsection{Plasma pressure in the heliosheath} To obtain the full plasma pressure $P$ in the heliosheath we would like to integrate the ENA spectrum from zero to infinite energy. As will be shown briefly hereafter, IBEX-Lo usually covers most energies at the lower end that contribute to the total pressure. To assess how much pressure would be added at energies above the IBEX-Lo cut-off at 2.5 keV, we also considered the IBEX-Hi spectra averaged over the four regions (see Tab.~\ref{tab:energyspectra}). As in our previous study \citep{gal16}, the relative uncertainty attributed to IBEX-Hi measurements is 20\% or the standard deviation between the four different regions, whichever is larger. Whereas the spectral index derived from IBEX-Hi spectra is steeper than for IBEX-Lo spectra at the overlapping energy, the dynamic pressure added over the two IBEX-Lo bins 7 and 8 (covering roughly 0.6--2.5 keV), agrees well with the dynamic pressure integrated over IBEX-Hi energy bins 2 to 4 (0.5--2.5 keV): (72 and 67) pdyn cm$^{-2}$ au for the North, (47 and 40) pdyn cm$^{-2}$ au for the South, (74 and 78) pdyn cm$^{-2}$ au for the Central Tail, and (57 and 50) pdyn cm$^{-2}$ au for the Port Lobe. We therefore combined the 6 lower energy bins of IBEX-Lo with the upper 5 energy bins of IBEX-Hi to obtain four composite energy spectra spanning the whole range from 0.01 to 6 keV. These pressure spectra are averages over all 8 years (IBEX-Lo) or 7 years (IBEX-Hi) of observations. The resulting $\Delta P \times l$ per individual energy bin is shown for all four regions in Fig.~\ref{fig:pressurespectrum} (orange triangles up: North, green triangles down: South, red asterisks: Central Tail, blue circles: Port Lobe). The error bars represent the variability within a region; the uncertainties of the plasma bulk flow velocity were neglected. The values of the pressure spectra plotted in Fig.~\ref{fig:pressurespectrum} are stated in Table \ref{tab:pressurespectrum}. 
Table \ref{tab:pressure} gives the corresponding total $P \times l$ over two energy ranges (0.01 to 6 keV and 0.08 to 6 keV). The ENA energies at 0.11 keV and 0.209 keV contribute most to the total pressure for all four downwind regions. We therefore set 150 eV as the relevant energy to discuss the cooling length \citep{sch11,sch14} in the downwind heliosheath. The energies below the roll-over around 100 eV contribute on average little to the total pressure because most macropixels feature no discernible ENA signal above the background (see Fig.~\ref{fig:macropixelmap_1to4}). However, because of the large upper error bars and the large correction factor at low ENA energies ($c_f=170$ at 15 eV), the Central Tail could feature a high upper limit of 1800 pdyn cm$^{-2}$ au if the energy range is extended down to 10 eV. We therefore added the plasma pressure over two different energy ranges in Table \ref{tab:pressure} to indicate the uncertainties at the lowest ENA energies. Before interpreting the derived values of $P \times l$ we would like to caution the reader about the effect of solar activity on the values in Table \ref{tab:pressure}. That table incorporates all available observations, thus blending ENA observations representative for high and low solar activity conditions in the heliosheath. Because IBEX observations do not yet cover an entire solar cycle, we cannot create separate plasma pressure spectra for low and high solar activity over the entire energy range of IBEX-Lo and IBEX-Hi. The trends of ENA intensities at 0.1 and 0.2 keV in Fig.~\ref{fig:timeseries} indicate that the $\Delta P \times l$ decreases by a factor of 3 during high solar activity with respect to the average stated in Table \ref{tab:pressure} and increases by a similar factor for low solar activity conditions. 
Because ENAs of these energies are a dominant contribution to the total plasma pressure (see Fig.~\ref{fig:pressurespectrum}), $P \times l$ varies strongly between the two extreme cases: we estimate that $P \times l$ increases from $300\pm100$ to $1050\pm300$ pdyn cm$^{-2}$ au in the Central Tail (integrated from 0.08--6 keV) as the heliosheath changes from high to low solar activity conditions. The IBEX-Lo observations of 2017 and 2018 will show whether this anti-correlation with solar activity is real: these years combine low solar activity, which allows for a better signal-to-noise ratio, with a trace-back time for 0.1 keV ENAs corresponding to the previous solar minimum. However, we can state with confidence the lower limits of $P \times l$ over the solar cycle. We took the lower limits in Table \ref{tab:pressure} and reduced the contributions at 0.1 and 0.2 keV according to the temporal trend in Fig.~\ref{fig:timeseries}. The results are representative of high solar activity conditions and the optimum signal-to-noise ratio during the first 3 years of IBEX-Lo observations. These lower limits, evaluated from 0.01 to 6 keV, read: 210, 150, 280, and 180 pdyn cm$^{-2}$ au for the North, South, Central Tail, and the Port Lobe, respectively. The main challenge with interpreting the product $P \times l$ is disentangling the two factors; separating them requires an additional assumption, typically that $P$ is constant over $l$. Both parameters are unknown a priori, as no in-situ measurements are available, contrary to the upwind hemisphere observed by the Voyager spacecraft \citep{bur08,dec05,gur13,sto13}. If the plasma pressure is assumed to be similar over all directions in the heliosheath, the numbers in Table \ref{tab:pressurespectrum} translate directly into the thickness of the emission structures along lines of sight in the heliosheath. But what should that total pressure be?
Most studies, no matter if they are based on models or observations, predict a total plasma pressure of 1--2 pdyn cm$^{-2}$ in the inner heliosheath: \citet{liv13} derived a total pressure of 2.1 pdyn cm$^{-2}$ over the entire energy spectrum, comparing the expected plasma pressure from a kappa distribution of protons with the plasma pressure derived from IBEX-Hi energy spectra. These authors found the same value for all sky directions except for the ENA Ribbon towards the nose of the heliosphere. At the lower end of this range, \citet{rei12} and \citet{glo11} derived a total plasma pressure of 1.2 pdyn cm$^{-2}$ from Voyager and early IBEX observations. This pressure is dominated by the heliospheric pickup ions, which contribute $1.0 \pm 0.5$ pdyn cm$^{-2}$ \citep{glo11}. \citet{gal16} derived 1.4 pdyn cm$^{-2}$ for a few regions in the flanks and in the heliotail for the IBEX-Lo energy range from 10 eV to 2.5 keV. \citet{rei16} derived a heliosheath thickness of 160 au for the South pole region and 210 au for the North pole region, using a different approach of analyzing the temporal variability of heliospheric ENAs at several keV. Combined with our values for $P \times l$ in Table~\ref{tab:pressure}, these dimensions imply $P = 1.8^{+1.6}_{-0.3}$ pdyn cm$^{-2}$ from 0.01 to 6 keV towards the North, consistent with the $1.6\pm0.4$ pdyn cm$^{-2}$ towards the South. This reduces to $P = 1.3\pm0.2$ pdyn cm$^{-2}$ from 0.08 to 6 keV for the two polar regions. Applying $P = 1.3$ pdyn cm$^{-2}$ to Table~\ref{tab:pressurespectrum}, we then find $l=215^{+35}_{-22}$ au for the ENA emission thickness towards the Port Lobe and $360\pm85$ au towards the Central Tail. The lower limits of $P \times l = 210$ and 150 pdyn cm$^{-2}$ au towards the North and South, together with these heliosheath thicknesses, imply $P \approx 1.0$ pdyn cm$^{-2}$; hence the heliosheath must be at least 280 au thick in the Central Tail and 180 au towards the Port Lobe at any time during the solar cycle.
The difference of $P \times l$ observed towards North and South confirms the dichotomy found by \citet{rei16} at higher energies (0.5 to 6 keV). This independent confirmation at lower energy indicates that the downwind heliosheath likely extends farther to the North than to the South. The dichotomy already appears in the stationary pressure; the lines of sight would be similar at both poles only if the radial bulk flow velocity towards the South were much higher ($u_R\approx 200$ km s$^{-1}$ instead of the assumed 140 km s$^{-1}$) than towards the North to compensate for the observed two-fold difference via the dynamic correction factor (see Eq.~\ref{eq:correctionfactor}). The minimum ENA emission thickness in the Central Tail region considerably exceeds the lower limits encountered for the other downwind regions (high latitudes or the flank of the heliosheath around $\lambda=0^{\circ}$). The presumed tail of the heliosheath therefore is not visibly deflected from the nominal downwind direction of $76^{\circ}$ ecliptic longitude \citep{mcc15}. This is consistent with Lyman-$\alpha$ observations by \citet{woo14} of interstellar absorption towards nearby stars in the downwind direction. The Port Tail and Starboard Tail lobes show up beside the nominal downwind direction in ENA maps above 1.5 keV energy, but these lobes seem to blend into the globally distributed ENA flux at solar wind energy \citep{mcc13,zir16}. A similar bi-lobate structure at lower energies has not been detected so far. Since the total plasma pressure is dominated by those lower energies, the IBEX data are consistent with a symmetric heliotail not notably offset from the nominal downwind direction.
\subsection{Plasma cooling length and the dimension of the heliosheath} \citet{sch11} introduced the concept of a cooling length $l_{cool}$ in the inner heliosheath as the time scale for plasma ions being neutralized times the plasma bulk flow velocity: \begin{equation} l_{cool} = \frac {u_R}{n_H\sigma v_{\textup{\tiny{ENA}}}} \label{eq:coolinglength} \end{equation} The mean free path length a hydrogen ENA of the same energy can travel before being lost to re-ionization is much longer than that. \citet{sch11} derived from Equation~\ref{eq:coolinglength} a cooling length of 120 au in the inner heliosheath for plasma sampled with ENAs of 1 keV energy, which is appropriate for the upwind heliosheath. For the downwind hemisphere, the energy of 150 eV, relevant for the total pressure (Fig.~\ref{fig:pressurespectrum}), implies $v_{\textup{\tiny{ENA}}}= \sqrt{2E/m} \approx 170$ km s$^{-1}$ and thus $l_{cool}=210$ au. For the lowest energy bin of 15 eV, the cooling length would increase to 430 au. A similar mean free path is found for any heliosheath proton moving through the surrounding neutral hydrogen. The pick-up proton density and the solar wind density are both on the order of only $10^{-3}$ cm$^{-3}$ in the inner heliosheath \citep{ric08,glo11,rei12}. These ions move through neutral hydrogen of density $n_{H}$ = 0.1 cm$^{-3}$, which moves at only 25 km s$^{-1}$ relative to the Sun \citep{mcc15}. Therefore, \begin{equation} l_{neutr} = \frac {1}{n_H \sigma}\approx 220 \textup{ au}. \label{eq:neutralizationlength} \end{equation} The cooling and neutralization lengths of typically 200 au agree for the North, South, and Port Lobe regions with the dimension derived from $P\times l$ within the respective error bars (this study) and derived from time variations of ENAs \citep{rei16}. Both methods rely on ENA measurements and thus can only sample the cooling length of the plasma rather than the full heliosheath thickness.
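The cooling-length estimate above can be reproduced numerically. A short sketch (Python; the charge-exchange cross section $\sigma$ is not quoted explicitly in the text, so the value of $2.6\times10^{-15}$ cm$^2$ near 150 eV used here is an assumption for illustration):

```python
import numpy as np

AU = 1.496e13   # cm
EV = 1.602e-12  # erg
M_H = 1.673e-24 # g, hydrogen mass

# Assumed charge-exchange cross section near 150 eV (illustrative value;
# sigma is energy dependent and not quoted explicitly in the text).
SIGMA = 2.6e-15 # cm^2
N_H = 0.1       # cm^-3, neutral hydrogen density
U_R = 140e5     # cm/s, radial plasma bulk flow in the downwind heliosheath

def v_ena(energy_ev):
    """ENA speed v = sqrt(2E/m) in cm/s."""
    return np.sqrt(2.0 * energy_ev * EV / M_H)

def cooling_length(energy_ev):
    """l_cool = u_R / (n_H sigma v_ENA), Eq. (coolinglength), in au."""
    return U_R / (N_H * SIGMA * v_ena(energy_ev)) / AU

print(f"v_ENA(150 eV) = {v_ena(150) / 1e5:.0f} km/s")     # ~170 km/s
print(f"l_cool(150 eV) = {cooling_length(150):.0f} au")   # ~210 au
```

With this cross section the 150 eV numbers of the text (about 170 km s$^{-1}$ and 210 au) are recovered; the neutralization length of Eq.~\ref{eq:neutralizationlength} corresponds to a slightly larger $\sigma$ appropriate for the slower heliosheath protons.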
However, \citet{rei16} also noted that these cooling lengths derived from ENA observations are similar to model predictions for the full heliosheath thickness, ranging from 100 au \citep{pog13} up to 240 au \citep{izm09} for the North pole region. This implies that in these directions IBEX may sample plasma all the way from the termination shock to the heliopause. The pressure method can be used to derive the total heliosheath thickness there if the contributions from low energies around 100 eV are taken into account. Otherwise, the total pressure is underestimated, as shown by \citet{sch11} ($P \times l = 72 $ pdyn cm$^{-2}$ au for the IBEX-Hi energy range). The ENA measurements from the Central Tail region suggest that the local neutral hydrogen density is lower than the hitherto assumed 0.1 cm$^{-3}$: the emission thickness derived from $P \times l$ exceeds the cooling and neutralization lengths by a factor of $1.7\pm0.4$. The excess cannot be remedied by invoking major contributions to the total pressure from energies below 80 eV: typical energies of 15 or 50 eV would stretch the cooling length to roughly 400 au, but this assumption would also increase $P \times l$ (Equation~\ref{eq:correctionfactor}). A region of depleted neutral hydrogen density $n_{H}<0.1$ cm$^{-3}$ extending into the heliosheath around the downwind direction offers the simplest solution to the discrepancy (refer to Equations \ref{eq:coolinglength} and \ref{eq:neutralizationlength}). The existence of such a depletion is predicted by state-of-the-art global models of the heliosphere \citep[e.g.,][]{izm09,hee14}. The magnitude of the depletion depends on the strength of the interstellar magnetic field \citep{hee14}.
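The neutral-density scaling implied by this discrepancy is simple: at fixed ENA energy, $l_{cool}\propto 1/n_H$ (Eq.~\ref{eq:coolinglength}), so an emission thickness exceeding the nominal 210 au cooling length requires a proportionally lower density. A minimal sketch (Python; the thickness values are those derived earlier from $P \times l$ with $P=1.3$ pdyn cm$^{-2}$):

```python
# At fixed energy, l_cool * n_H = const, so an inferred emission thickness l
# exceeding the nominal cooling length (210 au at n_H = 0.1 cm^-3) requires
#     n_H(required) = 0.1 cm^-3 * (210 au / l).

N_H_NOMINAL = 0.1       # cm^-3
L_COOL_NOMINAL = 210.0  # au, cooling length for 150 eV ENAs at n_H = 0.1

def required_density(l_au):
    """Neutral density [cm^-3] needed for a cooling length of l_au."""
    return N_H_NOMINAL * L_COOL_NOMINAL / l_au

print(f"l = 280 au -> n_H <= {required_density(280):.3f} cm^-3")
print(f"l = 358 au -> n_H ~  {required_density(466 / 1.3):.3f} cm^-3")
print(f"l = 770 au -> n_H ~  {required_density(1000 / 1.3):.3f} cm^-3")
```

The three cases (lower limit, 8-year average, and upper limit of $P \times l$ towards the Central Tail) yield roughly 0.075, 0.06, and 0.03 cm$^{-3}$, spanning the depletion range discussed below.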
The ENA spectra presented here suggest that the neutral hydrogen density in the Central Tail direction cannot exceed $0.1 \times 210\,\mathrm{au}/280\,\mathrm{au} = 0.075$ cm$^{-3}$, and more likely drops to 0.06 cm$^{-3}$ if we rely on the 8-year average $P\times l= 466$ pdyn cm$^{-2}$ au and $P =1.3$ pdyn cm$^{-2}$. The lower limit could be as low as 0.02 cm$^{-3}$ if we apply the upper limit of roughly 1000 au from Table~\ref{tab:pressure}. This range of possible neutral densities agrees well with the $\sim 0.06$ cm$^{-3}$ predicted by the simulation of \citet{hee14} for the central heliotail for an interstellar magnetic field strength of 3 $\mu$G. This field strength is consistent with the $2.93\pm0.08$ $\mu$G determined by \citet{zir16b} based on the geometry of the ENA Ribbon. \citet{sch11} argued that since the ENA emission thickness cannot exceed the cooling length, the dimension implied by ENA observations was just the cooling length, and $d_{TS}$ in turn had to be smaller than 145 au. With our new analysis, however, this argument no longer limits the termination shock distance to $d_{TS}<200$ au. Our analysis yields a total plasma pressure outside the termination shock of 1.7 pdyn cm$^{-2}$ with a lower limit of 1.0 pdyn cm$^{-2}$ towards the heliotail. Inside the termination shock, the total pressure of the solar wind is dominated by the ram term, with the internal pressure of pickup ions and the magnetic pressure the next two smaller contributions at heliocentric distances beyond 50 au \citep{wha98}. Here we compare only the ram pressure term with the lower limit of the heliosheath plasma pressure. Because the solar wind density, and thus the ram pressure, drops with the square of the heliocentric distance, the termination shock must be rather close to the Sun also in the downwind hemisphere.
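The closing argument, that the $r^{-2}$ decline of the solar wind ram pressure forces the termination shock to lie close to the Sun, can be checked numerically. A sketch assuming the typical 100 au solar wind values cited from \citet{wha98}:

```python
import numpy as np

M_H = 1.673e-24  # g, hydrogen mass

def ram_pressure_pdyn(n_p, v_sw_kms):
    """Dynamic pressure P = (n_p m_H / 2) v_sw^2 in pdyn cm^-2."""
    return 0.5 * n_p * M_H * (v_sw_kms * 1e5) ** 2 / 1e-12

# Typical solar wind values at 100 au (Whang et al. 1998, as cited in text)
P100 = ram_pressure_pdyn(1e-3, 340.0)
print(f"P(100 au) = {P100:.2f} pdyn cm^-2")  # ~1.0, the lower pressure limit

# Since n_p ~ r^-2, the distance where the ram pressure drops to a target
# heliosheath pressure scales as r = 100 au * sqrt(P(100 au) / P_target).
def ts_distance(p_target_pdyn):
    """Heliocentric distance [au] where the ram pressure equals P_target."""
    return 100.0 * np.sqrt(P100 / p_target_pdyn)

print(f"d_TS for 1.0 pdyn cm^-2: {ts_distance(1.0):.0f} au")
```

With these inputs the ram pressure at 100 au already matches the 1.0 pdyn cm$^{-2}$ lower limit, placing the downwind termination shock near 100 au, consistent with the distances discussed next.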
We obtain the required minimum pressure of \begin{equation} P = \frac{n_{p} m_{H}}{2}v_{sw}^2 = 0.1 \textup{ pPa} = 1.0 \textup{ pdyn cm}^{-2} \end{equation} with a total proton density of $n_{p} = 10^{-3}$ cm$^{-3}$ and a bulk flow speed of 340 km s$^{-1}$, which are typical values at a distance of 100 au \citep{wha98}. This is consistent with the 115 au derived for 1.0~pdyn~cm$^{-2}$ by \citet{sch11}, who also included the internal pressure. These distances are similar to $d_{TS}$ = 130 au at the north pole and $d_{TS}$ = 110 au at the south pole \citep{rei16}. In summary, IBEX observations argue for a rather spherical shape of the termination shock (similar, e.g., to the model of \citet{izm15}). \section{Conclusions}\label{sec:conclusions} We have aggregated all observations of heliospheric ENAs from the downwind hemisphere at low energies measured during the first 8 years of the IBEX mission. \begin{itemize} \item We confirmed previous studies on heliospheric ENAs: the spectral index of ENA intensity depends on heliolatitude at solar wind energy, but this heliolatitudinal ordering disappears below 0.9 keV. The ENA energy spectrum probably has a knee around 0.8 keV and rolls over around 0.1 keV in all downwind directions. \item We have seen the first indication of temporal changes for low energy ENAs (0.1 and 0.2 keV). The apparent anti-correlation with the solar cycle must be revisited once a full solar cycle has been covered. At solar wind energy (0.4 -- 2.5 keV) IBEX-Lo data are insufficient to verify the 30\% changes with time observed with IBEX-Hi. \item The ISN hydrogen signal recovered already in 2014 during solar maximum conditions, earlier than expected. \item Composite energy spectra from 10 eV to 6 keV for the dynamic pressure times ENA emission thickness in the heliosheath have been compiled for the first 8 years of IBEX-Lo and 7 years of IBEX-Hi data.
In the downwind hemisphere, the protons giving rise to heliospheric ENAs around 0.1 keV dominate the total plasma pressure. The study of these ENAs is therefore crucial to understand the vast regions of the heliosphere where no in-situ observations are available. \item Our observations at low energies confirm that the heliosheath towards Southern latitudes is compressed compared to Northern latitudes and to the Port Lobe. \item The dynamic pressure of the plasma in the heliosheath reaches 1.7 pdyn cm$^{-2}$ integrated from 10 eV to 6 keV for any direction in the downwind hemisphere; the lower limit is 1.0 pdyn~cm$^{-2}$. \end{itemize} As a consequence, the thickness of the plasma structures responsible for the emission of ENAs in the heliosheath reaches 150--210 au towards the poles and the flanks, which is similar to the cooling length of the plasma. Since these dimensions agree with other observational methods and with model predictions for the total heliosheath thickness, IBEX possibly samples ENAs from plasma all the way from the termination shock to the heliopause in all directions except the Central Tail. Here, the plasma pressure from ENA spectra implies an ENA emission thickness of at least 280 au. This region coincides with the nominal downwind direction around $\lambda_{ecl}=76^{\circ}$ longitude. The heliosheath therefore is extended around the downwind direction compared to the flanks (by at least a factor of 1.4). The upper limit of this shape factor is ill constrained: ENA intensities measured below 0.1 keV from the anti-ram direction are affected by large uncertainties, and the heliosheath could be thicker than the plasma cooling length in this region. The derived ENA emission thickness along the IBEX line-of-sight indicates that the neutral hydrogen is depleted towards the heliotail with respect to other heliosheath regions, to densities between 0.02 and 0.075 cm$^{-3}$.
We eagerly await the next three years of IBEX observations, through 2019, which will allow us to cover the temporal evolution of heliospheric ENAs over one full solar cycle. The present data suggest that the ENA intensities around 0.1 keV from the downwind hemisphere anti-correlate with solar activity. If confirmed, this implies periodic changes in the plasma pressure and/or the heliosheath dimensions in the downwind hemisphere. \textit{Acknowledgements.} We thank all of the outstanding men and women who have made the IBEX mission such a wonderful success. A.G. and P.W. thank the Swiss National Science Foundation for financial support. M.B., M.A.K., and J.M.S. were supported by Polish National Science Center grant 2015-18-M-ST9-00036. H.K., E.M., N.S., H.O.F., S.A.F., and D.J.M. were supported by the NASA Explorer program as a part of the IBEX mission.
\section{Introduction} \input{./Introduction.tex} \section{Materials and Methods} One FHD patient and one healthy subject (HS) were involved in this pilot study. The EEG was recorded from one monopolar EEG channel placed on C3, the standard location of the International 10-20 System over the left hemisphere, corresponding to the brain region related to the functioning of the (dominant) right hand. The EMG was recorded from one bipolar channel placed on the \emph{abductor pollicis brevis}, the intrinsic hand muscle responsible for the abduction of the thumb. Both signals were sampled with a sampling frequency of 1 kHz and quantized at 16 bit. In the experiment, the participants were sitting quietly on a comfortable chair in front of a screen placed on a table 1 m in front of them. They were simply required to rest with open eyes for about 3 minutes with their limbs laying on the table in front of them. At a first glance, it was possible to assess a clear difference between the two EMG signals: in the FHD patient, the amplitude of the signal assumed values up to $\pm$ 200 \textmu V, while its power spectral density (PSD) took significant values in the frequency band $[5$, $200]$ Hz. In the case of the HS, however, the amplitude of the EMG signal did not exceed $\pm$ 20 \textmu V, with a significant PSD extending from $5$ to $50$ Hz. In the offline analysis, signals were preprocessed to limit their frequency range to the band of interest. Specifically, the EEG was filtered through an elliptic filter of order $24$ with a passband of $[4,45]$ Hz. The EMG was processed by a high-pass elliptic filter of order $11$ with cut-off frequency at $5$ Hz. A series of notch filters of order $14$, with cut-off frequencies at $50$ Hz and its subsequent harmonics up to $350$ Hz, was applied to reduce the effect of the mains.
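The preprocessing chain above can be sketched with standard filter-design routines (Python/SciPy). The passband ripple and stopband attenuation are not stated in the text, so the values 0.5 dB / 40 dB below are assumptions; note that SciPy doubles the design order for band-pass filters, so $N=12$ yields the stated overall order 24:

```python
import numpy as np
from scipy import signal

FS = 1000  # Hz, sampling frequency of the EEG/EMG recordings

# EEG band-pass [4, 45] Hz: elliptic, overall order 24 (N=12 doubled for a
# band-pass design). Ripple/attenuation (0.5 dB / 40 dB) are assumed values.
sos_eeg = signal.ellip(12, 0.5, 40, [4, 45], btype="bandpass", fs=FS,
                       output="sos")

# EMG high-pass: elliptic, order 11, cut-off 5 Hz.
sos_emg = signal.ellip(11, 0.5, 40, 5, btype="highpass", fs=FS, output="sos")

# Notch filters at the mains frequency and its harmonics up to 350 Hz
# (quality factor Q is an assumed value).
notches = [signal.iirnotch(f0, Q=30, fs=FS) for f0 in range(50, 351, 50)]

# Sanity check of the EEG band-pass response: near-unity gain in the
# passband, strong attenuation below the 4 Hz edge.
w, h = signal.sosfreqz(sos_eeg, worN=4096, fs=FS)
gain = np.abs(h)
print(f"|H(20 Hz)| = {gain[np.argmin(np.abs(w - 20))]:.3f}")  # passband
print(f"|H(2 Hz)|  = {gain[np.argmin(np.abs(w - 2))]:.1e}")   # stopband
```

A signal would then be filtered with `signal.sosfiltfilt(sos_eeg, x)` (zero-phase) or `signal.sosfilt` (causal), followed by the notch cascade.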
Then, CMC as well as cross-correlation have been computed between the EEG (otherwise labelled as signal $x[m]$) and the EMG (otherwise labelled as signal $y[m]$) signals, in order to assess the quantitative relationship between them, both in the frequency domain and in the time domain. \subsection{Frequency domain analysis: cortico-muscular coherence} The coherence of two discrete-time signals $x[m]$ and $y[m]$, regarded as stochastic processes, is given by: \begin{equation} {\rm Coh}_{xy}(f) \triangleq \frac{\mathcal{P}_{xy}(f)}{\sqrt{\lvert \mathcal{P}_x(f) \rvert} \cdot \sqrt{ \lvert \mathcal{P}_y(f) \rvert}}, \end{equation} where $\mathcal{P}_{x}(f)$ is the PSD of $x[m]$ and $\displaystyle \mathcal{P}_{xy}(f)=\frac{1}{n} \sum_{i=1}^{n} X_i(f)Y_i^*(f)$ is the cross-power spectral density (CPSD) between $x[m]$ and $y[m]$. In order to assess statistical significance, a confidence level $CL$ of $95$ $\%$, i.e. a critical level of $\alpha=0.05$, was obtained from the following formula \cite{Mima1999}: \begin{equation} CL = 1-\alpha^{\frac{1}{N-1}}, \end{equation} where $N$ is the number of signal segments used to estimate the coherence value. The PSD, the CPSD and the coherence values were estimated via the Fast Fourier Transform (FFT)-based Welch's method: specifically, the segment length was set to $L = 1024$ samples ($1.024$ s) and the number of FFT points to $1024$ samples. Border effects were mitigated by a sliding Hann window with $50$ $\%$ overlap \cite{Mitra2007}. \subsection{Time domain analysis: cross-correlation function} Generally speaking, given two discrete-time signals $x[m]$ and $y[m]$, their cross-correlation function is defined as: \begin{equation} {\tt r}_{xy}[n] \triangleq \sum_{m=-\infty}^{+\infty} x^*[m] y[n+m]. \end{equation} Cross-correlation is particularly useful to evaluate the similarity between two signals as a function of the time shift $n$ (expressed in number of samples) of the second signal behind the first one.
In the present analysis, the absolute value of the cross-correlation between the EEG and the EMG signals, computed at its maximum and normalized by the square root of the product of the signal energies $E_x$ and $E_y$, was evaluated. Therefore, the quantity \begin{equation} \displaystyle {\tt r}_{max}=\frac{\max_n \lvert {\tt r}_{xy}[n] \rvert}{\sqrt{E_x E_y}} \end{equation} was considered as a measure of similarity between the two signals. Moreover, the lag $n$ at which the maximum was found was taken as a measure of the transmission delay from the brain to the muscle, i.e. the time taken for a motor command to travel from its origin in the central nervous system to the target effector at the periphery. In particular, $71$ pairs of EEG and EMG signals were extracted from the whole EEG and EMG recordings of the FHD patient. They were selected empirically as examples of bursty EMG activity (with their corresponding EEG). The duration of these signals was variable ($0.70 \pm 0.66$ s): all of them were included in the analysis to take into account the variability of the burst events. \begin{figure}[t] \centering \includegraphics[width=0.5\textwidth]{coerenza_sano.pdf} \caption{Absolute value of CMC for the healthy participant ($CL=0.067$ with $N=44$ and $\alpha=0.05$).} \label{fig:coerenza_sano_totale} \end{figure} \begin{figure}[] \centering \includegraphics[width=0.5\textwidth]{coerenza_patologico.pdf} \caption{Absolute value of CMC for the pathological subject ($CL=0.008$ with $N=393$ and $\alpha=0.05$).} \label{fig:coerenza_patologico_totale} \end{figure} To support the physiological meaning of the EEG-EMG coherent components, the cross-correlation function was computed between the narrow-band EEG signals filtered in the high $\beta$ band, i.e. between $26$ and $31$ Hz, and the EMG signal limited to $250$ Hz by a band-pass filter with frequency band $[5, 250]$ Hz.
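Both analyses can be sketched with SciPy (Python). Note two caveats: `scipy.signal.coherence` returns the magnitude-squared coherence, whereas the figures here plot $\lvert{\rm Coh}_{xy}\rvert$; and the significance threshold $1-\alpha^{1/(N-1)}$ with $\alpha=0.05$ reproduces the CL values quoted in the figure captions (e.g. $N=44\rightarrow0.067$). The synthetic signals below are illustrative, not the study's data:

```python
import numpy as np
from scipy import signal

FS, NPERSEG = 1000, 1024  # Hz; Welch segment length as in the text

def cmc(x, y):
    """Magnitude-squared coherence (Welch, Hann window, 50% overlap) and
    the 95% confidence level CL = 1 - alpha**(1/(N-1)), alpha = 0.05."""
    f, coh = signal.coherence(x, y, fs=FS, window="hann",
                              nperseg=NPERSEG, noverlap=NPERSEG // 2)
    n_seg = (len(x) - NPERSEG // 2) // (NPERSEG // 2)  # Welch segment count
    cl = 1.0 - 0.05 ** (1.0 / (n_seg - 1))
    return f, coh, cl

def max_corr_and_lag(x, y):
    """Normalized peak of |r_xy[n]| and the lag (in samples) where it
    occurs; a positive lag means y is delayed with respect to x."""
    r = signal.correlate(y, x, mode="full")
    lags = signal.correlation_lags(len(y), len(x), mode="full")
    k = np.argmax(np.abs(r))
    return np.abs(r[k]) / np.sqrt(np.sum(x**2) * np.sum(y**2)), lags[k]

rng = np.random.default_rng(0)

# Coherence check: two noisy signals sharing a 20 Hz rhythm.
t = np.arange(30 * FS) / FS
common = np.sin(2 * np.pi * 20 * t)
x1 = common + rng.standard_normal(t.size)        # EEG-like
y1 = 0.5 * common + rng.standard_normal(t.size)  # EMG-like
f, coh, cl = cmc(x1, y1)
i20 = np.argmin(np.abs(f - 20))
print(f"coherence at 20 Hz: {coh[i20]:.2f} (CL = {cl:.3f})")

# Lag check: a broadband signal delayed by 12 samples (12 ms at 1 kHz).
x2 = rng.standard_normal(2000)
y2 = np.concatenate([np.zeros(12), x2[:-12]])
r_max, lag = max_corr_and_lag(x2, y2)
print(f"r_max = {r_max:.2f}, lag = {lag} samples = {1000 * lag / FS:.0f} ms")
```

The coherence peak at the shared frequency clears the confidence level, and the cross-correlation recovers the imposed 12 ms delay; the sign convention of the recovered lag depends on the argument order, which should be kept in mind when interpreting the negative mean lag reported below.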
\section{Results} \subsection{EEG-EMG coherence} In this section the results of the CMC measure are shown for both the HS and the FHD patient. In the case of the healthy participant, the CMC spectrum can be seen in \figurename~\ref{fig:coerenza_sano_totale}. It has to be noted that peaks above the confidence level can be observed only in the frequency range between $5$ and $20$ Hz, with a particularly strong coherence at $20$ Hz. On the other hand, the CMC spectrum of the FHD patient is reported in \figurename~\ref{fig:coerenza_patologico_totale}. It can be observed that a larger frequency band contributes to the coherence between the EEG and EMG signals. It is also important to highlight the presence of peaks in the upper side of the spectrum, i.e. $[20, 45]$ Hz. This is probably due to the larger bandwidth of the pathological EMG of the patient, as mentioned in the previous section. In order to confirm our hypothesis, we selected a portion of the whole recorded data where bursts mostly affected the EMG signal and evaluated the CMC in this particular case. As a further support, we selected another portion of the EMG signal where healthy-like activity could be observed and computed the CMC as well. Two typical examples of both situations are reported next. \figurename~\ref{fig:125_coerenza} shows the coherence result when comparing two chunks of the EEG and EMG signals for the healthy-like case. Here, two main peaks can be seen at the frequencies of $8$ Hz and $18$ Hz, but no significant coherence values at frequencies higher than $30$ Hz. \begin{figure}[] \centering \includegraphics[width=0.5\textwidth]{1_2-5_coerenza.pdf} \caption{CMC, in absolute value, between chunks of healthy-like EEG and EMG for the pathological subject ($CL=0.776$ with $N=3$ and $\alpha = 0.05$).} \label{fig:125_coerenza} \end{figure} On the contrary, \figurename~\ref{fig:117120_coerenza} reports the coherence spectrum in the case of bursts-affected chunks.
Significantly, the figure shows that coherence values at low frequencies are heavily reduced, whereas peaks around $20$ and $35$ Hz appear; this supports the hypothesis that higher-frequency components are related to the bursty EMG activity in the FHD patient. \begin{figure}[] \centering \includegraphics[width=0.5\textwidth]{117_120_coerenza.pdf} \caption{CMC, in absolute value, between chunks of bursts-affected EEG and EMG for the pathological subject ($CL=0.451$ with $N=6$ and $\alpha = 0.05$).} \label{fig:117120_coerenza} \end{figure} \subsection{EEG-EMG cross-correlation} As mentioned above, the cross-correlation of EEG and EMG was computed to investigate the physiological reliability of the relationship between the high $\beta$ band EEG component and the EMG. \figurename~\ref{fig:isto_maxnorm} reports the empirical distribution of the maximum values of the cross-correlation function found from the $71$ pairs of EEG and EMG signals. The mean value was $0.683$, with a variance of $0.0293$. This result shows a strong connection between the narrow-band EEG and the EMG, as indicated by the high average value. \begin{figure}[t] \centering \includegraphics[width=0.5 \textwidth]{isto_max_norm.pdf} \caption{Empirical distribution of the maximum value of the cross-correlation function of the $71$ pairs of EEG and EMG signals of the FHD patient.} \label{fig:isto_maxnorm} \end{figure} Finally, \figurename~\ref{fig:isto_lag} displays the empirical distribution of the lag where the maximum value of the cross-correlation function of the $71$ pairs of EEG and EMG signals was found. The mean lag was $-11.65$ ms. As neural impulses propagate at a speed of about $100$ m/s, the transmission of signals from the brain to the hand muscles can be estimated at about $10$ ms, which is in line with the results we achieved.
The standard deviation is considerably high (about $100$ ms), but this can be explained by the limited size of the data sample. Indeed, we expect that the tendency observed in this study would be further confirmed (with a reduced standard deviation) with an increased sample size. \begin{figure}[t] \centering \includegraphics[width=0.5 \textwidth]{isto_lag.pdf} \caption{Empirical distribution of the lags where the maximum of the cross-correlation was found.} \label{fig:isto_lag} \end{figure} \section{Discussion} \input{./Discussion.tex} \section{Conclusions} The aim of this study was to investigate bursts-related EEG signals in a focal hand dystonia patient. Rather than treating time-domain and frequency-domain techniques as mutually exclusive analyses, in this contribution we have taken advantage of both: in the frequency domain, cortico-muscular coherence was used to identify the most likely frequency bands of interaction between brain and muscles; then, in the time domain, cross-correlation was exploited to verify the physiological reliability of such a relationship in terms of the signal transmission delay from the centre to the periphery. The most interesting result suggests that the high $\beta$ band activity in the EEG could be responsible for the bursty activity observed in the EMG. Even though a future study on a larger sample is needed to statistically support these preliminary findings, this contribution suggests new kinds of rehabilitation interventions for focal hand dystonia patients that could target the actual EEG correlate of the pathology, with a consequent improvement of the motor functions.
\section{Introduction} A quanto CDS is a flavor of a credit default swap. Its special feature is that, in case of default, all payments, including the swap premium and/or the cashflows, are made not in the currency of the reference asset but in another one. As mentioned in \cite{Brigo}, a typical example would be a CDS whose reference is a dollar-denominated bond for which the premium of the swap is payable in Euros. In case of default, the payment equals the recovery rate on the dollar bond payable in Euro. This can also be seen as a CDS written on a dollar bond, with its premium payable in Euro. These types of contracts are widely used to hedge holdings in bonds or bank loans that are denominated in a foreign currency (other than the investor's home currency); for more detail, see, e.g., \cite{Brigo,isvIJTAF} and references therein. To illustrate, in Fig.~\ref{histSpread} quanto CDS spreads computed by using historical time series\footnote{The CDS data are from Markit.} of some European 5Y sovereign CDS traded in both Euro and USD are presented for the period from 2012 to January 2018. It can be seen that nowadays these spreads can still reach 400 bps and more for some countries, and thus building a comprehensive model capable of predicting quanto effects remains a problem of current interest. \begin{figure}[!t] \centering \fbox{\includegraphics[width=0.8\textwidth]{quantoCDSpreads}} \caption{Quanto CDS spreads (in bps) computed based on historical time-series data of some European 5Y sovereign CDS traded in both Euro and USD.} \label{histSpread} \end{figure} In \cite{ACS2017} quanto CDS spreads, defined as the difference between the USD- and EUR-denominated CDS spreads, are presented for six countries from the Eurozone: Belgium, France, Germany, Ireland, Italy, and Portugal, at maturities of 3, 5, 7, 10, and 15 years relative to the 1 year quanto spread. These data show spreads reaching 30 bps at 15 years (France, Ireland).
In \cite{Simon2015} the 5 year quanto CDS spreads are reported for Germany, Italy and France over the period from 2004 to 2013, which, e.g., for Italy could reach 500 bps in 2012. The results presented in \cite{Brigo} indicate a significant basis across domestic and foreign CDS quotes. For instance, for Italy a USD CDS spread quote of 440 bps can translate into a EUR quote of 350 bps in the middle of the Euro-debt crisis in the first week of May 2012. More recent basis spreads (from June 2013) between the EUR quotes and the USD quotes are in the range of 40 bps. On the modeling side, \cite{Brigo} proposed a model for pricing quanto CDS based on the reduced-form model for credit risk. They represent the default time as a Cox process with explicit diffusion dynamics for the default intensity/hazard rate and an exponential jump to default, similar to the approach of \cite{ES2006, Mohammadi2006}. They also introduce an explicit jump-at-default in the FX dynamics. This provides a more efficient way to model credit/FX dependency, as the results of simulation are able to explain the observed basis spreads during the Euro-debt crisis, whereas accounting for the instantaneous correlation between the driving Brownian motions of the default intensity and the FX rate alone is not sufficient to do so. In \cite{isvIJTAF} the framework of \cite{Brigo} was extended by introducing stochastic interest rates and jump-at-default in both the FX and foreign (defaulted) interest rates. The authors investigated the relative contribution of both jumps to the magnitude of the quanto CDS spread. The results presented in the paper qualitatively explain the discrepancies observed in the market values of CDS spreads traded in domestic and foreign economies and, accordingly, denominated in the domestic (USD) and foreign (Euro, Ruble, Brazilian Real, etc.) currencies.
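The mechanism behind the jump-at-default can be illustrated with a toy Monte Carlo sketch (Python). This is not the model of \cite{Brigo} or \cite{isvIJTAF}; it assumes zero interest rates, a driftless lognormal hazard rate, and an FX rate that is flat except for a devaluation jump of size $\gamma$ at default, purely to show how the jump alone creates a quanto basis even with zero diffusion correlation:

```python
import numpy as np

# Toy Cox-process illustration of the jump-at-default quanto effect:
# zero rates, lognormal hazard paths, FX flat except a jump at default.
rng = np.random.default_rng(42)
n_paths, T, n_steps = 50_000, 5.0, 60
dt = T / n_steps

lam0, sigma_y = 0.02, 0.4  # initial hazard rate and its log-volatility
gamma = -0.30              # FX jump at default (30% devaluation)
R, X0 = 0.4, 1.10          # recovery rate; USD per unit of foreign currency

# Driftless lognormal hazard-rate paths
dW = rng.standard_normal((n_paths, n_steps)) * np.sqrt(dt)
lam = lam0 * np.exp(np.cumsum(sigma_y * dW - 0.5 * sigma_y**2 * dt, axis=1))

# Cox default: tau < T iff the integrated hazard exceeds an Exp(1) draw
Lambda_T = (lam * dt).sum(axis=1)
default = Lambda_T >= rng.exponential(size=n_paths)
p_def = default.mean()

# Protection legs per unit notional, both valued in USD: the domestic leg
# pays (1-R) in USD; the foreign leg pays (1-R) in foreign currency,
# converted at the post-jump FX rate X0*(1+gamma).
pv_dom = (1 - R) * p_def
pv_for = (1 - R) * X0 * (1 + gamma) * p_def

print(f"P(default by {T:.0f}y) = {p_def:.3f}")
print(f"foreign/domestic protection value ratio = {pv_for / (X0 * pv_dom):.2f}")
```

With these assumptions the foreign-paid protection is worth exactly $(1+\gamma)$ times the domestic one, so the foreign CDS commands a lower spread, which is the qualitative quanto basis discussed above.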
The quanto effect (the difference between the prices of the same CDS contract traded in different economies, but represented in the same currency) can, to a great extent, be explained by the devaluation of the foreign currency. This would yield a much lower protection payout if converted to US dollars. These results are similar to those obtained in \cite{Brigo}. However, in \cite{Brigo} only constant foreign and domestic interest rates are considered, while in \cite{isvIJTAF} they are stochastic even in the no-jumps framework. In contrast to \cite{Brigo}, in \cite{isvIJTAF} the impact of the jump-at-default in the foreign interest rate, which could occur simultaneously with the jump in the FX rate, was also analyzed. The authors found that this jump is a significant component of the process and is able to explain about 20 bps of the basis spread value. They also show that the jumps in the FX rate $z$ and the foreign interest rate $\hat{r}$ have opposite effects. In other words, devaluation of the foreign currency will decrease the value of the foreign CDS, while an increase of $\hat{r}$ will increase the foreign CDS value. The influence of other important model parameters, such as the correlations between the log-hazard rate $y$ and the factors that incorporate jumps, e.g., $\rho_{yz}$ and $\rho_{y\hat{r}}$, and the volatilities of the log-hazard process $\sigma_y$ and the FX rate $\sigma_z$, was also investigated in \cite{isvIJTAF}, with the conclusion that they have to be properly calibrated. The other correlations contribute only slightly to the basis spread value. Large values of the volatilities can in some cases explain up to 15 bps of the basis spread value. Although the model of \cite{isvIJTAF} is rather complex and capable of qualitatively explaining the behavior of the quanto CDS spreads observed in the market, it could be further improved in several ways.
First, it has been known since the Lehman Brothers bankruptcy that the recovery rates can vary significantly right before or at default. For instance, in \cite{Brigo2015} this is modeled by considering a time-dependent (piecewise constant) recovery rate. However, such an approach does not take into account possible correlations of the recovery rate with the other stochastic drivers of the model. Meanwhile, the existence of such correlations is reported in the literature, especially for the correlation between the default (hazard) and recovery rates, \cite{Altman2002}. For instance, in \cite{Witzany2013} a two-factor model of \cite{Rosch2009} is used to capture the recovery rate variation and its correlation with the rate of default. The author suggests two approaches to estimate the model parameters: one based on time series of the aggregate default and recovery rates, and a cross-sectional approach based on exposure-level data. The results (based on the Moody's DRD database) confirm not only significant variability of the recovery rate, but also a significant correlation, above 50\%, between the rate of default and the recovery rates in the context of the model. Therefore, in this paper we let the recovery rate be stochastic. At the same time, to reduce the complexity of the model, we relax the assumption of \cite{isvIJTAF} about stochasticity of the domestic interest rate. This is because, as shown in \cite{isvIJTAF}, the volatility of the domestic interest rate does not contribute much to the value of the quanto CDS spread. Thus, our model still has four stochastic drivers, as in \cite{isvIJTAF}. Another modification as compared with \cite{isvIJTAF} is related to the numerical method used to compute the CDS prices.
Since the pricing problems in \cite{isvIJTAF} are formulated via backward partial differential equations (PDEs), computation of the CDS price for every maturity $T$ requires solving as many independent backward problems as there are time steps from $0$ to $T$. This can be improved in two ways. The first option is to work with the forward equation for the corresponding density function instead of the backward PDE. In that case all solutions $U_t(t_i), \ i \in [0,m], \ t_0 = 0, t_m = T$ for all times $t_i$ can be computed in one sweep, i.e., by a marching method. This situation is similar to that in option pricing, where the forward equation is useful for calibration as it allows computation of the option smile in one sweep\footnote{In other words, given the spot price the option prices for multiple strikes and maturities could be computed by solving just one forward equation.}. In contrast, solving the backward equation is useful for pricing, as given the strike it allows computation of the option prices for various initial spot values in one sweep, see, e.g., \cite{Itkin2014b}. However, we do not use this approach in this paper, and our results obtained by using the forward approach will be reported elsewhere. The other way to significantly accelerate the calculations is to keep solving the backward equations, but to parallelize the computation. Indeed, as we demonstrate below in the paper, the solutions $U_t(t_i), \ i \in [0,m], \ t_0 = 0, t_m = T$ for all times $t_i$ can be found in parallel. This is the approach we adopt in this paper.
Finally, for solving our systems of PDEs we use a different flavor of the Radial Basis Function (RBF) method\footnote{In \cite{isvIJTAF} a flavor of the RBF-PUM method is used.} which is a combination of localized RBF and finite-difference methods (see \cite{SM_RBF2017,FLYER201621, Bayona2017} and references therein), and is known in the literature as the RBF--FD method. More specifically, the flavor of the RBF--FD method used in this paper is described in \cite{Soleymani2018,FazItkin2019}. Both the RBF-PUM and RBF--FD methods belong to the localized versions of the classical RBF method, and demonstrate similar features, such as high accuracy, sparsity of the differentiation matrices, mesh-free nature and multi-dimensional extendability. The comparison of these methods presented in \cite{SM_RBF2017} illustrates the capability of both methods to solve such problems to a sufficient accuracy within a reasonable time, while both methods exhibit similar orders of convergence. However, from the parallelization viewpoint the RBF--FD method is, perhaps, more suitable. The rest of the paper is organized as follows. In Section~\ref{model} we describe our model, and derive the main PDEs for the risky bond price under this model. We first introduce a no-jumps framework, and then extend this framework by adding jumps-at-default into the dynamics of the FX and foreign (defaulted) interest rates. In Section~\ref{zcbPrice} we describe a backward PDE approach for pricing zero-coupon bonds. The connection of this price with the price of the Quanto CDS is established in Section~\ref{bond2cds}. In Section~\ref{numMethod} the RBF--FD method is described in detail. In Section~\ref{experiments} we present numerical results of our experiments obtained by using this model and discuss the influence of various model parameters on the basis quanto spread. Section~\ref{sec:Conclusion} concludes.
\section{Model} \label{model} Below we describe our model following the same notation and definitions as in \cite{isvIJTAF}. Let us denote the most liquidly traded currency among all contractual currencies as the {\it domestic currency} or the {\it liquid currency}. In this paper this is the US dollar (USD). We also denote the other contractual currency as the {\it contractual} or {\it foreign currency}. In this paper it can be either of the two, e.g., EUR or USD. Payments of the premium and protection legs of the contract are settled in this currency. We assume that CDS market quotes are available in both currencies (domestic and foreign), and denote these prices as $\mathrm{CDS}_d$ and $\mathrm{CDS}_f$, respectively. The price $\mathrm{CDS}_f$ can be alternatively expressed in the domestic currency if the exchange rate $Z_t$ for the two currencies is given by the market. This implies that the price of the CDS contract denominated in the domestic currency could also be expressed in the foreign currency as $Z_t \mathrm{CDS}_d$. If these two prices are different, one can introduce a spread $\mathrm{CDS}_f - Z_t \mathrm{CDS}_d$. It is known that this spread implied from the market could reach hundreds of bps, \cite{Brigo,ACS2017,Simon2015}. Thus, these spreads could be detected if the market quotes on the CDS contracts in both currencies and the corresponding exchange rates are available. Further on, as the risk-neutral probability measure $\mathbb{Q}$ we choose the one corresponding to the domestic (liquid) currency money market. Also, by $\mathbb{E}_t[\,\cdot\,]$ we denote the expectation conditioned on the information received by time $t$, i.e. $\mathbb{E}[\,\cdot\, | \mathcal{F}_t]$. We also denote a zero-coupon bond price associated with the domestic currency (USD) as $B_t$, and that associated with the foreign currency (EUR) as $\hatB_t$, where $t\geq 0$ is the calendar time.
We assume that the dynamics of these two money market accounts is given by \begin{alignat}{2} \label{mmDyn} dB_t & = r(t) B_t dt, \quad &&B_0 = 1,\\ d\hatB_t & = \hatr_t \hatB_t dt, \quad &&\hatB_0 = 1. \nonumber \end{alignat} Here $r(t)$ is the deterministic domestic interest rate, and $\hatr_t$ is the stochastic foreign interest rate. In contrast to the model setting in \cite{isvIJTAF}, in this paper $r(t)$ is assumed to be deterministic, while in \cite{isvIJTAF} it is stochastic. However, as already explained, this is justified because, based on the results of \cite{isvIJTAF}, the volatility of the domestic interest rate does not contribute much to the value of the Quanto CDS spread. Similar to \cite{isvIJTAF}, first we consider a setting where all underlying stochastic processes do not experience jump-at-default except the default process itself. Then in Section~\ref{modelJumps} this will be generalized by taking into account jumps-at-default in other processes. \subsection{No jumps-at-default} We assume that $\hatr_t$ follows the Cox-Ingersoll-Ross (CIR) process, \cite{cir:85} \begin{align} \label{dynR} d\hatr_t &= \kapr(\ther - \hatr_t) dt + \sigr \sqrt{\hatr_t}dW_t^{(2)}, \quad \hatr_0=\hatr, \end{align} where $\kapr, \ther$ are the mean-reversion rate and level, $\sigr$ is the volatility, $W_t^{(2)}$ is a Brownian motion, and $\hatr$ is the initial level of the foreign interest rate. Without loss of generality, in this paper we assume $\kapr, \ther, \sigr$ to be constant, although this assumption can easily be relaxed to make them time-dependent. The exchange rate $Z_t$ denotes the amount of domestic currency one has to pay to buy one unit of foreign currency (so 1 Euro could be exchanged for $Z_t$ US dollars).
It is assumed to be stochastic and to follow log-normal dynamics \begin{equation} \label{Z} dZ_t = \mu_z Z_t dt + \sigma_z Z_t dW_t^{(3)}, \quad Z_0=z, \end{equation} \noindent where $\mu_z, \sigma_z$ are the corresponding drift and volatility, and $W_t^{(3)}$ is another Brownian motion. As the underlying security of a CDS contract is a risky bond, we need a model of the credit risk implied by the bond. Here we rely on a reduced form model approach, see e.g., \cite{jarrow/turnbull:95, DuffieSingleton99, Bielecki2004, jarrow2003robust} and references therein. We define the hazard rate $\lambda_t$ to be the exponential Ornstein-Uhlenbeck process \begin{align} \label{lambda} \lambda_t &= e^{Y_t}, \quad t \ge 0, \\ dY_t &= \kapy(\they-Y_t)dt + \sigy dW_t^{(4)}, \quad Y_0=y, \nonumber \end{align} \noindent where $\kapy, \they, \sigy$ are the corresponding mean-reversion rate, mean-reversion level and volatility, $W_t^{(4)}$ is another Brownian motion, and $y$ is the initial level of $Y$. Both $Z_t$ and $\lambda_t$ are defined and calibrated in the domestic measure. In contrast to \cite{Brigo,isvIJTAF}, in this paper we let the recovery rate be stochastic. It is popular in the financial literature, and also among rating agencies such as Moody’s, to model the recovery rate using the Beta distribution. The reasons behind that, and an extended survey of the existing literature on the subject, can be found, e.g., in \cite{Morozovskiy2007}. As is well known, Beta distributions are a family of continuous distributions defined on the interval $[0,1]$. Since recovery values also fall into the same interval, a Beta-distributed random variable can naturally be interpreted as a recovery rate.
The pdf of the Beta distribution is \cite{BetaDistrib2001} \begin{equation} \label{beta} f(\calR, \alpha, \beta) = \frac{\Gamma(\alpha+\beta)}{\Gamma(\alpha)\Gamma(\beta)} \calR^{\alpha-1} (1-\calR)^{\beta-1}, \quad \calR \in [0,1], \end{equation} \noindent where $\calR$ is the recovery rate, $\Gamma(x)$ is the gamma function, and $\alpha > 0, \beta > 0$ are shape parameters. However, to be compatible with our general setting, where the dynamics of the stochastic drivers $Z_t, \hatr_t, Y_t$ is described by stochastic differential equations (SDEs), and to introduce correlations between them, it is convenient to introduce an SDE whose stationary distribution is the Beta distribution. For instance, following \cite{Flynn2004} we can write \begin{equation} \label{betaSDE} d\calR = \kappa_R (\theta_R - \calR) dt + \sigma_R \sqrt{\calR(1-\calR)} d W_t^{(1)}, \quad \calR_0 = R. \end{equation} Here $\kappa_R, \theta_R$ are the mean-reversion rate and level, $\sigma_R$ is the volatility of the recovery rate, $W_t^{(1)}$ is a Brownian motion, and $R$ is the initial level of the recovery rate. It is known that the stationary distribution of this SDE is the Beta distribution with the density given in \eqref{beta}, where \begin{equation} \label{betaParams} \alpha = \frac{\kappa_R \theta_R}{\sigma_R^2}, \quad \beta = \frac{\kappa_R (1-\theta_R)}{\sigma_R^2}. \end{equation} We assume all Brownian motions $W_t^{(i)}, \ i \in [1,4]$ to be correlated, with a constant instantaneous correlation $\rho_{ij}$ between each pair $(i,j)$: $\langle d W_t^{(i)}, d W_t^{(j)} \rangle = \rho_{ij} dt$. Thus, the whole correlation matrix reads \begin{equation} \mathfrak{R} = \begin{bmatrix} 1 & \rho_{R \hatr} & \rho_{Rz} & \rho_{Ry} \\ \rho_{\hatr R} & 1 & \rho_{\hatr z} & \rho_{\hatr y} \\ \rho_{zR} & \rho_{z \hatr} & 1 & \rho_{z y} \\ \rho_{yR} & \rho_{y \hatr} & \rho_{yz} & 1 \\ \end{bmatrix}, \qquad |\rho_{ij}| \le 1, \quad i,j \in [R,\hatr, z, y].
\end{equation} The default process $(D_t, \ t \ge 0)$ is defined as \begin{equation} \label{defProc} D_t = {\bf 1}_{\tau \le t}, \end{equation} \noindent where $\tau$ is the default time of the reference entity. In order to exclude trivial cases, we assume that $\QM(\tau > 0) = 1$, and $\QM(\tau \le T) > 0$. \subsection{A jump-at-default framework} \label{modelJumps} Following \cite{isvIJTAF}, we now extend our model by assuming that the exchange rate and the foreign interest rate can jump at the default time. The jump in the exchange rate reflects a possible devaluation of the foreign currency, as was observed during the European sovereign debt crisis of 2009-2010. It is shown in \cite{Brigo} that allowing for jump-at-default in the FX rate provides a way of modeling the credit/FX dependency which is capable of explaining the basis spreads observed during the Euro-debt crisis. In contrast, an approach which takes into account only instantaneous correlations imposed among the driving Brownian motions of the default intensity and FX rates is not able to do so. Then, in \cite{isvIJTAF} it was argued that the existence of a jump-at-default in the foreign interest rate could also be justified by historical time-series, especially when sovereign obligations are in question. Therefore, in \cite{isvIJTAF} jumps-at-default in both the FX and foreign interest rates were considered, and their relative contribution to the value of the Quanto CDS spread was reported. In this paper we reuse this approach. Namely, to add jumps to the dynamics of the FX rate in \eqref{Z}, we follow \cite{Brigo, BieleckiPDE2005} who assume that at the time of default the FX rate experiences a single jump which is proportional to the current rate level, i.e. \begin{equation} \label{jumpZ} d Z_t = \gamma_z Z_{t^-} d M_t, \end{equation} \noindent where $\gamma_z \in [-1,\infty)$ \footnote{This is to prevent $Z_t$ from becoming negative, \cite{BieleckiPDE2005}.} is a devaluation/revaluation parameter.
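As a quick numerical illustration of the recovery-rate model \eqref{betaSDE}--\eqref{betaParams} above, the sketch below evaluates the stationary shape parameters for illustrative (not calibrated) values of $\kappa_R, \theta_R, \sigma_R$, and checks that the implied stationary mean $\alpha/(\alpha+\beta)$ coincides with the mean-reversion level $\theta_R$, as it must for any positive $\kappa_R, \sigma_R$.

```python
# Stationary Beta shape parameters implied by the recovery-rate SDE,
# per eq. (betaParams); the parameter values are illustrative only.
kappa_R, theta_R, sigma_R = 2.0, 0.4, 0.5

alpha = kappa_R * theta_R / sigma_R**2          # shape parameter alpha
beta  = kappa_R * (1.0 - theta_R) / sigma_R**2  # shape parameter beta

# The stationary mean of a Beta(alpha, beta) variable is alpha/(alpha+beta),
# which here reduces algebraically to theta_R.
mean_R = alpha / (alpha + beta)
```

Note that the identity $\alpha/(\alpha+\beta) = \theta_R$ holds for any admissible parameters, so the stationary mean of the recovery rate is controlled by $\theta_R$ alone, while $\sigma_R$ and $\kappa_R$ control the dispersion.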
The hazard process $\Gamma_t$ of a random time $\tau$ with respect to a reference filtration is defined through the equality $e^{-\Gamma_t} = 1 - \QM\{\tau \le t|\calF_t\}$. It is well known that if the hazard process $\Gamma_t$ of $\tau$ is absolutely continuous, so \begin{equation} \label{hazard} \Gamma_t = \int_0^t (1-D_s) \lambda_s ds, \end{equation} \noindent and increasing, then the process $M_t = D_t - \Gamma_t$ is a martingale (called the compensated martingale of the default process $D_t$) under the full filtration $\calF_t \vee {\mathcal H}_t$, with ${\mathcal H}_t$ being the filtration generated by the default process. So, $M_t$ is a martingale under $\QM$, \cite{BieleckiPDE2005}. It can be shown that under the risk-neutral measure associated with the domestic currency, the drift $\mu_z$ is \cite{Brigo} \begin{equation} \label{na-drift} \mu_z = r(t) - \hatr_t. \end{equation} Therefore, taking into account \eqref{Z}, \eqref{jumpZ} we obtain \begin{equation} \label{dzJump} dZ_t = [r(t) - \hatr_t] Z_t dt + \sigma_z Z_t dW_t^{(3)} + \gamma_z Z_t d M_t. \end{equation} Thus, the discounted value of the foreign money market account expressed in the domestic currency, $Z_t \hatB_t/B_t$, is a martingale under the $\mathbb{Q}$-measure with respect to $\calF_t \vee {\mathcal H}_t$, as it should be, since it is a tradable asset. As the default of the reference entity is expected to negatively impact the value of the local currency, further on we consider mostly negative values of $\gamma_z$. For instance, we expect the value of EUR expressed in USD to fall if some European country defaults. Similarly, we add a jump-at-default to the stochastic process for the foreign interest rate $\hatr_t$ as \[ d \hatr_t = \gamma_\hatr \hatr_{t^-} d D_t, \] \noindent so \eqref{dynR} transforms to \begin{equation} \label{rJump} d\hatr_t = \kapr(\ther-\hatr_t )dt + \sigr \sqrt{\hatr_t}dW_t^{(2)} + \gamma_{\hatr} \hatr_t d D_t. \end{equation} Here $\gamma_{\hatr} \in [-1,\infty)$ is the parameter that determines the post-default cost of borrowing.
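The role of the compensated martingale $M_t$ in \eqref{dzJump} can be checked numerically in a stripped-down setting. The sketch below is our own toy experiment, not part of the paper's model: we set $r = \hatr = 0$ with constant $\sigma_z$ and a constant hazard rate $\lambda$, in which case each path of \eqref{dzJump} admits a closed form and $Z_t$ itself is a $\mathbb{Q}$-martingale, so the Monte Carlo average of $Z_T$ should be close to $Z_0$.

```python
# Toy Monte Carlo check that the compensated jump in eq. (dzJump) preserves
# the martingale property.  Simplifying assumptions (ours, not the paper's
# full model): r = r_hat = 0, constant sigma and constant hazard lambda.
# Then each path admits the closed form
#   Z_T = Z0 * exp(sigma*W_T - sigma^2*T/2)
#            * exp(-gamma*lam*min(tau, T)) * (1 + gamma)^{1_{tau <= T}}.
import math, random

random.seed(7)
Z0, sigma, gamma, lam, T = 1.0, 0.2, -0.3, 0.5, 1.0
n_paths = 100_000

total = 0.0
for _ in range(n_paths):
    w = random.gauss(0.0, math.sqrt(T))       # terminal Brownian value W_T
    tau = random.expovariate(lam)             # default time, Exp(lambda)
    z = Z0 * math.exp(sigma * w - 0.5 * sigma**2 * T)
    z *= math.exp(-gamma * lam * min(tau, T)) # drift from the -gamma*lam*dt compensator
    if tau <= T:
        z *= 1.0 + gamma                      # multiplicative jump at default
    total += z

mean_ZT = total / n_paths                     # should be close to Z0 = 1
```

Dropping the compensator factor (i.e., jumping with $dD_t$ instead of $dM_t$) breaks this equality, which is precisely why the jump in \eqref{jumpZ} is written against $dM_t$.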
In this paper we consider only positive values of $\gamma_{\hatr}$, because the interest rate is most likely to grow after the default has occurred\footnote{Here we consider sovereign CDS, so the default is associated with the national economy. Therefore, the further evolution of the foreign interest rate is, perhaps, subject to government actions. However, the scenario in which the interest rate increases after the default seems the most plausible.}. Note that $\hatr_t$ is not tradable, and so need not be a martingale under the $\mathbb{Q}$-measure. \section{PDE approach for pricing zero-coupon bonds} \label{zcbPrice} Here we consider a general building block necessary to price quanto contingent claims, i.e., claims where the contractual currency differs from the pricing currency, such as Quanto CDS. To do so, we determine the price of a defaultable zero-coupon bond settled in the foreign currency. Under the foreign money market martingale measure $\hat \QM$, this bond price reads \begin{equation} \hat U_t(T) = \hat{\mathbb{E}}_t\left[ \frac{\hatB_t}{\hatB_T} \hat \Phi(T) \right], \end{equation} \noindent where $\hatB_t/\hatB_T = \hat B(t,T)$ is the stochastic discount factor from time $T$ to time $t$ in the foreign economy, and $\hat \Phi(T)$ is the payoff function. However, to compute the quanto spread we need this price under the domestic money market measure $\mathbb{Q}$. It can be obtained by converting the payoff to the domestic currency and discounting by the domestic money market account, which yields \begin{equation} U_t(T) = \mathbb{E}_t\left[ B(t,T) Z_T \hat \Phi(T) \right]. \end{equation} Here it is assumed that the notional amount of the contract is equal to one unit of the foreign currency, hence the payoff function is \begin{equation} \label{payoff1} \hat \Phi(T) = \m1_{\tau>T}. \end{equation} Let us assume that in the case of the bond's default, the recovery $\calR$ is paid at the default time.
Then the price of a defaultable zero-coupon bond, which pays out one unit of the foreign currency, in the domestic economy reads \begin{align} \label{payoff} U_t(T) &= \mathbb{E}_t\left[ B(t,T) Z_T \m1_{\tau>T} + \calR_\tau B(t,\tau) Z_\tau \m1_{\tau \le T} \right] \\ &= \mathbb{E}_t\left[ B(t,T) Z_T \m1_{\tau>T} \right] + \int_t^T \mathbb{E}_t \left[ \calR_\nu B(t,\nu) Z_\nu \frac{\m1_{\tau \in (\nu-d\nu,\nu]}}{d\nu} \right] d\nu = w_t(T) + \int_t^T g_t(\nu)d \nu, \nonumber \\ w_t(T) &:= \mathbb{E}_t \left[ Z_{T} B(t,T) \m1_{\tau > T} \right], \qquad g_t(\nu) := \mathbb{E}_t \left[ \calR_\nu B(t,\nu) Z_\nu \frac{\m1_{\tau \in (\nu-d\nu,\nu]}}{d\nu} \right]. \nonumber \end{align} As the entire dynamics of the underlying processes is Markovian, \cite{BieleckiPDE2005}, the price of a defaultable zero-coupon bond can be found by using a PDE approach, i.e., the price solves a corresponding PDE. This is computationally more efficient than, e.g., the Monte Carlo method, even though the resulting PDE is four-dimensional. Since in this paper we prefer to use a backward approach, this PDE can be obtained by first conditioning on $\calR_t = R, \hatr_t = \hatr, Z_t = z, Y_t = y, D_t = d$ \footnote{Here $d$ can take two values: 0, meaning that the default has not occurred yet, and 1, meaning that the default has already occurred.}, and then using the approach of \cite{BieleckiPDE2005} (a detailed derivation can be found in the Appendix of \cite{isvIJTAF}). Under the risk-neutral measure $\QM$ the price $U_t(T)$ then reads \begin{equation} \label{bondPrice} U_t(T, R, \hatr, y, z) = \m1_{\tau > t} f(t, T, R,\hatr, y, z, 0) + \m1_{\tau \le t} f(t, T, R,\hatr, y, z, 1).
\end{equation} The function $f(t, T, R,\hatr, y, z, 1) \equiv u(t, T, \bfX)$, where $\bfX := \{R,\hatr, y, z\}$, solves the PDE \begin{equation} \label{PDE1} \fp{u(t,T,\bfX)}{t} + {\cal L} u(t,T,\bfX) - r u(t,T,\bfX) = 0, \end{equation} \noindent and $\cal L$ is the diffusion operator which reads \begin{align} \label{Ldiff} \cal L &= \frac{1}{2}\sigma_{R}^2 R(1-R)\sop{}{R} + \frac{1}{2} \sigma_{\hatr}^2 \hatr \sop{}{\hatr} + \frac{1}{2}\sigma_z^2 z^2 \sop{}{z} + \frac{1}{2}\sigma_y^2\sop{}{y} \\ &+ \rho_{R \hatr} \sigma_R \sigma_{\hatr} \sqrt{R (1-R)\hatr}\cp{}{R}{\hatr} + \rho_{Rz}\sigma_R \sigma_z z\sqrt{R(1-R)} \cp{}{R}{z} + \rho_{\hatr z} \sigma_{\hatr} \sigma_z z \sqrt{\hatr} \cp{}{z}{\hatr} \nonumber \\ &+ \rho_{Ry}\sigma_R \sigma_y \sqrt{R(1-R)} \cp{}{R}{y} + \rho_{\hatr y} \sigma_{\hatr} \sigma_y \sqrt{\hatr} \cp{}{y}{\hatr} + \rho_{yz} \sigma_y \sigma_z z \cp{}{y}{z} \nonumber \\ &+ \kappa_R(\theta_R-R)\fp{}{R} + \kapr(\ther - \hatr) \fp{}{\hatr} + (r - \hatr) z \fp{}{z} + \kapy(\they - y) \fp{}{y}. \nonumber \end{align} The function $f(t, T, R,\hatr, y, z, 0) \equiv v(t, T, \bfX)$ solves another PDE \begin{align} \label{PDE2} \fp{v(t,T,\bfX)}{t} &+ {\cal L} v(t,T,\bfX) - r v(t,T,\bfX) - \lambda \gamma_z z \fp{v(t,T,\bfX)}{z} \\ &+ \lambda \left[\hat{u}(t, T, \bfX) - v(t, T, \bfX) \right] = 0, \nonumber \end{align} \noindent where, based on \eqref{lambda}, $\lambda = e^y$. The function $\hat{u}(t, T, \bfX)$ in \eqref{PDE2} is defined as follows. Recall that the function $u$ corresponds to states {\it after} default, and the function $v$ corresponds to states {\it before} default. At default the variables $z$ and $\hatr$ experience a jump proportional to the value of each variable. Therefore, we introduce a new vector of states $\bfX^+ := \{R, \hatr(1+\gamma_\hatr), y, z(1+\gamma_z)\}$.
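The shift-and-reinterpolate construction of $\hat{u}$ can be sketched in one dimension (the $z$ direction only). Everything below -- the grid, the toy post-default ``solution'' and the flat extrapolation at the edges -- is a hypothetical illustration, not the paper's four-dimensional implementation: $\hat{u}(z)$ is obtained by evaluating $u$ at the jumped state $z(1+\gamma_z)$ via linear interpolation on the original grid.

```python
# 1-D sketch (z direction only) of evaluating the post-default solution at
# the jumped state z*(1+gamma_z) by linear interpolation on the original
# grid.  `u_grid` is a toy stand-in for a post-default solution.
import bisect

def interp_at(z_grid, u_grid, z):
    # piecewise-linear interpolation with flat extrapolation at the edges
    if z <= z_grid[0]:
        return u_grid[0]
    if z >= z_grid[-1]:
        return u_grid[-1]
    j = bisect.bisect_right(z_grid, z) - 1
    w = (z - z_grid[j]) / (z_grid[j + 1] - z_grid[j])
    return (1.0 - w) * u_grid[j] + w * u_grid[j + 1]

gamma_z = -0.25
z_grid = [0.5 + 0.05 * i for i in range(21)]   # z in [0.5, 1.5]
u_grid = [2.0 * z for z in z_grid]             # toy linear "solution" u(z) = 2z

# hat{u}(z_j) = u(z_j * (1 + gamma_z)) for every grid node z_j
u_hat = [interp_at(z_grid, u_grid, z * (1.0 + gamma_z)) for z in z_grid]
```

This is the same mechanics as shifting a finite-difference solution across a discrete dividend date: the grid stays fixed, and only the values are re-read at shifted arguments.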
The above argumentation suggests that after the function $u(t,T,\bfX)$ is found by solving \eqref{PDE1}, this solution should be translated to the new point $\bfX^+$ by shifting (jumping) the independent variables $\hatr$ and $z$. In other words, after this translation is done we obtain a new function $\bar{u}(t,T,\bfX^+) = u(t,T,\bfX)$. However, since in \eqref{PDE2} $v = v(t,T,\bfX)$, it is convenient to re-interpolate $\bar{u}(t,T,\bfX^+)$ back to $\bfX$, and we denote the result of this interpolation as $\hat{u}(t,T,\bfX)$. This is exactly the same method as was used in \cite{isvIJTAF}. Technically, this is similar to how options written on a stock paying discrete dividends are priced by using a finite-difference method, see e.g., \cite{fdm2000}. \subsection{Boundary conditions} \label{bcSec} Equations \eqref{PDE1} and \eqref{PDE2} form a system of backward PDEs which should be solved subject to terminal conditions at $t=T$, and boundary conditions set at the boundaries of the domain $(R, \hatr, y, z) \in [0,1] \times [0,\infty] \times [-\infty,0] \times[0,\infty]$. As the value of the bond price is usually not known at the boundary, a standard way is to assume that the second derivatives vanish towards the boundaries. However, this is subject to the following consideration. As mentioned in \cite{ItkinCarrBarrierR3}, if the diffusion term vanishes at the boundary, the PDE degenerates there to a hyperbolic one. Then, to set the correct boundary condition at this boundary, we need to compare the speed of the diffusion term in the direction normal to the boundary with the speed of the drift term. To illustrate, consider a PDE \begin{equation} \label{oleinik} C_t = a(x)C_{xx} + b(x)C_x + c(x) C, \end{equation} \noindent where $C= C (t,x)$ is some function of the time $t$ and the independent space variable $x \in [0,\infty)$, and $a(x), b(x)$, $c(x) \in \calC^{2}$ are some known functions of $x$. Consider the left boundary to be at $x=0$.
Then, as shown in \cite{OleinikRadkevich73}, no boundary condition is required at $x = 0$ if $\lim_{x \rightarrow 0}[b(x) - a_x(x)] \ge 0$. This means that the convection term at $x=0$ is flowing upwards and dominates the diffusion term. An example of such a consideration is the Feller condition as applied to the Heston model, \cite{Lucic2008}. To clarify, no boundary condition at $x=0$ means that instead of a boundary condition at $x \rightarrow 0$ the PDE itself with coefficients $a(0), b(0), c(0)$ should be used at this boundary. The first boundaries to check are $R \to 0$ and $R \to 1$, as the PDEs in \eqref{PDE1} and \eqref{PDE2} degenerate at those boundaries to hyperbolic ones in the $R$ direction. At $R \to 0$ we have \begin{equation} \label{bcR0} \lim_{R \rightarrow 0}\left[\kappa_R(\theta_R-R) - \frac{1}{2}\sigma_R^2 (1-2R)\right] = \kappa_R \theta_R - \frac{1}{2}\sigma_R^2. \end{equation} Thus, this is the Feller condition for the stochastic recovery $\calR$, i.e., no boundary condition is necessary at $R=0$ if $2 \kappa_R \theta_R /\sigma_R^2 \ge 1$. Similarly, at $R=1$ we have \begin{equation} \label{bcR1} \lim_{R \rightarrow 1}\left[\kappa_R(\theta_R-R) - \frac{1}{2}\sigma_R^2 (1-2R)\right] = \kappa_R (\theta_R-1) + \frac{1}{2}\sigma_R^2. \end{equation} Therefore, if $\kappa_R (\theta_R-1) + \frac{1}{2}\sigma_R^2 \le 0$, no boundary condition is required at $R=1$ \footnote{The sign of this inequality changes because the inflow flux at $R=1$ is oriented in the opposite direction to the inflow flux at $R=0$.}. For $\hatr$, a condition similar to \eqref{bcR0} holds at $\hatr = 0$. The other boundary conditions read \begin{equation} \label{bc} \sop{u}{\hatr}\Big|_{\hatr \uparrow \infty} = \sop{u}{y}\Big|_{y \downarrow -\infty} = \sop{u}{y}\Big|_{y \uparrow 0} = \sop{u}{z}\Big|_{z \downarrow 0} = \sop{u}{z}\Big|_{z \uparrow \infty} = 0.
\end{equation} We assume that the default has not yet occurred at the valuation time $t$; therefore, \eqref{bondPrice} reduces to \begin{equation} \label{bondPrice1} U_t(T, R, \hatr, y, z) = v(t, T, \bfX). \end{equation} Thus, it can be found by solving the system \eqref{PDE1}, \eqref{PDE2} as follows. Since the bond price defined in \eqref{payoff} is a sum of two terms, and our PDE is linear, the problem can be solved independently for each term. The solution is then the sum of the two. \subsection{Solving the PDE for $w_t(T)$} \label{wtT} The function $w_t(T)$ can be obtained by solving \eqref{PDE1}, \eqref{PDE2} \footnote{The PDEs remain unchanged since the model is the same, and only the contingent claim $G(t,T,r,\hatr, y,z,d)$, which is a function of the same underlying processes, changes.} in two steps. \paragraph {Step 1} First, we solve the PDE in \eqref{PDE1} for $u(t, T, \bfX)$. Since this function corresponds to $d=1$, it describes the evolution of the bond price {\it at or after} default, which implies the terminal condition at $t=T$ \begin{equation} U(\bfX) = u(T, T, \bfX) = 0. \end{equation} This payoff does not assume any recovery paid at default; therefore, the bond expires worthless. By a simple analysis, one can conclude that $u(t, T, \bfX) \equiv 0$ is the solution of our problem at $d=1$. Indeed, it solves the equation itself, and also obeys the terminal and boundary conditions. In other words, at this step we know the solution in closed form. \paragraph {Step 2} Based on the results of the first step, we have $u(t, T, \bfX^+) \equiv 0$ in \eqref{PDE2}. By the definition before \eqref{PDE2}, the function $v(t, T, \bfX)$ corresponds to the states with no default. Therefore, from \eqref{payoff} the payoff function $v(T,T,\bfX)$ reads \begin{equation} \label{tc2} v(T,T,\bfX) = z. \end{equation} This payoff is the terminal condition for \eqref{PDE2} at $t=T$. The boundary conditions are again set as in Section~\ref{bcSec}.
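As a small numerical companion to the boundary classification of Section~\ref{bcSec}, the sketch below encodes the two conditions \eqref{bcR0} and \eqref{bcR1} and reports whether a boundary condition is needed at $R=0$ and $R=1$; the parameter values are illustrative only.

```python
# Boundary classification for the recovery-rate SDE at R = 0 and R = 1,
# following eqs. (bcR0) and (bcR1); parameter values are illustrative.
def needs_bc_at_R(kappa, theta, sigma):
    # no boundary condition needed at R = 0 iff kappa*theta - sigma^2/2 >= 0,
    # i.e. the Feller-type condition 2*kappa*theta/sigma^2 >= 1 holds
    no_bc_at_0 = kappa * theta - 0.5 * sigma**2 >= 0.0
    # no boundary condition needed at R = 1 iff kappa*(theta-1) + sigma^2/2 <= 0
    no_bc_at_1 = kappa * (theta - 1.0) + 0.5 * sigma**2 <= 0.0
    return (not no_bc_at_0, not no_bc_at_1)

# kappa_R = 2.0, theta_R = 0.4, sigma_R = 0.5: 2*kappa*theta/sigma^2 = 6.4
bc_needed_at_0, bc_needed_at_1 = needs_bc_at_R(2.0, 0.4, 0.5)
```

For this illustrative parameter set both inflow conditions hold, so the degenerate PDE itself can be used at $R=0$ and $R=1$; a much weaker mean reversion (e.g., $\kappa_R = 0.1$ with the same $\theta_R, \sigma_R$) violates both, and explicit boundary conditions would then be required.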
Now the PDE in \eqref{PDE2} for $v(t,T,\bfX)$ takes the form \begin{align} \fp{v(t,T,\bfX)}{t} &+ {\cal L} v(t,T,\bfX) - (r + \lambda) v(t,T,\bfX) - \lambda \gamma_z z \fp{v(t,T,\bfX)}{z} = 0, \end{align} \noindent which should be solved subject to the terminal condition in \eqref{tc2}. Then, finally, $w_t(T) = v(t,T,\bfX)$. It can be seen that in the case of no recovery the defaultable bond price does not depend on the jump in the foreign interest rate, but does depend on the jump in the FX rate. \subsection{Solving the PDE for $g_t(\nu)$} \label{gtT} To compute the second part of the payoff in \eqref{payoff}, observe that the integral in \eqref{payoff} is taken over the deterministic variable $\nu$. Therefore, it can be approximated by a Riemann--Stieltjes sum, i.e., the continuous interval $[t,T]$ is replaced by a discrete (e.g., uniform) grid with a small step $\Delta \nu = h$. Then \begin{equation}\label{eq:Integral24} \int_t^T g_t(\nu) d\nu \approx h \sum_{i=1}^N g_t(t_i), \end{equation} \noindent where $t_i = t + i h, \ i \in [0,N]$, $N = (T-t)/h$. It is important that each term of this sum can be computed independently by solving the corresponding pricing problem in \eqref{PDE1}, \eqref{PDE2} with the maturity $t_i, \ i \in [1,N]$. Obviously, solving many backward problems (one for each maturity) altogether can be slow, but this is a consequence of the backward PDE approach. As mentioned above, this can be significantly improved either by using the forward approach, or by parallelization. The latter is entirely feasible since the solutions for the maturities $t_i, \ i \in [1,N]$ are mutually independent. Accordingly, for every maturity $t_i$ the function $g_t(T)$ can again be found in two steps. \paragraph {Step 1} At this step we solve \eqref{PDE1} using the boundary conditions described in Section~\ref{bcSec} and the terminal condition \begin{equation} \label{tcG} u(T,T,X) = R z (1+\gamma_z)/T, \qquad T > 0, \end{equation} \noindent which is discussed in more detail in the Appendix.
At $T=0$ we set $g_T(T)\Big|_{T=0} = 0$. \paragraph {Step 2} The function $v(t_i,T,\bfX)$ can be determined by solving \eqref{PDE2}. Indeed, the values of the parameters $\gamma_z, \gamma_\hatr$ are known, and the values of $\lambda$ (or $e^y$) are also set at all computational nodes (for instance, on a grid which is used to numerically solve the PDE problem in Step~1). By the definition before \eqref{PDE2}, the function $v(t_i,T,\bfX)$ corresponds to the states with no default. Accordingly, the recovery is not paid, which means that the terminal condition for this step vanishes \begin{equation} v(T,T,X) = V(X) = 0. \end{equation} Finally, we set $g_t(T) = v(t,T,X)$. According to this structure, in the case of non-zero recovery the price of a risky bond does depend on jumps in both the FX and foreign interest rates. \section{Backward PDE approach to price Quanto CDS} \label{bond2cds} Here, similar to \cite{isvIJTAF}, we use the model for the zero-coupon bond proposed in the previous sections and apply it to the pricing of CDS contracts. Recall that a CDS is a contract in which the protection buyer agrees to pay a periodic coupon to a protection seller in exchange for a potential cashflow in the event of default of the CDS reference name before the maturity of the contract $T$. Below we follow the definitions and notation of \cite{isvIJTAF}. Namely, we assume that the CDS contract is settled at time $t$ and assures protection to the CDS buyer until time $T$. We consider CDS coupons to be paid periodically with the payment time interval $\Delta t$, so that there are $m$ payments in total over the life of the contract, i.e., $m \Delta t = T-t$.
Assuming unit notional, this implies the following expression for the CDS coupon leg $L_c$, \cite{LiptonSavescu2014, BrigoMorini2005} \begin{equation} L_c = \mathbb{E}_t\left[\sum_{i=1}^{m} c B(t,t_i)\Delta t \m1_{\tau > t_i}\right], \end{equation} \noindent where $c$ is the CDS coupon, $t_i$ is the payment date of the $i$-th coupon, and $B(t,t_i) = B_t/B_{t_i}$ is the stochastic discount factor. If the default occurs between the predefined coupon payment dates, there must be an accrued amount from the nearest past payment date till the time of the default event $\tau$. The expected discounted accrued amount $L_a$ reads \begin{equation} L_a = \mathbb{E}_t\left[c B(t,\tau) (\tau - t_{\beta(\tau)}) \m1_{t < t_{\beta(\tau)} \le \tau < T}\right], \end{equation} \noindent where $t_{\beta(\tau)}$ is the payment date preceding the default event. In other words, $\beta(\tau)$ is a piecewise constant function of the form \[ \beta(\tau) = i, \quad \forall \tau: \ t_i < \tau < t_{i+1}. \] These cashflows are paid by the contract buyer and received by the contract issuer. The opposite expected protection cashflow $L_p$ is \begin{equation} L_p = \mathbb{E}_t\left[(1 - \calR_\tau)B(t,\tau)\m1_{t < \tau \le T}\right], \end{equation} \noindent where the recovery rate $\calR_\tau$ is unknown beforehand, and is determined at or right after the default, e.g., in court. Recall that in this paper, in contrast to \cite{isvIJTAF}, we assume the recovery rate to be stochastic. We define the so-called \emph{premium} ${\cal L}_{pm} = L_c + L_a$ and \emph{protection} ${\cal L}_{pr} = L_p$ legs in the standard way, and define the CDS par spread $s$ as the coupon which equalizes these two legs and makes the CDS contract fair at time $t$. Under the domestic money market measure $\QM$ we need to convert the payoffs to the domestic currency and discount them by the domestic money market account.
Then $s$ solves the equation \begin{align} \label{eq:CDSequation} \sum_{i=1}^{m} & \mathbb{E}_t \left[s Z_{t_i} B(t,t_i)\Delta t \m1_{\tau > t_i}\right] + \mathbb{E}_t\left[s Z_\tau B(t,\tau)(\tau - t_{\beta(\tau)})\m1_{t<\tau<T}\right] = \mathbb{E}_t\left[(1 - \calR_\tau)Z_\tau B(t,\tau)\m1_{t<\tau\leq T}\right]. \end{align} In the spirit of \cite{ES2006} and \cite{BrigoSlide}, we develop a numerical procedure for finding the par spread $s$ from the bond prices. Consider each term in \eqref{eq:CDSequation} separately. \paragraph{Coupons} For the coupon payments one has \begin{align} \label{eqCoupon} L_c &= \mathbb{E}_t \left[ \sum_{i=1}^{m} s Z_{t_i} B(t,t_i) \Delta t \m1_{\tau \ge t_i} \right] = s\Delta t \sum_{i=1}^m \mathbb{E}_t \left[ Z_{t_i} B(t,t_i) \m1_{\tau \ge t_i} \right] = s \Delta t \sum_{i=1}^m w_t(t_i), \end{align} \noindent where $t_m = T$. The computation of $w_t(T)$ is described in Section~\ref{wtT}. In short, we first solve \eqref{PDE1} with the terminal condition $u(T,T,X) =0$; this can be done analytically and gives $u(t,T,X) =0$. At the second step we solve numerically \eqref{PDE2} with the terminal condition $v(T,T,X) =z$. Note that, as follows from the analysis of the previous section, $w_t(T)$ (and, respectively, the coupon payments) does depend on the jump in the FX rate, but does not depend on the jump in the foreign interest rate, which is financially reasonable. \paragraph{Protection leg} A similar approach applies to the protection leg \begin{align} \label{eqProtection} L_p &= \mathbb{E}_t \left[(1-\calR_\tau) Z_{\tau} B(t,\tau)\m1_{t < \tau \le T} \right] = \int_{t}^{T} \mathbb{E}_t \left[(1-\calR_\nu)Z_\nu B(t,\nu) \frac{\m1_{\tau \in (\nu-d\nu,\nu]}}{d\nu} \right] d\nu \\ &= \int_{t}^{T} \bar{g}_t(\nu) d \nu, \qquad \bar{g}_t(\nu) \equiv \mathbb{E}_t \left[(1-\calR_\nu)Z_\nu B(t,\nu) \frac{\m1_{\tau \in (\nu-d\nu,\nu]}}{d\nu} \right].
\nonumber \end{align} Note that the computation of $\bar{g}_t(T)$ can be done in the same way as described in Section~\ref{gtT} for $g_t(T)$. This means that to find $\bar{g}_t(T)$ we first solve \eqref{PDE1} for $u(t,T,X)$ subject to the boundary conditions in Section~\ref{bcSec} and the terminal condition \begin{equation} \label{tcGbar} u(T,T,X) = (1-R) z (1+\gamma_z)/T, \qquad T > 0, \end{equation} \noindent while at $T=0$ we set $\bar{g}_T(T)\Big|_{T=0} = 0$. At the second step we solve \eqref{PDE2}, where $\hat{u}$ is computed by re-interpolating $u(t,T,X)$ found at the first step, subject to the boundary conditions in Section~\ref{bcSec} and the terminal condition $v(T,T,X) = 0$. Then $\bar{g}_t(\nu) = v(t,\nu,X)$. \paragraph{Accrued payments} For the accrued payment one has \begin{align} \label{eqAccrued} L_a &= \mathbb{E}_t \left[ s Z_\tau B(t,\tau) (\tau - t_{\beta(\tau)}) \m1_{t < \tau < T} \right] = s \int_t^T \mathbb{E}_t \left[ Z_\nu B(t,\nu) (\nu - t_{\beta(\nu)}) \frac{\m1_{\tau \in (\nu-d\nu,\nu]}}{d\nu} \right] d\nu \\ &= s \sum_{i=0}^{m-1} \Big\{\int_{t_i}^{t_{i+1}} (\nu - t_i) \mathbb{E}_t \left[ Z_\nu B(t,\nu) \frac{\m1_{\tau \in (\nu-d\nu,\nu]}}{d\nu} \right] d\nu \Big\} = s \sum_{i=0}^{m-1} \int_{t_i}^{t_{i+1}} (\nu - t_i)\tilde{g}_t(\nu) d\nu, \nonumber \end{align} \noindent where $t_0 \equiv t$, $t_m \equiv T$, and \begin{equation} \label{tildeg} \tilde{g}_t(\nu) \equiv \mathbb{E}_t \left[ Z_\nu B(t,\nu) \frac{\m1_{\tau \in (\nu-d\nu,\nu]}}{d\nu} \right]. \end{equation} The computation of $\tilde{g}_t(T)$ can be done in the same way as described in Section~\ref{gtT} for $g_t(T)$. This means that to find $\tilde{g}_t(T)$ we first solve \eqref{PDE1} for $u(t,T,X)$ subject to the boundary conditions in Section~\ref{bcSec} and the terminal condition \begin{equation} \label{tcGbartilde} u(T,T,X) = z (1+\gamma_z)/T, \qquad T > 0, \end{equation} \noindent while at $T=0$ we set $\tilde{g}_T(T)\Big|_{T=0} = 0$.
At the second step we solve \eqref{PDE2}, where $\hat{u}$ is computed by re-interpolating the $u(t,T,X)$ found at the first step, subject to the boundary conditions in Section~\ref{bcSec} and the terminal condition $v(T,T,X) = 0$. Then $\tilde{g}_t(\nu) = v(t,\nu,X)$. As was mentioned in Section~\ref{gtT}, both final integrals in \eqref{eqProtection} and \eqref{eqAccrued} are deterministic in $\nu$. Therefore, each one can be approximated by a Riemann--Stieltjes sum where the continuous interval $[t,T]$ is replaced by a discrete grid (e.g., uniform) with a small step $\Delta \nu = h$. Having all the necessary components at hand, we finally compute the CDS spread. Introducing the notation \begin{align} \label{approx1} A_i &= \int_{t_i}^{t_{i+1}} w_t(\nu) d\nu \approx h \sum_{k=1}^N w_t(\nu_k), \qquad B_i = \int_{t_i}^{t_{i+1}} \bar{g}_t(\nu) d\nu \approx h \sum_{k=1}^N \bar{g}_t(\nu_k) \\ C_i &= \int_{t_i}^{t_{i+1}} \nu \tilde{g}_t(\nu) d\nu \approx h \sum_{k=1}^N \nu_k \tilde{g}_t(\nu_k), \qquad D_i = \int_{t_i}^{t_{i+1}} t_i \tilde{g}_t(\nu) d\nu \approx h t_i \sum_{k=1}^N \tilde{g}_t(\nu_k) \nonumber \\ \nu_k &= t_i + k h, \quad k=1,\ldots,N, \quad h = (t_{i+1} - t_i)/N, \nonumber \end{align} \noindent we re-write \eqref{eqCoupon}, \eqref{eqProtection} and \eqref{eqAccrued} in the form \begin{align} \label{approx2} L_p &= \sum_{i=1}^{m} B_i, \qquad L_c = s \sum_{i=1}^m A_i, \qquad L_a = s \sum_{i=1}^{m} \left[ C_i - D_i \right], \end{align} and, combining \eqref{eq:CDSequation} and \eqref{approx2}, finally obtain \begin{align} \label{eqParSpread} s &= \left\{\sum_{i=1}^{m} B_i \right\} \left\{\sum_{i=1}^{m} \left[ A_i + C_i - D_i\right]\right\}^{-1}. \end{align} \section{Solving backward PDEs using an RBF--FD method} \label{numMethod} A key block in the described approach to computing a Quanto CDS spread is solving the backward PDEs \eqref{PDE1}, \eqref{PDE2} subject to the corresponding terminal and boundary conditions.
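Before turning to the PDE solver, the final assembly of the par spread from the quadrature sums \eqref{approx1} into \eqref{eqParSpread} can be sketched in a few lines. This is a minimal illustration only: it assumes the integrands $w_t$, $\bar{g}_t$, $\tilde{g}_t$ are already available as callables (in the paper they come from solving \eqref{PDE1}, \eqref{PDE2}), and all names here are ours.

```python
def par_spread(w, g_bar, g_tilde, coupon_dates, N=16):
    """Assemble the par spread of Eq. (eqParSpread) from the integrands.

    w, g_bar, g_tilde: callables standing for w_t, \\bar g_t, \\tilde g_t.
    coupon_dates: [t_0, t_1, ..., t_m] with t_0 = t and t_m = T.
    N: number of quadrature nodes per coupon period.
    """
    A = B = C = D = 0.0
    for t_i, t_ip1 in zip(coupon_dates[:-1], coupon_dates[1:]):
        h = (t_ip1 - t_i) / N
        nodes = [t_i + k * h for k in range(1, N + 1)]  # right Riemann nodes
        A += h * sum(w(v) for v in nodes)               # coupon leg integral
        B += h * sum(g_bar(v) for v in nodes)           # protection leg integral
        C += h * sum(v * g_tilde(v) for v in nodes)     # accrued, nu-weighted part
        D += h * t_i * sum(g_tilde(v) for v in nodes)   # accrued, t_i-weighted part
    return B / (A + C - D)                              # Eq. (eqParSpread)
```

For constant integrands the right Riemann sums are exact, which gives a simple consistency check of the assembly.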
These PDEs are four-dimensional in space, and, therefore, some computational methods, e.g., finite-difference or finite element ones, may suffer from the curse of dimensionality. On the other hand, a Monte Carlo method is slow. In this situation it is reasonable to use localized meshless schemes such as, e.g., a radial basis function (RBF) generated finite-difference method (RBF--FD). Within the last decade the localized RBF schemes have become popular tools for solving various financial engineering problems (we refer the reader to \cite{Pettersson} for further discussion of technical issues of the RBF method, such as the number of discretization points and the structure of the discretization matrices optimal for solving these problems). That is why a flavor of the RBF method (a partition of unity method, or RBF-PUM) was chosen in the recent work \cite{isvIJTAF} to solve a problem similar to the one in this paper. As discussed in \cite{isvIJTAF}, the original formulation of the RBF methods is based on global RBFs, which leads to either ill-conditioned or dense matrices, and thus is computationally expensive, \cite{Fasshauer}. In \cite{isvIJTAF} this problem was eliminated by using the RBF-PUM; for more detail, see \cite{isvIJTAF} and references therein, and also the recent paper \cite{SM_RBF2017}. In this paper we use another flavor of the localized RBF method known as RBF--FD, see, e.g., \cite{FLYER201621, Bayona2017} and references therein. The RBF--FD is a combination of the localized RBF and FD methods. A comparison of both methods is presented in \cite{SM_RBF2017}, which illustrates the high capability of these methods for solving PDEs to a sufficient accuracy within a reasonable time; both methods exhibit similar orders of convergence. However, from the potential parallelization viewpoint the RBF--FD is, perhaps, more suitable. The RBF--FD approximation is produced by applying the RBF interpolation at a finite local set of points.
As a result, the obtained derivative matrices are (i) sparse and (ii) similar to those of the standard FD scheme, but have better convergence properties, \cite{Tolstykh}. Moreover, the RBF--FD scheme is able to tackle irregular geometries and scattered node layouts. A particular flavor of the RBF--FD method used in this paper is described in \cite{Soleymani2018,FazItkin2019}. In this paper, as the basis functions for the RBF method, we use the Gaussian RBF $\phi(r_i)=e^{-\epsilon^2 r_i^2}$, where $r_i=\|\mathbf{x}-\mathbf{x}_i\|_2$ denotes the Euclidean distance and $\epsilon$ is the shape parameter. Other popular choices can also be used. The discretization of the PDEs \eqref{PDE1}, \eqref{PDE2} is done first along the spatial variables by using the RBF--FD methodology, and then the method of lines (MOL) is used to finally reach a system of ordinary differential equations (ODEs). The RBF--FD weights for the approximation of the first and second derivatives in the above PDEs are obtained as in \cite{FazItkin2019}, and provide the second order of convergence. \subsection{Time-integrator for \eqref{PDE1}, \eqref{PDE2}} When solving the system of ODEs by using the MOL, for marching along the temporal variable we use a time-stepping Runge--Kutta method. Namely, we employ the classical fourth order Runge--Kutta method (RK4), meaning that the local truncation error is of the order of $\mathcal{O}(h^5)$, while the total accumulated error is of the order of $\mathcal{O}(h^4)$, where $h$ is the step of the method, \cite{Trott}. Let us denote by $\mathbf{u}^\iota$ a numerical estimate of the exact solution $\mathbf{u}(t_\iota)$. Assume that we split the whole time interval $[0,T]$ into $k_\tau + 1$ nodes with the step $\Delta_t = T/k_\tau > 0$, so $t_{\iota+1} = t_{\iota} + \Delta_t$, $0\leq \iota\leq k_\tau$.
Given the initial condition $\mathbf{u}^0$, the RK4 scheme is formulated as follows, \cite{Butcher}: \begin{align}\label{RK4} \mathbf{u}^{\iota+1}&=\mathbf{u}^{\iota}+\frac{\Delta_t}{6}\left(G_{1}+2G_{2}+2G_{3}+G_{4}\right), \\ G_{1}&=H\left(t_{\iota},\mathbf{u}^{\iota}\right), \nonumber \\ G_{2}&=H\left(t_{\iota}+\frac{\Delta_t}{2},\mathbf{u}^{\iota}+\frac{\Delta_t}{2}G_{1}\right), \nonumber \\ G_{3}&=H\left(t_{\iota}+\frac{\Delta_t}{2},\mathbf{u}^{\iota}+\frac{\Delta_t}{2}G_{2}\right), \nonumber \\ G_{4}&=H\left(t_{\iota}+\Delta_t,\mathbf{u}^{\iota}+\Delta_t G_{3}\right). \nonumber \end{align} From the implementation perspective, this method is a part of almost all standard packages, including that of Wolfram Mathematica 12, which we use for all the calculations in this paper. It can be called by using the following snippet: \centerline{} \begin{verbatim} Method -> {"FixedStep", "StepSize" -> temporalstepsize, Method -> {"ExplicitRungeKutta", "DifferenceOrder" -> 4, "StiffnessTest" -> False}} \end{verbatim} Overall, based on the methodology described in Section~\ref{bond2cds}, computation of the CDS par spread in \eqref{eqParSpread} requires solving the PDE in \eqref{PDE1} two times, and the PDE in \eqref{PDE2} three times. An intermediate 4D interpolation of the solution $u(t,T,\textbf{X})$ to the solution $\hat{u}(t,T,\textbf{X}^+)$ is also required when proceeding from \eqref{PDE1} to \eqref{PDE2}, as explained in Section~\ref{zcbPrice}. The latter slows down the marching process when solving \eqref{PDE2} if the number of discretization points is large. \section{Experiments} \label{experiments} Numerical experiments described in this Section are performed in a way similar to that in \cite{isvIJTAF}. Also, to investigate quanto effects and their impact on the price of a CDS contract, we follow the strategy of \cite{isvIJTAF} and consider two similar CDS contracts.
The first one is traded in the foreign economy, e.g., in Portugal, but is priced under the domestic risk-neutral $\QM$-measure, hence is denominated in our domestic currency (US dollars). To find the price of this contract, the approach described in the previous sections is fully utilized. The second CDS is the same contract but traded in the domestic economy, and is also priced in the domestic currency. As such, its price can be obtained by solving the same problem as for the first CDS, but with the equations for the foreign interest rate $\hatr_t$ and the FX rate $Z_t$ excluded from consideration. Accordingly, all related correlations which include the indices $z$ and $\hatr$ vanish, and the no-jumps framework is used. However, the terminal conditions remain the same as in Section~\ref{bond2cds}, as they are already expressed in the domestic currency\footnote{Alternatively, the whole four-dimensional framework could be used if one sets $z=1, \hatr = r, \gamma_z = \kapr = \sigr = \gamma_\hatr = 0$, and $\rho_{\cdot,z} = \rho_{\cdot,\hatr} = \rho_{z,\hatr} = 0$, where $\langle \cdot \rangle \in [R, z, y]$.}. \subsection{Parameters of the model} The default values of the parameters used in our numerical experiments are the same as in \cite{isvIJTAF} and are given below in Table~\ref{TabParam}. It is also assumed that for this default set all correlations are zero. If not stated otherwise, we use these values and assume the absence of jumps in the FX and foreign interest rates.
\begin{table}[H] \begin{center} \begin{tabular}{c c c c c c c c} \hline $R$ & $\kappa_R$ & $\theta_R$ & $\sigma_R$ & $\hat r$ & $\kapr$ & $\ther$ & $\sigr$ \\ 0.45 & 0.0 & 0.1 & 0.0 & 0.03 & 0.08 & 0.1 & 0.08 \\ \hline $y$ & $\kapy$ & $\they$ & $\sigy$ & $z$ & $\sigma_z$ & $T$ & $r$ \\ -4.089 & 0.0001 & -210 & 0.4 & 1.15 & 0.1 & 5 & 0.02\\ \hline \end{tabular} \caption{The default set of parameter values used in the experiments.} \label{TabParam} \end{center} \end{table} As can be seen, by default we set $\kappa_R = \sigma_R = 0$, and $R=0.45$. This allows us to mimic the constant recovery rate that was used in \cite{isvIJTAF,Brigo}. Therefore, this default set provides almost the same conditions as the default set in \cite{isvIJTAF}. Here ``almost" means that the domestic interest rate $r$ in \cite{isvIJTAF} is stochastic, while here it is deterministic, but with the same initial value $r=0.02$. As follows from the results of \cite{isvIJTAF}, the domestic interest rate has a minor influence on the par spread; therefore, this comparison is legitimate. We also assume that coupons of the CDS contract are paid semi-monthly, thus the total number of coupon payments over the life of the CDS contract is $m=120$. In what follows, let us denote the CDS par spread found by using the first contract (traded in the foreign economy) as $s$, and that of the second one (traded in the domestic economy) as $s_d$. Hence, the quanto spread (or the quanto impact) is determined as the difference between these two spreads \begin{equation} \Delta s = s - s_d, \end{equation} \noindent which below, in agreement with the notation in \cite{Brigo}, we call the ``basis" spread. \subsection{Parameters of the method} The shape parameter $\epsilon$ of the Gaussian RBF is the subject of a separate choice, see, e.g., \cite{Fornberg2}. In our numerical experiments it is always $\epsilon = 2h$.
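To illustrate how the Gaussian RBF and the shape parameter enter the spatial discretization, the following sketch computes RBF--FD weights for a second derivative on a three-point 1D stencil. This is a schematic illustration only, not the paper's 4D implementation; the function name is ours.

```python
import numpy as np

def gaussian_rbf_fd_weights(x, x0, eps):
    """RBF-FD weights approximating d^2/dx^2 at x0 on the stencil x,
    using the Gaussian RBF phi(r) = exp(-eps^2 r^2)."""
    phi = lambda r: np.exp(-(eps * r) ** 2)
    # second derivative of phi(|x - x_j|) with respect to x, evaluated at x0
    d2phi = lambda d: (4.0 * eps**4 * d**2 - 2.0 * eps**2) * np.exp(-(eps * d) ** 2)
    A = phi(np.abs(x[:, None] - x[None, :]))  # local RBF interpolation matrix
    b = d2phi(x0 - x)                         # operator applied to each basis function
    return np.linalg.solve(A, b)

h = 0.1
stencil = np.array([-h, 0.0, h])
w = gaussian_rbf_fd_weights(stencil, 0.0, eps=2 * h)  # shape parameter eps = 2h
approx = w @ np.exp(stencil)  # approximates u''(0) = 1 for u(x) = exp(x)
```

With $\epsilon = 2h$ the Gaussian is nearly flat over the stencil, so the resulting weights stay close to the classical central-difference weights $[1,-2,1]/h^2$ while retaining the RBF correction terms.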
As $(R, \hatr, y, z) \in [0,1] \times [0,\infty) \times (-\infty,0] \times [0,\infty)$, we truncate each semi-infinite domain of definition sufficiently far away from the evaluation point, so that the error introduced by this truncation is relatively small. In particular, we use $0 \le \hatr \le 1$, $0 \le z \le 4$, $-6 \le y \le 0$. In the case of jumps, the right boundary for $\hatr$ ($z$) is extended (truncated) by multiplying it by $1+\gamma_\hatr$ (respectively, $1+\gamma_z$). Furthermore, the temporal step size for marching along time is chosen as $\Delta_t=0.05$. We move the boundary conditions, defined in \eqref{bc}, to the boundaries of this truncated domain. Implementation-wise, the boundary conditions are explicitly incorporated into the pricing scheme. Hence, the latter can be implemented uniformly with no extra check that the boundary conditions are satisfied. As follows from Section~\ref{bcSec}, for the values of the model parameters given in Table~\ref{TabParam}, no boundary condition is required at the boundaries $R=0, R=1$ and $\hatr = 0$. As long as $\gamma_z$ or $\gamma_\hatr$ are positive, the intermediate interpolation for constructing $\hat u$ (as described in Section~\ref{zcbPrice}) would encounter values outside of the truncated (computational) domain. In such a case, instead of interpolation, an extrapolation technique is applied by Mathematica 12.0 automatically. Accordingly, the command $\mathtt{Interpolation[]}$ is used throughout the code as much as required. In all our numerical experiments we use a grid of $10 \times 10 \times 10 \times 10$ discretization points uniformly spanning the $(R,\hatr,y,z)$ space. The temporal step size of the RK4 integrator is taken to be 0.0625. All computations were done on a laptop with 16.0 GB of RAM, Windows 10, an Intel(R) Core(TM) i7-9750H CPU and SSD internal memory. To speed up calculations as much as possible, we do the following.
We set $\mathtt{PrecisionGoal -> 5}$, $\mathtt{AccuracyGoal -> 5}$ in our code to speed up solving the large scale system of the discretized ODEs. Also, all matrices are created as sparse arrays. The coefficient matrix (the right-hand side matrix of the MOL method) is built by using Kronecker products on tensor product grids. The boundary conditions are incorporated into this matrix in the same way. Finally, to compute the Riemann--Stieltjes sums in \eqref{eqParSpread} we use a built-in parallelization technique inside the Mathematica kernel to distribute the computational job over all available cores of our computer. The typical elapsed time of computing the par spread using this procedure is roughly 210 seconds. \subsection{Benchmarks} Since in our test $\kappa_R = \sigma_R = 0$ and $R = \mathrm{const}$, computation of $s_d$ can be done by solving a one-dimensional problem where the only stochastic driver is $y$. Therefore, we also implemented our algorithm for solving \eqref{PDE1}, \eqref{PDE2} by using a finite-difference method (the Crank--Nicolson scheme, \cite{fdm2000}). The value of $s_d$ found this way is compared with that obtained by using the full RBF-FD 4D algorithm where we set $z=1, \hatr = r, \gamma_z = \kapr = \sigr = \gamma_\hatr = 0$, and all the correlations vanish. The results are presented in Table~\ref{TabBench}, first two columns. \begin{table}[!htb] \centering \begin{tabular}{|r|r|r|r|r|} \hline \multicolumn{2}{|c|}{$\kappa_y$ =0.0001, $\sigma_y$ = 0.4} & \multicolumn{3}{c|}{$\kappa_y = \sigma_y$ = 0} \\ \hline 4D & 1D & 4D & 1D & Asym \\ \hline 102.68 & 102.8 & 91.73 & 94.5 & 92.2 \\ \hline \end{tabular} \caption{Par spreads $s_d$ in bps obtained in our benchmark tests.} \label{TabBench} \end{table} Also, it is known that if the premium leg is paid continuously and the hazard rate $\lambda$ is constant, there exists a credit triangle relationship $s_d = \lambda (1-R)$.
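The credit-triangle benchmark is easy to reproduce directly. The sketch below assumes (consistently with the ``Asym" value in Table~\ref{TabBench}) that the constant hazard rate is $\lambda = e^{y}$ with the default log-intensity $y = -4.089$ of Table~\ref{TabParam}, and $R = 0.45$:

```python
from math import exp

# Credit-triangle sanity check: s_d = lambda * (1 - R).
# Assumption: the hazard rate is lambda = e^y with y = -4.089 (Table TabParam).
lam = exp(-4.089)                # constant hazard rate, ~0.0168
R = 0.45
s_d_bps = lam * (1.0 - R) * 1e4  # par spread in basis points
print(round(s_d_bps, 1))         # prints 92.2, the "Asym" value in Table TabBench
```

This one-line check is exactly the ``Asym" column that the 4D and 1D solvers are benchmarked against below.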
To mimic this in our calculations, we set $\kappa_y = \sigma_y = 0$ and repeat the calculations of $s_d$ by using the 4D RBF-FD and 1D FD algorithms. These results are compared with the credit triangle value and are also presented in Table~\ref{TabBench}, the last three columns. Also, the obtained values are close to those reported in \cite{Brigo}. It can be seen that the 4D implementation produces results close to the benchmark values (they differ by no more than 0.5\%). In the absence of jumps, i.e., when $\gamma_z = \gamma_\hatr = 0$, the computed value of $s$ corresponding to the model parameters in Table~\ref{TabParam} is 93.22 bps. Thus, it deviates from $s_d$ by a basis of $-9.46$ bps. In other words, with no jumps or correlations the quanto effect on the CDS spread is small (this was also observed in \cite{isvIJTAF,Brigo}). \begin{figure}[h!] \captionsetup[subfigure]{justification=centering} \centering \subcaptionbox{The behavior of $\Delta s$ as a function of $\gamma_\hatr$.\label{ds_gamma:a}} {\includegraphics[height=2.in]{Fig2Left}} \hfill \subcaptionbox{The behavior of $\Delta s$ as a function of $\gamma_z$.\label{ds_gamma:b}} {\includegraphics[height=2.in]{Fig2Right}} \caption{The influence of the jump-at-default amplitudes $\gamma_\hatr$ and $\gamma_z$ on the five-year CDS basis spread.} \label{ds_gamma} \end{figure} In Fig.~\ref{ds_gamma} the influence of the jump-at-default amplitudes $\gamma_z$ (Fig.~\ref{ds_gamma:b}) and $\gamma_\hatr$ (Fig.~\ref{ds_gamma:a}) on the basis CDS spread is presented. These plots are similar to those in Fig.~4 in \cite{isvIJTAF}. In our case the influence of $\gamma_\hatr$ is a bit less significant (by about 5 bps), and the influence of $\gamma_z$ is also less significant ($-80$ bps here vs $-300$ bps at $\gamma_z = -0.8$).
This can be explained by the difference in $s_d$ (102.68 bps here vs 365 bps in \cite{isvIJTAF}, which, in turn, can be attributed to the stochastic interest rate $r$ in \cite{isvIJTAF} vs the constant interest rate in this paper). The financial meaning of the behavior of $\Delta s$ under changes in $\gamma_\hatr$ and $\gamma_z$ is explained in \cite{isvIJTAF}. In the case when the credit triangle approximation is accurate, \cite{Brigo} obtains $s \approx (1+\gamma_z)s_d$. In words, this means that the CDS spread expressed in the foreign currency is approximately proportional to the reference USD spread, with the coefficient of proportionality $(1+\gamma_z)$. In turn, the coupon payments expressed in the foreign currency are lower. In Fig.~\ref{ds_gamma:b} we also present this dependence to see that the results obtained by using our model align with the above approximation at $\gamma_z \approx -0.8$ and $\gamma_z \approx -0.1$, but slightly deviate from this straight line at intermediate values of $\gamma_z$. Overall, these benchmark tests justify that our numerical approach provides reasonable values and behavior of the basis spread at some typical values of the model parameters. Therefore, further on we use the proposed model and numerical method to investigate the impact of the stochasticity of the recovery rate on the quanto effect. \subsection{Results} To recall, in all our calculations presented in this Section we use the default set of parameters given in Table~\ref{TabParam} unless different values of the parameters are explicitly specified. This also assumes no jumps-at-default ($\gamma_z = \gamma_\hatr = 0$) in the default set of parameters. \begin{figure}[h!] \centering \includegraphics[width=0.6\linewidth]{Fig3} \caption{The influence of the mean-reversion rate $\kappa_R$ on the five-year CDS basis spread.} \label{ds_kappaR} \end{figure} In the first test we look at the influence of $\kappa_R$ on the basis spread.
These results are presented in Fig.~\ref{ds_kappaR}. Note that changes in $\kappa_R$ also affect the value of $s_d$. It can be seen that an increase in $\kappa_R$ makes $\Delta s$ more negative. In other words, while a decrease in $\gamma_z$ (the jump in the FX rate) gives rise to a decrease of $\Delta s$, an increase of $\kappa_R$ has the same effect. This is not obvious in advance, since the increase of $\kappa_R$ also increases the recovery rate, and hence the CDS spread value decreases. However, it also decreases $s_d$. Finally, it turns out that $s$ drops faster than $s_d$, and, therefore, the difference $\Delta s = s-s_d$ decreases. We would also expect a pronounced influence of the recovery volatility $\sigma_R$ on the value of the basis spread. However, surprisingly, we did not find such an influence. In other words, in the absence of correlations, the magnitude of the second derivatives $\sop{u}{R}$ and $\sop{v}{R}$ is very small. However, this volatility also enters the various mixed derivative terms. Therefore, our next test is to look at the influence of $\sigma_R$ on $\Delta s$ when the corresponding correlations do not vanish. Financially, at default the FX rate drops by the factor $1+\gamma_z$. Therefore, a decrease in $z$ signals a loss of creditworthiness. As such, the recovery rate should also drop, hence the correlation $\rho_{z,R}$ should be positive. When the default log-intensity $y$ increases, the default becomes more probable, and the recovery should decrease. Hence, $\rho_{y,R}$ should be negative. Similarly, as we assume that at default the foreign interest rate increases, $\rho_{\hatr,R}$ should be negative. \begin{figure}[h!]
\captionsetup[subfigure]{justification=centering} \centering \subcaptionbox{The behavior of $\Delta s$ as a function of $\sigma_R$ for $\rho_{z,R} > 0$.\label{ds_sigmaR:a}} {\includegraphics[height=2in]{Fig4Left}} \hfill \subcaptionbox{The behavior of $\Delta s$ as a function of $\sigma_R$ for $\rho_{y,R} < 0$.\label{ds_sigmaR:b}} {\includegraphics[height=2in]{Fig4Right}} \caption{The influence of $\sigma_R$ on the five-year CDS basis spread when $\rho_{z,R}$ and $\rho_{y,R}$ do not vanish.} \label{ds_rhoR_1} \end{figure} In Fig.~\ref{ds_rhoR_1} we present the influence of $\sigma_R$ on the five-year CDS basis spread when the correlation $\rho_{z,R}$ takes the values 0.1 and 0.8, and $\rho_{y,R}$ takes the values -0.1 and -0.8. It can be seen that with a high negative correlation $\rho_{y,R}$, an increase in $\sigma_R$ results in a slow negative increase of $\Delta s$ (Fig.~\ref{ds_sigmaR:b}), while the influence of $\sigma_R$ at a non-zero positive correlation $\rho_{z,R}$ looks similar and a bit more pronounced. \begin{figure}[h!] \centering \includegraphics[width=0.6\linewidth]{Fig5} \caption{The influence of $\sigma_R$ on the five-year CDS basis spread when $\rho_{R,\hatr}$ takes the values -0.1 and -0.8.} \label{ds_rhoR_2} \end{figure} The same is true when the correlation $\rho_{R,\hatr}$ takes the values -0.1 and -0.8, as presented in Fig.~\ref{ds_rhoR_2}. Thus, among all correlations the biggest impact can be attributed to $\rho_{z,R}$. The above results have been obtained with no jumps. Obviously, combining jumps and non-zero correlations will produce a joint effect. This is illustrated in Fig.~\ref{ds_rhoRz_gamma}, where we present the results of the same test as in Fig.~\ref{ds_sigmaR:a} but with $\gamma_z = -0.5$. Although the value of the basis changes (due to the change in $\gamma_z$), the influence of $\sigma_R$ remains small. \begin{figure}[h!]
\centering \includegraphics[width=0.6\linewidth]{Fig6} \caption{The influence of $\sigma_R$ on the five-year CDS basis spread when $\rho_{R,z}$ takes the values -0.1 and -0.8, and $\gamma_z = -0.5$.} \label{ds_rhoRz_gamma} \end{figure} It turns out that the same is true when one adds jumps to the foreign interest rate. This has a relatively minor effect on the dependence of $\Delta s$ on $\sigma_R$: the curves just move up by about 5 bps due to the change in $\gamma_\hatr$. This behavior is presented in Fig.~\ref{ds_rhoRr_gamma}. \begin{figure}[h!] \centering \includegraphics[width=0.6\linewidth]{Fig7} \caption{The influence of $\sigma_R$ on the five-year CDS basis spread when $\rho_{R,\hatr}$ takes the values -0.1 and -0.8, and $\gamma_\hatr = 4$.} \label{ds_rhoRr_gamma} \end{figure} \section{Conclusion}\label{sec:Conclusion} In this paper we introduce another modification of the model for pricing Quanto CDS that was developed first in \cite{Brigo} and then in \cite{isvIJTAF}. Similar to \cite{isvIJTAF}, the presented model operates with four stochastic factors, which are the hazard rate (more rigorously, the log-intensity of default), the foreign exchange and interest rates, and the recovery rate. The latter is new, as in both \cite{Brigo, isvIJTAF} it was assumed to be constant. By using a PDE approach we derive the corresponding systems of PDEs for pricing both the risky bond and the CDS spread, similar to how this is done in \cite{BieleckiPDE2005,isvIJTAF}. We make a careful analysis of the suitable boundary and terminal conditions, and show at which boundaries the boundary conditions should be omitted (similar to the well-known Feller condition). To solve these systems of 4D PDEs we use a different flavor of the RBF method, which is a combination of the localized RBF and finite-difference methods and is known in the literature as RBF--FD.
As compared with the corresponding Monte Carlo or FD methods, in our four-dimensional case this method provides high accuracy and uses far fewer resources. We use the backward PDE approach, which is slower than the corresponding forward approach but is suitable for parallelization. Before investigating the influence of the stochastic recovery rate on the Quanto CDS spread, we benchmark our model and method against the results of \cite{Brigo, isvIJTAF}. We show that the approach described in this paper provides results close to those in \cite{Brigo,isvIJTAF} when the same set of parameters (the constant recovery, values of parameters, etc.) is used. This allows us to inherit and incorporate the results of \cite{isvIJTAF} into our framework, and to concentrate just on the new features of the model. As shown in \cite{Brigo,isvIJTAF}, these flavors of models are capable of qualitatively explaining the differences in the market values of CDS spreads traded in foreign and domestic economies (accordingly, they are denominated in the foreign and domestic (USD) currencies). To recall, both CDS contracts traded in the different economies can also be priced in the same currency, e.g., in USD. In this case the market demonstrates a spread between these two prices, which is called the quanto spread. The existence of the quanto spread can, to a great extent, be explained by the devaluation of the foreign currency. Financially, this means a much lower protection payout if converted to US dollars. As compared with \cite{Brigo,isvIJTAF}, in this paper we analyze the impact of the stochastic recovery rate on the basis spread $\Delta s$. We found that changes in the recovery mean-reversion rate $\kappa_R$ moderately affect the value of $\Delta s$ even with no jump-at-default in the FX or foreign interest rate, while changes in the recovery volatility $\sigma_R$ have almost no impact.
The influence becomes more perceptible if one takes into account various correlations of the recovery rate, namely $\rho_{z,R}, \rho_{y,R}, \rho_{\hatr,R}$. Also, for non-zero correlations the dependence of $\Delta s$ on $\sigma_R$ becomes observable. Overall, in our setting the maximum impact is about 10 bps, or 10\%. These results have been obtained with no jumps. Obviously, it is expected that combining jumps and non-zero correlations will produce a joint effect. However, we found that while the jumps in $z$ definitely move the curves down (so the basis spread negatively increases), the dependences of $\Delta s$ on $\sigma_R$ and $\kappa_R$ remain almost the same as at $\gamma_z = 0$. A similar picture can be observed for jumps in $\hatr$. Thus, we observe an influence of the stochastic recovery on the quanto spread not exceeding 10\%. This, however, is almost the same order of influence as the jumps-at-default in $\hatr$ produce, but they work in different directions. The proposed model together with the obtained results is new and represents the main contribution of this paper. Also, although the numerical method has already been described in the authors' previous papers (both joint and separate), its application to solving the system of 4D PDEs derived in this paper is new (also, new boundary conditions are used as compared with, e.g., \cite{FazItkin2019}). Finally, in this paper the method was parallelized to achieve the best efficiency. At the end we have to mention that calibration of this model is computationally expensive because of the large number of model parameters. As explained in \cite{isvIJTAF}, it can be split into a few steps, where at every step we calibrate just a subset of the parameters, and each subset could be calibrated by using different financial instruments. \section*{Acknowledgments} We thank Damiano Brigo and Victor Shcherbakov for valuable discussions on the impact of the recovery rate and the RBF method.
We assume full responsibility for any remaining errors. \vspace{0.5in}
\section{Introduction} The aim of this article is to study two-player zero-sum general repeated games with signals (sometimes called ``stochastic games with partial observation''). At each stage, each player chooses some action in a finite set. This generates a stage reward; then a new state and new signals are randomly chosen through a transition probability depending on the current state and actions, and with finite support. Shapley \cite{Shapley53} studied the special case of standard stochastic games, where the players observe, at each stage, the current state and the past actions. There are several ways to analyze these games. We will distinguish two approaches: Borelian evaluation and uniform value. In this article, we will mainly use a point of view coming from the literature on the determinacy of multistage games (Gale and Stewart \cite{Gale53}). One defines a function, called an \textit{evaluation}, on the set of plays (infinite histories) and then studies the existence of a value in the normal form game where the payoff is given by the expectation of the evaluation with respect to the probability induced by the strategies of the players. Several evaluations will be considered. In the initial model of Gale and Stewart \cite{Gale53} of two-person zero-sum multistage games with perfect information, there is no state variable. The players choose, one after the other, an action from a finite set, and both observe the previous choices. Given a subset $A$ of the set of plays (in this framework, infinite sequences of actions), player $1$ wins if and only if the actual play belongs to the set $A$: the payoff function is the indicator function of $A$. Gale and Stewart proved that the game is determined in the case where $A$ is open or closed with respect to the product topology: either player $1$ has a winning strategy or player $2$ has a winning strategy.
This result was then extended to more and more general classes of sets, until Martin \cite{Martin75} proved determinacy for every Borel set. When $A$ is an arbitrary subset of plays, Gale and Stewart \cite{Gale53} showed that the game may not be determined. In $1969$, Blackwell \cite{Blackwell69} studied the case (still without a state variable) where the players play simultaneously and are told their choices. Due to the lag of information, the determinacy problem is not well defined. Instead, one investigates the probability that the play belongs to some subset $A$. When $A$ is a $G_{\delta}$-set, a countable intersection of open sets, Blackwell proved that there exists a real number $v$, the \textit{value} of the game, such that for each $\varepsilon>0$, player $1$ can ensure that the probability of the event ``the play is in $A$'' is greater than $v-\varepsilon$, whereas player $2$ can ensure that it is less than $v+\varepsilon$. The extension of this result to Shapley's model (i.e., with a state variable) was done by Maitra and Sudderth. They focus on the specific evaluation where the payoff is the largest stage reward obtained infinitely often. They prove the existence of a value, called the \textit{limsup value}, in the countable framework \cite{Maitra92}, in the Borelian framework \cite{Maitra93a}, and in a finitely additive setting \cite{Maitra93b}. In the first two cases, they assume some finiteness of the action set (for one of the players). Their result applies in particular to finite stochastic games where the global payoff is the limsup of the mean expected payoff. Blackwell's existence result was generalized by Martin \cite{Martin98} to any Borel-measurable evaluation, whereas Maitra and Sudderth \cite{Maitra98} extended it further to stochastic games in the finitely additive setting. In all these results, the players observe the past actions and the current state.
Another notion used in the study of stochastic games (where a play generates a sequence of rewards) is the uniform value, where some uniformity condition is required. Basically, one looks at the largest amount that can be obtained by a given strategy for a family of evaluations (corresponding to longer and longer games). There are examples where the uniform value does not exist: Lehrer and Sorin \cite{Lehrer92} describe such a game with a countable set of states and only one player, having a finite action set. On the other hand, Rosenberg, Solan and Vieille \cite{Rosenberg2002} proved the existence of the uniform value in partial observation Markov decision processes (one player) when the set of states and the set of actions are finite. This result was extended by Renault \cite{Renault2011} to a general action space. The case of stochastic games with standard signaling, that is, where the players observe the state and the actions played, has been treated by Mertens and Neyman \cite{Mertens81}. They proved the existence of a uniform value for games with a finite set of states and finite sets of actions. In fact, their proof also shows the existence of a value for the limsup of the mean payoff, as studied by Maitra and Sudderth, and that both values are equal. The aim of this paper is to provide new existence results when the players observe only signals on the state and actions. In Section~\ref{model}, we define the model and present several specific Borel evaluations. We then prove the existence of a value in games where the evaluation of a play is the largest stage reward obtained along it, called the $\mathit{sup}$ evaluation, and study several examples where the limsup value does not exist. Section~\ref{secsymmetric} is the core of this paper. We focus on the case of a symmetric signaling structure: multistage games where both players have the same information at each stage, and prove that a value exists for any Borel evaluation.
For the proof, we introduce an auxiliary game where the players observe the state and the actions played, and we apply the generalization of Martin's result to standard stochastic games. Finally, in Section~\ref{uniform}, we formally introduce the notion of uniform value and prove its existence in recursive games with nonnegative payoffs. \section{Repeated game with signals and Borel evaluation}\label{model} Given a set $X$, we denote by $\Delta_f(X)$ the set of probabilities with finite support on $X$. For any element $x\in X$, $\delta_x$ stands for the Dirac measure concentrated on $x$. \subsection{Model} A \textit{repeated game form with signals} $\Gamma=(X,I,J,C,D,\pi, q)$ is defined by a set of states $X$, two finite sets of actions $I$ and $J$, two sets of signals $C$ and $D$, an initial distribution $\pi\in\Delta_f(X \times C \times D)$ and a transition function $q$ from $X\times I \times J$ to $\Delta_f(X\times C \times D)$. A \textit{repeated game with signals} $(\Gamma,g)$ is a pair consisting of a repeated game form and a reward function $g$ from $X \times I \times J$ to $[0,1]$. This corresponds to the general model of repeated games introduced in Mertens, Sorin and Zamir \cite{Mertens94}. The game is played as follows. First, a triple $(x_1,c_1,d_1)$ is drawn according to the probability $\pi$. The initial state is $x_1$, player $1$ learns $c_1$ whereas player~$2$ learns~$d_1$. Then, independently, player $1$ chooses an action $i_1$ in $I$ and player~$2$ chooses an action $j_1$ in $J$. A new triple $(x_2,c_2,d_2)$ is drawn according to the probability distribution $q(x_1,i_1,j_1)$; the new state is $x_2$, player $1$ learns $c_2$, player~$2$ learns $d_2$, and so on. At each stage $n$, the players choose actions $i_n$ and $j_n$, and a triple $(x_{n+1},c_{n+1},d_{n+1})$ is drawn according to $q(x_n,i_n,j_n)$, where $x_n$ is the current state, inducing the signals received by the players and the state at the next stage.
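The stage dynamics just described can be sketched in code. The following is a minimal simulation under invented toy data: the two-state transition rule, the signal alphabets and the pure strategies below are assumptions for illustration only, not part of the formal model.

```python
import random

# Toy repeated game form with signals (all data hypothetical):
# states X = {0, 1}, actions I = J = {0, 1}, signals C = D = {"a", "b"}.
X, I, J = [0, 1], [0, 1], [0, 1]

# pi and q(x, i, j) are finitely supported lotteries on
# (next state, signal to player 1, signal to player 2).
pi = [(1.0, (0, "a", "a"))]

def q(x, i, j):
    nxt = 1 - x if i == j else x      # hypothetical transition rule
    c = "a" if nxt == 0 else "b"      # player 1's signal depends on the new state
    d = "a" if i == j else "b"        # player 2's signal reveals whether actions matched
    return [(1.0, (nxt, c, d))]

def draw(lottery):
    """Sample an outcome from a finitely supported lottery [(prob, outcome), ...]."""
    r, acc = random.random(), 0.0
    for p, outcome in lottery:
        acc += p
        if r < acc:
            return outcome
    return lottery[-1][1]

def play(n_stages, sigma, tau):
    """Run n_stages; sigma and tau map a player's private history to an action."""
    x, c, d = draw(pi)
    h1, h2, states = [c], [d], [x]    # perfect recall of own signals and actions
    for _ in range(n_stages):
        i, j = sigma(tuple(h1)), tau(tuple(h2))
        x, c, d = draw(q(x, i, j))
        h1 += [i, c]
        h2 += [j, d]
        states.append(x)
    return states, h1, h2
```

For instance, `play(5, lambda h: 0, lambda h: 1)` runs five stages with constant pure strategies; each player's history alternates own signals and own actions, as in the definition of $H^1_n$ and $H^2_n$ below.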
For each $n\geq1$, we denote by $H_n=(X \times C \times D \times I \times J)^{n-1} \times X \times C \times D$ the set of \textit{finite histories} of length $n$, by $H^1_n=(C \times I )^{n-1} \times C$ the set of histories of length $n$ for player $1$ and by $H^2_n=(D \times J )^{n-1} \times D$ the set of histories of length $n$ for player $2$. Let $H = \bigcup_{n\geq1} H_n$. Assuming perfect recall, a \textit{behavioral strategy} for player 1 is a sequence $\sigma=(\sigma_n)_{n \geq1}$, where $\sigma_n$, the strategy at stage $n$, is a mapping from $H^1_n$ to $\Delta(I)$, with the interpretation that $\sigma_n(h)$ is the lottery on actions used by player 1 after $h\in H^1_n$. In particular, the strategy $\sigma_1$ at stage $1$ is simply a mapping from $C$ to $\Delta(I)$ giving the law of the first action played by player 1 as a function of his initial signal. Similarly, a \textit{behavioral strategy} for player 2 is a sequence $\tau =(\tau_n)_{n \geq1}$, where $\tau_n$ is a mapping from $H^2_n$ to $\Delta(J)$. We denote by $\Sigma$ and $\mathcal{T}$ the sets of behavioral strategies of player 1 and player 2, respectively. If for every $n\geq1$ and $h\in H^1_n$, $\sigma_n(h)$ is a Dirac measure, then the strategy is \textit{pure}. A \textit{mixed strategy} is a distribution over pure strategies. Note that since the initial distribution $\pi$ and the transition $q$ have finite support and the sets of actions are finite, there exists a finite subset $H^0_n \subset H_n$ such that for all strategies $(\sigma,\tau)$ the set of histories that are reached at stage $n$ with positive probability is included in $H^0_n$. Hence, no additional measurability assumptions on the strategies are needed.
It is standard that a pair of strategies $(\sigma, \tau)$ induces a probability $\P_{\sigma, \tau}$ on the set of \textit{plays} $H_\infty=(X\times C \times D \times I \times J)^{\infty}$ endowed with the $\sigma$-algebra $\mathcal{H}_\infty$ generated by the cylinders above the elements of $H$. We denote by $\mathbb{E}_{\sigma,\tau}$ the corresponding expectation. Historically, the first models of repeated games assumed that both $c_{n+1}$ and $d_{n+1}$ determine $(i_n, j_n)$ (standard signaling on the moves, also called ``full monitoring''). A \textit{stochastic game} corresponds to the case where in addition the state is known: both $c_{n+1}$ and $d_{n+1}$ contain $x_{n+1}$. A \textit{game with incomplete information} corresponds to the case where in addition the state is fixed: $x_n= x_1$ for all $n$, but not necessarily known by the players. Several extensions have been proposed and studied; see, for example, Neyman and Sorin \cite{Neyman03}, in particular Chapters~3, 21, 25 and 28. It has been noticed since Kohlberg and Zamir \cite{Kohlberg74b} that games with incomplete information, when the information is \textit{symmetric}: $c_{n+1}= d_{n+1}$ and contains $(i_n, j_n)$, could be analyzed by introducing an auxiliary stochastic game. However, the state variable in this auxiliary stochastic game is no longer $x_n \in X$ but the (common) conditional probability on $X$ given the signals, which can be computed by the players: namely the law of $x_n$ in $\Delta(X)$. Since then, this approach has been extended; see, for example, Sorin \cite{Sorin2003b} and Ghosh et al. \cite{Gosh}. The analysis in the current article shows that general repeated games with symmetric information are the natural extension of standard stochastic games. \subsection{Borel evaluation and results} We now describe several ways to evaluate each play and the corresponding concepts. We follow the multistage game determinacy literature and define an evaluation function $f$ on infinite plays.
Then we study the existence of the value of the normal form game $(\Sigma,\mathcal{T},f)$. We will consider four evaluations in particular: the general Borel evaluation, the sup evaluation, the limsup evaluation and the limsup-mean evaluation. A Borel \textit{evaluation} is a $\mathcal{H}_\infty$-measurable function from the set of plays $H_\infty$ to $[0,1]$. \begin{definition} Given an evaluation $f$, the game $\Gamma$ has a \textit{value} if \[ \sup_\sigma\inf_{\tau} \mathbb{E}_{\sigma,\tau} ( f )=\inf_\tau\sup_\sigma \mathbb{E}_{\sigma,\tau} ( f ). \] This real number is called the value and denoted by $v(f)$. \end{definition} Given a repeated game $(\Gamma,g)$, we will study several specific evaluations defined through the stage payoff function $g$. \subsubsection{Borel evaluation: $\sup$ evaluation}\label{bor2} The first evaluation is the supremum evaluation where a play is evaluated by the largest payoff obtained along it. \begin{definition} $\gamma^s$ is the \textit{sup} evaluation defined by \[ \forall h\in H_\infty, \qquad {\gamma^s}(h)=\sup_{n\geq1} g(x_n,i_n,j_n). \] In $(\Sigma,\mathcal{T},\gamma^s)$, the $\max\min$, the $\min\max$, and the value (called the \textit{sup value} if it exists) are, respectively, denoted by $\underline{v}^s$, $\overline{v}^s$ and $v^s$. \end{definition} The specificity of this evaluation is that for every $n\geq1$, the maximal stage payoff obtained before $n$ is a lower bound of the evaluation on the current play. We prove that the $\sup$ value always exists. \begin{theorem}\label{sup} A repeated game $(\Gamma, g)$ with the $\sup$ evaluation has a value $v^s$. \end{theorem} For the proof, we use the following result. We call \textit{strategic evaluation} a function $F$ from $\Sigma\times\mathcal{T}$ to $[0,1]$. It is clear that an evaluation $f$ naturally induces a strategic evaluation by $F(\sigma,\tau)=\mathbb{E}_{\sigma,\tau} ( f )$.
\begin{proposition}\label{FF} Let $(F_n)_{n\geq1}$ be an increasing sequence of strategic evaluations from $\Sigma\times\mathcal{T}$ to $[0,1]$ that converges to some function $F$. Assume that: \begin{itemize} \item $\Sigma$ and $\mathcal{T}$ are compact convex sets, \item for every $n\geq1$, $F_n(\sigma,\cdot)$ is lower semicontinuous and quasiconvex on $\mathcal{T}$ for every $\sigma\in\Sigma$, \item for every $n\geq1$, $F_n(\cdot,\tau)$ is upper semicontinuous and quasiconcave on $\Sigma$ for every $\tau\in\mathcal{T}$. \end{itemize} Then the normal form game $(\Sigma,\mathcal{T},F)$ has a value $v$. \end{proposition} A more general version of this proposition can be found in Mertens, Sorin and Zamir \cite{Mertens94} (Part A, Exercise 2, Section~1.f, page~10). \begin{pf*}{Proof of Theorem~\protect\ref{sup}} Let $n\geq1$ and define the strategic evaluation $F_n$ by \[ F_n(\sigma,\tau)= \mathbb{E}_{\sigma,\tau} \Bigl(\sup_{t\leq n} g(x_t,i_t,j_t) \Bigr). \] Players remember their own previous actions, so by Kuhn's theorem \cite{Kuhn53}, mixed strategies and behavioral strategies are equivalent. The sets of mixed strategies are naturally convex. The set of histories of length $n$ having positive probability is finite and, therefore, the set of pure strategies is finite. For every $n\geq1$, the function $F_n$ is thus the linear extension of a finite game. In particular, $F_n(\sigma,\cdot)$ is lower semicontinuous and quasiconvex on $\mathcal{T}$ for every $\sigma\in\Sigma$, and $F_n(\cdot,\tau)$ is upper semicontinuous and quasiconcave on $\Sigma$ for every $\tau\in\mathcal{T}$. Finally, the sequence $(F_n)_{n\geq1}$ increases to \[ F(\sigma,\tau)=\mathbb{E}_{\sigma,\tau} \Bigl(\sup_{t} g(x_t,i_t,j_t) \Bigr). \] It follows from Proposition~\ref{FF} that the game $\Gamma$ with the $\sup$ evaluation has a value.
\end{pf*} \subsubsection{Borel evaluation: $\operatorname{limsup}$ evaluation} Several authors have especially focused on the $\operatorname{limsup}$ evaluation and the $\operatorname{limsup}$-mean evaluation. \begin{definition} $\gamma^*$ is the \textit{limsup} evaluation defined by \[ \forall h\in H_\infty, \qquad \gamma^*(h)=\limsup_n g(x_n,i_n,j_n). \] In $(\Sigma,\mathcal{T},\gamma^*)$, the $\max\min$, the $\min\max$, and the value (called the \textit{limsup value}, if it exists) are, respectively, denoted by $\underline{v}^*$, $\overline{v}^*$ and $v^*$. \end{definition} \begin{definition} $\gamma_m^*$ is the \textit{limsup-mean} evaluation defined by \[ \forall h\in H_\infty,\qquad \gamma_m^*(h)=\limsup_n \frac{1}{n}\sum_{t=1}^n g(x_t,i_t,j_t). \] In $(\Sigma,\mathcal{T},\gamma_m^*)$, the $\max\min$, the $\min\max$, and the value (called the \textit{limsup-mean value}, if it exists) are, respectively, denoted by $\underline{v}_m^*$, $\overline{v}_m^*$ and $v_m^*$. \end{definition} The limsup-mean evaluation is closely related to the limsup evaluation. Indeed, the analysis of the limsup-mean evaluation of a stochastic game can be reduced to the study of the limsup evaluation of an auxiliary stochastic game having as set of states the set of finite histories of the original game. These evaluations were especially studied by Maitra and Sudderth \cite{Maitra92,Maitra93a}. In \cite{Maitra92}, they proved the existence of the limsup value in a stochastic game with a countable set of states and finite sets of actions when the players observe the state and the actions played. They then extended this result in \cite{Maitra93a} to Borel-measurable evaluations. We aim to study potential extensions of their results to repeated games with signals. In general, a repeated game with signals has no value with respect to the limsup evaluation, as shown in the following three examples. In each case, we also show that the limsup-mean value does not exist.
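To see how these evaluations can differ on the same play, consider (as a toy illustration of the definitions, not part of the examples below) two streams of stage rewards:

```latex
% An early reward that never recurs:
g_n = 1,0,0,0,\ldots \quad\Longrightarrow\quad
\gamma^s=1,\qquad \gamma^*=0,\qquad \gamma_m^*=0;
% Alternating rewards:
g_n = 1,0,1,0,\ldots \quad\Longrightarrow\quad
\gamma^s=1,\qquad \gamma^*=1,\qquad \gamma_m^*=\tfrac{1}{2}.
```

In general $\gamma_m^*\leq\gamma^*\leq\gamma^s$ on every play, since the limsup of Ces\`aro means is at most the limsup of the terms, which is at most their supremum.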
\begin{example}\label{guess} We consider a recursive game where the players observe neither the state nor the action played by the other player. We say that the players are \textit{in the dark}. This example, due to Shmaya, is also described in Rosenberg, Solan and Vieille \cite{Rosenberg2009} and can be interpreted as ``pick the largest integer.'' The set of states is finite $X=\{s_1,s_2,s_3, 0^*, 1^*,-1^*,2^*,-2^*\} $, the action set of player $1$ is $I=\{T,B\}$, the action set of player $2$ is $J=\{L,R\}$, and the transition is given by \[ \begin{tabular}{cccc} & $ \begin{array}{c@{\quad}c} L & R \end{array} $ & $ \begin{array}{c@{\quad}c} L \hspace{14mm} & \hspace{13mm} R \end{array} $ & $ \begin{array}{c@{\quad}c} L & R \end{array} $ \\ $ \begin{array}{cc} T \\ B \end{array} $ &$\lleft( \begin{array}{c@{\quad}c} s_1 & -2^* \\ s_1 & -2^*\\ \end{array} \rright)$ & $\lleft( \begin{array}{c@{\quad}c} s_2 & 1/2\bigl(-1^*\bigr) +1/2 (s_3) \vspace*{2pt}\\ 1/2\bigl(1^*\bigr) +1/2 (s_1) & 0^*\\ \end{array} \rright)$ & $\lleft( \begin{array}{c@{\quad}c} s_3 & s_3 \\ 2^* & 2^*\\ \end{array} \rright)$ \\ & $s_1$ & $ s_2$ & $ s_3 $ \end{tabular}\hspace*{-6pt}. \] The payoff is $0$ in states $s_1$, $s_2$ and $s_3$. For example, if the state is $s_2$, player $1$ plays~$T$ and player $2$ plays $R$ then with probability $1/2$ the payoff is $-1$ forever, and with probability $1/2$ the next state is $s_3$. States denoted with a star are absorbing states: if state $k^*$ is reached, then the state is $k^*$ for the remainder of the game and the payoff is $k$. \begin{cl*}\label{cl1} The game which starts in $s_2$ has no limsup value: $\underline{v}^*= -1/2< 1/2=\overline{v}^*$. \end{cl*} Since the game is recursive, the limsup-mean evaluation and the limsup evaluation coincide, so there is no limsup-mean value either. It also follows that the uniform value, defined formally in Section~\ref{uniform}, does not exist.
\begin{pf*}{Proof of \hyperref[cl1]{Claim}} The situation is symmetric, so we consider what player $1$ can guarantee. After player $1$ plays $B$, the game is essentially over from player $1$'s viewpoint: either absorption occurs or the state moves to $s_1$ where player $1$'s actions are irrelevant. Therefore, the only relevant past history in order to define a strategy of player $1$ corresponds to all his past actions being $T$. A strategy of player $1$ is thus specified by the probability $\varepsilon_n$ to play $B$ for the first time at stage $n$; let $\varepsilon^*$ be the probability that player $1$ plays $T$ forever. Player 2 can reply as follows: fix $\varepsilon>0$, and consider $N$ such that\break $\sum_{n=N}^{\infty} \varepsilon_n \leq\varepsilon$. Define the strategy $\tau$ which plays $L$ until stage $N-1$ and $R$ at stage $N$. For any $n>N$, we have \[ \mathbb{E}_{s_2, \sigma, \tau} \bigl(g (x_n,i_n,j_n) \bigr)\leq \varepsilon ^*(-1/2)+ \Biggl( \sum_{n=1}^{N-1} \varepsilon_n \Biggr) (-1/2) + \varepsilon (1/2 ) \leq-1/2 + \varepsilon. \] It follows that player 1 cannot guarantee more than $-1/2$ in the limsup sense. \end{pf*} \end{example} \begin{example}\label{semiguess} We consider a recursive game where one player is more informed than the other: player $2$ observes the state variable and the past actions played whereas player $1$ observes neither the state nor the actions played. This structure of information has been studied, for example, by Rosenberg, Solan, and Vieille \cite{Rosenberg2004}, Renault \cite {Renault2012a} and Gensbittel, Oliu-Barton and Venel \cite {Gensbittel2013}. They proved the existence of the uniform value under the additional assumption that the more informed player controls the evolution of the beliefs of the other player on the state variable. 
The set of states is finite $X=\{s_2,s_3, 0^*, (-1/2)^*,-1^*,2^*\}$, the action set of player $1$ is $I=\{T,B\}$, the action set of player $2$ is $J=\{L,R\}$, and the transition is given by \[ \begin{tabular}{c@{\quad}c@{\quad}c} & $ \begin{array}{c@{\quad}c} L \hspace{13mm} & \hspace{7mm} R \end{array} $ & $ \begin{array}{c@{\quad}c} L & R \end{array} $\\ $ \begin{array}{cc} T \\ B \end{array} $ & $\lleft( \begin{array}{c@{\quad}c} s_2 & 1/2\bigl(-1^*\bigr) +1/2 (s_3) \vspace*{2pt}\\ (-1/2)^* & 0^*\\ \end{array} \rright)$ & $\lleft( \begin{array}{c@{\quad}c} s_3 & s_3 \\ 2^* & 2^*\\ \end{array} \rright)$ \\ & $s_2$ & $s_3$ \end{tabular}\hspace*{-6pt}. \] We focus on the game which starts in $s_2$. Both players can guarantee $0$ in the $\sup$ evaluation: player $2$ by playing $L$ forever and player $1$ by playing $T$ at the first stage and then $B$ forever. Since the game is recursive, the limsup-mean evaluation and the limsup evaluation coincide. \begin{cl*}\label{cl2} The game which starts in $s_2$ has no limsup value: $\underline{v}^*=-1/2<-1/6=\overline{v}^*$. \end{cl*} \begin{pf} The computation of the $\max\min$ with respect to the limsup-mean evaluation is similar to the computation of Example~\ref{guess}. The reader can check that player $1$ cannot guarantee more than $-1/2$. We now prove that the $\min\max$ is equal to $-1/6$. Contrary to Example~\ref{guess}, player $2$ observes the state and actions; nevertheless, from his point of view, the game is strategically over as soon as $B$ or $R$ is played: if $B$ is played then absorption occurs, if $R$ is played then either absorption occurs or the state moves to $s_3$ where player $2$'s actions are irrelevant. Therefore, when defining the strategy of player $2$ at stage $n$, the only relevant past history is $(s_2,T,L)^n$ and a strategy of player $2$ is defined by the probability $\varepsilon_n$ that he plays $R$ for the first time at stage $n$ and the probability $\varepsilon^*$ that he plays $L$ forever.
Fix $\varepsilon>0$, and consider $N$ such that $\sum_{n=N}^{\infty} \varepsilon_n \leq\varepsilon$. Player 1's replies can be reduced to the two following strategies: $\sigma_1$, which plays $T$ forever, and $\sigma_2$, which plays $T$ until stage $N-1$ and $B$ at stage $N$. Up to an $\varepsilon$-error, every other strategy yields a smaller payoff. The strategy $\sigma_1$ yields $0\cdot\varepsilon^* + (1-\varepsilon^*)(-1/2)$ and the strategy $\sigma_2$ yields $(-1/2)\varepsilon^* +(1-\varepsilon^*)(1/2)- \varepsilon$. These payoffs are, up to $\varepsilon$, those of the two-by-two game where player $1$ chooses $\sigma_1$ or $\sigma_2$ and player $2$ chooses either never to play $R$ or to play $R$ at least once: \[ \pmatrix{ 0 & -1/2 \vspace*{2pt}\cr -1/2 & 1/2}. \] The value of this game is $-1/6$ (both players put probability $2/3$ on their first strategy), giving the result. \end{pf} \end{example} \begin{example}\label{bigmatch} In the previous examples, the state is not known to at least one player. The following game is a variant of the Big Match introduced by Blackwell and Ferguson \cite{Blackwell68}. It is an absorbing game: every state except one is absorbing. Since there is only one state where players can influence the transition and the payoff, the knowledge of the state is irrelevant. Players can always consider that the current state is the nonabsorbing state. We assume that player $2$ observes the past actions played whereas player $1$ does not (in the original version, both player $1$ and player $2$ were observing the state and past actions): \[ \begin{array}{c@{\quad}c} & \hspace*{-2pt}\begin{array}{c@{\quad}c} L & R \\ \end{array} \\ \begin{array}{c} T \\ B \\ \end{array} & \lleft( \begin{array}{c@{\quad}c} 1^* & 0^* \\ 0 & 1 \\ \end{array} \rright).\\ \end{array} \] \begin{cl*}\label{cl3} The game with the sup evaluation has a value $v^s=1$.
The game with the limsup evaluation and the game with the limsup-mean evaluation do not have a value: $\underline{v}^*=\underline{v}_m^*=0 < 1/2=\overline{v}_m^*=\overline{v}^*$. \end{cl*} \begin{pf} We first prove the existence of the value with respect to the sup evaluation. Player $1$ can guarantee the payoff $1$. Let $\varepsilon>0$, and let $\sigma$ be the strategy which, at every stage, plays $T$ with probability $\varepsilon$ and $B$ with probability $1-\varepsilon$. This strategy yields a sup evaluation greater than $1-\varepsilon$. Since $1$ is the maximum payoff, it is the value: $v^s=1$. We now focus on the limsup evaluation and the limsup-mean evaluation. After player $1$ plays $T$, absorption occurs. Therefore, the only relevant past history in order to define a strategy of player $1$ corresponds to all his past actions being $B$. Let $\varepsilon_n$ be the probability that player $1$ plays $T$ for the first time at stage $n$ and $\varepsilon^*$ be the probability that player $1$ plays $B$ forever. Player 2 can reply as follows: fix $\varepsilon>0$, and consider $N$ such that $\sum_{n=N}^{\infty} \varepsilon_n \leq\varepsilon$. Define the strategy $\tau$ which plays $R$ until stage $N-1$ and $L$ at stage $N$. For any $n>N$, we have \[ \mathbb{E}_{s, \sigma, \tau} \bigl(g (x_n,i_n,j_n) \bigr)\leq \varepsilon^*\cdot0+ \Biggl( \sum_{n=1}^{N-1} \varepsilon_n \Biggr)\cdot0 + \varepsilon\cdot1 \leq\varepsilon. \] Let us compute what player $2$ can guarantee with respect to the limsup evaluation. The computation is similar for the limsup-mean evaluation. First, player $2$ can guarantee $1/2$ by playing the following mixed strategy: with probability $1/2$, play $L$ at every stage and with probability $1/2$, play $R$ at every stage. We now prove that it is the best payoff that player $2$ can achieve.
Fix a strategy $\tau$ for player $2$ and consider the law $\P$ on the set $\{L,R\}^{\infty}$ of infinite sequences of $L$ and $R$ induced by $\tau$ when player $1$ plays $B$ at every stage. Denote by $\beta_n$ the probability that player $2$ plays $L$ at stage $n$. If there exists a stage $N$ such that $\beta_{N} \geq1/2$, then playing $B$ until $N-1$ and $T$ at stage $N$ yields a payoff of at least $1/2$ to player $1$. If $\beta_n\leq1/2$ for every $n$, then the expected stage payoff is at least $1/2$ when player $1$ plays $B$. Therefore, the expected $\mathit{limsup}$ payoff is at least $1/2$. \end{pf} \end{example} \section{Symmetric repeated game with Borel evaluation} \label{secsymmetric} Contrary to the $\sup$ evaluation, in general the existence of the value for a given evaluation depends on the signaling structure. In Section~\ref{model}, we analyzed three games without a $\operatorname{limsup}$-mean value. In this section, we prove that if the signaling structure is symmetric, as defined next, the value exists for every Borel evaluation. \subsection{Model and results} \begin{definition} A \textit{symmetric signaling repeated game form} is a repeated game form with signals $\Gamma=(X,I,J,C,D, \pi, q)$ such that there exists a set $S$ with $C=D=I \times J \times S$ satisfying \[ \forall(x,i,j)\in X\times I \times J, \qquad \sum_{s,x'} q(x,i,j) \bigl(x',(i,j,s),(i,j,s)\bigr)=1 \] and the initial distribution $\pi$ is also symmetric: $\pi(x,c,d)>0$ implies $c=d$. \end{definition} Intuitively, at each stage of a symmetric signaling repeated game form, the players observe both actions played and a public signal $s$.
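In particular, standard stochastic games fit this framework (this reformulation is ours, for illustration): take $S=X$ and let the public signal announce the new state, that is,

```latex
q(x,i,j)\bigl(x',(i,j,x'),(i,j,x')\bigr) = q^{\mathrm{st}}(x,i,j)(x'),
```

where $q^{\mathrm{st}}$ denotes the transition of the given standard stochastic game. Both players then observe, at every stage, the actions just played and the current state.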
It will be convenient to write such a game form as a tuple $\Gamma =(X,I,J,S, \pi, q)$ and since for such a game: $q(x,i,j)(x',(i',j',s'),(i'',j'',s''))>0$ only if $i=i'=i''$ and $j=j'=j''$ and $s'=s''$, without loss of generality, we can and will write $q(x,i,j)(x',s)$ as a shorthand for $q(x,i,j)(x',(i,j,s),(i,j,s))$. With this notation $q(x,i,j)$ and the initial distribution $\pi$ are elements of $\Delta_f(X\times S)$. The set of observed plays is then $V_\infty=(S\times I \times J)^\infty$. \begin{theorem}\label{theo3} Let $\Gamma$ be a symmetric signaling repeated game form. For every Borel evaluation $f$, the game $\Gamma$ has a value. \end{theorem} \begin{corollary}\label{corotheo3} A symmetric signaling repeated game $(\Gamma,g)$ has a limsup value and a limsup-mean value. \end{corollary} \subsection{Proof of Theorem~\texorpdfstring{\protect\ref{theo3}}{8}} Let us first give an outline of the proof. Given a symmetric signaling repeated game form $\Gamma$ and a Borel evaluation $f$, we construct an auxiliary standard stochastic game $\widehat{\Gamma}$ (where the players observe the state and the actions) and a Borel evaluation $\widehat{f}$ on the corresponding set of plays.\vspace*{1.5pt} We use the existence of the value in the game $\widehat{\Gamma}$ with respect to the evaluation $\widehat{f}$ to deduce the existence of the value in the original game. The difficult point is the definition of the evaluation $\widehat{f}$. The key idea is to define a conditional probability with respect to the $\sigma$-algebra of observed plays. For a given probability on plays, the existence of such conditional probability is easy since the sets involved are Polish. In our case, the difficulty comes from the necessity to have the same conditional probability for any of the probability distributions that could be generated by the strategies of the players (Sections~\ref{secfinite}--\ref{sectrzy}). 
(As remarked by a referee, the observed plays generate in fact a sufficient statistic for the plays with respect to all these distributions.) The definition of the conditional probability is achieved in three steps: we first define the conditional probability of a finite history with respect to a finite observed history, then we use a martingale result to define the conditional probability of a finite history with respect to an observed play, and finally we rely on the Kolmogorov extension theorem to construct a conditional probability on plays. We then introduce the function $\widehat{f}$ on the observed plays as the integral of $f$ with respect to this conditional probability. After introducing a few notations, we prove the existence of the value by defining the game $\widehat{\Gamma}$, assuming the existence of the function $\widehat{f}$ (Lemma~\ref{transfer}). The next three sections will be dedicated to the construction of the conditional probability, then to the definition and properties of the function $\widehat{f}$ for any Borelian payoff function~$f$. Let $\Gamma$ be a symmetric signaling repeated game form; we do not assume the Borel evaluation to be given. \subsection{Notation} Let $H_n=(X\times S \times I \times J)^{n-1}\times X \times S$, $H = \bigcup_{n\geq1} H_n$, the set of histories and $H_\infty= (X\times S \times I \times J)^\infty$, the set of plays. For all $h\in H_\infty$, define $ h |_{n} \in H_n$ as the projection of $h$ on the $n$ first stages. For all $h_n\in H_n$, denote by $h_n^+$ the cylinder generated by $h_n$ in $H_\infty$: $h_n^+=\{h \in H_\infty, h |_{n}=h_n \}$ and by ${\mathcal H}_n$ the corresponding $\sigma$-algebra. ${\mathcal H}_\infty$ denotes the $\sigma$-algebra generated by $\bigcup_n {\mathcal H}_n$. Let $V_n=(S \times I \times J)^{n-1}\times S = H^1_n = H^2_n $, $V = \bigcup_{n\geq1} V_n$ and $V_\infty=(S\times I \times J)^\infty$.
For all $v\in V_\infty$, define $ v |_{n} \in V_n$ as the projection of $v$ on the $n$ first stages. For all $v_n\in V_n$, denote by $v_n^+$ the cylinder generated by $v_n$ in $V_\infty$: $v_n^+=\{v \in V_\infty, v |_{n}=v_n \}$ and by ${\mathcal V}_n$ the corresponding $\sigma$-algebra. ${\mathcal V}_\infty$ is the $\sigma$-algebra generated by $\bigcup_n {\mathcal V}_n$. We denote by $\Theta$ the application from $H_\infty$ to $V_\infty$ which forgets all the states: more precisely, $\Theta( x_1, s_1, i_1, j_1,\ldots, x_n, s_n, i_n, j_n, \ldots) = ( s_1, i_1, j_1, \ldots,\break s_n, i_n, j_n,\ldots)$. We use the same notation for the corresponding application defined from $H$ to $V$. We denote by ${\mathcal V}^*_n$ (resp., ${\mathcal V}^*_\infty$) the image of ${\mathcal V}_n$ (resp., ${\mathcal V}_\infty$) by $\Theta^{-1}$ which are sub $\sigma$-algebras of ${\mathcal H}_n$ (resp., ${\mathcal H}_\infty$). Explicitly, for $v_n\in V_n$, $v_n^*$ denotes the cylinder generated by $v_n$ in $H_\infty$: $v_n^*=\{h\in H_\infty, \Theta(h) |_{n}=v_n \}$, ${\mathcal V}^*_n$ are the corresponding $\sigma$-algebras and ${\mathcal V}^*_\infty$ the $\sigma$-algebra generated by their union. Any ${\mathcal V}_n$ (resp., ${\mathcal V}_\infty$)-measurable function $\ell$ on $V_\infty$ induces a ${\mathcal V}^*_n$ (resp., ${\mathcal V}^*_\infty$)-measurable function $\ell\circ\Theta$ on $H_\infty$. Define $\alpha$ from $H$ to $[0,1]$ where for $h_n = ( x_1, s_1, i_1, j_1,\ldots, x_n, s_n)$: \[ \alpha( h_n) =\pi(x_1,s_1) \prod_{t=1}^{n-1} q(x_t,i_t,j_t) (x_{t+1},s_{t+1}) \] and $\beta$ from $V$ to $[0,1]$ where for $v_n = (s_1, i_1, j_1,\ldots, s_n)$: \[ \beta(v_n)= \sum_{ h_n \in H_n ; \Theta(h_n) = v_n } \alpha(h_n). \] Let ${\overline H}_n = \{ h_n \in H_n$; $ \alpha( h_{n}) >0 \}$ and ${\overline V}_n = \Theta({\overline H}_n)$ and recall that these sets are finite. 
We now introduce the set of plays and observed plays that can occur during the game as ${\overline H}_\infty= \bigcap_n \overline{H}_n^+$ and ${\overline V}_\infty= \Theta ({\overline H}_\infty) = \bigcap_n \overline{V}_n^+$. Note that both are measurable subsets of $H_\infty$ and $V_\infty$, respectively. For every pair of strategies $(\sigma,\tau)$, we denote by $\P_{\sigma,\tau}$ the probability distribution induced over the set of plays $(H_\infty, {\mathcal H}_\infty)$ and by $\mathbb{Q}_{\sigma,\tau}$ the probability distribution over the set of observed plays $(V_\infty, {\mathcal V}_\infty)$. Thus, $\mathbb{Q}_{\sigma,\tau}$ is the image of $\P_{\sigma,\tau}$ under~$\Theta$. Note that $\operatorname{supp}( \P_{\sigma,\tau} ) \subset{\overline H}_\infty$. We denote, respectively, by $\mathbb{E}_{\P_{\sigma,\tau}}$ and $\mathbb{E}_{\mathbb{Q}_{\sigma,\tau}}$ the corresponding expectations. It turns out that for technical reasons it is much more convenient to work with the space $\overline{V}_\infty$ rather than with $V_\infty$ (and with $\overline{H}_\infty$ rather than with $H_\infty$). Then, slightly abusing notation, $\mathcal{V}_\infty$ and $\mathcal{V}_n$ will tacitly denote the restrictions to $\overline{V}_\infty$ of the corresponding $\sigma$-algebras defined on $V_\infty$. On rare occasions this can lead to confusion, and then we will write, for example, $\overline{\mathcal{V}}_n$ to denote the $\sigma$-algebra $\{ U\cap\overline{V}_\infty\vert U\in \mathcal{V}_n \}$, the restriction of $\mathcal{V}_n$ to $\overline{V}_\infty$. \subsubsection{Definition of an equivalent game}\label{secconclusion} Let us define an auxiliary stochastic game $\widehat{\Gamma}$. The sets of actions $I$ and $J$ are the same as in $\Gamma$.
The set of states is $V=\bigcup_{n\geq1} V_n$ and the transition $\widehat{q}$ from $V\times I \times J$ to $\Delta(V)$ is given by \[ \forall v_n\in V_n, \forall i\in I, \forall j\in J, \qquad \widehat{q}(v_n,i,j)=\sum_{s \in S} \psi(v_n, i, j, s)\delta_{v_n, i, j, s}, \] where $\psi(v_n, i, j, s)= \frac{\beta(v_n, i, j, s)}{\beta(v_n)}$. Note that if $v_n\in V_n$, then the support of $\widehat{q}(v_n,i,j)$ is included in $V_{n+1}$, and in particular is finite. Moreover, if $\widehat{q}(v_n,i,j)(v_{n+1})>0$, then $v_{n+1}|_n=v_n$. The initial distribution of $\widehat{\Gamma}$ is the marginal distribution $\pi^S$ of $\pi$ on $S$: if $s\in S= V_1$, then $\pi^S(s)=\sum_{x\in X}\pi(x,s)$, and $\pi^S(v)=0$ for $v\in V\setminus V_1$. Let us note that the original game $\Gamma$ and the auxiliary game $\widehat{\Gamma}$ have the same sets of strategies. Indeed, a behavioral strategy in $\Gamma$ is a mapping from $V$ to probability distributions over actions. Thus, each behavioral strategy in $\Gamma$ is a stationary strategy in $\widehat{\Gamma}$. On the other hand, each state of $\widehat{\Gamma}$ ``contains'' all previously visited states and all played actions; thus, for all useful purposes, behavioral strategies and stationary strategies coincide in $\widehat{\Gamma}$. Now suppose that $(v_1, i_1, j_1, v_2,i_2, j_2, \ldots)$ is a play in $\widehat{\Gamma}$. Then $v_{n+1}|_n=v_n$ for all $n$ and there exists $v\in V_\infty$ such that $v|_n=v_n$ for all $n$. Thus, defining a payoff on infinite histories in $\widehat{\Gamma}$ amounts to defining a payoff on $V_\infty$. \begin{lemma}\label{transfer} Given a Borel function $f$ on $H_\infty$, there exists a Borel function $\widehat{f}$ on $V_\infty$ such that \begin{equation} \label{eqfinal} \mathbb{E}_{\P_{\sigma,\tau}}(f)=\mathbb{E}_{\mathbb{Q}_{\sigma,\tau}} ( \widehat{f} ).
\end{equation} \end{lemma} Therefore, playing in $\Gamma$ with strategies $(\sigma,\tau)$ and payoff $f$ is the same as playing in $\widehat{\Gamma}$ with the same strategies and payoff $\widehat{f}$. By Martin~\cite{Martin98} or Maitra and Sudderth~\cite{maitra2003}, the stochastic game $\widehat{\Gamma}$ with payoff $\widehat{f}$ has a value, implying that $\Gamma$ with payoff $f$ has the same value, which completes the proof of Theorem~\ref{theo3}. The next three sections are dedicated to the proof of Lemma~\ref{transfer}. \subsubsection{Regular conditional probability of finite time events with respect to finite observed histories}\label{secfinite} For $m\geq n \geq1$, we define $\Phi_{n,m}$ from $H_{\infty} \times{\overline V}_\infty$ to $[0, 1 ]$ by \[ \Phi_{n,m} ( h, v) = \cases{ \displaystyle \frac{ \sum_{h', h'|_n = h|_n, \Theta(h'|_m )= v |_m}\alpha( h' |_{m})}{ \beta( v |_{m}) },& \quad\mbox{if }$\Theta( h |_{n})= v |_{n}$,\vspace*{3pt} \cr 0,& \quad\mbox{otherwise}. } \] This is the probability, common to both players, that the history up to stage $n$ is $h|_n$, given the observed history $v|_m$ up to stage $m$. Since $\Phi_{n,m} ( h, v)$ depends only on $h|_n$ and $v|_m$, we can see $\Phi_{n,m}$ as a function defined on $H_n \times\overline{V}_m$, and note that its support is included in $\overline{H}_n \times\overline{V}_m$. On the other hand, since each set $U\in\mathcal{H}_n$ is a finite union of cylinders $h_n^+$ for $h_n\in H_n$ such that $h_n^+\subset U$, $\Phi_{n,m}$ can be seen as a mapping from $\mathcal{H}_n\times\overline{V}_\infty$ into $[0,1]$, where $\Phi_{n,m}(U,v)=\sum_{h_n, h_n^+\subseteq U}\Phi_{n,m}(h_n,v)$. Bearing this last observation in mind, we have the following. \begin{lemma}\label{lemkernel} For every $m\geq n\geq1$, $\Phi_{n,m}$ is a probability kernel from $(\overline{V}_\infty, {\mathcal V}_m)$ to $(H_\infty, {\mathcal H}_n)$.
\end{lemma} \begin{pf} Since $\sum_{h_n\in H_n}\Phi_{n,m}(h_n,v)=1$ for $v\in\overline{V}_\infty$, $ \Phi_{n,m} ( \cdot,v)$ defines a probability on $\mathcal{H}_n$. Moreover, for any $U\in\mathcal{H}_n$, $\Phi_{n,m}( U, v)$ is a function of the $m$ first components of $v$, hence is ${\mathcal V}_m$-measurable. \end{pf} \begin{lemma}\label{link} Let $m\geq n \geq1$ and $(\sigma,\tau)$ be a pair of strategies. Then, for every $v_m\in\overline{V}_m$ such that $\mathbb{Q}_{\sigma,\tau}(v_m^+)=\P_{\sigma,\tau}(v_m^*)>0$, and every $h_n\in H_n$: \[ \P_{\sigma,\tau}\bigl(h_n^+|v_m^*\bigr)= \Phi_{n,m}(h_n,v_m). \] \end{lemma} \begin{pf} Let $v_m = ( s_1, i_1, j_1, \ldots, s_m)$ and $h_n \in H_n$, \begin{eqnarray*} && \P_{\sigma,\tau}\bigl(h_n^+|v_m^*\bigr)\\ &&\qquad = \frac{\P_{\sigma,\tau}(h_n^+ \cap v_m^*)}{ \P_{\sigma,\tau}(v_m^*) } \\ &&\qquad= \cases{ \displaystyle\frac{ \sum_{h', h'|_n = h_n, \Theta( h'|_m) = v_m} \alpha( h' |_m ) W(i_1,j_1,\ldots,j_{m-1})}{\beta(v_m) W(i_1,j_1,\ldots,j_{m-1})}, & \quad$\mbox{if } \Theta(h_{n})= v_m |_{n}$,\vspace*{3pt} \cr 0,& \quad$\mbox{otherwise}$,} \end{eqnarray*} where $W(i_1,j_1,\ldots,j_{m-1})=\prod_{t\leq m-1}\sigma(v_m|_t)(i_t)\tau(v_m|_t)(j_t)$. After simplification, we recognize on the right the definition of $\Phi_{n,m}(h_n,v_m)$. \end{pf} We deduce the following lemma. \begin{lemma}\label{phi-m-n} For every pair of strategies $(\sigma,\tau)$, each $W\in\overline{\mathcal{V}}_m$ and $U\in\mathcal{H}_n$ we have \begin{equation} \label{eqmn} \P_{\sigma,\tau}\bigl(U \cap\Theta^{-1}(W)\bigr)=\int_W \Phi_{n,m}(U, v)\mathbb{Q}_{\sigma,\tau}(dv) . \end{equation} \end{lemma} \begin{pf} Clearly, it suffices to prove (\ref{eqmn}) for cylinders $U=h_n^+$ and $W=v_m^+$ with $\beta(v_m)>0$.
We have \begin{eqnarray*} \int_{v_m^+} \Phi_{n,m}(h_n,v) \mathbb{Q}_{\sigma,\tau}(dv) &=& \Phi_{n,m}(h_n,v_m) \mathbb{Q}_{\sigma,\tau}\bigl(v_m^+\bigr) \\ &=& \P_{\sigma,\tau}\bigl(h_n^+ | v_m^*\bigr) \mathbb{Q}_{\sigma,\tau}\bigl(v_m^+\bigr) \\ &=& \P_{\sigma,\tau}\bigl(h_n^+ | v_m^*\bigr) \P_{\sigma,\tau}\bigl(v_m^*\bigr) \\ &=& \P_{\sigma,\tau}\bigl(h_n^+ \cap v_m^*\bigr). \end{eqnarray*} \upqed \end{pf} Note that (\ref{eqmn}) can be equivalently written as: for every pair of strategies $(\sigma,\tau)$, each $W^*\in\overline{\mathcal{V}}^*_m$ and $U\in\mathcal{H}_n$ \begin{equation} \label{eqmn^*} \P_{\sigma,\tau}\bigl(U \cap W^*\bigr)=\int_{W^*} \Phi_{n,m}\bigl(U, \Theta(h)\bigr)\P_{\sigma,\tau}(dh). \end{equation} \subsubsection{Regular conditional probability of finite time events with respect to infinite observed histories} \label{secdwa} In this paragraph, we prove that instead of defining one mapping $\Phi_{n,m}$ for every pair $(m,n) $ such that $m\geq n\geq1$, one can define a unique probability kernel $\Phi_n$ from $(\Omega_n, {\mathcal V}_\infty)$ to $(H_\infty,{\mathcal H}_n)$, with $\mathbb{Q}_{\sigma,\tau}(\Omega_n)=1$ for all $(\sigma, \tau)$, such that the extension of Lemma~\ref{phi-m-n} holds. For $h\in H_\infty$, let \[ \Omega_{h} = \bigl\{ v\in\overline{V}_\infty\vert\mbox{$\Phi_{n,m}(h,v)$ converges as $m\uparrow\infty$} \bigr\}. \] The domain $\Omega_{h}$ is measurable (see Kallenberg~\cite{Kallenberg97}, page~6, e.g.). Note that $\Omega_{h}$ depends only on $h|_n$; we also write $\Omega_{h|_n}$ for $ \Omega_{h}$. Let then \[ \Omega_n = \bigcap_{h_n\in H_n} \Omega_{h_n}. \] We define $\Phi_n \dvtx H_\infty\times\overline{V}_\infty\to[0,1]$ by $\Phi_n = \lim_{m\rightarrow\infty} \Phi_{n,m}$ on $ H_\infty \times \Omega_n $ and $0$ otherwise. As a limit of a sequence of measurable mappings, $\Phi_n$ is measurable (see Kallenberg~\cite{Kallenberg97}, page~6, e.g.).
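In this finite setting, the disintegration identity of Lemma~\ref{phi-m-n} is nothing but Bayes conditioning. The following toy sketch (hypothetical names and a four-element history space, not taken from the paper) checks the identity on a small example:

```python
# Toy check of the disintegration identity of Lemma "phi-m-n":
# P(U ∩ Θ^{-1}(W)) = Σ_{v∈W} Φ(U, v) β(v), with Φ(h|v) = α(h)/β(v).
from fractions import Fraction

H = ["aa", "ab", "ba", "bb"]                          # hidden histories
alpha = {h: Fraction(1, 4) for h in H}                # joint law of the play
theta = {"aa": "x", "ab": "x", "ba": "y", "bb": "y"}  # observation map

beta = {}                                             # induced law on observations
for h in H:
    beta[theta[h]] = beta.get(theta[h], Fraction(0)) + alpha[h]

def Phi(h, v):
    """Bayes kernel: conditional probability of h given the observation v."""
    return alpha[h] / beta[v] if theta[h] == v else Fraction(0)

U, W = {"aa", "ba"}, {"x"}
lhs = sum(alpha[h] for h in U if theta[h] in W)
rhs = sum(sum(Phi(h, v) for h in U) * beta[v] for v in W)
assert lhs == rhs == Fraction(1, 4)
```

The subtlety in the paper is that the kernel must be common to all strategy pairs; in this toy that corresponds to $\Phi$ being computed from $\alpha$ and $\beta$ alone, mirroring the cancellation of the factor $W(i_1,\ldots,j_{m-1})$ in the proof of Lemma~\ref{link}.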
\begin{lemma}\label{phi-n} \textup{(i)} For each pair of strategies $(\sigma,\tau)$, $\mathbb{Q}_{\sigma,\tau}(\Omega_n)=1$. \textup{(ii)} For each $v\in\Omega_n$, $\sum_{h_n\in H_n} \Phi_n(h_n, v)=1$. \textup{(iii)} For each $U\in\mathcal{H}_n$ the mapping $v \mapsto\Phi_n(U, v)$ is a measurable mapping from $(\overline{V}_\infty,\mathcal{V}_\infty )$ to $\mathbb{R}$. \textup{(iv)} For each pair of strategies $(\sigma,\tau)$, for each $U\in\mathcal{H}_n$ and each $W\in\mathcal{V}_\infty$ \begin{equation} \label{eqn} \P_{\sigma,\tau}\bigl(U\cap\Theta^{-1}(W)\bigr)=\int_W \Phi_n(U,v) \mathbb{Q}_{\sigma,\tau}(dv). \end{equation} \end{lemma} \begin{pf} (i) For $h_n\in H_n$ and each pair of strategies $\sigma,\tau$ we define on $H_\infty$ a sequence of random variables $Z_{h_n,m}$, $m\geq n$, \[ Z_{h_n,m} = \P_{\sigma,\tau} \bigl[ h_n^+ | \mathcal{V}_m^* \bigr]. \] As a conditional expectation of a bounded random variable with respect to an increasing sequence of $\sigma$-algebras, $Z_{h_n,m}$ is a martingale (with respect to $\P_{\sigma,\tau}$), hence converges $\P_{\sigma,\tau}$-almost surely and in $L^1$ to the random variable $Z_{h_n}=\P_{\sigma,\tau}[ h_n^+ | \mathcal{V}_\infty^*]$. For $m\geq n$, we define the mappings $\psi_{n,m}[h_n] \dvtx \overline{H}_\infty\to[0,1]$, \[ \psi_{n,m}[h_n](h)= \Phi_{n,m} \bigl(h_n,\Theta(h)\bigr). \] Let us show that for each $h_n\in H_n$, $\psi_{n,m}[h_n]$ is a version of the conditional expectation $\mathbb{E}_{\P_{\sigma,\tau}}[\mathbh{1}_{h_n^+}| \mathcal{V}_m^*] = \P_{\sigma,\tau} [ h_n^+ | \mathcal{V}_m^* ]$. First note that $\psi_{n,m}[h_n]$ is $(H_\infty,\mathcal{V}_m^*)$ measurable. Lemma~\ref{link} implies that, for $h\in\operatorname{supp}( \P_{\sigma,\tau} ) \subset\overline{H}_\infty$, $\psi_{n,m}[h_n](h)=\Phi_{n,m}(h_n,\Theta(h))=\P_{\sigma,\tau}(h_n^+ | v|_m^*)=\P_{\sigma,\tau}(h_n^+ | \mathcal{V}_m^* )(h)$, where $v=\Theta(h)$. Hence the claim.
Since $\psi_{n,m}[h_n]$ is a version of $\P_{\sigma,\tau}(h_n^+ | \mathcal{V}_m^* )$, its limit $\psi_n[h_n]$ exists and is a version of $\P_{\sigma,\tau}(h_n^+ | \mathcal{V}_\infty^* )$, $\P_{\sigma,\tau}$-almost surely. In particular, \begin{enumerate}[(C1)] \item[(C1)] the set $\Theta^{-1}(\Omega_{h_n}) = \{ h\in H_\infty\vert \mbox{$\lim_m \psi_{n,m}[h_n](h)$ exists} \}$ is $\mathcal{V}_\infty^*$ measurable and has $\P_{\sigma,\tau}$-measure $1$, \item[(C2)] for each $W^*\in\mathcal{V}_\infty^*$, $\int_{W^*} \psi_n[h_n](h) \P_{\sigma,\tau}(dh)= \int_{W^*} \mathbb{E}[\mathbh{1}_{h_n^+} | \mathcal{V}_\infty^*] \P_{\sigma,\tau}(dh)= \P_{\sigma,\tau}(W^*\cap h_n^+)$. \end{enumerate} Note that (C1) implies that $\mathbb{Q}_{\sigma,\tau}(\Omega_n)=1$. (ii) If $v\in\Omega_n$ then, for all $h_n\in H_n$, $\Phi_{n,m}(h_n,v)$ converges to $\Phi_n(h_n, v)$. But, by Lemma~\ref{lemkernel}, $\sum_{h_n\in H_n} \Phi_{n,m}(h_n,v)=1$. Since the sum has finitely many nonzero terms, $\sum_{h_n\in H_n} \Phi_n(h_n, v)=1$. (iii) This was proved before the lemma. (iv) Since $\int_W \Phi_n(h_n,v) \mathbb{Q}_{\sigma,\tau}(dv) = \int_{\Theta^{-1}(W)} \psi_n[h_n](h) \P_{\sigma,\tau}(dh)$ for $W \in \mathcal{V}_\infty$, using (C2) we get \[ \P_{\sigma,\tau}\bigl(h_n^+\cap\Theta^{-1}(W)\bigr)=\int_W \Phi_n(h_n,v) \mathbb{Q}_{\sigma,\tau}(dv) \] for every $W\in\mathcal{V}_\infty$; the case of a general $U\in\mathcal{H}_n$ follows by additivity. \end{pf} \subsubsection{Regular conditional probability of infinite time events with respect to infinite observed histories} \label{sectrzy} In this section, using the Kolmogorov extension theorem, we construct from the sequence $\Phi_n$ of probability kernels from $(\Omega_n, {\mathcal V}_\infty)$ to $(H_\infty,{\mathcal H}_n)$ one probability kernel $\Phi$ from $(\Omega_\infty, {\mathcal V}_\infty )$ to $(H_\infty,{\mathcal H}_\infty)$, with $\mathbb{Q}_{\sigma,\tau}(\Omega_\infty ) =1$ for all $(\sigma, \tau)$.
\begin{lemma}\label{phi} There exists a measurable subset $\Omega_\infty$ of $V_\infty$ such that, for all strategies $\sigma,\tau$: \begin{itemize} \item $\mathbb{Q}_{\sigma,\tau}(\Omega_\infty)=1$ and \item there exists a probability kernel $\Phi$ from $(\Omega_\infty, {\mathcal V}_\infty)$ to $(H_\infty,{\mathcal H}_\infty)$ such that for each $W\in\mathcal{V}_\infty$ and $U\in\mathcal{H}_\infty$ \begin{equation} \label{eqfinall} \P_{\sigma,\tau}\bigl(U\cap\Theta^{-1}(W)\bigr)=\int_W \Phi(U,v) \mathbb{Q}_{\sigma,\tau}(dv). \end{equation} \end{itemize} \end{lemma} Before proceeding to the proof, some remarks are in order. A probability kernel having the property given above is called a regular conditional probability. For given strategies $\sigma$ and $\tau$, the existence of a transition kernel $\kappa_{\sigma,\tau}$ from $(V_\infty, {\mathcal V}_\infty)$ to $(H_\infty,{\mathcal H}_\infty)$ such that for each $U\in\mathcal{V}_\infty$ and $A\in\mathcal{H}_\infty$ \[ \P_{\sigma,\tau}\bigl(A\cap\Theta^{-1}(U)\bigr)=\int_U \kappa_{\sigma,\tau}(A,v) \mathbb{Q}_{\sigma,\tau}(dv) \] is well known, provided that $V_\infty$ is a Polish space and $\mathcal{V}_\infty$ is the Borel $\sigma$-algebra. In the current framework, it is easy to introduce an appropriate metric on $V_\infty$ such that this condition is satisfied, so the existence of $\kappa_{\sigma,\tau}$ is immediately assured. The difficulty in our case comes from the fact that we look for a regular conditional probability which is \textit{common for all probabilities} $\P_{\sigma,\tau}$, where $(\sigma,\tau)$ range over all strategies of both players. \begin{pf*}{Proof of Lemma~\protect\ref{phi}} We follow the notation of the proof of Lemma~\ref{phi-n} and define $\Omega_\infty=\bigcap_{n\geq1} \Omega_n$. Let $(\sigma,\tau)$ be a pair of strategies. For every \mbox{$n\geq1$}, $\mathbb{Q}_{\sigma,\tau}(\Omega_n)=1$, hence $\mathbb{Q}_{\sigma,\tau}(\Omega_\infty)=1$.
By Lemma~\ref{phi-n}(ii), given $v\in\Omega_\infty$, the sequence $\{\Phi_n(\cdot, v) \}_{n\geq1}$ of probabilities on $\{(H_\infty, {\mathcal H}_n)\}_{n\geq1}$ is well defined. Let us show that this sequence satisfies the condition of Kolmogorov's extension theorem. In fact, $\Phi_{n,m}(\cdot, v)$ is defined on the power set of $H_{n}$ by \[ \forall A \subset H_{n}, \qquad \Phi_{n,m}(A, v)=\sum_{h_n\in A} \Phi_{n,m}(h_n, v). \] Thus, for every $h_n \in H_n$, we have \begin{eqnarray*} \Phi_{n,m}(h_n, v)&=&\frac{\P_{\sigma,\tau}(v|_m^* \cap h_n^+)}{\P_{\sigma,\tau}(v|_m^*)} \\ &=& \frac{\P_{\sigma,\tau}(v|_m^* \cap(h_n\times I \times J \times X\times S)^+)}{\P_{\sigma,\tau}(v|_m^*)} \\ &=&\Phi_{n+1,m}\bigl(h_n \times(I \times J \times X\times S), v\bigr). \end{eqnarray*} Taking the limit, we obtain the same equality for $\Phi_n$ and $\Phi_{n+1}$, hence the compatibility condition. By the Kolmogorov extension theorem, for each $v\in\Omega_\infty$, there exists a measure $\Phi(\cdot, v)$ on $(H_\infty,\mathcal{H}_\infty)$ such that \[ \Phi\bigl(h_n^+, v\bigr)= \Phi_n\bigl(h_n^+, v\bigr) \] for each $n$ and each $h_n\in H_n$. Let us prove that, for each $U\in\mathcal{H}_\infty$, the mapping $v\mapsto\Phi(U,v)$ is $\mathcal{V}_\infty$-measurable on $\Omega_\infty$. Let $\mathcal{C}$ be the class of sets $A \in\mathcal{H}_\infty$ such that $\Phi(A,\cdot)$ has this property. By Lemma~\ref{phi-n}, $\mathcal{C}$ contains the $\pi$-system consisting of cylinders generating $\mathcal{H}_\infty$. To show that $\mathcal{H}_\infty\subseteq\mathcal{C}$ it suffices to show that $\mathcal{C}$ is a $\lambda$-system. Let $(A_n)$ be an increasing sequence of sets belonging to $\mathcal{C}$. Since, for each $v\in\overline{V}_\infty$, $\Phi(\cdot,v)$ is a measure, we have $\Phi(\bigcup_n A_n,v)=\sup_n \Phi(A_n,v)$. However, $v\mapsto\sup_n \Phi(A_n,v)$ is measurable as a supremum of the measurable mappings $v\mapsto\Phi(A_n,v)$. Let $A\supset B$ be two sets belonging to $\mathcal{C}$.
Then $\Phi(A\setminus B,v) + \Phi(B,v)=\Phi(A,v)$ by additivity of measure, and $v\mapsto\Phi(A\setminus B,v)= \Phi(A,v) - \Phi(B,v)$ is measurable as a difference of measurable mappings. To prove (\ref{eqfinall}), take a measurable subset $W$ of $\overline{V}_\infty$ and consider the set function \[ \mathcal{H}_\infty\ni U \mapsto\int_W \Phi(U,v) \mathbb{Q}_{\sigma,\tau}(dv). \] Since $\Phi(\cdot,v)$ is nonnegative, this set function is a measure on $(H_\infty,\mathcal{H}_\infty)$. However, by Lemma~\ref{phi-n}, this mapping is equal to $U \mapsto\P_{\sigma,\tau}( U \cap\Theta^{-1}(W))$ for $U$ belonging to the $\pi$-system of cylinders generating $\mathcal{H}_\infty$. But two measures equal on a generating $\pi$-system are equal, which completes the proof of (\ref{eqfinall}). \end{pf*} A standard property of probability kernels and the fact that $\Omega_\infty$ has measure $1$ imply: \begin{corollary}\label{cortransfer} Let $f : H_\infty\to[0,1]$ be a $\mathcal{H}_\infty$-measurable mapping. Then the mapping $\widehat{f} : V_\infty\to[0,1]$ defined by \[ \widehat{f}(v) = \cases{ \displaystyle\int_{H_\infty} f(h)\Phi(dh, v), & \quad\mbox{if $v\in\Omega_\infty$},\vspace*{3pt} \cr 0, & \quad$\mbox{otherwise}$,} \] is $\mathcal{V}_\infty$-measurable and \[ \mathbb{E}_{\P_{\sigma,\tau}}[ f ] = \mathbb{E}_{\mathbb{Q}_{\sigma,\tau}}[ \widehat{f} ]\qquad \forall\sigma, \tau. \] \end{corollary} \begin{remark} In the previous proof, we proceeded through a reduction from a symmetric repeated game to a stochastic game in order to apply Martin's existence result. The same procedure can be applied to $N$-player repeated games. Consider an $N$-player symmetric signaling repeated game. One defines a conditional probability as above and thereby associates with each Borel payoff $f^i$ on plays, $i \in N$, a Borel evaluation $\widehat{f}^i$ on the space of observed plays, thus reducing the problem to an $N$-player stochastic game with Borel payoffs.
For example, Mertens \cite{Mertens86} showed the existence of pure $\varepsilon$-Nash equilibria in $N$-person stochastic games with Borel payoff functions where at each stage at most one of the players is playing. Using the previous reduction, one can deduce the existence of pure $\varepsilon$-Nash equilibria in $N$-person symmetric repeated games with Borel payoff functions where at each stage at most one of the players is playing. \end{remark} \section{Uniform value in recursive games with nonnegative payoffs}\label{uniform} In Section~\ref{model} and Section~\ref{secsymmetric}, we focused on Borel evaluations. In this last section, we focus on the family of mean averages of the $n$ first stage rewards and the corresponding uniform value. \begin{definition} For each $n \geq1$, the \textit{mean expected payoff} induced by $(\sigma,\tau)$ during the first $n$ stages is \[ \gamma_n (\sigma, \tau)=\mathbb{E}_{ \sigma, \tau} \Biggl( \frac{1}{n}\sum_{t=1}^n g (x_t,i_t,j_t) \Biggr). \] \end{definition} \begin{definition}\label{stocuni1} \label{stocuni} Let $v$ be a real number. A strategy $\sigma^*$ of player $1$ \textit{guarantees} $v$ \textit{in the uniform sense} in $(\Gamma, g)$ if for all $\eta>0$ there exists $n_0 \geq1$ such that \begin{equation} \forall n\geq n_0, \forall\tau\in\mathcal{T},\qquad \gamma_n\bigl(\sigma^*,\tau\bigr) \geq v-\eta. \end{equation} Player $1$ can \textit{guarantee} $v$ \textit{in the uniform sense} in $(\Gamma, g)$ if for all $\varepsilon>0$ there exists a strategy $\sigma^*\in\Sigma$ which guarantees $v-\varepsilon$ in the uniform sense. A symmetric notion holds for player $2$. \end{definition} \begin{definition} The \textit{uniform $\max\min$}, denoted by $\underline{v}_\infty$, is the supremum of all the payoffs that player $1$ can guarantee in the uniform sense. The \textit{uniform $\min\max$}, denoted by $\overline{v}_\infty$, is defined in a dual way.
If both players can guarantee $v$ in the uniform sense, then $v$ is the \textit{uniform value} of the game $(\Gamma, g)$, denoted by $v_\infty$. \end{definition} Many existence results concerning the uniform value and the uniform $\max\min$ and $\min\max$ have been proven in the literature; see, for example, Mertens, Sorin and Zamir \cite{Mertens94} or Sorin \cite{Sorin2002}. Mertens and Neyman \cite{Mertens81} proved that in a stochastic game with a finite state space and finite action spaces, where the players observe past payoffs and the state, the uniform value exists. Moreover, the uniform value is equal to the limsup-mean value, and for every $\varepsilon>0$ there exists a strategy which guarantees $v_\infty-\varepsilon$ both in the limsup-mean sense and in the uniform sense. In general, the uniform value does not exist (either in games with incomplete information on both sides or in stochastic games with signals on the actions); in particular, its existence depends upon the signaling structure. \begin{remark} For $n\geq1$, the \textit{$n$-stage game} $(\Gamma_n, g)$ is the zero-sum game with normal form $(\Sigma, \mathcal{T}, \gamma_n)$ and value $v_n$. It is interesting to note that in the special case of symmetric signaling repeated games with a finite set of states and a finite set of signals, a uniform value may not exist, since even the sequence of values $v_n$ may not converge (Ziliotto \cite{Ziliotto2013}), but there exists a value for any Borel evaluation by Theorem~\ref{theo3}. \end{remark} We now focus on the specific case of recursive games with nonnegative payoffs, defined as follows. \begin{definition} Recall that a state is \textit{absorbing} if the probability to stay in this state is 1 for all actions and the payoff is independent of the actions played. A repeated game is \textit{recursive} if the payoff is equal to $0$ outside the absorbing states.
If all absorbing payoffs are nonnegative, the game is \textit{recursive} and \textit{nonnegative}. \end{definition} Solan and Vieille \cite{Solan2002} have shown the existence of a uniform value in nonnegative recursive games where the players observe the state and the past actions played. We show that the result remains true without any assumption on the signals available to the players. In a recursive game, the limsup-mean evaluation and the limsup evaluation coincide. If the recursive game has nonnegative payoffs, the sup evaluation, the limsup evaluation and the limsup-mean evaluation all coincide. So, Theorem~\ref{sup} implies the existence of the value with respect to these evaluations. Using a similar proof, we obtain the following stronger theorem. \begin{theorem}\label{recursive} A recursive game with nonnegative payoffs has a uniform value~$v_\infty$, equal to the sup value and the limsup value. Moreover, there exists a strategy of player $2$ that guarantees $v_\infty$. \end{theorem} The proof of the existence of the uniform value is similar to the proof of Proposition~\ref{FF}, while using a specific sequence of strategic evaluations. \begin{pf*}{Proof of Theorem~\protect\ref{recursive}} The sequence of stage payoffs is nondecreasing on each history: $0$ until absorption occurs and then constant, equal to some nonnegative real number. In particular, the payoff converges and the $\operatorname{limsup}$ can be replaced by a limit. Let $\sigma$ be a strategy of player $1$ and $\tau$ be a strategy of player $2$; then $\gamma_n(\sigma, \tau)$ is nondecreasing in $n$. This implies that the corresponding sequence of values $(v_n)_{n\in\mathbb{N}}$ is nondecreasing in $n$. Denote $v=\sup_n v_n$ and let us show that $v$ is the uniform value. Fix $\varepsilon>0$, consider $N$ such that $v_N\geq v-\varepsilon$ and $\sigma^*$ a strategy of player 1 which is optimal in $\Gamma_N$.
For each $\tau$ and every $n\geq N$, we have \[ \gamma_n\bigl(\sigma^*, \tau\bigr) \geq\gamma_N\bigl( \sigma^*, \tau\bigr)\geq v_N \geq v-\varepsilon. \] Hence, the strategy $\sigma^*$ guarantees $v-\varepsilon$ in the uniform sense. This is true for every positive $\varepsilon$, thus player $1$ guarantees $v$ in the uniform sense. Using the monotone convergence theorem, we also have \begin{eqnarray*} \gamma^*\bigl(\sigma^*,\tau\bigr)&=&\mathbb{E}_{\sigma^*,\tau} \Biggl( \lim_n \frac{1}{n}\sum_{t=1}^n g(x_t,i_t,j_t) \Biggr) \\ &=& \lim_n \mathbb{E}_{\sigma^*,\tau} \Biggl( \frac{1}{n}\sum_{t=1}^n g(x_t,i_t,j_t) \Biggr) \\ &\geq & v-\varepsilon. \end{eqnarray*} We now show that player $2$ can also guarantee $v$ in the uniform sense. Consider, for every $n$, the set \[ K_n=\bigl\{\tau, \forall\sigma, \gamma_n(\sigma, \tau)\leq v\bigr\}. \] $K_n$ is nonempty because it contains an optimal strategy for player 2 in $\Gamma_n$ (since $v_n\leq v$). The set of strategies of player 2 is compact, hence by continuity of the $n$-stage payoff $\gamma_n$, $K_n$ is itself compact. Moreover, $\gamma_n \leq\gamma_{n+1}$ implies $K_{n+1}\subset K_n$; hence, as a nested sequence of nonempty compact sets, $\bigcap_n K_n\neq\varnothing$: there exists $\tau^*$ such that, for every strategy $\sigma$ of player $1$ and every positive integer $n$, $\gamma_n(\sigma,\tau^*) \leq v$. It follows that both players can guarantee $v$, thus $v$ is the uniform value. By the monotone convergence theorem, we also have \[ \gamma^*\bigl(\sigma,\tau^*\bigr)= \mathbb{E}_{\sigma,\tau^*} \Biggl( \lim_n \frac{1}{n}\sum_{t=1}^n g(x_t,i_t,j_t) \Biggr) = \lim_n \mathbb{E}_{\sigma,\tau^*} \Biggl( \frac{1}{n}\sum_{t=1}^n g(x_t,i_t,j_t) \Biggr) \leq v. \] Hence, $v$ is the sup and limsup value. \end{pf*} \begin{remark} The fact that the sequence of $n$-stage values $(v_n)_{n\geq1}$ is nondecreasing is not enough to ensure the existence of the uniform value.
For example, consider the Big Match \cite{Blackwell68} with no signals: $v_n=1/2$ for each $n$, but there is no uniform value. \end{remark} \begin{remark} The theorem states the existence of a $0$-optimal strategy for player 2, but player 1 may only have $\varepsilon$-optimal strategies. For example, in the following MDP, there are two absorbing states, two nonabsorbing states with payoff~$0$ and two actions $\mathit{Top}$ and $\mathit{Bottom}$: \[ \begin{tabular}{cc} $\lleft( \begin{array}{c} 1/2 (s_1) + 1/2 (s_2)\vspace*{1pt} \\ 0^*\\ \end{array} \rright)$ & $\lleft( \begin{array}{c} s_2 \\ 1^*\\ \end{array} \rright)$. \\ $s_1$ & $s_2$ \end{tabular} \] The starting state is $s_1$ and player $1$ observes nothing. A good strategy is to play $\mathit{Top}$ for a long time and then $\mathit{Bottom}$. While playing $\mathit{Bottom}$, the process absorbs, and with strictly positive probability the absorption occurs in state $s_1$ with absorbing payoff $0$. So player $1$ has no strategy which guarantees the uniform value of 1. \end{remark}
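The last remark can be illustrated numerically (a hypothetical sketch, not part of the paper): in the MDP above, the blind strategy ``play $\mathit{Top}$ for $T$ stages, then $\mathit{Bottom}$'' guarantees exactly $1-2^{-T}$, which increases towards the uniform value $1$ without ever attaining it.

```python
def guaranteed_payoff(T):
    """Expected absorbing payoff of 'Top for T stages, then Bottom',
    starting from s1 with no observations (hypothetical helper name)."""
    p_s1 = 0.5 ** T      # each Top leaves the chain in s1 with probability 1/2
    return 1.0 - p_s1    # Bottom absorbs: payoff 1 from s2, payoff 0 from s1

# The guarantee is strictly increasing in T but always below 1,
# so player 1 has epsilon-optimal strategies and no 0-optimal one.
values = [guaranteed_payoff(T) for T in (1, 5, 20)]
assert values == sorted(values) and all(v < 1.0 for v in values)
```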
\section{Proof of the main theorem}\label{app:average-hardness} In this section, we provide a detailed proof of the main theorem of the main text, which reads: \bigskip \begin{m_thm}\label{thm:main} Assuming Conjectures 1 and 2, the ability to classically sample from $p_M(\mathbf z)$ up to an additive error $\beta=1/(8e)$ for all unitary matrices in $\{\hat{U}_F\}$ implies the collapse of the polynomial hierarchy to the third level. \end{m_thm} \bigskip The proof relies on Theorems 1 and 2 and Conjectures 1 and 2 presented in the main text. \bigskip \begin{thm1}\label{thm:sharp_P} Let $\mathcal{Y}$ be the set of output probabilities $\tilde{p}_M(\bold{z})=|\langle \bold{z}|\hat{U}^M_{\rm COE}|\bold{z}_0\rangle|^2$ obtained from all possible COE matrices $\{\hat{U}_{\rm COE}\}$ and all possible output strings $\mathbf z\in\mathcal{Z}$. Approximating $\tilde{p}_M(\bold{z})$ in $\mathcal{Y}$ up to multiplicative error is $\#\mathrm{P}$-hard in the worst case. \end{thm1} \begin{thm2} The distribution of $\tilde{p}_M(\bold{z})$ in $\mathcal{Y}$ anticoncentrates with $\delta =1$ and $\gamma=1/e$, where $e$ is the base of the natural logarithm. \end{thm2} \begin{conjecture_app}[Average-case hardness]\label{conj:average-hard} For any $1/(2e)$ fraction of $\mathcal{Y}$, approximating $\tilde{p}_M(\mathbf z)$ up to multiplicative error with $\alpha= 1/4+o(1)$, where $o(\cdot)$ is little-o notation\footnote{$f(n)=o(g(n))$ means that $f(n)/g(n) \to 0$ when $n\to\infty$.}, is as hard as the hardest instance. \end{conjecture_app} \begin{conjecture_app}[Computational ETH]\label{conj:ETH} The experimentally accessible set $\{\hat{U}_F\}$ is approximately Haar random over the ensemble $\{\hat{U}_{\rm COE}\}$ in the sense that \begin{enumerate} \item The distribution of $p_M({\bf z})$ over $\hat{U}_F$ is the same as that of $\tilde{p}_M(\bf z)$ over $\hat{U}_{\rm COE}$. \item Average instances in $\{\hat{U}_F\}$ are as hard as average instances in $\{\hat{U}_{\rm COE}\}$.
\end{enumerate} \end{conjecture_app} \bigskip Let us begin by considering a classical probabilistic computer with an NP oracle, also called a $\mathrm{BPP^{NP}}$ machine. This is a theoretical object that can solve problems in the third level of the polynomial hierarchy. Stockmeyer's theorem states that a $\mathrm{BPP^{NP}}$ machine with access to a classical sampler $\mathcal{C}$, as defined in the main text, can efficiently output an approximation $\tilde{q}(\mathbf z)$ of $q(\mathbf z)$ such that \begin{align} |q(\mathbf z)-\tilde{q}(\mathbf z)| \le \frac{q(\mathbf z)}{\poly{L}}. \end{align} We emphasise that the $\mathrm{BPP^{NP}}$ machine grants us the ability to perform this approximation task, in contrast to the machine $\mathcal{C}$, which can only sample strings from a given distribution. To see how the $\mathrm{BPP^{NP}}$ machine can output a multiplicative approximation of $p_M(\mathbf z)$ for most $\mathbf z\in\mathcal{Z}$, let us consider \begin{align}\label{eq:long} |p_M(\mathbf z)&-\tilde{q}(\mathbf z)| \nonumber \\ &\le |p_M(\mathbf z)-q(\mathbf z)| + |q(\mathbf z)-\tilde{q}(\mathbf z)| \nonumber \\ &\le |p_M(\mathbf z)-q(\mathbf z)| + \frac{q(\mathbf z)}{\poly{L}} \nonumber \\ &\le |p_M(\mathbf z)-q(\mathbf z)| + \frac{|p_M(\mathbf z) - q(\mathbf z)| + p_M(\mathbf z)}{\poly{L}} \nonumber \\ &= \frac{p_M(\mathbf z)}{\poly{L}} + |p_M(\mathbf z)-q(\mathbf z)| \left(1 + \frac{1}{\poly{L}}\right). \end{align} The first and the third inequalities are obtained using the triangle inequality. To get a multiplicative approximation of $p_M(\mathbf z)$ using $\tilde{q}(\mathbf z)$, we need the term $|p_M(\mathbf z)-q(\mathbf z)|$ to be small. Given the additive error defined in Eq. (3) in the main text, this is indeed the case for a large portion of $\mathbf z\in\mathcal{Z}$. Since the left-hand side of Eq.
(3) in the main text involves summing over an exponentially large number of terms while the total error is bounded by a constant $\beta$, most of the terms in the sum must be exponentially small. This statement can be made precise using Markov's inequality. \bigskip \begin{fact}[Markov's inequality] If $X$ is a non-negative random variable and $a>0$, then the probability that $X$ is at least $a$ satisfies \begin{equation} {\rm Pr}(X\ge a) \le \frac{\mathbb{E}(X)}{a}, \end{equation} where $\mathbb{E}(X)$ is the expectation value of $X$. \end{fact} By setting $X=|p_M(\mathbf z)-q(\mathbf z)|$, we get \begin{align} \Pr_{\mathbf z} \left( |p_M(\mathbf z)-q(\mathbf z)| \ge a \right) \le \frac{\mathbb{E}_{\mathbf z}(|p_M(\mathbf z)-q(\mathbf z)|)}{a}, \end{align} where the probability and the expectation value are computed over $\mathbf z\in\mathcal{Z}$. Note that $\mathbb{E}_{\mathbf z}(|p_M(\mathbf z)-q(\mathbf z)|)\leq \beta/N$ is given by the additive error defined in Eq. (3) in the main text. By setting $a=\beta/(N\zeta)$ for some small $\zeta>0$, we get \begin{align} \Pr_{\mathbf z} \left( |p_M(\mathbf z)-q(\mathbf z)| \ge \frac{\beta}{N\zeta} \right)\le \zeta, \end{align} or equivalently \begin{align} \Pr_{\mathbf z} \left( |p_M(\mathbf z)-q(\mathbf z)| < \frac{\beta}{N\zeta} \right) > 1- \zeta. \end{align} By substituting $|p_M(\mathbf z)-q(\mathbf z)|$ from Eq. (\ref{eq:long}), we get \begin{align}\label{eq:stock1} \Pr_{\mathbf z}\left(|p_M(\mathbf z)-\tilde{q}(\mathbf z)| < \frac{p_M(\mathbf z)}{\poly{L}} + \frac{\beta}{N\zeta} \left(1 + \frac{1}{\poly{L}}\right) \right) > 1-\zeta. \end{align} Theorem 2 in the main text (the anti-concentration condition) together with Conjecture 2 imply that $\{p_M({\bf z})\}$ follows the Porter-Thomas distribution, in particular that $1/N<p_M(\mathbf z)$ for at least a $1/e$ fraction of the instances. Hence, we can rewrite Eq.
(\ref{eq:stock1}) as \begin{align}\label{eq:stock2} &\Pr_{\mathcal{Y}}\left\{|p_M(\mathbf z)-\tilde{q}(\mathbf z)|< p_M(\mathbf z)\left[ \frac{1}{\poly{L}} + \frac{\beta}{\zeta} \left(1 + \frac{1}{\poly{L}}\right) \right]\right\} >1/e-\zeta. \end{align} Here, the distribution is over all $\mathbf z\in\mathcal{Z}$ and all unitary matrices in $\{\hat{U}_F\}$. To understand the right-hand side of the equation, let $P\cap Q$ be the intersection between the set $P$ of probabilities that anticoncentrate and the set $Q$ of probabilities that satisfy Markov's inequality. Since $\Pr(P\cap Q) = \Pr(P) + \Pr(Q) - \Pr(P\cup Q) \ge \Pr(P) + \Pr(Q) -1$, $\Pr(P)=1/e$ and $\Pr(Q)=1-\zeta$, it follows that $\Pr(P\cap Q)$ is no less than $1/e + 1 - \zeta -1 =1/e-\zeta$. Following \cite{bremner_average_2016, 2018_hartmut_natphy}, we further set $\beta = 1/(8e)$ and $\zeta = 1/(2e)$, so that \begin{align}\label{eq:stock3} \Pr_{\hat{U}_F,\mathbf z}\left\{|p_M(\mathbf z)-\tilde{q}(\mathbf z)| < \left(\frac{1}{4} + o(1)\right) p_M(\mathbf z)\right\} > \frac{1}{2e}, \end{align} giving an approximation up to multiplicative error $1/4+o(1)$ for at least a $1/(2e)$ fraction of the instances in the set of experimentally realizable unitary matrices $\{\hat{U}_F\}$. If, as asserted by Conjecture \ref{conj:average-hard} and Conjecture 2 in the main text, multiplicatively estimating a $1/(2e)$ fraction of the output probabilities from $\{\hat{U}_F\}$ is \#P-hard, then the polynomial hierarchy collapses to the third level, since the above shows that this \#P-hard task can be performed by a $\mathrm{BPP^{NP}}$ machine. This concludes the proof of the main theorem in the main text. \section{Mapping of approximating output distribution of COE dynamics onto estimating partition function of complex Ising models}\label{app:mapping} In this section, we provide evidence to support Conjecture 1 in the main text, showing how hard instances could appear on average.
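Two probabilistic ingredients of the argument above, the Markov bound and the Porter-Thomas anticoncentration fraction, are easy to check numerically. The sketch below is an illustration with assumed error profiles (not part of the proof): it draws hypothetical additive errors summing to $\beta$ and samples the output probabilities of a Haar-random state.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 2 ** 12
beta, zeta = 1 / (8 * np.e), 1 / (2 * np.e)

# (i) Markov bound: if the additive errors e_z >= 0 sum to beta, then at
# most a zeta fraction of the outputs can have e_z >= beta / (N * zeta).
e = rng.random(N)
e *= beta / e.sum()                  # hypothetical errors with total error beta
frac_large = np.mean(e >= beta / (N * zeta))
assert frac_large <= zeta

# (ii) Anticoncentration: the output probabilities of a Haar-random state
# follow Porter-Thomas, so N * p > 1 for about a 1/e fraction of outputs.
amp = rng.normal(size=N) + 1j * rng.normal(size=N)
p = np.abs(amp) ** 2
p /= p.sum()
frac_anti = np.mean(N * p > 1.0)
assert abs(frac_anti - np.exp(-1)) < 0.05
```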
To do this, we map the task of approximating the output distribution of COE dynamics onto calculating the partition function of a classical Ising model, which is widely believed to be \#P-hard on average for multiplicative approximation \cite{2018_eisert_prx,bremner_average_2016}. The section is divided into two parts. In the first part, we explain the overall concept and physical intuition of this procedure. In the second part, mathematical details are provided. \subsection{Physical perspective of the mapping procedure} The mapping protocol consists of two intermediate procedures. First, we map the COE unitary evolution onto \textit{universal} random quantum circuits and, second, we derive a complex Ising model from those circuits following Ref. \cite{2018_hartmut_natphy}. Let us begin by expressing a unitary evolution of the COE as $\hat{U}_{\rm COE} = \hat{U}^T_{\rm CUE}\hat{U}_{\rm CUE}$, where $\hat{U}_{\rm CUE}$ is a random unitary drawn from the Circular Unitary Ensemble (CUE), i.e., the Haar ensemble \cite{2010_haake}. We then further decompose $\hat{U}_{\rm CUE}$ into a set of universal quantum gates \cite{2018_hartmut_natphy}. Following Ref. \cite{2018_hartmut_natphy}, we choose random quantum circuits consisting of $n+1$ layers of gates and $\log_2 N $ qubits, as shown in Fig. \ref{fig1}(a). The first layer consists of Hadamard gates applied to all qubits. The following layers consist of randomly chosen single-qubit gates from the set $\{\sqrt{{X}},\sqrt{{Y}},{T}\}$ and two-qubit controlled-Z (CZ) gates. Here, $\sqrt{{X}}$ ($\sqrt{{Y}}$) represents a $\pi/2$ rotation around the ${X}$ (${Y}$) axis of the Bloch sphere and ${T}$ is a non-Clifford gate represented by the diagonal matrix ${\rm diag}(1,e^{i\pi/4})$. Such circuits have been shown to be approximate $t$-designs \cite{beni} for arbitrarily large $t$ when $n\to\infty$, which implies convergence to the CUE \cite{Harrow2009}.
The operator $\hat{U}^T_{\rm CUE}$ can be implemented by reversing the order of the gates in $\hat{U}_{\rm CUE}$ and replacing $\sqrt{ Y}$ with $\sqrt{Y}^T$. We emphasize that decomposing the COE evolution into random circuits is only done theoretically, with the aim of showing average-case hardness. In real experiments, the COE dynamics is realized by driven many-body systems. \begin{figure*} \includegraphics[width=17.5cm,height=13.5cm]{fig1} \caption{\textbf{Mapping driven many-body dynamics to the partition function of complex Ising lattices:} (a) An example of a random circuit that generates COE dynamics and its conversion to the Ising model. (b) An example of a simple random quantum circuit, illustrating the mapping to the classical Ising model. STEP I to STEP III in the diagrammatic procedure are shown in (b)-(d), respectively. (e) Lookup table for the contribution of each gate to the local fields $h_i$, $h_j$ and the interaction $J_{ij}$ in the Ising lattice.} \label{fig1} \end{figure*} The mathematical procedure for the mapping from random quantum circuits to classical complex Ising models is discussed in detail in the next part. Specifically, $p_M(\bold{z})$ from the circuit $(\hat{U}^T_{\rm CUE}\hat{U}_{\rm CUE})^M$, as depicted in Fig. \ref{fig1}(a), can be calculated from the partition function, \begin{equation} \langle \bold{z} |\hat{U}^M_{\rm COE}|\mathbf z_0\rangle = \sum_{\bold{s}\in \mathcal{S}} A(\bold{s})\exp \left[\frac{i\pi}{4} \left(\sum_{i} h_i s_i+\sum_{\langle i,j\rangle}J_{ij}s_is_j\right)\right]. \label{eq:ising_partition} \end{equation} Here, $A(\bold{s})$ is the degeneracy number associated with a classical spin configuration $\bold{s}$ in the lattice $\mathcal{S}$, $s_i=\pm 1$, $h_i$ represents an on-site field on site $i$ and $J_{ij}$ represents the coupling between the classical spins on sites $i$ and $j$.
Since the output probability can also be interpreted as the path integral in Eq.~(\ref{eq:ising_partition}) in the main text, the intuition behind the mapping is that the sum over all possible paths is translated into the sum over all possible classical spin configurations, where the phase accumulated in each path is given by the energy of the complex Ising lattice $\mathcal{S}$. To gain an intuitive understanding of this standard mapping, we provide a diagrammatic approach to visualize the lattice $\mathcal{S}$ and extract the field parameters $\{h_i\}$, $\{J_{ij}\}$. To begin with, we use the random circuit in Fig. \ref{fig1}(b) as a demonstration. The mathematical descriptions behind each step are discussed in the next part. \begin{itemize} \item \underline{STEP I} - For each qubit, draw a circle between every pair of consecutive non-diagonal gates, see Fig. \ref{fig1}(c). Each circle or `node' represents one classical spin. \item \underline{STEP II} - For each qubit, draw a horizontal line between every pair of consecutive nodes $i$, $j$, see Fig. \ref{fig1}(d). These lines or `edges' represent the interaction $J_{ij}$ between two neighboring spins in the same row. In addition, draw a line between every two nodes that are connected by $CZ$ gates. These lines represent the interaction $J_{ij}$ between spins in different rows. \item \underline{STEP III} - Label each node and edge with the corresponding gates, see Fig. \ref{fig1}(e). \item \underline{STEP IV} - Use the lookup table in Fig. \ref{fig1}(f) to specify the $h_i$ and $J_{ij}$ introduced by each gate. For example, the $\sqrt{Y}$ gate that acts between nodes $i$ and $j$ adds $-1$ to $J_{ij}$, $-1$ to $h_i$ and $+1$ to $h_j$. We use the convention that the leftmost index represents the leftmost node. Also, the two $T$ gates that are enclosed by the node $i$ will add $0.5+0.5=+1$ to the local field $h_i$.
\item \underline{STEP V} - Finally, spins at the leftmost side of the lattice are fixed at $+1$, corresponding to the initial state $|\bold{0}\rangle$. Similarly, spins at the rightmost side of the lattice are fixed according to the readout state $\ket{\bold{z}}$. \end{itemize} Following the above recipe, we provide the exact form of the parameters in the Ising model for the COE dynamics in the next part, showing that the field parameters $\{h_i\}$ and $\{J_{ij}\}$ are quasi-random numbers with no apparent structure. Specifically, neither the phase $\pi \sum_i h_i s_i /4$ nor the phase $\pi \sum_{\langle i,j\rangle} J_{ij} s_i s_j/4$ is restricted to the values $0,\pi/2,\pi,3\pi/2$ (mod $2\pi$) for each spin configuration $\bold{s}$. Without such stringent restrictions, approximating the partition function up to multiplicative error is known to be $\#\mathrm{P}$-hard in the worst case \cite[Theorem 1.9]{goldberg_guo_2017}. This motivates a widely used conjecture in quantum supremacy proposals that such a task is also hard on average \cite{2018_eisert_prx,bremner_average_2016}. We emphasize here the major differences between the random quantum circuits proposed in Ref. \cite{2018_hartmut_natphy} and our systems. Firstly, our systems are analog, with no physical quantum gates involved; the decomposition into quantum gates is only done mathematically. Secondly, our system has discrete time-reversal symmetry, while such symmetry is absent in random quantum circuits. Consequently, the COE in our system is achieved from the Floquet operator $\hat{U}_F$, while the CUE in random quantum circuits is achieved from the entire unitary evolution. In addition, $\hat{U}_F^M$ in our system does not have the $t$-design property due to the COE \cite[pp.117-119]{nicholas_2018}. However, as shown above, the hardness arguments for random quantum circuits can be naturally applied to our case.
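The decomposition $\hat{U}_{\rm COE}=\hat{U}^T_{\rm CUE}\hat{U}_{\rm CUE}$ used above can be checked numerically. A minimal sketch (the QR-based Haar sampler is a standard construction, not taken from the text):

```python
import numpy as np

def haar_unitary(n, rng):
    """Sample a CUE (Haar-random) unitary via QR of a complex Ginibre matrix."""
    z = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    # Fix the QR phase ambiguity so that q is Haar-distributed.
    return q * (np.diag(r) / np.abs(np.diag(r)))

rng = np.random.default_rng(0)
u_cue = haar_unitary(8, rng)
u_coe = u_cue.T @ u_cue  # a COE sample: symmetric and unitary

assert np.allclose(u_coe, u_coe.T)                     # time-reversal symmetry
assert np.allclose(u_coe @ u_coe.conj().T, np.eye(8))  # unitarity
```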
\subsection{Mathematical details of the mapping procedure} In this section, we prove Eq.~(\ref{eq:ising_partition}) by justifying the diagrammatic recipe that maps the evolution $\hat U_{\rm CUE}$ onto an Ising spin model with complex fields. Again, the quantum gates of interest consist of both diagonal gates $\{T,CZ\}$ and non-diagonal gates $\{\sqrt{X}, \sqrt{Y}, \sqrt{Y}^T,H\}$. For simplicity, we start with one- and two-qubit examples before generalizing to the COE dynamics. The mathematical procedure here is adapted from Ref. \cite{2018_hartmut_natphy}. \subsubsection{One-qubit example} Let us consider a one-qubit circuit with $N+1$ gates randomly chosen from the set $\{\sqrt{X},\sqrt{Y},\sqrt{Y}^T,T\}$. The zeroth gate is fixed to be a Hadamard gate. The output probability is $p(z)=|\langle z |\hat{U}|0\rangle|^2$, where $\hat{U}=\prod_{n=0}^N \hat{U}^{(n)}$ is the total unitary matrix, $\hat{U}^{(n)}$ is the $n^{\rm th}$ gate and $z\in\{0, 1\}$ is the readout bit. Below, we outline the mathematical steps underlying the diagrammatic approach, followed by detailed explanations for each step: \begin{align} p(z) & = \left|\bra{z} \prod_{n=0}^{N} \hat{U}^{(n)} \ket{0}\right|^2 \nonumber\\ & = \left|\sum_{\underline{z}\in\{0,1\}^{N}} \prod_{n=0}^{N} \bra{z_{n}} \hat{U}^{(n)} \ket{z_{n-1}} \right|^2\nonumber\\ & = \left|\sum_{\underline{z}\in\{0,1\}^{N}} \prod_{n=0}^{N} A(z_{n},z_{n-1}) \exp\left[\frac{i\pi}{4}\Phi(z_{n},z_{n-1})\right]\right|^2 \nonumber\\ & = \left|\sum_{\underline{z}\in\{0,1\}^{N}} A({\underline{z}}) \exp\left[\frac{i\pi}{4} \sum_{n=0}^{N}\Phi(z_{n},z_{n-1})\right]\right|^2. \label{eqA1} \end{align} In the second line, we insert an identity $\hat{I}_n=\sum_{z_n\in\{0,1\}} \ket{z_n} \bra{z_n}$ between $\hat{U}^{(n+1)}$ and $\hat{U}^{(n)}$ for every $n\in\{0,\ldots,N-1\}$.
As a result, this line can be interpreted as a Feynman path integral, where each individual path or `world-line' is characterized by a sequence of basis variables $\underline{z} = (z_{-1},z_0,...,z_{N})$. The initial and end points of every path are $\ket{z_{-1}} = \ket{0}$ and $\ket{z_N} = \ket{z}$, respectively. In the third line, we decompose $\bra{z_{n}} \hat{U}^{(n)} \ket{z_{n-1}}$ into the amplitude $A(z_{n},z_{n-1})$ and the phase $\Phi(z_{n},z_{n-1})$. In the fourth line, we introduce $A({\underline{z}})=\prod_{n=0}^{N} A(z_{n},z_{n-1})$. The equation now takes the form of the partition function of a classical Ising model with complex energies. Here, $\underline{z}$ can be interpreted as a classical spin configuration, $A(\underline{z})$ as the degeneracy number and $i\frac{\pi}{4}\Phi(z_n,z_{n-1})$ as a complex energy associated with spin-spin interaction. Further simplification is possible by noticing that the diagonal gates in the circuit allow a reduction of the number of classical spins. Specifically, if a $T$ gate is applied to $\ket{z_{n-1}}$, it follows that $z_n=z_{n-1}$. Hence, the variables $z_{n-1}$ and $z_{n}$ can be represented by a single classical spin state. The two variables $z_{n-1}, z_{n}$ become independent only when a non-diagonal gate is applied. Therefore, we can group all variables $\{z_n\}$ between two non-diagonal gates into one classical spin. This observation leads to the directive presented as STEP I of the procedure in the previous section. Formally, for $N_{\rm spin}+1$ non-diagonal gates in the circuit (including the first Hadamard gate), $\underline{z}$ can be characterized by a classical spin configuration $\underline{s} = (s_{-1},s_0,...,s_k,...,s_{N_{\rm spin}})$, where $s_{k}=1-2z_k\in\{\pm 1\}$ is a spin representing the basis variable immediately after the $k^{\rm th}$ non-diagonal gate, i.e.
\begin{align} p(z) & = \left|\sum_{\underline{s} \in \{\pm 1\}^{N_{\rm spin}+1}} A({\underline{s}})\exp{\left[\frac{i\pi}{4}\sum_{k=0}^{N_{\rm spin}}\Phi(s_{k},s_{k-1})\right]}\right|^2 \nonumber \\ & = \left|Z_{\rm Ising}\right|^2. \end{align} Lastly, we need to specify $A(\underline{s})$ and $\Phi(s_{k},s_{k-1})$ in terms of the local fields $h_{k-1}$, $h_k$, the interaction $J_{k-1,k}$, and the spin configurations $s_{k-1},s_{k}$. This is done by first considering the gates in their matrix form, i.e. \begin{align} \sqrt{X} & = \frac{1}{\sqrt{2}}\begin{pmatrix} e^{\frac{i\pi}{2}} & 1 \\ 1 & e^{\frac{i\pi}{2}} \end{pmatrix} = \frac{1}{\sqrt{2}}\left[ e^{\frac{i\pi}{4}(1+s_ks_{k-1})}\right]_{s_k,s_{k-1}}, \\ \sqrt{Y} & = \frac{1}{\sqrt{2}}\begin{pmatrix} 1 & -1 \\ 1 & 1 \end{pmatrix} = \frac{1}{\sqrt{2}}\left[ e^{\frac{i\pi}{4}(1-s_{k-1})(1+s_{k})}\right]_{s_k,s_{k-1}}, \\ \sqrt{Y}^T & = \frac{1}{\sqrt{2}}\begin{pmatrix} 1 & 1 \\ -1 & 1 \end{pmatrix} = \frac{1}{\sqrt{2}}\left[ e^{\frac{i\pi}{4}(1+s_{k-1})(1-s_{k})}\right]_{s_k,s_{k-1}}, \\ H & = \frac{1}{\sqrt{2}}\begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix} = \frac{1}{\sqrt{2}}\left[ e^{\frac{i\pi}{4}(1-s_{k-1})(1-s_{k})}\right]_{s_k,s_{k-1}}, \\ T & = \begin{pmatrix} 1 & 0 \\ 0 & e^{\frac{i\pi}{4}} \end{pmatrix} = {\rm Diag}\left[ e^{\frac{i\pi}{4}(\frac{1-s_{k}}{2})}\right]_{s_k}. \end{align} Notice that all non-diagonal gates contribute the same amplitude $A(s_k,s_{k-1})=1/\sqrt{2}$, leading to $A(\underline{s})=2^{-(N_{\rm spin}+1)/2}$. Hence, we can extract the contribution of each gate to $\Phi(s_k,s_{k-1})$ as \begin{align} \Phi_{\sqrt{X}}(s_k,s_{k-1}) & = 1+s_{k-1}s_{k},\\ \Phi_{\sqrt{Y}}(s_k,s_{k-1}) & = (1-s_{k-1})(1+s_{k}) \nonumber\\ &= 1-s_{k-1}+s_{k}-s_{k-1}s_k,\\ \Phi_{\sqrt{Y}^T}(s_k,s_{k-1}) & = (1+s_{k-1})(1-s_{k}) \nonumber\\ &= 1+s_{k-1}-s_k-s_{k-1}s_k ,\\ \Phi_T(s_{k}) & = \frac{1-s_{k}}{2}. \end{align} The subscript indicates which gate contributes to the phase.
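The phase decompositions above can be verified directly by rebuilding each $2\times 2$ gate matrix from its exponent $\Phi(s_k,s_{k-1})$, with $s=1-2z$. A short NumPy check, using the matrix conventions of the text:

```python
import numpy as np

spin = lambda z: 1 - 2 * z  # basis variable z in {0,1} -> classical spin +/-1

# Phase exponents Phi(s_k, s_{k-1}) as derived in the text.
phi_sqrtX  = lambda sk, skm1: 1 + skm1 * sk
phi_sqrtY  = lambda sk, skm1: (1 - skm1) * (1 + sk)
phi_sqrtYT = lambda sk, skm1: (1 + skm1) * (1 - sk)
phi_H      = lambda sk, skm1: (1 - skm1) * (1 - sk)

def from_phase(phi):
    """Rebuild the gate matrix: rows indexed by z_k, columns by z_{k-1}."""
    return np.array([[np.exp(1j * np.pi / 4 * phi(spin(zk), spin(zkm1)))
                      for zkm1 in (0, 1)] for zk in (0, 1)]) / np.sqrt(2)

gates = {
    "sqrtX":  np.array([[1j, 1], [1, 1j]]) / np.sqrt(2),
    "sqrtY":  np.array([[1, -1], [1, 1]]) / np.sqrt(2),
    "sqrtYT": np.array([[1, 1], [-1, 1]]) / np.sqrt(2),
    "H":      np.array([[1, 1], [1, -1]]) / np.sqrt(2),
}
phases = {"sqrtX": phi_sqrtX, "sqrtY": phi_sqrtY,
          "sqrtYT": phi_sqrtYT, "H": phi_H}

for name in gates:
    assert np.allclose(gates[name], from_phase(phases[name]))
```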
The corresponding $h_i$, $h_j$ and $J_{ij}$ are depicted in the lookup table in Fig. \ref{fig1}(f), where $i=k-1$ and $j=k$. The global phase that does not depend on $\underline{s}$ is ignored, as it does not contribute to $p(z)$. \subsubsection{Two-qubit example} We now consider a two-qubit random circuit to demonstrate the action of the $CZ$ gates. We introduce a new index $l \in\{1,2\}$ to label each qubit, which is placed on a given horizontal line (row). Since the $CZ$ gate is diagonal, its presence does not alter the number of spins in each row. However, the gate introduces an interaction between spins in different rows. This can be seen from its explicit form, i.e. \begin{align} CZ & = \begin{pmatrix} 1 & 0 & 0& 0 \\ 0 & 1 & 0& 0 \\ 0 & 0 & 1& 0 \\ 0 & 0 & 0& -1 \end{pmatrix} = {\rm Diag}\left[ e^{\frac{i\pi}{4}(1-s_{1,k})(1 - s_{2,k'})}\right]_{s_{1,k},s_{2,k'}}, \end{align} where $s_{1,k}$ ($s_{2,k'}$) is the state of the $k^{\rm th}$ ($k'^{\rm th}$) spin in the first (second) row. It follows that \begin{align} \Phi_{CZ}(s_{1,k},s_{2,k'}) &= (1-s_{1,k})(1 - s_{2,k'}) \\ &= 1 - s_{1,k} - s_{2,k'} + s_{1,k}s_{2,k'}. \end{align} The corresponding $h_i$, $h_j$, and $J_{ij}$ are depicted in Fig. \ref{fig1}(f), where $i=(1,k)$ and $j=(2,k')$. We have now derived all the ingredients needed to map a random quantum circuit to a classical Ising model. \subsubsection{Full COE dynamics} Since the COE dynamics can be expressed in terms of a quasi-random quantum circuit, we can straightforwardly apply the above procedure to find the corresponding Ising model. The complexity here arises solely from the number of indices required to specify the positions of all the gates in the circuit. To deal with this, we introduce the following indices: \begin{itemize} \item[--] an index $l\in\{1,...,L\}$ to indicate the qubit (row); \item[--] an index $m\in\{1,...,M\}$ to indicate the period; \item[--] an index $\mu\in\{A,B\}$ to indicate the part of the period.
Here, $A$ and $B$ refer to the $\hat{U}_{\rm CUE}$ part and the $\hat{U}_{\rm CUE}^T$ part, respectively; \item[--] an index $k\in\{0,1,...,N_{\rm spin}(l)\}$ to indicate the spin position for a given $m$ and $\mu$. Here, $N_{\rm spin}(l)$ is the total number of spins in the $l^{\rm th}$ row. Note that, due to the symmetric structure of $\hat{U}_{\rm CUE}$ and $\hat{U}_{\rm CUE}^T$, we run the index $k$ backward for the transpose part, i.e. $k=0$ refers to the last layer; \item[--] an index $\nu_{l,k}$ such that $\nu_{l,k}=1$ if the $k^{\rm th}$ non-diagonal gate acting on qubit $l$ is $\sqrt{X}$, and $\nu_{l,k}=0$ otherwise. \end{itemize} With these indices, the partition function of the circuit, as shown in Fig. \ref{fig1}(a), can be written as \begin{equation} \langle \bold{z} | \psi \rangle = 2^{-\frac{G}{2}}\sum_{\underline{s}\in \mathcal{S}} \exp{\left[\frac{i\pi}{4}E(\underline{s})\right]}, \end{equation} with \begin{align} E(\underline{s}) = & E(\bold{z}) + \sum_{m=1}^{M}\sum_{\mu=A}^{B}\sum_{l=1}^{L}\sum_{k=0}^{N_{\rm spin}(l)} h_{lk} s^{\mu,m}_{l,k} \\ &+\sum_{m=1}^{M}\sum_{\mu=A}^{B}\sum_{l=1}^{L}\sum_{k=1}^{N_{\rm spin}(l)} (2\nu_{l,k}-1)s^{\mu,m}_{l,k-1}s^{\mu,m}_{l,k} \nonumber \\ & + \sum_{m=1}^{M}\sum_{\mu=A}^{B }\sum_{l=1}^{L}\sum_{l'=1}^{l-1}\sum_{k=1}^{N_{\rm spin}(l)}\sum_{k'=1}^{N_{\rm spin}(l')} \zeta_{(l,k)}^{(l',k')} s^{\mu,m}_{l,k} s^{\mu,m}_{l',k'} \nonumber, \end{align} and \begin{align} h_{lk} & = \nu_{l,k+1} - \nu_{l,k} - \frac{1}{2} N_T{(l,k)} - N_{CZ}(l,k), \\ E(\bold{z}) & = \sum_{l=1}^{L}\left(-s^{B,M}_{l,0} - s_{z_{l}} + s^{B,M}_{l,0}s_{z_{l}}\right). \end{align} Here, $G$ is the total number of non-diagonal gates in the circuit, $\zeta_{(l,k)}^{(l',k')}$ represents the total number of $CZ$ gates which introduce an interaction between the spins $s^{\mu,m}_{l,k}$ and $s^{\mu,m}_{l',k'}$, and $N_{CZ}(l,k)$ ($N_T{(l,k)}$) is the total number of $CZ$ ($T$) gates which introduce a local field on the spin $s^{\mu,m}_{l,k}$.
$E(\bold{z})$ is the contribution from the last Hadamard layer, which depends on the readout bit-string $\bold{z}$. $\{s_{z_l}\}$ are the spins corresponding to $\bold{z}$, and their configuration is fixed. In addition, there are two extra boundary conditions, (i) between parts $A$ and $B$ and (ii) between two adjacent periods $m$ and $m+1$, i.e. $s^{A,m}_{l,N_{\rm spin}(l)} = s^{B,m}_{l,N_{\rm spin}(l)}$ and $s^{A,m+1}_{l,0} = s^{B,m}_{l,0}$. \section{Derivation of the Porter-Thomas distribution from COE dynamics}\label{app:anti-concentration} In this section, we provide additional mathematical details involved in the proof of Theorem 2. More precisely, we show that the distribution of the output probability of COE dynamics, ${\rm Pr} (p)$, follows the Porter-Thomas distribution ${\rm Pr_{PT}}(p)=Ne^{-Np}$. First, let us consider the output probability $p_M(\bold{z})= |\bra{\bold{z}} \psi_M \rangle|^2$ with \begin{align} \bra{\bold{z}} \psi_M \rangle & = \bra{\bold{z}} U^M_{\rm COE} \ket{\bold{0}} \nonumber \\ & = \bra{\bold{z}} \left[ \sum_{\epsilon=0}^{N-1} e^{iME_{\epsilon}T} \ket{E_{\epsilon}} \bra{E_{\epsilon}} \right]\ket{\bold{0}}\nonumber \\ & = \sum_{\epsilon=0}^{N-1} d_\epsilon(\bold{z})e^{i\phi_{M,\epsilon}} \nonumber \\ & = \left[\sum_{\epsilon=0}^{N-1} d_\epsilon(\bold{z}) \cos\phi_{M,\epsilon}\right] + i \left[\sum_{\epsilon=0}^{N-1} d_\epsilon(\bold{z}) \sin\phi_{M,\epsilon}\right] \nonumber \\ & = a_{\bold{z}} + i b_{\bold{z}}, \end{align} where $N$ is the dimension of the Hilbert space, $d_{\epsilon}(\bold{z})=\langle \bold{z} | E_{\epsilon}\rangle \langle E_{\epsilon}|\bold{0}\rangle$, $\phi_{M,\epsilon}=ME_\epsilon T \text{ mod } 2\pi$, $a_{\bold{z}}={\rm Re}\left[\bra{\bold{z}} \psi_M \rangle\right]$ and $b_{\bold{z}}={\rm Im}\left[\bra{\bold{z}} \psi_M \rangle\right]$.
\bigskip \begin{lem} \label{theorem1} The distribution of $d_{\epsilon}(\bold{z})$, over $\epsilon\in\{0,...,N-1\}$ or over $\bold{z}\in\{0,1\}^L$, is given by the modified Bessel function of the second kind. \end{lem} \begin{lem} \label{theorem2} The distribution of $a_{\bold{z}}$ and $b_{\bold{z}}$ over $\bold{z}\in\{0,1\}^L$ is the normal distribution with zero mean and variance equal to $1/(2N)$. \end{lem} \bigskip To prove Lemma 1, we first write $d_{\epsilon}(\bold{z}) = c_{\bold{z},\epsilon} c_{\bold{0},\epsilon}$, where $c_{\bold{z},\epsilon}=\langle \bold{z}|E_{\epsilon}\rangle$ and $c_{\bold{0},\epsilon}=\langle \bold{0}|E_{\epsilon}\rangle$. For the COE dynamics, the coefficients $c_{\bold{z},\epsilon}$ and $c_{\bold{0},\epsilon}$ are real numbers whose distribution is \cite{2010_haake} \begin{equation} {\rm Pr}(c)=\sqrt{\frac{N}{2\pi}}\exp\left[-\frac{Nc^2}{2}\right]. \end{equation} As discussed in the main text, the phase $\phi_{M,\epsilon}$ becomes random for $M\gg 2\pi /E_\epsilon T$. The random sign ($\pm 1$) of $c_{\bold{z},\epsilon}$ can therefore be absorbed into the phase without changing its statistics. The distribution of $d_{\epsilon}(\bold{z})$ can be obtained using the product distribution formula \begin{align} {\rm Pr}(d) & = \int_{-\infty}^{\infty} {\rm Pr}(c)\, {\rm Pr}\!\left(\frac{d}{c}\right) \frac{1}{|c|}\, {\rm d} c \nonumber \\ & = \frac{N}{\pi} \int_0^{\infty} \frac{1}{c}\exp{\left( -\frac{Nc^2}{2}\right)} \exp{\left( -\frac{Nd^2}{2c^2}\right)}{\rm d} c \nonumber \\ & = \frac{N}{\pi}K_0(N|d|), \end{align} where $K_0$ is the modified Bessel function of the second kind. To prove Lemma 2, we first note that the distributions of $\cos\phi_{M,\epsilon}$ and $\sin\phi_{M,\epsilon}$, with $\phi_{M,\epsilon}$ uniformly distributed in the range $[0,2\pi)$, are \begin{align} {\rm Pr}(\cos\phi) & = \frac{1}{\pi\sqrt{1-\cos^2\phi}}, \\ {\rm Pr}(\sin\phi) & = \frac{1}{\pi\sqrt{1-\sin^2\phi}}.
\end{align} We then calculate the distribution of $\kappa_{\epsilon}\equiv d_{\epsilon}(\bold{z}) \cos\phi_{M,\epsilon}$ using the product distribution formula with $x=\cos\phi_{M,\epsilon}$, i.e. \begin{align} {\rm Pr}(\kappa) & = \int_{-1}^1 \frac{1}{\pi\sqrt{1-x^2}}\, {\rm Pr}\!\left(\frac{\kappa}{x}\right) \frac{1}{|x|}\, {\rm d} x \nonumber \\ & = \frac{N}{\pi^2} K^2_0\left(\frac{N|\kappa|}{2}\right), \end{align} where ${\rm Pr}(\kappa/x)$ is the distribution of $d$ derived above, evaluated at $\kappa/x$. The mean and the variance of $\kappa_{\epsilon}$ are \begin{align} \langle \kappa \rangle & = \int_{-\infty}^\infty \kappa\cdot \frac{N}{\pi^2} K^2_0\left(\frac{N|\kappa|}{2}\right) {\rm d} \kappa = 0, \\ {\rm Var}(\kappa) & = \int_{-\infty}^\infty \kappa^2\cdot\frac{N}{\pi^2} K^2_0\left(\frac{N|\kappa|}{2}\right) {\rm d} \kappa = \frac{1}{2N^2}. \end{align} Since $a_{\bold{z}}$ is a sum of independent and identically distributed random variables, i.e. $a_{\bold{z}}=\sum_{\epsilon=0}^{N-1} \kappa_\epsilon$, we can apply the central limit theorem for large $N$. Hence, the distribution of $a_{\bold{z}}$ is normal with zero mean and variance ${\rm Var}(a) = N \cdot{\rm Var}(\kappa) = 1/(2N)$. The same applies to the distribution of $b_{\bold{z}}$. Theorem 2 then follows from the fact that the sum of the squares of two independent Gaussian variables follows the $\chi^2$ distribution with two degrees of freedom, ${\rm Pr}_{\chi^2,k=2}(p) \sim \exp\left[-p/(2\sigma^2)\right]$ \cite{2002_simon}. Inserting the variance obtained in Lemma 2 and normalizing, the distribution of $p_M(\bold{z}) = a_{\bold{z}}^2 + b_{\bold{z}}^2$ over $\bold{z}\in\{0,1\}^L$ is the Porter-Thomas distribution. Since the Porter-Thomas distribution anti-concentrates, i.e. ${\rm Pr}_{\rm PT}\left(p > \frac{1}{N}\right) = \int_{Np=1}^{\infty} {\rm d}(Np)\, e^{-Np} = 1/e$, this completes the proof of Theorem 2.
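The last step of the argument, from Lemma 2 to the Porter-Thomas form, can be illustrated numerically by sampling $a_{\bold{z}}$ and $b_{\bold{z}}$ directly from the Gaussian law of Lemma 2. This is a sketch with an illustrative dimension $N$, not a simulation of the actual dynamics:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 512             # illustrative Hilbert-space dimension
samples = 200_000

# Lemma 2: a_z and b_z are normal with zero mean and variance 1/(2N).
a = rng.normal(0.0, np.sqrt(1 / (2 * N)), samples)
b = rng.normal(0.0, np.sqrt(1 / (2 * N)), samples)
p = a**2 + b**2     # output probabilities p_M(z)

# Porter-Thomas, Pr(p) = N exp(-N p): the mean of N*p is 1 and the
# anti-concentration fraction Pr(N p > 1) is 1/e.
mean_np = np.mean(N * p)
frac = np.mean(N * p > 1.0)
assert abs(mean_np - 1.0) < 0.02
assert abs(frac - np.exp(-1)) < 0.01
```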
\section{Undriven thermalized many-body systems\label{app:undriven_physical}} In this section, we analyze the long-time unitary evolution of undriven systems in the thermalized phase. The results presented here highlight the key role played by the drive in generating the randomness required for the above quantum supremacy proof. In particular, we show that for typical undriven physical systems with local constraints (e.g. finite-range interactions) and conserved energy, the output distribution never coincides with the PT distribution. We emphasize that this is a consequence of the inability of random matrix theory to accurately describe the full spectral range of undriven thermalized many-body systems. Indeed, it has been shown that for undriven many-body systems that thermalize (at a finite temperature), the statistics of the Hamiltonian resemble the statistics of the Gaussian orthogonal ensemble (GOE)~\cite{2016_Alessio_AiP}. However, this match is accurate only over a small energy window (usually far from the edges of the spectrum). If one zooms in on this small energy window, the Hamiltonian looks random, but if one considers the full spectrum, the local structure of the Hamiltonian appears, and random matrix theory fails to capture it. \begin{figure*} \includegraphics[width=1\textwidth]{fig_sm_undriven.pdf} \caption{\textbf{Undriven thermalized Ising models versus the GOE:} (a) Level-spacing statistics of an ensemble $\{ \hat H \}$ and its corresponding long-time evolution operator $\hat U$, obtained from the physical Ising system (circles) and the GOE (squares). The blue dashed and the orange solid lines are theoretical predictions for the POI and the GOE distributions, respectively. (b) The eigenstate distribution $d_{\epsilon}(\bold{z})$ [see Eq. (5) of the main text] with the GOE prediction (solid line). (c) The $l_1$-norm distance between the output distribution and the PT distribution as a function of time.
The driven case studied in the main text is presented for comparison. The parameters used are: $L = 9$, $W=1.5J$, $F=2.5J$, $\omega=8J$ (for the driven case) and $500$ disorder/instance realizations.} \label{fig_sm} \end{figure*} To see this, we numerically simulate the undriven Ising Hamiltonian, $\hat{H}_0=\sum_{l=0}^{L-1} \mu_l \hat{Z}_l + J\sum_{l=0}^{L-2}\hat{Z}_l\hat{Z}_{l+1} + \frac{F}{2}\sum_{l=0}^{L-1}\hat{X}_l$, where $\mu_l\in\{0,W\}$ is a local disorder, $W$ is the disorder strength, $F$ is the static global magnetic field along $x$ and $J$ is the interaction strength. This Hamiltonian is in fact the average Hamiltonian of the driven Ising Hamiltonian used in the main text. For comparison, we also simulate the quantum evolution under an ensemble $\{\hat H_{\rm GOE}\}$ of synthetic Hamiltonians that are drawn uniformly from the GOE (i.e. without any local constraints). Fig.~\ref{fig_sm}(a) shows the level-spacing statistics of $\{ \hat{H}_{0} \}$ (obtained over $500$ disorder realizations), $\{ \hat{H}_{\rm GOE} \}$ (obtained over $500$ random instances) and their corresponding long-time unitary operators $\hat{U} = \lim_{t\rightarrow \infty} e^{-it\hat{H}}$. We see that the level statistics of the physical Hamiltonian (and of its long-time evolution) are indistinguishable from those of the GOE. However, the discrepancy between the physical and synthetic (GOE) realizations becomes apparent when looking at the eigenstate statistics, as shown in Fig.~\ref{fig_sm}(b). While the distribution of $d_{\epsilon}(\bold{z})$ [see Eq. (5) of the main text] from the GOE is in good agreement with the modified Bessel function of the second kind, the physical system fails to meet the theoretical prediction. This is in contrast to the driven case presented in the main text.
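The GOE level-spacing diagnostic of Fig.~\ref{fig_sm}(a) can be reproduced in a few lines with the consecutive-gap-ratio statistic, a common spacing diagnostic that avoids spectral unfolding. This is a sketch with illustrative matrix sizes; the reference values $\langle r\rangle\approx 0.5307$ for the GOE and $\approx 0.386$ for Poisson statistics are standard benchmarks, not taken from the text:

```python
import numpy as np

rng = np.random.default_rng(2)

def mean_gap_ratio(eigs):
    """Mean consecutive-gap ratio <r>, with r_n = min(s_n, s_{n+1}) / max(...)."""
    s = np.diff(np.sort(eigs))
    return np.mean(np.minimum(s[:-1], s[1:]) / np.maximum(s[:-1], s[1:]))

# Sample real symmetric Gaussian (GOE) matrices and average <r>.
ratios = []
for _ in range(100):
    a = rng.standard_normal((200, 200))
    ratios.append(mean_gap_ratio(np.linalg.eigvalsh((a + a.T) / 2)))

r_goe = np.mean(ratios)  # near the GOE value ~0.5307, away from Poisson ~0.386
assert abs(r_goe - 0.5307) < 0.02
```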
More importantly in the context of this work, a key difference between the physical Hamiltonian and the random matrix theory prediction can be seen by comparing the distributions of the output states after some time evolution. In Fig.~\ref{fig_sm}(c), we show that the Porter-Thomas distribution is never reached by the physical systems, while it is reached by the synthetic realizations as well as by the driven case studied in the main text. These results underline the gap between physical Hamiltonians and true random matrices and, more importantly, they highlight the key role of the drive in bridging that gap. \end{widetext} \end{document}
\newcommand\hatb{{\widehat{b}}} \newcommand\hatc{{\widehat{c}}} \newcommand\hatC{{\widehat{C}}} \newcommand\hatB{{\widehat{B}}} \newcommand\hatD{{\widehat{D}}} \newcommand\hate{{\widehat{e}}} \newcommand\hatE{{\widehat{E}}} \newcommand\hatf{{\widehat{f}}} \newcommand\hatF{{\widehat{F}}} \newcommand\hatg{{\widehat{g}}} \newcommand\hatG{{\widehat{G}}} \newcommand\hath{{\widehat{h}}} \newcommand\hatH{{\widehat{H}}} \newcommand\hati{{\hat{i}}} \newcommand\hatI{{\hat{I}}} \newcommand\hatj{{\widehat{j}}} \newcommand\hatJ{{\widehat{J}}} \newcommand\hatk{{\widehat{k}}} \newcommand\hatK{{\widehat{K}}} \newcommand\hatL{{\widehat{L}}} \newcommand\hatm{{\widehat{m}}} \newcommand\hatM{{\widehat{M}}} \newcommand\hatn{{\widehat{n}}} \newcommand\hatN{{\widehat{N}}} \newcommand\hatp{{\widehat{p}}} \newcommand\hatP{{\widehat{P}}} \newcommand\hatr{{\widehat{r}}} \newcommand\hatR{{\widehat{R}}} \newcommand\hatq{{\widehat{q}}} \newcommand\hatQ{{\widehat{Q}}} \newcommand\hatT{{\widehat{T}}} \newcommand\hatu{{\widehat{u}}} \newcommand\hatU{{\widehat{U}}} \newcommand\hatV{{\widehat{V}}} \newcommand\hatv{{\widehat{v}}} \newcommand\hatw{{\widehat{w}}} \newcommand\hatW{{\widehat{W}}} \newcommand\hatx{{\widehat{x}}} \newcommand\hatX{{\widehat{X}}} \newcommand\haty{{\widehat{y}}} \newcommand\hatY{{\widehat{Y}}} \newcommand\hatZ{{\widehat{Z}}} \newcommand\hatz{{\widehat{z}}} \newcommand\hatalp{{\widehat{\alpha}}} \newcommand\hatdel{{\widehat{\delta}}} \newcommand\hatDel{{\widehat{\Delta}}} \newcommand\hatbet{{\widehat{\beta}}} \newcommand\hateps{{\hat{\eps}}} \newcommand\hatgam{{\widehat{\gamma}}} \newcommand\hatGam{{\widehat{\Gamma}}} \newcommand\hatlam{{\widehat{\lambda}}} \newcommand\hatmu{{\widehat{\mu}}} \newcommand\hatnu{{\widehat{\nu}}} \newcommand\hatOme{{\widehat{\Ome}}} \newcommand\hatphi{{\widehat{\phi}}} \newcommand\hatPhi{{\widehat{\Phi}}} \newcommand\hatpi{{\widehat{\pi}}} \newcommand\hatpsi{{\widehat{\psi}}} \newcommand\hatPsi{{\widehat{\Psi}}} 
\newcommand\hatrho{{\widehat{\rho}}} \newcommand\hatsig{{\widehat{\sig}}} \newcommand\hatSig{{\widehat{\Sig}}} \newcommand\hattau{{\widehat{\tau}}} \newcommand\hattet{{\widehat{\theta}}} \newcommand\hatvarphi{{\widehat{\varphi}}} \newcommand\hatZZ{{\widehat{\ZZ}}} \newcommand\tilA{{\widetilde{A}}} \newcommand\tila{{\widetilde{a}}} \newcommand\tilB{{\widetilde{B}}} \newcommand\tilb{{\widetilde{b}}} \newcommand\tilc{{\widetilde{c}}} \newcommand\tilC{{\widetilde{C}}} \newcommand\tild{{\widetilde{d}}} \newcommand\tilD{{\widetilde{D}}} \newcommand\tilE{{\widetilde{E}}} \newcommand\tilf{{\widetilde{f}}} \newcommand\tilF{{\widetilde{F}}} \newcommand\tilg{{\widetilde{g}}} \newcommand\tilG{{\widetilde{G}}} \newcommand\tilh{{\widetilde{h}}} \newcommand\tilk{{\widetilde{k}}} \newcommand\tilK{{\widetilde{K}}} \newcommand\tilj{{\widetilde{j}}} \newcommand\tilJ{{\widetilde{J}}} \newcommand\tilm{{\widetilde{m}}} \newcommand\tilM{{\widetilde{M}}} \newcommand\tilH{{\widetilde{H}}} \newcommand\tilL{{\widetilde{L}}} \newcommand\tilN{{\widetilde{N}}} \newcommand\tiln{{\widetilde{n}}} \newcommand\tilO{{\widetilde{O}}} \newcommand\tilP{{\widetilde{P}}} \newcommand\tilp{{\widetilde{p}}} \newcommand\tilq{{\widetilde{q}}} \newcommand\tilQ{{\widetilde{Q}}} \newcommand\tilR{{\widetilde{R}}} \newcommand\tilr{{\widetilde{r}}} \newcommand\tilS{{\widetilde{S}}} \newcommand\tils{{\widetilde{s}}} \newcommand\tilT{{\widetilde{T}}} \newcommand\tilt{{\widetilde{t}}} \newcommand\tilu{{\widetilde{u}}} \newcommand\tilU{{\widetilde{U}}} \newcommand\tilv{{\widetilde{v}}} \newcommand\tilV{{\widetilde{V}}} \newcommand\tilw{{\widetilde{w}}} \newcommand\tilW{{\widetilde{W}}} \newcommand\tilX{{\widetilde{X}}} \newcommand\tilx{{\widetilde{x}}} \newcommand\tily{{\widetilde{y}}} \newcommand\tilY{{\widetilde{Y}}} \newcommand\tilZ{{\widetilde{Z}}} \newcommand\tilz{{\widetilde{z}}} \newcommand\tilalp{{\widetilde{\alpha}}} \newcommand\tilbet{{\widetilde{\beta}}} \newcommand\tildel{{\widetilde{\delta}}} 
\newcommand\tilDel{{\widetilde{\Delta}}} \newcommand\tilchi{{\widetilde{\chi}}} \newcommand\tileps{{\widetilde{\eps}}} \newcommand\tileta{{\widetilde{\eta}}} \newcommand\tilgam{{\widetilde{\gamma}}} \newcommand\tilGam{{\widetilde{\Gamma}}} \newcommand\tilome{{\widetilde{\ome}}} \newcommand\tillam{{\widetilde{\lam}}} \newcommand\tilmu{{\widetilde{\mu}}} \newcommand\tilphi{{\widetilde{\phi}}} \newcommand\tilpi{{\widetilde{\pi}}} \newcommand\tilpsi{{\widetilde{\psi}}} \renewcommand\tilome{{\widetilde{\ome}}} \newcommand\tilOme{{\widetilde{\Ome}}} \newcommand\tilPhi{{\widetilde{\Phi}}} \newcommand\tilQQ{{\widetilde{\QQ}}} \newcommand\tilrho{{\widetilde{\rho}}} \newcommand\tilsig{{\widetilde{\sig}}} \newcommand\tiltau{{\widetilde{\tau}}} \newcommand\tiltet{{\widetilde{\theta}}} \newcommand\tilvarphi{{\widetilde{\varphi}}} \newcommand\tilxi{{\widetilde{\xi}}} \newcommand\twolongrightarrow{\ \hbox{$\longrightarrow\hskip -17pt \longrightarrow$}\ } \renewcommand{\>}{\rangle} \newcommand{\<}{\langle} \newcommand{-}{-} \renewcommand{\d}{\text{\( \partial\)}} \newcommand{\bar{\d}}{\bar{\d}} \renewcommand{\b}{\bullet} \newcommand{\omega}{\omega} \newcommand{\operatorname{Ad}}{\operatorname{Ad}} \newcommand{\nabla}{\nabla} \newcommand{\tilDel}{\tilDel} \newcommand{\tilJ}{\tilJ} \newcommand{\nabla^{\calE}\newcommand{\W}{\calW\otimes\L^k}}{\nabla^{\calE}\newcommand{\W}{\calW\otimes\L^k}} \newcommand{{\calE}\newcommand{\W}{\calW\otimes\L^k}}{{\calE}\newcommand{\W}{\calW\otimes\L^k}} \newcommand{\widetilde{\nabla}^{\calE}\newcommand{\W}{\calW\otimes\L^k}}{\widetilde{\nabla}^{\calE}\newcommand{\W}{\calW\otimes\L^k}} \newcommand{\widetilde{\Del}_k}{\widetilde{\Del}_k} \newcommand{\calE}\newcommand{\W}{\calW}{\calE}\newcommand{\W}{\calW} \newcommand{\calZ}\newcommand{\F}{\calF}{\calZ}\newcommand{\F}{\calF} \newcommand{\calA}\renewcommand{\O}{\calO}{\calA}\renewcommand{\O}{\calO} \renewcommand{\L}{\calL} \renewcommand{\tilS}{\widetilde{\calS}} \newcommand{\L^k}{\L^k} 
\newcommand{F^{\E/\calS}}{F^{\calE}\newcommand{\W}{\calW/\calS}} \newcommand{\n^{\E\otimes\calW}}{\nabla^{\calE}\newcommand{\W}{\calW\otimes\calW}} \newcommand{\operatorname{End}_{C(M)}\,}{\operatorname{End}_{C(M)}\,} \newcommand{{\Gam}}{{\Gam}} \newcommand{{\Gam(M,C(M))}}{{\Gam(M,C(M))}} \newcommand{{\Gam(M,\E)}}{{\Gam(M,\calE}\newcommand{\W}{\calW)}} \newcommand{^{1,0}}{^{1,0}} \newcommand{^{0,1}}{^{0,1}} \newcommand{\hm}[2]{{H^{#2}(M_{#1},\calO(L_{#1}))}} \newcommand{\hmk}[2]{{H_k^{#2}(M_{#1},\calO(L_{#1}))}} \newcommand{\hma}[1]{{H^{*}(M_{#1},\calO(L_{#1}))}} \newcommand{\hmak}[1]{{H_k^{*}(M_{#1},\calO(L_{#1}))}} \newcommand{\hmo}[2]{{H^{#2}_0(M_{#1},\calO(L_{#1}))}} \newcommand{\backslash}{\backslash} \newcommand{{\Ome^{0,k}}}{{\Ome^{0,k}}} \newcommand{^p_k}{^p_k} \newcommand\ch{\operatorname{char}} \newcommand\mult{\operatorname{mult}} \newcommand{\preccurlyeq}{\preccurlyeq} \newcommand{\bar\square_t}{\bar\square_t} \newcommand{^{\text{hor}}}{^{\text{hor}}} \renewcommand{\vert}{^{\text{vert}}} \newcommand{K\"ahler }{K\"ahler } \setcounter{tocdepth}{1} \begin{document} \title{Vanishing theorems for the half-kernel of a Dirac operator} \author{Maxim Braverman} \address{Institute of Mathematics\\ The Hebrew University \\ Jerusalem 91904 \\ Israel } \email{maxim@math.huji.edu} \thanks{This research was partially supported by grant No. 96-00210/1 from the United States-Israel Binational Science Foundation (BSF)} \subjclass{Primary: 32L20; Secondary: 58G10, 14F17} \keywords{Vanishing theorem, Clifford bundle, Dirac operator, Andreotti-Grauert theorem, Melin inequality} \begin{abstract} We obtain a vanishing theorem for the half-kernel of a Dirac operator on a Clifford module twisted by a sufficiently large power of a line bundle, whose curvature is non-degenerate at any point of the base manifold. 
In particular, if the base manifold is almost complex, we prove a vanishing theorem for the half-kernel of a $\spin^c$ Dirac operator twisted by a line bundle with curvature of a mixed sign. In this case we also relax the assumption of non-degeneracy of the curvature. These results are generalizations of a vanishing theorem of Borthwick and Uribe. As an application we obtain a new proof of the classical Andreotti-Grauert vanishing theorem for the cohomology of a compact complex manifold with values in the sheaf of holomorphic sections of a holomorphic vector bundle, twisted by a large power of a holomorphic line bundle with curvature of a mixed sign. As another application we calculate the sign of the index of a signature operator twisted by a large power of a line bundle. \end{abstract} \maketitle \tableofcontents \sec{introd}{Introduction} One of the most fundamental facts of complex geometry is the Kodaira vanishing theorem for the cohomology of the sheaf of sections of a holomorphic vector bundle twisted by a large power of a positive line bundle. In 1962, Andreotti and Grauert \cite{AndGr62} obtained the following generalization of this result to the case when the line bundle is not necessarily positive. Let $\L$ be a holomorphic line bundle over a compact complex $n$-dimensional manifold $M$. Suppose $\L$ admits a holomorphic connection whose curvature $F^\L$ has at least $q$ negative and at least $p$ positive eigenvalues at any point of $M$. Then the Andreotti-Grauert theorem asserts that, for any holomorphic vector bundle $\W$ over $M$, the cohomology $H^j(M,\O(\W\otimes\L^k))$ of $M$ with coefficients in the sheaf of holomorphic sections of the tensor product $\W\otimes\L^k$ vanishes for $k\gg0, \ j\not=q,q+1\nek n-p$.
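For orientation, we note that the case of a positive line bundle recovers the Kodaira vanishing theorem: if $F^\L$ has $n$ positive and no negative eigenvalues at any point of $M$, we may take $q=0$ and $p=n$ above, so that
\[
H^j(M,\O(\W\otimes\L^k)) \ = \ 0 \qquad \text{for} \quad k\gg0, \ j\not=0.
\]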
In particular, if $F^\L$ is non-degenerate at all points of $M$, then the number $q$ of negative eigenvalues of $F^\L$ is independent of $x\in M$, and the Andreotti-Grauert theorem implies that the cohomology $H^j(M,\O(\W\otimes\L^k))$ vanishes for $k\gg0, j\not=q$. If $M, \W$ and $\L$ are endowed with metrics, then the cohomology $H^*(M,\O(\W\otimes\L^k))$ is isomorphic to the kernel of the Dolbeault-Dirac operator \[ \sqrt2(\bar{\d}+\bar{\d}^*):\calA}\renewcommand{\O}{\calO^{0,*}(M,\W\otimes\L^k)\to \calA}\renewcommand{\O}{\calO^{0,*}(M,\W\otimes\L^k). \] Here $\calA}\renewcommand{\O}{\calO^{0,*}(M,\W\otimes\L^k)$ denotes the space of $(0,*)$-differential forms on $M$ with values in $\W\otimes\L^k$. The Andreotti-Grauert theorem implies, in particular, that the restriction of the kernel of the Dolbeault-Dirac operator to the space $\calA}\renewcommand{\O}{\calO^{0,\text{odd}}(M,\W\otimes\L^k)$ (resp. $\calA}\renewcommand{\O}{\calO^{0,\text{even}}(M,\W\otimes\L^k)$) vanishes provided the curvature $F^\L$ is non-degenerate and has an even (resp. an odd) number of negative eigenvalues at any point of $M$. The last statement may be extended to the case when the manifold $M$ is not complex. The first step in this direction was taken by Borthwick and Uribe \cite{BoUr96}, who showed that, if $M$ is an almost K\"ahler manifold and $\L$ is a positive line bundle over $M$, then the restriction of the kernel of the $\spin^c$-Dirac operator $D_k:\calA}\renewcommand{\O}{\calO^{0,*}(M,\L^k)\to\calA}\renewcommand{\O}{\calO^{0,*}(M,\L^k)$ to the space $\calA}\renewcommand{\O}{\calO^{0,\text{odd}}(M,\L^k)$ vanishes for $k\gg0$. Moreover, they showed that, for any $\alp\in \Ker D_k$, ``most of the norm'' of $\alp$ is concentrated in $\calA}\renewcommand{\O}{\calO^{0,0}(M,\L^k)$. This result generalizes the Kodaira vanishing theorem to the case of almost K\"ahler manifolds.
One of the results of the present paper is the extension of the Borthwick-Uribe theorem to the case when the curvature $F^\L$ of $\L$ is not positive. In other words, we extend the Andreotti-Grauert theorem to almost complex manifolds. More generally, assume that $M$ is a compact oriented even-dimensional Riemannian manifold and let $C(M)$ denote the Clifford bundle of $M$, i.e., a vector bundle whose fiber at any point is isomorphic to the Clifford algebra of the cotangent space. Let $\calE}\newcommand{\W}{\calW$ be a self-adjoint Clifford module over $M$, i.e., a Hermitian vector bundle over $M$ endowed with a fiberwise action of $C(M)$. Then (cf. \refss{chir}) $\calE}\newcommand{\W}{\calW$ possesses a natural grading $\calE}\newcommand{\W}{\calW=\calE}\newcommand{\W}{\calW^+\oplus\calE}\newcommand{\W}{\calW^-$. Let $\L$ be a Hermitian line bundle endowed with a Hermitian connection $\nabla^\L$ and let $\calE}\newcommand{\W}{\calW$ be a Hermitian vector bundle over $M$ endowed with an Hermitian connection $\nabla^\calE}\newcommand{\W}{\calW$. These data define (cf. \refs{dirac}) a self-adjoint Dirac operator $D_k:\Gam(\calE}\newcommand{\W}{\calW\otimes\L^k)\to \Gam(\calE}\newcommand{\W}{\calW\otimes\L^k)$. The curvature $F^\L$ of $\nabla^\L$ is an imaginary valued 2-form on $M$. If it is non-degenerate at all points of $M$, then $iF^\L$ is a symplectic form on $M$, and, hence, defines an orientation of $M$. Our main result (\reft{main}) states that {\em the restriction of the kernel of $D_k$ to $\Gam(\calE}\newcommand{\W}{\calW^-\otimes\L^k)$ (resp. to $\Gam(\calE}\newcommand{\W}{\calW^+\otimes\L^k))$ vanishes for large $k$ if this orientation coincides with (resp. is opposite to) the given orientation of $M$.} Our result may be considerably refined when $M$ is an almost complex $2n$-dimensional manifold and the curvature $F^\L$ is a $(1,1)$-form on $M$. In this case, $F^\L$ may be considered as a sesquilinear form on the holomorphic tangent bundle to $M$. 
Let $\W$ be a Hermitian vector bundle over $M$ endowed with a Hermitian connection. Then (cf. \refss{almcomp}) there is a canonically defined Dirac operator $D_k:\calA}\renewcommand{\O}{\calO^{0,*}(M,\W\otimes\L^k)\to \calA}\renewcommand{\O}{\calO^{0,*}(M,\W\otimes\L^k)$. We prove (\reft{UB+}) that {\em if $F^\L$ has at least $q$ negative and at least $p$ positive eigenvalues at every point of $M$, then, for large $k$, ``most of the norm'' of any element $\alp\in \Ker D_k$ is concentrated in $\bigoplus_{j=q}^{n-p}\calA}\renewcommand{\O}{\calO^{0,j}(M,\W\otimes\L^k)$.} In particular, {\em if the sesquilinear form $F^\L$ is non-degenerate and has exactly $q$ negative eigenvalues at any point of $M$, then ``most of the norm'' of $\alp\in \Ker D_k$ is concentrated in $\calA}\renewcommand{\O}{\calO^{0,q}(M,\W\otimes\L^k)$, and, depending on the parity of $q$, the restriction of the kernel of $D_k$ either to $\calA}\renewcommand{\O}{\calO^{0,\text{odd}}(M,\W\otimes\L^k)$ or to $\calA}\renewcommand{\O}{\calO^{0,\text{even}}(M,\W\otimes\L^k)$ vanishes.} These results generalize both the Andreotti-Grauert and the Borthwick-Uribe vanishing theorems. In particular, we obtain a new proof of the Andreotti-Grauert theorem. As another application of \reft{main}, we study the index of a signature operator twisted by a line bundle having a non-degenerate curvature. We prove (\refc{derham}) that, {\em if the orientation defined by the curvature of $\L$ coincides with (resp. is opposite to) the given orientation of $M$, then this index is non-negative (resp. non-positive).} The proof of our main vanishing theorem (\reft{main}) is based on an estimate of the square $D_k^2$ of the twisted Dirac operator for large values of $k$. This estimate is obtained in two steps. First we use the Lichnerowicz formula to compare $D_k^2$ with the metric Laplacian $\Del_k=\left(\nabla^{\calE}\newcommand{\W}{\calW\otimes\L^k}\right)^*\nabla^{\calE}\newcommand{\W}{\calW\otimes\L^k}$.
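Schematically (the precise form is recalled in \refs{lichn}), the Lichnerowicz formula of \cite{BeGeVe} reads
\[
D_k^2 \ = \ \Del_k \ + \ c\big(F^{(\calE\otimes\L^k)/\calS}\big) \ + \ \frac{r_M}{4},
\]
where $r_M$ is the scalar curvature of $M$ and $F^{(\calE\otimes\L^k)/\calS}$ is the twisting curvature of $\calE\otimes\L^k$; the latter differs from the twisting curvature of $\calE$ by the term $k\,F^\L$, so that the difference $D_k^2-\Del_k$ grows only linearly in $k$.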
Then we use the method of \cite{GuiUr88,BoUr96} to estimate the large $k$ behavior of the metric Laplacian. \subsection*{Contents} The paper is organized as follows: In \refs{dirac}, we briefly recall some basic facts about Clifford modules and Dirac operators. We also present some examples of Clifford modules which are used in the rest of the paper. In \refs{main}, we formulate the main results of the paper and discuss their applications. The rest of the paper is devoted to the proof of these results. In \refs{prmain}, we present the proof of \reft{main} (the vanishing theorem for the half-kernel of a Dirac operator). The proof is based on two statements (Propositions~\ref{P:D-D} and \ref{P:lapl}) which are proven in the later sections. In \refs{prUB}, we prove an estimate on the Dirac operator on an almost complex manifold (\refp{estDk}) and use it to prove \reft{UB+} (our analogue of the Andreotti-Grauert vanishing theorem for almost complex manifolds). The proof is based on Propositions~\ref{P:lapl} and \ref{P:D-Dac} which are proved in later sections. In \refs{prAG}, we prove the Andreotti-Grauert theorem (\reft{AG}). In \refs{lichn}, we use the Lichnerowicz formula to prove Propositions~\ref{P:D-D}, \ref{P:D-Dac} and \ref{P:gr}. These results establish the connection between the Dirac operator and the metric Laplacian. They are used in the proofs of Theorems~\ref{T:main}, \ref{T:UB+} and \ref{T:AG}. {}Finally, in \refs{lapl}, we apply the method of \cite{GuiUr88,BoUr96} to prove \refp{lapl} (the estimate on the metric Laplacian). \subsection*{Acknowledgments} This paper grew out of our joint effort with Yael Karshon to understand the possible applications and generalizations of the results of \cite{BoUr96}. I would like to thank Yael Karshon for drawing my attention to this paper and for her help and support during the work on this project. Many of the results of the present paper were obtained in our discussions with Yael. 
I would like to thank Joseph Bernstein and Gideon Maschler for valuable discussions. \sec{dirac}{Clifford modules and Dirac operators} In the first part of this section we briefly recall the definitions and some basic facts about Clifford modules and Dirac operators. We refer the reader to \cite{BeGeVe,Duis96,LawMic89} for details. In our exposition we adopt the notations of \cite{BeGeVe}. In the second part of the section we present some examples of Clifford modules, which will be used in the subsequent sections. \ssec{clbundle}{The Clifford bundle} Suppose $(M,g)$ is an oriented Riemannian manifold of dimension $2n$. For any $x\in M$, we denote by $C(T^*_xM)=C^+(T^*_xM)\oplus C^-(T^*_xM)$ the Clifford algebra of the cotangent space $T_x^*M$, cf. \cite[\S3.3]{BeGeVe}. The {\em Clifford bundle \/} $C(M)$ of $M$ (cf. \cite[\S3.3]{BeGeVe}) is the $\ZZ_2$-graded bundle over $M$, whose fiber at $x\in M$ is $C(T^*_xM)$. The Riemannian metric $g$ induces the Levi-Civita connection $\nabla$ on $C(M)$ which is compatible with the multiplication and preserves the $\ZZ_2$-grading on $C(M)$. \ssec{clmodule}{Clifford modules} A {\em Clifford module \/} on $M$ is a complex vector bundle $\calE}\newcommand{\W}{\calW$ on $M$ endowed with an action of the bundle $C(M)$. We write this action as \[ (a,s) \ \mapsto \ c(a)s, \quad \mbox{where} \quad a\in {\Gam(M,C(M))}, \ s\in {\Gam(M,\E)}. \] A Clifford module $\calE}\newcommand{\W}{\calW$ is called {\em self-adjoint \/} if it is endowed with a Hermitian metric such that the operator $c(v):\calE}\newcommand{\W}{\calW_x\to\calE}\newcommand{\W}{\calW_x$ is skew-adjoint, for any $x\in M$ and any $v\in T_x^*M$. A connection $\nabla^\calE}\newcommand{\W}{\calW$ on a Clifford module $\calE}\newcommand{\W}{\calW$ is called a {\em Clifford connection \/} if it is {\em compatible with the Clifford action}, i.e., if for any $a\in {\Gam(M,C(M))}$ and $X\in{\Gam}(M,TM)$, \[ [\nabla^\calE}\newcommand{\W}{\calW_X,c(a)] \ = \ c(\nabla_X a). 
\] In this formula, $\nabla_X$ is the Levi-Civita covariant derivative on $C(M)$, and $[\nabla^\calE}\newcommand{\W}{\calW_X,c(a)]$ denotes the commutator of the operators $\nabla_X^\calE}\newcommand{\W}{\calW$ and $c(a)$. Suppose $\calE}\newcommand{\W}{\calW$ is a Clifford module and $\calW$ is a vector bundle over $M$. The {\em twisted Clifford module obtained from $\calE}\newcommand{\W}{\calW$ by twisting with $\calW$ \/} is the bundle $\calE}\newcommand{\W}{\calW\otimes\calW$ with Clifford action $c(a)\otimes1$. Note that the twisted Clifford module $\calE}\newcommand{\W}{\calW\otimes\calW$ is self-adjoint if and only if so is $\calE}\newcommand{\W}{\calW$. Let $\nabla^\calW$ be a connection on $\calW$ and let $\nabla^\calE}\newcommand{\W}{\calW$ be a Clifford connection on $\calE}\newcommand{\W}{\calW$. Then the {\em product connection} \begin{equation}\label{E:nEW} \nabla^{\calE}\newcommand{\W}{\calW\otimes\calW} \ = \ \nabla^\calE}\newcommand{\W}{\calW\otimes 1 \ + \ 1\otimes \nabla^\calW \end{equation} is a Clifford connection on $\calE}\newcommand{\W}{\calW\otimes\calW$. \ssec{chir}{The chirality operator. The natural grading} Let $e_1\nek e_{2n}$ be an oriented orthonormal basis of $T_x^*M$ and consider the element \begin{equation}\label{E:Gam} \Gam \ = \ i^n\, e_1\cdots e_{2n} \ \in \ C(T_x^*M)\otimes\CC. \end{equation} This element is independent of the choice of the basis, anti-commutes with any $v\in T_x^*M\subset C(T_x^*M)$, and satisfies $\Gam^2=1$, cf. \cite[\S3.2]{BeGeVe}. This element $\Gam$ is called the {\em chirality operator}. We also denote by $\Gam$ the section of $C(M)$ whose restriction to each fiber is equal to the chirality operator. Let $\calE}\newcommand{\W}{\calW$ be a Clifford module, i.e. (cf. \refss{clmodule}), a vector bundle over $M$ endowed with a fiberwise action of $C(M)$. Set \begin{equation}\label{E:grad} \calE}\newcommand{\W}{\calW^\pm \ = \ \{v\in \calE}\newcommand{\W}{\calW: \ \Gam\, v=\pm v\}.
\end{equation} Then $\calE}\newcommand{\W}{\calW=\calE}\newcommand{\W}{\calW^+\oplus\calE}\newcommand{\W}{\calW^-$ is a {\em graded module} over $C(M)$ in the sense that $C^+(M)\cdot\calE}\newcommand{\W}{\calW^\pm\subset \calE}\newcommand{\W}{\calW^\pm$ and $C^-(M)\cdot\calE}\newcommand{\W}{\calW^\pm\subset \calE}\newcommand{\W}{\calW^\mp$. We refer to the grading \refe{grad} as the {\em natural grading \/} on $\calE}\newcommand{\W}{\calW$. Note that this grading is preserved by any Clifford connection on $\calE}\newcommand{\W}{\calW$. Also, if $\calE}\newcommand{\W}{\calW$ is a self-adjoint Clifford module (cf. \refss{clmodule}), then the chirality operator $\Gam:\calE}\newcommand{\W}{\calW\to\calE}\newcommand{\W}{\calW$ is self-adjoint. Hence, the subbundles $\calE}\newcommand{\W}{\calW^\pm$ are orthogonal with respect to the Hermitian metric on $\calE}\newcommand{\W}{\calW$. {\em In this paper we endow all our Clifford modules with the natural grading.} \ssec{dirac}{Dirac operators} The {\em Dirac operator \/} associated to a Clifford connection $\nabla^\calE}\newcommand{\W}{\calW$ is defined by the following composition \begin{equation}\label{E:dir1} \begin{CD} {\Gam(M,\E)} @>\nabla^\calE}\newcommand{\W}{\calW>> {\Gam}(M,T^*M\otimes \calE}\newcommand{\W}{\calW) @>c>> {\Gam(M,\E)}. \end{CD} \end{equation} In local coordinates, this operator may be written as $D=\sum\, c(dx^i)\,\nabla^\calE}\newcommand{\W}{\calW_{\d_i}$. Note that $D$ sends even sections to odd sections and vice versa: $D:\, \Gam(M,\calE}\newcommand{\W}{\calW^\pm)\to \Gam(M,\calE}\newcommand{\W}{\calW^\mp)$. Suppose that the Clifford module $\calE}\newcommand{\W}{\calW$ is endowed with a Hermitian structure and consider the $L_2$-scalar product on the space of sections ${\Gam(M,\E)}$ defined by the Riemannian metric on $M$ and the Hermitian structure on $\calE}\newcommand{\W}{\calW$. 
By \cite[Proposition~3.44]{BeGeVe}, {\em the Dirac operator associated to a Clifford connection $\nabla^\calE}\newcommand{\W}{\calW$ is formally self-adjoint with respect to this scalar product if and only if $\calE}\newcommand{\W}{\calW$ is a self-adjoint Clifford module and $\nabla^\calE}\newcommand{\W}{\calW$ is a Hermitian connection.} \ We finish this section with some examples of Clifford modules, which will be used later. \ssec{spin}{Spinor bundles} Assume that $M$ is a spin manifold and let $\calS= \calS^+\oplus \calS^-$ be a {\em spinor bundle \/} over $M$ (cf. \cite[\S 3.3]{BeGeVe}). It is a minimal Clifford module in the sense that any other Clifford module $\calE}\newcommand{\W}{\calW$ may be decomposed as a tensor product \begin{equation}\label{E:E=SW} \calE}\newcommand{\W}{\calW \ = \ \calS \ \otimes \calW, \end{equation} where $\calW= \Hom_{C(M)}(\calS,\calE}\newcommand{\W}{\calW)$ and the action of the Clifford bundle $C(M)$ is trivial on the second factor of \refe{E=SW}. In this case, the natural grading $\calE}\newcommand{\W}{\calW=\calE}\newcommand{\W}{\calW^+\oplus\calE}\newcommand{\W}{\calW^-$ is defined by $\calE}\newcommand{\W}{\calW^\pm=\calS^\pm\otimes\W$. The Riemannian metric on $M$ induces the Levi-Civita connection $\nabla^\calS$ on $\calS$, which is compatible with the Clifford action. Moreover, a connection $\nabla^\calE}\newcommand{\W}{\calW$ on the twisted Clifford module $\calE}\newcommand{\W}{\calW= \calS\otimes \calW$ is a Clifford connection if and only if \begin{equation}\label{E:nE=} \nabla^\calE}\newcommand{\W}{\calW \ = \ \nabla^\calS\otimes 1 \ + \ 1\otimes \nabla^\calW \end{equation} for some connection $\nabla^\calW$ on $\calW$. Note that locally the spinor bundle and, hence, the decompositions \refe{E=SW}, \refe{nE=} always exist. In particular, suppose that $\tilS$ is a Clifford module whose fiber dimension is equal to $\dim\calS=2^n$. 
Then locally $\tilS=\calS\otimes\calW$ for some locally defined complex line bundle $\calW$. In this case $\tilS$ is called a $\spin^c$ vector bundle over $M$ (\cite[Ch.~5]{Duis96}, \cite[Appendix~D]{LawMic89}). A Dirac operator on a $\spin^c$ vector bundle is called a {\em $\spin^c$ Dirac operator}. \ssec{difforms}{The exterior algebra} Consider the exterior algebra $\Lam{T^*M}=\bigoplus_i\Lam^iT^*M$ of the cotangent bundle $T^*M$. There is a canonical action of the Clifford bundle $C(M)$ on $\Lam{T^*M}$ such that \begin{equation}\label{E:cv} c(v)\, \alp \ = \ v\wedge\alp \ - \ \iot(v)\, \alp, \qquad v\in \Gam(M,T^*M), \ \alp\in\Gam(M,\Lam{T^*M}). \end{equation} Here $\iot(v)$ denotes the contraction with the vector $v^*\in T_xM$ dual to $v$. The chirality operator \refe{Gam} coincides in this case (cf. \cite[{\S}3.6]{BeGeVe}) with the Hodge $*$-operator. Hence, the usual grading $\Lam{T^*M}=\Lam^{\text{even}}{T^*M}\oplus\Lam^{\text{odd}}{T^*M}$ is not the natural grading in the sense of \refss{chir}. {\em We will always consider $\Lam{T^*M}$ with the natural grading}. The positive and negative elements of $\Gam(M,\Lam{T^*M})$, with respect to this grading, are called {\em self-dual \/} and {\em anti-self-dual \/} differential forms respectively. The action \refe{cv} is self-adjoint with respect to the metric on $\Lam{T^*M}$ defined by the Riemannian metric on $M$. The connection induced on $\Lam{T^*M}$ by the Levi-Civita connection on $T^*M$ is a Clifford connection. The Dirac operator associated with this connection is equal to $d+d^*$ and is called the {\em signature operator}, \cite[{\S}3.6]{BeGeVe}. If the dimension of $M$ is divisible by four, then its index is equal to the signature of the manifold $M$. \ssec{almcomp}{Almost complex manifolds} Assume that $M$ is an almost complex manifold with an almost complex structure $J:TM\to TM$. Then $J$ defines a structure of a complex vector bundle on the tangent bundle $TM$. 
Let $h^{TM}$ be a Hermitian metric on $TM\otimes\CC$. The real part $g^{TM}=\RE h^{TM}$ of $h^{TM}$ is a Riemannian metric on $M$. Note also that $J$ defines an orientation on $M$. Let $\Lam^q =\Lam^q(T^{0,1}M)^*$ denote the bundle of $(0,q)$-forms on $M$ and set \[ \Lam^+ \ = \ \bigoplus_{q \text{ even}} \Lam^q, \quad \Lam^- \ = \ \bigoplus_{q \text{ odd}} \Lam^q. \] Let $\lam^{1/2}$ be the square root of the complex line bundle $\lam=\det T^{1,0}{M}$ and let $\calS$ be the spinor bundle over $M$ associated to the Riemannian metric $g^{TM}$. Although $\lam^{1/2}$ and $\calS$ are defined only locally, unless $M$ is a spin manifold, it is well known (cf. \cite[Appendix~D]{LawMic89}) that the products $\calS^\pm\otimes\lam^{1/2}$ are globally defined and \[ \Lam^\pm \ = \ \calS^\pm\otimes\lam^{1/2}. \] It follows that $\Lam$ is a $\spin^c$ vector bundle over $M$ (cf. \refss{spin}). In particular, the grading $\Lam=\Lam^+\oplus\Lam^-$ is natural. The Clifford action of $C(M)$ on $\Lam$ may be described as follows: if $f\in\Gam(M,T^*M)$ decomposes as $f=f^{1,0}+f^{0,1}$ with $f^{1,0}\in\Gam(M,(T^{1,0} M)^*)$ and $f^{0,1}\in\Gam(M,(T^{0,1} M)^*)$, then the Clifford action of $f$ on $\alp\in\Gam(M,\Lam)$ equals \begin{equation}\label{E:claction} c(f)\alp \ = \ \sqrt{2}\, \left( f^{0,1}\wedge\alp \ - \ \iot(f^{1,0})\, \alp\right). \end{equation} Here $\iot(f^{1,0})$ denotes the interior multiplication by the vector field $(f^{1,0})^*\in T^{0,1} M$ dual to the 1-form $f^{1,0}$. This action is self-adjoint with respect to the Hermitian structure on $\Lam$ defined by the Riemannian metric $g^{TM}$ on $M$. The Levi-Civita connection $\nabla^{TM}$ of $g^{TM}$ induces a Hermitian connection on $\lam^{1/2}$ and on $\calS$. Let $\nabla^M$ be the product connection (cf. \refss{clmodule}), \[ \nabla^M \ = \ \nabla^\calS\otimes 1 \ + \ 1\otimes\nabla^{\lam^{1/2}}. \] Then $\nabla^M$ is a well-defined Hermitian Clifford connection on the $\spin^c$ bundle $\Lam$. 
Hence, it gives rise to a self-adjoint $\spin^c$ Dirac operator. More generally, assume that $\W$ is a Hermitian vector bundle over $M$ and let $\nabla^\W$ be a Hermitian connection on $\W$. Consider the twisted Clifford module $\calE=\Lam\otimes\W$. The product connection $\nabla^\calE=\nabla^{\Lam\otimes\W}$ determines a Dirac operator $D:\Gam(M,\calE)\to\Gam(M,\calE)$. \ssec{comp}{K\"ahler manifolds} If $(M,J,g^{TM})$ is a K\"ahler manifold, then (cf. \cite[Proposition~3.67]{BeGeVe}) the Dirac operator defined in \refe{dir1} coincides with the Dolbeault-Dirac operator \begin{equation}\label{E:dirka} D \ = \ \sqrt2\, \left(\bar{\d}+\bar{\d}^*\right). \end{equation} Here $\bar{\d}^*$ denotes the operator adjoint to $\bar{\d}$ with respect to the $L_2$-scalar product on $\calA^{0,*}(M,\W)$. Hence, the restriction of the kernel of $D$ to $\calA^{0,i}(M,\W)$ is isomorphic to the cohomology $H^i(M,\O(\W))$ of $M$ with coefficients in the sheaf of holomorphic sections of $\W$. \sec{main}{Vanishing theorems and their applications} In this section we state the main theorems of the paper. The section is organized as follows: In \refss{lbundle}, we formulate our main result -- the vanishing theorem for the half-kernel of a Dirac operator (\reft{main}). In \refss{sofproof}, we briefly indicate the idea of the proof of \reft{main}. In \refss{sign}, we apply this theorem to calculate the sign of the signature of a vector bundle twisted by a high power of a line bundle. In \refss{cvanish}, we refine \reft{main} for the case of a complex manifold. In particular, we recover the Andreotti-Grauert vanishing theorem for a line bundle with curvature of mixed sign, cf. \cite{AndGr62,DemPetSch93}. {}Finally, in \refss{acvanish}, we present an analogue of the Andreotti-Grauert theorem for almost complex manifolds.
This generalizes a result of Borthwick and Uribe \cite{BoUr96}. \ssec{lbundle}{Twisting by a line bundle. The vanishing theorem} Suppose $\calE$ is a self-adjoint Clifford module over $M$. Recall from \refss{chir} that $\calE=\calE^+\oplus\calE^-$ denotes the {\em natural \/} grading on $\calE$. Let $\L$ be a Hermitian line bundle over $M$, and let $\nabla^\L$ be a Hermitian connection on $\L$. The connection $\nabla^{\calE\otimes\L^k}$ (cf. \refe{nEW}) is a Hermitian Clifford connection on the twisted Clifford module $\calE\otimes\L^k$. Consider the self-adjoint Dirac operator \[ D_k:\, \Gam(M,\calE\otimes\L^k) \ \to \ \Gam(M,\calE\otimes\L^k) \] associated to this connection and let $D_k^\pm$ denote the restriction of $D_k$ to the spaces $\Gam(M,\calE^\pm\otimes\L^k)$. The curvature $F^\L=(\nabla^\L)^2$ of the connection $\nabla^\L$ is an imaginary-valued closed 2-form on $M$. If it is non-degenerate, then $iF^\L$ is a symplectic form on $M$ and, hence, defines an orientation of $M$. Our main result is the following \th{main} Let $\calE$ be a self-adjoint Clifford module over a compact oriented even-dimensional Riemannian manifold $M$. Let $\nabla^\calE, \L, \nabla^\L, D_k$ be as above. Assume that the curvature $F^\L=(\nabla^\L)^2$ of the connection $\nabla^\L$ is non-degenerate at all points of $M$. If the orientation defined by the symplectic form $iF^\L$ coincides with the original orientation of $M$, then \begin{equation}\label{E:main} \Ker D^-_k \ = \ 0 \qquad \mbox{for} \qquad k\gg 0. \end{equation} Otherwise, $\Ker D^+_k = 0$ for $k\gg 0$.
\eth The theorem is a generalization of a vanishing theorem of Borthwick and Uribe \cite{BoUr96}, who considered the case where $M$ is an almost K\"ahler manifold, $D$ is a $\spin^c$-Dirac operator and $\L$ is a positive line bundle over $M$. The theorem is proven in \refs{prmain}. Here we only explain the main ideas of the proof. \ssec{sofproof}{The scheme of the proof} Our proof of Theorem~\ref{T:main} follows the lines of \cite{BoUr96}. It is based on an estimate from below on the large $k$ behavior of the square $D_k^2$ of the Dirac operator. Using this estimate we show that, if the orientation defined by $iF^\L$ coincides with (resp. is opposite to) the given orientation of $M$, then, for large $k$, the restriction of $D_k^2$ to $\calE^-\otimes\L^k$ (resp. to $\calE^+\otimes\L^k$) is a strictly positive operator and, hence, has no kernel. This estimate for $D_k^2$ is obtained in two steps. First we use the Lichnerowicz formula (cf. \refss{lichn}) to compare $D_k^2$ with the {\em metric Laplacian \/} $\Del_k=(\nabla^{\calE\otimes\L^k})^*\nabla^{\calE\otimes\L^k}$. Then it remains to study the large $k$ behavior of the metric Laplacian $\Del_k$. This is done in \refs{lapl}. In fact, the estimate which we need is essentially obtained in \cite{BoUr96,GuiUr88}. Roughly speaking, it says that $\Del_k$ grows linearly in $k$. The proof of the estimate for $\Del_k$ also consists of two steps. {}First we consider the principal bundle $\calZ\to M$ associated to the vector bundle $\calE\otimes\L$, and construct a differential operator $\tilDel$ ({\em horizontal Laplacian}) on $\calZ$, such that the operator $\Del_k$ is ``equivalent'' to a restriction of $\tilDel$ to a certain subspace of the space of $L_2$-functions on the total space of $\calZ$.
Then we apply the {\em a priori \/} Melin estimates \cite{Melin71} (see also \cite[Theorem~22.3.3]{Horm3}) to the operator $\tilDel$. \rem{exest} It would be very interesting to obtain an effective estimate of the minimal value of $k$ which satisfies \refe{main}, at least in the simplest cases (say, when $\calE$ is a spinor bundle over a spin manifold $M$). Unfortunately, such an estimate cannot be obtained using our method. This is because the Melin inequalities \cite{Melin71}, \cite[Theorem~22.3.3]{Horm3} (see also \refss{melin}), used in our proof, contain a constant $C$, which cannot be estimated effectively. \end{remark} We will now discuss applications and refinements of \reft{main}. In particular, we will see that \reft{main} may be considered as a generalization of the vanishing theorems of Kodaira, Andreotti-Grauert \cite{AndGr62} and Borthwick-Uribe \cite{BoUr96}. \ssec{sign}{The signature operator} Recall from \refss{difforms} that, for any oriented even-dimensional Riemannian manifold $M$, the exterior algebra $\Lam{T^*M}$ of the cotangent bundle is a self-adjoint Clifford module. The connection induced on $\Lam{T^*M}$ by the Levi-Civita connection on $T^*M$ is a Hermitian Clifford connection and the Dirac operator associated to this connection is the signature operator $d+d^*$. Consider a twisted Clifford module $\calE=\Lam{T^*M}\otimes\W$, where $\W$ is a Hermitian vector bundle over $M$ endowed with a Hermitian connection $\nabla^\W$. Let $\L$ be a Hermitian line bundle over $M$ and let $\nabla^\L$ be a Hermitian connection on $\L$. The space $\Gam(M,\calE\otimes\L^k)$ of sections of the twisted Clifford module $\calE\otimes\L^k$ coincides with the space $\calA^*(M,\W\otimes\L^k)$ of differential forms on $M$ with values in $\W\otimes\L^k$.
The positive and negative elements of $\calA^*(M,\W\otimes\L^k)$ with respect to the natural grading are called the {\em self-dual \/} and the {\em anti-self-dual \/} differential forms respectively. Let $D_k:\calA^*(M,\W\otimes\L^k)\to\calA^*(M,\W\otimes\L^k)$ denote the Dirac operator corresponding to the tensor product connection $\nabla^{\W\otimes\L^k}$ on $\W\otimes\L^k$. Then \begin{equation}\label{E:derham} D_k \ = \ \nabla^{\W\otimes\L^k} \ + \ \left(\nabla^{\W\otimes\L^k}\right)^*, \end{equation} where $\left(\nabla^{\W\otimes\L^k}\right)^*$ denotes the adjoint of $\nabla^{\W\otimes\L^k}$ with respect to the $L_2$-scalar product on $\W\otimes\L^k$. The operator \refe{derham} is called the {\em signature operator \/} of the bundle $\W\otimes\L^k$. Let $D_k^+$ and $D_k^-$ denote the restrictions of $D_k$ to the spaces of self-dual and anti-self-dual differential forms respectively. As an immediate consequence of \reft{main}, we obtain the following \th{derham} Suppose $M$ is a compact oriented even-dimensional Riemannian manifold. Let $\W, \nabla^\W, \L, \nabla^\L, D_k^\pm$ be as above. Assume that the curvature $F^\L=(\nabla^\L)^2$ of the connection $\nabla^\L$ is non-degenerate at any point $x\in M$. If the orientation defined by $iF^\L$ coincides with the given orientation of $M$, then $\Ker D^-_k=0$, for $k\gg0$. Otherwise, $\Ker D^+_k=0$, for $k\gg0$. \eth The index \[ \ind D_k \ = \ \dim\Ker D_k^+ \ - \ \dim\Ker D_k^- \] of the Dirac operator $D_k$ is called the {\em signature \/} of the bundle $\W\otimes\L^k$ and is denoted by $\operatorname{sign}(\W\otimes\L^k)$. It depends only on the manifold $M$, its orientation and the bundle $\W\otimes\L^k$ (but not on the choice of the Riemannian metric on $M$ or of the Hermitian structures and connections on the bundles $\W,\L$). If the bundles $\W$ and $\L$ are trivial, then it coincides with the usual signature of the manifold $M$.
From \reft{derham}, we obtain the following \cor{derham} Let $\W$ and $\L$ be a vector bundle and a line bundle, respectively, over a compact oriented even-dimensional Riemannian manifold $M$. Suppose that, for some Hermitian metric on $\L$, there exists a Hermitian connection whose curvature $F^\L$ is non-degenerate at any point of $M$. If the orientation defined by the symplectic form $iF^\L$ coincides with the given orientation of $M$, then \[ \operatorname{sign}(\W\otimes\L^k) \ \ge \ 0 \qquad \mbox{for} \qquad k\gg 0. \] Otherwise, $\operatorname{sign}(\W\otimes\L^k)\le0$ for $k\gg0$. \end{corol} \ssec{cvanish}{Complex manifolds. The Andreotti-Grauert theorem} Suppose $M$ is a compact complex manifold, $\W$ is a holomorphic vector bundle over $M$ and $\L$ is a holomorphic line bundle over $M$. Fix a Hermitian metric $h^\L$ on $\L$ and let $\nabla^\L$ be the {\em Chern connection \/} on $\L$, i.e., the unique holomorphic connection which preserves the Hermitian metric. The curvature $F^\L$ of $\nabla^\L$ is a $(1,1)$-form which is called the {\em curvature form of the Hermitian metric $h^\L$}. The orientation condition of \reft{main} may be reformulated as follows. Let $(z^1\nek z^n)$ be complex coordinates in a neighborhood of a point $x\in M$. The curvature $F^\L$ may be written as \[ iF^\L \ = \ \frac{i}2\sum_{i,j} F_{ij}dz^i\wedge d\oz^j. \] Denote by $q$ the number of negative eigenvalues of the matrix $\{F_{ij}\}$. Clearly, the number $q$ is independent of the choice of the coordinates. We will refer to this number as the {\em number of negative eigenvalues of the curvature $F^\L$ at the point $x$}. Then the orientation defined by the symplectic form $iF^\L$ coincides with the complex orientation of $M$ if and only if $q$ is even. A small variation of the method used in the proof of \reft{main} allows us to obtain a more precise result which depends not only on the parity of $q$ but on $q$ itself.
In this way we obtain a new proof of the following vanishing theorem of Andreotti and Grauert \cite{AndGr62,DemPetSch93}. \begin{Thm}[{\textbf{Andreotti-Grauert}}]\label{T:AG} Let $M$ be a compact complex manifold and let $\L$ be a holomorphic line bundle over $M$. Assume that $\L$ carries a Hermitian metric whose curvature form $F^\L$ has at least $q$ negative and at least $p$ positive eigenvalues at any point $x\in M$. Then, for any holomorphic vector bundle $\W$ over $M$, the cohomology $H^j(M,\O(\W\otimes\L^k))$ with coefficients in the sheaf of holomorphic sections of $\W\otimes\L^k$ vanishes for $j\not=q, q+1\nek n-p$ and $k\gg0$. \end{Thm} The proof is given in \refss{prAG}. In contrast to \reft{main}, the curvature $F^\L$ in \reft{AG} need not be non-degenerate. If $F^\L$ is non-degenerate, then the number $q$ of negative eigenvalues of $F^\L$ does not depend on the point $x\in M$. Then we obtain the following \cor{AG} If, under the conditions of \reft{AG}, the curvature $F^\L$ is non-degenerate and has exactly $q$ negative eigenvalues at any point $x\in M$, then $H^j(M,\O(\W\otimes\L^k))$ vanishes for any $j\not=q$ and $k\gg0$. \end{corol} Note that, if $\L$ is a positive line bundle, \refc{AG} reduces to the classical Kodaira vanishing theorem (cf., for example, \cite[Theorem~3.72(2)]{BeGeVe}). \rem{2} a. \ It is interesting to compare \refc{AG} with \reft{main} for the case when $M$ is a K\"ahler manifold. In this case the Dirac operator $D_k$ is equal to the Dolbeault-Dirac operator \refe{dirka}. Hence (cf. \refss{comp}), \reft{main} implies that $H^j(M,\O(\W\otimes\L^k))$ vanishes when the parity of $j$ is not equal to the parity of $q$. \refc{AG} refines this result. b. \ If $M$ is not a K\"ahler manifold, then the Dirac operator $D_k$ defined by \refe{dir1} is not equal to the Dolbeault-Dirac operator, and the kernel of $D_k$ is not isomorphic to the cohomology $H^*(M,\O(\W\otimes\L^k))$.
However, we show in \refs{prAG} that the operators $D_k$ and $\sqrt2\left(\bar{\d}+\bar{\d}^*\right)$ have the same asymptotic behavior as $k\to\infty$. Then the vanishing of the kernel of $D_k$ implies the vanishing of the cohomology $H^j(M,\O(\W\otimes\L^k))$. c. \ In \reft{AG} the bundle $\W$ can be replaced by an arbitrary coherent sheaf $\calF$. This follows from \reft{AG} by a standard technique using a resolution of $\calF$ by locally free sheaves (see, for example, \cite[Ch.~5]{ShSom85} for similar arguments). \end{remark} \ssec{acvanish}{Andreotti-Grauert-type theorem for almost complex manifolds} In this section we refine \reft{main} assuming that $M$ is endowed with an almost complex structure $J$ such that the curvature $F^\L$ of $\L$ is a $(1,1)$-form on $M$ with respect to $J$. In other words, we assume that, for any $x\in M$ and any basis $(e^1\nek e^n)$ of the holomorphic cotangent space $(T^{1,0}{M})^*$, one has \[ iF^\L \ = \ \frac{i}2\sum_{i,j} F_{ij}e^i\wedge \oe^j. \] This section generalizes a result of Borthwick and Uribe \cite{BoUr96}. We denote by $q$ the number of negative eigenvalues of the matrix $\{F_{ij}\}$. As in \refss{cvanish}, the orientation of $M$ defined by the symplectic form $iF^\L$ depends only on the parity of $q$. It coincides with the orientation defined by $J$ if and only if $q$ is even. We will use the notation of \refss{almcomp}. In particular, $\Lam=\Lam(T^{0,1} M)^*$ denotes the bundle of $(0,*)$-forms on $M$ and $\W$ is a Hermitian vector bundle over $M$. Then $\calE=\Lam\otimes\W$ is a self-adjoint Clifford module endowed with a Hermitian Clifford connection $\nabla^\calE$. The space $\Gam(M,\calE\otimes\L^k)$ of sections of the twisted Clifford module $\calE\otimes\L^k$ coincides with the space $\calA^{0,*}(M,\W\otimes\L^k)$ of differential forms of type $(0,*)$ with values in $\W\otimes\L^k$.
Let \[ D_k:\, \calA^{0,*}(M,\W\otimes\L^k) \ \to \ \calA^{0,*}(M,\W\otimes\L^k) \] denote the Dirac operator corresponding to the tensor product connection on $\W\otimes\L^k$. {}For a form $\alp\in \calA^{0,*}(M,\W\otimes\L^k)$, we denote by $\|\alp\|$ its $L_2$-norm and by $\alp_i$ its component in $\calA^{0,i}(M,\W\otimes\L^k)$. \th{UB+} Assume that the matrix $\{F_{ij}\}$ has at least $q$ negative and at least $p$ positive eigenvalues at any point $x\in M$. Then there exists a sequence $\eps_1,\eps_2,\ldots$ converging to zero, such that, for any $k\gg 0$ and any $\alp\in\Ker D_k^2$, one has \[ \|\alp_j\| \ \le \ \eps_k\|\alp\|, \qquad \mbox{for} \quad j\not=q,q+1\nek n-p. \] In particular, if the form $F^\L$ is non-degenerate and $q$ is the number of negative eigenvalues of $\{F_{ij}\}$ (which is independent of $x\in M$), then there exists a sequence $\tileps_{1},\tileps_{2},\ldots$, converging to zero, such that $\alp\in\Ker D_k$ implies \[ \|\alp \ - \ \alp_q\| \ \le \ \tileps_{k}\|\alp_q\|. \] \eth \reft{UB+} is proven in \refss{prUB}. The main ingredient of the proof is the following estimate on $D_k$, which is also of independent interest: \prop{estDk} If the matrix $\{F_{ij}\}$ has at least $q$ negative and at least $p$ positive eigenvalues at any point $x\in M$, then there exists a constant $C>0$, such that \[ \|D_k\, \alp\| \ \ge \ Ck^{1/2}\, \|\alp\|, \] for any $k\gg0, \ j\not=q,q+1\nek n-p$ and $\alp\in \calA^{0,j}(M,\W\otimes\L^k)$. \end{propos} The proof is given in \refss{prestDk}. \rem{UB} a. \ For the case when $\L$ is a positive line bundle, the Riemannian metric on $M$ is almost K\"ahler and $\W$ is a trivial line bundle, \reft{UB+} was established by Borthwick and Uribe \cite[Theorem~2.3]{BoUr96}. b. \ \reft{UB+} implies that, if $F^\L$ is non-degenerate, then $\Ker D_k$ is dominated by the component of degree $q$.
If $\alp\in \Gam(M,\calE^-)$ (resp. $\alp\in\Gam(M,\calE^+)$) and $q$ is even (resp. odd) then $\alp_q=0$. So, we obtain the vanishing result of \reft{main} for the case when $M$ is almost complex and $F^\L$ is a $(1,1)$-form. c. \ \reft{UB+} is an analogue of \reft{AG}. Of course, the cohomology $H^j(M,\O(\W\otimes\L^k))$ is not defined if $J$ is not integrable. Moreover, the square $D^2_k$ of the Dirac operator does not preserve the $\ZZ$-grading on $\calA^{0,*}(M,\W\otimes\L^k)$. Hence, one cannot hope that the kernel of $D_k$ belongs to $\oplus_{j=q}^{n-p}\calA^{0,j}(M,\W\otimes\L^k)$. However, \reft{UB+} shows that, for any $\alp\in\Ker D_k$, ``most of the norm'' of $\alp$ is concentrated in $\oplus_{j=q}^{n-p}\calA^{0,j}(M,\W\otimes\L^k)$. \end{remark} \sec{prmain}{Proof of the vanishing theorem for the half-kernel of a Dirac operator} In this section we present a proof of \reft{main} based on Propositions~\ref{P:D-D} and \ref{P:lapl}, which will be proved in the following sections. The idea of the proof is to study the large $k$ behavior of the square $D_k^2$ of the Dirac operator. \ssec{tJ}{The operator $\tilJ$} We need some additional definitions. Recall that $F^\L$ denotes the curvature of the connection $\nabla^\L$. In this subsection we do not assume that $F^\L$ is non-degenerate. For $x\in M$, define the skew-symmetric linear map $\tilJ_x:T_xM\to T_xM$ by the formula \[ iF^\L(v,w) \ = \ g^{TM}(v,\tilJ_xw), \qquad v,w\in T_xM. \] The eigenvalues of $\tilJ_x$ are purely imaginary. Note that, in general, $\tilJ$ is not an almost complex structure on $M$. Define \begin{equation}\label{E:tau} \tau(x) \ = \ \Tr^+ \tilJ_x \ := \ \mu_1+\cdots+\mu_l, \qquad m(x) \ = \ \min_{j}\, \mu_j(x), \end{equation} where $i\mu_j, \ j=1\nek l$ are the eigenvalues of $\tilJ_x$ for which $\mu_j>0$.
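{}For example, suppose that $g^{TM}$ is a K\"ahler metric and that $iF^\L$ equals the K\"ahler form of $g^{TM}$ (the model case of a positive line bundle; we assume this normalization only for illustration). Then $\tilJ_x$ coincides, up to sign, with the complex structure $J$, so all eigenvalues of $\tilJ_x$ are $\pm i$. Hence, in the notation of \refe{tau}, $l=n$ (where $\dim M=2n$) and
\[
	\tau(x) \ \equiv \ n, \qquad m(x) \ \equiv \ 1.
\]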
Note that $m(x)=0$ if and only if the curvature $F^\L$ vanishes at the point $x\in M$. \ssec{lapl}{Estimate on $D_k^2$} Our estimate on the square $D_k^2$ of the Dirac operator is obtained in two steps: first we compare it to the {\em metric Laplacian} \[ \Del_k \ := \ (\nabla^{\calE\otimes\L^k})^*\, \nabla^{\calE\otimes\L^k}, \] and then we estimate the large $k$ behavior of $\Del_k$. These two steps are the subject of the following two propositions. \prop{D-D} Suppose that the differential form $F^\L$ is non-degenerate. If the orientation defined on $M$ by the symplectic form $iF^\L$ coincides with (resp. is opposite to) the given orientation of $M$, then there exists a constant $C$ such that, for any $s\in\Gam(M,\calE^-\otimes\L^k)$ (resp. for any $s\in \Gam(M,\calE^+\otimes\L^k)$), one has the estimate \[ \big\langle\, (D^2_k - \Del_k)\, s,\, s\, \big\rangle \ \ge \ -k\,\big\langle\, (\tau(x)-2m(x))\, s,\, s\, \big\rangle - C\, \|s\|^2. \] Here $\<\cdot,\cdot\>$ denotes the $L_2$-scalar product on the space of sections and $\|\cdot\|$ is the norm corresponding to this scalar product. \end{propos} The proposition is proven in \refss{lichn} using the Lichnerowicz formula \refe{lichn}. In the next proposition we do not assume that $F^\L$ is non-degenerate. \prop{lapl} Suppose that $F^\L$ does not vanish at any point $x\in M$. For any $\eps>0$, there exists a constant $C_\eps$ such that, for any $k\in\ZZ$ and any section $s$ of the bundle $\calE\otimes\L^k$, \eq{lapl} \<\Del_k\, s,s\> \ \ge \ k\, \<(\tau(x)-\eps) s,s\> \ - \ C_\eps\, \|s\|^2. \end{equation} \end{propos} \refp{lapl} is essentially proven in \cite[Theorem~2.1]{BoUr96}. The only difference is that we do not assume that the curvature $F^\L$ has a constant rank.
This forces us to use the original Melin inequality \cite{Melin71} (see also \cite[Theorem~22.3.3]{Horm3}) and not the H\"ormander refinement of this inequality \cite[Theorem~22.3.2]{Horm3}. That is the reason why $\eps\not=0$ appears in \refe{lapl}. Note (cf. \cite{BoUr96}) that, if $F^\L$ has constant rank, then \refp{lapl} is valid for $\eps=0$. \refp{lapl} is proven in \refs{lapl}. \ssec{prmain}{Proof of \reft{main}} Assume that the orientation defined by $iF^\L$ coincides with the given orientation of $M$ and $s\in\Gam(M,\calE^-\otimes\L^k)$, or that the orientation defined by $iF^\L$ is opposite to the given orientation of $M$ and $s\in\Gam(M,\calE^+\otimes\L^k)$. By \refp{D-D}, \eq{D>D} \<D_k^2\, s,s\> \ \ge \ \< \Del_k\, s,s\> \ - \ k\,\big\langle\, (\tau(x)-2m(x))\, s,\, s\big\rangle \ - \ C\, \|s\|^2. \end{equation} Choose \[ 0 \ < \ \eps \ < \ 2\, \min_{x\in M} m(x) \] and set \[ C'=2\min_{x\in M} m(x)-\eps>0. \] Since the metric Laplacian $\Del_k$ is a non-negative operator, it follows from \refe{lapl} and \refe{D>D} that \[ \<D_k^2\, s,\, s\> \ \ge \ kC'\|s\|^2 \ - \ (C+C_\eps)\, \|s\|^2. \] Thus, for $k> (C+C_\eps)/C'$, we have $\<D_k^2\, s,s\>>0$. Hence, $D_k s\not=0$. \hfill$\square$ \sec{prUB}{Proof of the vanishing theorem for almost complex manifolds} In this section we prove \reft{UB+} and \refp{estDk}. The proof is very similar to the proof of \reft{main} (cf. \refs{prmain}). It is based on \refp{lapl} and the following refinement of \refp{D-D}: \prop{D-Dac} Assume that the matrix $\{F_{ij}\}$ (cf. \refss{acvanish}) has at least $q$ negative eigenvalues at any point $x\in M$. For any $x\in M$, we denote by $m_q(x)>0$ the minimal positive number such that at least $q$ of the eigenvalues of $\{F_{ij}\}$ do not exceed $-m_q(x)$.
Then there exists a constant $C$ such that \[ \big\langle\, (D^2_k - \Del_k)\, \alp,\, \alp\big\rangle \ \ge \ -k\, \big\langle\, (\tau(x)-2m_q(x))\, \alp,\, \alp\, \big\rangle \ - \ C\, \|\alp\|^2 \] for any $j=0\nek q-1$ and any $\alp\in \calA^{0,j}(M,\W\otimes\L^k)$. \end{propos} The proposition is proven in \refss{prD-Dac}. \ssec{prestDk}{Proof of \refp{estDk}} Choose $0<\eps<2\min_{x\in M}\, m_q(x)$ and set \[ C' \ = \ 2\min_{x\in M}\, m_q(x)-\eps. \] {}Fix $j=0\nek q-1$ and let $\alp\in \calA^{0,j}(M,\W\otimes\L^k)$. Since the metric Laplacian $\Del_k$ is a non-negative operator, it follows from Propositions~\ref{P:lapl} and \ref{P:D-Dac} that \[ \< D_k^2\, \alp,\alp\> \ \ge \ kC'\, \|\alp\|^2 \ - \ (C+C_\eps)\, \|\alp\|^2. \] Hence, for any $k>2(C+C_\eps)/C'$, we have \[ \|D_k\, \alp\|^2 \ = \ \<D_k^2\, \alp,\, \alp\, \> \ \ge \ \frac{k C'}2\, \|\alp\|^2. \] This proves \refp{estDk} for $j=0\nek q-1$. The statement for $j=n-p+1\nek n$ may be proven by a verbatim repetition of the above arguments, using a natural analogue of \refp{D-Dac}. (Alternatively, the statement for $j=n-p+1\nek n$ may be obtained as a formal consequence of the statement for $j=0\nek q-1$ by considering $M$ with the opposite almost complex structure.) \hfill$\square$ \ssec{gr}{} If the manifold $M$ is not K\"ahler, then the operator $D_k^2$ does not preserve the $\ZZ$-grading on $\calA^{0,*}(M,\W\otimes\L^k)$. However, the next proposition shows that the {\em mixed degree operator} $\alp_i\mapsto(D_k^2\alp_i)_j$ is, in a certain sense, small.
\prop{gr} There exists a sequence $\del_{1}, \del_{2},\ldots$ such that $\lim_{k\to\infty}\del_{k}=0$ and \[ |\<D_k^2\, \alp, \bet\>| \ \le \del_{k}\, \<D_k^2\, \alp, \alp\> \ + \ \del_{k}\, \<D_k^2\, \bet, \bet\> \ + \ \del_{k}k\, \|\alp\|^{2} \ + \ \del_{k}k\, \|\bet\|^{2}, \] for any $i\not=j$ and any $\alp\in \calA^{0,i}(M,\W\otimes\L^k), \bet\in \calA^{0,j}(M,\W\otimes\L^k)$. \end{propos} The proof of the proposition, based on the Lichnerowicz formula, is given in \refss{prgr}. \ssec{prUB}{Proof of \reft{UB+}} Let $\alp\in \Ker D_k$ and fix $j\not=q,q+1\nek n-p$. Set $\bet=\alp-\alp_j$. Then \[ 0 \ = \ \|D_k\, \alp\|^2 \ = \ \|D_k\, \alp_j\|^2 \ + \ 2\RE\, \<\, D_k\, \alp_j,\, D_k\, \bet\, \> \ + \ \|D_k\, \bet\|^2. \] Hence, it follows from \refp{gr} that \begin{equation}\label{E:alpbet} 0 \ \ge \ (1-2\del_k)\, \|D_k\, \alp_j\|^2 \ + \ (1-2\del_k)\, \|D_k\, \bet\|^2 \ - \ 2\del_{k}k\, \|\alp_j\|^{2} \ - \ 2\del_{k}k\, \|\bet\|^{2}. \end{equation} If we assume now that $k$ is large enough, so that $1-2\del_k>0$, then we obtain from \refe{alpbet} and \refp{estDk} that \[ \big((1-2\del_k)C^2k -2\del_kk\big)\, \|\alp_j\|^{2} \ \le \ 2\del_{k}k\, \|\bet\|^{2} \ \le \ 2\del_{k}k\, \|\alp\|^{2}. \] Thus \[ \|\alp_j\|^{2} \ \le \ \frac{2\del_{k}}{(1-2\del_k)C^2 -2\del_k}\, \|\alp\|^{2}. \] Hence, \reft{UB+} holds with $\eps_k=\sqrt{\frac{2\del_{k}}{(1-2\del_k)C^2-2\del_k}}$. \hfill$\square$ \sec{prAG}{Proof of the Andreotti-Grauert theorem} In this section we use the results of \refss{acvanish} in order to give a new proof of the Andreotti-Grauert theorem (\reft{AG}). Note first that, if the manifold $M$ is K\"ahler, then the Andreotti-Grauert theorem follows directly from \reft{UB+}. Indeed, in this case the Dirac operator $D_k$ is equal to the Dolbeault-Dirac operator $\sqrt2(\bar{\d}+\bar{\d}^*)$.
Hence, the restriction of the kernel of $D_k$ to $\calA^{0,j}(M,\W\otimes\L^k)$ is isomorphic to the cohomology $H^j(M,\O(\W\otimes\L^k))$. In general, $D_k\not=\sqrt2(\bar{\d}+\bar{\d}^*)$. However, the following proposition shows that these two operators have the same ``large $k$ behavior''. Recall from \refss{acvanish} the notation \[ \calE \ = \ \Lam(T^{0,1}M)^*\otimes\W. \] \prop{D-dd} There exists a bundle map $A\in\End(\calE)\subset \End(\calE\otimes\L^k)$, independent of $k$, such that \begin{equation}\label{E:D-dd} \sqrt2\, \left(\bar{\d}+\bar{\d}^*\right) \ = \ D_k \ + \ A. \end{equation} \end{propos} \begin{proof} Choose a holomorphic section $e(x)$ of $\L$ over an open set $U\subset M$. It defines a section $e^k(x)$ of $\L^k$ over $U$ and, hence, a holomorphic trivialization \begin{equation}\label{E:triv} U\times\CC \ \overset{\sim}{\longrightarrow} \ \L^k, \qquad (x,\phi)\mapsto \phi\cdot e^k(x)\in \L^k \end{equation} of the bundle $\L^k$ over $U$. Similarly, the bundles $\W$ and $\W\otimes\L^k$ may be identified over $U$ by the formula \begin{equation}\label{E:WLk=W} w \ \mapsto \ w\otimes e^k. \end{equation} Let $h^\L$ and $h^\W$ denote the Hermitian fiberwise metrics on the bundles $\L$ and $\W$ respectively. Let $h^{\W\otimes\L^k}$ denote the Hermitian metric on $\W\otimes\L^k$ induced by the metrics $h^\L, h^\W$. Set \[ f(x)\ := \ |e(x)|^2, \qquad x\in U, \] where $|\cdot|$ denotes the norm defined by the metric $h^\L$. Under the isomorphism \refe{WLk=W} the metric $h^{\W\otimes\L^k}$ corresponds to the metric \begin{equation}\label{E:hk} h_k(\cdot,\cdot) \ = \ f^k\, h^\W(\cdot,\cdot) \end{equation} on $\W$. By \cite[p.~137]{BeGeVe}, the connection $\nabla^{\L^k}$ on $\L^k$ corresponds under the trivialization \refe{triv} to the operator \[ \Gam(U,\CC) \ \to \ \Gam(U,T^*U\otimes\CC); \qquad s\mapsto ds+kf^{-1}\d f\wedge s.
\] Similarly, the connection on $\calE\otimes\L^k= \Lam(T^{0,1} M)^*\otimes\W\otimes\L^k$ corresponds under the isomorphism \refe{WLk=W} to the connection \[ \nabla_k:\, \alp \ \mapsto \ \nabla^\calE\alp \ + \ kf^{-1}\d f\wedge\alp, \qquad \alp\in \Gam(U,\Lam(T^{0,1} U)^*\otimes\W|_U) \] on $\calE|_U$. It follows now from \refe{claction} and \refe{dir1} that the Dirac operator $D_k$ corresponds under \refe{WLk=W} to the operator \begin{equation}\label{E:tilDk} \tilD_k:\, \alp \ \mapsto \ D_0\alp \ - \ \sqrt2kf^{-1}\iot(\d f)\alp, \qquad \alp\in \calA^{0,*}(U,\W|_U). \end{equation} Here $\iot(\d f)$ denotes the contraction with the vector field $(\d f)^*\in T^{0,1}{M}$ dual to the 1-form $\d{f}$, and $D_0$ stands for the Dirac operator on the bundle $\calE=\calE\otimes\L^0$. Let $\bar{\d}_k^*:\calA^{0,*}(U,\W|_U) \to \calA^{0,*-1}(U,\W|_U)$ denote the adjoint of the operator $\bar{\d}$ with respect to the scalar product on $\calA^{0,*}(U,\W|_U)$ determined by the Hermitian metric $h_k$ on $\W$ and the Riemannian metric on $M$. Then, it follows from \refe{hk}, that \begin{equation}\label{E:pk} \bar{\d}_k^* \ = \ \bar{\d}_0^* \ - \ kf^{-1}\iot(\d f). \end{equation} By \refe{tilDk} and \refe{pk}, we obtain \[ \sqrt2\, \left(\bar{\d}+\bar{\d}^*_k\right) \ - \ \tilD_k \ = \ \sqrt2\, \left(\bar{\d}+\bar{\d}^*_0\right) \ - \ D_0. \] Set $A=\sqrt2(\bar{\d}+\bar{\d}^*_0)-D_0$. By \cite[Lemma~5.5]{Duis96}, $A$ is a zero order operator, i.e., $A\in\End(\calE)$ (note that our definition of the Clifford action on $\Lam(T^{0,1}{M})^*$ and, hence, of the Dirac operator differs from \cite{Duis96} by a factor of $\sqrt2$).
\end{proof} \ssec{prAG}{Proof of \reft{AG}} Let $A\in\End(\calE)$ be the operator defined in \refp{D-dd} and let \[ \|A\| \ = \ \sup_{\|\alp\|=1} \, \|A\alp\|, \qquad \alp\in \calA^{0,*}(M,\W\otimes\L^k) \] be the $L_2$-norm of the operator $A:\calA^{0,*}(M,\W\otimes\L^k)\to \calA^{0,*}(M,\W\otimes\L^k)$. By \refp{estDk}, there exists a constant $C>0$ such that \[ \|D_k\, \alp\| \ \ge \ Ck^{1/2}\, \|\alp\|, \] for any $k\gg0, \ j\not=q,q+1\nek n-p$ and $\alp\in \calA^{0,j}(M,\W\otimes\L^k)$. Then, if $k> \|A\|^2/C^2$, we have \[ \|\sqrt2(\bar{\d}+\bar{\d}^*)\alp\| \ = \ \|(D_k+A)\alp\| \ \ge \ \|D_k\alp\| \ - \ \|A\|\, \|\alp\| \ge \ \left(Ck^{1/2}-\|A\|\right)\, \|\alp\| \ > \ 0, \] for any $j\not=q,q+1\nek n-p$ and $0\not=\alp\in \calA^{0,j}(M,\W\otimes\L^k)$. Hence, the restriction of the kernel of the Dolbeault-Dirac operator to the space $\calA^{0,j}(M,\W\otimes\L^k)$ vanishes for $j\not=q,q+1\nek n-p$. \hfill$\square$ \sec{lichn}{The Lichnerowicz formula. Proof of Propositions~\ref{P:D-D}, \ref{P:D-Dac} and \ref{P:gr}} In this section we use the Lichnerowicz formula (cf. \refss{lichn}) to prove Propositions~\ref{P:D-D}, \ref{P:D-Dac} and \ref{P:gr}. Before formulating the Lichnerowicz formula, we need some more information about Clifford modules and Clifford connections (cf. \cite[Section~3.3]{BeGeVe}). \ssec{quant}{The symbol map and the quantization map} Recall from \refss{difforms} that the exterior algebra $\Lam{T^*M}$ has a natural structure of a self-adjoint Clifford module. The Clifford bundle $C(M)$ is isomorphic to $\Lam{T^*M}$ as a bundle of vector spaces. The isomorphism is given by the {\em symbol map} \[ \sig: \, C(M) \ \to \Lam T^*M, \qquad \sig: \, a \ \mapsto c(a)1.
\] The inverse of $\sig$ is called the {\em quantization map \/} and is denoted by $\bfc$. If $e^1\nek e^{2n}$ is an orthonormal basis of $T^*_xM$, then (cf. \cite[Proposition~3.5]{BeGeVe}) \[ \bfc(e^{i_1}\wedge\dots\wedge e^{i_k}) \ = \ c(e^{i_1})\cdots c(e^{i_k}). \] Note that $\sig$ {\em is not \/} a map of algebras, i.e., $\sig(ab)\not= \sig(a)\sig(b)$. Assume now that $\calE}\newcommand{\W}{\calW$ is a Clifford module. The composition of the quantization map with the Clifford action of $C(M)$ on $\calE}\newcommand{\W}{\calW$ defines a map $\bfc:\Lam{T^*M}\to \End(\calE}\newcommand{\W}{\calW)$. Though this map does not define the action of the exterior algebra on $\calE}\newcommand{\W}{\calW$ (i.e. $\bfc(ab)\not=\bfc(a)\bfc(b)$) it plays an important role in differential geometry. Let $\calA}\renewcommand{\O}{\calO(M)=\Gam(M,\Lam{T^*M})$ denote the space of smooth sections of $\Lam{T^*M}$, i.e., the space of smooth differential forms on $M$. The quantization map induces an isomorphism between $\calA}\renewcommand{\O}{\calO(M)$ and the space of sections of $C(M)$. More generally, for any bundle $\calE}\newcommand{\W}{\calW$ over $M$, there is an isomorphism \begin{equation}\label{E:AE=GCE} \calA}\renewcommand{\O}{\calO(M,\calE}\newcommand{\W}{\calW) \ \cong \ \Gam(M,C(M)\otimes \calE}\newcommand{\W}{\calW) \end{equation} between the space of differential forms on $M$ with values in $\calE}\newcommand{\W}{\calW$ and the space of smooth sections of the tensor product $C(M)\otimes \calE}\newcommand{\W}{\calW$. Combining this isomorphism with the Clifford action \[ c:\, \Gam(M,C(M)\otimes \calE}\newcommand{\W}{\calW) \ {\to} \ \Gam(M,\calE}\newcommand{\W}{\calW), \] we obtain a map $\bfc:\, \calA}\renewcommand{\O}{\calO(M,\calE}\newcommand{\W}{\calW) \to \calE}\newcommand{\W}{\calW$. Similarly, we have a map \begin{equation}\label{E:bfc} \bfc:\, \calA}\renewcommand{\O}{\calO(M,\End(\calE}\newcommand{\W}{\calW)) \ \to \ \End(\calE}\newcommand{\W}{\calW). 
\end{equation} In this paper we will be especially interested in the restriction of the above map to the space of 2-forms. In this case the following formula is useful \begin{equation}\label{E:bfc2} \bfc(F) \ = \ \sum_{i<j}\, F(e_i,e_j)\, c(e^i)\, c(e^j), \qquad F\in \calA}\renewcommand{\O}{\calO^2(M,\End(\calE}\newcommand{\W}{\calW)), \end{equation} where $(e_1\nek e_{2n})$ is an orthonormal frame of the tangent space to $M$, and $(e^1\nek e^{2n})$ is the dual frame of the cotangent space. \ssec{curv}{The curvature of a Clifford connection} Let $\nabla^\calE}\newcommand{\W}{\calW$ be a Clifford connection on a Clifford module $\calE}\newcommand{\W}{\calW$ and let \[ F^\calE}\newcommand{\W}{\calW \ = \ (\nabla^\calE}\newcommand{\W}{\calW)^2 \ \in \ \calA}\renewcommand{\O}{\calO^2(M,\End(\calE}\newcommand{\W}{\calW)) \] denote the curvature of $\nabla^\calE}\newcommand{\W}{\calW$. Let $\operatorname{End}_{C(M)}\,(\calE}\newcommand{\W}{\calW)$ denote the bundle of endomorphisms of $\calE}\newcommand{\W}{\calW$ commuting with the action of the Clifford bundle $C(M)$. Then the bundle $\End(\calE}\newcommand{\W}{\calW)$ of all endomorphisms of $\calE}\newcommand{\W}{\calW$ is naturally isomorphic to the tensor product \begin{equation}\label{E:End} \End(\calE}\newcommand{\W}{\calW) \ \cong \ C(M)\otimes \operatorname{End}_{C(M)}\,(\calE}\newcommand{\W}{\calW). \end{equation} By Proposition~3.43 of \cite{BeGeVe}, $F^\calE}\newcommand{\W}{\calW$ decomposes with respect to \refe{End} as \begin{equation}\label{E:curv} F^\calE}\newcommand{\W}{\calW \ = \ R^\calE}\newcommand{\W}{\calW \ + \ F^{\E/\calS}, \qquad\qquad R^\calE}\newcommand{\W}{\calW\in \calA}\renewcommand{\O}{\calO^2(M,C(M)), \ F^{\E/\calS}\in \calA}\renewcommand{\O}{\calO^2(M,\operatorname{End}_{C(M)}\,(\calE}\newcommand{\W}{\calW)). 
\end{equation} In this formula, $F^{\E/\calS}$ is an invariant of $\nabla^\calE}\newcommand{\W}{\calW$ called the {\em twisting curvature \/} of $\calE}\newcommand{\W}{\calW$, and $R^\calE}\newcommand{\W}{\calW$ is determined by the Riemannian curvature $R$ of $M$. If $(e_1\nek e_{2n})$ is an orthonormal frame of the tangent space $T_xM, \ x\in M$ and $(e^1\nek e^{2n})$ is the dual frame of the cotangent space $T^*M$, then \[ R^\calE}\newcommand{\W}{\calW(e_i,e_j) \ = \ \frac14\, \sum_{k,l}\, \langle R(e_i,e_j)e_k,e_l\rangle \, c(e^k)\, c(e^l). \] Assume that $\calS$ is a spinor bundle, $\calE}\newcommand{\W}{\calW=\calW\otimes\calS$ and the connection $\nabla^\calE}\newcommand{\W}{\calW$ is given by \refe{nE=}. Then $\calA}\renewcommand{\O}{\calO(M,\operatorname{End}_{C(M)}\,(\calE}\newcommand{\W}{\calW))\cong \calA}\renewcommand{\O}{\calO(M,\End(\calW))$. The twisting curvature $F^{\E/\calS}$ is equal to the curvature $F^\calW=(\nabla^\calW)^2$ via this isomorphism (cf. \cite[p.~121]{BeGeVe}). This explains why $F^{\E/\calS}$ is called the twisting curvature. Let $\calW$ be a vector bundle over $M$ with connection $\nabla^\calW$ and let $F^\calW= (\nabla^\calW)^2$ denote the curvature of this connection. The twisting curvature of the connection \refe{nEW} on the tensor product $\calE}\newcommand{\W}{\calW\otimes\calW$ is the sum \begin{equation}\label{E:WE/S} F^{(\calE}\newcommand{\W}{\calW\otimes\calW)/\calS} \ = \ F^\calW \ + \ F^{\calE}\newcommand{\W}{\calW/\calS}. \end{equation} \ssec{lichn}{The Lichnerowicz formula} Let $\calE}\newcommand{\W}{\calW$ be a Clifford module endowed with a Hermitian structure and let $D:{\Gam(M,\E)}\to {\Gam(M,\E)}$ be a self-adjoint Dirac operator associated to a Hermitian Clifford connection $\nabla^\calE}\newcommand{\W}{\calW$. Consider the metric Laplacian (cf. 
\refss{lapl}) \[ \Del^\calE \ = \ (\nabla^\calE)^*\, \nabla^\calE\, : \ {\Gam(M,\E)} \ \to {\Gam(M,\E)}, \] where $(\nabla^\calE)^*$ denotes the operator adjoint to $\nabla^\calE:{\Gam(M,\E)}\to \Gam(M,T^*M\otimes\calE)$ with respect to the $L_2$-scalar product. Clearly $\Del^\calE$ is a non-negative self-adjoint operator. The following {\em Lichnerowicz formula \/} (cf. \cite[Theorem~3.52]{BeGeVe}) plays a crucial role in our proof of vanishing theorems: \begin{equation}\label{E:lichn} D^2 \ = \ \Del^\calE \ + \ \bfc(F^{\E/\calS}) \ + \ \frac{r_M}4, \end{equation} where $r_M$ stands for the scalar curvature of $M$ and $F^{\E/\calS}$ is the twisting curvature of $\nabla^\calE$, cf. \refss{curv}. The operator $\bfc(F^{\E/\calS})$ is defined in \refe{bfc} (see also \refe{bfc2}). Let $\L$ be a Hermitian line bundle over $M$ endowed with a Hermitian connection $\nabla^\L$ and let $\nabla_k= \nabla^{\calE\otimes\L^k}$ denote the product connection (cf. \refe{nEW}) on the tensor product $\calE\otimes\L^k$. It is a Hermitian Clifford connection on $\calE\otimes\L^k$. We denote by $D_k$ and $\Del_k$ the Dirac operator and the metric Laplacian associated to this connection. By \refe{WE/S}, it follows from the Lichnerowicz formula \refe{lichn}, that \begin{equation}\label{E:lechnW} D_k^2 \ = \ \Del_k \ + \ k\, \bfc(F^\L) \ + \ A, \end{equation} where $F^\L=(\nabla^\L)^2$ is the curvature of $\nabla^\L$ and \begin{equation}\label{E:A} A \ := \ \bfc(F^{\E/\calS}) \ + \ \frac{r_M}4 \ \in \ \End(\calE) \ \subset \ \End(\calE\otimes\L^k) \end{equation} is independent of $\L$ and $k$. 
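As a simple consistency check of \refe{lechnW} (an illustrative aside, not used in the sequel), suppose that at a point $x\in M$ we have $\dim M=2$ and $iF^\L=r\,e^1\wedge e^2$ with $r$ real. Then $F^\L(e_1,e_2)=-ir$ and, by \refe{bfc2}, \[ \bfc(F^\L) \ = \ -ir\, c(e^1)\, c(e^2). \] Since $c(e^1)c(e^2)$ is skew-adjoint and squares to $-1$ by the Clifford relations, the operator $\bfc(F^\L)$ is self-adjoint with eigenvalues $\pm r$. Hence, on the corresponding eigenspaces, \refe{lechnW} reads $D_k^2=\Del_k\pm kr+A$: the lower bound on $D_k^2$ provided by the Lichnerowicz formula improves linearly in $k$ on one eigenspace and degrades linearly on the other. The general mechanism behind this dichotomy is analysed in the next subsection. 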
\ssec{cFL}{Calculation of $\bfc(F^\L)$} To compare $D_k^2$ with the Laplacian $\Del_k$ we now need to calculate the operator $\bfc(F^\L)\in \End(\calE}\newcommand{\W}{\calW)\subset \End(\calE}\newcommand{\W}{\calW\otimes\L^k)$. This may be reformulated as the following problem of linear algebra. Let $V$ be an oriented Euclidean vector space of real dimension $2n$ and let $V^*$ denote the dual vector space. We denote by $C(V)$ the Clifford algebra of $V^*$. Let $E$ be a module over $C(V)$. We will assume that $E$ is endowed with a Hermitian scalar product such that the operator $c(v):E\to E$ is skew-symmetric for any $v\in V^*$. In this case we say that $E$ is a {\em self-adjoint \/} Clifford module over $V$. The space $E$ possesses a {\em natural grading \/} $E=E^+\oplus E^-$, where $E^+$ and $E^-$ are the eigenspaces of the chirality operator with eigenvalues $+1$ and $-1$ respectively, cf. \refss{chir}. In our applications $V$ is the tangent space $T_xM$ to $M$ at a point $x\in M$ and $E$ is the fiber of $\calE}\newcommand{\W}{\calW$ over $x$. Let $F$ be an imaginary valued antisymmetric bilinear form on $V$. Then $F$ may be considered as an element of $V^*\wedge V^*$. We need to estimate the operator $\bfc(F)\in\End(E)$. Here $\bfc:\Lam V^*\to C(V)$ is the quantization map defined exactly as in \refss{quant} (cf. \cite[\S{3.1}]{BeGeVe}). Let us define the skew-symmetric linear map $\tilJ:V\to V$ by the formula \[ iF(v,w) \ = \ \<v,\tilJ w\>, \qquad v,w\in V. \] The eigenvalues of $\tilJ$ are purely imaginary. Let $\mu_1\ge\cdots\ge\mu_l>0$ be the positive numbers such that $\pm{i}\mu_1\nek\pm{i}\mu_l$ are all the non-zero eigenvalues of $\tilJ$. Set \[ \tau \ = \ \Tr^+ \tilJ\ := \ \mu_1+\cdots+\mu_l, \qquad m \ = \ \min_{j}\, \mu_j. \] By the Lichnerowicz formula \refe{lichn}, \refp{D-D} is equivalent to the following \prop{cFL} Suppose that the bilinear form $F$ is non-degenerate. Then it defines an orientation of $V$. If this orientation coincides with (resp. 
is opposite to) the given orientation of $V$, then the restriction of $\bfc(F)$ onto $E^-$ (resp. $E^+$) is greater than $-(\tau-2m)$, i.e., for any $\alp\in E^-$ (resp. $\alp\in E^+)$ \[ \<c(F)\alp,\alp\> \ \ge \ -(\tau-2m)\, \|\alp\|^2. \] \end{propos} We will prove the proposition in \refss{prcFL} after introducing some additional constructions. Since we need these constructions also for the proof of \refp{D-Dac}, we do not assume that $F$ is non-degenerate unless this is stated explicitly. \ssec{cs}{A choice of a complex structure on $V$} By the Darboux theorem (cf. \cite[Theorem~1.3.2]{Audin91}), one can choose an orthonormal basis $f^1\nek f^{2n}$ of $V^*$, which defines the positive orientation of $V$ (i.e., $f^1\wedge\dots\wedge f^{2n}$ is a positive volume form on $V$) and such that \begin{equation}\label{E:iF} iF_x^\L \ = \ \sum_{j=1}^l\, r_j\, f^j\wedge f^{j+n}, \end{equation} for some integer $l\le n$ and some non-zero real numbers $r_j$. We can and we will assume that $|r_1|\ge|r_2|\ge\cdots\ge|r_l|$. Let $f_1\nek f_{2n}$ denote the dual basis of $V$. \rem{cFL} If the vector space $V$ is endowed with a complex structure $J:V\to V$ compatible with the metric (i.e., $J^*=-J$) and such that $F$ is a $(1,1)$ form with respect to $J$, then the basis $f_1\nek f_{2n}$ can be chosen so that $f_{j+n}=Jf_j, \ j=1\nek n$. \end{remark} Let us define a complex structure $J:V\to V$ on $V$ by the condition $f_{i+n}=Jf_i, \ i=1\nek n$. Then, the complexification of $V$ splits into the sum of its holomorphic and anti-holomorphic parts \[ V\otimes\CC \ = \ V^{1,0}\oplus V^{0,1}, \] on which $J$ acts by multiplication by $i$ and $-i$ respectively. The space $V^{1,0}$ is spanned by the vectors $e_j=f_j-if_{j+n}$, and the space $V^{0,1}$ is spanned by the vectors $\oe_j=f_j+if_{j+n}$. Let $e^1\nek e^n$ and $\oe^1\nek \oe^n$ be the corresponding dual bases of $(V^{1,0})^*$ and $(V^{0,1})^*$ respectively. 
Then \refe{iF} may be rewritten as \[ iF_x^\L \ = \ \frac{i}2\, \sum_{j=1}^n\, r_j\, e^j\wedge \oe^j. \] We will need the following simple \lem{mur} Let $\mu_1\nek\mu_l$ and $r_1\nek r_l$ be as above. Then $\mu_i=|r_i|$, for any $i=1\nek l$. In particular, \[ \Tr^+\tilJ \ = \ |r_1| +\cdots+|r_l|. \] \end{lemma} \begin{proof} Clearly, the vectors $e_1\nek e_n; \oe_1\nek\oe_n$ form a basis of eigenvectors of $\tilJ$ and \begin{align} \tilJ\, e_j \ &= \ ir_j\, e_j, \quad \tilJ\, \oe_j \ = \ -ir_j\, \oe_j \qquad &\mbox{for} \quad j&=1\nek l, \notag\\ \tilJ\, e_j \ &= \ \tilJ\, \oe_j \ = \ 0 \qquad &\mbox{for} \quad j&=l+1\nek n.\notag \end{align} Hence, all the nonzero eigenvalues of $\tilJ$ are $\pm{i}|r_1|\nek\pm{i}|r_l|$. \end{proof} \ssec{spin1}{Spinors} Set \begin{equation}\label{E:spin1} S^+ \ = \ \bigoplus_{j \ \text{even}}\Lam^j(V^{0,1}), \quad S^- \ = \ \bigoplus_{j \ \text{odd}}\Lam^j(V^{0,1}). \end{equation} Define a graded action of the Clifford algebra $C(V)$ on the graded space $S=S^+\oplus S^-$ as follows (cf. \refss{almcomp}): \ if $v\in V$ decomposes as $v=v^{1,0}+v^{0,1}$ with $v^{1,0}\in V^{1,0}$ and $v^{0,1}\in V^{0,1}$, then its Clifford action on $\alp\in E$ equals \begin{equation}\label{E:clact} c(v)\alp \ = \ \sqrt{2}\, \left( v^{0,1}\wedge\alp \ - \ \iot(v^{1,0})\, \alp\right). \end{equation} Then (cf. \cite[\S3.2]{BeGeVe}) $S$ is the {\em spinor representation \/} of $C(V)$, i.e., the complexification $C(V)\otimes\CC$ of $C(V)$ is isomorphic to $\End(S)$. In particular, the Clifford module $E$ can be decomposed as \[ E \ = \ S\otimes W, \] where $W=\Hom_{C(V)}\, (S,E)$. The action of $C(V)$ on $E$ is equal to $a\mapsto c(a)\otimes1$, where $c(a) \ (a\in C(V))$ denotes the action of $C(V)$ on $S$. The natural grading on $E$ is given by $E^\pm= S^\pm\otimes{W}$. To prove \refp{cFL} it suffices now to study the action of $\bfc(F)$ on $S$. 
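As a simple illustration of \refp{cFL} (an aside, not used in the proofs below), consider the case $n=2$ with $r_1\ge r_2>0$, so that $q=0$ and the orientation defined by $iF$ coincides with the given orientation of $V$. The eigenvalues of $\bfc(F)$ on $S$ (computed from \refl{ecF}) are $r_1+r_2$ on $\Lam^0$, $-(r_1+r_2)$ on $\Lam^2(V^{0,1})$, and $\pm(r_1-r_2)$ on $\Lam^1(V^{0,1})$. Here $\tau=r_1+r_2$ and $m=r_2$, so that $\tau-2m=r_1-r_2$. On the odd part $S^-=\Lam^1(V^{0,1})$ both eigenvalues satisfy $\pm(r_1-r_2)\ge-(\tau-2m)$, with equality attained; this is exactly the bound asserted by \refp{cFL} for $E^-$, and it shows that the constant $\tau-2m$ is sharp. 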
The latter action is completely described by the following \lem{ecF} The vectors $\oe^{j_1}\wedge\dots\wedge\oe^{j_m}\in S$ form a basis of eigenvectors of $\bfc(F)$ and \[ \bfc(F)\, \oe^{j_1}\wedge\dots\wedge\oe^{j_m} \ = \ \big(\sum_{j'\not\in\{j_1\nek j_m\}} \, r_{j'} \ - \ \sum_{j''\in\{j_1\nek j_m\}} \, r_{j''} \big)\, \oe^{j_1}\wedge\dots\wedge\oe^{j_m}. \] \end{lemma} \begin{proof} Obvious. \end{proof} \ssec{prcFL}{Proof of \refp{cFL}} Recall that the orientation of $V$ is fixed and that we have chosen the basis $f_1\nek f_{2n}$ of $V$ which defines the same orientation. Suppose now that the bilinear form $F$ is non-degenerate. Then $l=n$ in \refe{iF}. It is clear that the orientation defined by $iF$ coincides with the given orientation of $V$ if and only if the number $q$ of negative numbers among $r_1\nek r_n$ is even. Hence, by \refl{ecF}, the restriction of $\bfc(F)\in\End(S)$ on $\Lam^j(V^{0,1})\subset S$ is greater than $-(\tau-2m)$ if the parity of $j$ and $q$ are different. \refp{cFL} now follows from \refe{spin1}. \hfill$\square$ \ssec{prD-Dac}{Proof of \refp{D-Dac}} Assume that at least $q$ of the numbers $r_1\nek r_l$ are negative and let $m_q>0$ be the minimal positive number such that at least $q$ of these numbers are not greater than $-m_q$. It follows from \refl{ecF}, that \[ \<\, c(F)\alp,\alp\, \> \ \ge \ -\, (\tau-2m_q)\, \|\alp\|^2, \] for any $j<q$ and any $\alp\in \Lam^j(V^{0,1})$. \refp{D-Dac} follows now from the Lichnerowicz formula \refe{lichn}. \hfill $\square$ \ssec{prgr}{Proof of \refp{gr}} Let $\pi_i:\calA^{0,*}(M,\calE\otimes\L^k)\to \calA^{0,i}(M,\calE\otimes\L^k)$ denote the projection and set \[ \widetilde{\nabla}^{\calE\otimes\L^k} \ = \ \sum_i\, \pi_i\circ\nabla^{\calE\otimes\L^k}\circ\pi_i. 
\] Denote $\widetilde{\Del}_k=(\widetilde{\nabla}^{\calE\otimes\L^k})^*\widetilde{\nabla}^{\calE\otimes\L^k}$. Clearly, $\widetilde{\Del}_k$ preserves the $\ZZ$-grading on $\calA^{0,*}(M,\calE\otimes\L^k)$. It follows from the proof of Theorem~2.16 in \cite{Dem85}, that there exists a sequence $\eps_1,\eps_2,\ldots$, converging to zero, such that \[ (1-\eps_k)\, \<\, \widetilde{\Del}_k\, \gam,\, \gam\, \> \ - \ \eps_kk\, \|\gam\|^2 \ \le \ \<\, \Del_k\, \gam,\, \gam\, \> \ \le \ (1+\eps_k)\, \<\, \widetilde{\Del}_k\, \gam,\, \gam\, \> \ + \ \eps_kk\, \|\gam\|^2, \] for any $\gam$ in the domain of $\Del_k$. Hence, \begin{multline}\notag \|\<\, (\Del_k-\widetilde{\Del}_k)\, \gam,\, \gam\, \>\| \\ \ \le \ \eps_k\, \<\, \widetilde{\Del}_k\, \gam,\, \gam\, \> \ + \ \eps_kk\, \|\gam\|^2 \ \le \ \eps_k \, \sum_{i}\, \<\, \Del_k\, \pi_i\gam,\, \pi_i\gam\, \> \ + \ \eps_kk\, \|\gam\|^2. \end{multline} Suppose now that $\gam=\alp+\bet$, where $\alp\in\Dom^i(D^2_k),\bet\in\Dom^j(D^2_k), \ i\not=j$. Then \begin{multline}\notag \|\, 2\RE\<\, \Del_k\, \alp,\, \bet\, \>\, \| \ = \ \|\, 2\RE\<\, (\Del_k-\widetilde{\Del}_k)\, \alp,\, \bet\, \>\, \| \\ \ \le \ \|\, \<\, (\Del_k-\widetilde{\Del}_k)\, \gam,\, \gam\, \>\, \| \ + \ \|\, \<\, (\Del_k-\widetilde{\Del}_k)\, \alp,\, \alp\, \>\, \| \ + \ \|\, \<\, (\Del_k-\widetilde{\Del}_k)\, \bet,\, \bet\, \>\, \| \\ \ \le \ 2\eps_k\, \<\, \Del_k\, \alp,\, \alp\, \> \ + \ 2\eps_k\, \<\, \Del_k\, \bet,\, \bet\, \> \ + \ 2\eps_kk\, \|\alp\|^2 \ + \ 2\eps_kk\, \|\bet\|^2. \end{multline} Similarly one obtains an estimate for the imaginary part of $\<\Del_k\alp,\bet\>$. 
This leads to the following analogue of \refp{gr} for the operator $\Del_k$: \begin{equation}\label{E:albe} \|\, \<\, \Del_k\, \alp,\, \bet\, \>\, \| \ \le \ 2\eps_k\, \<\, \Del_k\, \alp,\, \alp\, \> \ + \ 2\eps_k\, \<\, \Del_k\, \bet,\, \bet\, \> \ + \ 2\eps_kk\, \|\alp\|^2 \ + \ 2\eps_kk\, \|\bet\|^2. \end{equation} We now apply the Lichnerowicz formula \refe{lechnW} to obtain \refp{gr} from \refe{albe}. Note, first, that the operator $A\in\End(\calE}\newcommand{\W}{\calW)\subset\End(\calE}\newcommand{\W}{\calW\otimes\L^k)$, defined in \refe{A}, is independent of $k$ and bounded. Note, also, that, by \refl{ecF}, the operator $\bfc(F^\L)$ preserves the $\ZZ$-grading on $\calA}\renewcommand{\O}{\calO^{0,*}(M,{\calE}\newcommand{\W}{\calW\otimes\L^k})$. Hence, it follows from \refe{albe} and the Lichnerowicz formula \refe{lechnW} that \begin{multline}\notag \|\, \<\, D_k^2\, \alp,\, \bet\, \>\, \| \ \le \ |\, \<\, \Del_k\, \alp,\, \bet\, \>\, | \ + \ |\, \<\, A\, \alp,\, \bet\, \>\, | \\ \ \le \ 2\eps_k\, \<\, \Del_k\, \alp,\, \alp\, \> \ + \ 2\eps_k\, \<\, \Del_k\, \bet,\, \bet\, \> \ + \ 2\eps_kk\, \|\alp\|^2 \ + \ 2\eps_kk\, \|\bet\|^2 \ + \ \|A\|\, \|\alp\|\, \|\bet\| \\ \ \le \ 2\eps_k\, \<\, D_k^2\, \alp,\, \alp\, \> \ + \ 2\eps_k\, \<\, D_k^2\, \bet,\, \bet\, \> \ + \ 2\eps_kk\, \big(1+\|\bfc(F^\L)\|+2\|A\|\big)\, \|\alp\|^2 \\ \ + \ 2\eps_kk\, \big(1+\|\bfc(F^\L)\|+2\|A\|\big)\, \|\bet\|^2. \end{multline} Hence, \refp{gr} holds with $\del_k=(1+\|\bfc(F^\L)\|+2\|A\|\big)\eps_k$. \hfill$\square$ \sec{lapl}{Estimate of the metric Laplacian} In this section we prove \refp{lapl}. \ssec{reduc}{Reduction to a scalar operator} In this subsection we construct a space $\calZ}\newcommand{\F}{\calF$ and an operator $\tilDel$ on the space $L_2(\calZ}\newcommand{\F}{\calF)$ of $\calZ}\newcommand{\F}{\calF$, such that the operator $\Del_k$ is ``equivalent" to a restriction of $\tilDel$ onto certain subspace of $L_2(\calZ}\newcommand{\F}{\calF)$. 
This allows us to compare the operators $\Del_k$ for different values of $k$. Let $\F$ be the principal $G$-bundle with a compact structure group $G$, associated to the vector bundle $\calE\to M$. Let $\calZ$ be the principal $(S^1\times{G})$-bundle over $M$, associated to the bundle $\calE\otimes\L\to M$. Then $\calZ$ is a principal $S^1$-bundle over $\F$. We denote by $p:\calZ\to \F$ the projection. The connection $\nabla^\L$ on $\L$ induces a connection on the bundle $p:\calZ\to\F$. Hence, any vector $X\in T\calZ$ decomposes as a sum \begin{equation}\label{E:dec} X \ = \ X{^{\text{hor}}} \ + \ X\vert, \end{equation} of its horizontal and vertical components. Consider the {\em horizontal exterior derivative \/} $d^{\text{hor}}: C^\infty(\calZ)\to \calA^1(\calZ,\CC)$, defined by the formula \[ d^{\text{hor}} f(X) \ = \ df(X^{\text{hor}}), \qquad X\in T\calZ. \] The connections on $\calE$ and $\L$, the Riemannian metric on $M$, and the Hermitian metrics on $\calE, \L$ determine natural Riemannian metrics $g^\F$ and $g^\calZ$ on $\F$ and $\calZ$ respectively, cf. \cite[Proof of Theorem~2.1]{BoUr96}. Let $(d^{\text{hor}})^*$ denote the adjoint of $d^{\text{hor}}$ with respect to the scalar products induced by these metrics. Let \[ \tilDel \ = \ (d^{\text{hor}})^*d^{\text{hor}}:\, C^\infty(\calZ) \to C^\infty(\calZ) \] be the {\em horizontal Laplacian} for the bundle $p:\calZ\to \F$. 
Let $C^\infty(\calZ}\newcommand{\F}{\calF)_k$ denote the space of smooth functions on $\calZ}\newcommand{\F}{\calF$, which are homogeneous of degree $k$ with respect to the natural fiberwise circle action on the circle bundle $p:\calZ}\newcommand{\F}{\calF\to \F$. It is shown in \cite[Proof of Theorem~2.1]{BoUr96}, that to prove \refp{lapl} it suffices to prove \refe{lapl} for the restriction of $\tilDel$ to the space $C^\infty(\calZ}\newcommand{\F}{\calF)_k$. \ssec{symbol}{The symbol of $\tilDel$} The decomposition \refe{dec} defines a splitting of the cotangent bundle $T^*\calZ}\newcommand{\F}{\calF$ to $\calZ}\newcommand{\F}{\calF$ into the horizontal and vertical subbundles. For any $\xi\in T^*\calZ}\newcommand{\F}{\calF$, we denote by $\xi^{\text{hor}}$ the horizontal component of $\xi$. Then, one easily checks (cf. \cite[Proof of Theorem~2.1]{BoUr96}), that the principal symbol $\sig_2(\tilDel)$ of $\tilDel$ may be written as \begin{equation}\label{E:symb} \sig_2(\tilDel)(z,\xi) \ = \ g^\F(\xi^{\text{hor}},\xi^{\text{hor}}). \end{equation} The subprincipal symbol of $\tilDel$ is equal to zero. On the {\em character set \/} $\calC=\big\{(z,\xi)\in T^*\calZ}\newcommand{\F}{\calF\backslash\{0\}:\, \xi^{\text{hor}}=0\big\}$ the principal symbol $\sig_2(\tilDel)$ vanishes to second order. Hence, at any point $(z,\xi)\in\calC$, we can define the {\em Hamiltonian map \/} $F_{z,\xi}$ of $\sig_2(\tilDel)$, cf. \cite[\S21.5]{Horm3}. It is a skew-symmetric endomorphism of the tangent space $T_{z,\xi}(T^*\calZ}\newcommand{\F}{\calF)$. Set \[ \Tr^+ F_{z,\xi} \ = \ \nu_1 \ + \ \cdots \ + \ \nu_l, \] where $i\nu_1\nek i\nu_l$ are the nonzero eigenvalues of $F_{z,\xi}$ for which $\nu_i>0$. Let $\rho:\calZ}\newcommand{\F}{\calF\to M$ denote the projection. Then, cf. 
\cite[Proof of Theorem~2.1]{BoUr96} \footnote{The absolute value sign of $\xi\vert$ is erroneously missing in \cite{BoUr96}.}, \begin{equation}\label{E:tr+F} \Tr^+ F_{z,\xi} \ = \ \tau(\rho(z))\, |\xi\vert|. \end{equation} Here $\xi\vert$ is the vertical component of $\xi\in T^*\calZ$, and $\tau$ denotes the function defined in \refe{tau}. \ssec{melin}{Application of the Melin inequality} Let $D\vert$ denote the generator of the $S^1$ action on $\calZ$. The symbol of $D\vert$ is $\sig(D\vert)(z,\xi) =\xi\vert$. Fix $\eps>0$, and consider the operator \[ A \ = \ \tilDel \ - \big(\tau(\rho(z))-\eps\big)\, D\vert:\, C^\infty(\calZ) \ \to \ C^\infty(\calZ). \] The principal symbol of $A$ is given by \refe{symb}, and the subprincipal symbol is \[ \sig_1^s(A)(z,\xi) \ = \ -\big(\tau(\rho(z))-\eps\big)\, \xi\vert. \] It follows from \refe{tr+F}, that \[ \Tr^+ F_{z,\xi}+\sig_1^s(A)(z,\xi) \ \ge \ \eps\, |\xi\vert| \ > \ 0. \] Hence, by the Melin inequality (\cite{Melin71}, \cite[Theorem~22.3.3]{Horm3}), there exists a constant $C_\eps$, depending on $\eps$, such that \begin{equation}\label{E:Aff} \<\, Af,f\, \> \ \ge -\, C_\eps\, \|f\|^2. \end{equation} Here $\|\cdot\|$ denotes the $L_2$ norm of the function $f\in C^\infty(\calZ)$. {}From \refe{Aff}, we obtain \[ \<\, \tilDel f,f\, \> \ \ge \ \<\, (\tau(\rho(z))-\eps)D\vert f,f\, \> \ - \ C_\eps\, \|f\|^2. \] Noting that if $f\in C^\infty(\calZ)_k$, then $D\vert f=kf$, the proof is complete. \hfill $\square$
\section{Introduction} \label{sec:Introduction} Vacuum decay may be the cause of both the beginning and end of our universe. It is also one of the very few systems in which the quantum aspects of gravity are crucial in order to have a proper description of the physical process. Euclidean methods, developed mostly by Coleman and collaborators~\cite{Coleman:1977py, Callan:1977pt, Coleman:1980aw}, have been used to extend the well understood WKB quantum mechanics techniques to field theory and gravity. Decay rates have been computed for simple systems, corresponding to scalar field potentials with several minima, mostly in the thin-wall approximation. Related investigations by Brown and Teitelboim involved studying the nucleation of branes interpolating between vacua of different values of the cosmological constant~\cite{Brown:1988kg}. However, while the quantum mechanical calculations are well under control, the extensions to field theory and gravity rely on extrapolations such as analytic continuations and approximations, such as dilute instantons, that are not fully justified especially in the presence of gravity (for a review see for instance~\cite{Weinberg:2012pjx} and for a recent comprehensive and critical discussion see~\cite{Andreassen:2016cvx}). A Hamiltonian approach to vacuum transitions that describes them directly without the need to use Euclidean techniques was developed by Fischler, Morgan and Polchinski (FMP)~\cite{Fischler:1989se,Fischler:1990pk}. Solving the Hamiltonian constraints and the Israel matching conditions for the system of two spacetimes with different cosmological constants, together with the bubble wall (brane) separating them, allowed them to compute the transitions from a Schwarzschild black hole (i.e. spherically symmetric asymptotically Minkowski) space-time to a de Sitter (dS) space-time, but the formalism applies to all vacuum transitions considered by Coleman-De Luccia (CDL). 
The motivation for their calculation was the series of papers by Guth and collaborators on the theme of creating an inflating ‘universe in the lab’, culminating in the FGG work~\cite{Farhi:1989yr}. The latter was based on a Euclidean instanton construction whose validity was somewhat questionable (as pointed out by the authors of the paper themselves). The reason was that this instanton was singular, and the question of whether it should be included in the Euclidean functional integral became an issue. However, the fact that the Hamiltonian calculation of the decay rate agrees with the Euclidean approach implied (as FMP argued) that the final results for the transition rates are robust, despite the fact that the Euclidean singular instanton calculation was not well defined. One important difference between the Euclidean and Hamiltonian approaches is the fact that in the Euclidean approach a series of analytic continuations (which go beyond what may be justified by WKB quantum mechanics) are needed in order to find the Lorentzian geometry after the transition. Even though the starting point may be a closed universe with $SO(4)$ spherical symmetry, the resulting geometry (inside the light cone of an observer at the centre of the nucleated bubble) after the transition turns out to correspond to an open universe with hyperbolic symmetry. However, in the Hamiltonian approach of FMP there is no such implication. The entire analysis is done within the context of a spherically symmetric ($SO(3)$) ansatz, but the calculation does not imply the CDL argument for an open universe\footnote{As is well known, dS space allows several foliations, including those that correspond to closed, open and flat slicings. In CDL the symmetries of the scalar field determine the preferred slicing after analytic continuation. However, the latter cannot be justified by standard WKB arguments and this argument is not meaningful in the Hamiltonian formalism.}. 
Furthermore, the natural Lorentzian mini-superspace calculation corresponding to the CDL Euclidean calculation would have $SO(4)$ symmetry and hence would necessarily give rise to a closed universe, as we discuss later. This is an important difference, especially if we consider the possibility that our own universe could be the result of a vacuum transition. Actually, based on the CDL result, it has been claimed that this is the only generic prediction of the string landscape~\cite{Freivogel:2005vv, Kleban:2012ph}\footnote{For a different view based on the ``no boundary wave function'', see the extensive work of Hawking, Hartle and Hertog (for example \cite{Hartle:2013oda} and references therein). In particular in \cite{Hawking:2017wrd} it has been argued based on a dS/CFT conjecture that the probability of observing negative curvature on exit from eternal inflation is exponentially suppressed. }. To the best of our knowledge, the fact that the Hamiltonian formalism developed by FMP can give rise to a closed universe has not been emphasised so far, probably because the approach of FMP was originally developed in order to address the question of the Minkowski black hole to dS transition that was posed by~\cite{Farhi:1989yr} in the Euclidean approach. However, the FMP formalism also applies to all other potential vacuum transitions (dS to dS, Minkowski to dS as well as transitions involving AdS~\cite{Freivogel:2005qh,Bachlechner:2016mtp, deAlwis:2019dkc,Fu:2019oyc,Mirbabayi:2020grb}). One limitation of this approach is that it only describes transitions between two spacetimes with different vacuum energies separated by a brane, whereas in general the CDL approach is formulated in terms of scalar potentials with different minima and barriers between them. 
However, in actual practice most such calculations reverted to the thin-wall approximation so that there was no essential difference to having the spaces separated by a brane as in the Brown-Teitelboim~\cite{Brown:1988kg} (BT) calculation. Furthermore the landscape of string theory results from transitions due to the nucleation of branes, which in the EFT approach are of string scale thickness, and hence effectively a thin wall so, as discussed by Bousso and Polchinski~\cite{Bousso:2000xa} for instance, the BT process (and hence FGG/FMP) is more relevant for the landscape of string theory than the scalar field process of CDL. The latter is more relevant for questions of eternal inflation and other transitions such as the transition towards decompactification, however. A generalisation of the FMP formalism to include explicit scalar field potentials is still an open question. Here, we are only partially successful in addressing this since we do it only in a mini-superspace model in which the metric and scalar field only depend on time. Nevertheless we find several interesting results essentially extending the `tunneling from nothing' arguments of Hartle-Hawking, Vilenkin and Linde~\cite{Vilenkin:1982de, Hartle:1983ai, Vilenkin:1984wp,Linde:1983mx}, and then compare with the Euclidean approach\footnote{Of course in the context of the ``no boundary wave function'' Hawking and collaborators have long advocated for a landscape of closed universes. See for example~\cite{Hartle:2013oda} and references therein.}. We start by revisiting the extension of the WKB approximation to field theory by solving the functional, time-independent Schr\"odinger equation in the WKB approximation. We reproduce the Euclidean results for the (exponential term in the) decay rates but within a totally different method. 
The pre-factor however is different and there is no problem with negative modes \footnote{For a recent discussion of the negative modes issues see for instance~\cite{Jinno:2020zzs} and references therein.}. Next we include gravity albeit in a mini-superspace model and then we discuss the extension (for the case of a brane) to an $SO(3)$ symmetric situation following FMP and our earlier work. One of our concrete results is to confirm the fact that in the Hamiltonian approach the end result for the geometry of the remaining universe may be a closed rather than an open universe. We then study the physical implications of this result comparing with previous studies of CDL transitions. The article is organised as follows: \begin{itemize} \item In Sec.~\ref{sec:WKBandWDW} we develop the formalism to address quantum transitions in Wheeler's superspace adapting the semi-classical WKB approximation in a covariant superspace approach. Given the nature of the Hamiltonian constraint leading to the Wheeler-DeWitt equation, the corresponding Schr\"odinger wave functional is time independent. The transition probabilities are ratios of squares of wave functionals for the different configurations. Expanding on standard WKB techniques in quantum mechanics and on previous approaches towards field theory~ \cite{Banks:1973ps, Banks:1974ij, Gervais:1977nv, Bitar:1978vx, Tanaka:1993ez}, we find general expressions, covariant on a generalised Wheeler superspace, for the leading and next order corrections to the wave functions. In particular, we find a general closed expression for the semi-classical wave functional with the pre-factor given by the analog of the Van Vleck determinant. \item In Sec.~\ref{sec:FlatSpace} we apply the formalism of Sec.~\ref{sec:WKBandWDW} to the flat space field theoretical case of a scalar field potential neglecting the effects of gravity. 
In particular, following the textbook quantum mechanics matching conditions, we explicitly compute the S-matrix as well as the lifetime of the corresponding resonances and obtain the decay rate. In contrast to the calculations of Coleman and collaborators~\cite{Coleman:1977py, Callan:1977pt, Coleman:1980aw}, we find that there is no issue with negative modes or with the analytic continuation of a manifestly real amplitude to get an imaginary part of the energy that can be interpreted as a decay width. Instead, in our calculation the decay width is identified in the standard way as the imaginary part of a complex pole of the S-matrix. \item In Sec.~\ref{sec:DecayCurvedSpace} we include gravity but, in order to have explicit results, we concentrate on the mini-superspace model so as to compare with the CDL Euclidean approach. In this case superspace reduces to a two-dimensional space with coordinates the scalar field $\phi(t)$ and the metric scale factor $a(t)$. We find a general expression for the decay rate which in the thin-wall approximation gives exactly the CDL result, but with an unclear interpretation. Furthermore, the fact that the metric in superspace is not positive definite allows for classical paths connecting the two dS minima without the need to pass through the barrier, substantially modifying the results for the transition rate as compared with CDL. We argue that the two approaches may be addressing different questions, and we emphasise the difference between them: even though the calculations in both cases can be said to be done in mini-superspace with an $SO(4)$ symmetry, in CDL, after analytic continuation, the standard picture of a vacuum transition with a wall separating the two dS spacetimes emerges, turning the original $SO(4)$ symmetry into $SO(3,1)$, whereas in the Lorentzian approach the $SO(4)$ symmetry remains.
We also point out the main difference: the fact that CDL implies an open universe whereas we, as with the `tunneling from nothing' scenarios and FMP, find a closed universe. \item In Sec.~\ref{sec:OpenClosed} we quickly review the relevant points of FMP to extend the results of the previous sections beyond mini-superspace, though with the restriction that the matter sector includes only the two cosmological constants and no scalar field. We describe the trajectory of the wall after nucleation and find it similar to CDL, with the curious fact that the speed of the wall reaches a maximum which is less than the speed of light. Then we review the CDL arguments to obtain an open universe and the other implications for early universe cosmology as addressed for instance in~\cite{Freivogel:2005vv}: impact on the CMB, the number of inflation e-folds, etc. We then concentrate on the implications of having a closed rather than an open universe after the transition. We point out the physical differences, not only regarding the potential for measuring the curvature of the universe, but also regarding how inflation is obtained after the transition, and address the constraints on the number of e-foldings and the effect on density perturbations. \item Finally we discuss open questions and give a general outlook in the concluding section. \end{itemize} \section{WKB for field theory and quantum gravity} \label{sec:WKBandWDW} In this section we develop a general formalism to extend the WKB approximation for vacuum decay to the Wheeler-DeWitt wave function $\Psi$ in Wheeler's superspace. The Wheeler-DeWitt (WDW) equation is a constraint equation on the space of wave functionals that describe a gravitational system. The ratios of absolute squares of the WDW wave functionals for the different configurations ${\mathcal M}_1$ and ${\mathcal M}_2$ can then be interpreted as relative probabilities for realising them.
\begin{equation} {\mathcal P}({\mathcal M}_1\rightarrow {\mathcal M}_2) = \frac{|\Psi({\mathcal M}_2)|^2}{|\Psi({\mathcal M}_1)|^2} \,. \end{equation} The discussion below may be trivially specialised to the case of field theory, in which case the (spatial integral of the) WDW equation is essentially the time-independent Schr\"odinger equation. \subsection{Semi-classical expansion for the WDW equation} We assume that space-time can be foliated into a family of non-intersecting spacelike three-slices that can be seen (at least locally) as the level surfaces of a scalar function $t$. The function $t$ can be interpreted as a global time function. Given a line element of the generic form \begin{equation} \label{eq:GeneralLineElement} ds^2 = \left(-N_t^2 + N_i N^i\right) dt^2 + 2 N_i dt dx^i + \gamma_{ij} dx^i dx^j \,, \end{equation} where $N_t$ and $N_i$ are the lapse and shift, while $\gamma_{ij}$ is the spatial three-dimensional metric, the gravitational contribution to the Lagrangian of the system can be written as \begin{equation} \label{eq:GeneralLagrangian} L_g = \int d^3x \, N_t \sqrt{\gamma} \left(K_{ij} K^{ij} - K^2 + {}^{(3)} R\right) \,, \end{equation} where $K_{ij} = \frac{1}{2 N_t} \left(\partial_i N_j + \partial_j N_i - \partial_0 \gamma_{ij}\right)$ is the extrinsic curvature, $K = \gamma^{ij} K_{ij}$ and ${}^{(3)} R$ is the intrinsic curvature of the three-dimensional slice. We denote the canonically conjugate momentum to a field $\Phi^M$ by $\pi_M$. The primary constraints of the system arising from Eq.~\eqref{eq:GeneralLagrangian} are $\pi_{N_{t}}\approx0,\,\pi_{N_{i}}\approx 0$, where $\approx$ denotes weak equality, i.e.\ equality holding on classical solutions. In the quantum case we correspondingly have constraints on the space of wave-functionals $\Psi(\Phi)$, where $\Phi$ collectively denotes the three-metric and matter fields (and their spatial derivatives up to second order) present in the system.
Note that the classical constraints $\pi_{N_{t}}\approx0,\,\pi_{N_{i}}\approx 0$ imply that the wave function $\Psi$ is independent of $N_{t}$, $N_{i}$. The full system is described by $L_g + L_{\rm mat}$, where $L_{\rm mat}$ is the Lagrangian that describes the matter present in the system.\\ The Hamiltonian constraint takes the general form \begin{equation} \label{eq:GeneralHamiltonian} {\cal H}=\frac{1}{2}G^{MN}(\Phi)\pi_{M}\pi_{N}+f[\Phi]\approx0 \,, \end{equation} where $G_{MN}$ is the metric on the $d$-dimensional field space, including all the components of $\gamma_{ij}$ and the matter fields, and can be read from the kinetic terms of $L_g + L_{\rm mat}$. At the same time, the momentum constraint ${\cal P}_{i}\approx 0$ has to hold. Up to operator ordering ambiguities, which are fixed by demanding that the derivatives with respect to the components $\Phi^{M}$ of $\Phi$ be covariant with respect to the metric $G_{MN}$ (which we emphasise is not positive definite in the presence of gravity), we have for the WDW equation (replacing $\pi_{M}\rightarrow-i\hbar\nabla_{M}$) \begin{equation} {\cal H} \Psi(\Phi) = \left[-\frac{\hbar^{2}}{2}G^{MN}(\Phi)\nabla_{M}\nabla_{N}+f[\Phi]\right]\Psi(\Phi) =0 \,.\label{eq:WD} \end{equation} As usual we can write \begin{equation} \Psi[\Phi]=e^{\frac{i}{\hbar}S[\Phi]} \,,\label{eq:Psi} \end{equation} and define the semi-classical expansion \begin{equation} S[\Phi]=S_{0}[\Phi]+\hbar S_{1}[\Phi]+O(\hbar^{2}).\label{eq:Sexpn} \end{equation} Substituting Eq.~\eqref{eq:Psi} and Eq.~\eqref{eq:Sexpn} in the WDW equation we can in principle determine recursively the semi-classical expansion coefficients. The lowest two orders give \begin{eqnarray} \frac{1}{2}G^{MN}\frac{\delta S_{0}}{\delta\Phi^{M}}\frac{\delta S_{0}}{\delta\Phi^{N}}+f[\Phi] & = & 0\label{eq:HJ} \,,\\ 2G^{MN}\frac{\delta S_{0}}{\delta\Phi^{M}}\frac{\delta S_{1}}{\delta\Phi^{N}} & = & iG^{MN}\nabla_{M}\nabla_{N}S_{0} \,.
\label{eq:prefactoreqn} \end{eqnarray} In Hamilton-Jacobi theory, which corresponds to the classical limit of the quantum calculation, $\pi_M=\frac{\partial S}{\partial\Phi^M}$. Observe that at a turning point $\pi_{M}=\frac{\delta S_{0}}{\delta\Phi^{M}}=0$ for all $M$, and the semi-classical expansion breaks down since $S_{1}$ cannot be determined. Let us now introduce on the selected spatial slice a set of integral curves, parametrised by $s$, on the field manifold \begin{equation} C(s)\frac{d\Phi^{N}}{ds}=G^{MN}\frac{\delta S_{0}}{\delta\Phi^{M}}.\label{eq:curves} \end{equation} Given the constraints, the classical action (on a classical trajectory) becomes \begin{eqnarray} S_{0}[\Phi_{s}] & = & \int^{\Phi_{s}}\int_{X}\pi_{M}d\Phi^{M} \label{eq:HJ2}\nonumber \\ & = & \int^{s}ds'\int_{X}\frac{\delta S_{0}}{\delta\Phi^{M}}\frac{d\Phi^{M}}{ds'} \nonumber \\ & = & \int^{s}ds'C^{-1}(s')\int_{X}\frac{\delta S_{0}}{\delta\Phi^{M}}G^{MN}\frac{\delta S_{0}}{\delta\Phi^{N}} \nonumber \\ & = & -2\int^{s}ds'C^{-1}(s')\int_{X}f[\Phi_{s'}] \,.\label{eq:S0intf} \end{eqnarray} We have used Eq.~\eqref{eq:curves} in the third line and Eq.~\eqref{eq:HJ} in the fourth. Similarly, from Eq.~\eqref{eq:prefactoreqn}, after integrating over the spatial slice $X$ we have \begin{equation} \frac{dS_{1}}{ds}=\int_{X}\frac{d\Phi^{N}}{ds}\frac{\delta S_{1}}{\delta\Phi^{N}}=\frac{i}{2}C^{-1}(s)\int_{X}\nabla^{2}S_{0}\label{eq:dS1} \end{equation} giving \begin{equation} S_{1}[\Phi_{s}]=\frac{i}{2}\int^{s}ds'C^{-1}(s')\int_{X}\nabla^{2}S_{0}[\Phi_{s'}] \,. \label{eq:S1} \end{equation} For a given parametrisation $C(s)$ one can in principle solve the first order differential equation in Eq.~\eqref{eq:curves} (with $S_0$ given by Eq.~\eqref{eq:S0intf}) to get $\Phi_{s}$ as a function of $s$; substituting in Eq.~\eqref{eq:S1}, the semi-classical correction $S_{1}$ is determined away from any turning points (caustics).
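As a sanity check of Eq.~\eqref{eq:S0intf}, the construction can be run numerically in a toy model with a single degree of freedom ($G=1$, $f(q)=V(q)-E$); the potential, energy and interval below are illustrative choices, not taken from the text. With the parametrisation $C(s)=1$, Eq.~\eqref{eq:curves} becomes the flow $dq/ds=\sqrt{-2f(q)}$ and Eq.~\eqref{eq:S0intf} gives $dS_{0}/ds=-2f(q(s))$, which should reproduce the direct WKB phase $\int\sqrt{2(E-V)}\,dq$:

```python
import numpy as np
from scipy.integrate import solve_ivp, quad

# Toy model with one degree of freedom q: G = 1, f(q) = V(q) - E,
# V(q) = q^2 and E = 2 (classically allowed for q^2 < 2); all values illustrative.
E = 2.0
V = lambda q: q**2
f = lambda q: V(q) - E

# (a) Direct WKB phase over the classically allowed stretch [0, 1]:
S0_direct, _ = quad(lambda q: np.sqrt(-2.0 * f(q)), 0.0, 1.0)

# (b) Flow form of Eq. (S0intf) with C(s) = 1:
#     dq/ds = dS0/dq = sqrt(-2 f(q)),  dS0/ds = -2 f(q(s))
def rhs(s, y):
    q, S0 = y
    return [np.sqrt(-2.0 * f(q)), -2.0 * f(q)]

stop = lambda s, y: y[0] - 1.0          # terminate the flow once q reaches 1
stop.terminal = True
sol = solve_ivp(rhs, [0.0, 10.0], [0.0, 0.0], events=stop, rtol=1e-10, atol=1e-12)
S0_flow = sol.y[1, -1]

print(S0_direct, S0_flow)               # both ≈ 1.818
```

The agreement of the two evaluations is just the statement that, on a classical trajectory, the flow parametrisation and the Hamilton-Jacobi form of the action coincide.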
One can also choose the parameter $\tau$ to be the distance function along the trajectories, defined by \begin{equation} d\tau^{2}\equiv\int_{X}\delta\Phi^{M}G_{MN}\delta\Phi^{N} \,.\label{eq:dtau} \end{equation} Using Eq.~\eqref{eq:curves} and Eq.~\eqref{eq:HJ} we then get \begin{equation} \left(\frac{d\tau}{ds}\right)^{2}=\int_{X}\frac{d\Phi^{M}}{ds}G_{MN}\frac{d\Phi^{N}}{ds}=-2C^{-2}(s)\int_{X}f[\Phi] \,.\label{eq:dtauds} \end{equation} Solving this for $C(s)$ we have from Eq.~\eqref{eq:S0intf} for the classical action $S_{0}$ (with an arbitrary parametrisation of the integration trajectory) \begin{equation} S_{0}[\Phi_{s}]-S_{0}[\Phi_{0}]=\int_{0}^{s}ds'\left[\int_{X}\frac{d\Phi_{s'}^{M}}{ds'}G_{MN}\frac{d\Phi_{s'}^{N}}{ds'}\right]^{1/2}\left[-2\int_{X}f[\Phi_{s'}]\right]^{1/2} \,. \end{equation} The classical path along which this has to be evaluated is of course the one which extremises this action with the end points fixed. The variational derivative may be worked out easily by observing that the variation of the first factor gives the left hand side of the geodesic equation on superspace. Introducing the metric compatible connection on superspace (the Vilkovisky connection) $D/D\tau=\frac{d\Phi^{M}}{d\tau}\nabla_{M}$ we get \begin{equation} G_{PN} \, \Sigma[\Phi_{\tau}]\frac{D}{D\tau} \left(\Sigma[\Phi_{\tau}]\frac{d\Phi_{\tau}^{N}}{d\tau} \right)+\frac{\delta f[\Phi_{\tau}]}{\delta\Phi^{P}}=0 \,, \end{equation} where after doing the variation we have set $s=\tau$ and defined $\Sigma[\Phi_{\tau}]\equiv\sqrt{-2\int_{X}f[\Phi_{\tau}]}$.
Alternatively, defining the affine parameter $\sigma$ by $d\sigma=d\tau/\Sigma[\Phi_{\tau}]$ (note that this corresponds to $s$ if we choose $C(s)=1$ in Eq.~\eqref{eq:dtauds}), we have the classical equations of motion (which would also follow directly from the Hamiltonian in Eq.~\eqref{eq:GeneralHamiltonian}) \begin{equation} \label{eq:GeneralEOM} \frac{D}{D\sigma}\frac{d\Phi_{\sigma}^{N}}{d\sigma}+G^{NP}\frac{\delta f[\Phi_{\sigma}]}{\delta\Phi^{P}}=0 \,. \end{equation} We remark in passing that this equation of motion does not have an obvious interpretation as Lorentzian or Euclidean, since in the presence of gravity (as we stressed before) the superspace metric $G_{MN}$ is not positive definite. We will see the consequences of this explicitly when we discuss the mini-superspace example. Now going back to Eq.~\eqref{eq:dtauds} and putting $s=\tau$, \begin{equation} C^{2}(\tau)=-2\int_{X}f[\Phi].\label{eq:ctau} \end{equation} Hence in terms of $\tau$ we have \begin{empheq}[box=\fbox]{align} \quad S_{0}[\Phi_{\tau}] &=\int^{\tau}d\tau'\sqrt{-2\int_{X}f[\Phi_{\tau'}]} \,,\quad \nonumber \\ \quad S_{1}[\Phi_{\tau}] & =\frac{i}{2}\int^{\tau}d\tau'\frac{1}{\sqrt{-2\int_{X}f[\Phi_{\tau'}]}}\int_{X}\nabla^{2}S_{0}[\Phi_{\tau'}] \,.\quad\label{eq:S1tau} \end{empheq} \subsection{Wave Function and van Vleck Determinant} \label{sec:VanVleckDeterminant} In the corresponding multi-dimensional quantum mechanical case the expression for $S_{1}$ is given by the Van Vleck determinant~\cite{VanVleck:1928zz} (see also~\cite{Brown:1971zzc}). To see the connection we first observe (essentially generalising an argument in~\cite{Tanaka:1993ez}) the following.
Changing to the coordinates defined with respect to the orthonormal basis, we have for a variation around a trajectory \begin{equation} \delta\Phi^{M}(x)=\delta\tau t^{M}(x)+\delta\lambda^{\bar{P}}\frac{\partial\Phi^{M}}{\partial\lambda^{\bar{P}}} \,,\label{eq:deltaPhi} \end{equation} where $t^M=\partial \Phi^M/\partial \tau$, while \begin{equation} \label{eq:OrthogonalVects} \frac{\partial}{\partial\lambda^{\bar{P}}} = \int_{X} \frac{\partial\Phi^{M}}{\partial\lambda^{\bar{P}}}\frac{\partial}{\partial \Phi^{M}} \,. \end{equation} The vectors defined in Eq.~\eqref{eq:OrthogonalVects} are orthogonal to the vector $\partial/\partial\tau=\int_{X}\left(\partial\Phi^{M}/\partial\tau\right)\partial/\partial \Phi^{M}$. Let us denote the components in the original coordinates (including the spatial position $x$, which is to be treated as an index) as $A=\{M,x\}$. In other words, it is convenient to use DeWitt's condensed notation, treating the set of fields (including metric components) $\{\Phi^{M}\}$ as a set of ``coordinates'', i.e. $q^{i}\rightarrow\Phi^{Mx}$, where the spatial variable $x$ is treated as an index with the understanding that sums over $x$ are integrals and Kronecker deltas are replaced by Dirac delta functions.
The superspace metric in the new coordinate system $(\bar{A}=\tau,\lambda^{\bar{P}})$ is \begin{equation} \bar{G}_{\bar{A}\bar{B}}=\frac{\partial\Phi^{A}}{\partial\lambda^{\bar{A}}}G_{AB}\frac{\partial\Phi^{B}}{\partial\lambda^{\bar{B}}} \,,\label{eq:metricnew} \end{equation} therefore \begin{eqnarray} G^{AB}\nabla_{A}\nabla_{B}S_{0} & = & G^{\bar{A}\bar{B}}\nabla_{\bar{A}}\nabla_{\bar{B}}S_{0}=\frac{1}{\sqrt{\bar{G}}}\partial_{\bar{A}}\left(\sqrt{\bar{G}}G^{\bar{A}\bar{B}}\partial_{\bar{B}}S_{0}\right)\nonumber \\ & = & \frac{1}{\sqrt{\bar{G}}}\partial_{\bar{A}}\left(\frac{\partial\lambda^{\bar{A}}}{\partial\Phi^{A}}\sqrt{\bar{G}}G^{AB}\partial_{B}S_{0}\right)=\frac{1}{\sqrt{\bar{G}}}\partial_{\bar{A}}\left(\frac{\partial\lambda^{\bar{A}}}{\partial\Phi^{A}}\sqrt{\bar{G}}C(\tau)\frac{\partial\Phi^{A}}{\partial\tau}\right)\nonumber \\ & = & \frac{1}{\sqrt{\bar{G}}}\partial_{\bar{A}}\left(\sqrt{\bar{G}}C(\tau)\frac{\partial\lambda^{\bar{A}}}{\partial\tau}\right)=\frac{1}{\sqrt{\bar{G}}}\partial_{\tau}\left(\sqrt{\bar{G}}C(\tau)\right)\,. \label{eq:Laplacian} \end{eqnarray} In the second line we used Eq.~\eqref{eq:curves}. 
From Eq.~\eqref{eq:dS1} we have \begin{equation} \frac{dS_{1}}{d\tau}=\frac{i}{2}\partial_{\tau}\ln\left(C(\tau)\sqrt{\bar{G}}\right)=\frac{i}{2}\partial_{\tau}\ln\left(C(\tau)\det\frac{\partial\Phi^{A}}{\partial\lambda^{\bar{A}}}\sqrt{G}\right) \,.\label{eq:dS1-2} \end{equation} Noting that the line element on superspace may be rewritten as \begin{equation} ds^{2}=\bar{G}{}_{\bar{A}\bar{B}}d\lambda^{\bar{A}}d\lambda^{\bar{B}}=d\tau^{2}+G_{AB}\frac{d\Phi^{A}}{d\lambda^{\bar{N}}}\frac{d\Phi^{B}}{d\lambda^{\bar{M}}}d\lambda^{\bar{N}}d\lambda^{\bar{M}} \,,\label{eq:lineelement} \end{equation} we have (using also Eq.~\eqref{eq:ctau}) \begin{equation} S_{1}[\Phi_{\tau}]=\frac{i}{2}\ln\sqrt{-2\int_{X}f[\Phi_{\tau}]}+\frac{i}{2}\ln\sqrt{\left(\det G_{AB}\frac{d\Phi^{A}}{d\lambda^{\bar{N}}}\frac{d\Phi^{B}}{d\lambda^{\bar{M}}}\right)_{\tau}}+{\rm constant}.\label{eq:S1-2} \end{equation} Thus the integral in the second term of Eq.~\eqref{eq:S1tau} for $S_1$ is in fact the log of the determinant of the superspace metric in the directions orthogonal to the trajectory defined by $\partial/\partial\tau$. Hence the semi-classical wave function may be written as \begin{equation} \boxed{\quad \Psi[\Phi_{\tau}]=\frac{\left[-2\int_{X}f[\Phi_{0}]\right]^{1/4}}{\left[-2\int_{X}f[\Phi_{\tau}]\right]^{1/4}}\frac{\left(\det G_{AB}\frac{d\Phi^{A}}{d\lambda^{\bar{N}}}\frac{d\Phi^{B}}{d\lambda^{\bar{M}}}\right)_{0}^{1/4}}{\left(\det G_{AB}\frac{d\Phi^{A}}{d\lambda^{\bar{N}}}\frac{d\Phi^{B}}{d\lambda^{\bar{M}}}\right)_{\tau}^{1/4}}e^{\frac{i}{\hbar}\left(S_{0}\left[\Phi_{\tau}\right]-S_{0}[\Phi_{0}]\right)}\Psi[\Phi_{0}] \,,\quad}\label{eq:Psi1} \end{equation} with $S_{0}$ given by Eq.~\eqref{eq:S1tau}. This formula generalises one obtained for many-particle quantum mechanics and canonical QFT in~\cite{Banks:1973ps,Banks:1974ij, Gervais:1977nv, Tanaka:1993ez}. Alternatively, we can rewrite this expression in terms of the (generalisation of the) Van Vleck determinant.
To see this we go back to Eq.~\eqref{eq:dS1} and choose the parameter $s$ such that $C(s)=1$. Then the calculation in Eq.~\eqref{eq:Laplacian} shows that \begin{equation} S_{1}[\Phi_{s}]-S_{1}\left[\Phi_{0}\right]=\frac{i}{2}\ln\left(\det\frac{\delta\Phi^{A}}{\delta\lambda^{\bar{A}}}\sqrt{G}\right) \,. \end{equation} Assuming that the complete integral of the Hamilton-Jacobi equation depends on a set of parameters $\alpha_{\bar{A}}$, we can identify the conjugate variables with our parameters $\lambda^{\bar{A}}$, i.e. \begin{equation} \lambda^{\bar{A}}=\frac{\delta S_{0}[\Phi_{s};\alpha]}{\delta\alpha^{\bar{A}}} \,.\label{eq:lambdaconj} \end{equation} The Van Vleck matrix can then be written as \begin{equation} \left[\frac{\delta^{2}S_{0}}{\delta\Phi^{A}\delta\alpha^{\bar{A}}}\right]=\left[\frac{\delta\lambda^{\bar{A}}}{\delta\Phi^{A}}\right]=\left[\frac{\delta\Phi^{A}}{\delta\lambda^{\bar{A}}}\right]^{-1} \,. \end{equation} Therefore \begin{equation} S_{1}[\Phi_{s}]=-\frac{i}{2}\ln\left(\det\left[\frac{\delta^{2}S_{0}}{\delta\Phi^{A}\delta\alpha^{\bar{A}}}\right]\sqrt{G}\right)_{s}+{\rm constant} \,.\label{eq:S1-S0} \end{equation} Finally, the wave function with semi-classical corrections takes the form \begin{empheq}[box=\fbox]{align} & \nonumber \\ \qquad \Psi[\Phi_{s}] &= \frac{1}{P_{0}}\sqrt[4]{G_{s}}\,{\det}^{1/2}\left[\frac{\delta^{2}S_{0}}{\delta\Phi^{A}\delta\alpha^{\bar{A}}}\right]_{s}e^{\frac{i}{\hbar}S_{0}\left[\Phi_{s}\right]}\Psi[\Phi_{0}] \,,\, \qquad \nonumber \\ \qquad S_{0}[\Phi_{s}] & = \int_{0}^{s}ds'\sqrt{-2\int_{X}f[\Phi_{s'}]}+{\rm constant} \,,\qquad \\ \nonumber \label{eq:Psi2} \end{empheq} with the constant $1/P_{0}$ fixed by requiring that at $s=0$ the two sides agree. A formula similar to this in the context of canonical field theory has been given by Bitar and Chang~\cite{Bitar:1978vx}.
However, the pre-factor was obtained there not by following the WKB method, but by switching to a functional integral over the fluctuations around the classical path. This gives them a pre-factor which is the inverse of the Van Vleck pre-factor above. Furthermore, these authors (in contrast to those of~\cite{Gervais:1977nv, Tanaka:1993ez}) claim agreement with the pre-factor in the Euclidean instanton analysis of Coleman et al.~\cite{Coleman:1977py, Callan:1977pt, Coleman:1980aw}. However, we fail to see this. For instance, the latter depended on the dilute gas approximation and relied crucially on the presence of a single negative mode in the fluctuations around the instanton. Clearly, in the above formula the issue of negative modes does not play a special role. In the next two sections we will elaborate on these differences. \section{WKB in flat space} \label{sec:FlatSpace} \begin{figure}[h!] \begin{center} \includegraphics[scale=0.7]{PotentialFlatSpace.pdf} \caption{The two scalar potentials considered: decay from a false to a true vacuum (left) and a run-away potential (right).\label{fig:PotentialFlatSpace}} \end{center} \end{figure} In this section we recall how to use the WKB approximation to study vacuum decay in field theory in flat space. We analyse two different situations, corresponding to the two potentials in Fig.~\ref{fig:PotentialFlatSpace}. The left panel corresponds to the process of vacuum decay from a false vacuum to a true vacuum characterised by a lower energy density. In this context, we show explicitly how the leading-order final result corresponds to the CDL bounce, and how the analytic continuation is well justified by the fact that there is an under-the-barrier integral, which is equivalent to analytically continuing time to an imaginary variable. In this case, the WKB formalism can be used to compute the transmission coefficient $T^2 = \frac{|\psi(\phi_B)|^2}{|\psi(\phi_A)|^2}$, which gives a measure of the decay rate.
In the case corresponding to the right panel of Fig.~\ref{fig:PotentialFlatSpace} we are able to be more precise; as the potential asymptotically goes to zero, we can define the S-matrix for such a system and we will show how to compute the decay rate exactly from the interpretation of a resonance as a complex pole of the S-matrix.\\ Consider the potential in Fig.~\ref{fig:PotentialFlatSpace}. We need to solve the flat space version of Eq.~\eqref{eq:WD}. Now we have a global constraint - classically it is a constraint on the Hamiltonian (rather than on the Hamiltonian density), \begin{equation} H=\int_{X}\left[\frac{\pi_{\phi}^{2}}{2}+\frac{1}{2}\left(\boldsymbol{\nabla}_{x}\phi\right)^{2}+V(\phi)\right]=E \,,\label{eq:Hphi} \end{equation} and hence the Schr\"odinger equation \begin{equation} \int_{X}\left[-\frac{\hbar^{2}}{2}\frac{\delta^{2}}{\delta\phi(x)^{2}}+\frac{1}{2}\left(\boldsymbol{\nabla}_{x}\phi\right)^{2}+V(\phi)\right]\Psi[\phi]=E\Psi[\phi]. \end{equation} Using $G^{\phi\phi}=1$ in Eq.~\eqref{eq:HJ} and Eq.~\eqref{eq:prefactoreqn}, \begin{equation} \int_{X}f[\phi]=\int_{X}\left[\frac{1}{2}\left(\boldsymbol{\nabla}_{x}\phi\right)^{2}+V(\phi)\right]-E\equiv U[\phi]-E,\label{eq:Uphi} \end{equation} so Eq.~\eqref{eq:S0intf} becomes \begin{equation} S_{0}\left[\phi\right]=-2\int^{s}ds'C^{-1}(s')\left[U[\phi_{s'}]-E\right] \,. \end{equation} Note that in order to make these expressions well-defined we choose the field configuration $\phi$ such that at large $|{\bf x}|$ it goes asymptotically to $\phi_{IV}$ rapidly enough to make all the integrals above finite. Alternatively, as we will do in the next section, we can work in a compact space such as a three-sphere.
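To make the functional $U[\phi]$ in Eq.~\eqref{eq:Uphi} concrete, one can discretise it on a spatial lattice; the following sketch does this in one spatial dimension for a hypothetical tilted double-well potential (the potential and all numerical values are illustrative, not taken from the text):

```python
import numpy as np

# One-dimensional lattice version of U[phi] = ∫ dx [ (1/2)(dphi/dx)^2 + V(phi) ];
# the tilted double well and all numbers are illustrative.
a = 0.02                               # lattice spacing
x = np.arange(0.0, 20.0, a)            # box of length L = 20
eps = 0.1
V = lambda p: (p**2 - 1.0)**2 + eps * (p + 1.0) / 2.0   # false vacuum p = +1, true p = -1

def U(phi):
    grad = np.diff(phi) / a            # forward-difference spatial gradient
    return a * np.sum(0.5 * grad**2) + a * np.sum(V(phi))

phi_false = np.ones_like(x)            # homogeneous false-vacuum configuration
phi_wall = np.tanh(x - 10.0)           # profile interpolating to the true vacuum
print(U(phi_false), U(phi_wall))       # the wall costs gradient + barrier energy
```

The homogeneous false vacuum gives $U = L\,V(+1)$, while any interpolating profile pays an additional gradient and barrier cost; it is this excess of $U[\phi]$ over $E$ along a path in field space that enters the under-the-barrier integrals.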
At this point we can write, in general, \begin{align} k & =\sqrt{2\left[E-U[\phi(\tau)]\right]}\,,\qquad {\rm for}\,E>U\left[\phi(\tau)\right] \,,\label{eq:ktau}\\ \kappa & =\sqrt{2\left[U[\phi(\tau)]-E\right]}\,,\qquad {\rm for}\,E<U[\phi(\tau)] \,.\label{eq:kappatau} \end{align} The leading-order wave-functionals in the classically allowed and forbidden regions are related by the WKB matching conditions. Consider the case in which the classically forbidden region is located at $\tau > \tau_0$; then the wave functionals in the classically allowed and in the classically forbidden regions are respectively \begin{eqnarray} \Psi[\phi] &=& \frac{2 A}{\sqrt{k} } \cos\left(\int_\tau^{\tau_0} k(\phi(\tau)) d\tau - \frac{\pi}{4}\right) - \frac{B}{\sqrt{k}} \sin\left(\int_\tau^{\tau_0} k(\phi(\tau)) d\tau - \frac{\pi}{4}\right) \,, \nonumber \\ \Psi[\phi] &=& \frac{A}{\sqrt{\kappa} } \exp\left(-\int_{\tau_0}^{\tau} \kappa(\phi(\tau)) d\tau\right) + \frac{B}{\sqrt{\kappa}} \exp\left(\int^\tau_{\tau_0} \kappa(\phi(\tau)) d\tau \right) \,. \label{eq:WKB1} \end{eqnarray} If the classically forbidden region is located at $\tau < \tau_0$, the wave-functionals are \begin{eqnarray} \Psi[\phi] &=& \frac{A}{\sqrt{\kappa} } \exp\left(-\int_{\tau_0}^{\tau} \kappa(\phi(\tau)) d\tau\right) + \frac{B}{\sqrt{\kappa}} \exp\left(\int^\tau_{\tau_0} \kappa(\phi(\tau)) d\tau\right) \,, \nonumber \\ \Psi[\phi] &=& \frac{2 A}{\sqrt{k} } \cos\left(\int_\tau^{\tau_0} k(\phi(\tau)) d\tau - \frac{\pi}{4}\right) - \frac{B}{\sqrt{k}} \sin\left(\int_\tau^{\tau_0} k(\phi(\tau)) d\tau - \frac{\pi}{4}\right) \,. \label{eq:WKB2} \end{eqnarray} In the above we have only kept the pre-factor corresponding to longitudinal fluctuations of the field, i.e. corresponding to the first term in Eq.~\eqref{eq:S1-2}. The second term, coming from transverse fluctuations, is not explicitly written since it plays no role in the further discussion.
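The $1/\sqrt{\kappa}$ pre-factor and the decaying exponential in Eq.~\eqref{eq:WKB1} can be checked against an exactly solvable example. For a linear potential the Schr\"odinger equation is solved by the Airy function, and deep under the barrier the WKB form reproduces it up to the known constant $1/(2\sqrt{\pi})$; this is a standard single-particle illustration, not a field-theory computation:

```python
import numpy as np
from scipy.special import airy
from scipy.integrate import quad

# Linear barrier U(q) = q/2 with E = 0: the Schrödinger equation
# -(1/2) psi'' + U psi = E psi becomes psi'' = q psi, solved exactly by Ai(q).
kappa = lambda q: np.sqrt(q)       # sqrt(2(U - E)) = sqrt(q) for q > 0

def psi_wkb(q):
    # decaying branch of Eq. (WKB1): exp(-∫_0^q kappa) / sqrt(kappa)
    S, _ = quad(kappa, 0.0, q)
    return np.exp(-S) / np.sqrt(kappa(q))

q = 6.0
ratio = airy(q)[0] / psi_wkb(q)                   # airy(q)[0] is Ai(q)
print(ratio, 1.0 / (2.0 * np.sqrt(np.pi)))        # agree to ~1% deep under the barrier
```

The constant ratio is exactly the normalisation fixed by the matching conditions across the turning point at $q=0$.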
Now recall that the classically allowed and forbidden regions are defined in terms of the potential $U[\phi]$, which depends on the specific path in field space chosen to perform the integration. Hence they cannot be visualised directly in the potentials of Fig.~\ref{fig:PotentialFlatSpace}. However, we can expect that for a generic path in field space $U$ would take a form similar to that shown in Fig.~\ref{fig:PotentialFlatSpace}, with a finite barrier in the middle and infinite barriers on both sides for the case of the left panel of Fig.~\ref{fig:PotentialFlatSpace}, and on the left side only for the case of the right panel of Fig.~\ref{fig:PotentialFlatSpace}; see Fig.~\ref{fig:PotentialUFlatSpace}. At this point we will distinguish between the two cases. \begin{figure}[h!] \begin{center} \includegraphics[scale=0.7]{PotentialUFlatSpace.pdf} \caption{The potential $U[\phi]$ along a path in field space.\label{fig:PotentialUFlatSpace}} \end{center} \end{figure} \subsection{WKB for decay in a two-vacua potential} In the case of the left panel of Fig.~\ref{fig:PotentialUFlatSpace}, we can identify five different regions, three of which are classically forbidden ($1$, $3$ and $5$) while two are classically allowed ($2$ and $4$). We impose that in region $1$ the growing component of the wave-functional is absent, hence $B_1 = 0$ and \begin{equation} \Psi_1[\phi] = \frac{A_1}{\sqrt{\kappa}} \exp\left(- \int_\tau^{a} \kappa d\tau \right) \,. \end{equation} Using Eq.~\eqref{eq:WKB2} we can easily find that \begin{equation} A_2 = \left(\cos\theta e^{-i\pi/4} + \sin\theta e^{i\pi/4}\right) A_1 \,, \qquad B_2 = \left(\cos\theta e^{i\pi/4} +\sin\theta e^{-i\pi/4}\right) A_1 \,, \end{equation} where $\theta = \int_{a}^{b} k d\tau \,, $ so that \begin{equation} \Psi_2[\phi] = \frac{A_2}{\sqrt{k}} \exp\left(i \int_b^\tau k d\tau\right) + \frac{B_2}{\sqrt{k}} \exp\left(-i \int_b^\tau k d\tau\right) \,.
\end{equation} The connection between the regions $2$ and $4$ can be easily found by using the following connection formula, see~\cite{Merzbacher:1998}: \begin{equation} \begin{pmatrix} A_4 \\ B_4 \end{pmatrix} = \frac{1}{2} \begin{pmatrix} \frac{1}{2 \lambda} + 2 \lambda & i \left(\frac{1}{2 \lambda} - 2 \lambda\right) \\ - i \left(\frac{1}{2 \lambda} - 2 \lambda\right) & \frac{1}{2 \lambda} + 2 \lambda \end{pmatrix} \begin{pmatrix} A_2 \\ B_2 \end{pmatrix} \,, \end{equation} where $\lambda = \exp\left(\int_b^c \kappa d \tau\right)$. We find that \begin{equation} \label{eq:A4B4} A_4 = e^{i\pi/4} \frac{A_1}{2 \lambda} (\sin\theta - 4 i \cos\theta \lambda^2) \,, \qquad B_4 = e^{i\pi/4} \frac{A_1}{2 \lambda} (-i\sin\theta + 4 \cos\theta \lambda^2) \,,\nonumber \end{equation} so that the wave-functional takes the form \begin{equation} \Psi_4[\phi] = \frac{A_4}{\sqrt{k}} \exp\left(i \int_c^\tau k d\tau\right) + \frac{B_4}{\sqrt{k}} \exp\left(-i \int_c^\tau k d\tau\right) \,. \end{equation} Using Eq.~\eqref{eq:WKB1} we find that \begin{equation} A_5 = \frac{A_4 e^{i \omega} + B_4 e^{-i\omega}}{2} \,, \qquad B_5 = i(A_4 e^{i\omega} - B_4 e^{-i\omega}) \,,\nonumber \end{equation} where $\omega = \int_c^d k d\tau \, $, so that \begin{equation} \Psi_5[\phi] = \frac{A_5}{\sqrt{\kappa}} \exp\left(-\int_d^\tau \kappa d\tau \right) + \frac{B_5}{\sqrt{\kappa}} \exp\left(\int_d^\tau \kappa d\tau\right) \,. \end{equation} Of course, we need to require that the rising component of the wave-functional is absent in region $5$, namely that $B_5 = 0$. This implies \begin{equation} - 4 \lambda^2 \left(1 + i e^{2 i \omega}\right) \cos\theta + i \left(1 - i e^{2 i \omega}\right) \sin\theta = 0 \,, \nonumber \end{equation} which is satisfied if \begin{eqnarray} \cos\theta &=& 0 \quad \text{and} \quad \omega = -\frac{\pi}{4} + n \pi \quad (n \in \mathbb{Z}) \,, \quad \text{or} \nonumber \\ \sin\theta &=& 0 \quad \text{and} \quad \omega = \frac{\pi}{4} + n \pi \quad (n \in \mathbb{Z}) \,.
\nonumber \end{eqnarray} However, note that if we require that $\sin\theta = 0$, the coefficients $A_4$ and $B_4$ in Eq.~\eqref{eq:A4B4} would be enhanced with respect to $A_2$ and $B_2$ by a factor $\propto \lambda$. If the initial condition of the process is a homogeneous configuration with the field in the false vacuum, we expect that the coefficients $A_4$ and $B_4$ are suppressed with respect to $A_2$ and $B_2$, therefore we need to impose $\cos\theta = 0$. Note that the leading order of the transmission coefficient $T^2 = \frac{|\Psi_4[\phi]|^2}{|\Psi_2[\phi]|^2}$ is given by the factor $1/\lambda^2$ and gives a measure of the decay rate, although in this case it is not possible to formally define it in terms of the S-matrix, unlike the case discussed in the next section. At leading order then \begin{equation} \label{eq:DecayRateWKB} \Gamma \sim T^2 \sim \frac{1}{\lambda^2} \propto \exp\left(-2 \int_b^c\kappa d\tau\right) \,, \end{equation} which is equivalent to the CDL result. It is particularly interesting to notice that the result in Eq.~\eqref{eq:DecayRateWKB} is equivalent to the Euclidean action evaluated on the bounce solution, upon subtraction of the background action. Let us make this statement more explicit: between the two turning points $b$ and $c$, the potential energy is larger than the total energy $E$ of the system. As the total energy is given by the sum of kinetic energy and potential energy, the kinetic energy has to be negative. This can be achieved by rotating the time variable to Euclidean time, which picks up a factor of the imaginary unit and makes the kinetic energy negative. Hence, in the under-the-barrier region (between $b$ and $c$) it is totally justified to rotate to Euclidean time $s = i t$, where $t$ is the usual Lorentzian time. The Euclidean action is \begin{equation} \label{eq:EuclideanAction} S_E[\phi(s)] = \int ds \left[ \int d^3 x \, \left( \frac{1}{2} \left(\frac{d\phi(s)}{ds}\right)^2 \right) + U[\phi(s)]\right] \,. 
\end{equation} The Euclidean energy is conserved, which implies \begin{equation} \label{eq:EuclideanEnergyConservation} \int d^3 x \, \left(\frac{1}{2} \left(\frac{d\phi(s)}{ds}\right)^2\right) - U[\phi(s)] = -E \,, \end{equation} where we take $E = U[\phi_A]$\footnote{Note that in the case of the left panel of Fig.~\ref{fig:PotentialFlatSpace}, $E = U[\phi_A] = 0$.} (although the WKB argument holds for a general $E$), assuming that the initial state is a homogeneous field configuration $\phi = \phi_A$. Using Eq.~\eqref{eq:EuclideanEnergyConservation}, and noting that, in the under-the-barrier region, \begin{equation} \frac{d\tau}{ds} = \sqrt{2(U[\phi(s)] - U[\phi_A])} \,, \end{equation} the Euclidean action in Eq.~\eqref{eq:EuclideanAction} becomes simply \begin{eqnarray} S_E[\phi(s)] &=& \int ds \left[ \int d^3 x \, \left( \frac{1}{2} \left(\frac{d\phi(s)}{ds}\right)^2 \right) + U[\phi(s)]\right] = \nonumber \\ &=& \int ds \left[2 (U[\phi(s)] - U[\phi_A])\right] + \int ds \, U[\phi_A] = \nonumber \\ &=& \int_b^c d\tau \sqrt{2(U[\phi(s)] - U[\phi_A])} + S_E^{\rm back} \,, \label{eq:EuclideanAction2} \end{eqnarray} where $S_E^{\rm back}$ is the background Euclidean action evaluated on the homogeneous solution $\phi = \phi_A$. Therefore \begin{equation} \boxed{\quad S_E[\phi(s)] - S_E^{\rm back} = \int_b^c d\tau \kappa \,,\quad } \end{equation} which shows the equivalence between the Euclidean action evaluated on the bounce solution (upon subtracting the background action) and the usual WKB factor of Eq.~\eqref{eq:DecayRateWKB}. Of course, the use of the Euclidean action is just a trick that sometimes makes the computation easier; it is completely justified quantum mechanically, as it reproduces exactly the WKB result.
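The chain of matching conditions used above (regions $1\rightarrow5$) can be verified numerically. The sketch below propagates the WKB amplitudes through Eq.~\eqref{eq:WKB2}, the connection matrix and Eq.~\eqref{eq:WKB1}, and checks that $B_5$ vanishes exactly on the quantisation condition $\cos\theta=0$, $\omega=-\pi/4+n\pi$ (the values of $\theta$, $\omega$ and $\lambda$ are arbitrary test inputs, not taken from any physical potential):

```python
import numpy as np

def B5(theta, omega, lam, A1=1.0):
    """Propagate the WKB amplitudes through regions 1 -> 5 and return B_5."""
    q = np.exp(1j * np.pi / 4)
    A2 = (np.cos(theta) / q + np.sin(theta) * q) * A1
    B2 = (np.cos(theta) * q + np.sin(theta) / q) * A1
    M = 0.5 * np.array([[1/(2*lam) + 2*lam, 1j * (1/(2*lam) - 2*lam)],
                        [-1j * (1/(2*lam) - 2*lam), 1/(2*lam) + 2*lam]])
    A4, B4 = M @ np.array([A2, B2])
    return 1j * (A4 * np.exp(1j * omega) - B4 * np.exp(-1j * omega))

# On the quantisation condition cos(theta) = 0, omega = -pi/4 + n*pi, B_5 vanishes:
print(abs(B5(np.pi / 2, -np.pi / 4, lam=50.0)))      # ≈ 0
print(abs(B5(np.pi / 2 + 0.3, -np.pi / 4, 50.0)))    # large once off the condition
```

Off the condition, $B_5$ is enhanced by the $\lambda^2$ term, which is the numerical counterpart of the argument used above to discard the $\sin\theta=0$ branch.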
However, there is no intrinsic reason to use the bounce solution for the post-nucleation phase: up to Eq.~\eqref{eq:DecayRateWKB} we have not introduced Euclidean time and the Euclidean action, and these are actually not needed to get to the final result. The quantum mechanics problem only knows about the symmetries that are put in the problem from the very beginning: for instance one can require from the start that the problem has a spherical $O(3)$ symmetry. There is no way for the post-nucleation solution to gain a $\sqrt{t^2 - |x|^2}$ dependence, as is usually obtained by naively rotating the bounce solution back to Lorentzian signature. \subsection{WKB for decay in run-away potential} In the case of the right panel of Fig.~\ref{fig:PotentialFlatSpace} we can give an explicit definition of the decay rate, as we can discuss the question in terms of an S-matrix. We have in mind a state which comes in from the right, tunnels through the barrier, is reflected off the wall on the left and tunnels back through the barrier to give an outgoing state to the right. The resonances that can be formed in this process constitute the decaying state that we are interested in. The S-matrix is a phase ($S=A_{4}/B_{4}$) with complex poles corresponding to the bound states in region 2. To identify the decay widths of the corresponding resonances we first locate the bound states, which correspond to \begin{equation} \cos\theta = 0 \qquad \Rightarrow \qquad \theta = \left(n+\frac{1}{2}\right)\pi \quad (n \in \mathbb{Z})\,. \end{equation} Note that the last expression determines the possible discrete values of the energy $E_n$: in fact the energy appears both in the limits of integration (as it determines the turning points) and in the integrand (see Eq.~\eqref{eq:ktau} and Eq.~\eqref{eq:kappatau}).
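The quantization condition $\theta(E_n)=(n+\tfrac{1}{2})\pi$ can be solved numerically once $U$ is specified. A minimal sketch, using the toy well $U(\tau)=\tau^2/2$ (an illustrative assumption for which WKB happens to be exact, so the roots can be checked against $E_n=n+\tfrac{1}{2}$):

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

# Toy well U(tau) = tau^2 / 2 (illustrative; here WKB is exact, E_n = n + 1/2).
U = lambda tau: 0.5 * tau**2

def theta(E):
    """theta(E) = integral_a^b dtau sqrt(2 (E - U)) between the turning points."""
    r = np.sqrt(2.0 * E)             # turning points a = -r, b = +r
    val, _ = quad(lambda t: np.sqrt(max(2.0 * (E - U(t)), 0.0)), -r, r)
    return val

# Discrete energies from cos(theta) = 0, i.e. theta(E_n) = (n + 1/2) * pi.
levels = [brentq(lambda E: theta(E) - (n + 0.5) * np.pi, 1e-6, 50.0)
          for n in range(3)]
print(levels)
```

For this well $\theta(E)=\pi E$, so the numerical roots sit at $E_n = n + \tfrac{1}{2}$; for a generic barrier potential the same routine yields the quasi-bound energies that feed the resonance analysis below.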
Considering the lowest energy state $E_0$ and expanding around this point we can write (see~\cite{Merzbacher:1998}) \begin{equation} \cos \theta\simeq\mp(E-E_{0})\left(\frac{\partial \theta}{\partial E}\right)\bigg|_{E = E_0} \,, \qquad \sin \theta\bigg|_{E = E_0} \simeq 1 \,, \nonumber \end{equation} and get \begin{equation} S\equiv\frac{A_4}{B_4}=\frac{E-E_{0}-i\left[1/\left(4\lambda^{2}\left(\frac{\partial \theta}{\partial E}\right)\big|_{E = E_0}\right)\right]}{E-E_{0}+i\left[1/\left(4\lambda^{2}\left(\frac{\partial \theta}{\partial E}\right)\big|_{E = E_0}\right)\right]}\equiv e^{2i\phi}=\frac{E-E_{0}-i\Gamma/2}{E-E_{0}+i\Gamma/2} \,, \end{equation} with the decay width given by \begin{equation} \Gamma=\frac{1}{2\lambda^{2}}\left(\frac{\partial \theta}{\partial E}\right)^{-1}\bigg|_{E = E_0} \,. \end{equation} The phase shift $\phi$ then has the standard form \begin{equation} \tan\phi=\frac{\Gamma/2}{E-E_{0}} \,,\nonumber \end{equation} with the lifetime of the resonance given by \begin{empheq}[box=\fbox]{align} & \nonumber \\ \qquad \Gamma^{-1} &=2\lambda^{2}\left(\frac{\partial \theta}{\partial E}\right)\bigg|_{E = E_0} \quad \nonumber \\ \qquad &=\left[\frac{\partial}{\partial E}\int_{a}^{b}d\tau\sqrt{2\left(E-U(\tau)\right)}\bigg|_{E = E_0}\right]2\exp\left[2\int_{b}^{c}d\tau\sqrt{2\left(U(\tau)-E\right)}\right]\,.\quad \nonumber \\ \label{eq:lifetime} \end{empheq} Note that $2\left(\frac{\partial \theta}{\partial E}\right)\bigg|_{E = E_0}$ is the classical period for oscillations between $a$ and $b$, and one could have divided the transition probability, i.e. the WKB factor in Eq.~\eqref{eq:DecayRateWKB}, by this period to get this formula heuristically, while here we have derived it purely from quantum mechanical considerations.
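The lifetime formula Eq.~\eqref{eq:lifetime} combines a classical-period factor with the barrier factor, and both are straightforward quadratures. A numerical sketch, for an illustrative cubic potential $U(\tau)=\tau^2/2-\tau^3/6$ that is not taken from the text:

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

# Toy metastable potential (illustrative): well at tau = 0, barrier top at
# tau = 2 with height U(2) = 2/3, run-away region beyond.
U = lambda tau: 0.5 * tau**2 - tau**3 / 6.0

def turning_points(E):
    """Roots of U(tau) = E for 0 < E < 2/3: a < b bound the well, c ends the barrier."""
    a = brentq(lambda t: U(t) - E, -2.0, -1e-9)
    b = brentq(lambda t: U(t) - E, 1e-9, 2.0)
    c = brentq(lambda t: U(t) - E, 2.0, 4.0)
    return a, b, c

def theta(E):
    a, b, _ = turning_points(E)
    val, _ = quad(lambda t: np.sqrt(max(2.0 * (E - U(t)), 0.0)), a, b)
    return val

# Lowest quasi-bound state from theta(E0) = pi / 2.
E0 = brentq(lambda E: theta(E) - 0.5 * np.pi, 0.01, 0.6)

# d(theta)/dE at E0 (half the classical oscillation period) by central difference.
dtheta_dE = (theta(E0 + 1e-5) - theta(E0 - 1e-5)) / 2e-5

# Barrier factor between b and c, and the resulting width Gamma (Eq. lifetime).
_, b, c = turning_points(E0)
W, _ = quad(lambda t: np.sqrt(max(2.0 * (U(t) - E0), 0.0)), b, c)
Gamma = np.exp(-2.0 * W) / (2.0 * dtheta_dE)
print(E0, Gamma)
```

The lifetime $\Gamma^{-1}$ is by construction longer than the classical oscillation period $2\,\partial\theta/\partial E$, since the barrier factor $e^{2W}$ exceeds one.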
\subsection{Comparison to Coleman's formula} \label{sec:ScalarFieldComparison} We showed above that the exponential term in the life-time of the false vacuum (for the particular case where the energy corresponds to the energy at the minimum of the potential) is the same as in Coleman's calculation. On the other hand the pre-factor is quite different from that in the well-known formula derived by Coleman~\cite{Coleman:1977py, Callan:1977pt}. However the derivation in the latter papers is not quite a straightforward application of WKB quantum mechanics but involves a series of additional assumptions whose status we discuss below. \begin{figure}[h!] \begin{center} \includegraphics[scale=1]{HJ_Potential.pdf} \caption{The effective potential $U$ along the path $\phi_{\tau}$ and for the point particle discussion in Sec.~\ref{sec:ScalarFieldComparison}.\label{fig:BouncePotential}} \end{center} \end{figure} \subsubsection*{Some issues with Coleman's argument} Let us summarise some of the weak points of Coleman's original approach\footnote{See~\cite{Andreassen:2016cvx} for related critical comments.}. \begin{itemize} \item The Hamiltonian $H$ is Hermitian - its eigenvalues are necessarily real. Coleman makes essential use of the heuristic interpretation of $\Gamma$ as the imaginary part of an eigenvalue of $H$ and the dilute gas approximation in order to get the decay rate. The calculation leading to Eq.~\eqref{eq:lifetime} used no such interpretation - $\Gamma$ is extracted from the $S$-matrix for the scattering of a field configuration from a potential which can accommodate quasi-bound states (resonances). \item Coleman's calculation disagrees in the pre-factor with the standard WKB calculation (i.e. essentially Eq.~\eqref{eq:lifetime} with $U$ now being the quantum mechanical potential). In particular Coleman's formula for the decay rate $\Gamma$ involves dividing by the (infinite) translation symmetry of his instanton i.e.
the pre-factor for the transition probability contains a factor $T\rightarrow\infty$ that is divided out to get the rate. In the calculation above there is no such factor and effectively the division is by the classical period for oscillations in the potential well. \item The last factor of Eq.~\eqref{eq:lifetime} is the square of the standard WKB decay amplitude and naturally is the same as in Coleman's formula, together with the identification of the exponent as the difference between the Euclidean classical action for a classical solution with the appropriate boundary conditions and the action for a particle whose position is localised in the well (at its minimum). \item In the field theory case Coleman gives an argument for the dominant contribution to this last factor to come from a Euclidean 4-sphere configuration. However in the actual evaluation of the decay amplitude the thin-wall approximation is used, in which the under-the-barrier region effectively shrinks to a brane. On the other hand propagation in the classically allowed region to the right of the barrier is described in terms of the analytic continuation of this Euclidean instanton. The justification for the latter is unclear. \item Of course one would like to have a quantum field theory argument for a physical picture of a first order phase transition with bubble nucleation and percolation. Coleman's argument is clearly motivated by this and indeed it would be nice to have a rigorous QFT justification for it. Unfortunately we do not see at this point how to achieve this. \end{itemize} \section{Vacuum decay in curved space} \label{sec:DecayCurvedSpace} \subsection{Review of Coleman-De Luccia} \label{sec:CDLReview} The application of Coleman's arguments to the case involving gravity is even more problematic.
Apart from the issues highlighted above there is the problem that the notion of time (and hence that of transition probability per unit time) needs to be reinterpreted, given that the WDW equation does not admit the usual notion of time. Let us first discuss the Euclidean mini-superspace case as given by CDL for the case $dS\rightarrow{\cal M}$ and generalised to $dS\rightarrow dS$ by Parke~\cite{Parke:1982pm}. Putting $t=i\tau$ and $a(t)=\rho(\tau)$ and gauge fixing to $N=1$ we have the metric $ds^{2}=d\tau^{2}+\rho^{2}(\tau)d\Omega_{3}^{2}$. The relevant Euclidean equation of motion then is the $\tau\tau$ component of the Einstein equation which reads ($\rho'=d\rho/d\tau$, etc.) \begin{equation} \rho'^{2}=1+\frac{\rho^{2}}{3}\left(\frac{1}{2}\phi'^{2}-V(\phi)\right).\label{eq:rho} \end{equation} The Euclidean action is \begin{align} S_{E} & =-2\pi^{2}\int_{0}^{\tau_{{\rm max}}}d\tau\left[3\rho+3\rho\rho'^{2}-V\rho^{3}-\rho^{3}\frac{1}{2}\phi'^{2}\right]\nonumber \\ & =-12\pi^{2}\int_{0}^{\tau_{{\rm max}}}d\tau\left[\rho-\frac{1}{3}V\rho^{3}\right]\,.\label{eq:SE} \end{align} In the last step we used the equation of motion in Eq.~\eqref{eq:rho}. In this Euclidean argument this is supposed to be the instanton (bounce) action with $\phi(0)=\phi_{B}$, i.e. the value of the field at the so-called true minimum, and $\phi(\infty)=\phi_{A}$, the value of the field in the false minimum. Of course there is complete symmetry between the two so the bounce action is the same for going from $A\rightarrow B$ or $B\rightarrow A$. The difference between up-tunneling (true to false vacuum) and down-tunneling (false to true) just comes from the fact that the background action which is subtracted to get the tunneling amplitude is different. So in the case of down-tunneling that we will consider here (i.e.
$A\rightarrow B)$ the tunneling amplitude is given by $e^{-B/2}$ where \begin{equation} \frac{B}{2}=S_{E}-S_{E}^{A}\label{eq:B1-1} \end{equation} where the second term is the action for $\phi$ remaining at the false minimum $\phi_{A}$, i.e. \[ S_{E}^{A}=-12\pi^{2}\int_{0}^{\tau_{{\rm max}}}d\tau\left[\rho-\frac{1}{3}V_{A}\rho^{3}\right], \] where $V_{A}=V(\phi_{A})$. Hence we have \begin{align} \frac{B}{2}& =-12\pi^{2}\int_{0}^{\tau_{{\rm max}}}d\tau\left[\rho-\frac{1}{3}V\rho^{3}\right]+12\pi^{2}\int_{0}^{\tau_{{\rm max}}}d\tau\left[\rho-\frac{1}{3}V_{A}\rho^{3}\right]\nonumber \\ & =-12\pi^{2}\int_{0}^{\bar{\tau}-\delta\tau}d\tau\left[\rho-\frac{1}{3}V_{B}\rho^{3}\right]+2\pi^{2}\bar{\rho}^{3}T+12\pi^{2}\int_{0}^{\bar{\tau}-\delta\tau}d\tau\left[\rho-\frac{1}{3}V_{A}\rho^{3}\right]\label{eq:B2-1} \end{align} In the second line we have assumed that beyond the point $\bar{\tau}+\delta\tau$, $V\simeq V_{A}$, so that the contribution from $\bar{\tau}+\delta\tau$ to $\tau_{{\rm max}}$ in the first term of the first line cancels against the second term. Also $T$ in the middle term is defined by \begin{equation} \bar{\rho}^{3}T=2\int_{\bar{\tau}-\delta\tau}^{\bar{\tau}+\delta\tau}d\tau\,\rho^{3}\left(V(\phi(\tau))-V_{A}\right).\label{eq:T} \end{equation} In the second line of Eq.~\eqref{eq:B2-1} we have taken the path in $\tau$ such that for $0<\tau\le\bar{\tau}-\delta\tau$, $\phi$ is held fixed at $\phi_{B}$, while in the interval $\bar{\tau}+\delta\tau\le\tau<\tau_{{\rm max}}$, $\phi=\phi_{A}$. So in the first and third terms in Eq.~\eqref{eq:B2-1} we can replace the integral over $d\tau=\frac{d\tau}{d\rho}d\rho$ using the Euclidean Eq.~\eqref{eq:rho} with $\phi$ fixed\footnote{Although not explicitly stated this seems to have been assumed also in~\cite{Parke:1982pm}.}.
This gives $\frac{d\tau}{d\rho}=\pm1/\sqrt{1-\frac{1}{3}V_{B,A}\rho^{2}}$ in the first and third terms\footnote{In~\cite{Parke:1982pm} only the positive sign is kept here.} so these integrations can be done giving us (in the thin wall limit $\delta \tau\rightarrow0$), \begin{equation} \boxed{\quad \frac{B}{2}=-12\pi^{2}\left[\pm\frac{\left(1-\frac{1}{3}V_{A}\bar{\rho}^{2}\right)^{3/2}-1}{V_{A}}\mp\frac{\left(1-\frac{1}{3}V_{B}\bar{\rho}^{2}\right)^{3/2}-1}{V_{B}}\right]+2\pi^{2}\bar{\rho}^{3}T.\quad }\label{eq:B3} \end{equation} $\bar{\rho}$ is then determined by extremising $B$. Upon substituting this value into the above one then gets the usual expressions, which we will quote later after re-deriving the above without invoking Euclidean arguments with their corresponding interpretational issues. \subsection{Vacuum transitions in mini-superspace} \label{sec:CDLfromWDW} An instructive exercise, which helps in understanding the formalism outlined in Sec.~\ref{sec:WKBandWDW} and shows the differences between the Lorentzian and Euclidean approaches, consists in studying vacuum transitions in a mini-superspace setup that includes a real scalar field. This calculation is a generalization of the ‘tunneling from nothing’ scenario~\cite{Vilenkin:1982de, Hartle:1983ai, Vilenkin:1984wp,Linde:1983mx}. For a recent discussion see for instance~\cite{Kristiano:2018oyv,deAlwis:2018sec,Halliwell:2018ejl}. The metric is \begin{equation} ds^{2}=-N^{2}(t)dt^{2}+a^{2}(t)(dr^{2}+\sin^{2}rd\Omega_{2}^{2}) \,. \label{eq:minimetric} \end{equation} The action (setting $M_p=1/\sqrt{8\pi G}=1$) is given by the sum $S=S_{g}+S_{m}$, where \begin{eqnarray} S_{g} & = & 2\pi^{2}\int_{0}^{1}dt\left(-N^{-1}3a\dot{a}^{2}+3kaN\right) \,,\label{eq:Sg}\\ S_{m} & = & 2\pi^{2}\int_{0}^{1}dt\left(N^{-1}\frac{1}{2}a^{3}\dot{\phi}^{2}-Na^{3}V(\phi)\right) \,. \end{eqnarray} Here $k=\pm1,0$ depending on whether the three-dimensional spatial slice is positively (negatively) curved or flat.
Of course in the open $k=0,-1$ cases the factor $2\pi^{2}$ would have to be replaced by an appropriate compactified volume factor and the spatial metric in Eq.~\eqref{eq:minimetric} would need to be replaced by a flat or hyperbolic metric. Here we will focus on the $k=+1$ case and for convenience we will drop the $2\pi^{2}$ factor in the calculations below and restore it in the expressions for the classical action. We will make some remarks at the end on the other two cases. The canonical momenta are \begin{equation} \pi_{N}=0\,, \qquad \pi_{a}=-N^{-1}6a\dot{a} \,,\qquad \pi_{\phi}=N^{-1}a^{3}\dot{\phi} \,,\label{eq:momenta} \end{equation} and the Hamiltonian constraint is \begin{equation} {\cal H}=N\left(-\frac{\pi_{a}^{2}}{12a}+\frac{\pi_{\phi}^{2}}{2a^{3}}-3a+a^{3}V(\phi)\right)\approx 0 \,.\label{eq:calH} \end{equation} Comparing with Eq.~\eqref{eq:GeneralHamiltonian} we have \begin{gather} G^{aa} = -\frac{1}{6a} \,,\qquad G^{\phi\phi}=\frac{1}{a^{3}},\label{eq:G-1}\\ f(a,\phi) = -3a+a^{3}V(\phi).\label{eq:faphi} \end{gather} Consider a scalar potential with two dS minima in $\phi_A$ and $\phi_B$, with $V(\phi_A) \equiv V_A > V_B \equiv V(\phi_B)$. Then the general shape of the function $f(a, \phi)$ in Eq.~\eqref{eq:faphi} is plotted in Fig.~\ref{fig:Path}. \begin{figure}[h!] \begin{center} \includegraphics[scale=0.3, trim=0cm 2cm 0cm 1.8cm,clip]{Path.pdf} \caption{The general shape of the function $f(a,\phi)$ of Eq.~\eqref{eq:faphi} for a potential with two dS minima.\label{fig:Path}} \end{center} \end{figure} As we emphasised before, the superspace metric is not positive definite in the presence of gravity and this introduces significant differences to WKB type tunneling arguments. As can be seen from this constraint equation, in the absence of the scalar field (this is the `tunneling from nothing' model of~\cite{Vilenkin:1982de, Hartle:1983ai, Vilenkin:1984wp,Linde:1983mx}) one has a barrier at fixed $\phi$ for the scale factor $a$ when $a<\sqrt{3/V}$, whereas if the geometry is fixed (as in the original investigations of Coleman et al.)
then there is a barrier for $\phi$ when $V$ is greater than its value at the point where $\pi_{\phi}$ becomes zero. However it is clear that these simple situations are not the only possibilities when both $a$ and $\phi$ are present. The following is to our knowledge the first time this more complex situation is discussed. \subsection{Recovering Coleman-De Luccia} \label{sec:RecoveringCDL} It is instructive to try and recover the CDL expression in Eq.~\eqref{eq:B3} using the formalism of Sec.~\ref{sec:WKBandWDW}. Hopefully, this might help us in better understanding CDL. Let us choose the deformation parameter $s$ (in analogy with the Euclidean time $\tau$) such that from some initial value (say at $s=0$) to the point $\bar{s}-\delta s$ the field $\phi$ remains very close to $\phi_{B}$, and for points $s_{\max}>s>\bar{s}+\delta s$, $\phi$ becomes close to $\phi_{A}$. It should be emphasised here that $s$ has nothing to do with real time - it is simply a deformation parameter that parametrises the path of integration. The range $\bar{s}-\delta s<s<\bar{s}+\delta s$ is the transitional region where in effect CDL used the thin wall approximation. Thus the classical action (see Eq.~\eqref{eq:S0intf}) is \begin{small} \begin{eqnarray} S_{0}(a_{0},\phi_{B};a_{{\rm max}},\phi_{A}) & = & -12\pi^{2}\int_{0}^{s_{{\rm max}}}ds'C^{-1}(s')\left(-a+a^{3}\frac{V(\phi)}{3}\right)\nonumber \\ & = & -12\pi^{2}\int_{0}^{\bar{s}-\delta s}ds'C^{-1}(s')\left(-a+a^{3}\frac{V_{B}}{3}\right)-12\pi^{2}\int_{\bar{s}-\delta s}^{\bar{s}+\delta s}ds'C^{-1}(s')\left(-a+a^{3}\frac{V(\phi)}{3}\right)\nonumber \\ & & -12\pi^{2}\int_{\bar{s}+\delta s}^{s_{{\rm max}}}ds'C^{-1}(s')\left(-a+a^{3}\frac{V_{A}}{3}\right).\label{eq:SAB1} \end{eqnarray} \end{small} Here in the first term we have used the fact that $\phi$ remains constant and equal to $\phi_{B}$, while in the last term it remains equal to $\phi_{A}$. This path corresponds to the Euclidean path chosen by CDL and Parke.
Thus in the Lorentzian case this action corresponds to ‘tunneling from nothing’, as in the Hartle-Hawking/Vilenkin-Linde~\cite{Vilenkin:1982de, Hartle:1983ai, Linde:1983mx} wave function of the universe arguments, essentially keeping $\phi$ fixed, to the potentially emergent state $B$ (the true vacuum in CDL's language), then making a transition to the state $A$ (where both $\phi$ and $a$ can change), that then emerges as the classical background space time. This is then to be compared to the situation where the state $A$ emerges from a ‘tunneling from nothing’ process. The latter gives an action \begin{equation} S_{0}(a_{0},\phi_{A};a_{{\rm max}},\phi_{A})=-12\pi^{2}\int_{0}^{s_{{\rm max}}}ds'C^{-1}(s')\left(-a+a^{3}\frac{V_{A}}{3}\right).\label{eq:SA} \end{equation} Now we have from Eq.~\eqref{eq:dtauds} \begin{equation} \left(\frac{d\tau}{ds}\right)^{2}=-6a\left(\frac{da}{ds}\right)^{2}+a^{3}\left(\frac{d\phi}{ds}\right)^{2}=-2C^{-2}(s)(-3a+a^{3}V(\phi))\label{eq:dtauds-1} \,. \end{equation} Thus as long as $a^{2}<3/V(\phi)$, for a ``time-like'' trajectory in field space $C^{2}=-1$. In particular this would be the case for $d\phi/ds=0$. For the moment though we will leave this undetermined. The transition probability is given by (ignoring the pre-factors for the moment), \begin{equation} P(A\rightarrow B)=\bigg|\frac{\Psi(a_{0},\phi_{B};a_{{\rm max}},\phi_{A})}{\Psi(a_{0},\phi_{A};a_{{\rm max}},\phi_{A})}\bigg|^{2}=\bigg|\frac{\alpha e^{iS_{0}(a_{0},\phi_{B};a_{{\rm max}},\phi_{A})}+\beta e^{-iS_{0}(a_{0},\phi_{B};a_{{\rm max}},\phi_{A})}}{\alpha e^{iS_{0}(a_{0},\phi_{A};a_{{\rm max}},\phi_{A})}+\beta e^{-iS_{0}(a_{0},\phi_{A};a_{{\rm max}},\phi_{A})}}\bigg|^{2}\equiv e^{-B}\label{eq:PAB} \,. \end{equation} The dominant term in this ratio will be exponentially larger than the subdominant terms, so the latter may be safely ignored.
\begin{eqnarray} \frac{B}{2} & = & iS_{0}(a_{0},\phi_{B};a_{{\rm max}},\phi_{A})-iS_{0}(a_{0},\phi_{A};a_{{\rm max}},\phi_{A}) \label{eq:B1} \\ & = & -12\pi^{2}i\int_{0}^{\bar{s}-\delta s}ds'C^{-1}(s')\left(-a+a^{3}\frac{V_{B}}{3}\right)-12\pi^{2}i\int_{\bar{s}-\delta s}^{\bar{s}+\delta s}ds'C^{-1}(s')\left(-a+a^{3}\frac{V(\phi)}{3}\right)\nonumber \\ & & -12\pi^{2}i\int_{\bar{s}+\delta s}^{s_{{\rm max}}}ds'C^{-1}(s')\left(-a+a^{3}\frac{V_{A}}{3}\right)+12\pi^{2}i\int_{0}^{s_{{\rm max}}}ds'C^{-1}(s')\left(-a+a^{3}\frac{V_{A}}{3}\right) \,, \nonumber \end{eqnarray} where we have chosen to keep $C$ - so the choice of phase is so far undetermined. After some cancellations this may be rewritten as \begin{align} \pm \frac{B}{2} & =-12\pi{}^{2}i\int_{0}^{\bar{s}-\delta s}ds'C^{-1}(s')\left(-a+a^{3}\frac{V_{B}}{3}\right)+12\pi^{2}i\int_{0}^{\bar{s}-\delta s}ds'C^{-1}(s')\left(-a+a^{3}\frac{V_{A}}{3}\right)\nonumber \\ & +2\pi^{2}\bar{a}^{3}T.\label{eq:B2} \end{align} where we have defined the tension $T$, in analogy with Eq.~\eqref{eq:T}, as the contribution to the action coming from the portion of the path such that $d\phi/ds \neq 0$ \begin{equation} 2\pi^{2}\bar{a}^{3}T=12\pi^{2}i\int_{\bar{s}-\delta s}^{\bar{s}+\delta s}ds'C^{-1}(s')\left(a^{3}\frac{V(\phi)-V_{A}}{3}\right)\label{eq:T2}\,. \end{equation} Note that, although the contribution in Eq.~\eqref{eq:T2} is similar to Eq.~\eqref{eq:T}, there is no physical wall in the process that we are considering, which preserves the full $O(4)$ symmetry of the minisuperspace model. So far we have not made any approximation. The terms in the first line of Eq.~\eqref{eq:B2} will now be evaluated (as in the corresponding Euclidean case) keeping $\phi$ constant. So we may use $\frac{da}{ds'}=\pm\sqrt{1-\frac{V_{A,B}}{3}a^{2}}$ (see Eq.~\eqref{eq:dtauds-1} with $d\phi/ds=0$, which implies $C^{2}=-1$).
We will also assume that the last term of Eq.~\eqref{eq:B2} is integrated over a time-like path in field space - so that we can choose $C^{2}=-1$ along this path as well, which requires of course that $da/ds$ is non-zero along this path. Hence \begin{equation} S_{0}^{A,B}=\pm i12\pi^{2}\int_{0}^{a}daa\sqrt{\left(1-a^{2}\frac{V_{A,B}}{3}\right)}=\mp i12\pi^{2}\frac{1}{V_{A,B}}\left\{ \left(1-a^{2}\frac{V_{A,B}}{3}\right)^{3/2}-1\right\} \label{eq:SA2} \end{equation} Thus we have (putting $a(\bar{s}\pm\delta s)=\bar{a}\pm\delta a$) \begin{eqnarray} \pm \frac{B}{2} & = & 12\pi^{2}\left\{ \frac{1}{V_{B}}\left[\left(1-\left(\bar{a}-\delta a\right)^{2}\frac{V_{B}}{3}\right)^{3/2}-1\right]-\frac{1}{V_{A}}\left[\left(1-\left(\bar{a}-\delta a\right)^{2}\frac{V_{A}}{3}\right)^{3/2}-1\right]\right\} \quad \nonumber \\ \quad & + & 2\pi^{2}\bar{a}^{3}T\,. \end{eqnarray} \subsection{Concrete cases} \label{sec:ThinWallApproximation} Let us now consider several cases to identify the transition amplitudes using our formalism. \begin{itemize} \item{\bf Thin wall CDL}. Let us try to recover the CDL result in the thin wall approximation, evaluating the last equation in this limit. In this case the potential essentially reduces to a brane, so that effectively we may write the argument of the integral in Eq.~\eqref{eq:T2} in the limit $\delta s\rightarrow0$ as a delta function. Then the logarithm of the transition amplitude becomes \begin{align} \pm \frac{B}{2} & =12\pi^{2}\left\{ \pm\frac{1}{V_{B}}\left[\left(1-\bar{a}^{2}\frac{V_{B}}{3}\right)^{3/2}-1\right]\mp\frac{1}{V_{A}}\left[\left(1-\bar{a}^{2}\frac{V_{A}}{3}\right)^{3/2}-1\right]\right\} \nonumber \\ & +2\pi^{2}\bar{a}^{3}T \,.\label{eq:Bthinwall} \end{align} This is the same expression as Eq.~\eqref{eq:B3}.
It is also clear that this is the sum of two Hartle-Hawking/Vilenkin-Linde terms\footnote{Note that in the thin wall approximation the integrals in Eq.~\eqref{eq:T} and Eq.~\eqref{eq:T2} imply that in the region around $\bar{s}$ the potential takes the form $a^{3}(s)\left(V(\phi(s))-V_{A}\right)=2\bar{a}^{3}T\delta(s-\bar{s})$.} and a term coming from the portion of the path where $d\phi/ds \neq 0$, which in the CDL computation corresponds to the wall tension contribution. However, notice that we recovered the CDL expression for the tunnelling probability, Eq.~\eqref{eq:Bthinwall}, without having to go to Euclidean space, but just using WKB quantum mechanics and assuming that, for the path that extremizes the action, $C^{-1}$ in Eq.~\eqref{eq:T2} is imaginary. Extremizing Eq.~\eqref{eq:Bthinwall} gives the standard (generalised) CDL expression for transitions between two dS spaces (or AdS with appropriate sign changes). The extremum is at \begin{equation} \frac{1}{\bar{a}^{2}}=\frac{V_{B}}{3}+\frac{1}{4}\left(\frac{2}{T}\frac{\Delta V}{3}+\frac{T}{2}\right)^{2}=\frac{V_{A}}{3}+\frac{1}{4}\left(\frac{2}{T}\frac{\Delta V}{3}-\frac{T}{2}\right)^{2},\label{eq:abar} \end{equation} where $\Delta V=V_{A}-V_{B}$. Note that this expression shows that $\bar{a}$ is less than the horizon radius of both $A$ and $B$. Putting this into Eq.~\eqref{eq:Bthinwall} gives the final expression (with $H^{2}\equiv V/3$) \begin{equation} \boxed{\quad B=\pm8\pi^{2}\left[\frac{\left\{ \left(H_{A}^{2}-H_{B}^{2}\right)^{2}+T^{2}\left(H_{A}^{2}+H_{B}^{2}\right)\right\} \bar{a}}{4TH_{A}^{2}H_{B}^{2}}-\frac{1}{2}\left(H_{B}^{-2}-H_{A}^{-2}\right)\right].\quad }\label{eq:Bthinwallextr} \end{equation} This is of course the well-known result. However its derivation and interpretation is quite different from that of CDL (and subsequent work which generalised the CDL result).
Firstly we did not explicitly use Coleman's tunneling formula - instead we directly solved the WDW equation in the classical approximation, as a deformation of the solution where the initial configuration is one in which the fields correspond to a dS space (with vacuum energy $V_{A}$), and compared it to the undeformed configuration. However, there are a few puzzles posed by this calculation as we discuss next. \item{\bf Hartle-Hawking interpretation}. As the last term in Eq.~\eqref{eq:Bthinwall} is positive, it increases the absolute value of $B$ and hence decreases the tunneling probability. On the other hand, one can simply ask what is the relative probability of ‘tunneling from nothing’ to the state $B$ compared to ‘tunneling from nothing’ to the state $A$. In other words one might compute the following ratio \begin{equation} e^{-B} = \frac{\left|\Psi(a_0, \phi_B; a_{\rm max}, \phi_B)\right|^2}{\left|\Psi(a_0, \phi_A; a_{\rm max}, \phi_A)\right|^2} \,, \end{equation} with $a_{\rm max} \geq \max(\sqrt{3/V_A}, \sqrt{3/V_B})$. The result would then be given by the top line of Eq.~\eqref{eq:Bthinwall} (i.e. the two Hartle-Hawking/Vilenkin-Linde terms) with no tension term. Note that, even though the integration of Eq.~\eqref{eq:Bthinwall} in this case extends till $a_{\rm max}$, only the integrals up to $\sqrt{3/V_A}$ and $\sqrt{3/V_B}$ contribute to the real part of $B$. Hence, for instance, the factor $\left(1 - \frac{\bar{a}^2 V_B}{3}\right)^{3/2}$ evaluated at $\bar{a}^2 > 3/V_B$ gives no contribution to $B$ and analogously for the term containing $V_A$ in Eq.~\eqref{eq:Bthinwall}.
Hence the relative probability is now given by $P=e^{-B}$ with \begin{equation} \boxed{\quad B=24\pi^{2}\left\{ \mp\frac{1}{V_{B}}\pm\frac{1}{V_{A}}\right\} .\quad }\label{eq:HHV} \end{equation} This is simply the ratio of the Hartle-Hawking/Vilenkin-Linde (depending on the choice of sign) probabilities for tunneling from ‘nothing’ to the state $B$ compared to tunneling from ‘nothing’ to the state $A$. Note that Eq.~\eqref{eq:HHV} might be interpreted as the tunnelling rate for the transition dS A $\rightarrow$ ‘nothing’ $\rightarrow$ dS B (or the opposite, depending on the choice of the signs). Interestingly, it seems that this would give a greater probability for transition than the CDL calculation. In this connection it should be pointed out that in the Hartle-Hawking case the wave function in the classical region is indeed a superposition of wave functions for expanding and contracting universes. The process we are envisaging may then be thought of as the contracting branch with the field sitting at the $A$ minimum tunneling to ‘nothing’ and then reemerging as an expanding branch in the $B$ minimum, or the reverse. \item{\bf CDL vs Hartle-Hawking}. In the Lorentzian analog of the CDL calculation on the other hand, it is not clear how the true vacuum (i.e. $B$) emerges into the classical region. In CDL, the tunnelling rate is computed with exactly the same integral as in Eq.~\eqref{eq:B1}. However, in CDL this is interpreted as the action for a Euclidean configuration given by a compound state that joins two portions of 4-spheres (corresponding to dS B and dS A) along a 3-sphere (the wall): the $SO(5)$ symmetry of the Euclidean four-dimensional dS is broken to $SO(4)$ by the presence of the ‘Euclidean’ wall. Afterwards, as described in App.~\ref{sec:dSGeometry}, one of the angular variables that implements the $SO(4)$ symmetry is analytically continued and becomes the usual Lorentzian time, breaking the symmetry to $SO(1,3)$.
In this way, the initial under-the-barrier Euclidean configuration (the compound state) becomes the real three-dimensional equal-time slice of the nucleated spacetime. As the continued angular variable is one that preserves the Euclidean $SO(4)$ symmetry, the nucleated spacetime is still a compound state of dS A and dS B: both dS spacetimes enter the classical region and keep evolving according to the classical equations of motion. In our computation, the integral of Eq.~\eqref{eq:B1} (that we chose in order to recover the same CDL expression, and try to give it a Lorentzian interpretation) is associated with a particular path in field space that, starting from the configuration $(a_0, \phi_B)$ (‘B nothing’), leads to the nucleation of a full dS A sphere, i.e. the configuration $(a_{\rm max}, \phi_A)$. This has to be contrasted, as described in Eq.~\eqref{eq:PAB}, with the path that, starting from the configuration $(a_0, \phi_A)$, leads to the same full dS A sphere as above. Essentially we are comparing 3-geometries, which is all one can do in the context of quantum gravity as described by the WDW equation. In our procedure we neither need a dilute gas approximation nor a single negative mode in the spectrum of fluctuations as in the CDL flat space argument. Furthermore, there is no notion of bubble nucleation. Both the numerator and the denominator in Eq.~\eqref{eq:PAB} correspond to spacetime configurations that preserve the $SO(4)$ symmetry. To be more explicit, the portion of the path that corresponds to the dS B is always under the barrier in the Lorentzian case: the computation relies on the fact that $\bar{a}$ in Eq.~\eqref{eq:Bthinwall} after extremizing is located in the under-the-barrier region, so that the integrands of Eq.~\eqref{eq:SA2} give an imaginary contribution and the resulting $B$ is real. In the Lorentzian approach, the dS spacetime B never emerges into the classical region.
Hence, in the present case built on the CDL argument, how $B$ emerges as a classical space-time is not at all clear. Our computation relies on the fact that the path in field space extremising the action can be split as described in Sec.~\ref{sec:RecoveringCDL}, and that it is such that $C^{-1}$ in Eq.~\eqref{eq:T2} is imaginary. If the latter condition holds, the portion of the path such that $d\phi/ds \neq 0$ contributes to the imaginary action, bringing a term which is the analogue of Eq.~\eqref{eq:T} in the Euclidean computation. In general, this is not necessarily the case and one should compute the value of the action for the path that solves the equation of motion. In order to compute the contribution to the imaginary part of the action of a given path, it is sometimes more convenient to use the distance on field space as the parameter $s$, in which case the action for the wave function in the numerator of Eq.~\eqref{eq:PAB} can be written as \begin{eqnarray} \label{eq:GeneralExplicitAction} S_0[a_0, \phi_B; a_{\rm max}, \phi_A] = 2 \pi^2 \int_{a_0, \phi_B}^{a_{\rm max}, \phi_A} \left[-6a da^2 + a^3 d\phi^2\right]^{1/2} \, \left[6a\left(1 - \frac{a^2 V(\phi)}{3}\right)\right]^{1/2} \,. \end{eqnarray} Hence, only the portions of the path such that the product of the two square roots in Eq.~\eqref{eq:GeneralExplicitAction} is imaginary contribute to the imaginary part of the action, and hence to the tunnelling rate. \item{\bf Standard classical paths}. It is interesting to notice that, due to the presence of a non-positive definite superspace metric, it is possible to find a classical solution that connects two dS spacetimes A and B. To illustrate this point, consider first the usual Hartle-Hawking transition, as a special case of Eq.~\eqref{eq:GeneralExplicitAction}. If we include the scalar field, the model becomes richer.
In fact, the first bracket in Eq.~\eqref{eq:GeneralExplicitAction} is proportional to the kinetic terms of Eq.~\eqref{eq:calH} and can become negative without resorting to an imaginary parameter \begin{equation} \frac{1}{2} \left(-6 a \dot{a}^2 + a^3 \dot\phi^2\right) = - \frac{\pi_a^2}{12 a} + \frac{\pi_\phi^2}{2 a^3} = 3 a \left(1 - \frac{a^2 V(\phi)}{3}\right) \,. \end{equation} Below, we show an example in which we observe a classical transition from dS A to dS B in the potential of the left panel of Fig.~\ref{fig:PotentialExample}. The initial conditions are given by $a(0) = \sqrt{3/V_A}$, $\phi(0) = \phi_A = -0.01$, $\dot\phi(0) = 0.2$, so that the field goes over the barrier and starts oscillating in the dS B minimum at $\phi_B = 0.01$, with decreasing amplitude. In the example, the maximum of the barrier is located at $\phi_{\rm max} \simeq -0.00375$ and the value of the potential there is $V_{\rm max} \simeq 0.1068$. The initial kinetic energy necessary for the field to classically overcome the barrier and make the transition possible is roughly given by the height of the barrier $\Delta V = V_{\rm max} - V_A \simeq 0.0068$, so that $\frac{1}{2} \dot\phi^{2} \gtrsim \Delta V$. This is analogous to the recently proposed ‘fly-over’ scenario~\cite{Blanco-Pillado:2019xny}\footnote{We can equally get Hawking-Moss configurations after adjusting some of the parameters.}. Asymptotically, the dS B solution is recovered. In the example we used $V_A = 0.1$ and $V_B = 0.05$. In Fig.~\ref{fig:BracketTerms} we report the values of the brackets in Eq.~\eqref{eq:GeneralExplicitAction}, showing that they change sign at the same time even though the parameter $s$ of the evolution is always kept real. \begin{figure}[h!]
\begin{center} \includegraphics[scale=0.55]{PotentialExample.pdf} \qquad \includegraphics[scale=0.55]{FieldEvolution.pdf} \caption{\footnotesize{The potential in the left panel is a quartic polynomial in which four parameters are fixed by requiring two minima as described in the main text, while the remaining parameter fixes the height of the barrier. The right panel shows the trajectory of the field that starts in the A minimum and oscillates around the B minimum with decreasing amplitude. Asymptotically it goes to rest in the B minimum at $\phi = 0.01$.} \label{fig:PotentialExample}} \end{center} \end{figure} \begin{figure}[h!] \begin{center} \includegraphics[scale=0.55]{ScaleFactor.pdf} \qquad \includegraphics[scale=0.55]{Aderivative.pdf} \caption{\footnotesize{In the left panel we plot the scale factor. In the right panel we plot the derivative of the scale factor with respect to time $t$. After a contracting phase and a few large oscillations, the system settles into a regime of exponential growth once it is in the lower vacuum. } \label{fig:ScaleFactors}} \end{center} \end{figure} \begin{figure}[h!] \begin{center} \includegraphics[scale=0.45]{BracketTerms.pdf} \caption{\footnotesize{One curve represents the term $\frac{1}{2} \left(a^3 \dot\phi^2 - 6 a \dot{a}^2\right)$, while the other represents $1- \frac{a^2 V(\phi)}{3}$.}\label{fig:BracketTerms}} \end{center} \end{figure} \item{\bf Non-standard classical paths}.
The Hamiltonian and momentum constraints can be expressed as the standard FLRW equation: \begin{equation} \left(\frac{\dot a}{a}\right)^2=\frac{8\pi G}{3}\left(\frac{1}{2}\dot\phi^2+V(\phi)\right)-\frac{1}{a^2}\label{eq:hamiltonian} \end{equation} and the scalar field equation \begin{equation} \ddot\phi+3\left(\frac{\dot a}{a}\right)\, \dot\phi + V'(\phi)=0\label{eq:scalarequation} \end{equation} We want to explore whether there are paths in the $\phi,a$ space that can connect the two sides of the barrier for a scalar potential with more than one minimum. A simple path is one starting with $a=$ constant. In this case the FLRW equation (Hamiltonian constraint) becomes: \begin{equation} \frac{1}{2}\dot\phi^2+V(\phi)=\frac{3}{8\pi Ga^2}\equiv E \end{equation} which is the same as the energy conservation equation for a particle of energy $E$. As long as $E>V$ the field passes classically over the barrier with no need of tunnelling. Note that this happens only for a closed universe. Note also that the scalar field equation is satisfied automatically, as usual for energy conservation. This is Einstein's static universe solution. For non-static cases, the standard classical solution would correspond to an initial velocity $\dot\phi$ big enough that the kinetic energy can overcome the difference in potential energies to cross the barrier. In a cosmological setting this would correspond to the standard scenarios such as Hawking-Moss and the fly-over of the previous bullet point. However, trajectories in the $(a(t), \phi(t))$ space can be more general than this. Due to the negative signature of the superspace metric there is no need to climb the potential energy barrier, since we can move in the 2d $(a, \phi)$ space.
For this, note that in Eq.~\eqref{eq:hamiltonian} the curvature term adds a negative term to the scalar potential and therefore if the scale factor decreases with time the initial speed needed to cross the barrier is smaller than that required to compensate for the potential difference. This can also be seen from Eq.~\eqref{eq:scalarequation} since for a contracting universe $H<0$ the `friction' term $3H\dot\phi$ has the opposite sign, accelerating the field $\phi$ rather than slowing it down. This means that the classical path would have a contracting phase while the scalar field climbs over the barrier and then starts expanding after the transition. A sequence of contracting and expanding classical paths connecting different vacua would seem to be generic in a multi-minima scalar potential and could provide the basis of a novel scenario for early universe cosmology beyond old and new inflation. For a recent discussion of bouncing cosmologies see for instance~\cite{Gungor:2020fce}. \subsubsection*{A toy model} To identify the classical path in a more concrete way, let us consider the simplest square scalar potential. \begin{figure}[h!] \begin{center} \includegraphics[scale=0.3,trim=3cm 12cm 0cm 5cm,clip]{SquarePotentialb.pdf} \caption{Square potential.\label{fig:squarepotential}} \end{center} \end{figure} For a square potential as in the figure the second equation simplifies, since $V'=0$. It can then be integrated to give: \begin{equation} \dot\phi=\frac{K}{a^3} \end{equation} with $K$ a constant that will be different for each of the regions $0<\phi<\phi_0$, $\phi_0<\phi<\phi_1$ and $\phi_1<\phi<\phi_2$.
Plugging this in the first equation (and setting $8\pi G=1$) we get: \begin{equation} \dot a= \pm\sqrt{\frac{K^2+2Va^6-6a^4}{6a^4}} \end{equation} Taking the ratio of these two equations we get: \begin{equation} \phi= \pm \sqrt{6}K\int \frac{da}{a\sqrt{K^2+2Va^6-6a^4}}\label{eq:phiintegral} \end{equation} where $V=V_F,V_M,V_T$ depending on the region as shown in the figure. In principle this equation defines the path in the $\phi,a$ space. There will be one expression like Eq.~\eqref{eq:phiintegral} for each of the three regions, with the integration constants and the values of the constants $K$ fixed by matching the solutions at the points $\phi_0$ and $\phi_1$. \subsubsection*{A numerical example} Let us consider the scalar potential of Fig.~\ref{fig:PotentialExample} with different initial conditions. Taking $\phi(0) = \phi_A$, $a(0) = 6.5$ and $\dot\phi(0) = 0.1$, we find using the Friedmann equation that $\dot{a}(0) \simeq \pm 0.69$. Let us then consider a contracting universe as initial condition: $\dot{a}(0) \simeq -0.69$. In this case, we get the scale factor and field evolutions in Fig.~\ref{fig:NonStandardFieldEvolution} and Fig.~\ref{fig:EvolutionFieldSpace}. Note that this is a different scenario with respect to standard paths through the barrier such as Hawking-Moss and `fly-over' decay, as the initial kinetic energy ($\frac{1}{2} \dot\phi^{2} \simeq 5 \times 10^{-3}$ in the present example) is smaller than the barrier height ($\Delta V \simeq 6.8 \times 10^{-3}$). This illustrates explicitly our claim that due to the non-positive nature of the superspace metric there are more configurations connecting the two different vacua. \begin{figure}[h!] \begin{center} \includegraphics[scale=0.55]{ScaleFactorNonStandard.pdf} \qquad \includegraphics[scale=0.59]{FieldNonStandard.pdf} \caption{\footnotesize{In the left panel we plot the scale factor, which initially shrinks but then starts growing.
In the right panel we plot the value of the scalar field with respect to $t$. Initially the field starts oscillating around the higher vacuum, then it bounces between the two vacua and finally it settles in the lower vacuum.} \label{fig:NonStandardFieldEvolution}} \end{center} \end{figure} \begin{figure}[h!] \begin{center} \includegraphics[scale=0.8]{EvolutionFieldSpace.pdf} \caption{\footnotesize{We show the evolution of the system in field space: after a contracting phase (blue), the scale factor starts expanding (red) and the system gets trapped in the vacuum B. The black line corresponds to the locus $a^2 = 3/V(\phi)$, hence it is the locus where the function $f(a,\phi) = 0$, see Fig.~\ref{fig:Path}.} \label{fig:EvolutionFieldSpace}} \end{center} \end{figure} \end{itemize} \section{Open or Closed Universe?} \label{sec:OpenClosed} As we have seen in the previous sections, following a Lorentzian approach the spacetime geometry is established from the beginning. For simplicity and computational control we started with a closed universe and the end result is then also a closed universe with a $k=1$ FLRW metric\footnote{Note that this is also the starting point in the Hartle-Hawking and Vilenkin approaches towards defining the wave function of the universe. The fact that closed universes are finite whereas flat and open universes have infinite volume makes closed universes better suited to define the probabilities associated with wave functions.}: \begin{equation} ds^2= -dt^2+a^2(t) d\Omega_3^2 \end{equation} which clearly differs from the open universe conclusion of CDL. This metric is by construction Lorentzian and has the $O(4)$ symmetry corresponding to a closed universe with no need of any analytic continuation. For $a\propto \cosh(\lambda t)$ this corresponds to the $SO(4,1)$ invariant closed dS universe in global coordinates. In this case the natural foliation corresponds to horizontal surfaces of constant time.
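As a quick numerical cross-check (ours, not part of the original derivation), one can verify that $a(t)=\cosh(\lambda t)/\lambda$ indeed solves the closed-universe Friedmann equation $\dot a^2=\lambda^2 a^2-1$, i.e.\ Eq.~\eqref{eq:hamiltonian} for a constant potential with $\lambda^2=8\pi G V/3$:

```python
import math

# Minimal sketch (ours): check that a(t) = cosh(lam*t)/lam satisfies
# the closed (k = +1) Friedmann equation adot^2 = lam^2 * a^2 - 1,
# i.e. global de Sitter space in the closed slicing.
lam = 0.7  # arbitrary Hubble rate chosen for the test

def a(t):
    return math.cosh(lam * t) / lam

def adot(t):
    return math.sinh(lam * t)  # exact derivative of a(t)

for t in [-2.0, -0.5, 0.0, 1.3, 3.0]:
    lhs = adot(t) ** 2
    rhs = lam ** 2 * a(t) ** 2 - 1.0
    assert abs(lhs - rhs) < 1e-9 * (1.0 + abs(rhs))
```

At $t=0$ the scale factor sits at its minimum $a=1/\lambda$, the throat of the dS hyperboloid, consistent with the horizontal constant-time foliation described above.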
Here it is important to remark that dS space allows open, flat and closed slicings, see Fig.~\ref{fig:slicings}. Therefore, geometrically all slicings are equally allowed. Which foliation is preferred depends on the coupling to matter. For instance, slices of constant inflaton field would naturally determine the proper time slicing and fix the curvature of the expanding universe within the nucleated bubble. In CDL this fixes the open universe, but only after the analytic continuation under which the original $O(4)$ symmetry becomes the $O(3,1)$ of the open slicing. In the Lorentzian mini-superspace approach that we have followed here, the original $O(4)$ symmetry remains and implies the closed slicing. Next we will see that this remains true beyond mini-superspace, as in the FMP Hamiltonian approach to quantum transitions. \begin{figure}[h!] \begin{center} \includegraphics[scale=0.8, trim=0cm 0.5cm 0cm 0.5cm,clip] {DeSitterSlicing2.pdf} \caption{Penrose diagrams for dS space with slicings corresponding, from left to right, to closed, flat and open foliations respectively. Notice that the horizontal closed universe slicing is global.\label{fig:slicings}} \end{center} \end{figure} \subsection{Beyond mini-superspace} We have seen that our Lorentzian treatment led naturally to a closed slicing of dS space, contrary to the CDL arguments. However, this may be an artefact of using the mini-superspace approximation, in which the metric is only a function of time and there is no concrete description of the emergence of a bubble. In the Euclidean approach, even though the original calculations are also in mini-superspace, the presence of the bubble and its spacetime trajectory after tunnelling is obtained from the proposed analytic continuation. In the Lorentzian approach there is at present no explicit formalism to describe the quantum transitions between different vacua of a scalar field potential.
However, in the thin wall approximation, in which the relevant quantities are the vacuum energies of the two vacua, the Hamiltonian formalism developed by Fischler, Morgan and Polchinski~\cite{Fischler:1990pk} can be used. In this case the spherically symmetric metric depends on both time and the radial coordinate. This allows us to describe the dynamics of the wall and its trajectory. Here we recall the basics of this approach, which, like the Lorentzian mini-superspace model, naturally implies a closed universe slicing of the spacetime as seen by an observer inside the bubble. The spherically symmetric metric takes the form: \begin{equation} ds^2=-N_t^2(t,r)dt^2+L^2(t,r)(dr+N_r(t,r)dt)^2+R^2(t,r)d\Omega_2^2 \end{equation} with $N_t,N_r$ the lapse and shift functions respectively and $d\Omega_2^2$ the line element for the 2-sphere. The system consists of two dS spaces with cosmological constants $\Lambda_I, \Lambda_O$ separated by a wall of tension $\sigma$ at $r=\hat r$. The bulk and boundary actions are the standard gravitational ones and the matter action is given by the two cosmological constants, so the total action is: \begin{equation} S=\frac{1}{16\pi G}\int_{\mathcal M} d^4x\sqrt{-g}\mathcal{R}+\frac{1}{8\pi G}\int_{\partial \mathcal M} d^3y\sqrt{-h} K+S_M+S_W, \end{equation} where $K$ is the extrinsic curvature of the wall and \begin{eqnarray} S_M &= & -4\pi \int dt dr L N_t R^2\left(\Lambda_O\theta(r-\hat r)+ \Lambda_I \theta(\hat r-r) \right), \nonumber \\ S_W &= & -4\pi T\int dt dr \delta(r-\hat r)\left[N_t^2-L^2(N_r+\dot{\hat{r}})^2\right]. \end{eqnarray} In the above we defined $T\equiv 4\pi G\sigma$.
Following the standard Dirac prescription for this Hamiltonian system, the Hamiltonian and momentum constraints can be found and the matching conditions at the wall lead to an equation for the wall trajectory of the form: \begin{equation} \dot{\hat{R}}^2+V=-1; \qquad V=-\frac{\hat{R}^2}{R_0^2}, \label{rdot} \end{equation} where $\hat R=R(\hat r)$ and $R_0$ is the turning point: \begin{equation} R_0^2=\frac{4T^2}{\left[(H_O^2-H_I^2)^2+2T^2(H_O^2+H_I^2)+T^4 \right]}, \end{equation} with $H^2_{I,O}=8\pi G \Lambda_{I,O}/3$. The classical trajectory of the wall is then given by: \begin{equation} R(t)=R_0\cosh\frac{t}{R_0} \,. \end{equation} The quantum probabilities are determined from the solutions of the WDW equation $\mathcal{H}\Psi=0$, with ${\mathcal P}$ the relative probability of the configuration of the two dS spaces and the wall compared to that for just one dS: \begin{equation} {\mathcal P}({\rm dS}\rightarrow{\rm dS}/\rm{dS}\oplus W)=\frac{|\Psi({\rm dS}/\rm{dS}\oplus W)|^2}{|\Psi(\rm{dS})|^2} \,. \end{equation} The detailed calculation using the WKB method, including a discussion of the matching of the under-the-barrier wave function to that in the classical region, is given in~\cite{deAlwis:2019dkc} and the result reproduces the standard exponential factor $e^{-B}$ with $B$ given by Eq.~\eqref{eq:Bthinwallextr}. This provides yet another Lorentzian way to derive the same decay rate. But contrary to the mini-superspace approach, the presence of the wall and its classical trajectory after the transition are made quite explicit. The wave function takes the general form \begin{equation} \label{eq:GeneralWF} \Psi = a e^{I} + b e^{- I} \,, \end{equation} where, given a configuration with action $S$, we have denoted the combination $iS = I$ and the action $S$ is evaluated on a classical solution.
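Two properties of this solution can be confirmed with a short numerical sketch (ours, not part of the text): that $R(t)=R_0\cosh(t/R_0)$ solves $\dot{\hat R}^2=\hat R^2/R_0^2-1$, which is Eq.~\eqref{rdot}, and that in the decoupling limit $G\to 0$ the turning point reduces to the flat-space value $R_0\to 3\sigma/(\Lambda_O-\Lambda_I)$ quoted later in the text.

```python
import math

# Sketch (ours): conventions as in the text, H^2 = 8*pi*G*Lambda/3, T = 4*pi*G*sigma.
def R0(G, sigma, lam_out, lam_in):
    HO2 = 8 * math.pi * G * lam_out / 3
    HI2 = 8 * math.pi * G * lam_in / 3
    T = 4 * math.pi * G * sigma
    denom = (HO2 - HI2) ** 2 + 2 * T ** 2 * (HO2 + HI2) + T ** 4
    return math.sqrt(4 * T ** 2 / denom)

# (i) R(t) = R0*cosh(t/R0) solves Rdot^2 = R^2/R0^2 - 1.
r0 = R0(G=1.0, sigma=0.05, lam_out=2.0, lam_in=1.0)
for t in [0.0, 0.1, 0.2, 0.4]:
    R = r0 * math.cosh(t / r0)
    Rdot = math.sinh(t / r0)  # exact derivative of R(t)
    assert abs(Rdot ** 2 - (R ** 2 / r0 ** 2 - 1.0)) < 1e-9 * (1.0 + Rdot ** 2)

# (ii) decoupling gravity: R0 -> 3*sigma/(Lambda_O - Lambda_I) as G -> 0.
flat = 3 * 0.05 / (2.0 - 1.0)
assert abs(R0(G=1e-8, sigma=0.05, lam_out=2.0, lam_in=1.0) - flat) < 1e-6 * flat
```

The hyperbolic trajectory is the Lorentzian analogue of the Euclidean bounce, with the wall momentarily at rest at $t=0$ where $\hat R=R_0$.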
The total action away from the turning point (but still under the barrier) is \begin{small} \begin{equation} \begin{gathered} \label{eq:B1i} I_{\rm tot} = \, \frac{\pi}{4H_{\rm I}^{2}}\left[1-\epsilon(\hat R^{\prime}_{-}) \frac{2}{\pi} \left(\cos^{-1}\left(\frac{\hat R}{R_{\rm o}}\sqrt{1 - H_{\rm I}^{2} R_{\rm o}^{2}}\right)\right)\left(\frac{R_{\rm o}^{2} - \hat R^{2}}{R_{\rm o}^{2} - \hat R^{2}(1 - H_{\rm I}^{2} R_{\rm o}^{2})}\right)^{3/2}\right] - \\ - \frac{\pi}{4 H_{\rm O}^{2}} \left[1 - \epsilon(\hat R^{\prime}_{+}) \left(2 + \frac{2}{\pi} \cos^{-1}\left(\frac{\hat R}{R_{\rm o}}\sqrt{1-H_{\rm O}^{2} R_{\rm o}^{2}}\right)\right)\left(\frac{R_{\rm o}^{2}-\hat R^{2}}{R_{\rm o}^{2} - \hat R^{2}(1 - H_{\rm O}^{2} R_{\rm o}^{2})}\right)^{3/2} \right] + \\ + \frac{\hat R^{3}}{2 R_{\rm o}} \sqrt{R_{\rm o}^{2} - \hat R^{2}} \left[\frac{(H_{\rm O}^{2} - H_{\rm I}^{2} + T^{2})}{\sqrt{1 - c_{-}^{2}\hat R^{2}}} - \frac{(H_{\rm O}^{2} - H_{\rm I}^{2}-T^{2})}{\sqrt{1 - c_{+}^{2}\hat R^{2}}}\right] - \\ - \left[\frac{H_{\rm O}^{2} - H_{\rm I}^{2} + T^{2}}{4 T H_{\rm I}^{2}} - \frac{H_{\rm O}^{2} - H_{\rm I}^{2} - T^{2}}{4 T H_{\rm O}^{2}}\right] R_{\rm o} \sin^{-1} \left(\frac{\hat R}{R_{\rm o}}\right) \,. \end{gathered} \end{equation} \end{small} The background action is obtained by setting $\hat{r}_{\pm}=0$ in the above expressions (corresponding to having the complete dS space with Hubble parameter $H_{\rm O}$), giving us \begin{equation} \overline{I} = -\frac{\pi}{2 G H_{\rm O}^{2}}\left[(1 - H_{\rm O}^{2} a_{\rm O}^{2})^{3/2} - 1 \right] \,,\label{eq:HHV} \end{equation} which gives the Hartle-Hawking (under the barrier) wave function when substituted into Eq.~\eqref{eq:GeneralWF} with $b=0$, and the Vilenkin version when $b=2ia$.
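A trivial but reassuring numerical check (ours) of the endpoints of $\overline{I}$ in Eq.~\eqref{eq:HHV}: it vanishes at $a_{\rm O}=0$ and equals $\pi/(2GH_{\rm O}^2)$ at the turning point $a_{\rm O}=H_{\rm O}^{-1}$, so with $b=0$ the under-the-barrier wave function interpolates between weight $e^0=1$ and the Hartle-Hawking weight $|\Psi|^2\sim e^{\pi/(GH_{\rm O}^2)}$.

```python
import math

# Sketch (ours): endpoints of the background action Ibar(a_O) of Eq. (eq:HHV).
def Ibar(aO, G, HO):
    return -math.pi / (2 * G * HO ** 2) * ((1 - HO ** 2 * aO ** 2) ** 1.5 - 1)

G, HO = 1.0, 0.5  # arbitrary test values
assert Ibar(0.0, G, HO) == 0.0                 # 'nothing': weight e^0 = 1
turning = Ibar(1 / HO, G, HO)                  # turning point a_O = 1/H_O
assert abs(turning - math.pi / (2 * G * HO ** 2)) < 1e-12
# with b = 0, |Psi|^2 ~ exp(2*Ibar) reaches the Hartle-Hawking factor
assert abs(2 * turning - math.pi / (G * HO ** 2)) < 1e-12
```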
Now as pointed out in~\cite{Blau:1986cw}, when the background geometry is a black hole there are two classically allowed regions (I and III) and one classically forbidden region (II), as in the usual tunnelling problem in quantum mechanics, discussed for instance in Sec.~\ref{sec:FlatSpace}. Classically, the wall expands (or contracts) up to a classical turning point and then re-collapses (or re-expands), but quantum mechanically it can tunnel under the barrier and resurface after the second turning point. In the dS to dS case however there is no region I~\cite{Bachlechner:2016mtp,deAlwis:2019dkc} and the situation, as discussed in more detail in~\cite{deAlwis:2019dkc}, is similar to tunnelling from `nothing'. In either case the WDW equation has two independent solutions as in Eq.~\eqref{eq:GeneralWF} in each of these regions that need to be matched at the classical turning points. In the dS to dS case there is just the one turning point, and the coefficients $a$ and $b$ determine the two coefficients in region III through the WKB matching conditions. In `tunnelling from nothing' discussions one usually imposes an additional boundary condition, either the outgoing wave condition of Vilenkin or the real wave function (coming from the so-called `no-boundary' condition) of Hartle and Hawking. However, if no such condition is imposed, in general one of the two solutions in Eq.~\eqref{eq:GeneralWF} will dominate in each region, depending on the sign of the real part of the action $I$.
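The dominance statement can be made concrete with a toy numerical check (ours): once ${\rm Re}\,I$ is a few units, $|a e^{I}+b e^{-I}|^2$ is controlled by the growing branch, with the remaining pieces entering only at relative order $e^{-2{\rm Re}\,I}$.

```python
import cmath, math

# Toy check (ours): for Re(I) > 0 the e^{+I} branch of Psi = a e^{I} + b e^{-I}
# dominates |Psi|^2 for order-one coefficients a, b.
a, b = 1.0 + 0.3j, 0.7 - 0.2j   # arbitrary order-one coefficients
I = 6.0 + 1.234j                # Re(I) = 6 is already enough

psi = a * cmath.exp(I) + b * cmath.exp(-I)
dominant = abs(a) ** 2 * math.exp(2 * I.real)

# cross and subleading terms are suppressed by exp(-2*Re(I))
assert abs(abs(psi) ** 2 - dominant) / dominant < 10 * math.exp(-2 * I.real)
```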
\noindent The ratio \begin{equation} \label{eq:GeneralDefProbability} \mathcal{P}(\mathcal{B} \rightarrow \mathcal{N}) = \left|\frac{\Psi_{\mathcal{N}}}{\Psi_{\mathcal{B}}}\right|^2 \,, \end{equation} gives the probability of finding the system in the `nucleated' state $\mathcal{N}$ versus being in the `background' state $\mathcal{B}$. Notice that the two states $\mathcal{B}$ and $\mathcal{N}$ do not always have the meaning of `initial' and `final' states (see Sec.~\ref{sec:Introduction}): the transition can be clearly interpreted as happening in time if there is an initial classical motion of the bubble wall. In the cases in which there is no initial classical motion of the wall the interpretation is less clear. For this reason we will refer to the state $\mathcal{B}$ as the `background' spacetime, instead of the initial spacetime. Given that this is a relative probability, it does not have to be smaller than one, and we avoid the problem of the normalisation of wave functionals. The semi-classical wave function in the region III (i.e. the classical region where the brane spontaneously emerges from `nothing') is obtained by analytically continuing the expression in Eq.~\eqref{eq:B1i}.
The denominator in Eq.~\eqref{eq:GeneralDefProbability} corresponds to the emergence from `nothing' of dS space with the `background' radius $H_O^{-1}$ and the relative probability is given by \begin{align} \label{eq:Pout} & \mathcal{P}_{{\rm {\rm out}}}(\mathcal{B} \rightarrow \mathcal{N}) = \frac{|\Psi^{{\rm out}}_{{\cal N}}|^{2}}{|\Psi^{{\rm out}}_{{\cal B}}|^{2}} = \\ & = \frac{|a+i\frac{b}{2}|^{2} e^{2 \text{Re}\left(I_{\rm out}(\hat{R})\right)} + |a-i\frac{b}{2}|^{2}e^{-2 \text{Re}\left(I_{{\rm out}}(\hat{R})\right)}+2\text{Re}\left[(a+i\frac{b}{2})(a^{*}-i\frac{b^{*}}{2})\right]e^{2i\text{Im} \left(I_{\rm out}(\hat{R})\right)}}{|a+i\frac{b}{2}|^{2}e^{2 \text{Re}\left(I_{{\rm out}}(0)\right)} + |a-i\frac{b}{2}|^{2} e^{-2 \text{Re}\left(I_{{\rm out}}(0)\right)} + 2 \text{Re}\left((a+i\frac{b}{2})(a^{*}-i\frac{b^{*}}{2})\right) e^{2i \text{Im} \left(I_{{\rm out}}(0)\right)}} \,.\nonumber \end{align} As argued in~\cite{deAlwis:2019dkc}, the dominant contribution comes from the ratio of the first terms in the numerator and denominator and gives precisely the expression that was obtained by the generalisations of CDL~\cite{Weinberg:2012pjx} and by Brown and Teitelboim~\cite{Brown:1988kg}, namely the expression in Eq.~\eqref{eq:Bthinwallextr}. However it is important to emphasise that, even though the final exponential term for the relative probability is the same as that obtained by the Euclidean instanton/dilute-instanton-gas method, the expression before minimising with respect to the wall radius, i.e. Eq.~\eqref{eq:Bthinwall}, is very different from the (analytic continuation of the) expression in Eq.~\eqref{eq:B1i}.
This suggests that the Lorentzian continuation of the CDL or BT Euclidean argument is, strictly speaking, just the mini-superspace calculation of Sec.~\ref{sec:CDLfromWDW}, whilst the next in order of complication, namely the spherically ($S_2$) symmetric Lorentzian calculation~\cite{deAlwis:2019dkc} based on~\cite{Fischler:1990pk}, gives a completely different amplitude, even though the exponential term is in agreement. Furthermore, as pointed out in Sec.~\ref{sec:FlatSpace}, even in the flat space case the direct WKB calculation gives a different pre-factor. Finally, it is interesting to remark that in the Hamiltonian approach of~\cite{Fischler:1989se,Fischler:1990pk}, in order to compare their results with the Euclidean approach of FGG~\cite{Farhi:1989yr}, the authors describe a canonical Euclideanisation of their approach by working on a static path and determining the relevant functions, such as $R$ and $L$, in terms of the bubble location $\hat R$. Then they use $\hat R$ as the parameter that plays the role of Euclidean time in the Euclidean formalism. By doing this they successfully explain the Euclidean results of FGG in terms of a singular instanton that in the Hamiltonian approach corresponds to well-behaved geometries. This Euclideanisation essentially corresponds to the standard $t\rightarrow it$ Wick rotation and does not correspond to the analytic continuations performed by CDL. \subsection{Classical bubble trajectory after nucleation} Now we want to determine the Penrose diagram for the trajectory of the bubble after nucleation. The equations of motion for the wall are given by the junction conditions, which are obtained by embedding the wall coordinates in dS spacetime. Assuming rotational invariance, the metric of the wall is given by \begin{equation} ds^2_\Sigma=-d\tau^2+R^2(\tau)\, d\Omega^2 \,.
\label{eq:wall_metric} \end{equation} We will first choose static coordinates for dS, \begin{equation} ds^2=-(1-H^2r^2)\, dt^2+\frac{dr^2}{1-H^2r^2}+r^2\, d\Omega_2^2 \,, \end{equation} where $H^{-1}$ is the radius of dS. In these coordinates the wall radius is given by $r(\tau)=R(\tau)$. The junction conditions are \begin{equation} (K^+)^{ i}_{\ j}-(K^-)^{ i}_{\ j}=-4\pi G \sigma\delta^{i}_{\ j} \,, \end{equation} where $K_{ij}^\pm$ is the extrinsic curvature at each side and $\sigma$ is the tension of the wall. In order to compute the extrinsic curvature it is helpful to use Gaussian normal coordinates, in which $K_{ab}$ takes the simple form \begin{equation} K_{ab} = - \Gamma^n_{ab} = - \frac{1}{2} \partial_n g_{ab} = -\frac{1}{2} n^\mu \partial_\mu g_{ab} \,, \end{equation} where $n^\mu$ denotes the unit vector orthogonal to the wall and $g_{ab}$ is the induced metric at the wall. The whole computation then boils down to calculating the normal vector $n^\mu$ in an appropriate coordinate system, from which we can compute the extrinsic curvature on the two sides of the wall and then enforce the junction conditions. To do so, let us first denote the four-velocity of a point on the wall as $U^\mu$. In the static patch coordinate system, due to the spherical symmetry of the wall we have \begin{equation} U^\mu_S = \left(\dot{t}_{dS},\dot{R},0,0\right) \,, \end{equation} where $\dot{}$ denotes the derivative with respect to proper time. Note that the normalisation of the four-velocity $g_{\mu \nu} U^\mu_S U^\nu_S = - 1$ implies the following relation between $\dot{t}_{dS}$ and $\dot{R}$: \begin{equation} \label{eq:TdRdS} (1-H^2 R^2) \dot{t}_{dS}^2 = 1 +(1-H^2 R^2)^{-1} \dot{R}^2 \,. \end{equation} To compute the normal vector to the wall, first notice that it is orthogonal to the four-velocity $U^\mu$, i.e. $g_{\mu \nu} n^\nu U^\mu_S = 0$.
This condition, together with the normalisation $g_{\mu \nu} n^\mu n^\nu = 1$, implies \begin{equation} n^\mu = \left((1-H^2 R^2)^{-1} \dot{R}, \pm \sqrt{1-H^2 R^2 + \dot{R}^2},0,0\right) \,. \end{equation} One can now compute the junction condition. The $\theta$ component gives, \begin{equation} \sqrt{1-H_+^2 R^2+\dot R^2}-\sqrt{1-H_-^2 R^2+\dot R^2}=4\pi G\sigma R \,, \end{equation} where the subscript $\pm$ indicates the side of the wall where we are evaluating. After some manipulation this equation leads to Eq.~\eqref{rdot}, and so to a solution for the radius of the wall $R$. One also has the $\tau$ component of the extrinsic curvature, $K_{\tau\tau}=U^{\mu}U^{\nu}\nabla_{\nu}n_{\mu}=-n_\mu U^{\nu}\nabla_\nu U^\mu$, which can be interpreted as the normal acceleration of the wall. This implies that the trajectory followed by the wall is not a geodesic unless $K_{\tau\tau}$ vanishes. This can be evaluated in the static patch coordinates, \begin{equation} K_{\tau\tau}=-\frac{\ddot R-H^2 R}{\sqrt{1-H^2 R^2+\dot R^2}}=-\frac{\sqrt{1-H ^2R_0^2}}{R_0} \,, \label{eq:normalcurvature} \end{equation} where in the last step we have used Eq.~\eqref{rdot}. This last equation implies that the normal acceleration is a non-vanishing constant (since $R_0H<1$). \subsection*{Global slicing\footnote{This part follows the last part of appendix C of~\cite{Blau:1986cw}.}} We will now embed the wall in global coordinates, which are described in Eq.~\eqref{foliation:global}. The metric is given by \begin{equation} ds^2=\frac{1}{H^2\cos^2T}\, \left(-dT^2+d\rho^2+\sin^2\rho\, d\Omega^2\right) \,, \label{eq:globalconformal} \end{equation} where $T$ is conformal time that varies from $-\pi/2$ at $\mathscr{I}_-$ to $\pi/2$ at $\mathscr{I}_+$. Embedding the wall metric of Eq.~\eqref{eq:wall_metric} into the global coordinates means that the wall follows a trajectory $(T(\tau), \rho(\tau))$ with $R(\tau)=H^{-1}\sec T(\tau)\sin\rho(\tau)$.
Also, plugging this back into Eq.~\eqref{eq:globalconformal} we have, \begin{equation} -\dot T^2+\dot\rho^2=-H^2\cos^2 T \,, \end{equation} where dots are derivatives with respect to proper time. Using this relation we find \begin{equation} \dot T^2=\frac{H^2\cos ^2T}{1-\rho'^2},~~\dot {\rho}^2=\frac{H^2\cos ^2T}{1-\rho'^2}\rho '^2 \,, \label{eq:tdot} \end{equation} with $\rho'\equiv d\rho/d T$. To describe the trajectories of the wall in global coordinates we would like to find an expression for $\rho(T)$. Let us start by noticing that \begin{eqnarray} \dot R&=&\frac{1}{H} \frac{d T}{d\tau}\left(\tan T\sec T \sin\rho+\sec T\cos\rho \rho' \right)\nonumber\\ &=& \frac{\tan T\sin\rho+\cos\rho\rho'}{\sqrt{1-\rho'^2}}=\sqrt{\frac{\sin^2{\rho}}{H^2R_0^2\cos^2T}-1} \,, \end{eqnarray} where we have used Eq.~\eqref{eq:tdot} and in the last step Eq.~\eqref{rdot}. The last equality is a first order non-linear differential equation which determines the wall trajectory in conformal coordinates. The solution turns out to be remarkably simple, $\cos\rho=\sqrt{1-H^2R_0^2}\cos T$, as the reader may easily verify. In practice it turns out that to obtain this solution it is more convenient to use Eq.~\eqref{eq:normalcurvature}. This is straightforward since it is also possible to write $\ddot R$ in terms of $\rho$, $T$ and derivatives of $\rho$ with respect to $T$. After substituting into Eq.~\eqref{eq:normalcurvature} we get\footnote{Alternatively we can use that it is possible to write $K_{\tau\tau}=-\frac{\dot\beta}{\dot R}$, with $$\beta\equiv\sqrt{1-H^2R^2+\dot R^2}=\frac{\cos\rho+\sin\rho\tan T\rho'}{\sqrt{1-\rho'^2}}\,.$$} \begin{equation} K_{\tau\tau}=-\frac{\cos (T) \rho''-\sin (T)\rho'^3+\sin (T) \rho'}{\left(1-\rho'^2\right)^{3/2}}H \,. \end{equation} Given that $K_{\tau\tau}$ is constant, this is a second order ODE for $\rho$ as a function of $T$.
Furthermore this expression does not depend explicitly on $\rho$ and can be easily integrated if we rewrite it as, \begin{equation} \frac{\sqrt{1-H ^2R_0^2}}{H R_0}=\cos^2 (T)\frac{d}{dT}\left(\frac{ \sec T \rho'}{\sqrt{1-\rho'^2}}\right) \,, \end{equation} which leads to \begin{equation} \rho'=\pm\frac{\sqrt{1-H^2R_0^2}\sin(T)}{\sqrt{H^2R_0^2+(1-H^2R_0^2)\sin ^2(T)}}\,,\label{eq:rho'} \end{equation} where to fix one integration constant we have imposed $\rho'=0$ at $T=0$, which comes from Eq.~\eqref{rdot}. We keep the positive sign, which corresponds to an increasing wall speed. Notice that $\rho'<1$ and that at $\mathscr{I}_+$, $\rho'=\sqrt{1-H^2R_0^2}<1$. This expression can be integrated to obtain \begin{equation} \boxed{\quad \cos(\rho)=\sqrt{1-H^2R_0^2}\, \cos T \quad} \label{eq:sol1} \end{equation} where we have used that at $T=0$, $R(0)=R_0$, i.e. $\cos\rho(0)=\sqrt{1-H^2R_0^2}$. Eq.~\eqref{eq:sol1} determines the trajectory of the wall in global coordinates. Now let us analyse this expression. First note that the trajectory never crosses the light cone $\rho=T$, since $T<\arccos(\sqrt{1-H^2R_0^2}\cos T)=\rho$. Also note that all trajectories end at $\rho=\pi/2$, since at $T=\pi/2$, $\cos\rho=0$. Moreover the world sheet of the trajectory is a time-like hyperboloid having $SO(3,1)$ invariance. To see this we can substitute Eq.~\eqref{eq:sol1} into the equations for the embedding of global dS from Eq.~\eqref{foliation:global}. Hence the equation for the brane world volume in embedding coordinates is, \begin{eqnarray} X_0&=& H^{-1}\tan T,\nonumber \\ X_1&=&H^{-1} \frac {\cos\rho}{\cos T}=H^{-1}\sqrt{1-H^2R_0^2},\nonumber \\ X_2^2+X_3^2+X_4^2&=&R^2=\frac{\sin^2{\rho}}{H^2\cos^2 T} = \frac{1}{H^2\cos^2T}-\frac{1-H^2R_0^2}{H^2}. \end{eqnarray} Hence we have the equation for the world sheet of the brane, \begin{equation} -X_0^2+X_2^2+X_3^2+X_4^2=R_0^2.
\end{equation}\label{eq:braneworld} This is the equation of a hyperboloid with $SO(3,1)$ symmetry\footnote{In fact one could have guessed the solution simply by demanding that $X_1$ is a constant, since that is the simplest choice for the embedding given that $R^2$ cannot be set to a constant, and then fixing the constant from the fact that at $T=0=X_0$, $R=R_0$.}; in other words it is 3-dimensional dS space with radius $R_0$ and corresponds to the Lorentzian rotation of Coleman's~\cite{Coleman:1980aw} Euclidean bounce solution. In summary, we have explicitly found a closed expression for the classical trajectory of the wall after nucleation, given by Eq.~\eqref{eq:sol1}. It corresponds to an $SO(3,1)$ symmetric hyperboloid. The speed is determined by Eq.~\eqref{eq:rho'}: even though it increases, it never reaches the speed of light ($|\rho'|<1$). However it can easily be seen that if gravity is decoupled ($G\rightarrow 0$) the turning point goes as $R_0\rightarrow 3\sigma/(\Lambda_O-\Lambda_I)$ but $H\rightarrow 0$, and then $\rho'\rightarrow 1$, reproducing the flat space results~\cite{Aguirre:2008wy}. In general, the limiting speed differs from the speed of light by a small amount of order $M^2/M_P^2$ with $M$ the reference scale of the scalar field potential. Finally, from the Penrose diagram it can be seen that the trajectory is such that a signal from the centre of the bubble cannot reach the wall, but in principle radiation from the wall can reach the observer at the centre. \begin{figure}[h!] \begin{center} \includegraphics[scale=0.4,trim=6cm 10cm 10cm 5cm,clip ]{FMPTransition.pdf} \caption{Penrose diagram for the FMP dS to dS transition. The lower part is the universe before the transition. The upper part is the universe after the transition, composed of two regions with different vacua separated by a wall, which is the red line. The equation of the wall is given by Eq.~\eqref{eq:sol1}.
The pale blue region is the part of the universe with the true vacuum, where the green dotted lines are open universe constant time slices and the blue dotted lines are closed universe constant time slices.\label{fig:Potential}} \end{center} \end{figure} \label{sec:LorentzianGeometry} \subsection{Cosmological implications} Let us now compare the cosmological differences between the open universe of CDL and the closed universe obtained here after the tunnelling transition. This revives the old question regarding the spatial curvature of the universe. With inflation this issue is considered less urgent since, whatever the original curvature, a short period of inflation is enough to render the universe essentially flat. However, at least as a question of principle, especially concerning whether the universe is infinite or finite, and also in view of potential observational effects, it is still relevant to address these differences. See for instance references~\cite{Efstathiou:2003hk,Aghanim:2018eyx,DiValentino:2019qzk,Handley:2019tkm, Efstathiou:2020wem, DiValentino:2020hov} for a recent debate regarding current observations. \subsection*{Initial conditions and Inflation} Vacuum decay offers a unique physical mechanism to provide the initial conditions for the evolution of the universe. The initial conditions for the classical cosmological evolution are the configurations of $\phi(t)$ and $a(t)$ right after tunnelling, which we define as $t=0$.
\begin{itemize} \item{\bf Open Universe\footnote{See~\cite{Gott:1982zf,Ratra:1994vw,Bucher:1994gb,Linde:1999wv,Freivogel:2005vv,Bousso:2013uia,Freivogel:2014hca,Bousso:2014jca} for related discussions.}.} The Lorentzian equations of motion after CDL tunneling are those for an open $k=-1$ FLRW model with the standard cosmological evolution for a scalar field with canonical kinetic terms and potential $V(\phi)$, \begin{eqnarray} \left(\frac{\dot a}{a}\right)^2 & =&\frac{8\pi G}{3}\left(\frac{1}{2}\dot\phi^2+V(\phi)\right)+\frac{1}{a^2} \,, \\ 0&=&\ddot{\phi}+3H\dot\phi +V_{,\phi} \,. \nonumber \label{eqn:fieldinflation} \end{eqnarray} Initial conditions consistent with the smoothness of the CDL instanton are \begin{equation} \dot\phi(0)=\phi(0)=0 \,, \qquad a(t)=t +{\mathcal O(t^3)} \,.\nonumber \end{equation} Here $a(0)=0$ is a coordinate singularity. It is clear from Eqs.~\eqref{eqn:fieldinflation} that $\dot a$ cannot be chosen to vanish at $t=0$ for positive potentials. We can see that initially the dynamics is dominated by the curvature term $1/a^2$. The friction term ($3H$) diverges as $t\to 0$, so the field is initially overdamped and $\dot\phi$ does not grow much. From Eq.~\eqref{eqn:fieldinflation} we obtain the $\dot H$ equation for $k=-1$: \begin{equation} \dot H= \frac{\ddot a}{ a} -\left(\frac{\dot a}{a}\right)^2=-4\pi G \dot\phi^2 -\frac{1}{a^2} < 0 \,, \label{acceleration} \end{equation} which guarantees in general that there is no local minimum value for $a(t)$ ($\dot a=0 \implies \ddot a<0$) and that there should be at least one point for which $a=0$ (the coordinate singularity). After a critical time of order $t^*\sim (\alpha V)^{-1/2}$, with $\alpha^{-1}=3/(8\pi G)=3M_p^2$, the potential starts to dominate, and the curvature term becomes less important as the universe expands. This could mark the onset of inflation as long as the slow-roll conditions are satisfied. Achieving this scenario imposes some conditions on the potential. 
In particular, it is required that $V''/V>1$ in order to have a solution that satisfies the instanton boundary conditions~\cite{Jensen:1983ac} (see Fig.~\ref{fig:potential}). Since this is the opposite of the slow-roll condition, the curvature of the scalar potential has to change in order to eventually give rise to inflation. We may be more quantitative and follow the scale factor $a(t)$ and $\phi(t)$ for times smaller than $t^*$. In this case we can expand around the initial point, $V(\phi(0)+\delta\phi) \sim \Lambda+ \beta\delta\phi $ with $\beta=V'(\phi(0)) $, and find \begin{equation} a(t)=\frac{\sinh (\lambda t)}{\lambda} \,, \qquad \lambda^2=\alpha\Lambda=\frac{\Lambda}{3M_p^2} \,, \end{equation} and the scalar field: \begin{equation} \phi(t)=\frac{\beta}{3\lambda^2}\left[\frac{\cosh(\lambda t)-1}{\cosh(\lambda t)+1}+ \log\left(\frac{\cosh(\lambda t)+1}{2}\right)\right]. \end{equation} From here we can explicitly verify that, in the domain of validity of this regime ($t\leq t^*$), the scale factor starts at zero and then increases linearly with time, whereas the scalar field increases as $\phi(t)\lesssim \beta t^2$. For $t>t^*\equiv 1/\lambda$, $\Lambda$ dominates as long as $\beta$ is small enough ($\beta<\Lambda/M_p$) such that $\Lambda$ dominates over both $\beta\phi$ and $\dot\phi^2$. In that case the universe starts a standard inflationary period; otherwise the field rolls fast and, depending on the potential, there may or may not be a standard period of inflation. \begin{figure}[h!] \begin{center} \includegraphics[scale=1,trim=0cm 0.2cm 1cm 0cm,clip]{potential.pdf} \caption{The scalar field potential has a false vacuum at $\phi_f$ and an inflationary region on the right of $\phi_T$. In the standard CDL picture the field tunnels from $\phi_a$ to $\phi_b$.\label{fig:potential}} \end{center} \end{figure} \item {\bf Closed Universe}. 
In the closed universe after tunneling the equations are\footnote{For relatively recent discussions on the cosmology of closed universes see for instance~\cite{White:1995qm, Ellis:2001ym, Linde:2003hc, Uzan:2003nk, Lasenby:2003ur, Masso:2006gv, Bonga:2016iuf, Ratra:2017ezv}.}: \begin{eqnarray} \left(\frac{\dot a}{a}\right)^2 & =&\frac{8\pi G}{3}\left(\frac{1}{2}\dot\phi^2+V(\phi)\right)-\frac{1}{a^2} \,,\\ 0&=&\ddot{\phi}+3H\dot\phi +V_{,\phi} \,. \nonumber \label{eqn:fieldcosmology} \end{eqnarray} The negative sign in the first equation, due to the positive curvature ($k=1$), changes the picture substantially. First note that the initial conditions can be fixed by imposing that at the turning point $\pi_a,\pi_\phi=0$. This implies that the right hand side of the Friedmann equation above vanishes, and the natural initial conditions can be (making a convenient choice for $\phi(0)$)\footnote{There are more general initial conditions for $G_{MN}\pi^N\pi^M\neq 0$, where $G_{MN}$ is the non-positive metric in superspace. As we have described before (see Sec.~\ref{sec:ThinWallApproximation}), when this is the case there are also classical solutions that go from the false to the true vacua. In this case, the field oscillates around the true minimum for a finite time until it reaches the points where $\dot\phi=\dot a=0$, which could be used as initial conditions for the rest of the evolution of the universe.} \begin{equation} \dot\phi(0)=\phi(0)=0\,, \qquad \dot a(0)=0\,. \label{initial} \end{equation} Contrary to the open case, there is no curvature dominated period since the curvature term has at best to balance the energy density term in order to have $(\dot a/a)^2\geq 0$. In particular at $t=0$ we have $a(0)=\sqrt{3/(8\pi G V(0))}\neq 0$: unlike in the open universe case, the scale factor does not vanish. 
Also, the equation for $\dot H$ now reads: \begin{equation} \dot H=\frac{\ddot a}{ a} -\left(\frac{\dot a}{a}\right)^2=-4\pi G \dot\phi^2 +\frac{1}{a^2} \,, \label{acceleration2} \end{equation} which, unlike the open and flat cases, can be positive or negative depending on the relative size of the kinetic terms and the curvature. Note that a positive curvature adds a positive contribution to the equation for $\dot H$. This means in particular that even though the universe is closed it does not necessarily recollapse~\cite{Barrow:1988xi}. Repeating the quantitative analysis as for the open case, assuming the energy density is dominated by $\Lambda$, we find: \begin{equation} a(t)=\frac{\cosh (\lambda t)}{\lambda} \,, \qquad \lambda^2=\alpha\Lambda=\frac{\Lambda}{3M_p^2} \,. \end{equation} Contrary to the open case the scale factor {\it does not} vanish at any point; after nucleation it starts with a minimum value of order $a(0)\geq a_{min}\sim M_p/\sqrt{\Lambda} $, which is large enough to be in the regime for which the classical evolution equations are valid. The scalar field evolves as: \begin{equation} \phi(t)=-\frac{\beta}{3\lambda^2}\left[\log\cosh(\lambda t)-{\rm sech}^2(\lambda t)+1\right] \,. \end{equation} Now the Hubble parameter is $H=\dot a/a=\lambda \tanh(\lambda t)$, and \begin{equation} \frac{\ddot a}{a}=\frac{8\pi G}{3}\left(\Lambda-\dot\phi^2\right) \,, \qquad -\frac{\dot H}{H^2}=-{\rm csch}^2(\lambda t) \,. \end{equation} Unlike the open case there is no critical time before which the curvature term dominates over the $\Lambda$ contribution in the Friedmann equation. But in order for the potential energy to be dominated by $\Lambda$ we need $\beta\phi<\Lambda$, and in order for the kinetic energy to be suppressed with respect to the potential energy we need $\dot\phi^2\ll \Lambda$; both conditions hold for times $t<t^*\left(\Lambda/\beta M_p\right)^2$, after which the universe stops accelerating. 
Both of these conditions are satisfied if $\beta\ll\Lambda/M_p$, which is the equivalent of the slow-roll condition. So we have two different possible outcomes after nucleation depending on the value of $\beta$: a short period of relatively fast roll and a few e-foldings before standard slow-roll inflation, or an inflationary period right after nucleation if $\beta\ll\Lambda/M_p $. The maximum number of e-foldings from this period would be: \begin{equation} N_{max}= \int_0^{t_c} H dt=\log \cosh\left(\frac{t_c}{t^*}\right)\sim \frac{t_c}{t^*}\sim\frac{2}{\epsilon} \,,\qquad t_c=t^*\left(\frac{\Lambda}{\beta M_p}\right)^2 \,, \end{equation} where $\epsilon=M_P^2V_{,\phi}^2/2V^2\simeq M_P^2\beta^2/2\Lambda^2< 1$ is the usual slow-roll parameter. This is the maximum number of e-foldings since at $t=0$ the nucleation happens at the minimum value of $a(t)$, by imposing $\dot a(0)=0$, implying $a(0)=a_{min}=1/\lambda$~\cite{Ellis:2001ym}. Unlike the flat and open cases, in which the initial value of $a$ at the start of inflation can be as small as possible, i.e. as small as the Planck scale $l_P$, in the closed case the existence of a lower bound for $a$, with $a_{min}$ much bigger than the Planck length, implies a much stronger upper bound on the number of e-folds in order to fit the present size of the observable universe $a_0$, which could be estimated if $\delta=|\Omega -1|=1/(a_0H_0)^2 $ is measured: $N_{max}\leq \ln(a_0/a_{min})$. Note that this is independent of the standard argument for $N\sim 60$ setting bounds on $\Omega$ today, which, if measured with enough precision, may differentiate between open and closed universes. 
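The contrast between the open and closed cases can be checked numerically. The sketch below (in arbitrary illustrative units; the value of $\lambda$ is invented and this check is not part of the original analysis) verifies by finite differences that the vacuum-dominated solutions quoted above, $a(t)=\sinh(\lambda t)/\lambda$ (open) and $a(t)=\cosh(\lambda t)/\lambda$ (closed), satisfy the corresponding Friedmann equations $(\dot a/a)^2=\lambda^2\pm 1/a^2$ with $V=\Lambda$, and that the closed-universe scale factor never drops below its minimum $a(0)=1/\lambda$:

```python
import math

# lam^2 = Lambda / (3 M_p^2); the numerical value is an arbitrary illustration.
lam = 0.7
h = 1e-5  # step for central finite differences

def a_open(t):    # open (k = -1): a(0) = 0 is a coordinate singularity
    return math.sinh(lam * t) / lam

def a_closed(t):  # closed (k = +1): a(0) = 1/lam is the minimum size
    return math.cosh(lam * t) / lam

def hubble(a, t):  # H = adot / a via a central difference
    return (a(t + h) - a(t - h)) / (2.0 * h) / a(t)

for t in (0.5, 1.0, 2.0):
    # Friedmann equations with V = Lambda: (adot/a)^2 = lam^2 +/- 1/a^2
    assert abs(hubble(a_open, t) ** 2 - (lam**2 + 1.0 / a_open(t) ** 2)) < 1e-6
    assert abs(hubble(a_closed, t) ** 2 - (lam**2 - 1.0 / a_closed(t) ** 2)) < 1e-6

# Closed case: H = lam * tanh(lam t), and a(t) never drops below 1/lam.
assert abs(hubble(a_closed, 1.0) - lam * math.tanh(lam)) < 1e-6
assert min(a_closed(t) for t in (-1.0, 0.0, 1.0)) >= 1.0 / lam - 1e-12
```

The check makes the qualitative difference explicit: the open solution starts at $a=0$ and is curvature dominated at early times, while the closed solution starts at a finite minimum size, which is what bounds the number of e-folds from above.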
\subsection*{Density Perturbations} The magnitude of density perturbations, measured by $\delta\rho/\rho\sim H^2/\dot\phi $, grows from zero at $t=0$ to order $\delta\rho/\rho\sim \lambda^3/\beta $ close to $t=t_c$, which can be smaller or larger than in standard slow-roll inflation depending on the values of $\beta$ and $\Lambda$\footnote{Note that in the open universe case, this quantity diverges at $t=0$ since $H$ diverges and $\dot\phi(0)=0$ there, and then increases with time.}. If $\beta$ does not satisfy the slow-roll condition then $N=\int_0^{t^*} H dt=\log \cosh 1\sim 1$. Therefore it is clear that in this case the scalar potential will need to flatten out through an inflection point in order to have an adequate period of inflation afterwards. In both cases (slow or fast roll after bubble nucleation) the fact is that $k=1$ not only provides initial conditions for inflation but also contributes to the density perturbations, since in general the presence of curvature provides a new scale. It affects the power spectrum in the sense that the long wavelength modes, which exit the horizon during the early stages of inflation, may carry imprints of the spatial curvature, whereas the short wavelength modes, which exit the horizon later, are not affected by the spatial curvature. This can also be seen if we compute the power spectrum for the inflationary perturbations. As is well known, the scalar field during inflation produces an adiabatic power spectrum of scalar and tensor perturbations. Whereas for flat universes the power spectrum is nearly scale invariant, this is not the case for open and closed universes. As the long wavelength modes leave the horizon carrying the imprint of the curvature, they deviate from scale invariance at large scales. For small curvature $\Omega_k=k/(a_0H_0)^2 <1$ the power spectrum can be written as \footnote{A more appropriate treatment of the power spectrum after tunneling needs to take into account the fluctuations of the wall. 
In the case of open inflation these translate into an excited initial state which also implies deviations from scale invariance~\cite{Bucher:1994gb}. For the closed universe solution obtained in Sec.~\ref{sec:CDLfromWDW} we leave the analysis of the inflationary perturbations for future work.}~\cite{Cheung:2007st,Creminelli:2013cga}, \begin{equation} P(q)=A_s q^{(n_s-1)-3}\left(1-\frac{19}{8}\frac{k}{q^2}\right)+\mathcal{O}(\Omega_k^2) \,, \label{eq:inflpowerspectrum} \end{equation} where $A_s$ is the amplitude of the scalar fluctuations and $q$ is the comoving wavenumber. We have also neglected self-interactions of the curvature perturbation. From Eq.~\eqref{eq:inflpowerspectrum} we can read off that at large scales, or smaller $q$, the power spectrum is suppressed for a closed universe ($k=+1$), but is enhanced at the same scales for an open universe. These deviations from scale invariance can have an effect on the CMB. At large scales the main contribution to the angular power spectrum $C_l$ is given by the Sachs-Wolfe effect\footnote{For a derivation of the effect of the primordial power spectrum on the Sachs-Wolfe effect see formula (2.6.19) of~\cite{Weinberg:2008zzc}} \begin{equation} l(l+1)C_l=\frac{4\pi}{25}A_s\left(1-\frac{19}{8}\frac{kr_L^2}{3(l-1)(l+2)}\right) \,,\label{eq:Cls} \end{equation} where $r_L$ is the radial coordinate of the surface of last scattering. Then we see that at linear order in the curvature there is a suppression/enhancement of the low $l$ modes of the temperature anisotropy of the CMB depending on the sign of $k$. Note that this computation assumes that the only contribution from the curvature comes from inflation and, although this is not quite accurate, it works as a qualitative approximation. Another point is that the effect described is model independent but, as we mentioned before, there are other signatures, depending on each model, that may have important consequences. 
For example, in the case of open inflation after CDL there is a fast-roll phase before slow roll; the authors of~\cite{Freivogel:2005vv,Bousso:2013uia} have argued that, because of an anthropic bound on the duration of inflation, the fast-roll phase translates into a potentially observable suppression of the low $l$ modes of the CMB. This implies that a negatively curved universe can have an effect which is indistinguishable from a positively curved universe, so in order to break this degeneracy it might be necessary to study higher order correlation functions of cosmological observables. At small scales it can be seen from Eq.~\eqref{eq:inflpowerspectrum} that the power spectra coincide. For a closed universe, then, the CMB power spectrum at large angles or small multipoles is suppressed with respect to the standard flat $\Lambda$CDM model. Since the effect through inflation is only present at large scales, the power spectrum coincides with the flat case for large multipoles ($\ell \gtrsim 30$). This is the regime that has been tested most successfully, although recently several articles have found some evidence for the closed universe inflationary model from the latest CMB observations~\cite{DiValentino:2019qzk,Handley:2019tkm,Efstathiou:2020wem}. \end{itemize} \subsection*{Observational Implications and the String Landscape} A closed universe after bubble nucleation may have important observational implications and would radically affect the dynamics of the string landscape. Let us list some of them. \begin{itemize} \item {\bf General Prediction}. Due to the richness of the string landscape, it has been a serious challenge to identify concrete and general predictions that could be tested with the potential to rule out the landscape paradigm. 
The standard belief that bubble nucleation after vacuum decay gives rise only to an open universe has been identified as the most concrete general prediction that could be subject to experimental test at some point. However, if the outcome is a closed universe, the prediction would be exactly the opposite. At the moment we cannot rule out the possibility that an open universe could also be allowed. Furthermore note that, in principle, the idea behind the landscape is that universes are continuously produced from a series of quantum tunnelings among the many different vacua. If a parent universe is closed and a daughter universe is open, this chain of universe creation would not be possible. However, if both parent and daughter universes are closed, this is natural. Furthermore, as emphasized before in this work, the string theory landscape is a result of brane nucleation rather than a CDL-like process, and should be treated as in FMP. \item{\bf Bubble collisions}. In the landscape picture, the continuous creation of inflationary bubbles may give rise to the possibility of bubble collisions that may have left some imprint in the CMB, gravitational waves, etc. See for instance~\cite{Kleban:2011pg, Aguirre:2009ug} for an overview. One important aspect of the treatment of bubble collisions is the symmetry after the collision. Assuming open universes, each bubble's spatial sections have an $SO(3,1)$ symmetry which breaks to $SO(2,1)$ after the collision. Numerical relativity techniques have been developed during the past few years to address this problem (see for instance~\cite{Johnson:2015gma} and references therein). If the universe is closed the situation differs substantially: the finiteness of the volume of spatial sections affects the probability of collisions, and the natural symmetry breaking would be from the $SO(4)$ symmetry of the corresponding three-spheres to $SO(3)$. This should modify the description of bubble collisions. 
A detailed discussion of this interesting effect lies beyond the scope of this article. \item{\bf Beyond bubbles and CDL}. The natural outcome of a phase transition, such as vacuum decay, is through the nucleation of bubbles of the new vacuum. Bubbles are required if the universe has infinite volume, as in flat and open spacetimes. If the spacetime is closed there is the possibility that the full space, and not only a region within it, can change to the new vacuum with non-vanishing probability, since the volume is finite. Note also that, as we have seen in the previous section, since in mini-superspace the kinetic energy for the scale factor is negative whereas that for the scalar field is positive (Eq.~\eqref{eq:calH}), the Hamiltonian constraint ${\cal H}=0$ allows the system to pass classically through the barrier of the scalar potential. This appears neither without gravity, nor in the Euclidean approach, nor in the Hartle-Hawking case without scalar fields. This would then be the leading contribution to the ratio of probabilities for the creation of each of the two dS spaces. Even though these conclusions may be artefacts of the mini-superspace approximation, they would affect the structure of the landscape and deserve further study. \item{\bf Number of e-folds}. For the open universe case, reference~\cite{Freivogel:2005vv} extracted a lower bound on the number of e-folds during inflation, $N>59.5$, similar to the observed bound $N>62$ (modulo logarithmic corrections due to the different epochs of matter/radiation domination after inflation). For $k=1$, as we discussed before, there is an {\it upper} bound on the number of e-folds (again within $\Delta N\sim 2$ e-folds with respect to the observed one)~\cite{Uzan:2003nk}. 
The reason is, as mentioned above, that for the closed universe there is a minimum value of the scale factor, $a_{min}\sim M_p/\sqrt{\Lambda}$, which already starts large (contrary to the flat and open cases, in which $a\sim 0$ before inflation and there is in principle no limit on the maximum number of e-folds before reproducing the standard cosmology after inflation). This tends to favour concave models of inflation and disfavour models, such as chaotic inflation, that may have an essentially unlimited number of e-folds. \item{\bf Power suppression}. As we have seen, both open and closed universes introduce a new cosmological scale, the curvature, and affect the density perturbations observed from the CMB. The net effect is a suppression of the power spectrum at large angles or small multipoles. This effect may have two different origins, one due to the curvature and the other if there is a period of fast rolling before inflation. \end{itemize} \section{Conclusions} We have presented here an extension of the standard quantum-mechanical WKB approximation to field theory and gravity. Expanding on previous approaches, we developed a general geometric formalism to extend the WKB approximation to wave functions in Wheeler's superspace and found explicit expressions for vacuum transition probabilities, interpreted as ratios of the squares of wave functions of the two different configurations, for both field theory and gravity. In field theory we presented explicitly two different cases with two vacua. The first is the standard scalar potential with two minima at finite field values; in the second, the second minimum corresponds to a runaway. In both cases we reproduce the standard Coleman results at leading order. In the second case we found the (potential) bound states and corresponding resonances to provide the explicit expression for the decay rate, Eq.~\eqref{eq:lifetime}, which agrees with the Euclidean approach at leading order but differs in the pre-factor. 
We further emphasised that our approach is the natural generalisation of the standard quantum mechanical WKB calculation and does not suffer from subtle issues such as the handling of negative modes or trusting the dilute instanton approximation. In the gravity case, our results can be summarised in two directions. On the one hand, they confirm the results obtained by Euclidean methods. In particular we provide a Lorentzian perspective for the estimation and interpretation of the decay rates and the wall trajectory after tunneling, illustrating the validity of the Euclidean techniques, at least as far as getting the leading exponential behavior goes. On the other hand we point out that there are substantial differences from the Euclidean approach. In particular for the mini-superspace case we found that: \begin{itemize} \item Selecting a very particular path we can reproduce exactly the decay rate as computed by CDL; however, the interpretation of this path is not at all clear. \item The natural transition rate gives simply the ratio of the two corresponding Hartle-Hawking wave functions, without a tension term as in Eq.~\eqref{eq:HHV}, which gives a dominant decay rate as compared to CDL. It seems then that this mini-superspace approach is actually a generalisation of Hartle-Hawking/Vilenkin/Linde transitions rather than of CDL. \item The fact that the metric in superspace is not positive definite, as manifested for instance in the Hamiltonian constraint Eq.~\eqref{eq:calH}, allows for classical paths to connect the two dS vacua without the need to pass through or across the barrier. This will not have an exponentially suppressed decay rate, but it depends on the initial conditions. \item The natural dS foliation corresponds to the global slicing, which leads to a closed universe as in Hartle-Hawking/Vilenkin-Linde, unlike the open universe claimed by CDL. Here, recall that the driving argument was based on the analytic continuation triggered by the argument of the scalar field. 
Surfaces of constant $\phi$ would give rise to the hyperbolic foliation of spacetime. However, as noted already in~\cite{Coleman:1980aw}, in the extreme thin-wall approximation we only have two values of the cosmological constant and there is no preferred dS space foliation. This means that already in the Euclidean framework described for instance in~\cite{Brown:1988kg}, horizontal slices of dS could have been chosen for the regions inside and outside the bubble, giving a closed universe. \item In mini-superspace there is no way to discuss bubble nucleation and therefore the transition should be interpreted as between two entire dS spaces. This computation really makes sense only for $k=1$ since the spatial volume in global slicing is finite. A full description beyond mini-superspace is needed in order to properly include the bubble. In the Hamiltonian approach to quantum tunneling developed in~\cite{Fischler:1989se,Fischler:1990pk} it is clear that the geometry inside and outside the bubble is also that of a closed universe~\cite{deAlwis:2019dkc}. In this approach the metric depends not only on time but also on the radial coordinate $r$. This reinforces our conclusion. This formalism only compares the transition between two different cosmological constants without considering a scalar field with the corresponding potential. This corresponds to the extreme thin-wall approximation or brane nucleation in string theory. A full Hamiltonian approach including the scalar field dynamics is not yet available. \item Our approach and that of~\cite{Fischler:1989se,Fischler:1990pk} started with a spherically symmetric geometry that is natural to describe the bubble after the transition, but in principle we could have chosen a flat or negatively curved spacetime to start with. The fact that these spaces have infinite volume renders the volume integrals problematic. 
For instance, in Eq.~\eqref{eq:Sg} the $2\pi^2$ factors come from integrating the volume of the three-sphere, which in the flat and open cases would diverge. This would require a proper volume regularisation before extracting physical information. Similar volume integrals for closed slicings appear in the approach of~\cite{Fischler:1989se,Fischler:1990pk}. Furthermore, in~\cite{Fischler:1989se,Fischler:1990pk} a prescription is provided to Euclideanise their results: the corresponding analytic continuation is simply $t\rightarrow it$, which is not the analytic continuation proposed by CDL. \item We believe that the possibility that the geometry of the bubble after nucleation corresponds to a closed FLRW universe, contrary to CDL, should be seriously considered. Indeed this is the natural implication of brane nucleation in string theory, as we have argued in this paper. This may have important physical implications if our universe is described in terms of a bubble after vacuum transitions, as discussed at the end of the previous section. Besides the deep implications of having a finite rather than an infinite universe, with a finite number of stars and galaxies, this may eventually be tested if there is a definite way of determining the curvature of the universe. For the string theory landscape, it will at least eliminate the standard claim that detecting a closed universe would rule out the multiverse. These are important cosmological questions that deserve further scrutiny. \end{itemize} \section*{Acknowledgements} We thank Stefano Ansoldi, John Bohn, Cliff Burgess, Jim Hartle, Veronica Pasquarella, Jorge Santos, Andreas Schachner and Alexander Westphal for useful discussions. We also thank Andreas Schachner for comments on the manuscript. We thank Thomas Hertog for comments on a previous version. The work of SC is supported by MCIU (Spain) through contract PGC2018-096646-A-I00 and by the IFT UAM-CSIC Centro de Excelencia Severo Ochoa SEV-2016-0597 grant. 
FM is funded by a UKRI/EPSRC Stephen Hawking fellowship, grant reference EP/T017279/1 and partially supported by STFC consolidated grant ST/P000681/1. The work of FQ has been partially supported by STFC consolidated grants ST/P000681/1, ST/T000694/1.
1905.00048
\section*{Abstract} We derive the properties and demonstrate the desirability of a model-based method for estimating the spatially-varying effects of covariates on the quantile function. By modeling the quantile function as a combination of I-spline basis functions and Pareto tail distributions, we allow for flexible parametric modeling of the extremes while preserving non-parametric flexibility in the center of the distribution. We further establish that the model guarantees the desired degree of differentiability in the density function and enables the estimation of non-stationary covariance functions dependent on the predictors. We demonstrate through a simulation study that the proposed method produces more efficient estimates of the effects of predictors than other methods, particularly in distributions with heavy tails. To illustrate the utility of the model we apply it to measurements of benzene collected around an oil refinery to determine the effect of an emission source within the refinery on the distribution of the fence line measurements. \onehalfspacing \section{Introduction} Quantile regression offers an important alternative to traditional mean regression for problems where the interest lies not in the center of the distribution but in some other aspect. Since the first quantile regression paper was published by \cite{koenker1978regression}, an immense body of literature has been developed and is reviewed in \cite{koenker2005quantile}. \cite{yu2001bayesian} proposed a form of Bayesian quantile regression employing the Asymmetric Laplace Distribution (ASL) as the working likelihood, due to its similarity to the check loss function used by \cite{koenker1978regression}. Both of these approaches perform separate analyses for each quantile level of interest. When quantiles are estimated separately, there is no guarantee of a valid non-decreasing quantile function. There are several approaches to address this issue. 
The first one is a two-stage method: in the first stage the quantiles are fit separately using one of the above methods, and in the second stage the estimates are smoothed to ensure monotonicity. This approach has been taken by a variety of authors, including \cite{neocleous2008monotonicity}, \cite{rodrigues2016regression}, and \cite{reich2012bayesian}, who used it as a more computationally efficient Bayesian spatial method. \cite{bondell2010noncrossing} embed a constraint that ensures monotonicity into the minimization problem, while \cite{cai2015estimation} use prior specifications to ensure constraints in the Bayesian framework. The final approach, which we will adopt and extend, is to model the entire quantile function jointly using basis functions. This is the approach taken by \cite{reich2012bayesian} and others \citep{reich2012spatiotemporal, smith2015multilevel} and is more naturally implemented in a Bayesian framework. Regardless of the approach taken, ensuring monotonicity requires either some form of distributional assumption, or constraints on the quantile regression coefficients and the parameter space of the predictors. \cite{cai2015estimation} demonstrated that when predictors are constrained to be positive, the quantile function is monotonic for every possible predictor value if and only if the basis functions are monotonic. This is the approach taken by Zhou et al. (\citeyear{zhou2011calibration, zhou2012}), who first proposed the I-spline quantile regression model whose properties we derive in this paper. As in mean regression, a method of incorporating spatial correlation into quantile regression is to model spatially-varying parameters using Gaussian process priors. \cite{lum2012spatial} use the ASL for the likelihood and incorporate spatial correlation by modeling the error as a function of a Gaussian process and an independent and identically distributed exponential random variable. 
For large datasets they propose an asymmetric Laplace predictive process, extending the method introduced by \cite{banerjee2008gaussian}. However, the use of the ASL does not allow for valid posterior inference because it does not represent the true likelihood of the observations. \cite{yang2015quantile} combined spatial priors with their Bayesian empirical likelihood approach for modeling the conditional quantiles in the presence of both predictors and spatial correlation, but their method only allows for effects to be estimated at a small fixed number of quantile levels. Several previous methods of modeling a spatially varying conditional quantile function using basis functions have also been advanced \citep{reich2012bayesian, reich2012spatiotemporal}. We consider the model first proposed by Zhou et al. (\citeyear{zhou2011calibration, zhou2012}) where the quantile function is modeled as a combination of I-splines and the Generalized Pareto Distribution (GPD). The GPD is used to model the tails because it has been shown to be the natural choice for exceedances over a threshold \citep{davison1990models} and provides flexibility as a result of the shape parameter which controls boundedness and the existence of moments. A full description of the I-spline quantile regression model for both independent and spatially correlated data is given in Section 2. In this paper, we formulate the conditions under which the resulting density has the desired degree of differentiability and derive the marginal expectations and spatial covariances which can be non-stationary (Section 3). Our simulation studies demonstrate that ensuring a smooth density can lead to more accurate effect estimates and predictive distributions, compared with methods that do not ensure differentiability (Section 4). We apply the method to benzene measurements from a petrochemical facility to determine the effects of emission sources on concentrations (Section 5). 
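Since the GPD quantile function is central to the tail model introduced in the next section, a minimal sketch may help fix ideas (the parameter values below are invented for illustration and are not estimates from the paper):

```python
import math

# Quantile function of the Generalized Pareto Distribution,
#   Q(u) = mu + (sigma / alpha) * ((1 - u)^(-alpha) - 1),
# with the exponential limit as alpha -> 0. The shape parameter alpha
# controls boundedness: alpha < 0 gives a finite upper endpoint at
# mu - sigma / alpha, while alpha > 0 gives a heavy polynomial tail.
def gpd_quantile(u, mu=0.0, sigma=1.0, alpha=0.1):
    if alpha == 0.0:
        return mu - sigma * math.log(1.0 - u)
    return mu + sigma / alpha * ((1.0 - u) ** (-alpha) - 1.0)

# Bounded tail: with alpha = -0.5 the quantiles stay below mu - sigma/alpha = 2.
assert gpd_quantile(1 - 1e-12, alpha=-0.5) < 2.0
# Heavy tail: with alpha = 0.5 the quantiles grow without bound as u -> 1.
assert gpd_quantile(1 - 1e-12, alpha=0.5) > 1e5
```

The shape parameter thus governs whether the modeled tail is bounded or heavy, which is what makes the GPD a flexible choice for exceedances over a threshold.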
\section{Proposed Model} We model the quantile function of the stochastic process $Y(s)$ as a linear combination of the predictors: \begin{equation}\label{2.1} Q(\tau|s, \mathbf{x}(s)) = \beta_0(\tau, s) + \sum_{p=1}^P x_p(s)\beta_p(\tau, s), \end{equation} where $\mathbf{x}(s) = (x_1(s), ..., x_P(s)) \in \mathbb{R}^P_+$ is the vector of predictors observed at location $s$, $\beta_0(\tau, s)$ is the quantile function at location $s$ when all predictors are 0, and $\beta_p(\tau, s)$ is the effect of predictor $p$ on quantile level $\tau$ at location $s$. We further follow the approach of \cite{zhou2011calibration} and model $\beta_p(\tau, s)$ as a linear combination of I-spline basis functions in the center of the distribution. We denote the $m^{th}$ I-spline basis function evaluated at $\tau$ as $I_m(\tau)$ and define the constant basis function $I_0(\tau)=1$ for all $\tau$. While I-splines allow for a large degree of flexibility in the center of the distribution, unbounded distributions cannot be estimated using I-splines with a finite number of knots. To address this, we use the quantile function of the GPD to model the relationship of the covariates to the process in the tails of the distribution.
The model for $\beta_p(\tau, s)$ can then be expressed as \begin{align}\label{2.2} \beta_p(\tau, s) & = \begin{cases} \theta_{0,p}(s) - \frac{\sigma_{L,p}(s)}{\alpha_{L}(s)}\left[\left(\frac{\tau}{\tau_L}\right)^{-\alpha_{L}(s)} - 1 \right] & \tau < \tau_L \\ \sum_{m=0}^M\theta_{m,p}(s)I_m(\tau) & \tau_L \le \tau \le \tau_U \\ \left[\sum_{m=0}^M\theta_{m,p}(s)\right] + \frac{\sigma_{U,p}(s)} {\alpha_{U}(s)}\left[\left(\frac{1-\tau}{1-\tau_U}\right)^{-\alpha_{U}(s)} - 1 \right]& \tau > \tau_U, \\ \end{cases} \end{align} where $\tau_L$ and $\tau_U$ are the thresholds between the tails and the center of the distribution, $\theta_{0,p}(s)$ is the lower-tail location parameter, and $\theta_{m,p}(s)$ represents the coefficient of the $m^{th}$ I-spline basis function and $p^{th}$ predictor at location $s$. I-splines are monotone piecewise polynomials formed by integrating normalized B-splines (Fig. 1) \citep{ramsay1988}. They are defined on a sequence of knots $\{\tau_0 =...=\tau_{k} < ... < \tau_{M+1} =...=\tau_{M+1+k}\}$, where $k$ represents the degree of the polynomial and $M$ is the number of non-constant basis functions. The GPD has three parameters: the shape parameter $\alpha$, the scale parameter $\sigma$, and a location parameter $\mu$. In our parameterization, the location parameter of the lower tail is equal to $\theta_{0,p}(s)$ and the location parameter of the upper tail is equal to $\sum_{m=0}^M\theta_{m,p}(s)$, which ensures the quantile function is continuous. We denote the shape parameters of the lower and upper tails as $\alpha_L(s)$ and $\alpha_U(s)$, respectively, and the scale parameters as $\sigma_{L,p}(s)$ and $\sigma_{U,p}(s)$. We require the shape parameters to be constant across predictors in order to ensure that the density in the tails follows a parametric distribution. The scale parameters vary by both predictor and location and allow the predictors to affect the tails differently.
When $\alpha < 0$, the support of the GPD is bounded above; otherwise it is unbounded above. The case $\alpha = 0$ is interpreted as the limit as $\alpha \rightarrow 0$, i.e., $\frac{\sigma_{U,p}}{\alpha_{U}} \left[\left(\frac{1-\tau}{1-\tau_U}\right)^{-\alpha_{U}} - 1 \right]$ is replaced with $-\sigma_{U,p}\log\left(\frac{1-\tau}{1-\tau_U}\right)$. The expectation exists if $\alpha$ is less than 1, and the variance exists if $\alpha$ is less than $1/2$. This model formulation ensures a quantile function that is continuous and differentiable at all but a finite number of points. We can thus exploit the result of Tokdar and Kadane (\citeyear{tokdar2012simultaneous}), who demonstrated that a differentiable and invertible quantile function corresponds to the density \begin{equation} f(y) = \frac{1}{Q'(Q^{-1}(y))}. \end{equation} \begin{figure} \includegraphics[width = \linewidth]{Basis_functions.pdf} \caption{Example set of normalized B-spline (left) and corresponding I-spline (right) basis functions. Dotted vertical lines indicate knot locations.} \label{fig:1} \end{figure} To ensure the quantile function is monotonic, we introduce latent parameters with Gaussian process priors, $\theta_{m,p}^* \sim \mathcal{GP}(\mu^*_{m,p}, \Sigma^*_{m,p})$, and define $\theta_{0,p}(s) = \theta^*_{0,p}(s)$ and $\theta_{m,p}(s) = \exp\{\theta^*_{m,p}(s)\}$ for $m>0$. Under this formulation the resulting $\theta_{m,p}(s)$ are modeled as log Gaussian processes. No constraints are placed on $\theta_{0,p}$, which allows predictors to have a negative effect on the response. The model formulation has many advantages, including the ability to allow the effect of each predictor to vary by quantile level and by spatial location while guaranteeing a valid quantile function. It can also accommodate a variety of tail distributions, including both bounded and unbounded tails.
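As an illustrative check of the limiting form above, the following plain-Python sketch (our own helper function and toy parameter values, not code or settings from the paper) evaluates the upper-tail GPD increment of (\ref{2.2}) and confirms both the $\alpha \rightarrow 0$ limit and the boundedness of the tail when $\alpha < 0$:

```python
import math

def gpd_upper_tail(tau, tau_U, sigma, alpha):
    # Upper-tail increment of the quantile function in Eq. (2.2)
    r = (1.0 - tau) / (1.0 - tau_U)
    if alpha == 0.0:
        return -sigma * math.log(r)  # limiting form as alpha -> 0
    return (sigma / alpha) * (r ** (-alpha) - 1.0)

tau, tau_U, sigma = 0.99, 0.95, 0.3
limit_form = gpd_upper_tail(tau, tau_U, sigma, 0.0)
near_zero = gpd_upper_tail(tau, tau_U, sigma, 1e-8)
close = abs(limit_form - near_zero) < 1e-5      # the two forms agree

# with alpha < 0 the increment stays below sigma/|alpha| as tau -> 1,
# matching the bounded-support case
bounded = gpd_upper_tail(0.999999, tau_U, sigma, -0.2) < sigma / 0.2
```

For $\alpha < 0$ the increment approaches $\sigma/|\alpha|$ as $\tau \rightarrow 1$, which is exactly the bounded upper support described above.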
Furthermore, we show in Section 3 that we can guarantee the degree of differentiability of the corresponding density function. \cite{reich2012spatiotemporal} proposed a similar model, constructing the quantile function using parametric Gaussian basis functions. While the parametric basis functions allow for straightforward evaluation of the density, they do not guarantee a differentiable quantile function, which results in a discontinuous density function (Fig. 2). We show through both simulation and applied data analysis that constraining the density to be continuous and differentiable can result in better parameter estimates and out-of-sample scores. \begin{figure} \includegraphics[width = \linewidth]{Density_examples.pdf} \caption{Examples of quantile function (top row) and corresponding density function (bottom row) constructed using different bases. } \label{fig:2} \end{figure} \section{Model Properties} \subsection{Validity of the Quantile Function} Assuming an I-spline order $k>1$, the proposed quantile function is continuous everywhere and is differentiable for all values of $\tau \in (0,1)$ except $\tau_L$ and $\tau_U$. Thus, a necessary and sufficient condition for a valid quantile function is $Q'(\tau) \geq 0$ for all $\tau$ at which the derivative exists. For all values of $\tau$ such that $\tau_L < \tau < \tau_U$, $Q'(\tau) =\sum_{m=1}^M B_m(\tau) \sum_{p=1}^P \theta_{m,p}x_p$, where $B_m(\tau) = I_m'(\tau)$ is the corresponding normalized B-spline. Since $I_0(\tau)$ is constant, $B_0(\tau) = 0$ for all $\tau$, and by definition $B_m(\tau) \ge 0$ for all $m$ and $\tau$. Without loss of generality, we will henceforth assume that the predictors are all non-negative, i.e. $\mathbf{x} \in \mathbb{R}_{+}^P$; therefore a sufficient condition for a valid quantile function is $\theta_{m,p} \ge 0$ for all $p$ and $m>0$. For $\tau \le \tau_L$, $Q'(\tau) = \sum_{p=1}^P \sigma_{L,p}x_p\tau^{-\alpha_{L}-1}\tau_L^{\alpha_{L}}$, which is positive whenever $\sigma_{L,p}x_p > 0$ for some $p$. Similarly, if $\sigma_{U,p} > 0$ for some $p$, $Q'(\tau) > 0$ when $\tau \ge \tau_U$.
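The density identity $f(y) = 1/Q'(Q^{-1}(y))$ used above can be checked numerically for any smooth, invertible quantile function. The sketch below (illustrative only; it uses the standard normal as a stand-in for the fitted $Q$, via Python's statistics.NormalDist) recovers the density from a finite-difference derivative of the quantile function:

```python
from statistics import NormalDist

nd = NormalDist()       # standard normal as a stand-in for the fitted Q
Q = nd.inv_cdf          # quantile function Q(tau)
eps = 1e-6

def density_from_quantile(y):
    # f(y) = 1 / Q'(Q^{-1}(y)), with Q' approximated by a central difference
    tau = nd.cdf(y)                                   # Q^{-1}(y)
    q_prime = (Q(tau + eps) - Q(tau - eps)) / (2 * eps)
    return 1.0 / q_prime

# closely recovers the normal pdf at a few test points
errors = [abs(density_from_quantile(y) - nd.pdf(y)) for y in (-1.0, 0.0, 0.7)]
```

In the proposed model the same recipe applies with $Q$ from (\ref{2.1}) and (\ref{2.2}); the piecewise form makes $Q'$ available in closed form rather than by finite differences.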
\subsection{Continuity and Differentiability} In many cases, such as the application described below, it is desirable to ensure that the density is continuous and smooth. Proposition 1 establishes the conditions for continuity of the density function. \begin{proposition} Let $Y$ have a quantile function as defined in (\ref{2.1}) and (\ref{2.2}) with $\sigma_{L,p} > 0$ for at least one $p$. Then the density of $Y$ is continuous at $Q(\tau_L|\mathbf{x},\Theta)$ for any $\mathbf{x} \in \mathbb{R}_{+}^P$ if and only if \begin{equation}\label{ContinuityConstraintLower} \theta_{1,p} = \frac{\sigma_{L,p}}{\tau_L I_1'(\tau_L)}, \end{equation} for all $p$. Similarly, given $\sigma_{U,p} > 0$ for at least one $p$, the density of $Y$ is continuous at $Q(\tau_U|\mathbf{x},\Theta)$ if and only if \begin{equation}\label{ContinuityConstraintUpper} \theta_{M,p} = \frac{\sigma_{U,p}}{(1-\tau_U)I_M'(\tau_U)}. \end{equation} \end{proposition} Having established the conditions for a continuous density, which can be viewed as $0^{th}$ order differentiability, Theorem 1 proceeds to establish the conditions for $q^{th}$ order differentiability of the density function of $Y$. \begin{theorem} Let $Y$ have a quantile function as defined in (\ref{2.1}) and (\ref{2.2}) with an I-spline basis order greater than $q+1$ and a density that is continuous and $(q-1)^{th}$ order differentiable at $Q(\tau_L)$. If $\alpha_{L}$ is constrained so that Eq.
\ref{differentiableConstraint} does not result in $\theta_{q+1,p} < 0$, then $Y$ has a density that is $q^{th}$-order differentiable at $Q(\tau_L)$ for any $\mathbf{x} \in \mathbb{R}_{+}^P$ if and only if \begin{equation}\label{differentiableConstraint} \theta_{q+1,p} = \frac{1}{I_{q+1}^{(q+1)}(\tau_L)}\left\{\frac{-\sigma_{L,p}}{\alpha_{L}\tau_L^{q+1}}(-\alpha_{L}-q)_{q+1} - \sum_{m=1}^{q} \theta_{m,p}I_m^{(q+1)}(\tau_L)\right\} \end{equation} where $I_{q+1}^{(q+1)}(\tau_L)$ is the $(q+1)^{th}$ order derivative of the $(q+1)^{th}$ I-spline basis function and $(-\alpha_{L}-q)_{q+1} = \prod_{j=0}^{q}(-\alpha_L - j)$ is the falling factorial. \end{theorem} The conditions which guarantee differentiability at $\tau_U$ are similar and are given in the supplementary material. Combined with the positivity constraint on the $\theta$s, these results imply that the shape parameters have an upper bound that is a function of the knot placement. Ensuring a first-order differentiable density bounds the possible values of $\alpha_{L}$ above by $-1 -\tau_L\frac{I^{(2)}_1(\tau_L)}{I'_{1}(\tau_L)}$. This bound is a function of $I'_1(\tau_L)$ and $I^{(2)}_1(\tau_L)$, which are functions of the first two knot locations. We can still model any tail behavior provided the outermost knots are placed sufficiently close together. \subsection{Expectations and Covariance} While our model allows for flexible non-Gaussian distributions, sometimes the first two moments are of interest (e.g., for best linear unbiased prediction). We now elaborate on the types of covariance structure that can be estimated using the proposed model. We model the covariances of the latent parameters $\theta_{m,p}^*$ using a covariance function $C$ such that $Cov[\theta_{m,p}^*(s), \theta_{m,p}^*(s')] =\eta^{2}_{m,p} C(s,s')$ and $Var[\theta^*_{m,p}(s)] = \eta^{2}_{m,p} + \lambda_{m,p}^2$.
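The moment expressions that follow are instances of the standard log-normal identities, since each $\theta_{m,p}(s) = \exp\{\theta^*_{m,p}(s)\}$ is the exponential of a Gaussian process. As a brief sketch, for jointly Gaussian $X$ and $X'$ with means $\mu_X$ and $\mu_{X'}$ and variances $\sigma^2_X$ and $\sigma^2_{X'}$, \begin{equation*} E[e^{X}] = e^{\mu_X + \sigma^2_X/2}, \qquad \mathrm{Cov}[e^{X}, e^{X'}] = E[e^{X}]\,E[e^{X'}]\left(e^{\mathrm{Cov}(X,X')} - 1\right). \end{equation*} Taking $X = \theta^*_{m,p}(s)$ and $X' = \theta^*_{m,p}(s')$, with common mean $\mu^*_{m,p}$, variance $\eta^2_{m,p} + \lambda^2_{m,p}$, and cross-covariance $\eta^2_{m,p}C(s,s')$, yields the expressions below.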
Consequently, the expectation of $\theta_{m,p}$ can be expressed as $E[\theta_{m,p}] = \mu_{m,p} = \exp[\mu^*_{m,p} + (\eta^{2}_{m,p} + \lambda_{m,p}^2)/2]$ and the covariance of $\theta_{m,p}$ is \begin{equation} \Sigma_{m,p}(s,s') = \mbox{Cov}[\theta_{m,p}(s), \theta_{m,p}(s')] = \mu_{m,p}^2(\exp[\eta^{2}_{m,p} C(s,s')] - 1). \end{equation} In this section we describe the covariance in the case when $\tau_L = 0$ and $\tau_U = 1$; other cases are elaborated in the supplementary material. Under these conditions, the conditional expectation of $Y(s)|\Theta(s),\mathbf{x}(s)$ is \begin{equation} E[Y(s)|\Theta(s),\mathbf{x}(s)] = \int_0^1 Q_Y[\tau|\Theta(s), \mathbf{x}(s)]d\tau = \sum_m\sum_p \theta_{m,p}(s)x_{p}(s)G_m, \end{equation} where $G_m = \int_{0}^{1} I_m(\tau) d\tau$. We further marginalize over the log Gaussian processes $\theta_{m,p}(s)$, with mean $\mu_{m,p}$ and covariance $\Sigma_{m,p}$, to obtain the expectation and covariance of $Y(s)$: \begin{equation} E[Y(s)|\mathbf{x}(s)] = \sum_m\sum_p \mu_{m,p}(s)x_{p}(s)G_m, \end{equation} \begin{equation} \text{Cov}[Y(s),Y(s')|\mathbf{x}(s), \mathbf{x}(s')] = \sum_m\sum_pG_m^2x_{p}(s)x_{p}(s')\Sigma_{m,p}(s, s'). \end{equation} This simple case shows that the covariance depends on the values of the predictors in addition to the covariance functions of the latent parameters. This dependence on the predictors can result in non-stationary covariances if the $x_p$ vary across space, even if $C(s,s')$ is stationary. \section{Simulation Study} Our simulation studies demonstrate the efficiency of the proposed I-spline quantile regression method (IQR) relative to existing methods, using four designs whose data-generating models lie outside the proposed model class (Table \ref{tab:design}). The designs include cases with both light tails (D1 and D3) and heavy tails (D2 and D4), and with (D3 and D4) and without (D1 and D2) spatial correlation.
The designs illustrate the flexibility of the proposed method compared with previously established methods. For each design the observed response is indexed as $y_t(s_i)$, where $t \in \{1,...,n\}$ indexes the observations at a given location $s_i$ with $i \in \{1,...,S\}$. The predictor vector $\mathbf{x}_{1,t}$ is generated by sampling from a uniform random variable in D1 and D2. In D3 and D4, $\mathbf{z}_t$ is generated by sampling from a Gaussian process with mean 0 and exponential covariance with range 1, and $\mathbf{x}_{1,t} = \Phi(\mathbf{z}_t)$, where $\Phi$ is the distribution function of the standard normal. The predictor $\mathbf{x}_{2,t}$ is generated by sampling from a uniform random variable in all designs. The observed response is generated by drawing an independent random uniform variable $u_t(s_i)$ and setting: \begin{equation} \label{eq:SimDesign} y_t(s_i) = \beta_0(u_t(s_i), s_i) + \beta_1(u_t(s_i),s_i)x_{1,t}(s_i) + \beta_2(u_t(s_i), s_i)x_{2,t}(s_i). \end{equation} In all designs we assume multiple observations are obtained for each location. For each design we simulate $B=50$ independent datasets. In D1 and D2 we simulate 1000 observations per dataset, assuming all observations are from a single location and thus independent. In D3 and D4 we use $S=16$ locations evenly spaced on a unit square and simulate 100 observations per site for a total of 1600 observations per dataset. For each of the datasets we randomly assign 10\% of the data to be used as validation data for the out-of-sample calculations and use the other 90\% as training data. Computational details including a description of the Markov chain Monte Carlo (MCMC) algorithm and prior specifications are included in the supplementary material. \begin{table}[t] \caption{True parameter functions by design used in the simulation study.
The location is given as $s = (s_1, s_2)$, $\Phi^{-1}(\tau)$ represents the quantile function of the standard normal evaluated at $\tau$, and $Q_{Pareto}$ represents the quantile function of the generalized Pareto distribution with the given parameters.} \label{tab:design} \renewcommand{\arraystretch}{1} \begin{tabular}{l | l |l| l} \hline & $\beta_0(\tau, s)$ & $\beta_1(\tau, s)$ & $\beta_2(\tau, s)$\\ \hline D1 & $0.1\Phi^{-1}(\tau)$ & 0.3$\tau$ & $Q_{Pareto}(\tau, \alpha=-0.2, \mu = 0, \sigma = 0.1)$\\ D2 & $0.1\Phi^{-1}(\tau)$ & 0.3$\tau$ & $Q_{Pareto}(\tau, \alpha=0.3, \mu = 0, \sigma = 0.3)$\\ D3 & $(.05 + .2s_1s_2)\Phi^{-1}(\tau)$ & $.3e^{s_2} + 0.2\tau$ & $Q_{Pareto}(\tau, \alpha= -0.1, \mu=0, \sigma=.1 )$\\ D4 & $(.05 + .2s_1s_2)\Phi^{-1}(\tau)$ & $.3e^{s_2} + 0.2\tau$ & $Q_{Pareto}(\tau, \alpha=.4s_1, \mu =0.3, \sigma = 0.4)$\\ \hline \end{tabular} \end{table} We compare the estimates from the proposed model (IQR) with those from the model using parametric Gaussian basis functions (GAUS) proposed by \cite{reich2012spatiotemporal} and the non-crossing quantile regression estimates (NCQR) proposed by \cite{bondell2010noncrossing}. For the IQR and GAUS methods the estimates of $\beta(\tau, s)$ represent the means of the corresponding posterior samples. For the NCQR method the estimates of $\beta(\tau, s)$ are obtained by minimizing the check loss function combined with the non-crossing constraint. The GAUS model allows for spatially varying coefficients and spatial correlation, while the NCQR method assumes independent and identically distributed samples. We index the quantile levels at which the methods are compared by $j \in \{1, \ldots, J\}$. For each quantile level, $\tau_j$, and simulated dataset replicate, $b \in \{1, \ldots, B\}$, the estimated coefficients $\widehat{\beta_p}(\tau_j, s_i)$ were compared using root mean integrated square error (RMISE).
The RMISE for simulated dataset $b$ was calculated for a given $\beta_p$ and sequence $\tau_1, \ldots, \tau_J$ as \begin{equation} RMISE(\beta_p)^{(b)} = \sqrt{\frac{1}{S}\sum_{i=1}^S\sum_{j=1}^J\delta_{j}\left[\widehat{\beta_p}(\tau_j,s_i)^{(b)} - \beta_p(\tau_j, s_i)\right]^2}, \end{equation} where $\delta_{j} = \tau_j - \tau_{j-1}$. The means and standard errors of the RMISEs as well as the coverage of the 95\% confidence (NCQR) or credible (IQR and GAUS) intervals were then calculated for each method and design (Table \ref{tab:beta_mid}). Both the IQR and GAUS methods produce density estimates. The NCQR method does not estimate the entire quantile function and therefore cannot be used to create a density estimate without substantial additional calculation. To evaluate the predictive densities we use the log score, which is the logarithm of the predicted density evaluated at the training and validation data; this is a strictly proper scoring rule \citep{gneiting2007strictly}. We calculate the log score for each observation as the log of the posterior mean of the predictive density evaluated at the observation. The total log score for each dataset is calculated as the mean of the log scores for the individual observations. The mean and standard error by simulation design are calculated using the total log score values of the 50 simulated datasets. \begin{table}[!t] \caption{Comparison of fitted $\beta(\tau)$ functions for $\tau = (.05, .06, ..., .94, .95)$.
COV represents the coverage of the 95\% credible (IQR and GAUS) or confidence interval (NCQR).} \label{tab:beta_mid} \renewcommand{\arraystretch}{1} \centerline{\tabcolsep=3truept\begin{tabular}{lrrrcrrrcrrr} \hline \multicolumn{1}{l}{\bfseries }&\multicolumn{3}{c}{\bfseries $\beta_0$}&\multicolumn{1}{c}{\bfseries }&\multicolumn{3}{c}{\bfseries $\beta_1$}&\multicolumn{1}{c}{\bfseries }&\multicolumn{3}{c}{\bfseries $\beta_2$}\tabularnewline \cline{2-4} \cline{6-8} \cline{10-12} \multicolumn{1}{l}{}&\multicolumn{1}{c}{RMISE}&\multicolumn{1}{c}{SE}&\multicolumn{1}{c}{COV}&\multicolumn{1}{c}{}&\multicolumn{1}{c}{RMISE}&\multicolumn{1}{c}{SE}&\multicolumn{1}{c}{COV}&\multicolumn{1}{c}{}&\multicolumn{1}{c}{RMISE}&\multicolumn{1}{c}{SE}&\multicolumn{1}{c}{COV}\tabularnewline \hline {\bfseries D1}&&&&&&&&&&&\tabularnewline ~~IQR&$0.014$&$0.001$&$0.92$&&$0.027$&$0.001$&$0.89$&&$0.022$&$0.002$&$0.91$\tabularnewline ~~GAUS&$0.016$&$0.001$&$0.92$&&$0.022$&$0.002$&$0.93$&&$0.025$&$0.002$&$0.93$\tabularnewline ~~NCQR&$0.017$&$0.001$&$0.96$&&$0.025$&$0.001$&$0.97$&&$0.026$&$0.001$&$0.97$\tabularnewline \hline {\bfseries D2}&&&&&&&&&&&\tabularnewline ~~IQR&$0.019$&$0.001$&$0.93$&&$0.035$&$0.002$&$0.91$&&$0.047$&$0.002$&$0.90$\tabularnewline ~~GAUS&$0.038$&$0.005$&$0.83$&&$0.065$&$0.009$&$0.83$&&$0.113$&$0.007$&$0.76$\tabularnewline ~~NCQR&$0.025$&$0.001$&$0.98$&&$0.045$&$0.002$&$0.97$&&$0.051$&$0.002$&$0.97$\tabularnewline \hline {\bfseries D3}&&&&&&&&&&&\tabularnewline ~~IQR&$0.029$&$0.001$&$0.95$&&$0.050$&$0.002$&$0.95$&&$0.027$&$0.001$&$0.99$\tabularnewline ~~GAUS&$0.027$&$0.001$&$0.97$&&$0.046$&$0.001$&$0.97$&&$0.032$&$0.001$&$0.98$\tabularnewline ~~NCQR&$0.050$&$0.000$&$0.64$&&$0.201$&$0.001$&$0.16$&&$0.026$&$0.002$&$0.92$\tabularnewline \hline {\bfseries D4}&&&&&&&&&&&\tabularnewline ~~IQR&$0.038$&$0.001$&$0.94$&&$0.062$&$0.002$&$0.96$&&$0.094$&$0.004$&$0.94$\tabularnewline ~~GAUS&$0.094$&$0.047$&$0.94$&&$0.104$&$0.034$&$0.95$&&$0.182$&$0.027$&$0.93$\tabularnewline 
~~NCQR&$0.054$&$0.001$&$0.75$&&$0.197$&$0.001$&$0.24$&&$0.112$&$0.002$&$0.84$\tabularnewline \hline \end{tabular}} \end{table} We compare all three methods using $\tau = \{0.05, 0.06, ..., 0.94, 0.95\}$. Four non-constant basis functions per predictor were used in both the IQR and GAUS methods. The results given in Table \ref{tab:beta_mid} demonstrate that while the three methods perform similarly for D1 (independent, light tails), the IQR method performs substantially better than GAUS in the heavy-tailed designs (D2 and D4) and substantially better than NCQR in the spatially varying designs (D3 and D4). Compared to the nominal coverage rate of 0.95, the IQR method has good coverage for all of the designs, with the lowest coverage being 0.89 for $\beta_1$ in D1. GAUS had poor coverage for D2, while NCQR had poor coverage for D3 and D4. Unlike the NCQR method, both our method and the GAUS method assume parametric forms for the tails and so can be used to estimate parameter effects on extreme quantiles. We compare the parameter estimates for these two methods evaluated at $\tau = \{0.950, 0.951, ..., 0.994, 0.995\}$ in Table \ref{tab:beta_tail}. Our method performs better in all cases except D1 $\beta_1$, which is a linear function of $\tau$. \begin{table}[!t] \renewcommand{\arraystretch}{1} \caption{Comparison of fitted $\beta(\tau)$ functions for $\tau = (.950, .951, ..., .994, .995)$.
COV represents the coverage of the 95\% credible (IQR and GAUS) or confidence interval (NCQR).} \label{tab:beta_tail} \centerline{\tabcolsep=3truept \begin{tabular}{lrrrcrrrcrrr} \hline\hline \multicolumn{1}{l}{\bfseries }&\multicolumn{3}{c}{\bfseries $\beta_0$}&\multicolumn{1}{c}{\bfseries }&\multicolumn{3}{c}{\bfseries $\beta_1$}&\multicolumn{1}{c}{\bfseries }&\multicolumn{3}{c}{\bfseries $\beta_2$}\tabularnewline \cline{2-4} \cline{6-8} \cline{10-12} \multicolumn{1}{l}{}&\multicolumn{1}{c}{RMISE}&\multicolumn{1}{c}{SE}&\multicolumn{1}{c}{COV}&\multicolumn{1}{c}{}&\multicolumn{1}{c}{RMISE}&\multicolumn{1}{c}{SE}&\multicolumn{1}{c}{COV}&\multicolumn{1}{c}{}&\multicolumn{1}{c}{RMISE}&\multicolumn{1}{c}{SE}&\multicolumn{1}{c}{COV}\tabularnewline \hline {\bfseries D1}&&&&&&&&&&&\tabularnewline ~~IQR&$0.0047$&$0.0004$&$0.96$&&$0.0095$&$0.0010$&$0.89$&&$0.0072$&$0.0007$&$0.94$\tabularnewline ~~GAUS&$0.0051$&$0.0005$&$0.98$&&$0.0089$&$0.0009$&$0.90$&&$0.0077$&$0.0006$&$0.98$\tabularnewline \hline {\bfseries D2}&&&&&&&&&&&\tabularnewline ~~IQR&$0.0139$&$0.0014$&$0.95$&&$0.0377$&$0.0022$&$0.85$&&$0.0810$&$0.0051$&$0.76$\tabularnewline ~~GAUS&$0.0266$&$0.0054$&$0.78$&&$0.0469$&$0.0109$&$0.75$&&$0.0913$&$0.0045$&$0.57$\tabularnewline \hline {\bfseries D3}&&&&&&&&&&&\tabularnewline ~~IQR&$0.0094$&$0.0003$&$0.97$&&$0.0124$&$0.0004$&$0.95$&&$0.0089$&$0.0005$&$0.99$\tabularnewline ~~GAUS&$0.0119$&$0.0004$&$0.95$&&$0.0139$&$0.0005$&$0.99$&&$0.0132$&$0.0005$&$0.99$\tabularnewline \hline {\bfseries D4}&&&&&&&&&&&\tabularnewline ~~IQR&$0.0196$&$0.0012$&$0.96$&&$0.0286$&$0.0027$&$0.93$&&$0.1424$&$0.0048$&$0.90$\tabularnewline ~~GAUS&$0.0802$&$0.0476$&$0.94$&&$0.0666$&$0.0314$&$0.97$&&$0.2007$&$0.0271$&$0.81$\tabularnewline \hline \end{tabular}} \end{table} The results of the log-score comparisons are consistent with the parameter estimates (Table \ref{tab:logScore_comparison}). However, the GAUS method consistently produces higher log-scores in-sample than the IQR method. 
Because the likelihood is not constrained to be continuous in the GAUS method, very large likelihood values can be obtained for the in-sample observations (Fig. \ref{fig:2}). In the heavy-tailed designs the IQR method results in higher out-of-sample log-scores. \begin{table}[!tbhp] \caption{Comparison of mean estimated log scores} \label{tab:logScore_comparison} \renewcommand{\arraystretch}{1} \centerline{\tabcolsep=3truept \begin{tabular}{lrrcrr} \hline\hline \multicolumn{1}{l}{\bfseries }&\multicolumn{2}{c}{\bfseries In-sample}&\multicolumn{1}{c}{\bfseries }&\multicolumn{2}{c}{\bfseries Out-of-sample}\tabularnewline \cline{2-3} \cline{5-6} \multicolumn{1}{l}{}&\multicolumn{1}{c}{Mean}&\multicolumn{1}{c}{SE}&\multicolumn{1}{c}{}&\multicolumn{1}{c}{Mean}&\multicolumn{1}{c}{SE}\tabularnewline \hline {\bfseries D1}&&&&&\tabularnewline ~~IQR&$ 0.339$&$0.003$&&$ 0.315$&$0.008$\tabularnewline ~~GAUS&$ 0.356$&$0.003$&&$ 0.322$&$0.008$\tabularnewline \hline {\bfseries D2}&&&&&\tabularnewline ~~IQR&$-0.223$&$0.004$&&$-0.254$&$0.017$\tabularnewline ~~GAUS&$-0.219$&$0.005$&&$-0.288$&$0.022$\tabularnewline \hline {\bfseries D3}&&&&&\tabularnewline ~~IQR&$ 0.476$&$0.003$&&$ 0.418$&$0.010$\tabularnewline ~~GAUS&$ 0.536$&$0.003$&&$ 0.419$&$0.009$\tabularnewline \hline {\bfseries D4}&&&&&\tabularnewline ~~IQR&$-0.191$&$0.004$&&$-0.238$&$0.012$\tabularnewline ~~GAUS&$-0.126$&$0.006$&&$-0.287$&$0.022$\tabularnewline \hline \end{tabular}} \end{table} \section{Application} \subsection{Data} An amendment to the U.S. National Emission Standards for Hazardous Air Pollutants for petroleum refineries requires the use of two-week time-integrated passive samplers at specified intervals around the facility fence line to establish levels of benzene in the air \citep{fencelinerule}. The utility of fence line measurements as a method of controlling emissions is contingent on their distributions being dependent on nearby sources within the facility. 
To evaluate the efficacy of passive samplers in monitoring benzene emissions from petroleum refineries, researchers from the US EPA Office of Research and Development conducted a year-long field study in collaboration with Flint Hills Resources in Corpus Christi, TX \citep{thoma2011}. Preliminary analyses found that under consistent wind conditions, downwind concentrations were statistically significantly higher than upwind concentrations \citep{thoma2011}. More sophisticated modeling should be able to shed light on the contributions of individual sources to the concentrations observed at the fence line. Modeling these concentrations requires an extra level of complexity because near-source air pollutant measurements typically exhibit strong spatial correlation along with non-stationary and non-Gaussian distributions, even after transformation. Both the spatial covariance and the distribution of the pollutant concentrations can vary as a function of wind and emission source location. Accurately modeling the entire distribution and spatial structure of the pollutant concentrations should improve inference concerning the strengths of known sources. Additionally, due to the stochastic nature of dispersion and variation in background pollutant concentration levels, the effect of a specific source on the pollutant distribution may not be detected through mean regression. Of particular concern for both exposure and compliance evaluation are the source effects on the upper tail of the distribution, in particular the 95\textsuperscript{th} percentile. The measurements used in this study were collected between Dec 3, 2008 and Dec 2, 2009 around the Flint Hills West Refinery \citep{thoma2011}. The samplers were attached to the boundary fence around the facility approximately 1.5 m above the ground at 15 locations (Fig. \ref{fig:flintsummary}). In addition, one sampler (633) was deployed at a nearby Texas Commission on Environmental Quality (TCEQ) continuous air monitoring station (CAMS).
A total of 406 two-week time-integrated benzene concentration measurements collected over the course of the year were used in the analysis. Hourly temperature, wind speed, and wind direction were also measured at TCEQ CAMS 633. The concentrations exhibited both spatial and temporal trends (Fig. \ref{fig:flintsummary}). In particular, the variance increased dramatically during the summer months. The highest concentrations were observed on the northern edge of the refinery (sites 360, 20, and 50) while the lowest concentrations were observed on the southern edge (sites 250, 633, and 270). The increase in variance can partly be explained by meteorology (Fig. \ref{fig:wind1}). During the summer the winds consistently blew from the southeast, while during the rest of the year they were more evenly distributed. \begin{figure}[h!] \centering \includegraphics[height = 2.3in]{Flint_Hills_sites.pdf} \includegraphics[height = 2.3in]{Flint_Hills_byTime.pdf} \caption{Benzene measurements by time and location. Source locations, $\mathbf{e}_1$ and $\mathbf{e}_2$, are shown in black. Points have been jittered slightly to improve visibility.} \label{fig:flintsummary} \end{figure} \begin{figure} \centering \includegraphics[width = 2.3in]{Windrose_summer.pdf} \includegraphics[width = 2.3in]{Windrose_winter.pdf} \caption{Wind roses for different seasons.} \label{fig:wind1} \end{figure} A visual analysis of the concentrations and wind roses for the hourly measurements at each time period suggested that the concentrations were correlated with a source within the refinery. Two probable emission source locations, $\mathbf{e}_1$ and $\mathbf{e}_2$, were selected using the reported emission inventory.
To determine the effect of the emission sources on the distribution of the benzene concentration, we denote the $t^{th}$ observed value of the benzene concentration at site $s_i$ as $y_t(s_i)$, where $i = 1, ..., 16$ and $t=1,..., 26$, and model the quantile function of $Y$ using equations (\ref{2.1}) and (\ref{2.2}). Our full model includes an intercept and three predictors: transport from source 1, transport from source 2, and temperature. The predictors that represent transport from a source are calculated from the observed hourly wind vectors and the relative spatial locations of the source and measurement. The $t^{th}$ observed value of the transport from source 1 to location $s_i$ is defined as \begin{equation}\label{app} x_{1,t}(s_i) = \sum_{h=1}^{336} \left \{ \max \left ( \frac{\mathbf{w}_{t,h} \cdot (\mathbf{e}_1 - \mathbf{s}_i)}{||(\mathbf{e}_1 - \mathbf{s}_i)||}, 0 \right ) \right \} \end{equation} where $\mathbf{e}_1$ is the location of emission source 1, and $\mathbf{s}_i$ is the measurement location. Each hourly wind vector, $\mathbf{w}_{t,h}$, for the two-week period with $h = 1, ..., 336$ was transformed into the same coordinate system and projected onto the vector from the source to the measurement, $(\mathbf{e}_1 - \mathbf{s}_i)$. Assuming a constant emission source, the resulting scalar quantity represents the amount of pollutant transported from $\mathbf{e}_1$ to $\mathbf{s}_i$, ignoring the effects of vertical dispersion. When the wind is blowing from $\mathbf{s}_i$ toward $\mathbf{e}_1$, the transport from $\mathbf{e}_1$ will be negative. However, because background concentrations are small but finite, the integrated benzene concentration will remain roughly constant rather than decrease under these conditions. Therefore the maximum of the transport from $\mathbf{e}_1$ and zero was taken before summing over $h$ in period $t$ in (\ref{app}). The transport from source 2 was calculated similarly.
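As a concrete illustration of (\ref{app}), the following Python sketch computes the transport predictor for one period; the coordinates and wind vectors are hypothetical toy values, not data from the study:

```python
import math

def transport(winds, source, site):
    # Sum of positive projections of hourly wind vectors onto the
    # unit vector (e - s)/||e - s||, as in the transport predictor above
    dx, dy = source[0] - site[0], source[1] - site[1]
    norm = math.hypot(dx, dy)
    ux, uy = dx / norm, dy / norm
    return sum(max(wx * ux + wy * uy, 0.0) for wx, wy in winds)

# toy example: source due east of the site, three hourly wind vectors
winds = [(1.0, 0.0), (-2.0, 0.0), (0.0, 3.0)]   # (east, north) components
x = transport(winds, source=(1.0, 0.0), site=(0.0, 0.0))
# only the first vector projects positively onto the east axis, so x == 1.0
```

In the study the sum would instead run over all 336 hourly wind vectors in the two-week period, and the clipping at zero reflects the background-concentration argument above.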
We use 10-fold cross-validation to determine the most appropriate model for the benzene concentrations. Using each fold as a validation data set, the in-sample and out-of-sample log-scores were calculated using both the proposed IQR method and the GAUS method proposed by \cite{reich2012spatiotemporal} for each combination of predictors (Table \ref{tab:logScores}). An exponential covariance function with a range of 0.5 was used for both methods. The predictors were transformed to be between 0 and 1 before the models were fit. For both methods the in-sample log-score tends to increase with the number of predictors included in the model. All of the models with predictors have both higher in-sample and out-of-sample log scores than the intercept-only model. Of the models with predictors, the one that included all three produced the largest out-of-sample log-score for the IQR method, but the lowest out-of-sample log-score for the GAUS method, indicating that adding predictors exacerbates over-fitting by the GAUS method. In all cases, our method performs substantially better out-of-sample than the GAUS method.
\begin{table}[t] \renewcommand{\arraystretch}{1.2} \setlength{\tabcolsep}{3pt} \caption{Estimated log-scores for training and validation data by method.} \label{tab:logScores} \vspace{-8pt} \begin{center} \begin{tabular}{lccccccccccc} \hline &\multicolumn{5}{c}{In-sample}&&\multicolumn{5}{c}{Out-of-sample}\tabularnewline \cline{2-6} \cline{8-12} \multicolumn{1}{l}{Predictors}&\multicolumn{2}{c}{IQR}&&\multicolumn{2}{c}{GAUS}&& \multicolumn{2}{c}{IQR}&&\multicolumn{2}{c}{GAUS}\tabularnewline \cline{2-3} \cline{5-6} \cline{8-9} \cline{11-12} & Mean & SE && Mean & SE && Mean & SE && Mean & SE\tabularnewline None&$-0.29$&$0.01$&&$-0.16$&$0.01$&&$-0.42$&$0.07$&&$-0.45$&$0.09$\tabularnewline Source 1&$ 0.04$&$0.01$&&$ 0.49$&$0.01$&&$-0.13$&$0.07$&&$-0.18$&$0.07$\tabularnewline Source 2&$ 0.04$&$0.01$&&$ 0.47$&$0.01$&&$-0.14$&$0.08$&&$-0.21$&$0.08$\tabularnewline Temperature&$ 0.10$&$0.01$&&$ 0.57$&$0.02$&&$-0.07$&$0.08$&&$-0.25$&$0.08$\tabularnewline Source 1 + Source 2&$ 0.21$&$0.01$&&$ 0.81$&$0.02$&&$-0.02$&$0.08$&&$-0.22$&$0.06$\tabularnewline Source 1 + Temp&$ 0.30$&$0.01$&&$ 1.00$&$0.03$&&$ 0.08$&$0.08$&&$-0.19$&$0.09$\tabularnewline Source 2 + Temp&$ 0.27$&$0.01$&&$ 0.88$&$0.02$&&$ 0.06$&$0.08$&&$-0.24$&$0.09$\tabularnewline All&$ 0.39$&$0.01$&&$ 0.95$&$0.04$&&$ 0.13$&$0.09$&&$-0.39$&$0.08$\tabularnewline \hline \end{tabular}\end{center} \end{table} The model was fit to the entire dataset to determine the effects of the sources and temperature on the distribution of benzene at the fence line. We plot the coefficients by quantile level and location in Fig. \ref{fig:coefbytau}. We can see that the base distribution does not vary as much by location as the effects of the sources and temperature. The effects of the sources on the quantiles of the concentrations range from positive to negative, with the majority of the source effects being positive.
The negative effects could be due to the fact that these sources may not have been constant over the course of the entire year. If wind from a given source corresponded to time points when the source was not emitting it could result in a negative effect on the concentrations. As can be seen in Fig. \ref{fig:spatialcoef}, the effect of source 1 on the 95\textsuperscript{th} quantile is large and positive for the sites on the northern edge of the refinery and some sites along the southern edge of the refinery. The northern sites were also the locations where the highest concentrations were observed. The effect of source 2 on the 95\textsuperscript{th} quantile was smaller overall and varied by site with positive effects observed on the background site and sites on the northern edge of the refinery (Fig. \ref{fig:spatialcoef}). Temperature also had a strong positive effect on concentrations on the northern edge of the refinery indicating the possibility of another emission source during the summer near the northern edge of the refinery that was not accounted for. \begin{figure}[h] \centering \includegraphics[width = 5in]{beta_by_tau_application.pdf} \caption{Estimated predictor effect by quantile and location.} \label{fig:coefbytau} \end{figure} \begin{figure} \centering \includegraphics[width = .32\linewidth]{IQR_source1_95.png} \includegraphics[width = .32\linewidth]{IQR_source2_95.png} \includegraphics[width = .32\linewidth]{IQR_temperature_95.png} \caption{Spatial variation in the effect of the predictors on the $95^{th}$ quantile of fenceline benzene measurements.} \label{fig:spatialcoef} \end{figure} \section{Discussion} We have derived the properties and demonstrated the utility of a method for spatial quantile regression that allows for spatially-varying coefficients and flexible tail distributions. 
By modeling the entire quantile function we exploit the flexibility of non-parametric basis functions in the center of the distribution and the constraints of parametric tails in the areas of the distribution where data is sparse. We have shown the conditions under which the model guarantees a smooth density function with the desired degrees of differentiability and enables the estimation of a non-stationary covariance that is dependent on the predictors. Through both simulations and an application to fence line benzene concentrations we have demonstrated the utility of ensuring a smooth density function with parametric tails and the flexibility and accuracy of the method compared to previous work. While the model does not currently account for temporal correlation in the response variable, a non-linear function of time could easily be incorporated as a predictor using the current framework. Additionally, temporal correlation could be accounted for by adjusting the priors of the coefficients or incorporating a copula. A multivariate extension for modeling multiple pollutants simultaneously could also be developed through the use of multivariate spatial priors. \vskip 14pt \noindent {\large\bf Supplementary Materials} Proofs of Proposition 1 and Theorem 1 as well as computing details are contained in the supplemental material. \par \vskip 14pt \noindent {\large\bf Disclaimer} The views expressed in this publication are those of the authors and do not necessarily represent the views or policies of the U.S. Environmental Protection Agency. \par \vskip 14pt \noindent {\large\bf Acknowledgements} This work was supported by the National Science Foundation under grant No. 1613219 and the National Institutes of Health under grant No. R01ES027892. The authors would like to thank Oak Ridge Institute for Science and Education for the fellowship funding that supported this work. \par \bibliographystyle{apalike}
\section{introduction}\label{intro} \noindent The interplay between the hook lengths of a partition $\lambda$ and the contents of its cells appears in Stanley's formula for the dimension of the irreducible polynomial representation of the general linear group, $GL(n, \mathbb C)$, indexed by $\lambda$ \cite[(7.106)]{ec2}. There are also analogous formulas for irreducible polynomial representations of symplectic and orthogonal groups \cite{CS, EK, S}. \smallskip \noindent In this article, we prove a variety of results on the combinatorial nature of the expressions involving the symplectic and orthogonal hook-content formulas and certain generalizations thereof. These identities were conjectured by the first author \cite{A12}. In order to state the conjectures and our results, we begin by introducing the relevant notations. \smallskip \noindent A {\it partition} $\lambda=(\lambda_1,\lambda_2,\dots,\lambda_{\ell})$ of $n\in\mathbb{N}$, denoted $\lambda\vdash n$, is a finite non-increasing sequence of positive integers, called \textit{parts}, that add up to $n$. The {\it size} of $\lambda$, denoted by $\vert\lambda\vert$, is the sum of all its parts. The {\it length} of $\lambda$, denoted by $\ell(\lambda)$, is the number of parts of $\lambda$. As usual, $p(n)$ denotes the number of partitions of $n$. We denote by $\mathcal P(n \mid X)$ the set of partitions of $n$ satisfying some condition $X$ and define $p(n\mid X):=|\mathcal P(n\mid X)|$. The {\it Young diagram} of a partition $\lambda$ is a left-justified array of squares such that the $i^{th}$-row of the array contains $\lambda_i$ squares. \begin{example}\label{eg1} If $\lambda= (5, 3, 3, 2,1)$, then $\vert\lambda\vert=14, \ell(\lambda)=5$. The Young diagram of $\lambda$ is shown below. $$\tiny\ydiagram{5, 3, 3, 2,1}$$ \end{example} \noindent A cell $(i,j)$ of $\lambda$ is the square in the $i^{th}$-row and $j^{th}$-column in the Young diagram of $\lambda$.
The conjugate of $\lambda$ is the partition $\lambda'$ whose Young diagram has rows that are precisely the columns of the Young diagram of $\lambda$. For example, $\lambda'=(6,5,4,2,2)$ is the conjugate of $\lambda= (5,5, 3, 3, 2,1)$. When necessary, we will use the exponential notation $\lambda=(a_1^{m_1}, a_2^{m_2}, \ldots, a_k^{m_k})$ to mean that $a_i$ appears $m_i$ times as a part of $\lambda$, for $i=1, 2, \ldots, k$. This will mainly be used for hooks; that is, partitions of the form $(a, 1^b)$. The {\it hook length} of a cell $u=(i,j)$ of $\lambda$ is $h^\lambda(u)=\lambda_i+\lambda_j'-i-j+1$ and its {\it content} is $c^\lambda(u)=j-i$. Then, the dimension of the irreducible representation of $GL(n, \mathbb C)$ corresponding to $\lambda$ with $\ell(\lambda)\leq n$ is given by \cite[(7.106)]{ec2} $$\dim_{GL}(\lambda,n)=\prod_{u\in \lambda}\frac{n+c^\lambda(u)}{h^\lambda(u)}.$$ \smallskip \noindent The irreducible representations of the symplectic group $Sp(2n)$, consisting of $2n\times 2n$ matrices which preserve any non-degenerate, skew-symmetric form on $\mathbb C^{2n}$, are also indexed by partitions $\lambda$ with $\ell(\lambda)\leq n$. The {\it symplectic content} of cell $(i,j)\in\lambda$ is $$c^\lambda _{sp}(i,j)=\begin{cases}\lambda_i+\lambda_j-i-j+2 & \text{ if } i>j\\ i+j-\lambda'_i-\lambda'_j & \text{ if } i\leq j. \end{cases}$$ The irreducible representations of the special orthogonal group $SO(n)$, consisting of orthogonal matrices of determinant $1$, are indexed by partitions $\lambda$ with $\ell(\lambda)\leq \lfloor\frac{n}{2}\rfloor$. 
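For concreteness, the hook lengths, contents, and symplectic contents can be computed mechanically from these definitions; the following Python sketch (an illustration, not part of the paper's development; partitions are nonincreasing lists of parts and cells are $1$-indexed) also evaluates Stanley's hook-content formula exactly:

```python
from fractions import Fraction

def conjugate(lam):
    """Column lengths of the Young diagram of lam (a nonincreasing list)."""
    return [sum(1 for p in lam if p >= j) for j in range(1, lam[0] + 1)] if lam else []

def hook_length(lam, i, j):
    """h^lam(i,j) = lam_i + lam'_j - i - j + 1."""
    return lam[i - 1] + conjugate(lam)[j - 1] - i - j + 1

def sp_content(lam, i, j):
    """Symplectic content c_sp^lam(i,j), per the case split above."""
    conj = conjugate(lam)
    if i > j:
        return lam[i - 1] + lam[j - 1] - i - j + 2
    return i + j - conj[i - 1] - conj[j - 1]

def dim_GL(lam, n):
    """Stanley's hook-content formula: product over cells of (n + c)/h."""
    d = Fraction(1)
    for i, part in enumerate(lam, 1):
        for j in range(1, part + 1):
            d *= Fraction(n + (j - i), hook_length(lam, i, j))
    return d

# For lam = (2,1) and n = 3 this gives 8, the dimension of the
# adjoint representation of GL(3).
```

Exact rational arithmetic (via `Fraction`) guarantees the products of ratios come out as integers, as the representation-theoretic interpretation requires.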
The {\it orthogonal content} of cell $(i,j)\in\lambda$ is defined by $$c_{O}^{\lambda}(i,j)=\begin{cases} \lambda_i+\lambda_j-i-j \qquad \text{if $i\geq j$} \\ i+j-\lambda_i'-\lambda_j'-2 \,\,\, \text{if $i<j$}.\end{cases}$$ The symplectic and orthogonal counterparts to Stanley's hook-content formula are, respectively, \cite{EK, S} $$\dim_{Sp}(\lambda,2n)=\prod_{u\in\lambda}\frac{2n+c_{sp}^{\lambda}(u)}{h^{\lambda}(u)} \qquad \text{and} \qquad \dim_{SO}(\lambda,n)=\prod_{u\in\lambda}\frac{n+c_{O}^{\lambda}(u)}{h^{\lambda}(u)}.$$ \smallskip \noindent We are now ready to introduce the conjectures listed in \cite{A12}. The author was inspired by the remarkable hook-length identity of Nekrasov and Okounkov \cite[(6.12)]{NO06} (later given a more elementary proof by Han \cite{H10}) $$\sum_{n\geq0}q^n\sum_{\lambda\vdash n}\prod_{u\in\lambda}\frac{t+(h^{\lambda}(u))^2}{(h^{\lambda}(u))^2}=\prod_{j\geq1}\frac1{(1-q^j)^{t+1}},$$ and Stanley's analogous hook-content identity \cite[Theorem 2.2]{RPS10} $$\sum_{n\geq0}q^n\sum_{\lambda\vdash n}\prod_{u\in\lambda}\frac{t+(c^{\lambda}(u))^2}{(h^{\lambda}(u))^2}=\frac1{(1-q)^t}.$$ \smallskip \noindent \begin{conjecture} [\cite{A12}, Conjecture 6.2(a)] \label{conj6.2a} For $t$ an indeterminate, we have $$\sum_{n\geq0}q^n\sum_{\lambda\vdash n}\prod_{u\in\lambda}\frac{t+c_{sp}^{\lambda}(u)}{h^{\lambda}(u)}= \prod_{j\geq1}\frac{(1-q^{8j})^{\binom{t+1}2}}{(1-q^{8j-2})^{\binom{t+1}2-1}} \left(\frac{1-q^{4j-1}}{1-q^{4j-3}}\right)^t \left(\frac{1-q^{8j-4}}{1-q^{8j-6}}\right)^{\binom{t}2-1}.$$ \end{conjecture} \begin{conjecture} [\cite{A12}, Conjecture 6.2(b)] \label{conj6.2b} For $t$ an indeterminate, we have $$\sum_{n\geq0}q^n\sum_{\lambda\vdash n}\prod_{u\in\lambda}\frac{t+c_{O}^{\lambda}(u)}{h^{\lambda}(u)}= \prod_{j\geq1}\frac{(1-q^{8j})^{\binom{t}2}}{(1-q^{8j-6})^{\binom{t}2-1}} \left(\frac{1-q^{4j-1}}{1-q^{4j-3}}\right)^t \left(\frac{1-q^{8j-4}}{1-q^{8j-2}}\right)^{\binom{t+1}2-1}.$$ \end{conjecture} \smallskip \noindent In this 
article, we prove the case $t=0$ of Conjectures \ref{conj6.2a} and \ref{conj6.2b}. \smallskip \noindent The next conjecture from \cite{A12} is a symplectic, respectively orthogonal, counterpart to the hook-content identity of Stanley, which is also in the spirit of Nekrasov-Okounkov's hook formula. \begin{conjecture} [\cite{A12}, Conjecture 6.3(a)] \label{conj6.3a} For $t$ an indeterminate, we have $$\sum_{n\geq0}q^n\sum_{\lambda\vdash n}\prod_{u\in\lambda}\frac{t+(c_{sp}^{\lambda}(u))^2}{(h^{\lambda}(u))^2}= \prod_{j\geq1}\frac1{(1-q^{4j-2})(1-q^j)^t}= \sum_{n\geq0}q^n\sum_{\lambda\vdash n}\prod_{u\in\lambda}\frac{t+(c_{O}^{\lambda}(u))^2}{(h^{\lambda}(u))^2}.$$ \end{conjecture} \smallskip \noindent We prove the cases $t=0$ and $t=-1$ of Conjecture \ref{conj6.3a}. \smallskip \noindent The special cases of the above conjectures will be proved in section \ref{no-thms}. We denote by $sy\mathcal P_0$ the set of partitions $\lambda$ such that $c^{\lambda}_{sp}(u)\neq 0$ for all cells $u\in \lambda$. To prove the case $t=0$ of Conjectures \ref{conj6.2a}, \ref{conj6.2b} and \ref{conj6.3a}, we describe the partitions in $sy\mathcal P_0$ explicitly. The {\it nested hooks} of a partition $\lambda$ are hook partitions whose Young diagrams consist of a cell $(i,i)$ on the main diagonal of $\lambda$ together with all cells directly to the right and directly below the cell $(i,i)$. Thus, the $i^{th}$-hook of $\lambda$ is the partition $(\lambda_i-i+1, 1^{\lambda'_i-i})$. In section \ref{no-thms} we prove the following characterization. \begin{theorem} \label{non-zero cont} $\lambda\in sy\mathcal P_0$ if and only if $\lambda=\emptyset$ or all nested hooks $(a, 1^b)$ of $\lambda$ satisfy $a=b$. \end{theorem} \noindent Using the transformation of {\it straightening} nested hooks, i.e., transforming each nested hook $(a, 1^b)$ into a part equal to $a+b$, one can easily see that partitions of $n$ in $sy\mathcal P_0$ are in bijection with partitions of $n$ into distinct even parts. 
Therefore \begin{equation}\label{dist-parts} |sy\mathcal P_0(n)|=p(n \, \big| \text{ distinct even parts}). \end{equation} \smallskip \noindent In section \ref{beck-section}, we prove two Beck-type companion identities to \eqref{dist-parts}. These give combinatorial interpretations for the excess of the number of parts in all partitions in $sy\mathcal P_0(n)$ over the number of parts in all partitions in $\mathcal P(n \, \big| \text{ distinct even parts})$. The combinatorial description in our first Beck-type identity involves partitions with even rank. In section \ref{odd-even rank}, we examine the parity of $p(n\mid \text{distinct parts, odd rank})$ and $p(n\mid \text{distinct parts, even rank})$. In section \ref{bi-hooks}, we investigate the connection between the sum of all hook-lengths of a partition $\lambda$ and the sum of all inversions in the binary representation of $\lambda$. Finally, in section \ref{two stats}, we study the $x$-ray list of a partition, an analogue to the $x$-ray of a permutation \cite{BMPS}, and we determine links with $p(n \mid \text{distinct parts})$ and also with the number of partitions maximally contained in a given staircase partition. \section{Proofs of Theorem \ref{non-zero cont} and special cases of Conjectures \ref{conj6.2a}, \ref{conj6.2b} and \ref{conj6.3a}}\label{no-thms} \noindent In this section, we first prove Theorem \ref{non-zero cont} and derive two corollaries which lead to the proofs for the case $t=0$ of Conjectures \ref{conj6.2a}, \ref{conj6.2b} and \ref{conj6.3a}. We then conclude by proving the case $t=-1$ of Conjecture \ref{conj6.3a}. \smallskip \noindent We start with basic observations about the symplectic and orthogonal contents of cells in partitions.
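Theorem \ref{non-zero cont} and the identity \eqref{dist-parts} can be checked by brute force for small $n$; a short Python sketch (ad hoc enumeration, for illustration only):

```python
def partitions(n, max_part=None):
    """Enumerate partitions of n as nonincreasing tuples."""
    if max_part is None:
        max_part = n
    if n == 0:
        yield ()
        return
    for k in range(min(n, max_part), 0, -1):
        for rest in partitions(n - k, k):
            yield (k,) + rest

def sp_content(lam, i, j):
    """Symplectic content of cell (i, j), 1-indexed."""
    conj = [sum(1 for p in lam if p >= c) for c in range(1, lam[0] + 1)]
    if i > j:
        return lam[i - 1] + lam[j - 1] - i - j + 2
    return i + j - conj[i - 1] - conj[j - 1]

def in_syP0(lam):
    """True when every cell of lam has non-zero symplectic content."""
    return all(sp_content(lam, i, j) != 0
               for i, p in enumerate(lam, 1) for j in range(1, p + 1))

# Compare |syP_0(n)| with p(n | distinct even parts) for n = 1..12
counts = {}
for n in range(1, 13):
    lhs = sum(1 for lam in partitions(n) if in_syP0(lam))
    rhs = sum(1 for lam in partitions(n)
              if all(p % 2 == 0 for p in lam) and len(set(lam)) == len(lam))
    counts[n] = (lhs, rhs)
```

For odd $n$ both counts are $0$, consistent with the fact that partitions whose nested hooks satisfy $a=b$ have even size.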
\begin{remark} \label{remark sp-o} The definitions imply that, for any partition $\lambda$ and any cell $(i,j)\in \lambda$, we have $$c^\lambda_{sp}(i,j)=-c^{\lambda'}_O(j,i).$$ \smallskip \noindent Moreover, since $h^\lambda(i,j)=h^{\lambda'}(j,i)$, for any positive integer $n$, we have $$\sum_{\lambda\vdash n}\prod_{u\in\lambda}\frac{c_{sp}^{\lambda}(u)}{h^{\lambda}(u)}= (-1)^n\sum_{\lambda\vdash n}\prod_{u\in\lambda}\frac{c_{O}^{\lambda}(u)}{h^{\lambda}(u)}.$$ \end{remark} \smallskip \noindent Given a partition $\lambda$, by the {\it outer hook} of $\lambda$ we mean the hook determined by the cell $(1,1)$, i.e., the partition $(\lambda_1, 1^{\lambda'_1-1})$. \begin{remark} \label{R1} Let $\mu$ be the partition obtained from $\lambda$ by removing the outer hook. Then, it is immediate from the definitions that $$c^\mu _{sp}(i,j)=c^\lambda _{sp}(i+1,j+1) \qquad \text{and} \qquad c^\mu _{O}(i,j)=c^\lambda _{O}(i+1,j+1).$$ Thus, the symplectic and orthogonal contents of a cell are preserved by the operation of adding or removing an outer hook. \end{remark} \smallskip \noindent The \textit{rank}, $r(\lambda)$, of a partition $\lambda$ is defined as the difference between the largest part and the length of the partition, i.e., $r(\lambda)=\lambda_1-\lambda'_1$. We denote by $r_j(\lambda)$ the rank of the partition obtained from $\lambda$ by removing the first $j-1$ outer hooks, successively. Thus, $r_j(\lambda)=\lambda_j-\lambda'_j$. \begin{remark}\label{remark c-h} For any partition $\lambda$ and any cell $(i,j)\in \lambda$, we have $$c^\lambda_{sp}(i,j)=\begin{cases} r_j(\lambda)+1+h^\lambda(i,j) & \text{ if } i>j\\ r_i(\lambda)+1-h^\lambda(i,j) & \text{ if } i\leq j.\end{cases} $$ \end{remark} \smallskip \noindent Next, we determine explicitly the symplectic contents of cells in hook partitions $\lambda=(a, 1^b)$ with $a\geq 1, b\geq 0$.
In detail, we have \begin{align*}c^\lambda_{sp}(1,1)& =-2b, & \ \\ c^\lambda_{sp}(1,j)&=-b+j-1 & \text{ for } &2\leq j\leq a,\\ c^\lambda_{sp}(i,1)&=a-i+2 & \text{ for } &2\leq i\leq b+1. \end{align*} The contents $c^\lambda_{sp}(1,j)$, for $j\geq 2$, are the consecutive integers between $-b+1$ and $a-b-1$, increasing from left to right. The contents $c^\lambda_{sp}(i,1)$, for $i\geq 2$, are the consecutive integers between $a-b+1$ and $a$, decreasing from top to bottom. These observations lead to the following characterization of hook partitions when all symplectic contents are unequal to a fixed integer $t$. \newpage \begin{proposition}\label{not t} Let $t$ be an integer. A hook partition $\lambda=(a, 1^b)$, with $a\geq 1, b\geq 0$, satisfies $c^\lambda_{sp}(i,j)\neq t$ for all cells $(i,j)\in \lambda$ if and only if all of the following conditions are satisfied: \begin{itemize} \item[(i)] $b\neq -t/2$; \item[(ii)] if $a\geq 2$, then $b<1-t$ or $a-b-1<t$; \item[(iii)] if $b\geq 1$, then $t<a-b+1$ or $a<t$. \end{itemize} \end{proposition} \noindent The cases needed for our proofs are given in the next two corollaries. Recall that $sy\mathcal P_0$ denotes the set of partitions having all symplectic contents non-zero. We also denote by $sy\mathcal P_{\pm1}$ the set of partitions with all symplectic contents not equal to $\pm 1$. \begin{corollary} \label{cor 0} For $a\geq 1, b\geq 0$, the hook partition $\lambda =(a, 1^b)$ is in $sy\mathcal P_0$ if and only if $a=b$. \end{corollary} \begin{proof} Choose $t=0$ in Proposition \ref{not t} and suppose $(a, 1^b)\in sy\mathcal P_0$. Proposition \ref{not t}(i) implies $b\geq 1$. If $a=1$, by Proposition \ref{not t}(iii), we gather $b<2$. Therefore $b=1=a$. If $a\geq 2$, then, by (ii) and (iii) of Proposition \ref{not t}, $a=b$. Conversely, if $a=b$, then the conditions of Proposition \ref{not t} are satisfied.
\end{proof} \begin{corollary} \label{cor 1} The only hook partitions in $sy\mathcal P_{\pm 1}$ are $(1)$ and $(2,1)$. \end{corollary} \begin{proof} The two partitions $(1)$ and $(2,1)$ satisfy the conditions of Proposition \ref{not t} with $t=\pm 1$. Consider $(a, 1^b)$ with $a\geq 1, b\geq 0$ and $(a, 1^b)\not \in \{(1),(2,1)\}$. Condition (i) is satisfied. If $b=0$, condition (ii) with $t=1$ is not satisfied for any $a\geq 2$. If $a=1$, then condition (iii) with $t=1$ is not satisfied for any $b\geq 1$. Suppose $a\geq 2$ and $b\geq 1$. Conditions (ii) and (iii) with $t=1$ imply $a=b+1$, and condition (ii) with $t=-1$ then forces $b<2$, so $b=1$ and $a=2$. Thus, the only hook partitions in $sy\mathcal P_{\pm 1}$ are $(1)$ and $(2,1)$. \end{proof} \noindent Before we prove Theorem \ref{non-zero cont} we introduce one more concept. The {\it Durfee square} of a partition $\lambda$ is the largest partition of the form $(m^m)$ whose Young diagram fits inside the Young diagram of $\lambda$. The length of the Durfee square of $\lambda$ equals the number of nested hooks of $\lambda$. \begin{proof}[Proof of Theorem \ref{non-zero cont}] Clearly $\emptyset\in sy\mathcal P_0$. If $\lambda \neq \emptyset$, we prove by induction on the length $m$ of the Durfee square that $\lambda \in sy\mathcal P_0$ if and only if every nested hook $(a, 1^b)$ of $\lambda$ satisfies $a=b$. \smallskip \noindent If $m=1$, the statement is true by Corollary \ref{cor 0}. \smallskip \noindent Next, assume that the statement of the theorem is true for every partition with Durfee square of length less than $k$ and let $\lambda$ be a partition with Durfee square of length $k$. Suppose $\lambda\in sy\mathcal P_0$. By Remark \ref{R1}, we only need to show that the outer hook, $(\lambda_1, 1^{\lambda'_1-1})$, satisfies $\lambda_1=\lambda'_1-1$. \smallskip \noindent If $\lambda_1<\lambda'_1-1$, we must have $\lambda_{\lambda_1+2}=1$.
Otherwise, if $\mu$ is the partition obtained from $\lambda$ by removing the outer hook, we have $\mu'_1\geq \lambda_1+1\geq\mu_1+2$ which is impossible since $\mu \in sy\mathcal P_0$ and hence $\mu_1=\mu'_1-1$. Consequently, we see that $c^\lambda_{sp}(\lambda_1+2, 1)=\lambda_{\lambda_1+2}+\lambda_1-(\lambda_1+2)-1+2= 0$. Similarly, if $\lambda_1>\lambda'_1-1$, we have $\lambda'_{\lambda'_1}=1$ and thus $c_{sp}(1, \lambda'_1)=1+\lambda'_1-\lambda'_1-\lambda'_{\lambda'_1}=0$. Therefore $\lambda_1=\lambda'_1-1$. \smallskip \noindent Finally, if every nested hook $(a, 1^b)$ of $\lambda$ satisfies $a=b$, by induction, $c^\lambda_{sp}(i,j)\neq 0$ for all $i,j\geq 2$. Moreover, since $\lambda_1=\lambda'_1-1$, for $1\leq j\leq \lambda_1$ we have $$ c^{\lambda}_{sp}(1,j)=1+j-\lambda'_1-\lambda'_j\leq 1+\lambda_1-\lambda'_1-\lambda'_j=-\lambda'_j<0,$$ and for $2\leq i\leq \lambda'_1$ we have $ c^{\lambda}_{sp}(i,1)=\lambda_i+\lambda_1-i-1+2\geq\lambda_i+\lambda_1-\lambda'_1+1 \geq \lambda_i>0$. Thus, $\lambda \in sy\mathcal P_0$. \end{proof} \begin{corollary}\label{Euler} For any positive integer $n$ we have $$|sy\mathcal P_0(n)|=p(n \, \big| \text{ parts} \equiv 2\bmod 4).$$\end{corollary} \begin{proof} Apply \eqref{dist-parts} and Euler's identity \cite[(1.2.5)]{A98} with $q$ replaced by $q^2$. \end{proof} \smallskip \noindent From Theorem \ref{non-zero cont} and Remark \ref{remark c-h} we have the following. \begin{corollary} \label{cc=h} For $\lambda\neq \emptyset$, we have $\lambda \in sy\mathcal P_0$ if and only if $$c^\lambda_{sp}(i,j)=\begin{cases} h^\lambda(i,j) & \text{ if } i>j \\ -h^\lambda(i,j) & \text{ if } i\leq j,\end{cases} $$ for all $(i,j)\in \lambda$. \end{corollary} \begin{proof} The assertion holds due to the fact that all nested hooks $(a, 1^b)$ of $\lambda$ satisfy $a=b$ if and only if $r_j(\lambda)=-1$ for all $j$ no larger than the length of the Durfee square of $\lambda$. 
\end{proof} \begin{remark} From Corollary \ref{cc=h}, if $\lambda \in sy\mathcal P_0$, we have $c^\lambda_{sp}(i,j)>0$ if $i> j$ and $c^\lambda_{sp}(i,j)<0$ if $i\leq j$. \end{remark} \noindent From Corollaries \ref{Euler} and \ref{cc=h} and Remark \ref{remark sp-o} we obtain the proof for the case $t=0$ of Conjecture \ref{conj6.3a}. \begin{theorem} [\cite{A12}, Conjecture 6.3(b)] \label{conj6.3b} $$\sum_{n\geq0}q^n\sum_{\lambda\vdash n}\prod_{u\in\lambda}\frac{(c_{sp}^{\lambda}(u))^2}{(h^{\lambda}(u))^2}= \prod_{j\geq1}\frac1{1-q^{4j-2}}=\sum_{n\geq0}q^n\sum_{\lambda\vdash n}\prod_{u\in\lambda}\frac{(c_{O}^{\lambda}(u))^2}{(h^{\lambda}(u))^2}.$$ \end{theorem} \smallskip \noindent Next, we prove the case $t=0$ of Conjectures \ref{conj6.2a} and \ref{conj6.2b}. \begin{theorem} [\cite{A12}, Conjecture 6.2(c)] \label{conj6.2c} $$\sum_{n\geq0}q^n\sum_{\lambda\vdash n}\prod_{u\in\lambda}\frac{c_{sp}^{\lambda}(u)}{h^{\lambda}(u)}= \prod_{j\geq1}\frac1{1+q^{4j-2}}=\sum_{n\geq0}q^n\sum_{\lambda\vdash n}\prod_{u\in\lambda}\frac{c_{O}^{\lambda}(u)}{h^{\lambda}(u)}.$$ \end{theorem} \begin{proof} From Theorem \ref{non-zero cont}, if $n$ is odd, $sy\mathcal P_0(n)=\emptyset$. Assume $n$ is even. If $\lambda\not\in sy\mathcal P_0$, then some cell of $\lambda$ has zero symplectic content and the corresponding product vanishes. If $\lambda \in sy\mathcal P_0$, by Corollary \ref{cc=h}, for each $u\in \lambda$, we have $\displaystyle \frac{c^\lambda_{sp}(u)}{h^\lambda(u)}=\pm 1$. Since all nested hooks $(a, 1^b)$ of $\lambda$ satisfy $a=b$, the number of cells $u\in \lambda$ with $\displaystyle \frac{c^\lambda_{sp}(u)}{h^\lambda(u)}=1$ equals $\displaystyle \frac{n}{2}$. Thus, if $n>0$ is even and $\lambda \in sy\mathcal P_0(n)$, $$\prod_{u\in\lambda}\frac{c_{sp}^{\lambda}(u)}{h^{\lambda}(u)}=\begin{cases} 1 & \text{ if } n\equiv 0\bmod 4\\ -1 & \text{ if } n\equiv 2\bmod 4.\end{cases}$$ On the other hand, $\displaystyle \prod_{j\geq 1}\frac{1}{1+q^{4j-2}}$ is the generating function for the number of partitions $\lambda\vdash n$ with parts congruent to $2\bmod 4$, each partition counted with weight $(-1)^{\ell(\lambda)}$.
If $\lambda$ is such a partition, then $$(-1)^{\ell(\lambda)}=\begin{cases} 1 & \text{ if } n\equiv 0\bmod 4\\ -1 & \text{ if } n\equiv 2\bmod 4.\end{cases}$$ This proves the left-hand identity of Theorem \ref{conj6.2c}. The right-hand identity follows from Remark \ref{remark sp-o}. \end{proof} \smallskip \noindent Theorems \ref{conj6.2c} and \ref{conj6.3b} lead to the following identity, as conjectured in \cite{A12}. \begin{corollary} [\cite{A12}, Conjecture 6.3(c)] \label{conj6.3c} For any positive integer $n$ we have $$(-1)^{\binom{n}2}\sum_{\lambda\vdash n}\prod_{u\in\lambda}\frac{(c_{sp}^{\lambda}(u))^2}{(h^{\lambda}(u))^2} =\sum_{\lambda\vdash n}\prod_{u\in\lambda}\frac{c_{sp}^{\lambda}(u)}{h^{\lambda}(u)}.$$ \end{corollary} \noindent Before proving the case $t=-1$ of Conjecture \ref{conj6.3a}, we characterize the partitions in $sy\mathcal P_{\pm 1}$. \smallskip\noindent Denote by $\delta_r$ the {\it staircase partition} with $r$ consecutive parts $\delta_r=(r, r-1, \ldots, 2,1)$. Notice that the length of the Durfee square of $\delta_r$ is $\lceil \frac{r}{2}\rceil$. \begin{theorem} \label{non-pm1 cont} $\lambda\in sy\mathcal P_{\pm 1}$ if and only if $\lambda=\emptyset$ or $\lambda=\delta_r$ for some $r$. \end{theorem} \begin{proof} Clearly $\emptyset\in sy\mathcal P_{\pm1}$. If $\lambda \neq \emptyset$, we prove the statement by induction on the length $m$ of the Durfee square of the partition. \smallskip \noindent If $m=1$, the statement is true by Corollary \ref{cor 1}. \smallskip \noindent For the inductive step, assume that if $\mu$ is a partition with Durfee square of length at most $m$, then $\mu\in sy\mathcal P_{\pm1}$ if and only if $\mu$ is a staircase partition. Let $\lambda\in sy\mathcal P_{\pm 1}$ be a partition with Durfee square of length $m+1$.
Then by the induction hypothesis, the partition $\lambda^-$ obtained from $\lambda$ by removing the outer hook is a staircase with Durfee square of length $m$, i.e., $\lambda^-=\delta_{k}$ with $k\in \{2m, 2m-1\}$. Conversely, it follows from Remark \ref{R1} that if $\lambda^-=\delta_{k}$ with $k\in \{2m, 2m-1\}$, then $c^\lambda_{sp}(i,j)\neq \pm 1$ for all cells $(i,j)\in \lambda$ with $i,j\geq 2$. Assume $\lambda$ is such that $\lambda^-=\delta_{k}$ with $k\in \{2m, 2m-1\}$. We have $$c^\lambda_{sp}(1,j)=\begin{cases}2-2\lambda'_1 & \text{ if } j=1\\ -\lambda'_1-k+2j-2 & \text{ if } 2\leq j\leq k+1\\ j-\lambda'_1 & \text{ if } j\geq k+2 \text{ \ (if any) } \end{cases} $$ and $$c^\lambda_{sp}(j,1)=\begin{cases}2-2\lambda'_1 & \text{ if } j=1\\ \lambda_1+k-2j+4 & \text{ if } 2\leq j\leq k+1\\ \lambda_1-j+2 & \text{ if } j\geq k+2 \text{ \ (if any). } \end{cases}$$ By construction, $\lambda_1, \lambda'_1\geq k+1$. \smallskip \noindent If $\lambda_1=\lambda'_1=k+1$, then $c^\lambda_{sp}(1,k+1)=-1$. \smallskip \noindent If $\lambda_1=k+1, \lambda'_1\geq k+2$, then $c^\lambda_{sp}(k+2,1)=1$. \smallskip \noindent If $\lambda_1\geq k+2$ and $\lambda'_1= k+1$, then $c^\lambda_{sp}(1,k+2)=1$. \smallskip \noindent If $\lambda'_1>\lambda_1\geq k+2$, then $c^\lambda_{sp}(\lambda_1+1,1)=1$. \smallskip \noindent If $\lambda_1\geq\lambda'_1\geq k+3$, then $c^\lambda_{sp}(1,\lambda'_1-1)=-1$. \smallskip \noindent If $\lambda_1>\lambda'_1=k+2$, then $c^\lambda_{sp}(1,k+3)=1$. \smallskip \noindent Finally, if $\lambda_1=\lambda'_1=k+2$, then $\lambda=\delta_{k+2}$ and $c^\lambda_{sp}(i,j)$ is even for all cells $(i,j)$; in particular, $\lambda\in sy\mathcal P_{\pm1}$. \end{proof} \noindent From Theorem \ref{non-pm1 cont} and Remark \ref{remark c-h} we have the following. \begin{corollary} \label{c=h} For $\lambda\neq \emptyset$, we have $\lambda \in sy\mathcal P_{\pm1}$ if and only if $$c^\lambda_{sp}(i,j)=\begin{cases} h^\lambda(i,j)+1 & \text{ if } i>j \\ -h^\lambda(i,j)+1 & \text{ if } i\leq j,\end{cases} $$ for all $(i,j)\in \lambda$.
\end{corollary}\begin{proof} The statement follows from the fact that $\lambda$ is a staircase partition if and only if $r_j(\lambda)=0$ for all $j$ no larger than the length of the Durfee square of $\lambda$. \end{proof} \begin{theorem}\label{t=-1} We have the generating function $$\sum_{n\geq0}q^n\sum_{\lambda\vdash n}\prod_{u\in\lambda}\frac{(c_{sp}^{\lambda}(u))^2-1}{(h^{\lambda}(u))^2}=\prod_{j\geq1}\frac{1-q^j}{1-q^{4j-2}}=\sum_{n\geq0}(-q)^{\binom{n+1}2}.$$ \end{theorem} \begin{proof} \smallskip \noindent By Theorem \ref{non-pm1 cont}, if $\lambda$ is not a staircase partition, then some cell of $\lambda$ has symplectic content $\pm 1$ and the product vanishes. Suppose $\lambda=\delta_k\vdash n$. For each $1\leq j\leq k$, we refer to the cells $(a,c)\in \delta_k$ with $a+c=j+1$ as the $j^{th}$ anti-diagonal of $\lambda$. Then, in the $j^{th}$ anti-diagonal we have $j$ cells with hook-length $2(k-j)+1$. Of these, $\lceil \frac{j}{2}\rceil$ cells have symplectic content $-2(k-j)$ and $\lfloor \frac{j}{2}\rfloor$ cells have symplectic content $2(k-j+1)$. Therefore, if $n$ is not a triangular number, $\sum_{\lambda\vdash n} \prod_{u\in\lambda}\frac{(c_{sp}^{\lambda}(u))^2-1}{(h^{\lambda}(u))^2}=0$ and if $n$ is the triangular number $\binom{k+1}{2}$, then \begin{align*} \sum_{\lambda\vdash n}&\prod_{u\in\lambda}\frac{(c_{sp}^{\lambda}(u))^2-1}{(h^{\lambda}(u))^2} \\ = &\prod_{j=1}^k\frac{(4(k-j)^2-1)^{\lceil\frac{j}{2}\rceil}(4(k-j+1)^2-1)^{\lfloor\frac{j}{2}\rfloor}}{(2(k-j)+1)^{2j}}\\ =& \prod_{j=1}^k\frac{(2(k-j)-1)^{\lceil\frac{j}{2}\rceil}(2(k-j+1)+1)^{\lfloor\frac{j}{2}\rfloor}}{(2(k-j)+1)^{j}}\\ =& \frac{ (-1)^{\lceil\frac{k}{2}\rceil}\prod_{j=2}^k(2(k-j+1)-1)^{\lceil\frac{j-1}{2}\rceil} \cdot(2k-1) \prod_{j=2}^{k-1}(2(k-j)+1)^{\lfloor\frac{j+1}{2}\rfloor}}{\prod_{j=1}^k(2(k-j)+1)^{j}}.
\end{align*} Since $\lceil\frac{j-1}{2}\rceil+\lfloor \frac{j+1}{2}\rfloor=j$ for all $2\leq j\leq k-1$, we obtain $$\sum_{\lambda\vdash n} \prod_{u\in\lambda}\frac{(c_{sp}^{\lambda}(u))^2-1}{(h^{\lambda}(u))^2} =(-1)^{\lceil \frac{k}{2}\rceil}=(-1)^{\binom{k+1}{2}}=(-1)^n.$$ Lastly, $$ \prod_{j\geq1}\frac{1-q^j}{1-q^{4j-2}}=\sum_{n\geq0}(-q)^{\binom{n+1}2}$$ follows from Gauss' identity \cite[(2.2.13)]{A98}, with $q$ replaced by $-q$, and Euler's identity \cite[(1.2.5)]{A98}, with $q$ replaced by $q^2$. \end{proof} \begin{remark} Let $\beta(n)$ be the number of {\it overcubic partitions} of an integer $n$, see \cite{K12} and references therein. The generating function for $\beta(n)$ is $$\sum_{n\geq0}\beta(n)\,q^n=\prod_{j\geq1}\frac1{(1-q^{4j-2})(1-q^j)^2}=\prod_{j\geq1}\frac{1+q^{2j}}{(1-q^j)^2}.$$ Thus, the case $t=2$ of Conjecture \ref{conj6.3a}, which is still an open problem, becomes $$\beta(n)=\sum_{\lambda\vdash n}\prod_{u\in\lambda}\frac{2+(c_{sp}^{\lambda}(u))^2}{(h^{\lambda}(u))^2}.$$\end{remark} \smallskip \noindent \section{Beck-Type identities}\label{beck-section} \smallskip \noindent In this section, we consider again the identity \eqref{dist-parts}, $$p(n\mid \mbox{distinct even parts})=\vert sy\mathcal P_0(n)\vert,$$ and establish two Beck-type companion identities for \eqref{dist-parts}. If $\mathcal P(n \mid X)$ denotes the set of partitions of $n$ satisfying condition $X$ and $p(n\mid X)=|\mathcal P(n\mid X)|$, a Beck-type companion identity to $p(n\big| X)= p(n\mid Y)$ is an identity that equates the excess of the number of parts of all partitions in $\mathcal P(n\mid X)$ over the number of parts of all partitions in $\mathcal P(n\mid Y)$ with (a multiple of) the number of partitions of $n$ satisfying a condition that is a slight relaxation of $X$ (or $Y$). \smallskip\noindent Recall that $sy\mathcal P_0(n)$ is the set of partitions of $n$ for which the symplectic content of all cells is non-zero.
From the proof of Theorem \ref{non-zero cont}, these partitions are precisely those which are {\it almost self-conjugate}, i.e., each nested hook has leg $=$ arm $+1$. Moreover, if the Durfee square has side length $m$, then the $(m+1)^{st}$ part of the partition is $m$ and removing a box from each of the first $m$ columns of the Young diagram leaves a self-conjugate partition. \smallskip\noindent In \cite{AB19}, the authors give a Beck-type companion identity for $$p(n\mid \mbox{distinct odd parts})=p(n\mid \mbox{self-conjugate}),$$ and the work of this section has many similarities with \cite{AB19}. \smallskip \noindent Denote by $s_{c'}(n)$ the number of parts of all partitions in $sy\mathcal P_0(n)$ and by $s_e(n)$ the number of parts of all partitions of $n$ into distinct even parts. Before we introduce our first Beck-type identity, we need a definition. Recall that the rank of a partition $\lambda$ is the difference, $\lambda_1-\ell(\lambda)$, between the largest part in $\lambda$ and the length of $\lambda$. In \cite{BG}, the $M_2$-rank of a partition $\lambda$ is defined as the difference between the largest part and the number of parts in the $\text{mod } 2$ diagram of $\lambda$, that is, $$M_2(\lambda)=\left\lceil\frac{\lambda_1}{2}\right\rceil-\ell(\lambda).$$ \begin{theorem}\label{first beck} For all $n > 0$, we have $s_{c'}(n)-s_e(n)$ equals twice the number of partitions of $n$ into even parts with exactly one even part repeated plus the number of partitions of $n$ into distinct even parts with even $M_2$-rank.
\end{theorem} \begin{proof} We use the notation $D f(z,q)$ to mean the derivative of $f(z,q)$ with respect to $z$ evaluated at $z=1$, i.e., $$Df(z,q):=\left.\left(\frac{\partial}{\partial z}f(z,q)\right)\right|_{z=1}.$$ Note that if $f(z,q)$ is a partition generating function wherein the exponent of $q$ keeps track of the number being partitioned and the exponent of $z$ is the number of parts, then $Df(z,q)$ is the generating function for the number of parts in the partitions considered. \noindent We denote the generating function for $s_{c'}(n)$ by $S_{c'}(q)$. Thus, $$S_{c'}(q)=\sum_{n\geq 0} s_{c'}(n)q^n .$$ \noindent To obtain the bivariate generating function for the partitions in $sy\mathcal P_0$, with the exponent of $z$ keeping track of the number of parts, we use the bivariate generating function for self-conjugate partitions \cite{AB19} given by $$ \sum_{m\geq 0}\frac{z^{m}q^{m^2}}{(zq^2;q^2)_m}.$$ Given a partition in $sy\mathcal P_0$ with Durfee square of side length $m$, we remove one box from each of the first $m$ columns of the Young diagram to obtain a self-conjugate partition. Thus, the bivariate generating function for partitions in $sy\mathcal P_0$, with the exponent of $z$ keeping track of the number of parts, is \begin{equation}\label{bivp_0}F(q;z):=\sum_{m\geq 1}\frac{z^{m+1}q^{m^2+m}}{(zq^2;q^2)_m}.\end{equation} \smallskip \noindent We begin by writing $F(q;z)$ as a limit, namely $$F(q;z)=\lim_{\tau\to 0}z\sum_{m\geq 1}\frac{(-\frac{q^2}{\tau};q^2)_m(q^2;q^2)_m z^m\tau^m}{(q^2;q^2)_m(zq^2;q^2)_m}.$$ Next, we apply the transformation on the last line of pg. 38 of \cite{A98} in which we first replace $q$ by $q^2$ and then substitute $a= -\frac{q^2}{\tau}, b=q^2, c=zq^2, t=z\tau$. We follow this through the limit as $\tau \to 0$. We obtain \begin{equation}F(q;z)=-z+ z(1-z)\sum_{m\geq 0}(-q^2; q^2)_m z^m.
\end{equation} \noindent Applying the operator $D$, we obtain \begin{align*} S_{c'}(q)=&D F(q;z) \\ =&-1+ \lim_{z\rightarrow 1^{-}}(1-z)\sum_{m\geq0}(-q^2;q^2)_mz^m+D(1-z)\sum_{m\geq 0}(-q^2; q^2)_m z^m \\ =&-1+\sum_{m\geq0}\frac{q^{m^2+m}}{(q^2;q^2)_m}+D(1-z)\sum_{m\geq 0}(-q^2; q^2)_m z^m \\ =&-1+(-q^2;q^2)_{\infty}+D(1-z)\sum_{m\geq 0}(-q^2; q^2)_m z^m. \end{align*} \noindent To find $D(1-z)\sum_{m\geq 0}(-q^2; q^2)_m z^m$ we use \cite[Proposition 2.1 and Theorem 1]{AJO01}. When using Theorem 1 in \cite{AJO01}, we first replace $q$ by $q^2$ and then set $a=0$ and $t=-q^2$. Thus, \begin{align*} D(1-z)\sum_{m\geq 0}(-q^2; q^2)_m z^m &=\sum_{n\geq0}((-q^2;q^2)_{\infty}-(-q^2;q^2)_n) \\ &=\frac12\sum_{n\geq1}\frac{q^{n(n-1)}}{(-q^2;q^2)_{n-1}}+(-q^2;q^2)_{\infty}\left(-\frac12+\sum_{m\geq1}\frac{q^{2m}}{1-q^{2m}}\right). \end{align*} Therefore, the generating function for the number of parts of all partitions in $sy\mathcal P_0(n)$ can be written as \newpage \begin{align*} S_{c'}(q)&=D F(q;z) \\ &= -1+\frac{1}{2} \sum_{n=1}^\infty \frac{q^{n(n-1)}}{(-q^2;q^2)_{n-1}}+(-q^2;q^2)_\infty \left(\frac12+\sum_{m\geq 1}\frac{q^{2m}}{1-q^{2m}}\right). \end{align*} We notice that $$\sum_{n\geq1}\frac{q^{n(n-1)}}{(-q^2;q^2)_{n-1}}=\sigma(q^2),$$ where $\sigma(q^2)$ is the expression (1.1) in \cite{ADH88}. There, it is shown that $\sigma(q)=\sum_{n=0}^\infty S(n)q^n$, where $S(n)$ counts the number of partitions of $n$ into distinct parts with even rank minus the number of partitions of $n$ into distinct parts with odd rank. Thus, $\sigma(q^2)=\sum_{n\geq0}\widetilde{S}(n)q^n$, where $\widetilde{S}(n)$ is the number of partitions of $n$ into distinct even parts with even $M_2$-rank minus the number of partitions of $n$ into distinct even parts with odd $M_2$-rank. Then, \begin{equation}\label{m2-rank id} \frac{1}{2} \left(\sum_{n=1}^\infty \frac{q^{n(n-1)}}{(-q^2;q^2)_{n-1}}+(-q^2;q^2)_\infty\right)=\sum_{n\geq 0}p(n\mid \text{distinct even parts, even $M_2$-rank})\,q^n.
\end{equation} \smallskip \noindent Next, we observe that $(-q^2;q^2)_\infty\sum_{m\geq 1}\frac{q^{2m}}{1-q^{2m}}$ is the generating function for the cardinality of $$\mathcal E(n)= \{(\lambda, (2m)^k)\mid \lambda \text{ has distinct even parts, }m,k\geq 1,|\lambda|+2mk=n \}.$$ Clearly, the subset of $\mathcal E(n)$ consisting of pairs $(\lambda, (2m))$, where $2m$ is not a part of $\lambda$, is in one-to-one correspondence with the set of parts of all partitions of $n$ into distinct even parts. To see this, notice that if $\mu$ is a partition into distinct even parts, for each part $2t$ in $\mu$, we can create a pair $(\mu\setminus (2t), (2t))\in \mathcal E(n)$. Here and throughout, if $\eta$ is a partition whose parts (with multiplicity) are also parts of $\mu$, then $\mu\setminus \eta$ stands for the partition obtained from $\mu$ by removing all parts of $\eta$ (with multiplicity). \smallskip \noindent Now, consider the remaining pairs of partitions $(\lambda, (2m)^k)$ in $\mathcal E(n)$, i.e., pairs with $k\geq 2$ or pairs with $k=1$ in which $2m$ is a part of $\lambda$. For each such pair $(\lambda, (2m)^k)$, we create the partition $\mu=\lambda \cup (2m)^k$. Then, $\mu$ is a partition into even parts with exactly one even part repeated. Each such partition is obtained twice: if $2t$ is the repeated part of $\mu$ and it appears with multiplicity $b\geq 2$, then $\mu$ is obtained from $(\mu\setminus (2t)^b, (2t)^b)$ and also from $(\mu\setminus (2t)^{b-1}, (2t)^{b-1})$. \smallskip \noindent This completes the proof of the theorem. \end{proof} \smallskip \noindent We proceed to derive a weighted Beck-type companion identity for \eqref{dist-parts}. \begin{theorem}\label{second beck} For $n > 0$, $s_{c'}(n)-2s_e(n)$ equals the number of partitions of $n$ into even parts with exactly one even part repeated, where a partition whose repeated part is not the smallest is counted with weight $2$.
\end{theorem} \begin{proof} In addition to the notation introduced in the proof of Theorem \ref{first beck}, we denote by $S_e(q)$ the generating function for $s_e(n)$. Thus, $$S_e(q)=\sum_{n\geq 0} s_e(n)q^n.$$ The generating function for partitions into distinct even parts, wherein the exponent of $z$ keeps track of the number of parts, is \begin{equation}\label{biv_even} E(q;z):=\sum_{m\geq 0}\frac{z^mq^{m^2+m}}{(q^2;q^2)_m}.\end{equation} To see this, given a partition $\lambda$ with distinct even parts, remove an even staircase, i.e., subtract $2, 4, \ldots$ from the parts of $\lambda$, starting with the smallest part. We are left with a partition into even parts. In $\lambda$ there are as many parts as the height of the even staircase removed. \smallskip \noindent Hence, using \eqref{bivp_0} and \eqref{biv_even}, we have \begin{align} S_{c'}(q)-S_e(q) & = D\left(\sum_{m\geq 1}\frac{z^{m+1}q^{m^2+m}}{(zq^2;q^2)_m}-\sum_{m\geq 0}\frac{z^mq^{m^2+m}}{(q^2;q^2)_m} \right)\nonumber \\ & =\sum_{m\geq 1}\left(\frac{(m+1)q^{m^2+m}}{(q^2;q^2)_m}+ D \frac{q^{m^2+m}}{(zq^2;q^2)_m}-\frac{mq^{m^2+m}}{(q^2;q^2)_m}\right)\nonumber\\ & = \sum_{m\geq 1}\frac{q^{m^2+m}}{(q^2;q^2)_m} + D \sum_{m\geq 1}\frac{q^{m^2+m}}{(zq^2;q^2)_m}.\label{gf} \end{align} Now \begin{align*}\sum_{m\geq 0}\frac{q^{m^2+m}}{(zq^2;q^2)_m}& = \lim_{\tau\to 0}\sum_{m\geq 0}\frac{(-\frac{q^2}{\tau};q^2)_m(q^2;q^2)_m\tau^m}{(q^2;q^2)_m(zq^2;q^2)_m}\\ & = \lim_{\tau\to 0} \frac{(q^2;q^2)_\infty(-q^2;q^2)_\infty}{(zq^2;q^2)_\infty(\tau;q^2)_\infty}\sum_{m\geq 0}\frac{(z;q^2)_m(\tau;q^2)_mq^{2m}}{(q^2;q^2)_m(-q^2;q^2)_m}. \end{align*} The last equality was obtained from Heine's transformation \cite[p.19, Cor. 2.3]{A98} by first replacing $q$ with $q^2$ and then substituting $a= -\frac{q^2}{\tau}, b=q^2, c=zq^2, t=\tau$.
\smallskip \noindent Therefore \begin{align} \sum_{m\geq 0}\frac{q^{m^2+m}}{(zq^2;q^2)_m} & = \frac{(q^2;q^2)_\infty(-q^2;q^2)_\infty}{(zq^2;q^2)_\infty} \label{S_c-S_o}\\ & \qquad + \frac{(q^2;q^2)_\infty(-q^2;q^2)_\infty}{(zq^2;q^2)_\infty}(1-z)\sum_{m\geq 1}\frac{(zq^2;q^2)_{m-1}q^{2m}}{(q^2;q^2)_m(-q^2;q^2)_m}. \nonumber \end{align} We apply $D$ to \eqref{S_c-S_o} and use \eqref{gf} to obtain \begin{align} S_{c'}(q)- S_e(q) \nonumber &= \sum_{m\geq 1}\frac{q^{m^2+m}}{(q^2;q^2)_m} +\sum_{m\geq 1}\frac{ (-q^2;q^2)_\infty q^{2m}}{1-q^{2m}} - \sum_{m\geq 1}\frac{(-q^2;q^2)_\infty q^{2m}}{(1-q^{2m})(-q^2; q^2)_m}\nonumber\\ & = \sum_{m\geq 1}\frac{q^{m^2+m}}{(q^2;q^2)_m} + \sum_{m\geq 1}\frac{(-q^2;q^2)_\infty q^{2m}}{1-q^{2m}} - \sum_{m\geq 1}\frac{q^{2m}(-q^{2m+2};q^2)_\infty}{1-q^{2m}}.\label{gfs} \end{align} The first expression in \eqref{gfs} is the generating function for partitions with distinct even parts. As in the proof of Theorem \ref{first beck}, the second expression is the generating function for $\mathcal E(n)$, i.e., pairs of partitions $(\lambda, (2m)^k)$, where $\lambda$ has distinct even parts, $m,k\geq 1$ and $|\lambda|+ 2mk=n$. Finally, the last expression generates partitions into even parts in which the smallest part may be repeated. These partitions correspond to pairs $(\lambda, (2m)^k)\in \mathcal E(n)$, such that the parts of $\lambda$ are greater than $2m$. \smallskip \noindent Then, the difference of the last two expressions is the generating function for $$\mathcal E'(n):=\{(\lambda, (2m)^k) \in \mathcal E(n) \mid \lambda \text{ has at least one part of size at most $2m$}\}.$$ As in the proof of Theorem \ref{first beck}, the subset of $\mathcal E'(n)$ consisting of pairs $(\lambda, (2m))$ such that $2m$ is not a part of $\lambda$ is in one-to-one correspondence with the set of parts that are not smallest in all partitions of $n$ into distinct even parts.
We can view the set of partitions of $n$ into distinct even parts (generated by the first sum of \eqref{gfs}) as corresponding to the set of smallest parts in all partitions of $n$ into distinct even parts. \smallskip \noindent Next, we consider the remaining pairs of partitions $(\lambda, (2m)^k)$ in $\mathcal E'(n)$, i.e., pairs with $k\geq 2$ or pairs with $k=1$ in which $2m$ is a part of $\lambda$. For each such pair $(\lambda, (2m)^k)$, we create the partition $\mu=\lambda \cup (2m)^k$. Then, $\mu$ is a partition into even parts with exactly one even part repeated. If the repeated part $2t$ of $\mu$ is the smallest, the partition $\mu$ is obtained exactly once: from $(\mu\setminus (2t)^{b-1}, (2t)^{b-1})$, where $b$ is the multiplicity of $2t$ in $\mu$. If the repeated part is not the smallest, each such partition is obtained twice: if $2t$ is the repeated part of $\mu$ and it appears with multiplicity $b\geq 2$, then $\mu$ is obtained from $(\mu\setminus (2t)^b, (2t)^b)$ and also from $(\mu\setminus (2t)^{b-1}, (2t)^{b-1})$. \smallskip \noindent This completes the proof of the theorem. \end{proof} \smallskip \noindent \section{On the parity of $p(n \mid \text{distinct parts, odd/even rank})$} \label{odd-even rank} \noindent As mentioned in the previous section, the generating function for the number of partitions of $n$ into distinct parts with even rank minus the number of partitions of $n$ into distinct parts with odd rank is given by \cite{ADH88} $$\sigma(q)=\sum_{n\geq0}\frac{q^{\binom{n+1}2}}{(-q;q)_n}.$$ Analogous to identity \eqref{m2-rank id}, we have \begin{equation}\label{rank id} \frac{1}{2} \left(\sum_{n=0}^\infty \frac{q^{\binom{n+1}2}}{(-q;q)_{n}}+(-q;q)_\infty\right)=\sum_{n\geq 0}p(n\mid \text{distinct parts, even rank})\,q^n.
\end{equation} Computed modulo $2$, the identity \eqref{rank id} together with the pentagonal number theorem implies \begin{equation}\label{PNT} \sum_{n\geq0}\frac{q^{\binom{n+1}2}}{(-q;q)_n}\equiv (-q;q)_{\infty}\equiv \sum_{m\in\mathbb{Z}}q^{\frac{m(3m-1)}2}.\end{equation} In this context, we consider the generating function \cite{ADH88} for the number $g(n,r)$ of partitions of $n$ into distinct parts having rank $r$: $$G(z,q):=\sum_{n\geq0}\sum_{r\geq 0} g(n,r)z^rq^n=\sum_{n\geq0}\frac{q^{\binom{n+1}2}}{(zq;q)_n}.$$ Thus, $\sigma(q)=G(-1,q)$. \smallskip \noindent To determine the parity behavior of the number of partitions of $n$ into distinct parts with odd, respectively even, rank, we first prove a preliminary result. \begin{lemma} \label{pre-lemma} We have \begin{align*} \sum_{n\geq0}\frac{q^{\binom{n+1}2}}{(zq;q)_n}=\sum_{n\geq0}q^{\frac{n(3n+1)}2}(1+q^{2n+1})\prod_{j=1}^n\frac{z+q^j}{1-zq^j}. \end{align*} \end{lemma} \begin{proof} In the Rogers-Fine identity \cite[p. 233, eq. (9.1.1)]{AB05}, set $\alpha=-\frac{q}{\tau}$ and $\beta=zq$, and let $\tau\rightarrow0$. Simplification yields the desired result. \end{proof} \begin{theorem} \label{odd-odd-odd-even} Let $n$ be a positive integer. Then, \begin{itemize}\item[(i)] $p(n \mid \text{distinct parts, odd rank})$ is odd if and only if $n=\frac{k(3k-(-1)^k)}2$ for some positive integer $k$; \item[(ii)] $p(n \mid \text{distinct parts, even rank})$ is odd if and only if $n=\frac{k(3k+(-1)^k)}2$ for some positive integer $k$.\end{itemize} \end{theorem} \begin{proof} As usual, the operator $D$ computes the derivative $\frac{d}{dz}$ and evaluates at $z=1$. All congruences in this proof are {\it modulo $2$}. It is straightforward to check that, expanded as a series, $D\frac{z+x}{1-zx}\equiv1$ and $\frac{1+x}{1-x}\equiv 1$, i.e., all coefficients of powers of $x$ except the constant term are even.
Using these facts and Lemma \ref{pre-lemma}, we have \begin{align} DG(z,q)&=\sum_{n\geq0}q^{\frac{n(3n+1)}2}(1+q^{2n+1})\sum_{j=1}^n\prod_{\substack{i=1 \\ i\neq j}}^n\frac{1+q^i}{1-q^i}D\frac{z+q^j}{1-zq^j}\label{der} \\ &\equiv \sum_{n\geq0}q^{\frac{n(3n+1)}2}(1+q^{2n+1})\sum_{j=1}^n1 \nonumber \\ &\equiv \sum_{\substack{n\geq0 \\ \text{$n$ odd}}} q^{\frac{n(3n+1)}2}(1+q^{2n+1})=\sum_{n\geq0}q^{(2n+1)(3n+2)}(1+q^{4n+3}), \nonumber \end{align} which is equivalent to the first assertion of the theorem. \smallskip \noindent To prove the second assertion, we calculate $D\,(zG(z,q))=G(1,q)+DG(z,q)$ by making use of \eqref{PNT} and \eqref{der} \begin{align*} D\,(zG(z,q))&=\sum_{n\geq1}\frac{q^{\binom{n+1}2}}{(q;q)_n}+\sum_{n\geq0}q^{\frac{n(3n+1)}2}(1+q^{2n+1})\sum_{j=1}^n\prod_{\substack{i=1 \\ i\neq j}}^n\frac{1+q^i}{1-q^i}D\frac{z+q^j}{1-zq^j} \\ &\equiv \sum_{m\in\mathbb{Z}^{*}}q^{\frac{m(3m-1)}2}+ \sum_{k\geq1}q^{\frac{k(3k-(-1)^k)}2} \\ &\equiv \sum_{k\geq1}q^{\frac{k(3k+(-1)^k)}2}, \end{align*} where $\mathbb{Z}^{*}$ denotes the set of non-zero integers. The proof is complete. \end{proof} \smallskip \noindent The preceding discussion leads to the following $q$-series identity. \begin{corollary} We have $$\sum_{n\geq1}\frac{q^{\binom{n+1}2}}{(q;q)_n}\sum_{m=1}^n\frac{q^m}{1-q^m} =\sum_{n\geq1}\frac{q^{\frac{n(3n+1)}2}(1+q^{2n+1})\,(-q;q)_n}{(q;q)_n}\sum_{j=1}^n\frac{1+q^{2j}}{1-q^{2j}}.$$ \end{corollary} \begin{proof} Both sides of the identity equal $D\, G(z,q)$. The left-hand side is obtained by logarithmic differentiation, while the right-hand side is obtained from \eqref{der}. \end{proof} \section{Binary inversion sums and sums of hook lengths} \label{bi-hooks} \noindent A pair $i<j$ is an {\it inversion} of a binary string $w=w_1\cdots w_n$ if $w_i=1>0=w_j$. The {\it binary inversion sum} of $w$ is given by the sum $s(w)=\sum(j-i)$ over all inversions of $w$. Given a binary string (or word) $w=w_1\cdots w_n$, the {\it position} of $w_i$ in $w$ is $i$.
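The statistic $s(w)$ is easy to compute directly. The following minimal Python sketch (the function name \texttt{inversion\_sum} is ours) implements the definition above.

```python
def inversion_sum(w):
    """Binary inversion sum s(w): the sum of j - i over all inversions,
    i.e., over all pairs of positions i < j (1-indexed) with w_i = 1 and w_j = 0."""
    ones = [i for i, c in enumerate(w, start=1) if c == '1']
    zeros = [j for j, c in enumerate(w, start=1) if c == '0']
    return sum(j - i for i in ones for j in zeros if i < j)
```

For instance, the word $110$ has inversions $(1,3)$ and $(2,3)$, so $s(110)=2+1=3$.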
\smallskip \noindent Given a partition $\lambda$, its {\it bit string}, $b(\lambda)$, is a binary word starting with $1$ and ending with $0$ defined as follows. Start at the SW corner of the Young diagram and travel along the outer profile going NE. For each step to the right, record a $1$; for each step up, record a $0$. By the {\it length} of $b(\lambda)$ we mean the number of digits in $b(\lambda)$. For instance, for the partition $\lambda= (5,3,3,2,1)$ of Example \ref{eg1} we get the bit string $b(\lambda)=1010100110$ and the length of $b(\lambda)$ is $10$. \smallskip \noindent Before we state the main result of this section, we give a motivating example. \begin{example} We list the partitions of $n=2, 3$, and $4$. For each partition $\lambda$, we give the bit string $b(\lambda)$, the binary inversion sum $s(b(\lambda))$, the list $H_\lambda$ of hook lengths of cells of $\lambda$, and the sum of all hook lengths of $\lambda$, i.e., $\sum_{u\in\lambda}h^{\lambda}(u)$. Notice that we give $H_{\lambda}$ as a list of sub-lists of hook lengths along rows of the Young diagram. \begin{center} \begin{tabular}{ |c|c|c|c|c| } \hline partition $\lambda$ & $b(\lambda)$ & $s(b(\lambda))$ & $H_{\lambda}$ & sum of elements in $H_{\lambda}$ \\ \hline (2) & 110 & 3 & [(2,1)] & 3 \\ (1,1) &100 & 3 & [(2),(1)]& 3 \\ (3) & 1110 & 6 & [(3,2,1)] & 6 \\ (2,1) & 1010 & 5 &[(3,1),(1)] & 5 \\ (1,1,1) & 1000 & 6 & [(3),(2),(1)] & 6 \\ (4) & 11110 & 10 & [(4,3,2,1)] & 10 \\ (3,1) & 10110 & 8 & [(4,2,1),(1)] & 8 \\ (2,2) & 1100 & 8 & [(3,2),(2,1)] & 8 \\ (2,1,1) & 10010 & 8 & [(4,1),(2),(1)] & 8 \\ (1,1,1,1) & 10000 & 10 & [(4),(3),(2),(1)] & 10 \\ \hline \end{tabular} \end{center} \end{example} \begin{theorem} Given any partition $\lambda$, the sum of its hook lengths, $\sum_{u\in\lambda}h^{\lambda}(u)$, equals the inversion sum, $s(b(\lambda))$, of its bit string $b(\lambda)$. \end{theorem} \begin{proof} Let $\lambda$ be a partition of $n$ and $b(\lambda)$ its bit string.
The number of parts $\ell(\lambda)$ of $\lambda$ equals the number of $0$ digits in $b(\lambda)$. The perimeter of $\lambda$, which is defined as the hook-length $h^{\lambda}(1,1)$, is equal to the length of $b(\lambda)$ minus $1$. \smallskip \noindent Each cell $(a,c)$ in the Young diagram of $\lambda$ defines a sub-diagram consisting of the cell $(a,c)$ and all cells of the Young diagram of $\lambda$ weakly to the right and weakly below $(a,c)$. Denote the partition corresponding to this sub-diagram by $\lambda^{(a,c)}$. Then $b(\lambda^{(a,c)})$ is the sub-string of $b(\lambda)$ starting at the $c^{th}$ digit from the left equal to $1$ (we do not count the $0$ digits) and ending after the $a^{th}$ digit from the right equal to $0$ (we do not count the $1$ digits). The length of $b(\lambda^{(a,c)})$ minus $1$ is precisely the hook length $h(a,c)$ in $\lambda$. Moreover, if the $c^{th}$ digit from the left equal to $1$ is in position $i$ in $b(\lambda)$ and the $a^{th}$ digit from the right equal to $0$ is in position $j$, then $j-i$ is precisely the length of $b(\lambda^{(a,c)})$ minus $1$, i.e., $h(a,c)$. Conversely, every inversion $i<j$ determines a sub-string of $b(\lambda)$ that starts with $1$ in position $i$ in $b(\lambda)$ and ends with $0$ in position $j$ in $b(\lambda)$. The length of the sub-string minus $1$ is precisely $j-i$ and is the hook length $h(a,c)$ where $c$ is the number of $1$ digits in positions $\leq i$ and $a$ is the number of $0$ digits in positions $\geq j$ in $b(\lambda)$. Thus, for each inversion $i<j$, the difference $j-i$ is the hook length of a unique cell in the Young diagram of $\lambda$. \end{proof} \section{On the $x$-ray list of a partition} \label{two stats} \noindent Let $\lambda=(\lambda_1,\lambda_2,\dots,\lambda_{\ell(\lambda)})$ be an integer partition of length $\ell(\lambda)$ and designate $m:=\max\{\lambda_1,\ell(\lambda)\}$. 
We construct an $m\times m$ matrix whose entry in the $i^{th}$ row and $j^{th}$ column is $1$ if $(i,j)$ is a cell in the Young diagram of $\lambda$ and $0$ otherwise. Since the Young diagram of a partition is also known as the Ferrers diagram, the matrix defined here is referred to as the {\it Ferrers matrix} of $\lambda$ on OEIS \cite{FMatrix}. \begin{example} \label{eg 3} Given the partition $\lambda=(4,3,1)\vdash 8$, we have $m=4$ and the corresponding $4\times 4$ matrix is shown below. $$\begin{bmatrix} 1&1&1&1 \\ 1&1&1&0 \\ 1&0&0&0 \\ 0&0&0&0 \end{bmatrix}$$ \end{example} \noindent Given a partition $\lambda$, we add the elements of each anti-diagonal in the Ferrers matrix of $\lambda$ to obtain a composition, denoted by $\lambda_x$, which we call the {\it $x$-ray list} of $\lambda$. For the partition $\lambda=(4,3,1)$ in Example \ref{eg 3}, we have $\lambda_x=(1,2,3,2,0,0,0)$. \smallskip \noindent Since the number of entries equal to $1$ in an anti-diagonal is invariant under conjugation, we have $\lambda_x=\lambda'_x$ for all partitions $ \lambda$. Thus, when studying $x$-ray lists, it is enough to consider partitions $\lambda$ satisfying $\lambda_1\geq \ell(\lambda)$. So, the Ferrers matrix is an $M\times M$ matrix, where $M=\lambda_1$. \smallskip \noindent The first result below proves that the $x$-ray lists of partitions are unimodal compositions. In addition, the sub-sequence of an $x$-ray list consisting of the initial terms up to the first occurrence of a peak is strictly increasing. \begin{lemma} \label{x-ray-lemma} If $\lambda$ is a partition, then $\lambda_x$ is of the form $(1,2,3,\dots,n,a_1,a_2,\dots,a_r)$, where $n\geq a_1\geq a_2\geq\cdots\geq a_r$. \end{lemma} \begin{proof} Let $\lambda$ be a partition such that $\lambda_1\geq\ell(\lambda)$ and let $M=\lambda_1$. Consider all initial anti-diagonals with no entries equal to $0$.
These contribute the following initial parts in the composition $\lambda_x$: \begin{itemize} \item[(i)]$1, 2, 3,\dots, n$ where $n\leq M$, or \medskip \item[(ii)] $1, 2, 3, \dots, M, M-1, M-2, \dots, M-j$. \end{itemize} \smallskip \noindent Let us consider the first anti-diagonal containing a $0$ entry. In case (i), the sum of its entries, $a_1$, will be at most $n$. In case (ii), the sum of its entries will be strictly less than $M-j$. Furthermore, zeros propagate zeros, i.e., in a column, below a $0$ there are only $0$ entries and, in a row, to the right of a $0$ there are only $0$ entries. Thus, for any string of consecutive zeros in a given anti-diagonal, there is a string with at least as many zeros in the next anti-diagonal. Hence, the sum of the entries of the next anti-diagonal cannot increase.\end{proof} \begin{example} In the partially filled Ferrers matrices below, $$\begin{bmatrix} 1&1&1&1&1 \\ 1&1&1&0&\square \\ 1&1&0&\square \\ 1&0&\square \\ 1&\square \end{bmatrix} \qquad \qquad \begin{bmatrix} 1&1&1&1&1 \\ 1&1&1&0&\square \\ 1&1&0&\square \\ 1&0&\square \\ 0&\square \end{bmatrix},$$ \bigskip \noindent the positions marked with $\square$ are forced to be filled with $0$.\end{example} \begin{theorem} \label{x-ray-thm} The number of different $x$-ray lists of $n$ equals the number of partitions of $n$ into distinct parts. \end{theorem} \begin{proof} We first show that if $\pi=(b_1, b_2, \ldots)$ and $\rho=(c_1, c_2, \ldots)$ are partitions of $n$ into distinct parts such that $\pi\neq \rho$, then their $x$-ray lists are different, i.e., $\pi_x\neq\rho_x$. Let $i$ be the smallest integer such that $b_i\neq c_i$, say $b_i>c_i$. In the Ferrers matrix of a partition with distinct parts, in an anti-diagonal, the entries SW of a $0$ are all equal to $0$. Therefore, the anti-diagonal of $\pi$ containing the end point of part $b_i$, i.e., the $(i+b_i-1)^{st}$ anti-diagonal, contains at least one more $1$ than the same anti-diagonal in $\rho$.
Then, $\pi_x\neq \rho_x$. Consequently, the number of $x$-ray lists is at least as large as the number of partitions into distinct parts. \smallskip \noindent On the other hand, each $x$-ray list, $\lambda_x$, of $n$ corresponds to a partition $\mu$ of $n$ with distinct parts as follows. If the $i^{th}$ entry of $\lambda_x$ is $r$, fill in the $i^{th}$ anti-diagonal of a matrix with $r$ entries equal to $1$ (starting in the first row of the matrix) followed by $0$ entries. Let $\phi(\lambda_x)$ be the partition whose Ferrers matrix is the matrix obtained above. By Lemma \ref{x-ray-lemma} and the construction of the matrix, $\phi(\lambda_x)$ is a partition with distinct parts. Hence, the number of $x$-ray lists is no more than the number of partitions of $n$ into distinct parts. The proof is complete. \end{proof} \noindent \begin{example} We illustrate the proof of Theorem \ref{x-ray-thm} for $n=7$. \bigskip \begin{center} \begin{tabular}{ |c|c|c| } \hline $\lambda$ with $\lambda_1\geq\ell(\lambda)$ & $\lambda_x$ & $\phi(\lambda_x)$ \\ \hline \multirow{1}{4em} {7} & (1,1,1,1,1,1,1) & 7 \\ \hline \multirow{1}{4em} {61} & (1,2,1,1,1,1) & 61 \\ \hline \multirow{2}{4em}{52 \\ 511} & (1,2,2,1,1) & 52 \\ & & \\ \hline \multirow{2}{4em}{43 \\ 411} & (1,2,2,2) & 43 \\ & & \\ \hline \multirow{3}{4em}{421 \\ 331 \\ 322 } & (1,2,3,1) & 421 \\ & & \\ & & \\ \hline \end{tabular} \end{center} \end{example} \smallskip \noindent We now explore a different aspect of $x$-ray lists of partitions. \smallskip\noindent We say that a partition $\mu$ is contained in a partition $\eta$ if $\mu_i\leq \eta_i$ for all $i$ (the partitions are padded with $0$ after the last part). Recall that $\delta_r$ denotes the staircase partition of length $r$. We say that $\lambda$ is {\it maximally contained} in $\delta_r$ if it is contained in $\delta_r$ but not in $\delta_{r-1}$. We consider the following motivating example. \begin{example} The table below \cite{STSP} is constructed as follows.
For $n\geq1$, the $n^{th}$ row records the number of partitions of $n$ that are maximally contained in $\delta_r$ for $1\leq r\leq n$. \begin{align*} &1 \\ &0, 2 \\ &0, 1, 2 \\ &0, 0, 3, 2 \\ &0, 0, 3, 2, 2 \\ &0, 0, 1, 6, 2, 2 \\ &0, 0, 0, 7, 4, 2, 2. \end{align*} For example, $\{(4), (3,1), (2,2), (2,1,1), (1,1,1,1)\}$ is the set of partitions of $n=4$. The fourth row in the above table shows that there are no partitions maximally contained in $\delta_1$ or $\delta_2$; there are $3$ partitions maximally contained in $\delta_3$; there are $2$ partitions maximally contained in $\delta_4$. Thus, the fourth row is $0, 0, 3, 2$. \smallskip \noindent On the other hand, the multiset of $x$-ray lists of $n=4$ with zero entries omitted is $\{1111, 121, 121, 121, 1111\}$. There are no $x$-ray lists of length $1$ or $2$; there are $3$ $x$-ray lists of length $3$; there are $2$ $x$-ray lists of length $4$. This data is precisely the fourth row in the above table: $0, 0, 3, 2$. \end{example} \noindent This agreement is not accidental as shown in the next theorem. By the {\it length} of an $x$-ray list, we mean the number of non-zero entries in the list. \begin{theorem} \label{biject-triangle} For $n\geq1$ and $1\leq r\leq n$, the number of partitions of $n$ with $x$-ray list of length $r$ equals the number of partitions of $n$ maximally contained in $\delta_r$. \end{theorem} \begin{proof} Let $\lambda\vdash n$ and suppose its $x$-ray list, $\lambda_x$, has length $r$. From Lemma \ref{x-ray-lemma}, the first $r$ anti-diagonals in the Young diagram of $\lambda$ are non-empty while the $(r+1)^{st}$ anti-diagonal is empty. Since $(\delta_r)_x=(1,2,3,\dots,r)$, it follows that $\lambda$ is maximally contained in $\delta_r$. \smallskip \noindent Conversely, if $\lambda$ is maximally contained in $\delta_r$, then the $(r+1)^{st}$ anti-diagonal of $\lambda$ is empty and the $r^{th}$ anti-diagonal is non-empty.
Again, by Lemma \ref{x-ray-lemma}, the $x$-ray list $\lambda_x$ has length $r$. \end{proof} \smallskip
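Theorem \ref{x-ray-thm} is easy to check by brute force for small $n$. The Python sketch below (the helper names \texttt{partitions} and \texttt{x\_ray} are ours) builds $x$-ray lists directly from the Ferrers-matrix definition and compares the number of distinct lists with the number of partitions into distinct parts.

```python
def partitions(n, largest=None):
    """Generate all partitions of n as non-increasing tuples."""
    largest = n if largest is None else largest
    if n == 0:
        yield ()
        return
    for first in range(min(n, largest), 0, -1):
        for rest in partitions(n - first, first):
            yield (first,) + rest

def x_ray(lam):
    """Anti-diagonal sums of the Ferrers matrix of lam, trailing zeros trimmed.
    Cell (i, j) lies on anti-diagonal d = i + j - 1 and is filled iff j <= lam[i-1]."""
    m = max(lam[0], len(lam))
    sums = [sum(1 for i in range(1, min(d, len(lam)) + 1) if lam[i - 1] >= d - i + 1)
            for d in range(1, 2 * m)]
    while sums and sums[-1] == 0:
        sums.pop()
    return tuple(sums)

# Theorem: for each n, the number of distinct x-ray lists equals the
# number of partitions of n into distinct parts.
for n in range(1, 11):
    x_rays = {x_ray(p) for p in partitions(n)}
    distinct = sum(1 for p in partitions(n) if len(set(p)) == len(p))
    assert len(x_rays) == distinct
```

For $\lambda=(4,3,1)$, \texttt{x\_ray} returns $(1,2,3,2)$, i.e., $\lambda_x$ from Example \ref{eg 3} with the trailing zeros trimmed; the invariance $\lambda_x=\lambda'_x$ under conjugation can be checked the same way.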
\section{Introduction} \IEEEPARstart{H}{umans} have shown great enthusiasm for Mars. The history of human research on Mars dates back to the 1960s. So far, more than 30 rovers have been dispatched to the red planet, and the increasing amount of available data promotes the application and development of deep learning algorithms. Deep-learning-based methods have already assisted in prioritizing data selection~\cite{qiu2020scoti}, collecting data, and analyzing data~\cite{goesmann2017mars,dapoian2021science,priyadarshini2021mars}. This paper explores the task of Mars terrain segmentation, which aims to identify the drivable areas and the specific terrains. It is of great significance to obstacle avoidance, traversability estimation, data collection, and path planning~\cite{gonzalez2018deepterramechanics,dimastrogiovanni2020terrain}, ensuring the safety and productivity of ongoing and future missions to Mars. \zjh{Mars terrain segmentation faces problems from both data and method design. First, the lack of satisfactory and available data hinders the development of deep learning methods to some extent. On the one hand, because of the high cost of Mars rovers and the limited bandwidth and transmission losses between Mars and Earth, collecting Martian data is very expensive. On the other hand, due to the complexity and similarity of the terrain, data annotation is highly specialized and time-consuming. Accordingly, previous datasets~\cite{schwenzer2019labelmars,AI4Mars} are not satisfactory because of the low-quality annotations or the roughly defined categories. AI4Mars~\cite{AI4Mars}, a newly published Mars terrain segmentation dataset, only defines four coarse categories, which are insufficient for identifying complex terrains. Besides, some datasets~\cite{schwenzer2019labelmars,AI4Mars} collected through crowdsourcing often do not have satisfactory annotation quality due to inconsistent annotation standards.
} \zjh{From a methodological point of view, the existing methods rely too heavily on large amounts of training data and lack targeted, effective designs. Early works generally defined the set of terrain categories in advance and then directly applied a certain machine learning algorithm such as Support Vector Machines (SVM)~\cite{dimastrogiovanni2020terrain}. With the rapid development of deep learning, methods based on deep neural networks, including but not limited to Convolutional Neural Networks (CNN), have been proposed to solve the terrain segmentation task~\cite{gonzalez2018deepterramechanics,AI4Mars,rothrock2016spoc}, greatly improving the segmentation performance. However, most existing deep-learning-based methods still rely on standard supervised learning pipelines that require large amounts of high-quality labeled data, which are often difficult to obtain. Besides, some works directly migrate segmentation frameworks designed for Earth without sufficient consideration of the characteristics of Martian data. Goh \textit{et al}.~\cite{less} alleviate the data dependency problem by applying an existing contrastive-learning-based method~\cite{SimCLR}. However, this approach does not take into account the similarity between different terrains in Martian images, which makes the contrastive learning framework less effective. } In summary, there are two main challenges in the Mars terrain segmentation task: 1) the lack of data with sufficiently detailed and high-confidence annotations, and 2) the over-reliance of existing methods on annotated training data. \zjh{We address the above problems from the perspective of \textit{both data and method design}, with an approach named \textbf{S}elf-\textbf{S}upervised and \textbf{S}emi-\textbf{S}upervised \textbf{S}egmentation for \textbf{Mars} (\textbf{S$^{5}$Mars}). We first create a new high-quality, fine-grained labeled dataset for Mars terrain segmentation.
Our dataset contains 6K high-resolution images captured on the surface of Mars, each of which is annotated by a professional team. There are 9 categories defined in our dataset, including sky, ridge, soil, sand, bedrock, rock, rover, trace, and hole. To improve the quality of labels, the dataset adopts a sparse labeling style: only areas with high human confidence are annotated. } \zjh{To learn from this sparse data, we propose a representation learning-based Martian semantic segmentation framework. In general, we first pre-train the model on pre-designed auxiliary prediction tasks (known as pretext tasks) to obtain strong feature representations, and then fine-tune the model on the downstream task in a semi-supervised manner to make full use of the unlabeled areas in the dataset. In this way, we reduce the dependence of model training on a large amount of annotated data.} \zjh{There is a large literature on self-supervised learning. However, most existing methods are not designed for Mars image data and fail to consider its special characteristics. Moreover, many approaches target instance-level prediction problems like image classification, which is sub-optimal for dense prediction tasks like segmentation. Therefore, we design a pixel-level pretext task to bridge this gap. Specifically, our method is based on masked image modeling (MIM), which is inspired by masked language modeling~\cite{BERT,liu2019roberta} in natural language processing. MIM allows the model to learn a visual representation by predicting the raw pixels~\cite{pathak2016context,he2021masked,xie2021simmim} or designed features~\cite{wei2021masked,beit,tan2021vimpac} of the masked image. However, because of the similar colors of images on Mars, predicting raw pixels alone causes the network to output roughly the average of the surrounding pixels and ignore the high-frequency textures, making the pretext task easier and less effective.
It is observed that texture information plays an important role in Mars image segmentation. For example, the sky, soil, and sand all have similar color and spatial arrangement, while they can be distinguished by their different textures, such as the relative proportion of grain sizes, roughness, and bumpiness. Therefore, we introduce a new pretext task, guiding the network to jointly model the low-frequency (color) and high-frequency (texture) information through a multi-task mechanism.} \zjh{In the fine-tuning stage, since our dataset is sparsely annotated and the areas hard to distinguish in the image are not labeled, we propose a semi-supervised learning method based on uncertainty to further improve performance. A pseudo-label-based approach is designed to take full advantage of the information in unlabeled areas. After our self-supervised pre-training, we fine-tune the model with ground-truth labels on labeled areas and pseudo-labels generated online on unlabeled areas. Furthermore, we exploit the task uncertainty to improve the quality of pseudo-labels. Experimental results demonstrate that our method substantially improves performance on Mars imagery segmentation.} \zjh{Our contributions can be summed up as follows: \begin{itemize} \item We collect a new fine-grained labeled Mars dataset for terrain semantic segmentation, which contains a large amount of Martian geomorphological data. Our dataset is sparsely annotated by a professional team with multiple rounds of inspection and rework. The high-quality dataset can provide accurate and rich segmentation guidance. \item We propose a self-supervised multi-task learning approach to improve the performance of Mars imagery segmentation, in which the network can learn strong representations by explicitly modeling both the low-frequency color feature and high-frequency texture feature of the input image.
It enables self-supervised learning to extract more useful and comprehensive information, providing better initialization for downstream tasks. \item An uncertainty-based semi-supervised training strategy is introduced to make full use of unlabeled Mars image areas. We exploit task uncertainty to generate confident pseudo-labels, reducing the noise they introduce. Semi-supervision employs more data to improve the generalization ability of the model without compromising its feature representation ability through added noise. \end{itemize} } {The rest of this article is organized as follows. In Section~\ref{sec:related}, we provide a detailed survey on Martian datasets and a brief review of deep learning for Mars. Section~\ref{sec:dataset} introduces our Mars segmentation dataset. Section~\ref{sec:method1} and Section~\ref{sec:method2} describe our framework for Mars semantic segmentation. Experimental results are shown in Section~\ref{sec:exp_selfsl} and Section~\ref{sec:exp_semisl}.
The conclusion is finally given in Section~\ref{sec:conclusion}.} \section{Related Works} \label{sec:related} \begin{table*}[] \centering \caption{Summary of Mars terrain-aware datasets.} \label{tab:mars_summary} \renewcommand\arraystretch{1.2} { \setlength{\tabcolsep}{4mm}{ \begin{tabular}{c|c|lllll} \toprule \multicolumn{1}{c|}{Type} & Source & Dataset & Scale & Classes & \multicolumn{2}{l}{Description} \\ \hline \multirow{12}{*}{Real} & \multirow{9}{*}{Curiosity rover} & \multicolumn{1}{l}{\multirow{2}{*}{\cite{rothrock2016spoc}}} & 5k & - & \multicolumn{2}{l}{Wheel slip and slope angles prediction} \\ \cline{4-7} & & \multicolumn{1}{l}{} & 700 & 6 & \multicolumn{2}{l}{Terrain segmentation} \\ \cline{3-7} & & \cite{gonzalez2018deepterramechanics} & 300 & 3 & \multicolumn{2}{l}{Terrain classification} \\ \cline{3-7} & & \cite{li2020autonomous} & 620 & 4 & \multicolumn{2}{l}{Terrain classification} \\ \cline{3-7} & & \cite{WagstaffLSGGP18} & 6k & 24 & \multicolumn{2}{l}{Terrain classification} \\ \cline{3-7} & & \cite{xiao2021kernel} & 405 & - & \multicolumn{2}{l}{Rock detection} \\ \cline{3-7} & & \cite{qiu2020scoti} & 1k & - & \multicolumn{2}{l}{Image description} \\ \cline{3-7} & & \cite{kerner2018context} & 310k & - & \multicolumn{2}{l}{\begin{tabular}[c]{@{}l@{}}Compressed image quality evaluation\\ with automatic labeling\end{tabular}} \\ \cline{2-7} & Opportunity, Spirit rovers & \cite{thompson2007performance} & 117 & - & \multicolumn{2}{l}{Rock detection} \\ \cline{2-7} & \multirow{4}{*}{\begin{tabular}[c]{@{}c@{}}Curiosity, Opportunity, \\ Spirit rovers\end{tabular}} & \cite{thompson2012smart} & 46 & 2 & \multicolumn{2}{l}{Terrain segmentation} \\ \cline{3-7} & & \cite{AI4Mars} & 35k & 4 & \multicolumn{2}{l}{Terrain segmentation} \\ \cline{3-7} & & \cite{li2022stepwise} & 5k & 9 & \multicolumn{2}{l}{Terrain segmentation} \\ \cline{3-7} & & \cite{schwenzer2019labelmars} & 5k & 6 (17 sub) & \multicolumn{2}{l}{Terrain segmentation} \\ \hline \begin{tabular}[c]{@{}c@{}}Real +
Synthetic\end{tabular} & Curiosity rover & \cite{wilhelm2020domars16k} & 30k & 5 & \multicolumn{2}{l}{Terrain classification} \\ \hline Synthetic & ROAMS rover simulator & \cite{thompson2007performance} & 55 & - & \multicolumn{2}{l}{Rock detection} \\ \hline \multirow{6}{*}{Simulation field} & \begin{tabular}[c]{@{}c@{}}Atacama Desert\\ Zoë rover prototype\end{tabular} & \cite{niekum2005reliable} & 30 & - & \multicolumn{2}{l}{Rock detection} \\ \cline{2-7} & {\begin{tabular}[c]{@{}c@{}}JPL Mars Yard \\FIDO rover Platform \end{tabular}} & \multirow{1}{*}{\cite{thompson2007performance}} & \multirow{1}{*}{35} & \multirow{1}{*}{-} & \multicolumn{2}{l}{\multirow{1}{*}{Rock detection}} \\ \cline{2-7} & {\begin{tabular}[c]{@{}c@{}}JPL Mars Yard \\Athena rover Platform\end{tabular}} & \cite{higa2019vision} & 91k & - & \multicolumn{2}{l}{Rover energy consumption} \\ \cline{2-7} & Devon Island & \cite{furlan2019rock} & 400 & - & \multicolumn{2}{l}{Rock detection} \\ \hline \multirow{2}{*}{\begin{tabular}[c]{@{}c@{}}Real +\\ Simulation field\end{tabular}} & \multirow{2}{*}{Opportunity, Spirit rovers} & \multirow{2}{*}{\cite{xiao2017autonomous}} & \multirow{2}{*}{36} & \multirow{2}{*}{2} & \multicolumn{2}{l}{\multirow{2}{*}{Terrain segmentation}} \\ & & & & & \multicolumn{2}{l}{} \\ \bottomrule \end{tabular} }} \end{table*} \subsection{Deep Learning for Mars} \zjh{With the increasing amount of available data and the rapid development of computing power, deep learning is playing an increasingly important role in Mars exploration. For many reasons such as limited computing resources, existing deep learning methods are usually ex-situ (Earth edge). For terrain identification, Deep Mars~\cite{WagstaffLSGGP18} trains an AlexNet to classify engineering-focused rover images (\textit{e}.\textit{g}., those of rover wheels and drill holes) and orbital images. However, it can only recognize one object in a single image. 
The Soil Property and Object Classification (SPOC)~\cite{rothrock2016spoc} model segments the Mars terrain in an image using a fully convolutional neural network. Swan \textit{et al}.~\cite{AI4Mars} collect a terrain segmentation dataset and evaluate performance using DeepLabv3+~\cite{DeeplabV3_plus}. Considering the dependence of existing methods on large amounts of data, \cite{less} utilizes a self-supervised method and trains the model on fewer labeled images. For other tasks, Zhang \textit{et al}.~\cite{zhang2018novel} address the Mars visual navigation problem with a deep neural network that finds the optimal path to the target point directly from the global Martian environment. Meanwhile, intrigued by the vision of autonomous probes that rely on deep learning even without a human in the loop, scientists are studying the potential of implementing in-situ (Mars edge) deep learning algorithms on high-performance chips~\cite{ono2020maars}. For example, the Scientific Captioning Of Terrain Images (SCOTI)~\cite{qiu2020scoti} model, based on LSTM, automatically creates captions for pictures of the Martian surface, which helps selectively transfer more valuable data within downlink bandwidth limitations. For energy-optimal driving, Higa \textit{et al}.~\cite{higa2019vision} propose to predict energy consumption from images based on PNASNet-5~\cite{liu2018progressive}. It is foreseeable that deep learning will play an irreplaceable role in future Mars exploration. However, it also requires large amounts of annotated training data, which are often hard to obtain. In this paper, we present a powerful self- and semi-supervised learning framework, which can learn a good visual representation from large amounts of unlabeled data, to address the terrain segmentation task.} \subsection{Datasets for Mars Vision} Datasets are the basis for developing intelligent algorithms.
At present, there are various datasets of planetary surfaces, such as the digitally simulated Lunar landscape segmentation dataset ALLD and the Mars satellite image dataset DoMars16k~\cite{wilhelm2020domars16k}. As for Mars, the commonly used terrain-aware datasets can be divided into three categories: real data captured by rovers, synthetic data, and data captured in Earth-based simulation fields. Rover data are captured by instruments on rovers that have landed on Mars. The number of rovers sent to Mars will gradually increase along with the progress of space research; however, the amount of data available now is still relatively limited. Synthesizing Mars datasets by means of digital modeling, simulation, or adversarial learning is an important data supplement, but such data can differ greatly from real Mars data. Capturing data in Earth-based simulation fields requires building a simulation platform or finding a Mars-like landscape on Earth, which is difficult to implement. The current Mars terrain-aware datasets are summarized in Table~\ref{tab:mars_summary}. A large proportion of them contain fewer than 1,000 images, which cannot meet the training needs of machine learning models. The richness of Mars terrain-aware datasets still needs to be improved.
\begin{figure*} \centering \subfigure[Sky]{ \begin{minipage}[b]{0.26\textwidth} \centering \includegraphics[width=1\textwidth]{figures/dataset/sky.png} \end{minipage}} \hspace{0.4pt} \subfigure[Ridge]{ \begin{minipage}[b]{0.26\textwidth} \centering \includegraphics[width=1\textwidth]{figures/dataset/ridge.png} \end{minipage}} \hspace{0.4pt} \subfigure[Soil]{ \begin{minipage}[b]{0.26\textwidth} \centering \includegraphics[width=1\textwidth]{figures/dataset/soil.png} \end{minipage} } \vspace{-5pt} \subfigure[Sand]{ \begin{minipage}[b]{0.26\textwidth} \centering \includegraphics[width=1\textwidth]{figures/dataset/sand.png} \end{minipage} } \subfigure[Bedrock]{ \begin{minipage}[b]{0.26\textwidth} \centering \includegraphics[width=1\textwidth]{figures/dataset/bedrock.png} \end{minipage} } \subfigure[Rock]{ \begin{minipage}[b]{0.26\textwidth} \centering \includegraphics[width=1\textwidth]{figures/dataset/rock.png} \end{minipage} } \vspace{-5pt} \subfigure[Rover]{ \begin{minipage}[b]{0.26\textwidth} \centering \includegraphics[width=1\textwidth]{figures/dataset/rover.png} \end{minipage} } \subfigure[Trace]{ \begin{minipage}[b]{0.26\textwidth} \centering \includegraphics[width=1\textwidth]{figures/dataset/trace.png} \end{minipage} } \subfigure[Hole]{ \begin{minipage}[b]{0.26\textwidth} \centering \includegraphics[width=1\textwidth]{figures/dataset/hole.png} \end{minipage} } \caption{Examples for each label category (highlighted in red).} \label{fig:example} \end{figure*} \vspace{-0.5em} \subsection{Self-Supervised and Semi-Supervised Learning} \zjh{Self-supervised learning aims to learn a robust feature space from unlabeled visual data by pretext tasks, usually constructed by image operations~\cite{caron2018deep,doersch2015unsupervised,pathak2016context,noroozi2016unsupervised} or spatiotemporal operations~\cite{fernando2017self,misra2016shuffle,pathak2017learning}. It alleviates the need for expensive annotations and enables representation learning from unlabeled data. 
A common approach is to pre-train the network on a large dataset such as ImageNet~\cite{ImageNet} using a standard classification task and fine-tune it on a small dataset for a certain downstream task~\cite{WagstaffLSGGP18}. However, this method suffers from domain shift caused by differences in image properties between the pre-training data and the fine-tuning data. Another recently popular method is contrastive learning~\cite{dosovitskiy2015discriminative,MoCo,MoCoV2,BYOL,SimCLR}, which capitalizes on augmentation invariance. By extending the distance between negative samples while narrowing the distance between positive samples, the model learns a more separable feature space. However, most of the above methods are designed for instance-level classification and can be sub-optimal for dense prediction tasks like segmentation. Recently, masked image modeling~\cite{chen2020generative,pathak2016context,doersch2015unsupervised,he2021masked,wei2021masked,xie2021simmim} has achieved surprising results in self-supervised learning and provides a natural pixel-level pretext task. We explore its potential for Mars image semantic segmentation in this paper.} \zjh{Semi-supervised learning~\cite{ouali2020overview,van2020survey} utilizes the manifold structure of unlabeled data to assist learning with labeled data. The pseudo-label method~\cite{xie2020self} assigns pseudo-labels to unlabeled data through a classifier trained on supervised data, thereby extracting the semantic information of the unlabeled data. To reduce the noise of pseudo-labels and improve their quality, Zou \textit{et al}.~\cite{zou2019confidence} use label regularization to select labels with high confidence. To correct the model's own pseudo-label mistakes, Mugnai \textit{et al}.~\cite{mugnai2022fine} propose a Gradient Reversal Layer (GRL) to refine the labels.
However, the main downside of such methods is the low quality of the pseudo-labels, which can seriously interfere with network training. We address this issue by introducing uncertainty estimation into the pseudo-label selection: only pseudo-labels with high confidence are retained for training.} \section{Proposed Mars Imagery Segmentation Dataset} \label{sec:dataset} \zjh{To address the scarcity of available training data for deep learning, } we create a fine-grained labeled Mars dataset for the exploration of the Mars surface, which can guide rovers and support space research missions. In the following, we refer to our dataset as S$^{5}$Mars for clarity. The dataset includes 6,000 high-resolution images taken on the surface of Mars by the color Mast Camera (Mastcam) on the Curiosity rover (MSL). The spatial resolution of the RGB images in this dataset is 1200 $\times$ 1200. We randomly divide the dataset into training, validation, and test sets. The training set contains 5,000 images, while the validation set and the test set contain 200 and 800 images, respectively. \begin{figure} \subfigure[Dist. of the number of labels]{ \begin{minipage}[b]{0.305\textwidth} \includegraphics[width=0.9\textwidth]{figures/dataset/number_count_v2.pdf} \label{fig:number_dis} \end{minipage}} \subfigure[Dist. of label area]{ \begin{minipage}[b]{0.16\textwidth} \includegraphics[width=1\textwidth]{figures/dataset/area_analysis_v2.pdf} \vspace{-3pt} \label{fig:area_dis} \end{minipage}} \caption{Numerical statistics on our S$^{5}$Mars dataset. The figures show the richness of the categories contained in the images from two aspects: distribution of the number of labels and distribution of label area.} \label{fig:statistic} \end{figure} \subsection{Labeling Process} We annotate the dataset at the pixel level in a deterministic sparse labeling style. There are 9 label categories: sky, ridge, soil, sand, bedrock, rock, rover, trace, and hole.
Examples of each category are shown in Fig.~\ref{fig:example}. The labeling criteria are as follows: \begin{itemize} \item \textbf{Sky.} The Martian sky, often at the top of a distant-view image, bounded by the upper edge of a mountain or the horizon. \item \textbf{Ridge.} Distant peaks bounded by the sky above and the horizon below. \item \textbf{Soil.} Unconsolidated or poorly consolidated weathered material on the surface of Mars, with relatively large, coarse grains containing small stones. \item \textbf{Sand.} Granular material that is more fluid and less viscous, sometimes with windward and leeward sides, and usually with sand ridges. \item \textbf{Bedrock.} Rock partially covered by soil and buried at varying depths. \item \textbf{Rock.} A stone that is completely exposed on the ground and is roughly lumpy or oval in shape, usually with distinct shadows. \item \textbf{Rover.} The rover itself. \item \textbf{Trace.} The trace left by the rover as it passed over the ground. \item \textbf{Hole.} The hole left by the rover during its sampling operations on Mars, together with the surrounding soil of different colors. \end{itemize} \begin{figure} \centering \subfigure[Visualization on S$^{5}$Mars]{ \begin{minipage}[b]{0.23\textwidth} \centering \includegraphics[width=1\textwidth]{figures/dataset/pkumars_all_visual.pdf} \end{minipage}} \subfigure[Visualization on AI4Mars]{ \begin{minipage}[b]{0.23\textwidth} \centering \includegraphics[width=1\textwidth]{figures/dataset/ai4mars_visual.pdf} \end{minipage}} \caption{Visualization of pixel-level feature distribution on S$^{5}$Mars and AI4Mars. Features are extracted by a pre-trained Swin backbone and visualized with t-SNE.} \label{fig:tshe} \end{figure} Martian surface conditions are complicated due to the harsh and volatile Martian environment. Terrain types can mix and overlap with each other, and it becomes hard for humans to distinguish the correct categories clearly.
Considering this situation, we apply sparse labeling, which ensures that only pixels with sufficient human confidence are labeled. The overall annotation priority follows a coarse-to-fine manner, which means we label each image in order of object size. In addition, the trace left by the rover is assigned a higher priority since it appears relatively infrequently. As for the annotation process, the annotation rules were discussed more than ten times, taking each annotator's feedback into account to ensure consistency and precision. Each annotation result passes more than two rounds of quality inspection. Annotation work is carried out by a professional team; 90\% of the annotators have been engaged in such annotation work more than six times and are experienced. The age distribution of the annotators ranges from 18 to 37 years, with an average age of about 24. The annotation time for each terrain image is about 30 minutes. \subsection{Comparison and Analysis} We perform a statistical analysis of the semantic labels in the dataset, as shown in Fig.~\ref{fig:statistic}. We show the distribution of the number of labeled categories contained in each image in Fig.~\ref{fig:number_dis}. Most images are relatively complex, with three or four annotated categories in one scene. This distribution is consistent across the training, validation, and test sets. We also compute the distribution of label area for each category, as shown in Fig.~\ref{fig:area_dis}. There are 2,730 distant-view images in the dataset, which contain the sky and the horizon line. Bedrock has the largest annotated area, followed by ridge. Rocks appear in most of the images in the dataset, but their total area is small.
The artificial categories, \textit{e}.\textit{g}., rover, trace, and hole, account for a small portion of the labeled area, but they have a greater variety of shapes and are crucial to observation and judgment systems for intelligence research on Mars. In that sense, the distribution of the dataset offers new challenges for future research, since it tests the generalization and robustness of research methods on long-tail data. We provide a visualization of the pixel-level feature distribution on S$^{5}$Mars, comparing with AI4Mars~\cite{AI4Mars}, a labeled dataset of the same type, as shown in Fig.~\ref{fig:tshe}. The features are extracted from random samples with a Swin~\cite{swin} segmentation model. We take the features from the second-to-last layer of the pre-trained backbone and visualize them with the t-SNE dimensionality reduction algorithm. For fairness of comparison, the Swin backbone is not fine-tuned on any Mars data. It can be observed that S$^{5}$Mars shows a more separable feature distribution. Categories such as sky, trace, hole, and rover are highly distinguishable from the other data. Categories like soil, rock, and sand also show significant aggregation. AI4Mars contains 4 label categories, and the separability of each category is poor, especially between bedrock and soil. Since AI4Mars is a crowdsourcing project, although the number of submissions is large, the annotators may have inconsistent understandings of the labeling standards. This reflects the importance of establishing clear labeling criteria and professional training for annotators. Mars-Seg~\cite{li2022stepwise} is also a public Mars terrain segmentation dataset. It has 1,064 high-resolution grayscale images and 4,184 RGB images with a spatial resolution of $560 \times 500$, while S$^{5}$Mars is composed of high-resolution RGB images, which offer more accurate and abundant high-frequency information and texture details for detection and segmentation tasks.
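As a rough sketch of how such a visualization can be produced (assuming scikit-learn is available; the feature matrix and category ids below are random stand-ins for the pooled backbone features, not real data):

```python
import numpy as np
from sklearn.manifold import TSNE

# Stand-ins for pixel-level features taken from the second-to-last layer of a
# pre-trained backbone, together with their terrain category ids (9 classes).
rng = np.random.default_rng(0)
features = rng.normal(size=(300, 64)).astype(np.float32)
labels = rng.integers(0, 9, size=300)

# Project the high-dimensional features to 2-D; one point per sampled pixel.
embedding = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(features)
# embedding.shape == (300, 2)
```

Each 2-D point can then be scatter-plotted and colored by its category id to obtain plots in the style of Fig.~\ref{fig:tshe}.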
The two datasets also differ in their labeling processes. Mars-Seg divides the terrain into 9 categories and labels every pixel. However, categories like gravel, sand, and rocks mix with each other, and it is inaccurate to assign such a terrain scene to any single category. S$^{5}$Mars applies high-confidence sparse labeling, in which only regions whose terrain type can be judged with high confidence are labeled. In this way, we guarantee that the labels are strongly representative of each category and reduce the label noise introduced during annotation. Considering this, the dataset offers an ideal scenario for self-supervised and semi-supervised learning research. \section{Self-Supervised Multi-Task Learning for Mars Imagery Segmentation} \label{sec:method1} \zjh{In this section, we introduce our self-supervised learning approach for the Mars imagery semantic segmentation task. To enrich the visual representation of the network, the model is first pre-trained on the pretext task using unlabeled data in a multi-task manner. In the following subsections, we first give an overview and then describe our method in more detail.} \subsection{Motivation and Overview} \zjh{Considering that our downstream task, semantic segmentation, is a dense prediction task, a pixel-level pretext task rather than an instance-level one is expected in the self-supervised learning phase. Compared to other pretext tasks such as jigsaw puzzles and rotation prediction, image inpainting, which aims to restore the masked image in RGB space, provides a natural pixel-level pretext task. It is an efficient representation learning method and has achieved remarkable results recently~\cite{he2021masked,xie2021simmim}. However, due to optimizing a mean distance, the output is usually a blurry averaged image. This is attributed to the fact that the L2 (or L1) loss often prefers a blurry and smooth solution over highly precise textures~\cite{pathak2016context}.
Therefore, for Mars data with similar colors and unclear object contours, it is difficult for the network to learn discriminative features only by predicting the low-frequency information of the image. High-frequency information such as texture, which plays an important role in terrain identification, is needed to help the model obtain stronger representation ability.} \zjh{Therefore, we propose a multi-task mechanism in the pre-training stage to explicitly guide the model to focus on both the low-frequency color features and high-frequency texture features of the image. An overview of our self-supervised architecture is shown in the upper part of Fig.~\ref{fig:architecture}. In our method, the randomly masked image is fed into the network, which is trained to predict the original image under texture constraints. Different from previous works~\cite{pathak2016context,he2021masked,wei2021masked}, whose image data usually have distinct colors and clear, mostly regular object contours, we model the masked image in a task scenario with stronger texture correlation.} \zjh{In the following, we give and discuss a general description of the multi-task modeling algorithm and several options for its implementation.} \begin{figure*}[t] \centering \includegraphics[width=0.9\linewidth]{Figure_new/archi_ieee.pdf} \caption{\zjh{The framework of our method for Mars image segmentation. The framework can be divided into two stages: the pre-training stage for representation learning guided by the self-supervised pretext task, and the semi-supervised fine-tuning stage based on pseudo-label fusion. In the self-supervised stage, we utilize raw pixel value prediction and texture feature prediction on the masked image area to make the network learn effective feature representations.
In the semi-supervised fine-tuning stage, we introduce task uncertainty to generate and select high-quality pseudo-labels, making full use of the supervised information in the unlabeled areas of the data.}} \label{fig:architecture} \end{figure*} \subsection{Multi-Task and Joint Learning}\label{sec:MJ} \zjh{\textbf{Multi-task.} Inspired by \cite{wei2021masked}, we explore the potential of introducing traditional operators into existing deep learning frameworks. In our method, a simple yet efficient handcrafted texture descriptor, the Local Binary Pattern (LBP)~\cite{LBP}, is adopted as an additional prediction target to better guide the network to learn texture features. The network is required to predict the raw pixels of the masked regions in the input image and to simultaneously optimize the LBP prediction task.} \zjh{LBP~\cite{LBP} is a popular texture operator with discriminative power and computational simplicity. It labels the pixels of a grayscale image by thresholding the neighborhood of each pixel and considering the result as a binary number. The values of the pixels in the thresholded neighborhood are multiplied by different weights and summed to obtain the final LBP value. LBP can be seen as a simple Bag-of-Words (BoW) descriptor~\cite{liu2019bow}, in which each value corresponds to a word. It can detect microstructures (\textit{e}.\textit{g}., edges, lines, spots, flat areas) whose underlying distribution is estimated by the computed occurrence histogram~\cite{ULBP}. Due to its simplicity and efficiency, LBP is widely used in computer vision problems such as texture analysis and face recognition~\cite{ahonen2006face,ahonen2004face}. Many extensions of LBP~\cite{pietikainen2000rotation,ULBP,zhang2005local,zhao2007dynamic,ojansivu2008rotation,liu2016median} have been proposed to improve its performance. In our method, an improved version introducing the ``uniform pattern'' (ULBP)~\cite{ULBP} is used.
On the one hand, it drastically reduces the feature dimensionality by grouping the ``nonuniform'' patterns under one label, making the loss efficient to calculate. On the other hand, it provides a few useful properties such as rotation and grayscale invariance, which make the feature more robust to common variations. To verify the effectiveness of the operator, we conduct a simple experiment on the S$^{5}$Mars dataset. The features extracted by different descriptors are fed into an SVM classifier, and the output is the terrain category of the corresponding region. We consider accuracy, which reflects representation ability, and throughput, which reflects computational efficiency. The results are shown in Fig.~\ref{fig:descriptor}. As we can see, ULBP (hereinafter referred to as LBP) performs well while maintaining high throughput. MR8 (Maximum Response Filters)~\cite{hayman2004significance}, which is also a filter-bank-based texture operator, achieves the highest accuracy among these descriptors; however, its high computational complexity is unsatisfactory. We therefore adopt the LBP operator to assist the model in modeling texture information. \begin{figure}[t] \centering \includegraphics[width=0.8\linewidth]{Figure_new/descri_ieee.pdf} \caption{\zjh{The results of different descriptors on the S$^{5}$Mars dataset. Features extracted from the input image are mapped to terrain categories through an SVM classifier. MR8~\cite{hayman2004significance}, ULBP~\cite{ULBP}, SIFT~\cite{lowe2004distinctive}, HOG~\cite{dalal2005histograms}, and the color histogram are adopted, and we consider their recognition accuracy and throughput. The pink arrow represents the tendency towards higher accuracy and longer computation time.}} \label{fig:descriptor} \end{figure} \zjh{In the implementation, we first calculate the LBP map of the original image, and then we divide the LBP map into different patches and compute the occurrence histograms separately.
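To make the target construction concrete, a minimal NumPy sketch of a rotation-invariant uniform LBP (8 neighbors at radius 1) and its per-patch occurrence histograms might look as follows; the image size, patch size, and pattern count here are illustrative rather than our exact configuration:

```python
import numpy as np

def ulbp_map(gray):
    """Rotation-invariant uniform LBP with 8 neighbours at radius 1.

    Uniform codes (at most 2 circular 0/1 transitions) are labeled by their
    number of 1-bits (0..8); all non-uniform codes share label 9, giving 10
    patterns in total."""
    g = np.asarray(gray, dtype=np.float32)
    c = g[1:-1, 1:-1]  # center pixels (image border dropped)
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]  # neighbours in circular order
    bits = np.stack([(g[1 + dy:g.shape[0] - 1 + dy,
                        1 + dx:g.shape[1] - 1 + dx] >= c) for dy, dx in shifts])
    transitions = np.sum(bits != np.roll(bits, 1, axis=0), axis=0)
    ones = bits.sum(axis=0)
    return np.where(transitions <= 2, ones, 9).astype(np.int64)

def patch_histograms(lbp, patch=8, n_patterns=10):
    """Occurrence histogram of LBP patterns per patch, unit-normalized."""
    rows, cols = lbp.shape[0] // patch, lbp.shape[1] // patch
    hists = np.zeros((n_patterns, rows, cols), dtype=np.float32)
    for i in range(rows):
        for j in range(cols):
            block = lbp[i * patch:(i + 1) * patch, j * patch:(j + 1) * patch]
            h = np.bincount(block.ravel(), minlength=n_patterns).astype(np.float32)
            hists[:, i, j] = h / (np.linalg.norm(h) + 1e-8)
    return hists

img = np.random.default_rng(0).integers(0, 256, size=(34, 34))
lbp = ulbp_map(img)                 # shape (32, 32), values in 0..9
s = patch_histograms(lbp, patch=8)  # target histogram, shape (10, 4, 4)
```

Note how a flat region collapses into a single pattern while edges spread mass across several bins, which is exactly the texture statistic the pretext task asks the decoder to recover.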
After being normalized to unit length, the LBP histogram $\mathbf{s}\in\mathbb{R}^{C_{p}\times{H}\times{W}}$ is obtained as one of our targets, where $C_{p}$ is the number of defined LBP patterns. We constrain the network to predict the LBP histogram and the RGB values of the masked region simultaneously.} \zjh{\noindent\textbf{Joint Learning.} After being encoded by the backbone network, the feature map of the image is fed into two decoders to predict the corresponding targets. Note that the loss is calculated only on the masked region; the reconstruction of the visible area is not involved in the calculation. We use the L2 loss to minimize the distance between the prediction and the ground truth.} \zjh{Specifically, given an input image $\mathbf{x}$, the corresponding LBP histogram $\mathbf{s}$, and a binary mask $\mathbf{M}$ in which $1$ represents an invalid (masked) pixel, the overall loss function $\mathcal{L}_{pre-train}$ can be defined as: \begin{equation} \mathcal{L}_{inp} = \left\| \left( \mathbf{g} \left( \mathbf{f} \left( \mathbf{x}\odot{ \left( \mathbf{1 - M}\right)} \right) \right) - \mathbf{x} \right)\odot \mathbf{M} \right\|_2, \end{equation} \begin{equation} \mathcal{L}_{lbp} = \left\| \left( \mathbf{h}\left(\mathbf{f}\left(\mathbf{x}\odot{\left(\mathbf{1 - M}\right)}\right)\right) - \mathbf{s}\right) \odot \mathbf{M} \right\|_2, \end{equation} \begin{equation} \mathcal{L}_{pre-train} = \lambda_{inp} \mathcal{L}_{inp} + \lambda_{lbp} \mathcal{L}_{lbp}, \end{equation} where $\mathbf{f}(\cdot)$ is the encoder, and $\mathbf{g}(\cdot)$, $\mathbf{h}(\cdot)$ represent the decoders for RGB and LBP prediction, respectively. $\mathbf{1}$ stands for an all-ones matrix and $\odot$ is the element-wise multiplication operator.} \subsection{Masking Strategy} \zjh{We apply a masking operation, as shown in Fig.~\ref{fig:mask_type}, to the original image to generate the input.
For the masking strategy, we consider the following options:} \zjh{\noindent\textbf{Rectangular Random Masking.} A very simple implementation is to generate a rectangular mask directly at the center of the image~\cite{pathak2016context}. However, the features learned by the network are usually difficult to generalize to downstream tasks due to the lack of randomness. Hence, we extend this scheme to generate the rectangular mask at a random position.} \zjh{\noindent\textbf{Patch-wise Random Masking.} We also follow recent Transformer-based works~\cite{he2021masked,wei2021masked,xie2021simmim} in dividing the whole image into non-overlapping patches. Patches to be masked are randomly sampled in a certain proportion. In our experiments, the patch size is 32 $\times$ 32.} \zjh{\noindent\textbf{Free-form Random Masking.} Yu \textit{et al}.~\cite{yu2019free} introduce an efficient and controllable algorithm to generate free-form masks, as shown in Fig.~\ref{fig:mask_type}. It can flexibly generate a variety of line-shaped masks by controlling the thickness and the number of generated lines. The diversity of masks alleviates the over-fitting problem to a certain extent.} \begin{figure}[t] \centering \includegraphics[width=0.90\linewidth]{Figure_new/mask_type_crop.pdf} \caption{\zjh{Different mask strategies. From left to right: (a) Original image, (b) Rectangular random mask, (c) Patch-wise random mask, (d) Free-form random mask.}} \label{fig:mask_type} \end{figure} \section{Semi-Supervised Learning for Mars Imagery Segmentation} \label{sec:method2} The previous section mentions that our dataset has two properties: (1) sparse annotation and (2) high-confidence annotation. To exploit these two properties to aid training, we propose an uncertainty-based semi-supervised learning strategy that generates pseudo-labels for the unlabeled areas in each image, thereby exploiting the information of unlabeled pixels.
In addition, we propose a selection mechanism that adds only pixels with low uncertainty to the training set. \subsection{Task-Uncertainty-Based Pseudo Labeling} In this section, we describe how to estimate the uncertainty in the data, \textit{i}.\textit{e}., the task uncertainty. Task uncertainty is the noise introduced during labeling, which is related to the downstream task. For segmentation, pixels on object boundaries have higher task uncertainty because their segmentation labels are more difficult to predict. Similar objects, such as rocks and bedrock, are easily confused during labeling and therefore also carry higher task uncertainty. To reduce task uncertainty, we employ high-confidence labeling in the annotation process: our annotations avoid unclear boundaries and discard annotations for uncertain objects. This allows us to use the annotation information to estimate the task confidence of different pixels. Since the labeled data carry high confidence while the unlabeled data are difficult to annotate, pseudo-labels predicted on unlabeled areas are noisy and inaccurate. Therefore, we train a discriminator to estimate the task uncertainty by judging whether a pixel is labeled, and add only pixels with low task uncertainty to the training data. \subsection{Estimating Task Uncertainty via the Discriminator} To measure task uncertainty, we train a discriminator to predict the uncertainty of the data. Specifically, we assume that the input image is $\mathbf{x}$ and its annotated pixel data is $\mathbf{y}$. Letting $-1$ denote an unlabeled pixel, we generate targets by $\mathbf{q} = \mathbb{I}[\mathbf{y} \neq -1]$, where $\mathbb{I}[\cdot]$ is the indicator function. Because labeled and unlabeled pixels are imbalanced, and most areas of an image are often unlabeled, we employ the Dice loss to train the discriminator $\mathbf{d}(\cdot)$, which takes the output of the encoder $\mathbf{f}(\cdot)$ as input.
Let $\mathbf{p}_{h,w} = \mathbf{d}(\mathbf{f}(\mathbf{x}_{h,w}))$ be the predicted label probability of the pixel located at $(h, w)$, which represents the degree of certainty of the pixel. We formalize the loss function as: \begin{equation} \mathcal{L}_{dice} = 1 - \frac{2 \sum \mathbf{p}_{h,w} \mathbf{q}_{h,w}}{\sum \mathbf{p}_{h,w} + \sum \mathbf{q}_{h,w}}. \end{equation} \subsection{Reducing Task Uncertainty of Pseudo Labeling} In pseudo-label-based semi-supervised learning, we utilize the encoder $\mathbf{f}(\cdot)$ and classifier $\mathbf{\phi}(\cdot)$ trained on the labeled areas to predict classes for the unlabeled areas, which serve as pseudo-labels to aid training. Moreover, we employ the discriminator $\mathbf{d}(\cdot)$ to pick out from the predicted labels $\hat{\mathbf{y}}$ the entries $\hat{\mathbf{y}}_{s}$ with high confidence (\textit{i}.\textit{e}., $\mathbf{p} > t$, where $t$ is the threshold). Then, we merge the high-confidence predictions $\hat{\mathbf{y}}_{s}$ with the ground truth $\mathbf{y}$ to obtain pseudo-labels $\hat{\mathbf{y}}_{m}$. Given a sample $\mathbf{x}$ in the training dataset $\mathbf{X}$, pseudo-label semi-supervised learning based on task uncertainty can be written as: \begin{equation} \mathcal{L}_{pseudo} = - \sum_{\mathbf{x} \in \mathbf{X}} \sum_{\mathbf{f}_{h,w} \in \phi(\mathbf{f}(\mathbf{x}))} \left[ \log \frac {\exp(\mathbf{f}_{h, w}^{\hat{c}_i})} { \sum_{c_j=1}^{C} \exp(\mathbf{f}_{h, w}^{c_j})} \right], \end{equation} where $\mathbf{f}_{h, w}^{c_j}$ represents the prediction of this pixel at $(h, w)$ belonging to $c_j$, and $\hat{c}_i$ is given by the pseudo-label $\hat{\mathbf{y}}_{m}$. $C$ is the number of different categories defined in the dataset. \subsection{Full Model} \zjh{The whole model architecture is shown in Fig.~\ref{fig:architecture}. In the pre-training stage, the two tasks, RGB value prediction and LBP histogram prediction, are jointly optimized.
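As a minimal sketch (in NumPy, with hypothetical helper names), the dice loss and the certainty-thresholded merging of predictions with ground truth described above can be written as:

```python
import numpy as np

def dice_loss(p, q, eps=1e-6):
    """Dice loss between predicted certainty map p and labeled mask q (both H x W in [0, 1])."""
    return 1.0 - (2.0 * (p * q).sum() + eps) / (p.sum() + q.sum() + eps)

def merge_pseudo_labels(y, y_hat, p, t=0.9):
    """Keep ground truth where labeled (y != -1); elsewhere adopt predictions with certainty p > t.
    Pixels that remain -1 are excluded from the pseudo-label loss."""
    y_m = y.copy()
    confident = (y == -1) & (p > t)
    y_m[confident] = y_hat[confident]
    return y_m
```

Here $t$ plays the role of the certainty threshold studied later in the ablation experiments.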
We first pre-train the model as described in Section~\ref{sec:MJ}, and obtain the pre-trained model which provides the initialization for the fine-tuning stage. In fine-tuning, the optimization objective of the segmentation task is the cross-entropy loss $\mathcal{L}_{ce}$, which can be defined as: \begin{equation} \mathcal{L}_{ce} = - \sum_{\mathbf{x} \in \mathbf{X}} \sum_{\mathbf{f}_{h,w} \in \phi(\mathbf{f}(\mathbf{x}))} \left[ \log \frac {\exp(\mathbf{f}_{h, w}^{c_i})} { \sum_{c_j=1}^{C} \exp(\mathbf{f}_{h, w}^{c_j})} \right], \end{equation} where $\mathbf{f}_{h, w}^{c_j}$ represents the prediction of this pixel at $(h, w)$ belonging to $c_j$, and $c_i$ is the ground truth label $\mathbf{y}$.} In supervised fine-tuning, we optimize the following classification loss: \begin{equation} \mathcal{L}_{sup} = \lambda_{ce} \mathcal{L}_{ce}. \end{equation} The overall loss for the semi-supervised learning stage can be formulated as: \begin{equation} \mathcal{L}_{semi} = \lambda_{ce} \mathcal{L}_{ce} + \lambda_{pseudo} \mathcal{L}_{pseudo}. \end{equation} \section{Experiments for Self-Supervised Learning} \label{sec:exp_selfsl} \zjh{In this section, we evaluate the performance of our proposed self-supervised learning method for terrain semantic segmentation. Note that the fine-tuning process is consistent across all the experiments.} \subsection{Experiment Setup} \zjh{Our model is based on DeepLabV3+~\cite{DeeplabV3_plus}, adopting ResNet-101~\cite{he2016deep} as the backbone. We evaluate the method by fine-tuning the whole model on the semantic segmentation task after self-supervised pre-training. The batch size is set to 12 in pre-training and 16 in fine-tuning. $\lambda_{inp}$ is set to 0.5 and $\lambda_{lbp}$ is set to 0.5 for self-supervised learning. The numbers of iterations for pre-training and fine-tuning are 15,000 and 50,000 for the S$^{5}$Mars dataset, and 30,000 and 60,000 for the AI4MARS dataset, respectively.
Unless otherwise noted, the image size for training is 512$\times$512.} \zjh{We evaluate performance using the following metrics: \begin{itemize} \item \noindent\textbf{Pixel Accuracy} (ACC): It computes the ratio between the number of correctly classified pixels and the total number of pixels. \item \noindent\textbf{Mean Pixel Accuracy} (MACC): It computes ACC on a per-class basis and then averages over all semantic classes. \item \noindent\textbf{Mean Intersection over Union} (mIoU): This is the most widely used metric for semantic segmentation. IoU computes the ratio between the intersection and the union of the ground truth label and the predicted label on a per-class basis. IoU is then averaged over all semantic classes to obtain mIoU. \item \noindent\textbf{Frequency Weighted Intersection over Union} (FWIoU): It is a variant of mIoU that weights each class by its appearance frequency; the weighted sum of the per-class IoU is reported as the result. \end{itemize} } \subsection{Comparison Results} \zjh{We compare our model with state-of-the-art self-supervised learning methods, including MIM-based methods and contrastive learning methods. In our experiment, we report the mean and variance of the experimental results of 5 runs with the same backbone for the S$^{5}$Mars dataset and 3 runs for the AI4MARS~\cite{AI4Mars} dataset, which has been introduced in Section~\ref{sec:dataset}. As shown in Table~\ref{table:self_fl512} and Table~\ref{table:self_ai}, our method achieves the best results on both our dataset and the AI4MARS dataset. (Note that we do not use the official implementations of \cite{he2021masked} and \cite{wei2021masked} because of the difference in backbone. We transfer them to our backbone and focus only on methodological differences, marking them with the notation $^\dagger$.) \begin{table}[t] \centering \caption{Segmentation performance on the S$^{5}$Mars dataset.
$^\dagger$ indicates the change of backbone. } \label{table:self_fl512} \setlength{\tabcolsep}{1.2mm}{ \begin{tabular}{c|cccc} \toprule Method & ACC (\%) & MACC (\%) & FWIoU (\%) & mIoU (\%) \\ \midrule Baseline & 92.13$\pm$0.21 & 77.97$\pm$0.96 & 85.74$\pm$0.34 & 71.78$\pm$0.67\\ \midrule DenseCL~\cite{densecl} & 92.22$\pm$0.46 & 79.41$\pm$1.08 & 86.05$\pm$0.57 & 72.88$\pm$1.58\\ PixPro~\cite{pixpro} & 92.31$\pm$0.23 & 79.87$\pm$0.22 & 86.09$\pm$0.38 & 73.74$\pm$0.27\\ \midrule MAE$^\dagger$~\cite{he2021masked} & 92.10$\pm$0.19 & 78.99$\pm$1.53 & 85.79$\pm$0.28 & 72.36$\pm$1.21\\ MaskFeat$^\dagger$~\cite{wei2021masked} & 91.91$\pm$0.67 & 79.62$\pm$1.18 & 85.77$\pm$0.64 & 72.89$\pm$1.74\\ \midrule \textbf{Ours} & \textbf{92.46}$\pm$0.19 & \textbf{81.13}$\pm$0.81 & \textbf{86.41}$\pm$0.31 & \textbf{74.36}$\pm$1.33\\ \bottomrule \end{tabular} } \end{table} \begin{table}[!t] \centering \caption{Segmentation performance on the AI4Mars dataset } \label{table:self_ai} \setlength{\tabcolsep}{1.2mm}{ \begin{tabular}{c|cccc} \toprule Method & ACC (\%) & MACC (\%) & FWIoU (\%) & mIoU (\%) \\ \midrule Baseline & 93.70$\pm$0.13 & 73.95$\pm$0.19 & 88.18$\pm$0.26 & 69.02$\pm$0.11 \\ \midrule MAE$^\dagger$~\cite{he2021masked} & 93.63$\pm$0.18 & 74.91$\pm$0.39 & 88.07$\pm$0.33 & 69.75$\pm$0.43 \\ MaskFeat$^\dagger$~\cite{wei2021masked} & 93.61$\pm$0.12 & 74.02$\pm$0.11 & 88.16$\pm$0.21 & 68.84$\pm$0.12 \\ \midrule \textbf{Ours} & \textbf{93.82}$\pm$0.15 & \textbf{74.92}$\pm$0.31 & \textbf{88.42}$\pm$0.27 & \textbf{69.83}$\pm$0.35 \\ \bottomrule \end{tabular} } \end{table} } \begin{table*}[h] \centering \caption{Effect of masking strategy with cropped size of 256 $\times$ 256.} \vspace{1mm} \label{table:abl_mask_strategy} \setlength{\tabcolsep}{6mm}{ \begin{tabular}{c|c|cccc} \toprule Mask type & Mask ratio & ACC (\%) & MACC (\%) & FWIoU (\%) & mIoU (\%) \\ \midrule \makecell[c]{\multirow{4}{*}{\centering Rectangular}} & 30\% &\textbf{91.06}$\pm$0.28& \textbf{71.13}$\pm$1.07 & 
\textbf{83.71}$\pm$0.43 & \textbf{65.10}$\pm$1.23\\ & 40\% &90.85$\pm$0.17& 70.28$\pm$0.26 & 83.47$\pm$0.14 & 64.61$\pm$0.34\\ & 50\% &91.03$\pm$0.13& 70.29$\pm$1.87 & 83.46$\pm$0.30 & 64.13$\pm$0.66\\ & 60\% &90.95$\pm$0.21& 70.24$\pm$0.35 & 83.10$\pm$0.30 & 64.36$\pm$0.32\\ \midrule \multirow{4}{*}{Patch-wise}& 30\% &90.85$\pm$0.24& 70.74$\pm$1.78 & 83.42$\pm$0.41 & 64.77$\pm$1.79\\ & 40\% &\textbf{91.05}$\pm$0.23& 70.80$\pm$1.42 & \textbf{83.71}$\pm$0.40 & \textbf{65.13}$\pm$1.37\\ & 50\% &90.96$\pm$0.19& \textbf{71.15}$\pm$1.08 & 83.53$\pm$0.25 & 65.10$\pm$1.49\\ & 60\% &90.95$\pm$0.21& 70.30$\pm$0.36 & 83.53$\pm$0.30 & 64.56$\pm$0.32\\ \midrule & 50\% &90.54$\pm$0.06& 70.58$\pm$0.62 & 83.15$\pm$0.27 & 64.60$\pm$0.43\\ Free-form & 60\% &\textbf{91.07}$\pm$0.27& \textbf{71.29}$\pm$0.33 & 83.59$\pm$0.24 & \textbf{65.27}$\pm$0.18\\ & 70\% &91.04$\pm$0.11& 70.68$\pm$1.16 & \textbf{83.66}$\pm$0.26 & 64.75$\pm$1.36\\ \bottomrule \end{tabular} } \end{table*} \begin{table}[!h] \centering \begin{minipage}[!t]{1\columnwidth} \caption{Ablation studies on multi-tasking.} \label{multi-task} \centering \setlength{\tabcolsep}{5mm}{ \begin{tabular}{cc|cc} \toprule $\mathcal{L}_{inp}$ & $\mathcal{L}_{lbp}$ & FWIoU(\%) & mIoU(\%) \\ \midrule \Checkmark & &85.79$\pm$0.28 &72.36$\pm$1.21\\ \midrule & \Checkmark& 85.84$\pm$0.61 & 74.01$\pm$1.99\\ \midrule \Checkmark&\Checkmark& \textbf{86.41}$\pm$0.31 & \textbf{74.36}$\pm$1.33\\ \bottomrule \end{tabular}} \end{minipage} \\[10pt] \end{table} \begin{table}[t] \centering \caption{Ablation studies on data augmentation. RC: random crop. RF: random flip.
CJ: color jitter.} \label{table:data_aug} \setlength{\tabcolsep}{1.5mm}{ \begin{tabular}{ccc|cccc} \toprule RC & RF & CJ & ACC (\%) & MACC (\%) & FWIoU (\%) & mIoU (\%) \\ \midrule \Checkmark& & & 92.21$\pm$0.25 & \textbf{81.64}$\pm$0.60 & 86.21$\pm$0.19 & 74.34$\pm$0.46 \\ \midrule \Checkmark&\Checkmark& & \textbf{92.46}$\pm$0.19 & 81.13$\pm$0.81 & \textbf{86.41}$\pm$0.31 & \textbf{74.36}$\pm$1.33 \\ \midrule \Checkmark&\Checkmark&\Checkmark & 92.38$\pm$0.14 & 80.31$\pm$1.04 & 86.25$\pm$0.20 & 73.77$\pm$1.78 \\ \bottomrule \end{tabular} } \end{table} \zjh{Compared to \cite{he2021masked}, which uses raw pixels as the prediction target, our method introduces an extra prediction target to assist the network in extracting discriminative features, thus achieving better results. \cite{wei2021masked} uses the HOG feature as the prediction target, and the gradient information it focuses on is mainly concentrated at contours. However, for Mars image data, the contours of many objects are unclear and small objects such as broken rocks are often masked completely, which makes the representation learning of the network less effective. We also test the performance of contrastive learning methods designed for dense prediction downstream tasks. \cite{densecl} proposes a method for dense prediction tasks, which optimizes a pairwise contrastive loss at both the global image level and the local pixel level. \cite{pixpro} encourages the corresponding pixels of two different views of an image to be consistent. However, these contrastive learning methods fail to achieve good performance because they do not account for the characteristics of Mars images. There is a high degree of similarity between different terrains on Mars, and data augmentation cannot expand the data distribution as expected. This makes regional contrastive learning, which relies only on data transformations without label guidance, less effective.
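To make the texture target concrete, a toy LBP histogram can be computed as below; note that this simplified sketch uses raw 8-neighbor codes ($P=8$, $R=1$), whereas the paper uses the uniform LBP with ($P$, $R$) $=$ (24, 3):

```python
import numpy as np

def lbp8_histogram(img):
    """Normalized histogram of raw LBP codes with P = 8, R = 1.
    A toy version of the paper's uniform-LBP target, which uses (P, R) = (24, 3)."""
    img = np.asarray(img, dtype=float)
    c = img[1:-1, 1:-1]  # interior pixels (centers)
    # 8 neighbours, clockwise from top-left; each contributes one bit of the code.
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c, dtype=int)
    for bit, (dy, dx) in enumerate(shifts):
        nb = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        code += (nb >= c).astype(int) << bit
    hist = np.bincount(code.ravel(), minlength=256).astype(float)
    return hist / hist.sum()
```

The normalized histogram (here 256 bins; $P+2$ bins in the uniform variant) is what the texture branch would regress.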
Our method aims to guide the model to extract both the low-frequency signal (\textit{e}.\textit{g}.~the color feature) and the high-frequency signal (\textit{e}.\textit{g}.~the texture feature). The multi-task mechanism makes the features of different categories more discriminative and obtains better performance on the Mars dataset. In addition, our method has higher training efficiency than the contrastive learning methods, which require a large memory bank to store negative pairs. } \subsection{Ablation Studies} \zjh{In the following, we give the ablation studies on the S$^{5}$Mars dataset. We report the mean and variance of the experimental results of at least 3 runs to obtain a more reliable analysis of our method.} \zjh{\noindent\textbf{Masking Strategy.} We study the effect of the different masking strategies in Table~\ref{table:abl_mask_strategy}. Specifically, we consider fine-tuning performances under different mask types and mask ratios. } {We can see that our method is robust to different mask types. Note that different mask types correspond to different optimal mask ratios. For the free-form mask, a mask ratio of about 60\% is recommended from the results, while a relatively low mask ratio of 40\%--50\% results in better performance when using patch-wise masks. Meanwhile, a much lower mask ratio of 30\% is suitable for rectangular masks. This is because different mask types have different local information loss rates, resulting in different prediction distances~\cite{xie2021simmim}. In our experiment, we adopt the free-form masking strategy with a mask ratio of 0.6 by default, which gives the best performance in the experiment.} \zjh{\noindent\textbf{Multi-tasking.} Table~\ref{multi-task} ablates the effect of different combinations of tasks. As we can see, multi-tasking is better than optimizing any single task. Optimization by the single inpainting task often results in insufficient diversity of the feature space.
Conversely, the LBP prediction task focuses on the high-frequency signals of the image, and optimizing it alone leads to somewhat unstable training. } \zjh{\noindent\textbf{Data Augmentation.} We consider the following common augmentations: random crop, random flip, and color jittering. As shown in Table~\ref{table:data_aug}, unlike most contrastive learning methods, our method does not rely on a large number of data transformations to produce a wide data distribution. The model can still achieve good performance when only the random crop is applied. However, we observe that color jittering results in performance degradation. We infer that this is probably caused by the concentrated color distribution of the Mars images; color jittering leads to a significant difference in color distribution between training data and testing data.} \begin{table}[t] \centering \caption{Ablation studies on the LBP implementation. $P$ and $R$ represent the angular resolution and spatial resolution of the operator, respectively.} \label{table:lbp} \setlength{\tabcolsep}{2.3mm}{ \begin{tabular}{c|cccc} \toprule ($P$, $R$) & ACC (\%) & MACC (\%) & FWIoU (\%) & mIoU (\%) \\ \midrule (16, 2) & 91.84$\pm$0.58& 80.38$\pm$0.34& 85.84$\pm$0.49 & 73.50$\pm$0.63\\ \midrule (24, 3) & \textbf{92.46} $\pm$0.19 & \textbf{81.13}$\pm$0.81 &86.41$\pm$0.31 & \textbf{74.36}$\pm$1.33\\ \midrule (32, 4) & 92.36$\pm$0.27& 80.15$\pm$1.05 & \textbf{86.51}$\pm$0.37 & 73.96$\pm$0.68\\ \bottomrule \end{tabular} } \end{table} \begin{table}[t] \centering \caption{Ablation studies on loss region.
$Whole$ means that loss is calculated on the whole image, and $Masked$ means that loss is calculated only on the masked area.} \label{table:loss_region} \setlength{\tabcolsep}{1.5mm}{ \begin{tabular}{c|cccc} \toprule Loss region & ACC (\%) & MACC (\%) & FWIoU (\%) & mIoU (\%) \\ \midrule $Whole$ &92.42$\pm$0.17&80.31$\pm$0.35& \textbf{86.41}$\pm$0.26&74.00$\pm$1.03\\ \midrule $Masked$ &\textbf{92.46}$\pm$0.19 & \textbf{81.13}$\pm$0.81 & \textbf{86.41}$\pm$0.31 & \textbf{74.36}$\pm$1.33\\ \bottomrule \end{tabular} } \end{table} \begin{table*}[t] \centering \caption{Segmentation performance on the S$^{5}$Mars and the AI4Mars dataset. $^\dagger$ indicates ImageNet pre-training.} \label{table:semi} \setlength{\tabcolsep}{6.0mm}{ \begin{tabular}{c|c|c|c|c} \toprule \multirow{2}{*}{Method}& \multicolumn{2}{c|}{S$^{5}$Mars}& \multicolumn{2}{c}{AI4Mars} \\ \cmidrule(lr){2-3}\cmidrule(lr){4-5} & FWIoU (\%) & \makecell[c]{mIoU (\%)} & \makecell[c]{FWIoU (\%)} & \makecell[c]{mIoU (\%)} \\ \midrule Mean Teacher~\cite{antti2017mean} + CutOut~\cite{devries2017improved} &62.98 & 63.05 & 75.31& 58.01\\ Mean Teacher~\cite{antti2017mean} + CutMix~\cite{CutMix} & 62.56 & 62.62 & 74.81 & 57.70\\ Mean Teacher~\cite{antti2017mean} + ClassMix~\cite{olsson2021classmix} & 62.84 & 62.89 & 70.67 & 53.84\\ \midrule ReCo$^\dagger$~\cite{liu2021bootstrapping} + CutOut~\cite{devries2017improved} & 76.47 & 76.38 & 83.23 & 68.73\\ ReCo$^\dagger$~\cite{liu2021bootstrapping} + CutMix~\cite{CutMix}& 76.28 & 76.16 & 83.11& 68.78\\ ReCo$^\dagger$~\cite{liu2021bootstrapping} + ClassMix~\cite{olsson2021classmix} & 76.70 & 76.59 & 83.14 & 68.77\\ \midrule \textbf{Ours} & \textbf{87.18} & \textbf{77.20}&\textbf{88.82} &\textbf{70.64} \\ \bottomrule \end{tabular} } \end{table*} \zjh{\noindent\textbf{LBP Implementation.} We study the effect of different LBP implementations. 
Specifically, we set different $(P, R)$ values, which denote the angular resolution (quantization of the angular space) and spatial resolution of the operator. Following the default setting in \cite{ULBP}, we set ($P$, $R$) to (16, 2), (24, 3), and (32, 4) respectively in our experiment. The results are summarized in Table~\ref{table:lbp}. The values of $P$ and $R$ directly determine the number of pattern types (the dimension of the LBP histogram to be predicted) extracted by the operator, which is closely related to the representation ability of the operator. When ($P$, $R$) is set too small, the representation power is reduced due to an insufficient number of quantization modes; when ($P$, $R$) is too large, performance degrades somewhat due to the introduction of more noise. We finally set ($P$, $R$) to (24, 3) in our experiment.} \zjh{\noindent\textbf{Loss Region.} Table~\ref{table:loss_region} shows the effect of different loss regions in the pre-training stage. Computing the loss only on the masked area is better than computing it on the whole image. Similar results are found in \cite{xie2021simmim}. It indicates that for masked image modeling, the prediction task can obtain a better feature representation than the reconstruction task, which may be because the reconstruction of unmasked areas is relatively simple and is not consistent with the prediction task of masked areas.
Features learned by the network are less effective for downstream task fine-tuning due to the wasted part of network capacity~\cite{xie2021simmim}.} \begin{table}[t] \centering \caption{Ablation studies on different modules with cropped size of 256 $\times$ 256 on the S$^{5}$Mars dataset.} \label{table:semi_module} \setlength{\tabcolsep}{1.0mm}{ \begin{tabular}{c|cccc} \toprule Modules & ACC & MACC & FWIoU & mIoU \\ \midrule \textit{w/o} Semi-Supervised Learning & 91.07\% & 71.29\% & 83.59\% & 65.27\%\\ \midrule Pseudo Label & 90.74\% & 69.57\% & 83.15\% & 63.77\%\\ \midrule Pseudo Label + Task Uncertainty & \textbf{91.73\%} & \textbf{74.73\%} & \textbf{84.78\%} & \textbf{69.38\%}\\ \bottomrule \end{tabular} } \end{table} \begin{table}[t] \centering \caption{Ablation studies on different certainty thresholds with cropped size of 256 $\times$ 256 on the S$^{5}$Mars dataset.} \label{table:semi_threshold} \renewcommand\arraystretch{1.2} \setlength{\tabcolsep}{2.3mm}{ \begin{tabular}{c|cccc} \toprule Thresholds & ACC (\%) & MACC (\%) & FWIoU (\%) & mIoU (\%) \\ \midrule 0.3 & 90.77 & 71.52 & 83.41 & 63.83\\ 0.5 & 91.16 & 72.55 & 83.93 & 66.48\\ 0.7 & 91.17 & 72.64 & 84.04 & 66.50\\ 0.9 & \textbf{91.73} & \textbf{74.73} & \textbf{84.78} & \textbf{69.38}\\ 0.99 & 91.29 & 73.06 & 84.14 & 65.88\\ 0.999 & 90.98 & 71.15 & 83.78 & 65.13\\ \bottomrule \end{tabular} } \end{table} \section{Experiments for Semi-Supervised Learning} \label{sec:exp_semisl} In this section, we evaluate our approach as a semi-supervised learning framework for semantic segmentation. Because our dataset is sparsely labeled, unlabeled areas of the data can be used to further improve the generalization ability and performance of the model. \subsection{Experiment Setup} As in the supervised fine-tuning setting, we adopt DeepLabV3+~\cite{DeeplabV3_plus} as our backbone for semi-supervised learning. The feature dimension of the output for each pixel is 256.
We first pre-train our encoder with the self-supervised task, and then fine-tune the network with two loss functions: the segmentation loss and the pseudo-label loss. The segmentation loss is trained for 50,000 iterations; the pseudo-label loss is applied after training for 30,000 iterations. \subsection{Comparison Results} Table~\ref{table:semi} shows the results of our method and other semi-supervised segmentation methods. Our method consistently outperforms the alternatives. Mean Teacher utilizes a teacher network to generate pseudo-labels, and its teacher network utilizes an offline momentum method to update parameters. However, such an approach may still produce noisy labels that can harm the training process. In contrast, our method reduces the task uncertainty in pseudo-labels, guaranteeing a small conditional entropy $H(\mathbf{Y}|\mathbf{Z})$. This enhances the inter-class separability in the feature space. ReCo~\cite{liu2021bootstrapping} applies contrastive learning to extract the information of unlabeled data, and assigns pseudo-labels to unlabeled data to bring the pixel features of the same category closer and push the pixel features of different categories farther apart. However, contrastive learning requires numerous positive and negative samples to learn from, and the computational cost of pixel-level contrastive learning is huge. Our method employs pseudo-labels to assist training and does not require many positive and negative samples. Compared with semi-supervised methods based on data augmentation, our method also has advantages. Because of the similarity between different terrains on Mars, traditional data augmentation cannot effectively enhance the distribution of the training set. Our method applies the task uncertainty to reduce the noise in the label data, making the training more effective.
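For reference, the four metrics reported in the tables above (ACC, MACC, mIoU, FWIoU) can all be computed from a class confusion matrix; a minimal NumPy sketch (the helper name is ours):

```python
import numpy as np

def segmentation_metrics(conf):
    """Compute ACC, MACC, mIoU, FWIoU from a C x C confusion matrix
    (rows: ground-truth class, columns: predicted class)."""
    conf = np.asarray(conf, dtype=float)
    tp = np.diag(conf)                 # correctly classified pixels per class
    gt = conf.sum(axis=1)              # ground-truth pixels per class
    pred = conf.sum(axis=0)            # predicted pixels per class
    acc = tp.sum() / conf.sum()
    macc = np.nanmean(tp / gt)
    iou = tp / (gt + pred - tp)        # per-class intersection over union
    miou = np.nanmean(iou)
    fwiou = np.nansum((gt / conf.sum()) * iou)
    return acc, macc, miou, fwiou
```

Classes absent from the ground truth yield NaN per-class entries, which the `nan`-aware reductions skip.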
\subsection{Ablation Studies} In this subsection, we conduct experiments on the various parts of the semi-supervised method to illustrate their effects and roles. \noindent\textbf{Different Modules.} Table~\ref{table:semi_module} shows the effect of the different modules in semi-supervised learning. We notice that simply using pseudo-labels does not yield a satisfactory performance boost. This is because the generated pseudo-labels contain a lot of noise and errors, which cannot effectively improve the separability and compactness of the features. Better performance is obtained by applying task uncertainty as a constraint, because this removes the uncertainty and the selected pseudo-labels are more accurate. Combining these components achieves the best performance, where the network can better utilize the initialization obtained from self-supervised learning and the information in unlabeled data to obtain stronger generalization performance. \noindent\textbf{Certainty Thresholds.} Table~\ref{table:semi_threshold} shows the effect of different thresholds in semi-supervised learning. We consider the labels of the selected data to be more credible and accurate. As the threshold increases, the segmentation accuracy first increases and then decreases. When the threshold is small, the selected data introduce too much noise, so the pseudo-labels are not accurate enough. When the threshold is too large, less data is selected, which cannot effectively enhance the dataset and improve the generalization ability. \subsection{Visualizations and Interpretability} \zjh{In this part, we give visualization results to further demonstrate the validity of our method.
\begin{figure}[t] \setlength{\abovecaptionskip}{-0.2cm} \centering \includegraphics[width=0.9\linewidth]{Figure_new/inpainting_res_ieee.pdf} \caption{\zjh{The inpainting results in self-supervised learning stage.}} \label{fig:inpainting} \end{figure} \begin{figure}[t] \setlength{\abovecaptionskip}{-0.0cm} \centering \includegraphics[width=0.99\linewidth]{Figure_new/feat_ieee1.pdf} \caption{\zjh{Visualization of the extracted features for each pixel on the S$^{5}$Mars dataset. Compared with the baseline model, our method generates a better feature distribution.}} \label{fig:features} \end{figure} \begin{figure}[t] \centering \includegraphics[width=0.9\linewidth]{Figure_new/seg_sub_ieee3.pdf} \caption{\zjh{Subjective segmentation results compared with the baseline. Compared with the baseline, our method can give more reasonable prediction results in both the labeled region (yellow circle) and the unlabeled region (red circle).}} \label{fig:seg_sub} \end{figure} \noindent\textbf{Inpainting Results.} We give an example of images recovered by the model in the self-supervised pre-training stage. As shown in Fig.~\ref{fig:inpainting}, the model learns the capability to restore the image, predicting the pixels of masked regions. However, the model prefers to give a blurry solution as we mentioned before, and the edges cannot be well recovered. This is because different images of Mars usually have similar colors and unclear contours, making it difficult for the model to learn good semantic information. Therefore, we introduce the high-frequency texture feature as our extra target to assist the representation learning of the model. \noindent\textbf{Feature Visualization.} The visualization of the output of the last layer in the encoder $\mathbf{f}(\cdot)$ is shown in Fig.~\ref{fig:features}. Our method enables features learned by the network to be more compact and separable than those of the baseline algorithm.
In our method, self-supervised pre-training provides a good initialization for the fine-tuning stage, which improves the quality of pseudo-labels to some extent. The fine-tuning stage in a semi-supervised manner provides more supervised signals and results in a more separable feature space learned by the network. \noindent\textbf{Subjective Segmentation Results.} We give subjective segmentation results in Fig.~\ref{fig:seg_sub}. The segmentation results of our method are more accurate on the rock in the middle of the presented example. Moreover, benefiting from the semi-supervised approach based on uncertainty estimation, our model produces a more reasonable result in the unlabeled area.} \subsection{Failure Case Study and Future Work} \zjh{ Errors occur mainly as confusion between rock and bedrock: the former is completely exposed on the ground while the latter is often partially covered. It is difficult to distinguish them by texture information alone, which is one of the challenging aspects of our dataset. For future work, one direction is to design specifically for difficult categories; for example, a dedicated per-category classifier could be trained to improve performance.} \section{Conclusion} \label{sec:conclusion} We propose a fine-grained annotated dataset for Martian terrain segmentation. This dataset provides sparse and high-confidence labeled data, which effectively assists subsequent Mars exploration work. To fully utilize this dataset, we propose a learning mechanism of self-supervised pre-training followed by semi-supervised fine-tuning. In the pre-training stage, a multi-tasking mechanism is used to extract features such as shape and texture of the landforms, which provides meaningful information for downstream segmentation tasks. In the fine-tuning stage, we adopt a pseudo-label semi-supervised method based on confidence selection, which makes full use of the unlabeled areas in the dataset to improve the generalization ability of the model.
Our method finally outperforms previous segmentation algorithms with satisfactory performance gains. \small
\section*{Metastability implies no criticality} \subsection*{Time scales and length scales} A metastable state lifetime, $\tau_\mathrm{MS}$, is a property of an irreversible system. Unlike equilibrium properties, $\tau_\mathrm{MS}$ can depend upon system size and preparation protocols. For example, upon ordinary cooling to near or below its limit of stability, $T_\mathrm{s} \approx 215$\,K, liquid water will coarsen to ice on time scales as short as $10^{-6}$\,s, while hyper-quenching to temperatures well below $T_\mathrm{s}$ at rates comparable to $10^6$\,K/s can produce long-lived glassy states.\cite{Glass} Further, because no experiment can control all aspects of a nonequilibrium system, $\tau_\mathrm{MS}$ has a distribution of values for any one experimental protocol. Recent experiments studying ice coarsening in droplets of cold water illustrate this fact.\cite{Nilsson2014} The dynamics of deeply supercooled water is heterogeneous, with domain sizes and time scales growing as the temperature, $T$, decreases. Viewed on small enough length scales and short enough time scales, at the coldest conditions of metastability (i.e., near $T_\mathrm{s}$), these fluctuations are large enough and slow enough to give the illusion of two distinct liquids.\cite{Limmer2013, Yagasaki2014, Limmer2014b} Not surprisingly, all estimates of the imagined liquid-liquid critical temperature are close to $T_\mathrm{s}$,\cite{Holton2012} but the fluctuations at this temperature are mesoscopic transients characteristic of ice coarsening, not criticality. The relevant relaxation time of the liquid, $\tau_\mathrm{R}$, is the time to equilibrate the liquid on length scale $a$, where $a \approx 0.2$ or 0.3 nm is the characteristic microscopic length of the liquid. This relaxation time is closely related to the metastable lifetime.
Specifically, for $T\lesssim T_\mathrm{s}$, the average and variance of $\tau_\mathrm{R}$ and $\tau_\mathrm{MS}$ grow with decreasing temperature, but the ratio is $\tau_\mathrm{MS}/\tau_\mathrm{R} \lesssim10^3$ throughout this regime.\cite{Limmer2013a} Near presumed criticality, the time to equilibrate on length scale $\xi$ would be of order $\tau_\xi=\tau_\mathrm{R} (\xi/a)^z$, where $z \approx 3$.\cite{Hohenberg1977} But as Binder notes,\cite{Binder2014} $\tau_\xi < \tau_\mathrm{MS}$ because the liquid cannot resist crystallization for times longer than $\tau_\mathrm{MS}$. Accordingly, $\xi/a< (\tau_\mathrm{MS}/\tau_\mathrm{R})^{1/3}$. At the coldest metastable conditions (i.e., near where criticality is imagined to appear), it follows that $\xi < 2$ or 3 nm. This bound is similar in size to the linear dimension of the simulation boxes used by Palmer \textit{et al.}\cite{Palmer2014} It is a size that is insufficient to demonstrate the signature of two-phase coexistence, a fact we turn to now. \begin{figure} \begin{center} \includegraphics[width=3.3in]{Fig1.pdf} \caption{ Alleged interfacial free energies as functions of system size $N$. The black circles and error bars are the four data points from Ref.~\onlinecite{Palmer2014}. The solid line is the least-square fit of the data to a linear function of $N$. The dashed line is the least-square fit of the data to a linear function of $N^{2/3}$. The data cannot distinguish the different functions.} \label{fig:Fig1} \end{center} \end{figure} \subsection*{No evidence of a stable interface} Demonstration of two-phase coexistence in a molecular simulation requires evidence of a stable interface separating the two phases. Thus, for a three-dimensional system grown proportionally in each direction, the interfacial free energy, $\Delta F$, must grow as $N^{2/3}$, where $N$ is the number of molecules in the simulation box.
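The correlation-length bound above, $\xi/a < (\tau_\mathrm{MS}/\tau_\mathrm{R})^{1/z}$, amounts to simple arithmetic; a minimal numerical check using the values quoted in the text:

```python
# Check the bound xi < a * (tau_MS / tau_R)^(1/z) with the quoted values.
ratio = 1e3   # upper bound on tau_MS / tau_R near T_s
z = 3         # dynamic exponent, z ~ 3
for a in (0.2, 0.3):  # characteristic microscopic length, in nm
    xi_max = a * ratio ** (1.0 / z)
    print(f"a = {a} nm -> xi < {xi_max:.1f} nm")
```

Both values of $a$ reproduce the $\xi < 2$ or 3 nm bound stated above.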
Palmer \textit{et al.}'s conclusion that their model exhibits a stable interface is based entirely upon showing that four data points, plotted as a function of $N^{2/3}$, fall on a straight line to a good approximation. But the system sizes they consider extend over only a factor of three. Such a small range is inadequate, as illustrated in Fig.~\ref{fig:Fig1} by least-square fitting the four data points with two different lines. Both a linear function of $N$ and a linear function of $N^{2/3}$ appear to provide satisfactory fits of the data. Thus, it is impossible from the data provided to discern whether or not there is a stable interface. Other phenomena could be equally or more consistent with the data. For example, finite transient domains could produce a $\Delta F$ that is initially linear in $N$ and becomes independent of $N$ at large $N$. In cases where a first-order phase transition is certain (e.g., between liquid water and ice), surface scaling can be safely presumed and used to estimate the surface tension. See, for example, Ref.~\onlinecite{Limmer2011}. But such a presumption is inappropriate for cases where the transition itself is in question. \section*{Technical issues} \subsection*{Free energy of the order parameter} The free energy $\Delta F$ is the reversible work to prepare an interface at conditions of coexistence. It is essentially the height of the barrier in a bistable free energy as a function of the order parameter, which in this case would be the density $\rho$. This free energy function, $F(\rho)$, is therefore the quantity to be judged when assessing the quality and reproducibility of a calculation of $\Delta F$. Reference \onlinecite{Palmer2014} does not show this function, but with the information shared with us,\cite{Private2014} Fig.~\ref{fig:Fig2}a graphs Palmer \textit{et al.}'s $F(\rho)$ for their largest system, $N=600$.
Figure~\ref{fig:Fig2}b shows a cross section of the $N=600$ system at the density coinciding with the maximum in $F(\rho)$; it is a snapshot adapted from Fig. 2 of Ref.~\onlinecite{Palmer2014}. The green box outlines the boundary of the periodically replicated simulation cell. The blue and red particles are the oxygen atoms of water molecules in high- and low-density regions, respectively. Reference \onlinecite{Palmer2014} pictures six of the periodic replicas. Here, Fig.~\ref{fig:Fig2} crops out five of those replicas to avoid a false impression of a more extended interface than observed. \begin{figure} [b!] \begin{center} \includegraphics[width=3.25in]{Fig2.pdf} \caption{(a) Free energy function (in units of $k_\mathrm{B}T = 1/\beta$) calculated and used by Palmer \textit{et al.}\cite{Palmer2014, Private2014} to estimate the surface free energy of the $N=600$ system. The red circle and error bar show their estimated surface free energy. The anharmonic low-density basin and asymmetric high-curvature barrier suggest poor equilibration and statistical uncertainties much larger than the cited error bar. (b) A configuration from the simulation of Ref.~\onlinecite{Palmer2014} with $N=600$ at conditions of assumed phase coexistence. Adapted from Fig. 2 of Ref.~\onlinecite{Palmer2014}.} \label{fig:Fig2} \end{center} \end{figure} There are two noteworthy features of the graphed $F(\rho)$: 1. The bottom of the left basin is asymmetric, which is contrary to behavior expected of liquid matter. In particular, reversible fluctuations should yield a parabolic basin up to at least $k_\mathrm{B} T$ above the minimum.\cite{IMSMa} 2. A bump at the barrier top makes the curvature of the barrier relatively high, which is contrary to behavior expected in the presence of a stable interface.
Specifically, for the configuration pictured in Fig.~\ref{fig:Fig2}b, if the interface were stable, there would be little free energy difference between the pictured configuration and a similar configuration with somewhat thicker red (or blue) regions. The only curvature at the barrier top should be due to the effects of finite size on fluctuations of the interface, and the barrier top should become flatter as the system size grows.\cite{IMSMb} The implication of these features is that the stated error bar greatly underestimates the uncertainty and likely error in $\Delta F$. To the extent that curvature at the barrier top is nonzero, there must be deviations from $N^{2/3}$ scaling of the surface energy. The unusual shape of the reported low-density basin suggests that the simulation is trapped in an irreversible state, possibly a glassy configuration. Indeed, all reasonable models of water will possess such configurations, and it is these configurations that either are or foreshadow the long-lived metastable states of nonequilibrium amorphous ices.\cite{Limmer2014} The possibility of errors in free energies due to nonequilibrium effects is generally tested by unweighting neighboring sampling windows and checking for misalignment and hysteresis. Either symptom of irreversibility would then be addressed by further sampling. Reference \onlinecite{Palmer2014} writes that the ``bootstrap'' method is applied to determine error estimates. This method is provided by a standard software package for implementing free energy calculations.\cite{MBAR} It is a reliable error-estimate method provided sampling can be demonstrated to be reversible, which is most easily accomplished if there are no slow variables other than those being controlled in the free energy calculation. Otherwise, ``bootstrap'' estimates can be misleading.
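The caveat about ``bootstrap'' error bars can be made concrete with a short sketch (synthetic correlated data, not the data of any reference): naive bootstrap resampling treats samples as independent, so on a correlated series it understates the true uncertainty of the mean.

```python
# Naive bootstrap on correlated (AR(1)) data underestimates the error of the mean.
import random

random.seed(1)
phi, n = 0.95, 2000                      # assumed correlation per step; series length

def ar1_series():
    """One synthetic correlated time series, standing in for a slow observable."""
    x, out = 0.0, []
    for _ in range(n):
        x = phi * x + random.gauss(0.0, 1.0)
        out.append(x)
    return out

# "True" standard error of the mean, estimated from many independent realizations
means = [sum(ar1_series()) / n for _ in range(200)]
m = sum(means) / len(means)
true_se = (sum((v - m) ** 2 for v in means) / len(means)) ** 0.5

# Naive bootstrap standard error from a single realization
series = ar1_series()
boot_means = []
for _ in range(200):
    resample = [random.choice(series) for _ in range(n)]
    boot_means.append(sum(resample) / n)
bm = sum(boot_means) / len(boot_means)
boot_se = (sum((v - bm) ** 2 for v in boot_means) / len(boot_means)) ** 0.5

print(f"true SE ~ {true_se:.3f}, naive bootstrap SE ~ {boot_se:.3f}")
```

For this toy series the naive bootstrap error bar comes out several times smaller than the true one, mirroring the concern in the text: the method is trustworthy only when the sampled states are effectively uncorrelated.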
The fact that supercooled water can be trapped into nonequilibrium glassy states implies that there are many slow degrees of freedom beyond density, $\rho$, and global order parameter, $Q_6$. Yet it is only $\rho$ and $Q_6$ that are controlled in the free energy calculations of Ref.~\onlinecite{Palmer2014}, and none of the sampling techniques applied in that work automatically address the concomitant problems of irreversibility. \subsection*{Time scales} Free energy functions are reversible work functions. As such, they should be independent of time scales. Kinetics enters the picture only to the extent that irreversible dynamics persists in affecting the computed work functions. There is an infinity of ways by which irreversibility can poison a free energy calculation. Reference~\onlinecite{Limmer2013} by Limmer and Chandler offers the proposition that apparent bistability often detected by several groups studying supercooled water is the result of slow equilibration of $Q_6$. \begin{figure}[t!] \begin{center} \includegraphics[width=3.5in]{Fig3.pdf} \caption{ Relaxation functions for a variant of the ST2 model of water at $p=2.2$\,kbar and $T=235$\,K, demonstrating the broad range of time scales and average time-scale separations characteristic of this system. Red and blue lines are, respectively, the $\rho$ and $Q_6$ autocorrelation functions in different windows sampled during free energy calculations of Ref.~\onlinecite{Limmer2013}. The functions for two specific windows are shown in the upper two panels, where $\bar{\rho}$ and $\bar{Q}_6$ values specify the averages of the density and global order parameter in the specific window. Random oscillations about zero at the largest times are the results of autocorrelating over finite times, typically between 50 and 100 times that for the $Q_6$-correlation function to reach 0.1 of its initial value.
Notice the two- (or more) step relaxation of $\rho$, and that for small $\bar{Q}_6$, the long-time relaxation times of $\rho$ increase and approach those of $Q_6$ as $\bar{\rho}$ decreases. Averaging all the functions in the lower panel, using the equilibrium weight for each window, yields the black solid and dashed lines in the bottom panel. The averaged functions are not equilibrium time-correlation functions, but rather illustrate the typical differences between relaxation of $\rho$ and $Q_6$ that must be accounted for to obtain the reversible work surface in $\rho$-$Q_6$ space.} \label{fig:Fig3} \end{center} \end{figure} This idea is explored by computing the conditional probability distribution for $\rho$ given a specific value of $Q_6$, $P(\rho | Q_6)$. Averaging this distribution with the equilibrium distribution for $Q_6$ yields the correct equilibrium free energy, i.e., $\beta F(\rho) = - \ln [\int \mathrm{d} Q_6\, P(Q_6)\, P(\rho | Q_6)]$. On the other hand, if $Q_6$ is poorly sampled, its distribution will be out of equilibrium, and averaging with that out-of-equilibrium distribution will yield an incorrect free energy function. Importantly, Limmer and Chandler show\cite{Limmer2013} that if $P(Q_6)$ is fixed at the equilibrium function of the high-density liquid, not letting it adjust to different values of $\rho$, the averaged conditional distribution yields the bistable free energy function reported by Palmer \textit{et al.}\cite{Palmer2014} In other words, if sampling of $\rho$ proceeds on time scales too short for $Q_6$ to relax, bistable behavior for density will be found. In contrast, if sampling of $\rho$ and $Q_6$ is sufficient to equilibrate both variables, Limmer and Chandler\cite{Limmer2013} show that the bistable liquid behavior disappears. Palmer \textit{et al.}\cite{Palmer2014} disagree with this explanation of their earlier results, saying that a time-scale separation between $\rho$ and $Q_6$ does not exist.
This disagreement manifests confusion on at least two levels. First, the Limmer-Chandler analysis\cite{Limmer2013} is based upon a time-scale separation in the normal high-density region of supercooled water while Ref.~\onlinecite{Palmer2014} considers the low-density amorphous region. Second, it seems that Ref.~\onlinecite{Palmer2014} incorrectly estimates pertinent relaxation times. For example, Ref.~\onlinecite{Palmer2014} claims to compute free energy functions by sampling over time scales that are hundreds of times longer than the relaxation time scale for $Q_6$, and it further reports that unconstrained trajectories running for such times show no hint of crystallization. Such claims contradict findings from 1 $\mu$s molecular dynamics trajectories that exhibit ice coarsening.\cite{Yagasaki2014, Limmer2014b} The structural relaxation time for a high-density ST2 liquid at the conditions examined is about $10^2$\,ps, and the relaxation time for $Q_6$ is about $10^4$\,ps.\cite{Limmer2013a} One-hundred times longer would reach $10^6$\,ps, where completed coarsening of the supercooled ST2 model is both observed\cite{Yagasaki2014} and predicted to be observable.\cite{Limmer2013a, Bear} It can be difficult to identify absolute physical time scales from a Monte Carlo calculation, such as those carried out for Ref.~\onlinecite{Palmer2014}, but correlation functions can be studied as functions of computation time or Monte Carlo steps. 
Figure~\ref{fig:Fig3} shows such relaxation functions for $\rho$ and $Q_6$, \begin{equation*} C_b(t) = \frac{\langle \delta \rho(0)\,\delta \rho(t) \rangle_b}{\langle (\delta \rho)^2 \rangle_b} \quad \mathrm{and} \quad \frac{\langle \delta Q_6(0)\,\delta Q_6(t) \rangle_b}{\langle (\delta Q_6)^2 \rangle_b}\,, \end{equation*} respectively, taken from the calculations of Limmer and Chandler.\cite{Limmer2013} Here, $\langle \cdots \rangle_b$ denotes the ensemble average with the biasing potential used to confine configurations to the $b$th window of $\rho$-$Q_6$ space in a free energy calculation. The fluctuations, $\delta \rho$ and $\delta Q_6$, are deviations from their respective means in the $b$th window. \begin{figure}[t!] \begin{center} \includegraphics[width=3.3in]{Fig4.pdf} \caption{ Time-correlation functions for $\rho$ and $Q_6$ computed by Palmer \textit{et al.}\cite{Palmer2014, Private2014} The negative tails indicate a drift in the computations. Units of time, MCS, are arbitrary, referring to computation steps, and the algorithm for the steps is different from that of Fig.~\ref{fig:Fig3}. } \label{fig:Fig4} \end{center} \end{figure} Figure \ref{fig:Fig3} shows a broad variety of relaxation behaviors. The equilibrium weight for a given window, $p_b$, depends upon the thermodynamic conditions under consideration. The averaged correlation functions, $\langle C(t) \rangle = \sum_b p_b C_b(t)$, are shown in Fig.~\ref{fig:Fig3} for conditions where the normal supercooled liquid is metastable. The time-scale separation at these conditions is clear. Moreover, the upper panels of Fig.~\ref{fig:Fig3} illustrate how, as density decreases towards that of ice while global order remains amorphous, the time-scale separation diminishes at the longer times but remains significant at the shorter times. Reference~\onlinecite{Palmer2014} reports different time dependence for time-correlation functions of $\rho$ and $Q_6$.
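A minimal estimator for such window correlation functions can be sketched as follows (the AR(1) series below is synthetic, standing in for $\rho(t)$ sampled within one biased window; it is not data from either reference):

```python
# Normalized autocorrelation C(t), as defined above, estimated from a time series.
# A synthetic AR(1) process stands in for rho(t) within one sampling window.
import random

random.seed(0)
phi = 0.9                                 # assumed correlation per sampling step
x = [0.0]
for _ in range(20000):
    x.append(phi * x[-1] + random.gauss(0.0, 1.0))

def autocorr(series, t):
    """<delta x(0) delta x(t)> / <(delta x)^2> over the sampled window."""
    n = len(series)
    mean = sum(series) / n
    d = [s - mean for s in series]
    var = sum(v * v for v in d) / n
    cov = sum(d[i] * d[i + t] for i in range(n - t)) / (n - t)
    return cov / var

print([round(autocorr(x, t), 2) for t in (0, 1, 5, 10)])
```

For this process the estimates track the exact result $C(t)=\phi^t$; in a real calculation the total run length must greatly exceed the slowest decay time of $C(t)$, which is precisely the equilibration criterion at issue in the text.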
The graph provided for these functions in Ref.~\onlinecite{Palmer2014} crops out negative values of $C(t)$, but with information provided to us,\cite{Private2014} a more complete graph is shown here in Fig.~\ref{fig:Fig4}. These correlation functions were computed by averaging over trajectories initiated in the low density amorphous system -- the part of configuration space where glassy states exist. It is expected that both $\rho$ and $Q_6$ will relax slowly in this regime. As such, the correlation functions presented in Ref.~\onlinecite{Palmer2014} do not contradict or discredit the results and arguments put forward in Ref.~\onlinecite{Limmer2013} because Ref.~\onlinecite{Limmer2013} focuses on fluctuations from the normal liquid. Two features of Palmer \textit{et al.}'s $C(t)$ are noteworthy: 1. At the shortest times for which they provide data, $C(t)$ for the density shows a hint of two-step relaxation. (More data at shorter times could clarify the extent of this feature.) Such relaxation is common at glass-forming conditions.\cite{BinderKob} The suggested early-time relaxation would reflect the relaxation processes that dominate at the higher densities, and it would be consistent with the behavior exhibited in the top left panel of Fig.~\ref{fig:Fig3}. 2. The negative tail at large sampling time $t$ seems to oscillate about a non-zero negative value. (More data at longer times could clarify the extent of this feature.) A tail oscillating about a non-zero value would imply a systematic drift in the simulations. This apparent nonstationarity seems to be direct evidence of poor equilibration. \subsection*{Freezing} In prior work focusing on how fluctuations associated with coarsening of ice can be confused with two-liquid behavior,\cite{Limmer2011,Limmer2013} free energy surfaces as functions of $\rho$ and $Q_6$ for various models have been computed. Three such models are variants of the ST2 model, which are termed the ST2a, ST2b and ST2c models.
These variants differ only in the way long-ranged interactions are treated, and qualitative behaviors of the three variants are similar.\cite{Limmer2013, Digress} In their new work, Palmer \textit{et al.}\cite{Palmer2014} study freezing of the ST2b version and attempt to compare with results of Ref.~\onlinecite{Limmer2013}. The attempted comparison leads Palmer \textit{et al.} to claim that the Limmer-Chandler results of Ref.~\onlinecite{Limmer2013} exhibit large unexplained errors. This claim turns out to be baseless, as discussed now. Properties of coexistence -- the temperature-pressure locations, the surface tension, and so forth -- are model dependent, and computing these properties from simulation requires significant care in establishing reversibility, coexistence and system-size dependence. The Limmer-Chandler treatments of the ST2 models in Refs.~\onlinecite{Limmer2011} and \onlinecite{Limmer2013} are less ambitious, with the purposes of establishing the non-existence of a second liquid phase and establishing the presence of a crystal ice basin. Therefore, much better statistics and smaller error estimates were obtained for amorphous regions than for crystalline regions. No attempt was made to locate phase coexistence properties for the ST2 model. Indeed, the phase diagram for freezing the ST2 model is yet to be determined by anyone. Palmer \textit{et al.}\cite{Palmer2014} report that coexistence conditions for the ST2 model are known, but that is not true. Weber and Stillinger\cite{WeberStillinger} estimated a temperature at which a small spherical cluster will melt, and from that estimate, they suggest, in effect, that the low-pressure melting temperature for an ST2 model is about 300\,K. By another means, Ref.~\onlinecite{Palmer2014} estimates another melting point to be $273\pm3$\,K at $p=2.6$\,kbar. Until now, that seems to be the extent of what is known about melting points of ST2 models. \begin{figure*}[t!]
\begin{center} \includegraphics[width=6.3 in]{Fig5.pdf} \caption{Free energy function of order parameter and coexistence line between liquid and ice I. (a) The free energy for the ST2c model as a function of crystal-order parameter $Q_6$, according to the data collected to produce the free energy surface shown in the upper-right panel of Fig. 13 in Ref.~\onlinecite{Limmer2013}; error estimates are for 1, 2 and 3 standard deviations, as indicated. (b) The water-ice I coexistence line modeled from experimental data by Feistel and Wagner,\cite{Feistel2006} with shifted temperature scale applying to ST2 simulations noted in parentheses. Linear extrapolations from two experimental points are shown with dashed lines. The large points locate three estimates of coexistence points extracted from simulation data of three variants of ST2 models at or near pressure $p=2.6$\,kbar or temperature $T=235$\,K. The red and blue points are deduced from the simulations described in Ref.~\onlinecite{Limmer2013}. Error estimates for the ST2a variant are from those in Ref.~\onlinecite{Limmer2013}; error estimates for the ST2c variant follow from those shown here in Panel (a). The green simulation point and error estimate for the ST2b model are from Palmer \textit{et al}.\cite{Palmer2014} } \label{fig:Fig5} \end{center} \end{figure*} To augment that limited knowledge, we can glean what information can be obtained from the data presented in Ref.~\onlinecite{Limmer2013}. With the ST2a model, the statistics is sufficient to locate the coexistence for $T=235$\,K at $p =3.4 \pm 0.3$\,kbar. This coexistence point is shown here in Fig.~\ref{fig:Fig5}b. (The relevant free energy function is shown in Fig. 6a of Ref.~\onlinecite{Limmer2013}.) With the ST2c model, the statistics is less accurate. See Fig.~\ref{fig:Fig5}a. By re-weighting this free energy function for this variant of the ST2 model, coexistence is found for $T=235$\,K at $p = 2.9 \pm 0.9$\,kbar. 
This coexistence point is also shown in Fig.~\ref{fig:Fig5}b. For the ST2b model, however, the data assembled in Ref.~\onlinecite{Limmer2013} is insufficient to locate a phase coexistence point. Among other things, the full extents of the ST2b liquid and crystal basins were neither sampled nor shown in Ref.~\onlinecite{Limmer2013}. Remarkably, Ref.~\onlinecite{Palmer2014} reports predictions of freezing from the ST2b calculations in Ref.~\onlinecite{Limmer2013}, and it uses these predictions to discredit Ref.~\onlinecite{Limmer2013}. In actuality, the predictions from the ST2 calculations in Ref.~\onlinecite{Limmer2013} are in reasonable accord with what can be deduced from experiment, assuming Weber and Stillinger's estimate of the low-pressure melting temperature is correct. In particular, because it is presumed to be 300\,K, the temperature scale for the ST2 model can be imagined to be shifted by about 27\,K from experiment. With that shift, the experimentally determined phase coexistence line between liquid water and ice I provides an estimate of that for the ST2 model. The high-pressure portion of that line, $p>2$\,kbar, is an extrapolation into a regime where ice I is metastable with respect to other ice phases. The equation of state of Feistel and Wagner is used to make that extrapolation, assuming that equation of state remains reliable beyond the regime for which it was derived. By comparing the extrapolated experimental line with the ST2 estimates in Fig.~\ref{fig:Fig5}b, it appears that both Ref.~\onlinecite{Palmer2014} and Ref.~\onlinecite{Limmer2013} do equally well (or poorly) in locating coexistence points for the ST2 model of water.
Reference \onlinecite{Palmer2014} also claims that Ref.~\onlinecite{Limmer2013} gives an erroneous value for the chemical potential difference between liquid and crystal at $T=230$\,K and $p=2.2$\,kbar, saying that the results of Ref.~\onlinecite{Limmer2013} predict a value of 66\,J/mol, whereas the correct value is an order of magnitude larger. In actual fact, the best estimate from the numerical data of Ref.~\onlinecite{Limmer2013} is about 400\,J/mol. This estimate uses the ST2a variant and the assumption that all variants will give about the same chemical potential differences. \subsection*{Summary} This paper examines much of what Palmer \textit{et al.} have presented in their new publication.\cite{Palmer2014} Lack of evidence for $N^{2/3}$ scaling in surface free energy is demonstrated. Signatures of poor equilibration are identified, the most likely explanation being that Palmer \textit{et al.}'s low-density amorphous basin reflects not a low-density liquid, but rather one or more of the low-density states that can contribute to the non-equilibrium glassy states of water. Finally, criticisms of Ref.~\onlinecite{Limmer2013} are shown to be erroneous. The main result in contention -- whether a small enough simulation cell of ST2 water exhibits bistability at supercooled conditions -- is not an issue of possible errors in force-field evaluation or algorithm implementation. Such sources of confusion were checked and ruled out years ago, as noted in Ref.~\onlinecite{Limmer2013}. Rather, disagreement about two-liquid-like bistability is based upon the issue of equilibration or reversibility. Indeed, Ref.~\onlinecite{Limmer2013} shows that this bistability is reproduced by constraining the $Q_6$-distribution to the standard liquid state distribution, and that this bistability disappears as the $Q_6$-distribution is allowed to relax.
This general result, independent of whatever free energy sampling method is employed, argues that Palmer \textit{et al.}, and others with similar results, need to demonstrate control of equilibration. Three points require attention: 1. Unlike that shown in Fig.~\ref{fig:Fig4}, $Q_6$-correlation functions should relax in a fashion consistent with stationary distributions of states. Behaviors like those shown in the upper panels of Fig.~\ref{fig:Fig3} are illustrative of what should be expected. While relaxation in unconstrained supercooled ST2 water will necessarily exhibit nonstationarity (coarsening near $T_\mathrm{s}$ takes place on time scales that are only $10^2$ times longer than the average $Q_6$-relaxation time), constrained ensembles used in free energy calculations should be stationary if sampled for long enough simulation times. 2. With relaxation of $Q_6$ established in each case, reversibility of free energy calculations should be examined by passing back and forth between neighboring $\rho$-$Q_6$ windows with a variety of different paths. Sampling within each window should extend for times at least of order $10^2$ longer than the time for the $Q_6$-autocorrelation function to relax. Hysteresis and path dependence will necessarily appear when moving between large and small $Q_6$. The size of these irreversible effects requires consideration in error estimates. Asymmetry of the low-density basin and sharpness of the barrier top in Fig.~\ref{fig:Fig2} manifest irreversible effects thus far not accounted for by Palmer \textit{et al.}. These features necessarily imply uncertainties larger by factors of three or more than those reported. Further, when establishing bistability, the location of coexistence introduces additional errors, which are also not accounted for in Ref.~\onlinecite{Palmer2014}. 3. Only if two-liquid-like bistability persists when satisfying Points 1 and 2, would the result meaningfully challenge Ref.~\onlinecite{Limmer2013}.
At that point, one would want to know the limit of length scales for this newly discovered heterogeneity. For real water, we estimate that any remnant of reversible two-liquid-like behavior will disappear on length scales larger than 2 or 3 nm. Is the same true for the ST2 model? Further, what is the physical basis for why the ST2 model at small scales could behave in a fashion that is qualitatively different from other reasonable models of liquid water, and what is the physical basis for why the computations of Ref.~\onlinecite{Limmer2013} miss the effect? On their disagreement with Ref.~\onlinecite{Limmer2013}, Palmer \textit{et al.}\cite{Palmer2014} offer speculations, but no physical picture with accompanying quantitative results, and the speculations prove to be wrong. For sure, the length-scale cutoff of metastability implies that the issues in this debate have little to do with large-enough scale simulations or with experimental observations. Moreover, it is not the same issue as whether multiple amorphous basins exist. All reasonable models of water exhibit glass-forming states, some of high density, others of low density. These are transient states at reversible conditions. They can become long-lived irreversible states, but only at conditions driven far from equilibrium. Their systematic exploration therefore requires techniques of nonequilibrium statistical mechanics. See, for instance, Ref.~\onlinecite{Limmer2014}. Corresponding-states analysis\cite{Limmer2013a} indicates that ST2 water exhibits precursors to glassy behavior at temperatures significantly higher than those of other models and experiment. For example, in real water this correlated dynamics begins when temperature is lowered below 277\,K, but in the ST2 model it begins at 305\,K.
It is this higher corresponding-state temperature that seems responsible for the appearance of irreversible artifacts in poorly equilibrated simulations of the ST2 model.\cite{Limmer2013, Limmer2013a} This is not to say that straightforward but unequilibrated simulations of the ST2 model correctly describe amorphous ices. Without employing appropriate methods to attend to the enormously longer timescales of glass and glass transitions, erroneous behaviors will be found, and the ST2 model will be mistreated in that way as well. \begin{acknowledgments} This work was initiated after the publication of Ref.~\onlinecite{Palmer2014}, when I first learned of the paper. Pablo Debenedetti provided simulation data from Ref.~\onlinecite{Palmer2014}, which helped elucidate the contradictions with Ref.~\onlinecite{Limmer2013}. David Limmer provided generous advice on what I have written here. My research on this topic has been supported by the Director, Office of Science, Office of Basic Energy Sciences, Materials Sciences and Engineering Division and Chemical Sciences, Geosciences, and Biosciences Division of the U.S. Department of Energy under Contract No. DE-AC02-05CH11231. \end{acknowledgments}
1407.6449
\section{Introduction} In this paper, we consider the Cauchy problem on the following linear symmetric hyperbolic system with relaxation (cf.~\cite{Da}): \begin{gather} u_t + A_m u_{x} + L_m u = 0 \label{sys1} \end{gather} with \begin{equation}\label{ID} u|_{t=0}=u_0. \end{equation} Here $u=u(t,x) = (u_1, \cdots, u_m)^T(t,x) \in \mathbb{R}^m$ over $t >0$, $x \in \mathbb{R}$ is an unknown function, $u_0=u_0(x)\in \mathbb{R}^m$ over $x\in \mathbb{R}$ is a given function, and $A_m$ and $L_m$ are $m\times m$ real constant matrices. In general we assume $A_m$ is symmetric and $L_m$ is degenerately dissipative in the sense of $1\leq \dim\, (\ker L_m)\leq m-1$. As pointed out in \cite{UDK}, for a general linear degenerately dissipative system it is interesting to study its decay structure under additional conditions on the coefficient matrices and further investigate the corresponding time-decay property of solutions to the Cauchy problem at the linear level. The purpose of the paper is to present two concrete models of $A_m$ and $L_m$, which do not satisfy the dissipative condition in \cite{UDK}, to derive the decay structure of the corresponding linear systems. We remark that a similar issue has been extensively investigated in Villani \cite{Vi} for an infinite-dimensional dynamical system, for instance, in the context of kinetic theory. In what follows let us explain the motivation of dealing with the problem considered here. More generally one may consider the system in multidimensional space $\mathbb{R}^n$: \begin{equation} A^0_m u_t + \sum_{j=1}^n A^j_m u_{x_j} + L_mu = 0, \label{In.sys1} \end{equation} where $u=u(t,x) \in \mathbb{R}^m$ over $t \geq 0$, $x \in \mathbb{R}^n$. When the degenerate relaxation matrix $L_m$ is symmetric, Umeda-Kawashima-Shizuta \cite{UKS84} proved the large-time asymptotic stability of solutions for a class of equations of hyperbolic-parabolic type with applications to both electro-magneto-fluid dynamics and magnetohydrodynamics.
The key idea in \cite{UKS84} and the later generalized work \cite{SK85} that first introduced the so-called Kawashima-Shizuta (KS) condition is to construct the compensating matrix to capture the dissipation of systems over the degenerate kernel space of $L_m$. The typical feature of the time-decay property of solutions established in those work is that the high frequency part decays exponentially while the low frequency part decays polynomially with the same rate as the heat kernel. To precisely state these results, we apply Fourier transform to \eqref{In.sys1} (or \eqref{sys1}). Then we can obtain \begin{equation} A^0_m \hat{u}_t + i |\xi| A_m(\omega) \hat{u} + L_m \hat{u} = 0, \label{Fsys1} \end{equation} where $\xi\in \mathbb{R}^n$ denotes the Fourier variable of $x\in \mathbb{R}^n$, $\omega=\xi/|\xi|\in S^{n-1}$, and $A_m(\omega) := \sum_{j=1}^n A^j_m\omega_j$. Moreover, we prepare some notation. Given a real matrix $X$, we use $X^{\rm sy}$ and $X^{\rm asy}$ to denote the symmetric and skew-symmetric parts of $X$, respectively, namely, $X^{\rm sy}=(X+X^T)/2$ and $X^{\rm asy}=(X-X^T)/2$. Then the decay result in \cite{UKS84,SK85} is stated as follows. \begin{pro} [Decay property of the standard type (\cite{UKS84,SK85})]\label{pro1} Consider \eqref{In.sys1} with the following condition: \begin{description} \item[Condition (A)$_0$] $A^0_m$ is real symmetric and positive definite, $A^j_m$ for each $1\leq j\leq n$ is real symmetric, and $L_m$ is real symmetric and nonnegative definite with the nontrivial kernel. \end{description} For this problem, assume that the following condition holds: \begin{description} \item[Condition (K)] There is a real compensating matrix $K(\omega) \in C^{\infty}(S^{n-1})$ with the properties: $K(-\omega) = - K(\omega)$, $(K(\omega)A^0_m)^T = - K(\omega)A^0_m$ and \begin{equation*} [K(\omega)A_m(\omega)]^{\rm sy} >0 \quad \text{on} \quad \ker L_m \end{equation*} for each $\omega\in S^{n-1}$.
\end{description} Then the Fourier image $\hat u$ of the solution $u$ to the equation \eqref{In.sys1} with initial data $u(0,x)=u_0(x)$ satisfies the pointwise estimate: \begin{equation}\label{std-point} |\hat u(t,\xi)| \le Ce^{-c \lambda(\xi)t}|\hat u_0(\xi)|, \end{equation} where $\lambda(\xi) := |\xi|^2/(1+|\xi|^2)$. Furthermore, let $s \ge 0$ be an integer and suppose that the initial data $u_0$ belong to $H^s \cap L^1$. Then the solution $u$ satisfies the decay estimate: \begin{equation}\label{std-decay} \|\partial_x^{k} u(t)\|_{L^2} \le C(1+t)^{-n/4-k/2}\|u_0\|_{L^1} + Ce^{-ct}\|\partial_x^{k} u_0\|_{L^2} \end{equation} for $k\le s$. Here $C$ and $c$ are positive constants. \end{pro} Under the conditions {\rm (A)$_0$} and {\rm (K)}, we can construct the following energy inequality: \begin{equation*} \frac{d}{dt} E + c D \le 0, \end{equation*} where \begin{equation}\label{1K} E = \langle A_m^0 \hat{u},\hat{u} \rangle - \frac{\alpha|\xi|}{1+|\xi|^2}\delta \langle iK(\omega)A_m^0 \hat{u},\hat{u} \rangle, \quad D = \frac{|\xi|^2}{1+|\xi|^2}|\hat{u}|^2 + |(I-P)\hat{u}|^2, \end{equation} $\alpha$ and $\delta$ are suitably small constants, and $P$ denotes the orthogonal projection onto $\ker L_m$. \smallskip For the nonlinear system, the global existence of small-amplitude classical solutions was proved by Hanouzet-Natalini \cite{HN} in one space dimension and by Yong \cite{Yo} in several space dimensions, provided that the system is strictly entropy dissipative and satisfies the KS condition. And later on, the large time behavior of solutions was obtained by Bianchini-Hanouzet-Natalini \cite{BHN} and Kawashima-Yong \cite{KY2} based on the analysis of the Green function of the linearized problem. Those results show that solutions to such nonlinear systems will not develop singularities (e.g., shock waves) in finite time for small smooth initial perturbations, cf.~\cite{Da,Liu}.
Notice that the $L^2$-stability of a constant equilibrium state in a one-dimensional system of dissipative hyperbolic balance laws endowed with a convex entropy was also studied by Ruggeri-Serre \cite{RS}. Moreover, it would be an interesting and important topic to study the relaxation limit of general hyperbolic conservation laws with relaxations, see \cite{CLL,KY} and references therein. Recently it has been found that there exist physical systems which violate the KS condition but still have some kind of time-decay properties. For instance, for the dissipative Timoshenko system \cite{IHK08,IK08} and the Euler-Maxwell system \cite{D,USK,UK}, the linearized relaxation matrix $L_m$ has a nonzero skew-symmetric part while it was still proved that solutions decay in time in some different way. Besides those, there are two related works dealing with general partially dissipative hyperbolic systems with zero-order source when the KS condition is not satisfied. Beauchard-Zuazua \cite{BZ11} first observed the equivalence of the KS condition with the Kalman rank condition in the context of the control theory. They extended the previous analysis to some other situations beyond the KS condition, and established the explicit estimate on the solution semigroup in terms of the frequency variable and also the global existence of near-equilibrium classical solutions for some nonlinear balance laws without the KS condition. In the meantime, Mascia-Natalini \cite{MN} also made a general study of the same topic for a class of systems without the KS condition. The typical situation considered in \cite{MN} is that the non-dissipative components are linearly degenerate which indeed does not hold under the KS condition (see also \cite{KO}). Notice that both in \cite{BZ11} and \cite{MN}, the rate of convergence of solutions to the equilibrium states for the nonlinear Cauchy problem is still left unknown.
In \cite{UDK}, the same authors of this paper introduced a new structural condition which is a generalization of the KS condition, and also analyzed the corresponding weak dissipative structure, called the regularity-loss type, for general systems with non-symmetric relaxation, a class which includes the Timoshenko system and the Euler-Maxwell system as two concrete examples. Precisely, one has the following result. \begin{pro} [Decay property of the regularity-loss type (\cite{UDK})]\label{pro2} Consider \eqref{In.sys1} with the condition: \begin{description} \item[Condition (A)] $A^0_m$ is real symmetric and positive definite, $A^j_m$ for each $1\leq j\leq n$ is real symmetric, while $L_m$ is not necessarily real symmetric but is nonnegative definite with a nontrivial kernel. \end{description} For this problem, assume that the previous condition {\rm (K)} and the following condition hold: \begin{description} \item[Condition (S)] There is a real matrix $S$ such that $(SA^0_m)^T = SA^0_m$, and \begin{gather*} [SL_m]^{\rm sy}+[L_m]^{\rm sy}\geq 0 \ \ {\rm on} \ \ {\mathbb C}^m, \quad \ker\big([SL_m]^{\rm sy}+[L_m]^{\rm sy}\big) = \ker\,L_m, \end{gather*} and moreover, for each $\omega\in S^{n-1}$, \begin{gather} i[SA_m(\omega)]^{\rm asy}\geq 0 \quad \text{on} \quad \ker\,[L_m]^{\rm sy}. \label{assump-3} \end{gather} \end{description} Then the Fourier image $\hat u$ of the solution $u$ to the equation \eqref{In.sys1} with initial data $u(0,x)=u_0(x)$ satisfies the pointwise estimate: \begin{equation}\label{loss-point} |\hat u(t,\xi)| \le Ce^{-c \lambda(\xi)t}|\hat u_0(\xi)|, \end{equation} where $\lambda(\xi) := |\xi|^2/(1+|\xi|^2)^2$. Moreover, let $s \ge 0$ be an integer and suppose that the initial data $u_0$ belong to $H^s \cap L^1$. Then the solution $u$ satisfies the decay estimate: \begin{equation}\label{loss-decay} \|\partial_x^{k} u(t)\|_{L^2} \le C(1+t)^{-n/4-k/2}\|u_0\|_{L^1} + C(1+t)^{-\ell/2}\|\partial_x^{k+\ell} u_0\|_{L^2} \end{equation} for $k+\ell \le s$.
Here $C$ and $c$ are positive constants. \end{pro} Observe that $\lambda(\xi)$ in \eqref{loss-point} behaves as $|\xi|^2$ as $|\xi|\to 0$ but behaves as $1/|\xi|^2$ as $|\xi|\to \infty$. Thus the estimates \eqref{loss-point} and \eqref{loss-decay} are weaker than \eqref{std-point} and \eqref{std-decay}, respectively. In particular, the decay estimate \eqref{loss-decay} is said to be of the regularity-loss type. Similar decay properties of the regularity-loss type have recently been observed for several interesting systems. We refer the reader to \cite{IHK08,IK08,LK4} (cf. \cite{ABMR,MRR}) for the dissipative Timoshenko system, \cite{D,USK,UK} for the Euler-Maxwell system, \cite{HK,KK} for a hyperbolic-elliptic system in radiation gas dynamics, \cite{LK1,LK2,LK3,LC,SK} for a dissipative plate equation, and \cite{Du-1VMB,DS-VMB} for various kinetic-fluid models. In fact, one can show that Proposition \ref{pro1} can be regarded as a corollary of Proposition \ref{pro2} after replacing \eqref{assump-3} in condition {\rm (S)} by the stronger condition: \begin{equation*} i[SA_m(\omega)]^{\rm asy} \ge 0 \quad \text{on} \quad \mathbb{C}^m \end{equation*} for each $\omega\in S^{n-1}$. The key point for the proof of \eqref{loss-point} is to derive the matrices $S$ and $K(\omega)$ such that the coercive estimate: \begin{equation}\label{core} \delta [K(\omega)A_m(\omega)]^{\rm sy} + [SL_m]^{\rm sy} + [L_m]^{\rm sy} > 0 \quad \text{on} \quad \mathbb{C}^m \end{equation} holds true for suitably small $\delta>0$. Indeed, under the conditions (A), (S) and (K), the estimate \eqref{core} is satisfied.
Then, using \eqref{core}, we get the following energy inequality \begin{equation}\label{known-energy} \frac{d}{dt} E + c D \le 0, \end{equation} where \begin{equation}\label{1K1S} \begin{split} E &= \langle A_m^0 \hat{u},\hat{u} \rangle + \frac{\alpha_1}{1+|\xi|^2}\Big(\langle SA_m^0 \hat{u},\hat{u} \rangle - \frac{\alpha_2 |\xi|}{1+|\xi|^2} \delta \langle iK(\omega)A_m^0 \hat{u},\hat{u} \rangle \Big),\\ D &= \frac{|\xi|^2}{(1+|\xi|^2)^2}|\hat{u}|^2 + \frac{1}{1+|\xi|^2}|(I-P)\hat{u}|^2 + |(I-P_1)\hat{u}|^2, \end{split} \end{equation} $\alpha_1$ and $\alpha_2$ are suitably small constants, and $P$ and $P_1$ denote the orthogonal projections onto $\ker L_m$ and $\ker \, [L_m]^{\rm sy}$, respectively. Interested readers may refer to \cite{UDK} for more details of this issue and also for the construction of $S$ and $K(\omega)$ for the Timoshenko system and the Euler-Maxwell system. Therefore, the conditions in Proposition \ref{pro2} are generalizations of the classical KS conditions. We finally remark that it should be interesting to further investigate the nonlinear stability of constant equilibrium states for systems of the regularity-loss type under the structural condition postulated in Proposition \ref{pro2}. Inspired by the previous work \cite{UDK}, the goal of this paper is to construct much more complex models \eqref{sys1} with given $A_m$ and $L_m$ such that they enjoy some new dissipative structure of the regularity-loss type. Here we recall the notion of {\it uniform dissipativity} of the system \eqref{sys1} introduced in \cite{UDK}. Consider the eigenvalue problem for the system \eqref{sys1}: \begin{equation*} (\eta A^0_m + i\xi A_m + L_m) \phi = 0, \end{equation*} where $\eta \in \mathbb{C}$ and $\phi \in \mathbb{C}^m$. The corresponding characteristic equation is given by \begin{equation}\label{ce} {\rm det}(\eta A^0_m + i\xi A_m + L_m) = 0. \end{equation} The solution $\eta = \eta(i \xi)$ of \eqref{ce} is called the eigenvalue of the system \eqref{sys1}.
\begin{defn} The system \eqref{sys1} is called uniformly dissipative of the type $(p,q)$ if the eigenvalue $\eta=\eta(i\xi)$ satisfies $$ \Re\,\eta(i\xi) \leq -c|\xi|^{2p}/(1+|\xi|^2)^q $$ for all $\xi\in \mathbb{R}^n$, where $c$ is a positive constant and $(p,q)$ is a pair of positive integers. \end{defn} Note that as proved in \cite[Theorem 4.2]{UDK}, one has $\Re\,\eta(i\xi)\leq -c\lambda(\xi)$ whenever the pointwise estimates in the form of \eqref{std-point} or \eqref{loss-point} hold true. Therefore, we can determine the type $(p,q)$ for a uniformly dissipative system \eqref{sys1} in terms of the function $\lambda(\xi)$ obtained from the pointwise estimate on $\hat u(t,\xi)$: \begin{equation} \label{D.key} |\hat u(t,\xi)| \le Ce^{-c \lambda(\xi)t}|\hat u_0(\xi)|. \end{equation} For example, under the assumptions in Proposition \ref{pro1} or \ref{pro2}, the system \eqref{sys1} is uniformly dissipative of the type $(1,1)$ or $(1,2)$, respectively. Notice that the regularity-loss type corresponds to the situation where $p$ is strictly less than $q$, i.e., $p<q$. Historically, Shizuta-Kawashima \cite{SK} showed that, under the condition {\rm(A$)_0$}, the strict dissipativity $\Re \, \eta(i\xi) < 0$ for $\xi \neq 0$ is equivalent to the uniform dissipativity of the type $(1,1)$. Moreover, they showed the pointwise estimate \eqref{std-point} by using only one compensating skew-symmetric matrix $K(\omega)$ (see \eqref{1K}). On the other hand, the authors formulated in \cite{UDK} a class of systems whose dissipativity is of the type $(1,2)$ and obtained Proposition \ref{pro2}. Notice that, in this case, we need to use one compensating symmetric matrix $S$ and one compensating skew-symmetric matrix $K(\omega)$ to get the desired pointwise estimate \eqref{loss-point} (see \eqref{1K1S}).
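To illustrate the definition, consider the classical one-dimensional damped wave (telegraph) system, a standard example satisfying the KS condition; it is not one of the models studied in this paper. With $A^0_m = I$, its characteristic equation \eqref{ce} reads $\eta^2 + \eta + \xi^2 = 0$, and the roots satisfy $\Re\,\eta(i\xi) \le -c\,\xi^2/(1+\xi^2)$, i.e., the system is uniformly dissipative of the type $(1,1)$. A minimal numerical sketch (the matrices below and the constant $c$ are illustrative choices, not taken from the paper):

```python
import numpy as np

# Damped wave (telegraph) system u_t + A u_x + L u = 0; in Fourier space
# d/dt \hat{u} = -(i*xi*A + L) \hat{u}, so the eigenvalues eta(i*xi) solve
# det(eta*I + i*xi*A + L) = 0.  A and L are illustrative choices.
A = np.array([[0.0, -1.0], [-1.0, 0.0]])
L = np.array([[0.0, 0.0], [0.0, 1.0]])

def max_re_eta(xi):
    """Largest real part among the eigenvalues eta(i*xi)."""
    return max(np.linalg.eigvals(-(1j * xi * A + L)).real)

# Uniform dissipativity of the type (1,1): Re eta <= -c*xi^2/(1+xi^2).
c = 0.4  # an empirical constant for this particular system
for xi in [0.1, 0.5, 1.0, 5.0, 50.0]:
    assert max_re_eta(xi) <= -c * xi**2 / (1 + xi**2)
```

For a system of the regularity-loss type, the same experiment would instead show $\max\Re\,\eta(i\xi)$ tending to $0$ like $|\xi|^{2(p-q)}$ as $|\xi|\to\infty$.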
We note that the dissipative Timoshenko system and the Euler-Maxwell system studied in \cite{IHK08} and \cite{UK}, respectively, are included in the class of systems of the type $(1,2)$ which was formulated in \cite{UDK}. However, to get the optimal dissipative estimate for these two examples, we need to use one $S$ and two different $K(\omega)$ (see \cite{MK1,UK}). On the other hand, more complicated concrete models have been found in recent years. Indeed, Mori-Kawashima \cite{MK2} considered the Timoshenko-Cattaneo system with heat conduction and showed that its dissipativity is of the type $(2,3)$. Moreover, they proved the optimal dissipative estimate by using four different $S$ and four different $K(\omega)$. This means that Proposition \ref{pro2} and the class formulated in \cite{UDK} are not enough to analyze the dissipativity of general systems \eqref{In.sys1}, and we have to study other concrete models. In this paper, we will present a study of two concrete models of the system \eqref{sys1} related to the above general issue. For Model I, one has \begin{equation*} p=m-3,\quad q=m-2, \end{equation*} see \eqref{point1} in Theorem \ref{thm1}. For Model II, where $m$ is even, one has \begin{equation*} p=\frac{1}{2}(3m-10),\quad q=2(m-3), \end{equation*} see \eqref{point2} in Theorem \ref{thm2}. In both cases we have $p<q$, and hence the two models that we consider are of the regularity-loss type. Compared with the energy inequality \eqref{known-energy}, the energy inequalities for Models I and II are much more complicated. More precisely, to control the dissipation term, we must employ a large number of compensating symmetric and skew-symmetric matrices, whose numbers depend on the dimension $m$ of the coefficient matrices. Therefore we cannot apply Proposition \ref{pro2} to Models I and II, and direct calculations are needed (see Sections 2 and 3).
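The strict dissipativity underlying such types can also be observed numerically from the characteristic equation \eqref{ce}. The sketch below does this for Model I of Section 2 with $m=6$ (a Timoshenko-Cattaneo-type system); the coefficient values $a_4=a_5=a_6=\gamma=1$ are illustrative assumptions, and the script only checks the qualitative property $\Re\,\eta(i\xi)<0$ for $\xi\neq 0$, not an optimal rate:

```python
import numpy as np

# Model I of Section 2 with m = 6; the coefficients a4 = a5 = a6 = gamma = 1
# are illustrative choices (any nonzero a_j and gamma > 0 are admissible).
a4, a5, a6, gamma = 1.0, 1.0, 1.0, 1.0
A = np.array([
    [0, 1, 0, 0, 0, 0],
    [1, 0, 0, 0, 0, 0],
    [0, 0, 0, a4, 0, 0],
    [0, 0, a4, 0, a5, 0],
    [0, 0, 0, a5, 0, a6],
    [0, 0, 0, 0, a6, 0],
], dtype=float)
L = np.zeros((6, 6))
L[0, 3], L[3, 0], L[5, 5] = 1.0, -1.0, gamma

def max_re_eta(xi):
    """Largest real part among the roots of det(eta*I + i*xi*A + L) = 0."""
    return max(np.linalg.eigvals(-(1j * xi * A + L)).real)

# Strict dissipativity: Re eta(i*xi) < 0 for xi != 0.  The rate degrades
# both as |xi| -> 0 and as |xi| -> infinity (regularity loss).
for xi in [0.3, 1.0, 3.0, 30.0]:
    assert max_re_eta(xi) < 0.0
```

Plotting $-\max\Re\,\eta(i\xi)$ against $\xi$ and fitting the low- and high-frequency slopes is one way to guess the type $(p,q)$ empirically before attempting the energy method.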
The proof of the estimate in the form of \eqref{D.key} is based on the Fourier energy method, and we also give the explicit construction of the matrices $S$ and $K$ as used in Proposition \ref{pro2}. As seen later on, a series of energy estimates is derived, and an appropriate linear combination of them leads to a Lyapunov-type inequality for a time-frequency functional equivalent to $|\hat u(t,\xi)|^2$, which in turn implies \eqref{D.key}. The most delicate point is that it is a priori unclear whether a given choice of $(p,q)$ is optimal; see more discussion in Section 4.1. For that purpose, we also present an alternative approach to determine the value of $(p,q)$ for both Model I and Model II; the detailed strategy of this approach is given later on. The rest of the paper is organized as follows. In Sections 2 and 3, we study Model I and Model II, respectively. In each section, for the given model, we first state the main results on the dissipative structure and the decay property of the system \eqref{sys1}, give the proof by the energy method in the case $m=6$, which indeed corresponds to some existing physical models, show the proof in the general case $m\geq 6$ still using the energy method, and finally give the explicit construction of the matrices $S$ and $K$. The matrices $S$ and $K$ constructed in Subsections 2.3 and 3.3 play an important role in obtaining the coercive estimate similar to \eqref{core}. Consequently, by employing these matrices, we can derive the desired pointwise estimates through \eqref{1estSK} and \eqref{2estSK}, to be verified later on. In the last section, Section 4, we provide another approach to justify the dissipative structure of the system \eqref{sys1}. \medskip \noindent{\it Notations.}\ \ For a nonnegative integer $k$, we denote by $\partial^k_x$ the totality of all the $k$-th order derivatives with respect to $x = (x_1, \cdots ,x_n)$. Let $1\leq p\leq\infty$.
Then $L^p=L^p({\mathbb R}^n)$ denotes the usual Lebesgue space over ${\mathbb R}^n$ with the norm $\|\cdot\|_{L^p}$. For a nonnegative integer $s$, $H^{s}=H^{s}({\mathbb R}^n)$ denotes the $s$-th order Sobolev space over ${\mathbb R}^n$ in the $L^2$ sense, equipped with the norm $\|\cdot\|_{H^{s}}$. We note that $L^2=H^0$. Finally, in this paper, we use $C$ or $c$ to denote various positive constants without confusion. \section{Model I} \subsection{Main result I} In this section, we consider the Cauchy problem \eqref{sys1}, \eqref{ID} with coefficient matrices given by \begin{equation}\label{Tmat} \begin{split} A_m &= \left( \begin{array}{cccc:cccc} {0} & {1} & {0} & {0} & & & & \\ {1} & {0} & {0} & {0} & & & & \\ {0} & {0} & {0} & {a_4} & {0} & & \mbox{\smash{\huge\textit{O}}} & \\ {0} & {0} & {a_4} & {0} & {a_5} & {0} & & \\ \hdashline & & {0} & {a_5} & {0} & {a_6} & & \\ & & & {0} & {a_6} & \ddots & & \\ & \mbox{\smash{\huge\textit{O}}} & & & & & & {a_{m}} \\ & & & & & & {a_{m}} & {0} \end{array} \right), \ \ L_m = \left( \begin{array}{cccc:cccc} {0} & {0} & {0} & {1} & & & & \\ {0} & {0} & {0} & {0} & & & & \\ {0} & {0} & {0} & {0} & & & \mbox{\smash{\huge\textit{O}}} & \\ {-1} & {0} & {0} & {0} & & & & \\ \hdashline & & & & {0} & & & \\ & & & & & \ddots & & \\ & \mbox{\smash{\huge\textit{O}}} & & & & & {0} & \\ & & & & & & & {\gamma} \end{array} \right), \end{split} \end{equation} where the integer $m\geq 6$ is even, $\gamma > 0$, and all elements $a_j$ $(4\leq j\leq m)$ are nonzero. We note that the system \eqref{sys1}, \eqref{Tmat} with $m=6$ is the Timoshenko system with heat conduction via the Cattaneo law (cf.~\cite{FSR,SAR}). For this problem, we can derive the following decay structure.
\begin{thm} \label{thm1} The Fourier image $\hat u$ of the solution $u$ to the Cauchy problem \eqref{sys1}-\eqref{ID} with \eqref{Tmat} satisfies the pointwise estimate: \begin{equation}\label{point1} |\hat u(t,\xi)| \le Ce^{-c \lambda(\xi)t}|\hat u_0(\xi)|, \end{equation} where $\lambda(\xi) := \xi^{2(m-3)}/(1+\xi^2)^{m-2}$. Furthermore, let $s \ge 0$ be an integer and suppose that the initial data $u_0$ belong to $H^s \cap L^1$. Then the solution $u$ satisfies the decay estimate: \begin{equation}\label{decay1} \|\partial_x^{k} u(t)\|_{L^2} \le C(1+t)^{-\frac{1}{2(m-3)}(\frac{1}{2}+k)}\|u_0\|_{L^1} + C(1+t)^{-\frac{\ell}{2}}\|\partial_x^{k+\ell} u_0\|_{L^2} \end{equation} for $k+\ell \le s$. Here $C$ and $c$ are positive constants. \end{thm} We remark that the estimates \eqref{point1} and \eqref{decay1} with $m=6$ are not optimal. Indeed, Mori-Kawashima \cite{MK2} showed sharper estimates. The decay estimate \eqref{decay1} follows immediately from the pointwise estimate \eqref{point1} in Fourier space. Thus readers may refer to the same authors' paper \cite{UDK} (see also \cite{D-AA}), and we omit the proof of \eqref{decay1} for brevity. In order to make the proof more transparent, we first consider the special case $m=6$ in Subsection 2.2, and then generalize it to the case $m\geq 6$ in Subsection 2.3. The proof of \eqref{point1} is thus given in the following two subsections. \subsection{Energy method in the case $m=6$} In this subsection we first consider the case $m=6$.
In this case, the system \eqref{sys1} with \eqref{Tmat} is described as \begin{equation}\label{equations6} \begin{split} &\partial_t \hat u_1 + i \xi \hat u_2 + \hat u_4 = 0, \\ &\partial_t \hat u_2 + i \xi \hat u_1 = 0, \\ &\partial_t \hat u_3 + i \xi a_4 \hat u_4 = 0, \\ &\partial_t \hat u_4 + i \xi (a_4 \hat u_3 + a_5 \hat u_5) - \hat u_1 = 0, \\ &\partial_t \hat u_5 + i \xi (a_{5} \hat u_{4} + a_{6} \hat u_{6}) = 0, \\ &\partial_t \hat u_6 + i \xi a_{6} \hat u_{5} + \gamma \hat u_6 = 0. \end{split} \end{equation} For this system we are going to apply the energy method to derive Theorem \ref{thm1} in the case $m=6$. The proof is organized in the following three steps. \medskip \noindent {\bf Step 1.}\ \ We first derive the basic energy equality for the system \eqref{equations6} in the Fourier space. We multiply the equations of \eqref{equations6} by the components of $\bar{\hat{u}} = (\bar{\hat{u}}_1, \bar{\hat{u}}_2,\bar{\hat{u}}_3,\bar{\hat{u}}_4, \bar{\hat{u}}_5,\bar{\hat{u}}_6)^T$, respectively, and combine the resultant equations. Then we obtain \begin{equation*} \sum_{j=1}^6 \bar{\hat{u}}_j \partial_t \hat u_j + 2 i \xi \Re (\hat u_1 \bar{\hat{u}}_{2}) + 2 i \xi \sum_{j=3}^5 a_{j+1} \Re (\hat u_j \bar{\hat{u}}_{j+1}) + 2 i {\rm Im} (\hat{u}_4 \bar{\hat{u}}_1)+ \gamma |\hat u_6|^2 = 0. \end{equation*} Thus, taking the real part of the above equality, we arrive at the basic energy equality \begin{equation}\label{eq6} \frac{1}{2} \partial_t |\hat u|^2 + \gamma |\hat u_6|^2 = 0. \end{equation} Here we use the simple relation $\partial_t(|\hat{u}_j|^2) = 2 \Re (\bar{\hat{u}}_j \partial_t \hat{u}_j)$ for each $j$. Next we create the dissipation terms. \medskip \noindent {\bf Step 2.}\ \ We first construct the dissipation for $\hat{u}_1$. We multiply the first and fourth equations in \eqref{equations6} by $- \bar{\hat{u}}_{4}$ and $ - \bar{\hat{u}}_{1}$, respectively.
Then, combining the resultant equations and taking the real part, we have \begin{equation}\label{dissipation6-a} - \partial_t \Re (\hat u_{1} \bar{\hat{u}}_4) + |\hat u_{1}|^2 - |\hat u_{4}|^2 - \xi \, \Re (i \hat{u}_{2} \bar{\hat{u}}_4) + a_{4} \xi \, \Re (i \hat{u}_{1} \bar{\hat{u}}_{3}) + a_{5} \xi \, \Re (i \hat{u}_{1} \bar{\hat{u}}_{5}) = 0. \end{equation} On the other hand, we multiply the second and third equations in \eqref{equations6} by $- a_4 \bar{\hat{u}}_{3}$ and $ - a_4 \bar{\hat{u}}_{2}$, respectively. Then, combining the resultant equations and taking the real part, we have \begin{equation*} - a_4 \partial_t \Re (\hat u_{2} \bar{\hat{u}}_3) - a_{4} \xi \, \Re (i \hat{u}_{1} \bar{\hat{u}}_{3}) + a_{4}^2 \xi \, \Re (i \hat{u}_{2} \bar{\hat{u}}_{4}) = 0. \end{equation*} Therefore, combining the above two equalities, we obtain \begin{multline}\label{dissipation6-4} - \partial_t \Re (\hat u_{1} \bar{\hat{u}}_4+ a_4 \hat u_{2} \bar{\hat{u}}_3) + |\hat u_{1}|^2 - |\hat u_{4}|^2 \\ + (a_4^2 - 1) \xi \, \Re (i \hat{u}_{2} \bar{\hat{u}}_4) + a_{5} \xi \, \Re (i \hat{u}_{1} \bar{\hat{u}}_{5}) = 0. \end{multline} Furthermore, we multiply the second and fifth equations in \eqref{equations6} by $- \bar{\hat{u}}_{5}$ and $ - \bar{\hat{u}}_{2}$, respectively. Then, combining the resultant equations and taking the real part, we have \begin{equation}\label{dissipation6-5} - \partial_t \Re (\hat u_{2} \bar{\hat{u}}_5) - \xi \, \Re (i \hat{u}_{1} \bar{\hat{u}}_5) + a_{5} \xi \, \Re (i \hat{u}_{2} \bar{\hat{u}}_{4}) + a_{6} \xi \, \Re (i \hat{u}_{2} \bar{\hat{u}}_{6}) = 0.
\end{equation} Finally, multiplying \eqref{dissipation6-4} and \eqref{dissipation6-5} by $a^2_{5}$ and $ -a_5 (a_{4}^2-1)$, respectively, and combining the resultant equations, we have \begin{multline}\label{dissipation6-6} \partial_t E_1 + a_5^2( |\hat u_{1}|^2 - |\hat u_{4}|^2) \\ + a_5 (a_4^2 + a_5^2 - 1) \xi \, \Re (i \hat{u}_{1} \bar{\hat{u}}_{5}) - a_{5} a_6 (a_{4}^2-1) \xi \, \Re (i \hat{u}_{2} \bar{\hat{u}}_{6}) = 0, \end{multline} where we have defined $E_1 := - \Re \big\{ a_5^2(\hat u_{1} \bar{\hat{u}}_4+ a_4 \hat u_{2} \bar{\hat{u}}_3) - a_5 (a_{4}^2-1)\hat u_{2} \bar{\hat{u}}_5\big\}. $ Next, we multiply the first and second equations in \eqref{equations6} by $- i \xi \bar{\hat{u}}_{2}$ and $i \xi \bar{\hat{u}}_{1}$, respectively. Then, combining the resultant equations and taking the real part, we have \begin{equation}\label{dissipation6-7'} \xi \partial_t E_2 + \xi^2( |\hat u_{2}|^2 - |\hat u_{1}|^2) + \xi \, \Re (i \hat{u}_{2} \bar{\hat{u}}_4) = 0, \end{equation} where $E_2 := - \Re (i \hat u_{1} \bar{\hat{u}}_2)$. Therefore, by Young's inequality, the above equation becomes \begin{equation}\label{dissipation6-7} \xi \partial_t E_2 + \frac{1}{2}\xi^2|\hat u_{2}|^2 \le \xi^2 |\hat u_{1}|^2 +\frac{1}{2} |\hat{u}_4|^2. \end{equation} We multiply the third and fourth equations in \eqref{equations6} by $i \xi a_4 \bar{\hat{u}}_{4}$ and $- i \xi a_4 \bar{\hat{u}}_{3}$, respectively. Then, combining the resultant equations and taking the real part, we have \begin{equation*} a_4 \xi \partial_t \Re (i \hat u_{3} \bar{\hat{u}}_4) + a_4^2 \xi^2( |\hat u_{3}|^2 - |\hat u_{4}|^2) + a_4 a_5 \xi^2 \, \Re (\hat{u}_{3} \bar{\hat{u}}_5) + a_4 \xi \, \Re (i \hat{u}_{1} \bar{\hat{u}}_3)= 0. \end{equation*} On the other hand, we multiply the second and third equations in \eqref{equations6} by $- a_4 \bar{\hat{u}}_{3}$ and $-a_4 \bar{\hat{u}}_{2}$, respectively.
Then, combining the resultant equations and taking the real part, we have \begin{equation*} - a_4 \partial_t \Re (\hat u_{2} \bar{\hat{u}}_3) - a_4 \xi \, \Re (i \hat{u}_{1} \bar{\hat{u}}_3) + a_4^2 \xi \, \Re (i \hat{u}_{2} \bar{\hat{u}}_4)= 0. \end{equation*} Finally, combining the above two equations, we get \begin{equation}\label{dissipation6-8'} \partial_t \big\{ \xi E_3 + F_1 \big\} + a_4^2 \xi^2( |\hat u_{3}|^2 - |\hat u_{4}|^2) + a_4 a_5 \xi^2 \, \Re (\hat{u}_{3} \bar{\hat{u}}_5) + a_4^2 \xi \, \Re (i \hat{u}_{2} \bar{\hat{u}}_4)= 0, \end{equation} where $E_3 := a_4 \Re (i \hat u_{3} \bar{\hat{u}}_4)$ and $F_1 := - a_4 \Re (\hat u_{2} \bar{\hat{u}}_3)$. By using Young's inequality, we can obtain the following inequality: \begin{equation}\label{dissipation6-8} \partial_t \big\{ \xi E_3 + F_1 \big\} + \frac{1}{2}a_4^2 \xi^2 |\hat u_{3}|^2 \le a_4^2 \xi^2 |\hat u_{4}|^2 + \frac{1}{2}a_5^2 \xi^2 |\hat{u}_5|^2 + a_4^2 |\xi| |\hat{u}_{2}| |\hat{u}_4|. \end{equation} Multiplying the fourth and fifth equations in \eqref{equations6} by $i \xi a_{5} \bar{\hat{u}}_{5}$ and $ - i \xi a_{5} \bar{\hat{u}}_{4}$, respectively, combining the resultant equations, and taking the real part, we have \begin{multline}\label{dissipation6-9'} \xi \partial_t E_4 + a_{5}^2 \xi^2( |\hat u_{4}|^2 - |\hat u_{5}|^2) \\ - a_{4} a_{5} \xi^2 \, \Re (\hat{u}_{3} \bar{\hat{u}}_5) + a_{5} a_{6} \xi^2 \, \Re (\hat{u}_{4} \bar{\hat{u}}_{6}) - a_{5} \xi \, \Re (i \hat{u}_{1} \bar{\hat{u}}_{5}) = 0, \end{multline} where $E_4 := a_{5} \Re (i \hat u_{4} \bar{\hat{u}}_5)$. Here, by using Young's inequality, we obtain \begin{multline}\label{dissipation6-9} \xi \partial_t E_4 + \frac{1}{2}a_{5}^2 \xi^2 |\hat u_{4}|^2 \\ \le a_{5}^2 \xi^2 |\hat u_{5}|^2 + \frac{1}{2} a_{6}^2 \xi^2 |\hat{u}_{6}|^2 + a_{4} a_{5} \xi^2 \, \Re (\hat{u}_{3} \bar{\hat{u}}_5) + a_{5} \xi \, \Re (i \hat{u}_{1} \bar{\hat{u}}_{5}).
\end{multline} On the other hand, we multiply the fifth equation and the last equation in \eqref{equations6} by $i \xi a_{6} \bar{\hat{u}}_{6}$ and $ - i \xi a_{6} \bar{\hat{u}}_{5}$, respectively. Then, combining the resultant equations and taking the real part, we obtain \begin{equation*} \xi \partial_t E_5 + a_{6}^2 \xi^2( |\hat u_{5}|^2 - |\hat u_{6}|^2) - a_{5} a_{6} \xi^2 \, \Re (\hat{u}_{4} \bar{\hat{u}}_6) + \gamma a_{6} \xi \, \Re (i \hat{u}_{5} \bar{\hat{u}}_6) = 0, \end{equation*} where $E_5 := a_{6} \Re (i \hat u_{5} \bar{\hat{u}}_6)$. Using Young's inequality, this yields \begin{equation}\label{dissipation6-10} \xi \partial_t E_5 + \frac{1}{2}a_{6}^2 \xi^2 |\hat u_{5}|^2 \le a_{6}^2 \xi^2 |\hat u_{6}|^2 + \frac{1}{2} \gamma^2 |\hat{u}_6|^2 + a_{5} a_{6} \xi^2 \, \Re (\hat{u}_{4} \bar{\hat{u}}_6). \end{equation} \medskip \noindent {\bf Step 3.}\ \ In this step, we sum up the energy inequalities derived in the previous step, and then get the desired energy estimate. Throughout this step, $\beta_j$ with $j \in \mathbb{N}$ denote real numbers to be determined later. We first multiply \eqref{dissipation6-6} and \eqref{dissipation6-7} by $\xi^2$ and $\beta_1$, respectively. Then we combine the resultant equations, obtaining \begin{multline*} \partial_t \big\{ \xi^2 E_1 + \beta_1 \xi E_2 \big\} + (a_5^2 - \beta_1) \xi^2 |\hat u_{1}|^2 + \frac{\beta_1}{2} \xi^2 |\hat u_{2}|^2 \\ \le \Big(\frac{\beta_1}{2} + a_5^2 \xi^2 \Big)|\hat u_{4}|^2 - a_5 (a_4^2 + a_5^2 - 1) \xi^3 \, \Re (i \hat{u}_{1} \bar{\hat{u}}_{5}) + a_{5} a_6 (a_{4}^2-1) \xi^3 \, \Re (i \hat{u}_{2} \bar{\hat{u}}_{6}).
\end{multline*} Moreover, combining \eqref{dissipation6-6}, \eqref{dissipation6-8} and the above inequality, we have \begin{equation*} \begin{split} & \partial_t \big\{ (1+ \xi^2) E_1 + \beta_1 \xi E_2 + \xi E_3 + F_1\big\} \\ &\qquad + \big\{a_5^2 + (a_5^2 - \beta_1) \xi^2\big\} |\hat u_{1}|^2 + \frac{\beta_1}{2} \xi^2 |\hat u_{2}|^2+ \frac{1}{2}a_4^2 \xi^2 |\hat u_{3}|^2 \\ &\le \Big\{a_5^2+ \frac{\beta_1}{2} + (a_4^2 + a_5^2) \xi^2 \Big\}|\hat u_{4}|^2 + \frac{1}{2}a_5^2 \xi^2 |\hat{u}_5|^2 + a_4^2 |\xi| |\hat{u}_{2}| |\hat{u}_4| \\ &\qquad - a_5 (a_4^2 + a_5^2 - 1) \xi (1+\xi^2) \, \Re (i \hat{u}_{1} \bar{\hat{u}}_{5}) + a_{5} a_6 (a_{4}^2-1) \xi (1+\xi^2)\, \Re (i \hat{u}_{2} \bar{\hat{u}}_{6}). \end{split} \end{equation*} For this inequality, letting $\beta_1$ be suitably small and employing Young's inequality, we can get \begin{multline}\label{eneq6-1} \partial_t \big\{(1+ \xi^2) E_1 + c \xi E_2 + \xi E_3 + F_1\big\} + c(1 + \xi^2) |\hat u_{1}|^2 + \beta_1 \xi^2 (|\hat u_{2}|^2+ |\hat u_{3}|^2) \\ \le C(1+\xi^2)|\hat u_{4}|^2 + C \xi^2 |\hat{u}_5|^2 \\ \qquad + |a_4^2 + a_5^2 - 1| C |\xi|^3 |\hat{u}_{1}| |\hat{u}_{5}| + |a_{4}^2-1| C |\xi| (1+\xi^2) |\hat{u}_{2}| |\hat{u}_{6}|. \end{multline} Similarly, we multiply \eqref{dissipation6-9} and \eqref{eneq6-1} by $1+\xi^2$ and $\beta_2 \xi^2$, respectively.
Then we combine the resultant equations, obtaining \begin{equation*} \begin{split} & \partial_t \big\{ \beta_2 \xi^2 ((1+ \xi^2) E_1 + \beta_1 \xi E_2 + \xi E_3 + F_1) + \xi (1+\xi^2)E_4 \big\} \\ &\qquad + \beta_2 c \xi^2 (1 + \xi^2) |\hat u_{1}|^2 + \beta_2 c \xi^4 (|\hat u_{2}|^2+ |\hat u_{3}|^2) + \Big(\frac{1}{2}a_{5}^2 - \beta_2 C \Big) \xi^2(1+\xi^2) |\hat u_{4}|^2 \\ &\le \beta_2 C \xi^4 |\hat{u}_5|^2 + a_{5}^2 \xi^2(1+\xi^2) |\hat u_{5}|^2 + \frac{1}{2} a_{6}^2 \xi^2(1+\xi^2) |\hat{u}_{6}|^2 \\ &\qquad + a_{4} a_{5} \xi^2(1+\xi^2) \, \Re (\hat{u}_{3} \bar{\hat{u}}_5) + a_{5} \xi(1+\xi^2) \, \Re (i \hat{u}_{1} \bar{\hat{u}}_{5}) \\ &\qquad + \beta_2 |a_4^2 + a_5^2 - 1| C |\xi|^5 |\hat{u}_{1}| |\hat{u}_{5}| + \beta_2 |a_{4}^2-1| C |\xi|^3 (1+\xi^2) |\hat{u}_{2}| |\hat{u}_{6}|. \end{split} \end{equation*} Letting $\beta_2$ be suitably small and using Young's inequality, we derive \begin{multline}\label{eneq6-2} \partial_t \big\{ \beta_2 \xi^2 ((1+ \xi^2) E_1 + \beta_1 \xi E_2 + \xi E_3 + F_1) + \xi (1+\xi^2)E_4 \big\} \\ + c \xi^2 (1 + \xi^2) (|\hat u_{1}|^2 + |\hat u_{4}|^2) + c \xi^4 (|\hat u_{2}|^2+ |\hat u_{3}|^2) \\ \le C (1+\xi^2)^2 |\hat u_{5}|^2 + C \xi^2(1+\xi^2) |\hat{u}_{6}|^2 \\ + |a_4^2 + a_5^2 - 1| C \xi^6 |\hat{u}_{5}|^2 + |a_{4}^2-1| C |\xi|^2 (1+\xi^2)^2 |\hat{u}_{2}| |\hat{u}_{6}|. \end{multline} If we assume that $a_{4}^2-1 = 0$, the estimate \eqref{eneq6-2} can be rewritten as \begin{multline}\label{eneq6-2'} \partial_t \big\{ \beta_2 \xi^2 ((1+ \xi^2) E_1 + \beta_1 \xi E_2 + \xi E_3 + F_1) + \xi (1+\xi^2)E_4 \big\} \\ + c \xi^2 (1 + \xi^2) (|\hat u_{1}|^2 + |\hat u_{4}|^2) + c \xi^4 (|\hat u_{2}|^2+ |\hat u_{3}|^2) \\ \le C (1+\xi^2)^3 |\hat u_{5}|^2 + C \xi^2(1+\xi^2) |\hat{u}_{6}|^2.
\end{multline} Then, multiplying \eqref{dissipation6-10} and the above inequality by $(1+\xi^2)^3$ and $\beta_3 \xi^2$, respectively, and combining the resultant equations, we have \begin{multline*} \partial_t \big\{ \beta_3 \xi^2 (\beta_2 \xi^2 ((1+ \xi^2) E_1 + \beta_1 \xi E_2 + \xi E_3 + F_1) + \xi (1+\xi^2)E_4) + \xi (1+\xi^2)^3 E_5 \big\} \\ + \beta_3 c \xi^4 (1 + \xi^2) (|\hat u_{1}|^2 + |\hat u_{4}|^2)+ \beta_3 c \xi^6 (|\hat u_{2}|^2+ |\hat u_{3}|^2)\\ + \Big( \frac{1}{2}a_{6}^2 - \beta_3 C\Big) \xi^2 (1+\xi^2)^3 |\hat u_{5}|^2 \le \beta_3 C \xi^4(1+\xi^2) |\hat{u}_{6}|^2 \\ + \Big( a_{6}^2 \xi^2 + \frac{1}{2} \gamma^2 \Big) (1+\xi^2)^3|\hat{u}_6|^2 + a_{5} a_{6} \xi^2 (1+\xi^2)^3 \, \Re (\hat{u}_{4} \bar{\hat{u}}_6). \end{multline*} Hence we arrive at \begin{equation*} \begin{split} & \partial_t \big\{ \beta_3 \xi^2 (\beta_2 \xi^2 ((1+ \xi^2) E_1 + \beta_1 \xi E_2 + \xi E_3 + F_1) \\ &\qquad\qquad + \xi (1+\xi^2)E_4) + \xi (1+\xi^2)^3 E_5 \big\} \\ & + c \xi^4 (1 + \xi^2) (|\hat u_{1}|^2 + |\hat u_{4}|^2)+ c \xi^6 (|\hat u_{2}|^2+ |\hat u_{3}|^2) + c \xi^2 (1+\xi^2)^3 |\hat u_{5}|^2 \\ &\le C (1+\xi^2)^4 |\hat{u}_{6}|^2 + C \xi^2 (1+\xi^2)^3 |\hat{u}_{4}| |\hat{u}_6| . \end{split} \end{equation*} Moreover, we multiply \eqref{dissipation6-8} and \eqref{dissipation6-9} by $\beta_4 \xi^6$ and $\beta_5 \xi^6$, respectively, and combine the resultant equations with the above inequality.
Then, letting $\beta_4$ and $\beta_5$ be suitably small, this yields \begin{multline}\label{eneq6-Eq} \partial_t E + c \xi^4 (1 + \xi^2) |\hat u_{1}|^2 + c \xi^6 |\hat u_{2}|^2+ c \xi^6 (1 + \xi^2) |\hat u_{3}|^2 \\ +c \xi^4 (1 + \xi^2)^2 |\hat u_{4}|^2 + c \xi^2 (1+\xi^2)^3 |\hat u_{5}|^2 \le C (1+\xi^2)^4 |\hat{u}_{6}|^2, \end{multline} where we have defined \begin{multline}\label{eneq6-E} E = \beta_2 \beta_3 \xi^4 (1+ \xi^2) E_1 + \beta_1\beta_2 \beta_3 \xi^5 E_2 + \xi^4 (\beta_2 \beta_3 + \beta_4 \xi^2)(\xi E_3 + F_1) \\ + \xi^3 ( \beta_3 (1+\xi^2) + \beta_5 \xi^4) E_4 + \xi (1+\xi^2)^3 E_5. \end{multline} Finally, combining the basic energy equality \eqref{eq6} with the above estimate, we obtain \begin{multline}\label{eneq6-3} \partial_t \Big\{ \frac{1}{2}(1+\xi^2)^{4}|\hat{u}|^2 + \beta_{7} E \Big\} + c \xi^{4} (1 + \xi^2) |\hat u_{1}|^2 \\ + c \xi^{6} |\hat u_{2}|^2 + c \sum_{j=3}^{6}\xi^{2(6-j)} (1+\xi^2)^{j-2} |\hat{u}_{j}|^2 \le 0. \end{multline} Thus, integrating the above estimate with respect to $t$, we obtain the following energy estimate \begin{multline}\label{eneq6-4} |\hat{u}(t,\xi)|^2 + \int^t_0 \Big\{ \frac{\xi^{4}}{(1+\xi^2)^{3}} |\hat u_{1}|^2 + \frac{\xi^{6}}{(1+\xi^2)^{4}} |\hat u_{2}|^2 \\ + \sum_{j=3}^{6} \frac{\xi^{2(6-j)}}{(1+\xi^2)^{6-j}} |\hat u_{j}|^2 \Big\} d\tau \le C|\hat{u}(0,\xi)|^2. \end{multline} Here we have used the following inequality \begin{equation}\label{eneq6-5} c |\hat{u}|^2 \le \frac{1}{2} |\hat{u}|^2 + \frac{\beta_{7}}{(1+\xi^2)^{4}} E \le C |\hat{u}|^2 \end{equation} for suitably small $\beta_{7}$. Furthermore the estimate \eqref{eneq6-3} with \eqref{eneq6-5} gives us the following pointwise estimate \begin{equation}\label{pt6-1} |\hat{u}(t,\xi)| \le C e^{- c \lambda(\xi)t} |\hat{u}(0,\xi)|, \qquad \lambda(\xi) = \frac{\xi^{6}}{(1+\xi^2)^{4}}.
\end{equation} On the other hand, if we assume that $a_{4}^2 + a_5^2 -1 = 0$, the estimate \eqref{eneq6-2} is rewritten as \begin{equation}\label{eneq6-2''} \begin{split} & \partial_t \big\{ \beta_2 \xi^2 ((1+ \xi^2) E_1 + \beta_1 \xi E_2 + \xi E_3 + F_1) + \xi (1+\xi^2)E_4 \big\} \\ &\qquad + c \xi^2 (1 + \xi^2) (|\hat u_{1}|^2 + |\hat u_{4}|^2) + c \xi^4 (|\hat u_{2}|^2+ |\hat u_{3}|^2) \\ &\le C (1+\xi^2)^2 |\hat u_{5}|^2 + C (1+\xi^2)^4 |\hat{u}_{6}|^2. \end{split} \end{equation} Then, multiplying \eqref{dissipation6-10} and the above inequality by $(1+\xi^2)^2$ and $\beta_3 \xi^2$, respectively, and combining the resultant equations, we have \begin{multline*} \partial_t \big\{ \beta_3 \xi^2 (\beta_2 \xi^2 ((1+ \xi^2) E_1 + \beta_1 \xi E_2 + \xi E_3 + F_1) + \xi (1+\xi^2)E_4) + \xi (1+\xi^2)^2 E_5 \big\} \\ + \beta_3 c \xi^4 (1 + \xi^2) (|\hat u_{1}|^2 + |\hat u_{4}|^2)+ \beta_3 c \xi^6 (|\hat u_{2}|^2+ |\hat u_{3}|^2) + \Big( \frac{1}{2}a_{6}^2 - \beta_3 C\Big) \xi^2 (1+\xi^2)^2 |\hat u_{5}|^2 \\ \le \beta_3 C (1+\xi^2)^4 |\hat{u}_{6}|^2 + \Big( a_{6}^2 \xi^2 + \frac{1}{2} \gamma^2 \Big) (1+\xi^2)^2|\hat{u}_6|^2 + a_{5} a_{6} \xi^2 (1+\xi^2)^2 \, \Re (\hat{u}_{4} \bar{\hat{u}}_6). \end{multline*} Hence we arrive at \begin{equation*} \begin{split} & \partial_t \big\{ \beta_3 \xi^2 (\beta_2 \xi^2 ((1+ \xi^2) E_1 + \beta_1 \xi E_2 + \xi E_3 + F_1) \\ &\qquad\qquad + \xi (1+\xi^2)E_4) + \xi (1+\xi^2)^2 E_5 \big\} \\ & + c \xi^4 (1 + \xi^2) (|\hat u_{1}|^2 + |\hat u_{4}|^2)+ c \xi^6 (|\hat u_{2}|^2+ |\hat u_{3}|^2) + c \xi^2 (1+\xi^2)^2 |\hat u_{5}|^2 \\ &\le C (1+\xi^2)^4 |\hat{u}_{6}|^2. \end{split} \end{equation*} Moreover, we multiply \eqref{dissipation6-8}, \eqref{dissipation6-9} and \eqref{dissipation6-10} by $\beta_4 \xi^6$, $\beta_5 \xi^6$ and $\beta_6 \xi^6$, respectively, and combine the resultant equations with the above inequality.
Then, letting $\beta_4$, $\beta_5$ and $\beta_6$ be suitably small, this yields \begin{equation*} \begin{split} & \partial_t \big\{ \beta_2 \beta_3 \xi^4 (1+ \xi^2) E_1 + \beta_1\beta_2 \beta_3 \xi^5 E_2 + \xi^4 (\beta_2 \beta_3 + \beta_4 \xi^2)(\xi E_3 + F_1) \\ &\qquad\qquad + \xi^3 ( \beta_3 (1+\xi^2) + \beta_5 \xi^4) E_4 + \xi ((1+\xi^2)^2 + \beta_6 \xi^6) E_5 \big\} \\ & + c \xi^4 (1 + \xi^2) |\hat u_{1}|^2 + c \xi^6 |\hat u_{2}|^2+ c \xi^6 (1 + \xi^2) |\hat u_{3}|^2 \\ &+c \xi^4 (1 + \xi^2)^2 |\hat u_{4}|^2 + c \xi^2 (1+\xi^2)^3 |\hat u_{5}|^2 \le C (1+\xi^2)^4 |\hat{u}_{6}|^2. \end{split} \end{equation*} We note that this estimate is essentially the same as \eqref{eneq6-Eq}. Hence we can obtain the energy estimate \eqref{eneq6-4} and the pointwise estimate \eqref{pt6-1}. Consequently, we arrive at the desired estimates in both cases $a_4^2 - 1 = 0$ and $a_4^2 + a_5^2 - 1 =0$. Moreover, by using a similar argument, we can derive the same estimates in the case $a_4^2 - 1 \neq 0$ and $a_4^2 + a_5^2 - 1 \neq 0$. Thus we complete the proof of Theorem \ref{thm1} with $m=6$. \subsection{Energy method for Model I} Inspired by the concrete computation in Subsection 2.2, we consider the more general case $m\geq 6$. Now, our system \eqref{sys1} with \eqref{Tmat} is described as \begin{equation}\label{equations} \begin{split} &\partial_t \hat u_1 + i \xi \hat u_2 + \hat u_4 = 0, \\ &\partial_t \hat u_2 + i \xi \hat u_1 = 0, \\ &\partial_t \hat u_3 + i \xi a_4 \hat u_4 = 0, \\ &\partial_t \hat u_4 + i \xi (a_4 \hat u_3 + a_5 \hat u_5) - \hat u_1 = 0, \\ &\partial_t \hat u_j + i \xi (a_{j} \hat u_{j-1} + a_{j+1} \hat u_{j+1}) = 0, \qquad j = 5, \cdots, m-1, \\ &\partial_t \hat u_m + i \xi a_{m} \hat u_{m-1} + \gamma \hat u_m = 0. \end{split} \end{equation} We are going to apply the energy method to this system and derive Theorem \ref{thm1}. The proof is organized in the following three steps.
\medskip \noindent {\bf Step 1.}\ \ We first derive the basic energy equality for the system \eqref{sys1} in the Fourier space. Taking the inner product of \eqref{sys1} with $\hat{u}$, we have \begin{equation*} \langle \hat u_t, \hat{u} \rangle + i \xi \langle A_m \hat u, \hat u \rangle + \langle L_m \hat u, \hat u \rangle = 0. \end{equation*} Taking the real part, we get the basic energy equality \begin{equation*} \frac{1}{2}\frac{\partial }{\partial t} |\hat u|^2 + \langle L_m \hat u, \hat u \rangle = 0, \end{equation*} and hence \begin{equation}\label{eq} \frac{1}{2}\partial_t |\hat u|^2 + \gamma |\hat u_m|^2 = 0. \end{equation} Next we create the dissipation terms by the following two steps. \medskip \noindent {\bf Step 2.}\ \ For $\ell = 6, \cdots , m-1$, we multiply the fifth equation of \eqref{equations} with $j=\ell-1$ and $j=\ell$ by $i \xi a_{\ell} \bar{\hat{u}}_{\ell}$ and $ - i \xi a_{\ell} \bar{\hat{u}}_{\ell-1}$, respectively. Then, combining the resultant equations and taking the real part, we have \begin{multline}\label{dissipation-1'} a_{\ell} \xi \partial_t \Re (i \hat u_{\ell-1} \bar{\hat{u}}_\ell) + a_{\ell}^2 \xi^2( |\hat u_{\ell-1}|^2 - |\hat u_{\ell}|^2) \\ - a_{\ell} a_{\ell-1} \xi^2 \, \Re (\hat{u}_{\ell-2} \bar{\hat{u}}_\ell) + a_{\ell} a_{\ell+1} \xi^2 \, \Re (\hat{u}_{\ell-1} \bar{\hat{u}}_{\ell+1}) = 0. \end{multline} Here, by using Young's inequality, we obtain \begin{equation}\label{dissipation-1} \xi \partial_t E_{\ell-1} + \frac{1}{2}a_{\ell}^2 \xi^2 |\hat u_{\ell-1}|^2 \le a_{\ell}^2 \xi^2 |\hat u_{\ell}|^2 + \frac{1}{2}a_{\ell+1}^2 \xi^2 |\hat{u}_{\ell+1}|^2 + a_{\ell} a_{\ell-1} \xi^2 \, \Re (\hat{u}_{\ell-2} \bar{\hat{u}}_\ell) \end{equation} for $\ell = 6, \cdots , m-1$, where we have defined $E_{\ell -1} = a_{\ell} \Re (i \hat u_{\ell-1} \bar{\hat{u}}_\ell)$.
On the other hand, we multiply the fifth equation with $j=m-1$ and the last equation in \eqref{equations} by $i \xi a_{m} \bar{\hat{u}}_{m}$ and $ - i \xi a_{m} \bar{\hat{u}}_{m-1}$, respectively. Then, combining the resultant equations and taking the real part, we obtain \begin{multline}\label{dissipation-2'} a_{m} \xi \partial_t \Re (i \hat u_{m-1} \bar{\hat{u}}_m) + a_{m}^2 \xi^2( |\hat u_{m-1}|^2 - |\hat u_{m}|^2) \\ - a_{m} a_{m-1} \xi^2 \, \Re (\hat{u}_{m-2} \bar{\hat{u}}_m) + \gamma a_{m} \xi \, \Re (i \hat{u}_{m-1} \bar{\hat{u}}_m) = 0. \end{multline} Using Young's inequality, this yields \begin{multline}\label{dissipation-2} \xi \partial_t E_{m-1} + \frac{1}{2}a_{m}^2 \xi^2 |\hat u_{m-1}|^2 \\ \le a_{m}^2 \xi^2 |\hat u_{m}|^2 + \frac{1}{2} \gamma^2 |\hat{u}_m|^2 + a_{m} a_{m-1} \xi^2 \, \Re (\hat{u}_{m-2} \bar{\hat{u}}_m), \end{multline} where we have defined $E_{m -1} = a_{m} \Re (i \hat u_{m-1} \bar{\hat{u}}_m)$. \medskip \noindent {\bf Step 3.}\ \ We note that the equations in \eqref{equations} with $1 \le j \le 5$ are the same as the first five equations in \eqref{equations6}. Thus we can adopt the useful estimates derived in Subsection 2.2. More precisely, we employ \eqref{dissipation6-8}, \eqref{dissipation6-9} and \eqref{eneq6-2} again. For the estimate \eqref{eneq6-2}, if we assume that $a_{4}^2-1 = 0$, we can obtain \eqref{eneq6-2'}.
Then, multiplying \eqref{dissipation-1} with $\ell = 6$ and \eqref{eneq6-2'} by $(1+\xi^2)^3$ and $\beta_3 \xi^2$, respectively, and combining the resultant equations, we have \begin{multline*} \partial _t \big\{ \beta_3 \xi^2 (\beta_2 \xi^2 ((1+ \xi^2) E_1 + \beta_1 \xi E_2 + \xi E_3 + F_1) + \xi (1+\xi^2)E_4) + \xi (1+\xi^2)^3 E_5 \big\} \\ + \beta_3 c \xi^4 (1 + \xi^2) (|\hat u_{1}|^2 + |\hat u_{4}|^2)+ \beta_3 c \xi^6 (|\hat u_{2}|^2+ |\hat u_{3}|^2)\\ + \Big( \frac{1}{2}a_{6}^2 - \beta_3 C\Big) \xi^2 (1+\xi^2)^3 |\hat u_{5}|^2 \le \beta_3 C \xi^4(1+\xi^2) |\hat{u}_{6}|^2 + a_{6}^2 \xi^2 (1+\xi^2)^3|\hat{u}_6|^2\\ + \frac{1}{2} a_{7}^2 \xi^2 (1+\xi^2)^3|\hat{u}_7|^2 + a_{5} a_{6} \xi^2 (1+\xi^2)^3 \, \Re (\hat{u}_{4} \bar{\hat{u}}_6). \end{multline*} Hence we arrive at \begin{equation*} \begin{split} &\partial_t \big\{ \beta_3 \xi^2 (\beta_2 \xi^2 ((1+ \xi^2) E_1 + \beta_1 \xi E_2 + \xi E_3 + F_1) \\ &\qquad\qquad + \xi (1+\xi^2)E_4) + \xi (1+\xi^2)^3 E_5 \big\} \\ & + c \xi^4 (1 + \xi^2) (|\hat u_{1}|^2 + |\hat u_{4}|^2)+ c \xi^6 (|\hat u_{2}|^2+ |\hat u_{3}|^2) + c \xi^2 (1+\xi^2)^3 |\hat u_{5}|^2 \\ &\le C \xi^2 (1+\xi^2)^3 (|\hat{u}_{6}|^2 + |\hat{u}_{7}|^2) + C \xi^2 (1+\xi^2)^3 |\hat{u}_{4}| \, |\hat{u}_6| . \end{split} \end{equation*} Moreover, we multiply \eqref{dissipation6-8} and \eqref{dissipation6-9} by $\beta_4 \xi^6$ and $\beta_5 \xi^6$, respectively, and combine the resultant equations and the above inequality. Then, letting $\beta_4$ and $\beta_5$ suitably small, this yields \begin{multline}\label{eneq-1} \partial _t E + c \xi^4 (1 + \xi^2) |\hat u_{1}|^2 + c \xi^6 |\hat u_{2}|^2+ c \xi^6 (1 + \xi^2) |\hat u_{3}|^2 +c \xi^4 (1 + \xi^2)^2 |\hat u_{4}|^2 \\ + c \xi^2 (1+\xi^2)^3 |\hat u_{5}|^2 \le C (1+\xi^2)^4 |\hat{u}_{6}|^2 + C \xi^2 (1+\xi^2)^3 |\hat{u}_{7}|^2, \end{multline} where $E$ is defined in \eqref{eneq6-E}. On the other hand, if we assume that $a_{4}^2 + a_5^2 -1 = 0$, we employ \eqref{eneq6-2''}.
Then, multiplying \eqref{dissipation-1} with $\ell = 6$ and \eqref{eneq6-2''} by $(1+\xi^2)^2$ and $\beta_3 \xi^2$, respectively, and combining the resultant equations, we have \begin{multline*} \partial_t \big\{ \beta_3 \xi^2 (\beta_2 \xi^2 ((1+ \xi^2) E_1 + \beta_1 \xi E_2 + \xi E_3 + F_1) + \xi (1+\xi^2)E_4) + \xi (1+\xi^2)^2 E_5 \big\} \\ + \beta_3 c \xi^4 (1 + \xi^2) (|\hat u_{1}|^2 + |\hat u_{4}|^2)+ \beta_3 c \xi^6 (|\hat u_{2}|^2+ |\hat u_{3}|^2)\\ + \Big( \frac{1}{2}a_{6}^2 - \beta_3 C\Big) \xi^2 (1+\xi^2)^2 |\hat u_{5}|^2 \le \beta_3 C (1+\xi^2)^4 |\hat{u}_{6}|^2 + a_{6}^2 \xi^2 (1+\xi^2)^2|\hat{u}_6|^2\\ + \frac{1}{2} a_{7}^2 \xi^2 (1+\xi^2)^2|\hat{u}_7|^2 + a_{5} a_{6} \xi^2 (1+\xi^2)^2 \, \Re (\hat{u}_{4} \bar{\hat{u}}_6). \end{multline*} Hence we arrive at \begin{equation*} \begin{split} & \partial_t \big\{ \beta_3 \xi^2 (\beta_2 \xi^2 ((1+ \xi^2) E_1 + \beta_1 \xi E_2 + \xi E_3 + F_1) \\ &\qquad\qquad + \xi (1+\xi^2)E_4) + \xi (1+\xi^2)^2 E_5 \big\} \\ & + c \xi^4 (1 + \xi^2) (|\hat u_{1}|^2 + |\hat u_{4}|^2)+ c \xi^6 (|\hat u_{2}|^2+ |\hat u_{3}|^2) + c \xi^2 (1+\xi^2)^2 |\hat u_{5}|^2 \\ &\le C (1+\xi^2)^4 |\hat{u}_{6}|^2 + C\xi^2 (1+\xi^2)^2|\hat{u}_7|^2. \end{split} \end{equation*} Moreover, we multiply \eqref{dissipation6-8}, \eqref{dissipation6-9} and \eqref{dissipation-1} with $\ell = 6$ by $\beta_4 \xi^6$, $\beta_5 \xi^6$ and $\beta_6 \xi^6$, respectively, and combine the resultant equations and the above inequality.
Then, letting $\beta_4$, $\beta_5$ and $\beta_6$ suitably small, this yields \begin{equation*} \begin{split} & \partial_t \big\{ \beta_2 \beta_3 \xi^4 (1+ \xi^2) E_1 + \beta_1\beta_2 \beta_3 \xi^5 E_2 + \xi^4 (\beta_2 \beta_3 + \beta_4 \xi^2)(\xi E_3 + F_1) \\ &\qquad\qquad + \xi^3 ( \beta_3 (1+\xi^2) + \beta_5 \xi^4) E_4 + \xi ((1+\xi^2)^2 + \beta_6 \xi^6) E_5 \big\} \\ & + c \xi^4 (1 + \xi^2) |\hat u_{1}|^2 + c \xi^6 |\hat u_{2}|^2+ c \xi^6 (1 + \xi^2) |\hat u_{3}|^2 \\ &+c \xi^4 (1 + \xi^2)^2 |\hat u_{4}|^2 + c \xi^2 (1+\xi^2)^3 |\hat u_{5}|^2 \le C (1+\xi^2)^4 |\hat{u}_{6}|^2 + C\xi^2 (1+\xi^2)^3|\hat{u}_7|^2. \end{split} \end{equation*} Consequently, this estimate is essentially the same as \eqref{eneq-1}. Moreover, by using a similar argument, we can derive the same estimate in the case $a_4^2 - 1 \neq 0$ and $a_4^2 + a_5^2 - 1 \neq 0$. \medskip By using the estimate \eqref{eneq-1}, we construct the desired estimate. We multiply \eqref{dissipation-1} with $\ell = 7$ and \eqref{eneq-1} by $(1+\xi^2)^4$ and $\beta_7 \xi^2$, respectively, and combine the resultant equations. Moreover, letting $\beta_7$ suitably small and using Young's inequality, we obtain \begin{equation*} \begin{split} &\partial _t \big\{ \beta_7 \xi^2 E + \xi (1+\xi^2)^4 E_6\big\} + c \xi^{6} (1 + \xi^2) |\hat u_{1}|^2 + c \xi^{8} |\hat u_{2}|^2 + c \xi^{8} (1 + \xi^2) |\hat u_{3}|^2 \\ &+c \xi^{6} (1 + \xi^2)^2 |\hat u_{4}|^2+ c \xi^{4} (1+\xi^2)^3 |\hat u_{5}|^2 + c \xi^{2} (1+\xi^2)^4 |\hat{u}_{6}|^2 \\ &\le C(1+\xi^2)^5 |\hat{u}_{7}|^2 + C \xi^2 (1+\xi^2)^4 |\hat{u}_{8}|^2. \end{split} \end{equation*} Eventually, by the induction argument with respect to $\ell$ in \eqref{dissipation-1}, we can derive \begin{multline}\label{eneq-2} \partial _t \mathcal{E}_{m-2} + c \xi^{2(m - 5)} (1 + \xi^2) |\hat u_{1}|^2 + c \xi^{2(m-4)} |\hat u_{2}|^2 \\ + c \sum_{j=3}^{m-2}\xi^{2(m-j-1)} (1+\xi^2)^{j-2} |\hat{u}_{j}|^2 \\ \le C(1+\xi^2)^{m-3} |\hat{u}_{m-1}|^2 + C \xi^2 (1+\xi^2)^{m-4} |\hat{u}_{m}|^2
\end{multline} for $m \ge 7$. Here we define $\mathcal{E}_{m-2}$ as $\mathcal{E}_5 = E$ and $$ \mathcal{E}_{m-2} = \beta_{m-1} \xi^2 \mathcal{E}_{m -3} + \xi (1+\xi^2)^{m-4} E_{m-2}, \qquad m \ge 8. $$ Now, multiplying \eqref{dissipation-2} and \eqref{eneq-2} by $(1+\xi^2)^{m-3}$ and $\beta_{m} \xi^2$, respectively, and making the appropriate combination, we get \begin{multline}\label{eneq-3} \partial _t \mathcal{E}_{m-1} + c \xi^{2(m - 4)} (1 + \xi^2) |\hat u_{1}|^2 + c \xi^{2(m-3)} |\hat u_{2}|^2 \\ + c \sum_{j=3}^{m-1}\xi^{2(m-j)} (1+\xi^2)^{j-2} |\hat{u}_{j}|^2 \le C(1+\xi^2)^{m-2} |\hat{u}_{m}|^2. \end{multline} Finally, combining \eqref{eq} with \eqref{eneq-3}, this yields \begin{multline}\label{eneq-4} \partial _t \Big\{ \frac{1}{2}(1+\xi^2)^{m-2}|\hat{u}|^2 + \beta_{m+1} \mathcal{E}_{m-1}\Big\} + c \xi^{2(m - 4)} (1 + \xi^2) |\hat u_{1}|^2 \\ + c \xi^{2(m-3)} |\hat u_{2}|^2 + c \sum_{j=3}^{m}\xi^{2(m-j)} (1+\xi^2)^{j-2} |\hat{u}_{j}|^2 \le 0. \end{multline} Thus, integrating the above estimate with respect to $t$, we obtain the following energy estimate \begin{multline}\label{energy-eq1} |\hat{u}(t,\xi)|^2 + \int^t_0 \Big\{ \frac{\xi^{2(m-4)}}{(1+\xi^2)^{m-3}} |\hat u_{1}|^2 + \frac{\xi^{2(m-3)}}{(1+\xi^2)^{m-2}} |\hat u_{2}|^2 \\ + \sum_{j=3}^{m} \frac{\xi^{2(m-j)}}{(1+\xi^2)^{m-j}} |\hat u_{j}|^2 \Big\} d\tau \le C|\hat{u}(0,\xi)|^2 \end{multline} for $m \ge 7$. Here we have used the following inequality \begin{equation}\notag c |\hat{u}|^2 \le \frac{1}{2} |\hat{u}|^2 + \frac{\beta_{m+1}}{(1+\xi^2)^{m-2}} \mathcal{E}_{m-1} \le C |\hat{u}|^2 \end{equation} for suitably small $\beta_{m+1}$. Furthermore the estimate \eqref{eneq-3} with \eqref{eneq-4} gives us the following pointwise estimate \begin{equation}\notag |\hat{u}(t,\xi)| \le C e^{- c \lambda(\xi)t} |\hat{u}(0,\xi)|, \qquad \lambda(\xi) = \frac{\xi^{2(m-3)}}{(1+\xi^2)^{m-2}} \end{equation} for $m \ge 7$.
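As an independent numerical illustration (not part of the proof), the regularity-loss decay rate $\lambda(\xi)=\xi^{2(m-3)}/(1+\xi^2)^{m-2}$ can be compared with the spectrum of $i\xi A_m + L_m$ for a sample case; the coefficients below are arbitrary assumptions.

```python
import numpy as np

# Model I with m = 6: lambda(xi) = xi^6 / (1 + xi^2)^4.
m, gamma = 6, 1.0
a4, a5, a6 = 0.8, 0.9, 1.1          # sample nonzero coefficients (assumed)
A = np.zeros((m, m)); L = np.zeros((m, m))
A[0, 1] = A[1, 0] = 1.0
A[2, 3] = A[3, 2] = a4
A[3, 4] = A[4, 3] = a5
A[4, 5] = A[5, 4] = a6
L[0, 3] = 1.0; L[3, 0] = -1.0; L[5, 5] = gamma

# Solutions behave like exp(-t * eig(i xi A + L)); the pointwise estimate predicts
# Re(eigenvalues) > 0 for xi != 0, degenerating like lambda(xi) as xi -> 0, infinity.
for xi in np.geomspace(0.05, 20.0, 40):
    re_eigs = np.linalg.eigvals(1j * xi * A + L).real
    lam = xi**(2*(m-3)) / (1 + xi**2)**(m-2)
    assert re_eigs.min() > 0.0              # strict dissipativity off xi = 0
    assert re_eigs.min() >= 1e-6 * lam      # loose empirical lower bound on this grid
```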
Therefore, together with the proof in Subsection 2.2, \eqref{point1} is proved, and we then complete the proof of Theorem 2.1. \subsection{Construction of the matrices $K$ and $S$} In this subsection, inspired by the energy method employed in Subsections 2.2 and 2.3, we shall derive the matrices $K$ and $S$. Based on the energy method of Step 2 in Subsection 2.2, we introduce the following $m \times m$ matrices: \begin{equation*} S_1 = \left( \begin{array}{cccc:c} {0} & {0} & {0} & {1} & \\ {0} & {0} & {0} & {0} & \\ {0} & {0} & {0} & {0} & \mbox{\smash{\huge\textit{O}}} \\ {1} & {0} & {0} & {0} & \\ \hdashline & & & & \\ & \mbox{\smash{\huge\textit{O}}} & & & \mbox{\smash{\huge\textit{O}}} \\ \end{array} \right), \quad S_2 = \left( \begin{array}{cccc:c} {0} & {0} & {0} & {0} & \\ {0} & {0} & {1} & {0} & \\ {0} & {1} & {0} & {0} & \mbox{\smash{\huge\textit{O}}} \\ {0} & {0} & {0} & {0} & \\ \hdashline & & & & \\ & \mbox{\smash{\huge\textit{O}}} & & & \mbox{\smash{\huge\textit{O}}} \\ \end{array} \right), \quad S_3 = \left( \begin{array}{cccc:cc} {0} & {0} & {0} & {0} & {0} & \\ {0} & {0} & {0} & {0} & {1} & \\ {0} & {0} & {0} & {0} & {0} & \mbox{\smash{\huge\textit{O}}} \\ {0} & {0} & {0} & {0} & {0} & \\ \hdashline {0} & {1} & {0} & {0} & {0} & \\ & & & & & \\ & \mbox{\smash{\huge\textit{O}}} & & & & \mbox{\smash{\huge\textit{O}}} \\ \end{array} \right), \end{equation*} and hence \begin{equation}\notag \begin{split} \tilde{S} &= - a_5 \big\{ a_5 (S_1 + a_4 S_2) - (a_4^2-1) S_3 \big\} \\[2mm] &= - a_5 \left( \begin{array}{cccc:cc} {0} & {0} & {0} & {a_5} & {0} & \\ {0} & {0} & {a_4 a_5} & {0} & {1-a_4^2} & \\ {0} & {a_4 a_5} & {0} & {0} & {0} & \mbox{\smash{\huge\textit{O}}} \\ {a_5} & {0} & {0} & {0} & {0} & \\ \hdashline {0} & {1-a_4^2} & {0} & {0} & {0} & \\ & & & & & \\ & \mbox{\smash{\huge\textit{O}}} & & & & \mbox{\smash{\huge\textit{O}}} \\ \end{array} \right). \end{split} \end{equation} Then, we multiply \eqref{Fsys1} by $\tilde{S}$ and take the inner product with $\hat u$. Furthermore, taking the real part of the resultant equation, we obtain \begin{equation}\label{FsysS-1} \frac{1}{2} \partial_t \langle \tilde{S} \hat u, \hat u \rangle + \xi \langle i [\tilde{S}A_m]^{\rm asy} \hat u, \hat u \rangle + \langle [\tilde{S}L_m]^{\rm sy} \hat u, \hat u \rangle = 0, \end{equation} where \begin{equation*} \begin{split} \tilde{S}A_m &= - a_5 \left( \begin{array}{cccc:ccc} {0} & {0} & {a_4 a_5} & {0} & {a_5^2} & {0} & \\ {0} & {0} & {0} & {a_5} & {0} & {a_6(1-a_4^2)} & \\ {a_4a_5} & {0} & {0} & {0} & {0} & {0} & \mbox{\smash{\huge\textit{O}}} \\ {0} & {a_5} & {0} & {0} & {0} & {0} & \\ \hdashline {1-a_4^2} & {0} & {0} & {0} & {0} & {0} & \\ {0} & {0} & {0} & {0} & {0} & {0} & \\ & & & & & & \\ & \mbox{\smash{\huge\textit{O}}} & & & & & \mbox{\smash{\huge\textit{O}}} \\ \end{array} \right), \\[2mm] \tilde{S} L_m &= a_5^2 \left( \begin{array}{cccc:c} {1} & {0} & {0} & {0} & \\ {0} & {0} & {0} & {0} & \\ {0} & {0} & {0} & {0} & \mbox{\smash{\huge\textit{O}}} \\ {0} & {0} & {0} & {-1} & \\ \hdashline & & & & \\ & \mbox{\smash{\huge\textit{O}}} & & & \mbox{\smash{\huge\textit{O}}} \\ \end{array} \right). \end{split} \end{equation*} The equality \eqref{FsysS-1} is equivalent to \eqref{dissipation6-6}. We note that the symmetric matrix $S_1 + a_4 S_2$ is the key matrix for the $4 \times 4$ Timoshenko system (see \cite{IHK08,IK08}).
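As a sanity check (illustrative only; the dimension and coefficient values below are arbitrary assumptions), the combination defining $\tilde S$ can be verified numerically against its displayed entrywise form, and its symmetry confirmed.

```python
import numpy as np

m = 8
a4, a5 = 0.8, 0.9                    # sample coefficients (assumed)
S1 = np.zeros((m, m)); S1[0, 3] = S1[3, 0] = 1.0
S2 = np.zeros((m, m)); S2[1, 2] = S2[2, 1] = 1.0
S3 = np.zeros((m, m)); S3[1, 4] = S3[4, 1] = 1.0

# S~ = -a5 { a5 (S1 + a4 S2) - (a4^2 - 1) S3 }
St = -a5 * (a5 * (S1 + a4 * S2) - (a4**2 - 1) * S3)

# Displayed entrywise form: -a5 times the matrix with entries a5, a4*a5, 1 - a4^2
expected = np.zeros((m, m))
expected[0, 3] = expected[3, 0] = a5
expected[1, 2] = expected[2, 1] = a4 * a5
expected[1, 4] = expected[4, 1] = 1 - a4**2
expected *= -a5

assert np.allclose(St, St.T)         # S~ is symmetric
assert np.allclose(St, expected)     # matches the displayed matrix
```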
The symmetric matrix $\tilde{S}$ is one of the key matrices for the system \eqref{Fsys1}. On the other hand, we introduce the following $m \times m$ matrix: \begin{equation*} K_1 = \left( \begin{array}{cccc:c} {0} & {-1} & {0} & {0} & \\ {1} & {0} & {0} & {0} & \\ {0} & {0} & {0} & {0} & \mbox{\smash{\huge\textit{O}}} \\ {0} & {0} & {0} & {0} & \\ \hdashline & & & & \\ & \mbox{\smash{\huge\textit{O}}} & & & \mbox{\smash{\huge\textit{O}}} \\ \end{array} \right). \end{equation*} Then, we multiply \eqref{Fsys1} by $-i\xi K_1$ and take the inner product with $\hat u$. Moreover, taking the real part of the resultant equation, we have \begin{equation}\label{FsysK1-1} -\frac{1}{2} \xi \partial_t \langle i K_1 \hat u, \hat u \rangle + \xi^2 \langle [K_1A_m]^{\rm sy} \hat u, \hat u \rangle - \xi \langle i [K_1L_m]^{\rm asy} \hat u, \hat u \rangle = 0, \end{equation} where \begin{equation*} K_1A_m = \left( \begin{array}{cccc:c} {-1} & {0} & {0} & {0} & \\ {0} & {1} & {0} & {0} & \\ {0} & {0} & {0} & {0} & \mbox{\smash{\huge\textit{O}}} \\ {0} & {0} & {0} & {0} & \\ \hdashline & & & & \\ & \mbox{\smash{\huge\textit{O}}} & & & \mbox{\smash{\huge\textit{O}}} \\ \end{array} \right), \qquad K_1 L_m = \left( \begin{array}{cccc:c} {0} & {0} & {0} & {0} & \\ {0} & {0} & {0} & {1} & \\ {0} & {0} & {0} & {0} & \mbox{\smash{\huge\textit{O}}} \\ {0} & {0} & {0} & {0} & \\ \hdashline & & & & \\ & \mbox{\smash{\huge\textit{O}}} & & & \mbox{\smash{\huge\textit{O}}} \\ \end{array} \right). \end{equation*} The equality \eqref{FsysK1-1} is equivalent to \eqref{dissipation6-7'}.
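The derivations here and below repeatedly use, for a real matrix $M$ and complex vector $\hat u$, the identities $\Re\langle M\hat u,\hat u\rangle = \langle [M]^{\rm sy}\hat u,\hat u\rangle$ and $\Re\langle iM\hat u,\hat u\rangle = \langle i[M]^{\rm asy}\hat u,\hat u\rangle$. A minimal numerical check (with random placeholder data):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6
M = rng.standard_normal((n, n))                 # arbitrary real matrix
u = rng.standard_normal(n) + 1j * rng.standard_normal(n)
Msy  = (M + M.T) / 2                            # [M]^sy
Masy = (M - M.T) / 2                            # [M]^asy

lhs1 = np.real(np.vdot(u, M @ u))               # Re <M u, u>
rhs1 = np.real(np.vdot(u, Msy @ u))             # <[M]^sy u, u> (already real)
assert np.isclose(lhs1, rhs1)

lhs2 = np.real(np.vdot(u, 1j * M @ u))          # Re <iM u, u>
rhs2 = np.real(np.vdot(u, 1j * Masy @ u))       # <i[M]^asy u, u> (already real)
assert np.isclose(lhs2, rhs2)
```

The symmetric part of a real matrix contributes a real quadratic form and the skew part a purely imaginary one, which is exactly what taking real parts in \eqref{FsysS-1} and \eqref{FsysK1-1} exploits.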
We next introduce the following $m \times m$ matrices: \begin{equation*} K_4 = a_4 \left( \begin{array}{cccc:c} {0} & {0} & {0} & {0} & \\ {0} & {0} & {0} & {0} & \\ {0} & {0} & {0} & {1} & \mbox{\smash{\huge\textit{O}}} \\ {0} & {0} & {-1} & {0} & \\ \hdashline & & & & \\ & \mbox{\smash{\huge\textit{O}}} & & & \mbox{\smash{\huge\textit{O}}} \\ \end{array} \right), \qquad S_4 = - a_4 \left( \begin{array}{cccc:c} {0} & {0} & {0} & {0} & \\ {0} & {0} & {1} & {0} & \\ {0} & {1} & {0} & {0} & \mbox{\smash{\huge\textit{O}}} \\ {0} & {0} & {0} & {0} & \\ \hdashline & & & & \\ & \mbox{\smash{\huge\textit{O}}} & & & \mbox{\smash{\huge\textit{O}}} \\ \end{array} \right). \end{equation*} Then, we multiply \eqref{Fsys1} by $-i\xi K_4$ and $S_4$, and take the inner product with $\hat u$, respectively. Moreover, taking the real part of the resultant equations and combining these, we have \begin{multline}\label{FsysKS-1} \frac{1}{2} \partial_t \langle (S_4 - i \xi K_4) \hat u, \hat u \rangle + \xi^2 \langle [K_4A_m]^{\rm sy} \hat u, \hat u \rangle+ \langle [S_4L_m]^{\rm sy} \hat u, \hat u \rangle \\ + \xi \langle i [S_4A_m - K_4L_m]^{\rm asy} \hat u, \hat u \rangle = 0, \end{multline} where $S_4 L_m =O$ and \begin{equation*} \begin{split} K_4A_m &= \left( \begin{array}{cccc:cc} {0} & {0} & {0} & {0} & {0} & \\ {0} & {0} & {0} & {0} & {0} & \\ {0} & {0} & {a_4^2} & {0} & {a_4a_5} & \mbox{\smash{\huge\textit{O}}} \\ {0} & {0} & {0} & {-a_4^2} & {0} & \\ \hdashline {0} & {0} & {0} & {0} & {0} & \\ & & & & & \\ & \mbox{\smash{\huge\textit{O}}} & & & & \mbox{\smash{\huge\textit{O}}} \\ \end{array} \right), \quad S_4A_m - K_4 L_m = \left( \begin{array}{cccc:c} {0} & {0} & {0} & {0} & \\ {0} & {0} & {0} & {-a_4^2} & \\ {0} & {0} & {0} & {0} & \mbox{\smash{\huge\textit{O}}} \\ {0} & {0} & {0} & {0} & \\ \hdashline & & & & \\ & \mbox{\smash{\huge\textit{O}}} & & & \mbox{\smash{\huge\textit{O}}} \\ \end{array} \right). \end{split} \end{equation*} The equality \eqref{FsysKS-1} is equivalent to \eqref{dissipation6-8'}. Similarly we introduce the following $m \times m$ matrix: \begin{equation*} K_5 = a_5 \left( \begin{array}{cccc:cc} {0} & {0} & {0} & {0} & {0} & \\ {0} & {0} & {0} & {0} & {0} & \\ {0} & {0} & {0} & {0} & {0} & \mbox{\smash{\huge\textit{O}}} \\ {0} & {0} & {0} & {0} & {1} & \\ \hdashline {0} & {0} & {0} & {-1} & {0} & \\ & & & & & \\ & \mbox{\smash{\huge\textit{O}}} & & & & \mbox{\smash{\huge\textit{O}}} \\ \end{array} \right). \end{equation*} Then, we multiply \eqref{Fsys1} by $-i \xi K_5$ and take the inner product with $\hat u$.
Furthermore, taking the real part of the resultant equation, we obtain \begin{equation}\label{FsysK3-1} -\frac{1}{2} \xi \partial_t \langle i K_5 \hat u, \hat u \rangle + \xi^2 \langle [K_5A_m]^{\rm sy} \hat u, \hat u \rangle - \xi \langle i [K_5L_m]^{\rm asy} \hat u, \hat u \rangle = 0, \end{equation} where \begin{equation*} K_5A_m = \left( \begin{array}{cccc:ccc} {0} & {0} & {0} & {0} & {0} & {0} & \\ {0} & {0} & {0} & {0} & {0} & {0} & \\ {0} & {0} & {0} & {0} & {0} & {0} & \mbox{\smash{\huge\textit{O}}} \\ {0} & {0} & {0} & {a_5^2} & {0} & {a_5a_6} & \\ \hdashline {0} & {0} & {-a_4a_5} & {0} & {-a_5^2} & {0} & \\ {0} & {0} & {0} & {0} & {0} & {0} & \\ & & & & & & \\ & \mbox{\smash{\huge\textit{O}}} & & & & & \mbox{\smash{\huge\textit{O}}} \\ \end{array} \right), \quad K_5L_m = \left( \begin{array}{cccc:cc} {0} & {0} & {0} & {0} & {0} & \\ {0} & {0} & {0} & {0} & {0} & \\ {0} & {0} & {0} & {0} & {0} & \mbox{\smash{\huge\textit{O}}} \\ {0} & {0} & {0} & {0} & {0} & \\ \hdashline {a_5} & {0} & {0} & {0} & {0} & \\ & & & & & \\ & \mbox{\smash{\huge\textit{O}}} & & & & \mbox{\smash{\huge\textit{O}}} \\ \end{array} \right). \end{equation*} The equality \eqref{FsysK3-1} is equivalent to \eqref{dissipation6-9'}.
Based on the energy method of Step 2 in Subsection 2.3, we introduce the following $m \times m$ matrices: $$ K_{\ell} = a_\ell \hskip-3mm \bordermatrix*[{( )}]{ & & &0 & 0 & & & & \cr & \mbox{\smash{\huge\textit{O}}} & &\vdots &\vdots & & \mbox{\smash{\huge\textit{O}}} & & \cr & & &0 & 0 & & & & \cr 0 & \cdots &0 &0 & 1 & 0 &\cdots &0\ \ & \ell-1 \cr 0 & \cdots &0 &-1 &0 &0 & \cdots &0\ \ & \ell \cr & & &0 & 0 & & & & \cr & \mbox{\smash{\huge\textit{O}}}& &\vdots & \vdots & & \mbox{\smash{\huge\textit{O}}} & & \cr & & &0 & 0 & & & & \cr & & &\ell -1 & \ell & & & & } $$ \vskip4mm \noindent for $\ell = 6, \cdots , m-1$. Then, we multiply \eqref{Fsys1} by $-i \xi K_{\ell}$ and take the inner product with $\hat u$. Furthermore, taking the real part of the resultant equation, we obtain \begin{equation}\label{FsysKL-1} -\frac{1}{2} \xi \partial_t \langle i K_{\ell} \hat u, \hat u \rangle + \xi^2 \langle [K_{\ell} A_m]^{\rm sy} \hat u, \hat u \rangle =0 \end{equation} for $\ell = 6, \cdots , m-1$, where $$ \hspace{-12mm} K_{\ell} A_m = \hskip-3mm \bordermatrix*[{( )}]{ & & &0 &0 & 0 &0 & & & & \cr & \mbox{\smash{\huge\textit{O}}} & &\vdots &\vdots &\vdots &\vdots & & \mbox{\smash{\huge\textit{O}}} & & \cr & & &0 &0 & 0 &0 & & & & \cr 0 & \cdots &0 &0 &a_\ell^2 & 0 & a_\ell a_{\ell+1} &0&\cdots &0\ \ & \ell-1 \cr 0 & \cdots &0 & -a_{\ell-1} a_{\ell} &0 &-a_\ell^2 &0 &0& \cdots &0 \ \ & \ell \cr & & &0 &0 & 0 &0 & & & & \cr & \mbox{\smash{\huge\textit{O}}}& &\vdots &\vdots & \vdots & \vdots & & \mbox{\smash{\huge\textit{O}}} & & \cr & & &0 &0 & 0 &0 & & & & \cr & & &\ell-2 &\ell -1 & \ell & \ell+1 & && & } $$ \vskip4mm \noindent Moreover we have \begin{equation}\label{FsysKM-1} -\frac{1}{2} \xi \partial_t \langle i K_{m} \hat u, \hat u \rangle + \xi^2 \langle [K_{m} A_m]^{\rm sy} \hat u, \hat u \rangle - \xi \langle i [K_{m} L_m]^{\rm asy} \hat u, \hat u \rangle = 0, \end{equation} where \begin{equation*} K_{m}A_m = \left( \begin{array}{cccccc} & & & {0} & {0} & {0} \\ & \mbox{\smash{\huge\textit{O}}} & & {\vdots} & {\vdots} & {\vdots} \\ & & & {0} & {0} & {0} \\ {0} & {\cdots} & {0} & {0} & {a_m^2} & {0} \\ {0} & {\cdots} & {0} & {-a_{m-1}a_m} & {0} & {-a_m^2} \\ \end{array} \right), \quad K_{m}L_m = \left( \begin{array}{cccc} & & & {0} \\ & \mbox{\smash{\huge\textit{O}}} & & {\vdots} \\ & & & {0} \\ {0} & {\cdots} & {0} & {a_m \gamma} \\ {0} & {\cdots} & {0} & {0} \\ \end{array} \right). \end{equation*} The equalities \eqref{FsysKL-1} and \eqref{FsysKM-1} are equivalent to \eqref{dissipation-1'} and \eqref{dissipation-2'}, respectively. For the rest of this subsection, we construct the desired matrices. According to the strategy of Step 3 in Subsection 2.2, we first combine \eqref{FsysS-1}, \eqref{FsysKS-1} and \eqref{FsysK1-1}. More precisely, multiplying \eqref{FsysS-1}, \eqref{FsysKS-1} and \eqref{FsysK1-1} by $(1+\xi^2)$, $(1+\xi^2)$ and $\delta_1$, respectively, and combining the resultant equations, we obtain \begin{equation*} \begin{split} &\frac{1}{2} \partial_t \big\langle \big\{(1+\xi^2)\mathcal{S} - i \xi( \delta_1 K_1 + (1+\xi^2) K_4)\big\} \hat u, \hat u \big\rangle \\ & +(1+\xi^2) \langle [\mathcal{S}L_m]^{\rm sy} \hat u, \hat u \rangle + \xi^2 \langle [(\delta_1 K_1 + (1+\xi^2) K_4)A_m]^{\rm sy} \hat u, \hat u \rangle \\ & + \xi (1+\xi^2) \langle i [\mathcal{S}A_m]^{\rm asy} \hat u, \hat u \rangle - \xi \langle i [(\delta_1 K_1 + (1+\xi^2) K_4)L_m]^{\rm asy} \hat u, \hat u \rangle= 0. \end{split} \end{equation*} Here we define $\mathcal{S} = \tilde{S} + S_4$.
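The recursive combinations formed in the rest of this subsection stay within the symmetric/skew-symmetric classes needed for the final symmetrizers. A small numerical sketch (the dimension, $\xi$, the $\delta_j$ and the coefficients are arbitrary illustrative assumptions):

```python
import numpy as np

m, xi = 8, 1.3
a = {j: 0.7 + 0.1 * j for j in range(4, m + 1)}   # sample nonzero coefficients
delta = {j: 0.1 for j in range(1, m)}             # sample small weights

def skew(i, j, m, coef):
    # coef * (e_i e_j^T - e_j e_i^T), indices 0-based
    K = np.zeros((m, m)); K[i, j] = coef; K[j, i] = -coef
    return K

K1 = skew(1, 0, m, 1.0)                           # entries (2,1)=1, (1,2)=-1
K4 = skew(2, 3, m, a[4])
K5 = skew(3, 4, m, a[5])
Kl = {l: skew(l - 2, l - 1, m, a[l]) for l in range(6, m + 1)}   # K_6, ..., K_m

# S = S~ + S_4, with S~ in its displayed entrywise form
Stilde = np.zeros((m, m))
Stilde[0, 3] = Stilde[3, 0] = a[5]
Stilde[1, 2] = Stilde[2, 1] = a[4] * a[5]
Stilde[1, 4] = Stilde[4, 1] = 1 - a[4]**2
Stilde *= -a[5]
S4 = np.zeros((m, m)); S4[1, 2] = S4[2, 1] = -a[4]
S = Stilde + S4
assert np.allclose(S, S.T)                        # the S-part is symmetric

calK = delta[1] * K1 + (1 + xi**2) * K4           # calligraphic K_4
for l in range(5, m + 1):
    K_next = K5 if l == 5 else Kl[l]
    calK = delta[l - 3] * xi**2 * calK + (1 + xi**2)**(l - 3) * K_next
    assert np.allclose(calK, -calK.T)             # skew-symmetry is preserved
```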
We next multiply \eqref{FsysK3-1} and the above equation by $(1+\xi^2)^2$ and $\delta_2 \xi^2$, respectively, and combining the resultant equations, we obtain \begin{equation*} \begin{split} &\frac{1}{2} \partial_t \big\langle \big\{\delta_2 \xi^2 ((1+\xi^2)\mathcal{S} - i \xi( \delta_1 K_1 + (1+\xi^2) K_4)) - i \xi (1+\xi^2)^2 K_5 \big\} \hat u, \hat u \big\rangle \\ & + \delta_2 \xi^2 (1+\xi^2) \langle [\mathcal{S}L_m]^{\rm sy} \hat u, \hat u \rangle + \delta_2 \xi^3 (1+\xi^2) \langle i [\mathcal{S}A_m]^{\rm asy} \hat u, \hat u \rangle \\ &+ \xi^2 \langle [(\delta_2 \xi^2 (\delta_1 K_1 + (1+\xi^2) K_4) + (1+\xi^2)^2K_5)A_m]^{\rm sy} \hat u, \hat u \rangle \\ &- \xi \langle i[(\delta_2 \xi^2 (\delta_1 K_1 + (1+\xi^2) K_4) + (1+\xi^2)^2K_5)L_m]^{\rm asy} \hat u, \hat u \rangle= 0. \end{split} \end{equation*} Moreover, multiplying \eqref{FsysKL-1} with $\ell = 6$ and the above equation by $(1+\xi^2)^3$ and $\delta_3 \xi^2$, respectively, and combining the resultant equations, we get \begin{equation*} \begin{split} &\frac{1}{2} \partial_t \big\langle \big\{ \delta_3 \xi^2 (\delta_2 \xi^2 ((1+\xi^2)\mathcal{S} - i \xi( \delta_1 K_1 + (1+\xi^2) K_4)) \\ &\hskip30mm -i \xi (1+\xi^2)^2 K_5 ) - i \xi (1+\xi^2)^3K_6\big\} \hat u, \hat u \big\rangle \\ & + \delta_2 \delta_3 \xi^4 (1+\xi^2) \langle [\mathcal{S}L_m]^{\rm sy} \hat u, \hat u \rangle + \delta_2 \delta_3 \xi^5 (1+\xi^2) \langle i [\mathcal{S}A_m]^{\rm asy} \hat u, \hat u \rangle \\ &+ \xi^2 \langle [(\delta_3 \xi^2 (\delta_2 \xi^2 (\delta_1 K_1 + (1+\xi^2) K_4) + (1+\xi^2)^2K_5) + (1+\xi^2)^3 K_6)A_m]^{\rm sy} \hat u, \hat u \rangle \\ &- \delta_3 \xi^3 \langle i[(\delta_2 \xi^2 (\delta_1 K_1 + (1+\xi^2) K_4) + (1+\xi^2)^2K_5)L_m]^{\rm asy} \hat u, \hat u \rangle= 0.
\end{split} \end{equation*} Consequently, by the induction argument with respect to $\ell$ in \eqref{FsysKL-1}, we have \begin{multline}\label{FsysSK1-1} \frac{1}{2} \partial_t \Big\langle \Big\{ \prod_{j=2}^{\ell-3} \delta_j \xi^{2(\ell-4)}(1+\xi^2)\mathcal{S} - i \xi \mathcal{K}_{\ell} \Big\} \hat u, \hat u \Big\rangle \\ + \prod_{j=2}^{\ell-3} \delta_j \xi^{2(\ell-4)} (1+\xi^2) \langle [\mathcal{S}L_m]^{\rm sy} \hat u, \hat u \rangle + \prod_{j=2}^{\ell-3} \delta_j \xi^{2(\ell-4) + 1} (1+\xi^2) \langle i [\mathcal{S}A_m]^{\rm asy} \hat u, \hat u \rangle \\ + \xi^2 \langle [\mathcal{K}_{\ell}A_m]^{\rm sy} \hat u, \hat u \rangle - \prod_{j=3}^{\ell-3} \delta_j \xi^{2(\ell-5) + 1} \langle i[\mathcal{K}_5 L_m]^{\rm asy} \hat u, \hat u \rangle= 0 \end{multline} for $5 \le \ell \le m-1$, where the last term on the left-hand side is replaced by $\xi \langle i[\mathcal{K}_5 L_m]^{\rm asy} \hat u, \hat u \rangle$ for $\ell = 5$. Here we define $\mathcal{K}_{\ell}$ as $\mathcal{K}_4 = \delta_1 K_1 + (1+\xi^2) K_4$ and \begin{equation*} \mathcal{K}_{\ell} = \delta_{\ell-3} \xi^2 \mathcal{K}_{\ell -1} + (1+\xi^2)^{\ell-3} K_{\ell} \end{equation*} for $\ell \ge 5$. Therefore, we make the combination of \eqref{FsysKM-1} and \eqref{FsysSK1-1} with $\ell = m-1$. Then we can obtain \begin{multline}\label{FsysSK2-1} \frac{1}{2} \partial_t \Big\langle \Big\{ \prod_{j=2}^{m-4} \delta_j \xi^{2(m-4)}(1+\xi^2)\mathcal{S} - i \xi \mathcal{K}_{m} \Big\} \hat u, \hat u \Big\rangle + \xi^2 \langle [\mathcal{K}_{m}A_m]^{\rm sy} \hat u, \hat u \rangle \\ + \prod_{j=2}^{m-4} \delta_j \xi^{2(m-4)} (1+\xi^2) \langle [\mathcal{S}L_m]^{\rm sy} \hat u, \hat u \rangle + \prod_{j=2}^{m-4} \delta_j \xi^{2(m-4) + 1} (1+\xi^2) \langle i [\mathcal{S}A_m]^{\rm asy} \hat u, \hat u \rangle \\ - \prod_{j=3}^{m-4} \delta_j \xi^{2(m-5) + 1} \langle i[\mathcal{K}_5 L_m]^{\rm asy} \hat u, \hat u \rangle - \xi(1+\xi^2)^{m-3} \langle i [K_{m} L_m]^{\rm asy} \hat u, \hat u \rangle= 0.
\end{multline} Finally, multiplying \eqref{FsysSK2-1} by $\delta_{m-3}/(1+\xi^2)^{m-2}$, and combining \eqref{eq} and the resultant equation, we can obtain \begin{multline}\label{FsysSKfinal-1} \frac{1}{2} \partial_t \Big\langle \Big[ I + \frac{\delta_{m-3}}{(1+\xi^2)^{m-2}} \Big\{ \prod_{j=2}^{m-4} \delta_j \xi^{2(m-4)}(1+\xi^2)\mathcal{S} - i \xi \mathcal{K}_{m} \Big\}\Big] \hat u, \hat u \Big\rangle \\ + \langle L_m \hat u, \hat u \rangle + \prod_{j=2}^{m-3} \delta_j \frac{\xi^{2(m-4)}}{(1+\xi^2)^{m-3}} \langle [\mathcal{S}L_m]^{\rm sy} \hat u, \hat u \rangle \\ + \delta_{m-3} \frac{\xi^2}{(1+\xi^2)^{m-2}} \langle [\mathcal{K}_{m}A_m]^{\rm sy} \hat u, \hat u \rangle + \prod_{j=2}^{m-3} \delta_j \frac{\xi^{2(m-4) + 1}}{(1+\xi^2)^{m-3}} \langle i [\mathcal{S}A_m]^{\rm asy} \hat u, \hat u \rangle \\ - \prod_{j=3}^{m-3} \delta_j \frac{\xi^{2(m-5) + 1}}{(1+\xi^2)^{m-2}} \langle i[\mathcal{K}_5 L_m]^{\rm asy} \hat u, \hat u \rangle - \delta_{m-3} \frac{\xi}{1+\xi^2} \langle i [K_{m} L_m]^{\rm asy} \hat u, \hat u \rangle= 0, \end{multline} where $I$ denotes the identity matrix. Letting $\delta_1, \cdots, \delta_{m-3}$ be suitably small, \eqref{FsysSKfinal-1} then yields the energy estimate \eqref{energy-eq1}.
More precisely, noting that \begin{equation*} \begin{split} \mathcal{K}_{m} & =\prod_{j=2}^{m-3} \delta_{j} \xi^{2(m-4)}( \delta_1 K_1 + (1+\xi^2) K_4)+ (1+\xi^2)^{m-3} K_{m} \\ &\qquad + \sum_{k=3}^{m-3} \prod_{j=k}^{m-3} \delta_{j} \xi^{2(m-k-2)}(1+\xi^2)^{k-1} K_{k+2} \\ \end{split} \end{equation*} for $m \ge 6$, we can estimate the dissipation terms as \begin{multline}\label{1estSK} \langle L_m \hat u, \hat u \rangle + \prod_{j=2}^{m-3} \delta_j \frac{\xi^{2(m-4)}}{(1+\xi^2)^{m-3}} \langle [\mathcal{S}L_m]^{\rm sy} \hat u, \hat u \rangle \\ + \delta_{m-3} \frac{\xi^2}{(1+\xi^2)^{m-2}} \langle [\mathcal{K}_{m}A_m]^{\rm sy} \hat u, \hat u \rangle \\ \ge c \Big\{ \frac{\xi^{2(m-4)}}{(1+\xi^2)^{m-3}} |\hat u_{1}|^2 + \frac{\xi^{2(m-3)}}{(1+\xi^2)^{m-2}} |\hat u_{2}|^2 + \sum_{j=3}^{m} \frac{\xi^{2(m-j)}}{(1+\xi^2)^{m-j}} |\hat u_{j}|^2 \Big\} \end{multline} for suitably small $\delta_1, \cdots, \delta_{m-3}$. Consequently we conclude that our desired symmetric matrix $S$ and skew-symmetric matrix $K$ are described as \begin{equation*} S = \frac{\xi^{2(m-4)}}{(1+\xi^2)^{m-3}} \mathcal{S}, \qquad K = \frac{\xi^2}{(1+\xi^2)^{m-2}}\mathcal{K}_m. 
\end{equation*} \section{Model II} \subsection{Main result II} In this section, we treat the Cauchy problem \eqref{sys1}, \eqref{ID} with \begin{equation}\label{2Tmat} \begin{split} A_m &= \left( \begin{array}{cc:cc:ccccc:cc} {0} & {1} & {0} & {0} & & & & & & & \\ {1} & {0} & {0} & {0} & & & & & & & \\ \hdashline {0} & {0} & {0} & {a_4} & {0} & & & & & & \\ {0} & {0} & {a_4} & {0} & {0} & {0} & & & \mbox{\smash{\huge\textit{O}}} & & \\ \hdashline & & {0} & {0} & {0} & {a_6} & & & & & \\ & & & {0} & {a_6} & {0} & & & & & \\ & & & & & & \ddots & & & & \\ & & & & & & & 0 & {a_{m-2}} & & \\ & & \mbox{\smash{\huge\textit{O}}} & & & & & {a_{m-2}} & {0} & {0} & {0} \\ \hdashline & & & & & & & {0} & {0} & {0} & {a_{m}} \\ & & & & & & & & {0} & {a_{m}} & {0} \\ \end{array} \right), \\ L_m &= \left( \begin{array}{c:cc:ccccc:cc:c} {0} & {0} & {0} & {0} & & & & & & & \\ \hdashline {0} & {\gamma} & {1} & {0} & & & & & & & \\ {0} & {-1} & {0} & {0} & {0} & & & & \mbox{\smash{\huge\textit{O}}} & & \\ \hdashline {0} & {0} & {0} & {0} & {a_5} & & & & & & \\ & & {0} & {-a_5} & {0} & & & & & & \\ & & & & & \ddots & & & & & \\ & & & & & & 0 & {a_{m-3}} & 0 & & \\ & & & & & & {-a_{m-3}} & 0 & 0 & 0 & \\ \hdashline & & & & & & {0} & {0} & {0} & {a_{m-1}} & {0} \\ & & \mbox{\smash{\huge\textit{O}}} & & & & & {0} & {-a_{m-1}} & {0} & {0} \\ \hdashline & & & & & & & & {0} & {0} & {0} \\ \end{array} \right), \end{split} \end{equation} where the integer $m\geq 4$ is even, $\gamma > 0$, and all elements $a_j$ $(4\leq j\leq m)$ are nonzero. We note that the system \eqref{sys1} with \eqref{2Tmat} for $m=4$ is the Timoshenko system (cf.~\cite{IHK08,IK08}).
For this problem, we can derive the following decay structure.
\begin{thm} \label{thm2} The Fourier image $\hat u$ of the solution $u$ to the Cauchy problem \eqref{sys1}-\eqref{ID} with \eqref{2Tmat} satisfies the pointwise estimate:
\begin{equation}\label{point2} |\hat u(t,\xi)| \le Ce^{-c \lambda(\xi)t}|\hat u_0(\xi)|, \end{equation}
where $\lambda(\xi) := \xi^{3m-10}/(1+\xi^2)^{2(m-3)}$. Furthermore, let $s \ge 0$ be an integer and suppose that the initial data $u_0$ belong to $H^s \cap L^1$. Then the solution $u$ satisfies the decay estimate:
\begin{equation} \notag \|\partial_x^{k} u(t)\|_{L^2} \le C(1+t)^{-\frac{1}{3m-10}(\frac{1}{2}+k)}\|u_0\|_{L^1} + C(1+t)^{-\frac{\ell}{m-2}}\|\partial_x^{k+\ell} u_0\|_{L^2} \end{equation}
for $k+\ell \le s$. Here $C$ and $c$ are positive constants. \end{thm}

\subsection{Energy method in the case $m=6$}
Ide-Haramoto-Kawashima \cite{IHK08} and Ide-Kawashima \cite{IK08} already obtained the desired estimates in the case $m=4$. Thus we consider the case $m=6$ in this subsection, which sheds light on the proof of the general case $m\geq 6$ given in Subsection 3.3. We rewrite the system \eqref{sys1} with \eqref{2Tmat} as follows:
\begin{equation}\label{2equations6} \begin{split} &\partial_t \hat u_1 + i \xi \hat u_2 = 0, \\ &\partial_t \hat u_2 + i \xi \hat u_1 + \gamma \hat u_2 + \hat u_3 = 0, \\ &\partial_t \hat u_3 + i \xi a_4 \hat u_4 - \hat u_2 = 0, \\ &\partial_t \hat u_4 + i \xi a_{4} \hat u_{3} + a_{5} \hat u_{5} = 0, \\ &\partial_t \hat u_5 + i \xi a_{6} \hat u_{6} - a_{5} \hat u_{4} = 0, \\ &\partial_t \hat u_6 + i \xi a_{6} \hat u_{5} = 0. \end{split} \end{equation}

\medskip \noindent {\bf Step 1.}\ \ We first derive the basic energy equality for the system \eqref{2equations6} in the Fourier space.
We multiply the equations of \eqref{2equations6} by the components of $\bar{\hat{u}} = (\bar{\hat{u}}_1, \bar{\hat{u}}_2,\bar{\hat{u}}_3,\bar{\hat{u}}_4, \bar{\hat{u}}_5,\bar{\hat{u}}_6)^T$, respectively, and combine the resultant equations. Furthermore, taking the real part of the resultant equality, we arrive at the basic energy equality
\begin{equation}\label{2eq6} \frac{1}{2} \partial_t |\hat u|^2 + \gamma |\hat u_2|^2 = 0. \end{equation}
Next we create the dissipation terms by the following two steps.

\medskip \noindent {\bf Step 2.}\ \ We multiply the first and second equations in \eqref{2equations6} by $i \xi \bar{\hat{u}}_{2}$ and $ - i \xi \bar{\hat{u}}_{1}$, respectively. Then, combining the resultant equations and taking the real part, we have
\begin{equation}\label{2dissipation6-4'} \xi \partial_t \Re (i \hat u_{1} \bar{\hat{u}}_2) + \xi^2 ( |\hat u_{1}|^2 - |\hat u_{2}|^2 ) + \gamma \xi \, \Re (i \hat{u}_{1} \bar{\hat{u}}_{2}) + \xi \, \Re (i \hat{u}_{1} \bar{\hat{u}}_{3}) = 0. \end{equation}
Next, we combine the fourth and sixth equations in \eqref{2equations6}, obtaining
\begin{equation*} \partial_t (\xi a_6\hat u_4 + i a_5 \hat{u}_6) + i \xi^2 a_{4} a_6 \hat u_{3} = 0. \end{equation*}
Then multiplying the first equation in \eqref{2equations6} and the resultant equation by $\xi a_6\bar{\hat u}_4 - i a_5 \bar{\hat{u}}_6$ and $\bar{\hat{u}}_1$, and combining the resultant equations and taking the real part, we obtain
\begin{multline}\label{2dissipation6-2} \partial_t \big\{ a_6 \xi \Re (\hat{u}_1 \bar{\hat{u}}_4) - a_5 \Re (i \hat{u}_1 \bar{\hat{u}}_6) \big\} \\ - a_4 a_6 \xi^2 \, \Re (i \hat{u}_{1} \bar{\hat{u}}_3) + a_6 \xi^2 \, \Re ( i \hat{u}_{2} \bar{\hat{u}}_4) + a_5 \xi \, \Re (\hat{u}_{2} \bar{\hat{u}}_{6}) = 0. \end{multline}
To eliminate $\Re (i \hat{u}_{1} \bar{\hat{u}}_3)$, we multiply \eqref{2dissipation6-4'} and \eqref{2dissipation6-2} by $a_4^2 a_6^2 \xi^2$ and $a_4 a_6 \xi$, respectively, and add the resultant equations.
Then this yields \begin{multline}\label{2dissipation6-3} a_4 a_6 \xi \partial_t E_1^{(6)} + a_4^2 a_6^2 \xi^4 (|\hat u_{1}|^2 - |\hat u_{2}|^2 ) \\ + a_4 a_6^2 \xi^3 \, \Re ( i \hat{u}_{2} \bar{\hat{u}}_4) + a_4 a_5 a_6 \xi^2 \, \Re (\hat{u}_{2} \bar{\hat{u}}_{6}) + \gamma a_4^2 a_6^2 \xi^3 \, \Re (i \hat{u}_{1} \bar{\hat{u}}_{2}) = 0, \end{multline} where $E_1^{(6)} = a_6 \xi \Re (\hat{u}_1 \bar{\hat{u}}_4) - a_5 \Re (i \hat{u}_1 \bar{\hat{u}}_6) + a_4 a_6 \xi^2 \Re (i\hat u_{1} \bar{\hat{u}}_2)$. On the other hand, we multiply the second and third equations in \eqref{2equations6} by $\bar{\hat{u}}_{3}$ and $\bar{\hat{u}}_{2}$, respectively. Then, combining the resultant equations and taking the real part, we have \begin{equation}\label{2dissipation6-1'} \partial_t \Re (\hat u_{2} \bar{\hat{u}}_3) + |\hat u_{3}|^2 - |\hat u_{2}|^2 + \xi \, \Re (i \hat{u}_{1} \bar{\hat{u}}_3) - a_{4} \xi \, \Re (i \hat{u}_{2} \bar{\hat{u}}_{4}) + \gamma \, \Re ( \hat{u}_{2} \bar{\hat{u}}_{3}) = 0. \end{equation} By the Young inequality, the equation \eqref{2dissipation6-1'} is estimated as \begin{equation}\label{2dissipation6-1} \partial_t E_3 + \frac{1}{2} |\hat u_{3}|^2 \le \xi^2 |\hat{u}_{1}|^2 + (1+\gamma^2) |\hat u_{2}|^2 + a_{4} \xi \, \Re (i \hat{u}_{2} \bar{\hat{u}}_{4}), \end{equation} where $E_3 = \Re (\hat u_{2} \bar{\hat{u}}_3)$. Furthermore, we multiply the third equation and fourth equation of \eqref{2equations6} by $- i \xi a_4 \bar{\hat{u}}_{4}$ and $ i \xi a_4 \bar{\hat{u}}_{3}$, respectively. Then, combining the resultant equations and taking the real part, we have \begin{equation}\label{2dissipation6-5'} - a_4 \xi \partial_t \Re (i \hat u_{3} \bar{\hat{u}}_4) + a_4^2 \xi^2( |\hat u_{4}|^2 - |\hat u_{3}|^2 ) + a_4 \xi \, \Re (i \hat{u}_{2} \bar{\hat{u}}_4) - a_4 a_{5} \xi \, \Re (i \hat{u}_{3} \bar{\hat{u}}_{5}) = 0. 
\end{equation}
By the Young inequality, the above equation is estimated as
\begin{equation}\label{2dissipation6-5} \xi \partial_t E_4 + \frac{1}{2} a_4^2 \xi^2 |\hat u_{4}|^2 \le \frac{1}{2} |\hat{u}_{2} |^2 + a_4^2 \xi^2 |\hat u_{3}|^2 + a_4 a_{5} \xi \, \Re (i \hat{u}_{3} \bar{\hat{u}}_{5}), \end{equation}
where $E_4 = - a_4 \Re (i \hat u_{3} \bar{\hat{u}}_4)$. We multiply the fourth and fifth equations in \eqref{2equations6} by $a_{5} \bar{\hat{u}}_{5}$ and $ a_{5} \bar{\hat{u}}_{4}$, respectively. Then, combining the resultant equations and taking the real part, we have
\begin{equation*} a_{5} \partial_t \Re (\hat u_{4} \bar{\hat{u}}_{5}) + a_{5}^2 ( |\hat u_{5}|^2 - |\hat u_{4}|^2) + a_{4} a_{5} \xi \, \Re (i\hat{u}_{3} \bar{\hat{u}}_{5}) - a_{5} a_{6} \xi \, \Re (i \hat{u}_{4} \bar{\hat{u}}_{6}) = 0. \end{equation*}
By using the Young inequality, we obtain
\begin{equation}\label{2dissipation6-6} \partial_t E_5 + \frac{1}{2} a_{5}^2 |\hat u_{5}|^2 \le a_{5}^2 |\hat u_{4}|^2 + \frac{1}{2} a_{5}^2 \xi^2 |\hat{u}_{3}|^2 + a_{5} a_{6} \xi \, \Re (i \hat{u}_{4} \bar{\hat{u}}_{6}), \end{equation}
where $E_5 = a_{5} \Re (\hat u_{4} \bar{\hat{u}}_{5})$. Moreover, we multiply the last equation and the fifth equation in \eqref{2equations6} by $ i \xi a_6 \bar{\hat{u}}_{5}$ and $- i \xi a_6 \bar{\hat{u}}_{6}$, respectively. Then, combining the resultant equations and taking the real part, we have
\begin{equation*} - a_6 \xi \partial_t \Re (i \hat u_{5} \bar{\hat{u}}_6) + a_6^2 \xi^2( |\hat u_{6}|^2 - |\hat u_{5}|^2 ) + a_{5}a_6 \xi \, \Re (i \hat{u}_{4} \bar{\hat{u}}_6) = 0. \end{equation*}
Using the Young inequality, this yields
\begin{equation}\label{2dissipation6-8} \xi \partial_t E_6 + \frac{1}{2} a_6^2 \xi^2 |\hat u_{6}|^2 \le a_6^2 \xi^2 |\hat u_{5}|^2 + \frac{1}{2} a_{5}^2 |\hat{u}_{4}|^2, \end{equation}
where $E_6 = - a_6 \Re(i \hat u_{5} \bar{\hat{u}}_6)$.
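The identities of Steps 1 and 2 are purely algebraic in $\hat u$, so they can be sanity-checked at a random state without solving the system. The sketch below (the coefficient values are our arbitrary nonzero test choices) evaluates the time derivatives through the right-hand side of \eqref{2equations6} and verifies the basic energy equality \eqref{2eq6} together with the first dissipation identity \eqref{2dissipation6-4'}.

```python
import numpy as np

# m = 6 system (2equations6) in the form du/dt = -(i*xi*A + L) u.
gamma, a4, a5, a6, xi = 0.7, 1.5, -0.8, 2.0, 1.3
A = np.zeros((6, 6)); L = np.zeros((6, 6))
A[0, 1] = A[1, 0] = 1.0; A[2, 3] = A[3, 2] = a4; A[4, 5] = A[5, 4] = a6
L[1, 1] = gamma; L[1, 2], L[2, 1] = 1.0, -1.0; L[3, 4], L[4, 3] = a5, -a5

rng = np.random.default_rng(0)
u = rng.standard_normal(6) + 1j * rng.standard_normal(6)
f = -(1j * xi * A + L) @ u                      # du/dt at the state u

# (2eq6): (1/2) d/dt |u|^2 = Re<du/dt, u> = -gamma |u_2|^2
assert np.isclose(np.real(np.vdot(u, f)), -gamma * abs(u[1]) ** 2)

# (2dissipation6-4'): xi d/dt Re(i u_1 conj(u_2)) + xi^2 (|u_1|^2 - |u_2|^2)
#   + gamma xi Re(i u_1 conj(u_2)) + xi Re(i u_1 conj(u_3)) = 0
d_cross = np.real(1j * f[0] * np.conj(u[1]) + 1j * u[0] * np.conj(f[1]))
lhs = (xi * d_cross + xi**2 * (abs(u[0])**2 - abs(u[1])**2)
       + gamma * xi * np.real(1j * u[0] * np.conj(u[1]))
       + xi * np.real(1j * u[0] * np.conj(u[2])))
assert np.isclose(lhs, 0.0)
print("identities (2eq6) and (2dissipation6-4') verified")
```

The same pattern applies to the remaining identities of Step 2; each is linear in the quadratic quantities of $\hat u$ and can be checked in the same way.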
\medskip \noindent {\bf Step 3.}\ \ In this step, we sum up the energy inequalities and derive the desired energy inequality. For this purpose, we first multiply \eqref{2dissipation6-6} and \eqref{2dissipation6-8} by $\xi^2$ and $\beta_1$, respectively. Then we combine the resultant equations, obtaining
\begin{multline*} \partial_t \big\{ \xi^2 E_5 +\beta_1 \xi E_6 \big\} + \frac{1}{2} \beta_1 a_6^2 \xi^2 |\hat u_{6}|^2 + \Big( \frac{1}{2} a_{5}^2 - \beta_1 a_6^2 \Big) \xi^2 |\hat u_{5}|^2 \\ \le \Big( \frac{1}{2} \beta_1 + \xi^2 \Big) a_{5}^2 |\hat u_{4}|^2 + \frac{1}{2} a_{5}^2 \xi^4 |\hat{u}_{3}|^2 + |a_{5}| |a_{6}| |\xi|^3 |\hat{u}_{4}| |\hat{u}_{6}|. \end{multline*}
Letting $\beta_1$ suitably small and using the Young inequality, we get
\begin{equation*} \begin{split} & \partial_t \big\{ \xi^2 E_5 +\beta_1 \xi E_6 \big\} + c \xi^2 (|\hat u_{5}|^2 + |\hat u_{6}|^2) \le C (1 + \xi^2)^2 |\hat u_{4}|^2 + \frac{1}{2} a_{5}^2 \xi^4 |\hat{u}_{3}|^2. \end{split} \end{equation*}
Moreover, combining the above estimate and \eqref{2dissipation6-6}, we get
\begin{multline}\label{2eneq6-1} \partial_t \big\{ (1+\xi^2) E_5 +\beta_1 \xi E_6 \big\} + c (1+ \xi^2) |\hat u_{5}|^2 + c \xi^2 |\hat u_{6}|^2 \\ \le C (1 + \xi^2)^2 |\hat u_{4}|^2 + \frac{1}{2} a_{5}^2 \xi^2(1+\xi^2) |\hat{u}_{3}|^2. \end{multline}
Second, we multiply \eqref{2dissipation6-5} and \eqref{2eneq6-1} by $(1+\xi^2)^2$ and $\beta_2 \xi^2$, respectively, and combine the resultant equations. Then we obtain
\begin{multline*} \partial_t \big\{ \beta_2 \xi^2((1+\xi^2) E_5 +\beta_1 \xi E_6) + \xi (1+\xi^2)^2 E_4 \big\} \\ + \beta_2 c \xi^2 (1+ \xi^2) |\hat u_{5}|^2 + \beta_2 c \xi^4 |\hat u_{6}|^2 + \Big( \frac{1}{2} a_{4}^2 - \beta_2 C \Big) \xi^2 (1 + \xi^2)^2 |\hat u_{4}|^2 \\ \le C(1+\xi^2)^2 |\hat{u}_{2} |^2 + C \xi^2(1+\xi^2)^2 |\hat{u}_{3}|^2 + C \xi (1+\xi^2)^2 \, \Re (i \hat{u}_{3} \bar{\hat{u}}_{5}).
\end{multline*}
Letting $\beta_2$ suitably small and using the Young inequality, we get
\begin{multline}\label{2eneq6-2} \partial_t \big\{ \beta_2 \xi^2((1+\xi^2) E_5 +\beta_1 \xi E_6) + \xi (1+\xi^2)^2 E_4 \big\} + c \xi^2 (1+ \xi^2) |\hat u_{5}|^2 \\ + c \xi^4 |\hat u_{6}|^2 + c \xi^2 (1 + \xi^2)^2 |\hat u_{4}|^2 \le C(1+\xi^2)^2 |\hat{u}_{2} |^2 + C (1+\xi^2)^3 |\hat{u}_{3}|^2. \end{multline}
Third, we multiply \eqref{2dissipation6-1} and \eqref{2eneq6-2} by $(1+\xi^2)^3$ and $\beta_3$, respectively, and combine the resultant equations. Then we obtain
\begin{multline*} \partial_t \big\{ \beta_3 (\beta_2 \xi^2((1+\xi^2) E_5 +\beta_1 \xi E_6) + \xi (1+\xi^2)^2 E_4) + (1+\xi^2)^3 E_3\big\} + \beta_3 c \xi^4 |\hat u_{6}|^2 \\ + \beta_3 c \xi^2 (1+ \xi^2) |\hat u_{5}|^2 + \beta_3 c \xi^2 (1 + \xi^2)^2 |\hat u_{4}|^2 + \Big(\frac{1}{2} - \beta_3 C \Big)(1+\xi^2)^3 |\hat{u}_{3}|^2 \\ \le C(1+\xi^2)^3 |\hat{u}_{2} |^2 + \xi^2 (1+\xi^2)^3 |\hat{u}_{1}|^2 + a_4 \xi (1+\xi^2)^3 \, \Re ( i \hat{u}_{2} \bar{\hat{u}}_4). \end{multline*}
Therefore, letting $\beta_3$ suitably small and using the Young inequality, we get
\begin{multline}\label{2eneq6-3} \partial_t \big\{ \beta_3 (\beta_2 \xi^2((1+\xi^2) E_5 +\beta_1 \xi E_6) + \xi (1+\xi^2)^2 E_4) + (1+\xi^2)^3 E_3 \big\} \\ + c \xi^4 |\hat u_{6}|^2 + c \xi^2 (1+ \xi^2) |\hat u_{5}|^2 + c \xi^2 (1 + \xi^2)^2 |\hat u_{4}|^2 + c (1+\xi^2)^3 |\hat{u}_{3}|^2 \\ \le C (1+\xi^2)^4 |\hat{u}_{2} |^2 + \xi^2 (1+\xi^2)^3 |\hat{u}_{1}|^2. \end{multline}
Fourth, we multiply \eqref{2dissipation6-3} and \eqref{2eneq6-3} by $(1+\xi^2)^3$ and $ \beta_{4} \xi^{2}$, respectively, and combine the resultant equalities.
Moreover, letting $\beta_4$ suitably small and using the Young inequality, we obtain
\begin{multline}\label{2eneq6-4} \partial_t \tilde{E} + c \xi^6 |\hat u_{6}|^2 + c \xi^4 (1+ \xi^2) |\hat u_{5}|^2 + c \xi^4 (1 + \xi^2)^2 |\hat u_{4}|^2+ c \xi^2 (1+\xi^2)^3 |\hat{u}_{3}|^2 \\ + c \xi^4 (1+\xi^2)^3 |\hat{u}_{1}|^2 \le C (1+\xi^2)^5 |\hat{u}_{2} |^2 + a_4 a_5 a_6 \xi^2 (1+\xi^2)^3\, \Re (\hat{u}_{2} \bar{\hat{u}}_{6}), \end{multline}
where we have defined
\begin{multline} \notag \tilde{E} = \beta_4 \xi^2 (\beta_3 (\beta_2 \xi^2((1+\xi^2) E_5 +\beta_1 \xi E_6) + \xi (1+\xi^2)^2 E_4) + (1+\xi^2)^3 E_3)\\ + a_4 a_6 \xi (1+\xi^2)^3 E_1^{(6)}. \end{multline}
Moreover, to estimate $\Re (\hat{u}_{2} \bar{\hat{u}}_{6})$, we multiply \eqref{2eneq6-4} by $\xi^2$ and use the Young inequality again. Then this yields
\begin{multline}\label{2eneq6-4'} \xi^2 \partial_t \tilde{E} + c \xi^8 |\hat u_{6}|^2 + c \xi^6 (1+ \xi^2) |\hat u_{5}|^2 + c \xi^6 (1 + \xi^2)^2 |\hat u_{4}|^2\\ + c \xi^4 (1+\xi^2)^3 |\hat{u}_{3}|^2 + c \xi^6 (1+\xi^2)^3 |\hat{u}_{1}|^2 \le C (1+\xi^2)^6 |\hat{u}_{2} |^2. \end{multline}
Finally, multiplying the basic energy equality \eqref{2eq6} and \eqref{2eneq6-4'} by $(1+\xi^2)^6$ and $\beta_{5}$, respectively, combining the resultant equations and letting $\beta_{5}$ suitably small, we obtain
\begin{multline}\label{2eneq6-5} \partial _t \Big\{ \frac{1}{2}(1+\xi^2)^{6}|\hat{u}|^2 + \beta_{5}\xi^2 \tilde{E} \Big\} + c \xi^{6} (1 + \xi^2)^3 |\hat u_{1}|^2 + c (1+ \xi^2)^{6} |\hat u_{2}|^2 \\ + c \xi^{4} (1 + \xi^2)^3 |\hat u_{3}|^2 + c \xi^{6} (1 + \xi^2)^2 |\hat u_{4}|^2 + c \xi^{6} (1 + \xi^2) |\hat u_{5}|^2 + c \xi^{8} |\hat u_{6}|^2 \le 0.
\end{multline}
Thus, integrating the above estimate with respect to $t$, we obtain the following energy estimate:
\begin{multline} \notag |\hat{u}(t,\xi)|^2 + \int^t_0 \Big\{ \frac{\xi^{6}}{(1+\xi^2)^{3}} |\hat u_{1}|^2 + |\hat u_{2}|^2 +\frac{\xi^{4}}{(1+\xi^2)^{3}} |\hat u_{3}|^2 + \frac{\xi^{6}}{(1+\xi^2)^{4}} |\hat u_{4}|^2 \\ + \frac{\xi^{6}}{(1+\xi^2)^{5}} |\hat u_{5}|^2 + \frac{\xi^{8}}{(1+\xi^2)^{6}} |\hat u_{6}|^2 \Big\} d\tau \le C|\hat{u}(0,\xi)|^2. \end{multline}
Here we have used the following inequality:
\begin{equation}\label{2eneq6-7} c |\hat{u}|^2 \le \frac{1}{2} |\hat{u}|^2 + \frac{\beta_{5}\xi^2 }{(1+\xi^2)^{6}} \tilde{E} \le C |\hat{u}|^2 \end{equation}
for suitably small $\beta_{5}$. Furthermore, the estimate \eqref{2eneq6-5} with \eqref{2eneq6-7} gives us the following pointwise estimate:
\begin{equation} \notag |\hat{u}(t,\xi)| \le C e^{- c \lambda(\xi)t} |\hat{u}(0,\xi)|, \qquad \lambda(\xi) = \frac{\xi^{8}}{(1+\xi^2)^{6}}. \end{equation}
This proves \eqref{point2} in the case $m=6$ of Theorem \ref{thm2}.

\subsection{Energy method for model II}
Inspired by the concrete calculation in Subsection 3.2, we consider the more general situation $m\geq 6$. We rewrite our system \eqref{Fsys1} with \eqref{2Tmat} as follows:
\begin{equation}\label{2equations} \begin{split} &\partial_t \hat u_1 + i \xi \hat u_2 = 0, \\ &\partial_t \hat u_2 + i \xi \hat u_1 + \gamma \hat u_2 + \hat u_3 = 0, \\ &\partial_t \hat u_3 + i \xi a_4 \hat u_4 - \hat u_2 = 0, \\ &\partial_t \hat u_j + i \xi a_{j} \hat u_{j-1} + a_{j+1} \hat u_{j+1} = 0, \qquad j = 4, 6, \cdots, m-2, \ {\rm (for \ even)} \\ &\partial_t \hat u_j + i \xi a_{j+1} \hat u_{j+1} - a_{j} \hat u_{j-1} = 0, \qquad j = 5, 7, \cdots, m-1, \ {\rm (for \ odd)} \\ &\partial_t \hat u_m + i \xi a_{m} \hat u_{m-1} = 0. \end{split} \end{equation}

\medskip \noindent {\bf Step 1.}\ \ We first derive the basic energy equality for the system \eqref{Fsys1} in the Fourier space.
Taking the inner product of \eqref{Fsys1} with $\hat{u}$, we have
\begin{equation*} \langle \hat u_t, \hat{u} \rangle + i \xi \langle A_m \hat u, \hat u \rangle + \langle L_m \hat u, \hat u \rangle = 0. \end{equation*}
Taking the real part, we get the basic energy equality
\begin{equation*} \frac{1}{2} \partial_t |\hat u|^2 + \langle L_m \hat u, \hat u \rangle = 0, \end{equation*}
and hence
\begin{equation}\label{2eq} \frac{1}{2} \partial_t |\hat u|^2 + \gamma |\hat u_2|^2 = 0. \end{equation}
Next we create the dissipation terms by the following two steps.

\medskip \noindent {\bf Step 2.}\ \ We note that we have already derived some useful equations in Subsection 3.2. Indeed, the equations \eqref{2dissipation6-4'}, \eqref{2dissipation6-1}, \eqref{2dissipation6-5} and \eqref{2dissipation6-6} are valid for our general problem. Therefore we adopt these equations in this subsection. To eliminate $\Re(i \hat{u}_{1} \bar{\hat{u}}_3) $ in \eqref{2dissipation6-4'}, we first prepare a useful equation. We combine the equations of \eqref{2equations} for even indices $j= 4, \cdots, 2 \ell$ inductively. Then we obtain
\begin{equation}\label{2mathcalU} \partial_t \mathcal{U}_{2\ell} + i \xi (-i \xi)^{\ell - 2} \prod_{j=2}^{\ell} a_{2j} \hat u_{3} + \prod_{j=2}^{\ell} a_{2j+1} \hat u_{2\ell + 1} = 0, \end{equation}
for $4 \le 2 \ell \le m-2$, where we have defined $\mathcal{U}_4 = \hat{u}_4$ and
\begin{equation} \notag \mathcal{U}_{2 \ell} = - i \xi a_{2\ell} \mathcal{U}_{2\ell-2} + \prod_{j=2}^{\ell-1} a_{2j+1} \hat u_{2\ell}. \end{equation}
Moreover, combining the last equation in \eqref{2equations} and \eqref{2mathcalU}, this yields
\begin{equation}\label{2mathcalU-2} i^{m/2} \partial_t \mathcal{U}_{m} - i \xi^{m/2 - 1} \prod_{j=2}^{m/2} a_{2j} \hat u_{3} = 0.
\end{equation}
Multiplying \eqref{2mathcalU-2} by $- \bar{\hat{u}}_1$ and the first equation in \eqref{2equations} by $- \overline{i^{m/2}\mathcal{U}_m}$, combining the resultant equations and taking the real part, we obtain
\begin{equation}\label{2estU} - \partial_t \Re ( i^{m/2}\mathcal{U}_m \bar{\hat{u}}_1 ) - \prod_{j=2}^{m/2} a_{2j}\xi^{m/2-1} \Re (i \hat{u}_{1} \bar{\hat{u}}_3) + \xi \, \Re (i^{m/2+1}\mathcal{U}_m \bar{\hat{u}}_{2}) = 0. \end{equation}
In order to eliminate $\Re (i \hat{u}_{1} \bar{\hat{u}}_3)$, we multiply \eqref{2dissipation6-4'} by $\prod_{j=2}^{m/2} a_{2j} \xi^{m/2-2} $ and combine the resultant equation with \eqref{2estU}. Then we obtain
\begin{multline}\label{2estU2} \partial_t E_1^{(m)} + \prod_{j=2}^{m/2} a_{2j} \xi^{m/2} ( |\hat u_{1}|^2 - |\hat u_{2}|^2 ) \\ + \gamma \prod_{j=2}^{m/2} a_{2j}\xi^{m/2-1} \Re (i \hat{u}_{1} \bar{\hat{u}}_{2}) + \xi \, \Re (i^{m/2+1}\mathcal{U}_m \bar{\hat{u}}_{2}) = 0, \end{multline}
where we have defined
$$ E_1^{(m)} = \prod_{j=2}^{m/2} a_{2j} \xi^{m/2-1} \Re (i \hat u_{1} \bar{\hat{u}}_2) -\Re ( i^{m/2}\mathcal{U}_m \bar{\hat{u}}_1 ). $$
For $\ell = 4,6, \cdots , m-2$, we multiply the fourth and fifth equations with $j=\ell$ and $j=\ell+1$ in \eqref{2equations} by $a_{\ell + 1} \bar{\hat{u}}_{\ell+1}$ and $ a_{\ell+1} \bar{\hat{u}}_{\ell}$, respectively. Then, combining the resultant equations and taking the real part, we have
\begin{multline}\label{2dissipation-6'} a_{\ell+1} \partial_t \Re (\hat u_{\ell} \bar{\hat{u}}_{\ell+1}) + a_{\ell+1}^2 ( |\hat u_{\ell+1}|^2 - |\hat u_{\ell}|^2) \\ + a_{\ell} a_{\ell+1} \xi \, \Re (i\hat{u}_{\ell-1} \bar{\hat{u}}_{\ell+1}) - a_{\ell+1} a_{\ell+2} \xi \, \Re (i \hat{u}_{\ell} \bar{\hat{u}}_{\ell+2}) = 0.
\end{multline}
By using the Young inequality, we obtain
\begin{equation}\label{2dissipation-6} \partial_t E_{\ell+1} + \frac{1}{2} a_{\ell+1}^2 |\hat u_{\ell+1}|^2 \le a_{\ell+1}^2 |\hat u_{\ell}|^2 + \frac{1}{2} a_{\ell+1}^2 \xi^2 |\hat{u}_{\ell-1}|^2 + a_{\ell+1} a_{\ell+2} \xi \, \Re(i \hat{u}_{\ell} \bar{\hat{u}}_{\ell+2}), \end{equation}
where $E_{\ell + 1} = a_{\ell+1} \Re(\hat u_{\ell} \bar{\hat{u}}_{\ell+1})$. On the other hand, for $\ell = 4, \cdots , m-4$, we multiply the fourth and fifth equations with $j=\ell+2$ and $j = \ell +1$ in \eqref{2equations} by $ i \xi a_{\ell+2} \bar{\hat{u}}_{\ell+1}$ and $- i \xi a_{\ell+2} \bar{\hat{u}}_{\ell+2}$, respectively. Then, combining the resultant equations and taking the real part, we have
\begin{equation}\label{2dissipation-7'} \begin{split} &- a_{\ell+2} \xi \partial_t \Re(i \hat u_{\ell+1} \bar{\hat{u}}_{\ell+2}) + a_{\ell+2}^2 \xi^2( |\hat u_{\ell+2}|^2 - |\hat u_{\ell+1}|^2 ) \\ &\hskip20mm + a_{\ell+1}a_{\ell+2} \xi \, \Re (i \hat{u}_{\ell} \bar{\hat{u}}_{\ell+2}) - a_{\ell+2} a_{\ell+3} \xi \, \Re (i \hat{u}_{\ell+1} \bar{\hat{u}}_{\ell+3}) = 0. \end{split} \end{equation}
Here, by using the Young inequality, we obtain
\begin{multline}\label{2dissipation-7} \xi \partial_t E_{\ell+2} + \frac{1}{2} a_{\ell+2}^2 \xi^2 |\hat u_{\ell+2}|^2 \\ \hskip10mm \le a_{\ell+2}^2 \xi^2 |\hat u_{\ell+1}|^2 + \frac{1}{2} a_{\ell+1}^2 | \hat{u}_{\ell}|^2 + a_{\ell+2} a_{\ell+3} \xi \, \Re (i \hat{u}_{\ell+1} \bar{\hat{u}}_{\ell+3}), \end{multline}
where $E_{\ell+2} = - a_{\ell+2} \Re (i \hat u_{\ell+1} \bar{\hat{u}}_{\ell+2})$. Moreover, we multiply the last equation and the fifth equation with $j=m-1$ in \eqref{2equations} by $ i \xi a_m \bar{\hat{u}}_{m-1}$ and $- i \xi a_m \bar{\hat{u}}_{m}$, respectively.
Then, combining the resultant equations and taking the real part, we have
\begin{equation}\label{2dissipation-8'} - a_m \xi \partial_t \Re (i \hat u_{m-1} \bar{\hat{u}}_m) + a_m^2 \xi^2( |\hat u_{m}|^2 - |\hat u_{m-1}|^2 ) + a_{m-1}a_m \xi \, \Re (i \hat{u}_{m-2} \bar{\hat{u}}_m) = 0. \end{equation}
Using the Young inequality, this yields
\begin{equation}\label{2dissipation-8} \xi \partial_t E_m + \frac{1}{2} a_m^2 \xi^2 |\hat u_{m}|^2 \le a_m^2 \xi^2 |\hat u_{m-1}|^2 + \frac{1}{2} a_{m-1}^2 |\hat{u}_{m-2}|^2, \end{equation}
where $E_m = - a_m \Re (i \hat u_{m-1} \bar{\hat{u}}_m)$.

\medskip \noindent {\bf Step 3.}\ \ In this step, we sum up the energy inequalities constructed in the previous step and derive the desired energy inequality. The strategy is essentially the same as in Subsection 3.2. For this purpose, we first multiply \eqref{2dissipation-6} with $\ell = m-2$ and \eqref{2dissipation-8} by $\xi^2$ and $\beta_1$, respectively. Then we combine the resultant equations, obtaining
\begin{multline*} \partial_t \big\{\xi^2 E_{m-1} + \beta_1 \xi E_m \big\} + \frac{1}{2} \beta_1 a_m^2 \xi^2 |\hat u_{m}|^2 + \Big( \frac{1}{2} a_{m-1}^2 - \beta_1 a_m^2 \Big) \xi^2 |\hat u_{m-1}|^2 \\ \le \Big( \frac{1}{2} \beta_1 + \xi^2 \Big) a_{m-1}^2 |\hat u_{m-2}|^2 + \frac{1}{2} a_{m-1}^2 \xi^4 |\hat{u}_{m-3}|^2 + |a_{m-1}| |a_{m}| |\xi|^3 |\hat{u}_{m-2}| |\hat{u}_{m}|. \end{multline*}
Letting $\beta_1$ suitably small and using the Young inequality, we get
\begin{multline*} \partial_t \big\{\xi^2 E_{m-1} + \beta_1 \xi E_m \big\} + c \xi^2 (|\hat u_{m}|^2 + |\hat u_{m-1}|^2) \\ \le C (1 + \xi^2)^2 |\hat u_{m-2}|^2 + \frac{1}{2} a_{m-1}^2 \xi^4 |\hat{u}_{m-3}|^2.
\end{multline*}
Moreover, combining the above estimate and \eqref{2dissipation-6} with $\ell = m-2$, we get
\begin{multline}\label{2eneq-1} \partial_t \big\{(1+\xi^2) E_{m-1} + \beta_1 \xi E_m \big\} + c \xi^2 |\hat u_{m}|^2 + c (1+\xi^2)|\hat u_{m-1}|^2 \\ \le C (1 + \xi^2)^2 |\hat u_{m-2}|^2 + \frac{1}{2} a_{m-1}^2 \xi^2(1+\xi^2) |\hat{u}_{m-3}|^2. \end{multline}
Second, we multiply \eqref{2eneq-1} and \eqref{2dissipation-7} with $\ell = m-4$ by $\beta_2 \xi^2$ and $(1+\xi^2)^2$, respectively, and combine the resultant equations. Then we obtain
\begin{multline*} \partial_t \big\{\beta_2 \xi^2((1+\xi^2) E_{m-1} + \beta_1 \xi E_m) + \xi (1+\xi^2)^2 E_{m-2} \big\} \\ + \beta_2 c \xi^4 |\hat u_{m}|^2 + \beta_2 c \xi^2(1+\xi^2)|\hat u_{m-1}|^2 + \Big( \frac{1}{2} a_{m-2}^2 - \beta_2 C \Big) \xi^2 (1 + \xi^2)^2 |\hat u_{m-2}|^2 \\ \le C \xi^2(1+\xi^2)^2 |\hat{u}_{m-3}|^2 + \frac{1}{2} a_{m-3}^2 (1+\xi^2)^2 | \hat{u}_{m-4}|^2 + C |\xi| (1+\xi^2)^2 |\hat{u}_{m-3}| |\hat{u}_{m-1}|. \end{multline*}
Letting $\beta_2$ suitably small and using the Young inequality, we get
\begin{multline}\label{2eneq-2} \partial_t \big\{\beta_2 \xi^2((1+\xi^2) E_{m-1} + \beta_1 \xi E_m) + \xi (1+\xi^2)^2 E_{m-2} \big\} \\ + c \xi^4 |\hat u_{m}|^2 + c \xi^2(1+\xi^2)|\hat u_{m-1}|^2 + c \xi^2 (1 + \xi^2)^2 |\hat u_{m-2}|^2 \\ \le C (1+\xi^2)^3 |\hat{u}_{m-3}|^2 + \frac{1}{2} a_{m-3}^2 (1+\xi^2)^2 | \hat{u}_{m-4}|^2. \end{multline}
Third, we multiply \eqref{2eneq-2} and \eqref{2dissipation-6} with $\ell = m-4$ by $\beta_3$ and $(1+\xi^2)^3$, respectively, and combine the resultant equations.
Then we obtain
\begin{equation*} \begin{split} & \partial_t \big\{\beta_3(\beta_2 \xi^2((1+\xi^2) E_{m-1} + \beta_1 \xi E_m) + \xi (1+\xi^2)^2 E_{m-2}) + (1+\xi^2)^3 E_{m-3} \big\} \\ & + \beta_3 c \xi^4 |\hat u_{m}|^2 + \beta_3 c \xi^2(1+\xi^2)|\hat u_{m-1}|^2 + \beta_3 c \xi^2 (1 + \xi^2)^2 |\hat u_{m-2}|^2 \\ &+ \Big( \frac{1}{2} a_{m-3}^2 -\beta_3 C\Big) (1+\xi^2)^3 |\hat{u}_{m-3}|^2 \\ &\le C (1+\xi^2)^3 | \hat{u}_{m-4}|^2 + \frac{1}{2} a_{m-3}^2 \xi^2 (1+\xi^2)^3 |\hat{u}_{m-5}|^2 + C |\xi| (1+\xi^2)^3 |\hat{u}_{m-4}| |\hat{u}_{m-2}|. \end{split} \end{equation*}
Therefore, letting $\beta_3$ suitably small and using the Young inequality, we get
\begin{multline}\label{2eneq-3} \partial_t \big\{\beta_3(\beta_2 \xi^2((1+\xi^2) E_{m-1} + \beta_1 \xi E_m) + \xi (1+\xi^2)^2 E_{m-2}) \\ + (1+\xi^2)^3 E_{m-3} \big\} + c \xi^4 |\hat u_{m}|^2 + c \xi^2(1+\xi^2)|\hat u_{m-1}|^2 + c \xi^2 (1 + \xi^2)^2 |\hat u_{m-2}|^2 \\ + c (1+\xi^2)^3 |\hat{u}_{m-3}|^2 \le C (1+\xi^2)^4 | \hat{u}_{m-4}|^2 + \frac{1}{2} a_{m-3}^2 \xi^2 (1+\xi^2)^3 |\hat{u}_{m-5}|^2. \end{multline}
Inspired by the derivation of \eqref{2eneq-1}, \eqref{2eneq-2} and \eqref{2eneq-3}, we conclude by an induction argument that the following inequality holds:
\begin{multline}\label{2eneq-4} \partial_t \mathcal{E}_{m-5} + c \sum_{\ell = 5}^{m} \xi^{2([\ell/2]-2)}(1+\xi^2)^{m - \ell}| \hat{u}_{\ell}|^2 \\ \le C(1+\xi^2)^{m-4} | \hat{u}_{4}|^2 + \frac{1}{2} a_{5}^2 \xi^2(1+\xi^2)^{m-5} |\hat{u}_{3}|^2. \end{multline}
Here $[ \ ]$ denotes the greatest integer function, and $\mathcal{E}_1 = \beta_1 \xi E_m + (1+\xi^2) E_{m-1}$ and
\begin{equation}\label{2defE} \begin{split} \mathcal{E}_\ell &= \beta_\ell \xi^2 \mathcal{E}_{\ell - 1} + \xi (1+\xi^2)^{\ell} E_{m - \ell}, \\ \mathcal{E}_{\ell+1} &= \beta_{\ell+1} \mathcal{E}_{\ell} + (1+\xi^2)^{\ell + 1} E_{m- (\ell +1)}, \end{split} \end{equation}
for even integers $\ell \ge 2$.
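The inductive elimination behind $\mathcal{U}_{2\ell}$ used in Step 2 can be spot-checked concretely. For $m = 8$ (with arbitrary nonzero test coefficients of ours), the sketch below forms $\mathcal{U}_8$ from the recursion following \eqref{2mathcalU} and confirms the identity \eqref{2mathcalU-2}, which for $m = 8$ reads $\partial_t \mathcal{U}_8 = i \xi^3 a_4 a_6 a_8 \hat u_3$ along solutions: all intermediate components cancel and only $\hat u_3$ survives.

```python
import numpy as np

# Arbitrary nonzero test coefficients (ours) for the case m = 8.
xi, gamma = 0.9, 1.1
a4, a5, a6, a7, a8 = 1.3, -0.7, 0.6, 2.0, -1.1

def rhs(u):
    """Right-hand side of the componentwise system (2equations) for m = 8."""
    u1, u2, u3, u4, u5, u6, u7, u8 = u
    return np.array([
        -1j * xi * u2,
        -1j * xi * u1 - gamma * u2 - u3,
        -1j * xi * a4 * u4 + u2,
        -1j * xi * a4 * u3 - a5 * u5,
        -1j * xi * a6 * u6 + a5 * u4,
        -1j * xi * a6 * u5 - a7 * u7,
        -1j * xi * a8 * u8 + a7 * u6,
        -1j * xi * a8 * u7,
    ])

def U8(u):
    """U_4 = u_4, U_6 = -i xi a_6 U_4 + a_5 u_6, U_8 = -i xi a_8 U_6 + a_5 a_7 u_8."""
    U4 = u[3]
    U6 = -1j * xi * a6 * U4 + a5 * u[5]
    return -1j * xi * a8 * U6 + a5 * a7 * u[7]

rng = np.random.default_rng(2)
u = rng.standard_normal(8) + 1j * rng.standard_normal(8)
# U8 is linear in u with constant coefficients, so d/dt U_8 = U8(du/dt)
dU8 = U8(rhs(u))
# (2mathcalU-2) with m = 8: d/dt U_8 = i xi^3 a_4 a_6 a_8 u_3
assert np.isclose(dU8, 1j * xi**3 * a4 * a6 * a8 * u[2])
print("elimination identity (2mathcalU-2) verified for m = 8")
```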
Furthermore, we multiply \eqref{2eneq-4} and \eqref{2dissipation6-5} by $\beta_{m-4} \xi^2$ and $(1+\xi^2)^{m-4}$, respectively, and combine the resultant equations. Then we obtain
\begin{equation*} \begin{split} & \partial_t \mathcal{E}_{m-4} + \beta_{m-4} c \sum_{\ell = 5}^{m} \xi^{2([\ell/2]-1)}(1+\xi^2)^{m - \ell}| \hat{u}_{\ell}|^2 \\ &+ \Big( \frac{1}{2} a_4^2 - \beta_{m-4} C\Big) \xi^2 (1+\xi^2)^{m-4} | \hat{u}_{4}|^2 \\ & \le C \xi^2(1+\xi^2)^{m-4} |\hat{u}_{3}|^2 + \frac{1}{2} (1+\xi^2)^{m-4} |\hat{u}_{2} |^2 + C |\xi| (1+\xi^2)^{m-4} |\hat{u}_{3}| |\hat{u}_{5}|, \end{split} \end{equation*}
where $\mathcal{E}_{m-4}$ is defined by \eqref{2defE} with $\ell = m-4$. Thus, letting $\beta_{m-4}$ suitably small and using the Young inequality, we obtain
\begin{multline}\label{2eneq-5} \partial_t \mathcal{E}_{m-4} + c \sum_{\ell = 4}^{m} \xi^{2([\ell/2]-1)}(1+\xi^2)^{m - \ell}| \hat{u}_{\ell}|^2 \\ \le C (1+\xi^2)^{m-3} |\hat{u}_{3}|^2 + \frac{1}{2} (1+\xi^2)^{m-4} |\hat{u}_{2} |^2. \end{multline}
Similarly, we multiply \eqref{2eneq-5} and \eqref{2dissipation6-1} by $\beta_{m-3}$ and $(1+\xi^2)^{m-3}$, respectively, combine the resultant equalities, and take $\beta_{m-3}$ suitably small. Then we have
\begin{multline}\label{2eneq-5'} \partial_t \mathcal{E}_{m-3} + c \sum_{\ell = 3}^{m} \xi^{2([\ell/2]-1)}(1+\xi^2)^{m - \ell}| \hat{u}_{\ell}|^2 \\ \le C (1+\xi^2)^{m-2} |\hat{u}_{2} |^2 + \xi^2 (1+\xi^2)^{m-3} |\hat{u}_{1}|^2, \end{multline}
where $\mathcal{E}_{m-3}$ is defined by \eqref{2defE} with $\ell = m-3$. To estimate $|\hat{u}_1|^2$ in \eqref{2eneq-5'}, we next employ \eqref{2estU2}. Namely, we multiply \eqref{2estU2} and \eqref{2eneq-5'} by $(1+\xi^2)^{m-3}$ and $\beta_{m-2} \alpha_m \xi^{m/2-2}$, respectively.
Then we combine the resultant equations, obtaining
\begin{multline*} \partial_t \big\{ \beta_{m-2} \alpha_m \xi^{m/2-2} \mathcal{E}_{m-3} + (1+\xi^2)^{m-3} E_1^{(m)} \big\} \\ + \beta_{m-2} \alpha_m c \xi^{m/2-2} \sum_{\ell = 3}^{m} \xi^{2([\ell/2]-1)}(1+\xi^2)^{m - \ell}| \hat{u}_{\ell}|^2 + \alpha_m (1 - \beta_{m-2})\xi^{m/2} (1+\xi^2)^{m-3} |\hat{u}_{1}|^2 \\ \le C \xi^{m/2-2} (1+\xi^2)^{m-2} |\hat{u}_{2} |^2 + \gamma \alpha_m \xi^{m/2-1} (1+\xi^2)^{m-3} \Re (i \hat{u}_{1} \bar{\hat{u}}_{2}) \\ + \xi (1+\xi^2)^{m-3} \, \Re (i^{m/2+1}\mathcal{U}_m \bar{\hat{u}}_{2}), \end{multline*}
where we have defined $\alpha_m = \prod_{j=2}^{m/2} a_{2j}$. Here, taking $\beta_{m-2}$ suitably small and using the Young inequality, we get
\begin{multline}\label{2eneq-6} \partial_t \big\{ \beta_{m-2} \alpha_m \xi^{m/2-2} \mathcal{E}_{m-3} + (1+\xi^2)^{m-3} E_1^{(m)} \big\} \\ + c \xi^{m/2-2} \sum_{\ell = 3}^{m} \xi^{2([\ell/2]-1)}(1+\xi^2)^{m - \ell}| \hat{u}_{\ell}|^2 + c \xi^{m/2} (1+\xi^2)^{m-3} |\hat{u}_{1}|^2 \\ \le C \xi^{m/2-2} (1+\xi^2)^{m-2} |\hat{u}_{2} |^2 + \xi (1+\xi^2)^{m-3} \, \Re (i^{m/2+1}\mathcal{U}_m \bar{\hat{u}}_{2}). \end{multline}
For the last term of the right-hand side in \eqref{2eneq-6}, we note that
\begin{multline*} \mathcal{U}_{m} = \Big( \prod_{j=0}^{m/2-3} a_{m- 2j}\Big) (-i\xi)^{m/2-2} \hat u_{4} + \Big( \prod_{j=2}^{m/2-1} a_{2j+1}\Big) \hat u_{m} \\ + \sum_{k=3}^{m/2-1} \Big( \prod_{j=2}^{k-1} a_{2j+1}\Big) \Big( \prod_{j=0}^{m/2-1-k} a_{m- 2j}\Big) (-i\xi)^{m/2-k} \hat u_{2 k} \end{multline*}
for $m \ge 6$, where the last term of the right-hand side is absent in the case $m=6$.
Then, substituting the above equality into \eqref{2eneq-6}, we obtain
\begin{multline}\label{2eneq-7} \partial_t \big\{ \beta_{m-2} \alpha_m \xi^{m/2-2} \mathcal{E}_{m-3} + (1+\xi^2)^{m-3} E_1^{(m)} \big\} \\ + c \xi^{m/2-2} \sum_{\ell = 3}^{m} \xi^{2([\ell/2]-1)}(1+\xi^2)^{m - \ell}| \hat{u}_{\ell}|^2 + c \xi^{m/2} (1+\xi^2)^{m-3} |\hat{u}_{1}|^2 \\ \le C \xi^{m/2-2} (1+\xi^2)^{m-2} |\hat{u}_{2} |^2 + C \sum_{k=2}^{m/2}|\xi|^{m/2 + 1-k} (1+\xi^2)^{m-3} |\hat u_{2}| |\hat u_{2 k}|. \end{multline}
In order to control the term involving $|\hat{u}_m|$ on the right-hand side of \eqref{2eneq-7}, we introduce the following inequality:
\begin{equation*} |\xi|^{3m/2-5 } (1+\xi^2)^{m-3}|\hat{u}_{2}||\hat{u}_{m}| \le \varepsilon \xi^{3m-10}| \hat{u}_{m}|^2 + C_\varepsilon (1+\xi^2)^{2(m-3)} |\hat{u}_{2}|^2. \end{equation*}
Inspired by the above inequality, we multiply \eqref{2eneq-7} by $\xi^{3m/2-6}$ and employ this inequality. Then we obtain
\begin{multline*} \xi^{3m/2-6} \partial_t \big\{ \beta_{m-2} \alpha_m \xi^{m/2-2} \mathcal{E}_{m-3} + (1+\xi^2)^{m-3} E_1^{(m)} \big\} + (c - \varepsilon) \xi^{3m-10}| \hat{u}_{m}|^2 \\ + c \xi^{2 m- 10} \sum_{\ell = 3}^{m-1} \xi^{2[\ell/2]}(1+\xi^2)^{m - \ell}| \hat{u}_{\ell}|^2 + c \xi^{2m-6} (1+\xi^2)^{m-3} |\hat{u}_{1}|^2\\ \le \{ C \xi^{2 m-8} + C_\varepsilon (1+\xi^2)^{m-3}\} (1+\xi^2)^{m-3} |\hat{u}_{2} |^2 \\ + C \sum_{k=2}^{m/2-1} |\xi|^{2 m - 5 -k} (1+\xi^2)^{m-3} |\hat u_{2}| |\hat u_{2 k}|. \end{multline*}
Therefore, letting $\varepsilon$ suitably small, we have
\begin{multline}\label{2eneq-8} \xi^{3m/2-6} \partial_t \big\{ \beta_{m-2} \alpha_m \xi^{m/2-2} \mathcal{E}_{m-3} + (1+\xi^2)^{m-3} E_1^{(m)} \big\} \\ + c \xi^{2 m- 10} \sum_{\ell = 3}^{m} \xi^{2[\ell/2]}(1+\xi^2)^{m - \ell}| \hat{u}_{\ell}|^2 + c \xi^{2m-6} (1+\xi^2)^{m-3} |\hat{u}_{1}|^2\\ \le C (1+\xi^2)^{2(m-3)} |\hat{u}_{2} |^2 + C \sum_{k=2}^{m/2-1} |\xi|^{2 m - 5 -k} (1+\xi^2)^{m-3} |\hat u_{2}| |\hat u_{2 k}|.
\end{multline}
Moreover, applying the inequality
\begin{multline*} |\xi|^{2m-5-k} (1+\xi^2)^{m-3} |\hat{u}_{2}||\hat{u}_{2k}| \\ \le \varepsilon \xi^{2m -10 + 2k}(1+\xi^2)^{m- 2k}| \hat{u}_{2k}|^2 + C_\varepsilon \xi^{2m-4k} (1+\xi^2)^{m -6 + 2k} |\hat{u}_{2}|^2 \end{multline*}
to \eqref{2eneq-8}, we can get
\begin{multline}\label{2eneq-9} \partial_t \mathcal{E}_{m-2} + c \xi^{2 m- 10} \sum_{\ell = 3}^{m} \xi^{2[\ell/2]}(1+\xi^2)^{m - \ell}| \hat{u}_{\ell}|^2 \\ + c \xi^{2m-6} (1+\xi^2)^{m-3} |\hat{u}_{1}|^2 \le C (1+\xi^2)^{2(m-3)} |\hat{u}_{2} |^2, \end{multline}
where we have defined $\mathcal{E}_{m-2} = \xi^{3m/2-6} ( \beta_{m-2} \alpha_m \xi^{m/2-2} \mathcal{E}_{m-3} + (1+\xi^2)^{m-3} E_1^{(m)} )$. Finally, multiplying the basic energy equality \eqref{2eq} and \eqref{2eneq-9} by $(1+\xi^2)^{2(m-3)}$ and $\beta_{m-1}$, respectively, combining the resultant equations and letting $\beta_{m-1}$ suitably small, we obtain
\begin{multline}\label{2eneq-10} \partial_t \Big\{ \frac{1}{2}(1+\xi^2)^{2(m-3)}|\hat{u}|^2 + \beta_{m-1} \mathcal{E}_{m-2} \Big\} + c \xi^{2 m- 6} (1+\xi^2)^{m - 3}| \hat{u}_{1}|^2 \\ + c (1+\xi^2)^{2(m-3)} |\hat{u}_{2} |^2 + c \xi^{2 m- 10} \sum_{\ell = 3}^{m} \xi^{2[\ell/2]}(1+\xi^2)^{m - \ell}| \hat{u}_{\ell}|^2 \le 0. \end{multline}
Thus, integrating the above estimate with respect to $t$, we obtain the following energy estimate:
\begin{multline}\label{2eneq-11} |\hat{u}(t,\xi)|^2 + \int^t_0 \Big\{ \frac{\xi^{2m-6}}{(1+\xi^2)^{m-3}} |\hat u_{1}|^2 + |\hat u_{2}|^2 \\ + \frac{\xi^{2 m- 10}}{(1+\xi^2)^{m-3}} \sum_{\ell = 3}^{m} \frac{\xi^{2[\ell/2]}}{(1+\xi^2)^{\ell -3}}| \hat{u}_{\ell}|^2 \Big\} d\tau \le C|\hat{u}(0,\xi)|^2. \end{multline}
Here we have used the following inequality:
\begin{equation}\label{2eneq-12} c |\hat{u}|^2 \le \frac{1}{2} |\hat{u}|^2 + \frac{\beta_{m-1}}{(1+\xi^2)^{2(m-3)}} \mathcal{E}_{m-2} \le C |\hat{u}|^2 \end{equation}
for suitably small $\beta_{m-1}$.
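The weights appearing in \eqref{2eneq-11} encode dissipation of strength $\lambda(\xi)$ at each fixed frequency. As a numerical sanity check (for $m = 6$, with test coefficients of ours), one can compare the spectral abscissa of the coefficient matrix $-(i\xi A_m + L_m)$ with $\lambda(\xi) = \xi^{3m-10}/(1+\xi^2)^{2(m-3)}$ from Theorem \ref{thm2}: the abscissa is strictly negative for every $\xi \neq 0$, and is expected to degenerate like $\xi^{8}$ as $\xi \to 0$ and like $\xi^{-4}$ as $\xi \to \infty$, which is the regularity-loss structure.

```python
import numpy as np

# m = 6 matrices of (2Tmat) with test coefficients (ours); u_t = -(i xi A + L) u.
gamma, a4, a5, a6 = 1.0, 1.0, 1.0, 1.0
A = np.zeros((6, 6)); L = np.zeros((6, 6))
A[0, 1] = A[1, 0] = 1.0; A[2, 3] = A[3, 2] = a4; A[4, 5] = A[5, 4] = a6
L[1, 1] = gamma; L[1, 2], L[2, 1] = 1.0, -1.0; L[3, 4], L[4, 3] = a5, -a5

def abscissa(xi):
    """Largest real part of an eigenvalue of -(i xi A + L)."""
    return np.linalg.eigvals(-(1j * xi * A + L)).real.max()

lam = lambda xi: xi**8 / (1 + xi**2)**6   # lambda(xi) of Theorem thm2 for m = 6
for xi in (0.2, 0.5, 1.0, 3.0, 10.0):
    assert abscissa(xi) < 0.0             # strict decay for every fixed xi != 0
    print(f"xi={xi:5.1f}  abscissa={abscissa(xi):+.3e}  lambda(xi)={lam(xi):.3e}")
```

The printed comparison illustrates the two frequency regimes behind the decay estimate of Theorem \ref{thm2}: the low-frequency rate $(1+t)^{-\frac{1}{3m-10}(\frac12+k)}$ and the high-frequency, regularity-loss rate $(1+t)^{-\ell/(m-2)}$.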
Furthermore, the estimate \eqref{2eneq-10} combined with \eqref{2eneq-12} gives us the following pointwise estimate: \begin{equation} \notag |\hat{u}(t,\xi)| \le C e^{- c \lambda(\xi)t} |\hat{u}(0,\xi)|, \qquad \lambda(\xi) = \frac{\xi^{3m-10}}{(1+\xi^2)^{2(m-3)}}. \end{equation} This proves \eqref{point2} and completes the proof of Theorem \ref{thm2}. \subsection{Construction of the matrices $K$ and $S$} In this subsection, inspired by the energy method employed in Subsections 3.2 and 3.3, we derive the desired matrices $K$ and $S$. Based on the energy method of Step 2 in Subsection 3.2, we first introduce the following $m \times m$ matrices: \begin{equation*} K_1 = \left( \begin{array}{cccc:c} {0} & {1} & {0} & {0} & \\ {-1} & {0} & {0} & {0} & \\ {0} & {0} & {0} & {0} & \mbox{\smash{\huge\textit{O}}} \\ {0} & {0} & {0} & {0} & \\ \hdashline & & & & \\ & \mbox{\smash{\huge\textit{O}}} & & & \mbox{\smash{\huge\textit{O}}} \\ \end{array} \right), \qquad K_4 = a_4 \left( \begin{array}{cccc:c} {0} & {0} & {0} & {0} & \\ {0} & {0} & {0} & {0} & \\ {0} & {0} & {0} & {-1} & \mbox{\smash{\huge\textit{O}}} \\ {0} & {0} & {1} & {0} & \\ \hdashline & & & & \\ & \mbox{\smash{\huge\textit{O}}} & & & \mbox{\smash{\huge\textit{O}}} \\ \end{array} \right). \end{equation*} Then, we multiply \eqref{Fsys1} by $-i\xi K_1$ and take the inner product with $\hat u$. 
Moreover, taking the real part of the resultant equation, we have \begin{equation}\label{2FsysK1-1} -\frac{1}{2} \xi \partial_t \langle i K_1 \hat u, \hat u \rangle + \xi^2 \langle [K_1A_m]^{\rm sy} \hat u, \hat u \rangle - \xi \langle i [K_1L_m]^{\rm asy} \hat u, \hat u \rangle = 0, \end{equation} where \begin{equation*} K_1A_m = \left( \begin{array}{cccc:cc} {1} & {0} & {0} & {0} & \\ {0} & {-1} & {0} & {0} & \\ {0} & {0} & {0} & {0} & \mbox{\smash{\huge\textit{O}}} \\ {0} & {0} & {0} & {0} & \\ \hdashline & & & & & \\ & \mbox{\smash{\huge\textit{O}}} & & & \mbox{\smash{\huge\textit{O}}} \\ \end{array} \right), \qquad K_1 L_m = \left( \begin{array}{cccc:c} {0} & {\gamma} & {1} & {0} & \\ {0} & {0} & {0} & {0} & \\ {0} & {0} & {0} & {0} & \mbox{\smash{\huge\textit{O}}} \\ {0} & {0} & {0} & {0} & \\ \hdashline & & & & \\ & \mbox{\smash{\huge\textit{O}}} & & & \mbox{\smash{\huge\textit{O}}} \\ \end{array} \right). \end{equation*} The equality \eqref{2FsysK1-1} is equivalent to \eqref{2dissipation6-4'}. 
Similarly, by using the matrix $K_4$, we can obtain \begin{equation}\label{2FsysK4-1} -\frac{1}{2} \xi \partial_t \langle i K_4 \hat u, \hat u \rangle + \xi^2 \langle [K_4A_m]^{\rm sy} \hat u, \hat u \rangle - \xi \langle i [K_4L_m]^{\rm asy} \hat u, \hat u \rangle = 0, \end{equation} where \begin{equation*} \begin{split} K_4A_m & = a_4^2 \left( \begin{array}{cccc:c} {0} & {0} & {0} & {0} & \\ {0} & {0} & {0} & {0} & \\ {0} & {0} & {-1} & {0} & \mbox{\smash{\huge\textit{O}}} \\ {0} & {0} & {0} & {1} & \\ \hdashline & & & & \\ & \mbox{\smash{\huge\textit{O}}} & & & \mbox{\smash{\huge\textit{O}}} \\ \end{array} \right), \qquad K_4 L_m = - a_4 \left( \begin{array}{cccc:cc} {0} & {0} & {0} & {0} & {0} & \\ {0} & {0} & {0} & {0} & {0} & \\ {0} & {0} & {0} & {0} & {a_5} & \mbox{\smash{\huge\textit{O}}} \\ {0} & {1} & {0} & {0} & {0} & \\ \hdashline & & & & & \\ & \mbox{\smash{\huge\textit{O}}} & & & & \mbox{\smash{\huge\textit{O}}} \\ \end{array} \right). \end{split} \end{equation*} The equality \eqref{2FsysK4-1} is equivalent to \eqref{2dissipation6-5'}. 
\medskip We next introduce \begin{equation*} S_3 = \left( \begin{array}{cccc:c} {0} & {0} & {0} & {0} & \\ {0} & {0} & {1} & {0} & \\ {0} & {1} & {0} & {0} & \mbox{\smash{\huge\textit{O}}} \\ {0} & {0} & {0} & {0} & \\ \hdashline & & & & \\ & \mbox{\smash{\huge\textit{O}}} & & & \mbox{\smash{\huge\textit{O}}} \\ \end{array} \right), \qquad \tilde{S}_{\ell} = \hskip -3mm \bordermatrix*{ & & & 1 & & & & & \cr & \mbox{\smash{\huge\textit{O}}} & & 0 &\mbox{\smash{\huge\textit{O}}} & & & & \cr & & &\vdots & & & & & \cr 1 & 0 &\cdots & 0 & \cdots &0 &\ell & & \cr & \mbox{\smash{\huge\textit{O}}} & & \vdots &\mbox{\smash{\huge\textit{O}}} & & & & \cr & & & 0 & & & & & \cr & & & \ell & & & & & } \end{equation*} for $2 \le \ell \le m-1$. Then, by using the same argument, we can show that the equality \begin{equation}\label{Fsys2S} \frac{1}{2} \partial_t \langle S_{3} \hat u, \hat u \rangle + \xi \langle i [S_{3} A_m]^{\rm asy} \hat u, \hat u \rangle + \langle [S_{3}L_m]^{\rm sy} \hat u, \hat u \rangle = 0, \end{equation} with \begin{equation*} S_3A_m = \left( \begin{array}{cccc:cc} {0} & {0} & {0} & {0} & \\ {0} & {0} & {0} & {a_4} & \\ {1} & {0} & {0} & {0} & \mbox{\smash{\huge\textit{O}}} \\ {0} & {0} & {0} & {0} & \\ \hdashline & & & & & \\ & \mbox{\smash{\huge\textit{O}}} & & & \mbox{\smash{\huge\textit{O}}} \\ \end{array} \right), \qquad S_3 L_m = \left( \begin{array}{cccc:c} {0} & {0} & {0} & {0} & \\ {0} & {-1} & {0} & {0} & \\ {0} & {\gamma} & {1} & {0} & \mbox{\smash{\huge\textit{O}}} \\ {0} & {0} & {0} & {0} & \\ \hdashline & & & & \\ & \mbox{\smash{\huge\textit{O}}} & & & \mbox{\smash{\huge\textit{O}}} \\ \end{array} \right) \end{equation*} is equivalent to \eqref{2dissipation6-1'}. 
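Since the matrices above enter the energy method through their symmetric and skew-symmetric parts, it is worth recording their basic algebraic properties: $K_1$ and $K_4$ are skew-symmetric, while $S_3$ is symmetric. A minimal plain-Python check of the nonzero $4 \times 4$ blocks (the numerical value chosen for $a_4$ below is arbitrary and only for illustration):

```python
def transpose(M):
    return [list(row) for row in zip(*M)]

def negate(M):
    return [[-x for x in row] for row in M]

a4 = 2.0  # arbitrary nonzero value; the text only fixes K_4 up to the factor a_4

# nonzero 4x4 blocks of K_1, K_4 and S_3 as displayed above
K1 = [[0, 1, 0, 0], [-1, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]]
K4 = [[0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, -a4], [0, 0, a4, 0]]
S3 = [[0, 0, 0, 0], [0, 0, 1, 0], [0, 1, 0, 0], [0, 0, 0, 0]]

assert transpose(K1) == negate(K1)  # K_1 is skew-symmetric
assert transpose(K4) == negate(K4)  # K_4 is skew-symmetric
assert transpose(S3) == S3          # S_3 is symmetric
```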
Similarly, we derive that \begin{equation}\label{FsysS-ell} \frac{1}{2} \partial_t \langle \tilde{S}_{2j} \hat u, \hat u \rangle + \xi \langle i [\tilde{S}_{2j} A_m]^{\rm asy} \hat u, \hat u \rangle + \langle [\tilde{S}_{2j}L_m]^{\rm sy} \hat u, \hat u \rangle = 0, \end{equation} with \begin{equation*} \tilde{S}_{2j} A_m = \hskip-12mm \bordermatrix*{ & & a_{2j} & & & & & \cr & \mbox{\smash{\huge\textit{O}}} & 0 &\mbox{\smash{\huge\textit{O}}} & & & & \cr & &\vdots & & & & & \cr 0 \ \ 1\ \ 0 &\cdots &0 & \cdots &0 &\ 2j & & \cr & \mbox{\smash{\huge\textit{O}}} & \vdots &\mbox{\smash{\huge\textit{O}}} & & & & \cr & & 0 & & & & & \cr & & 2j-1 & & & & & } \end{equation*} and \begin{equation*} \tilde{S}_{2j} L_m = \hskip-3mm \bordermatrix*{ 0 & \cdots &0 \ \ a_{2j+1} \ \ 0 & \cdots & 0& & &\cr & & 0 & & & & \cr & \mbox{\smash{\huge\textit{O}}} & \vdots &\mbox{\smash{\huge\textit{O}}} & & & && \cr & & 0 & & & && & \cr & & 2j+1 & & & && & } \end{equation*} is equivalent to \begin{equation} \notag \begin{split} & \partial_t \Re(\hat u_{1} \bar{\hat{u}}_{2j}) - a_{2j} \xi \, \Re (i \hat{u}_{1} \bar{\hat{u}}_{2j-1}) + a_{2j+1} \, \Re ( \hat{u}_{1} \bar{\hat{u}}_{2j+1}) + \xi \, \Re (i \hat{u}_{2} \bar{\hat{u}}_{2j}) = 0, \end{split} \end{equation} for $2 \le j \le (m-2)/2$. Therefore, to construct \eqref{2estU}, we sum up \eqref{FsysS-ell} over $2 \le j \le (m-2)/2$, and find that \begin{equation}\label{FsysS-ell2} \frac{1}{2} \partial_t \langle \tilde{\mathcal{S}}_{m-2} \hat u, \hat u \rangle + \xi \langle i [\tilde{\mathcal{S}}_{m-2} A_m]^{\rm asy} \hat u, \hat u \rangle + \langle [\tilde{\mathcal{S}}_{m-2}L_m]^{\rm sy} \hat u, \hat u \rangle = 0 \end{equation} is equivalent to \eqref{2estU}. Here we define $\tilde{\mathcal{S}}_{2\ell}$ by $\tilde{\mathcal{S}}_4 = \tilde{S}_4$ and $$ \tilde{\mathcal{S}}_{2 \ell} = a_{2 \ell} \xi \tilde{\mathcal{S}}_{2 \ell -2} + \prod^{\ell-1}_{j=2} a_{2j +1} \tilde{S}_{2 \ell} $$ for $\ell \ge 3$. 
Consequently, multiplying \eqref{2FsysK1-1} by $\prod_{j=2}^{m/2} a_{2j} \xi^{m/2-2} $ and combining the resultant equality and \eqref{FsysS-ell2}, we obtain \begin{equation}\label{FsysS-ell3} \begin{split} &\frac{1}{2} \partial_t \Big\langle \Big( \tilde{\mathcal{S}}_{m-2} - i \prod_{j=2}^{m/2} a_{2j} \xi^{m/2-1} K_1 \Big) \hat u, \hat u \Big\rangle \\ & + \Big\langle \Big[\tilde{\mathcal{S}}_{m-2}L_m + \prod_{j=2}^{m/2} a_{2j} \xi^{m/2} K_1A_m\Big]^{\rm sy} \hat u, \hat u \Big\rangle \\ &+ \xi \langle i [\tilde{\mathcal{S}}_{m-2} A_m]^{\rm asy} \hat u, \hat u \rangle - \prod_{j=2}^{m/2} a_{2j} \xi^{m/2-1} \langle i [K_1L_m]^{\rm asy} \hat u, \hat u \rangle = 0. \end{split} \end{equation} This equality is the same as \eqref{2estU2}. \medskip Based on the energy method of Step 3 in Subsection 3.3, we next introduce the following $m \times m$ matrices: $$ S_{\ell+1} = a_{\ell+1} \hskip-3mm \bordermatrix*[{( )}]{ & & &0 & 0 & & & & \cr & \mbox{\smash{\huge\textit{O}}} & &\vdots &\vdots & & \mbox{\smash{\huge\textit{O}}} & & \cr & & &0 & 0 & & & & \cr 0 & \cdots &0 &0 & 1 & 0 &\cdots &0\ \ & \ell \cr 0 & \cdots &0 &1 &0 &0 & \cdots &0\ \ & \ell + 1\cr & & &0 & 0 & & & & \cr & \mbox{\smash{\huge\textit{O}}}& &\vdots & \vdots & & \mbox{\smash{\huge\textit{O}}} & & \cr & & &0 & 0 & & & & \cr & & &\ell & \ell+1 & & & & } $$ for $\ell = 4, 6, \cdots , m-2$. Then, we multiply \eqref{Fsys1} by $S_{\ell+1}$ and take the inner product with $\hat u$. 
Furthermore, taking the real part of the resultant equation, we obtain \begin{equation}\label{2FsysSL} \frac{1}{2} \partial_t \langle S_{\ell+1} \hat u, \hat u \rangle + \xi \langle i [S_{\ell+1} A_m]^{\rm asy} \hat u, \hat u \rangle + \langle [S_{\ell+1} L_m]^{\rm sy} \hat u, \hat u \rangle = 0 \end{equation} for $\ell = 4, 6, \cdots , m-2$, where $$ \hspace{-6mm}S_{\ell+1} A_m = a_\ell \hskip-3mm \bordermatrix*[{( )}]{ & & &0 &0 & 0 &0 & & & & \cr & \mbox{\smash{\huge\textit{O}}} & &\vdots &\vdots &\vdots &\vdots & & \mbox{\smash{\huge\textit{O}}} & & \cr & & &0 &0 & 0 &0 & & & & \cr 0 & \cdots &0 &0 &0 & 0 &a_{\ell+2} &0&\cdots &0 \ \ & \ell \cr 0 & \cdots &0 & a_{\ell} &0 &0 &0 &0& \cdots &0 \ \ & \ell+1 \cr & & &0 &0 & 0 &0 & & & & \cr & \mbox{\smash{\huge\textit{O}}}& &\vdots &\vdots & \vdots & \vdots & & \mbox{\smash{\huge\textit{O}}} & & \cr & & &0 &0 & 0 &0 & & & & \cr & & &\ell-1 &\ell & \ell+1 & \ell+2 & && & } $$ and $$ S_{\ell+1} L_m= a_{\ell+1}^2 \hskip-3mm \bordermatrix*[{( )}]{ & & &0 & 0 & & & & \cr & \mbox{\smash{\huge\textit{O}}} & &\vdots &\vdots & & \mbox{\smash{\huge\textit{O}}} & & \cr & & &0 & 0 & & & & \cr 0 & \cdots &0 &-1 & 0 & 0 &\cdots &0\ \ & \ell \cr 0 & \cdots &0 &0 &1 &0 & \cdots &0 \ \ & \ell + 1\cr & & &0 & 0 & & & & \cr & \mbox{\smash{\huge\textit{O}}}& &\vdots & \vdots & & \mbox{\smash{\huge\textit{O}}} & & \cr & & &0 & 0 & & & & \cr & & &\ell & \ell+1 & & & & } \qquad\quad . $$ We note that the equalities \eqref{2FsysSL} are equivalent to \eqref{2dissipation-6'}. 
On the other hand, we introduce the following $m \times m$ matrices: $$ K_{\ell+2} = a_{\ell + 2} \hskip-3mm \bordermatrix*[{( )}]{ & & &0 & 0 & & & & \cr & \mbox{\smash{\huge\textit{O}}} & &\vdots &\vdots & & \mbox{\smash{\huge\textit{O}}} & & \cr & & &0 & 0 & & & & \cr 0 & \cdots &0 &0 & -1 & 0 &\cdots &0\ \ & \ell+1 \cr 0 & \cdots &0 &1 &0 &0 & \cdots &0\ \ & \ell + 2 \cr & & &0 & 0 & & & & \cr & \mbox{\smash{\huge\textit{O}}}& &\vdots & \vdots & & \mbox{\smash{\huge\textit{O}}} & & \cr & & &0 & 0 & & & & \cr & & &\ell + 1 & \ell +2 & & & & } $$ \vskip2mm \noindent for $\ell = 4, 6, \cdots , m-2$. Then, we multiply \eqref{Fsys1} by $-i K_{\ell+2}$ and take the inner product with $\hat u$. Furthermore, taking the real part of the resultant equation, we obtain \begin{equation}\label{2FsysKL} -\frac{1}{2} \xi \partial_t \langle i K_{\ell+2} \hat u, \hat u \rangle + \xi^2 \langle [K_{\ell+2}A_m]^{\rm sy} \hat u, \hat u \rangle - \xi \langle i[K_{\ell+2}L_m]^{\rm asy} \hat u, \hat u \rangle = 0, \end{equation} for $\ell = 4,6, \cdots , m-4$, where $$ K_{\ell+2} A_m= a_{\ell+2}^2 \hskip-3mm \bordermatrix*[{( )}]{ & & &0 & 0 & & & & \cr & \mbox{\smash{\huge\textit{O}}} & &\vdots &\vdots & & \mbox{\smash{\huge\textit{O}}} & & \cr & & &0 & 0 & & & & \cr 0 & \cdots &0 &-1 & 0 & 0 &\cdots &0\ \ & \ell +1 \cr 0 & \cdots &0 &0 &1 &0 & \cdots &0\ \ & \ell + 2\cr & & &0 & 0 & & & & \cr & \mbox{\smash{\huge\textit{O}}}& &\vdots & \vdots & & \mbox{\smash{\huge\textit{O}}} & & \cr & & &0 & 0 & & & & \cr & & &\ell + 1 & \ell+2 & & & & } $$ and $$ K_{\ell+2} L_m = a_{\ell+2} \hskip-3mm \bordermatrix*[{( )}]{ & & &0 &0 & 0 &0 & & & & \cr & \mbox{\smash{\huge\textit{O}}} & &\vdots &\vdots &\vdots &\vdots & & \mbox{\smash{\huge\textit{O}}} & & \cr & & &0 &0 & 0 &0 & & & & \cr 0 & \cdots &0 &0 &0 & 0 & - a_{\ell+3} &0&\cdots &0\ \ & \ell+1 \cr 0 & \cdots &0 & -a_{\ell+1} &0 &0 &0 &0& \cdots &0\ \ & \ell+2 \cr & & &0 &0 & 0 &0 & & & & \cr & \mbox{\smash{\huge\textit{O}}}& &\vdots &\vdots 
& \vdots & \vdots & & \mbox{\smash{\huge\textit{O}}} & & \cr & & &0 &0 & 0 &0 & & & & \cr & & &\ell-1 &\ell & \ell + 1 & \ell+3 & && & }\qquad\quad. $$ \vskip2mm \noindent Moreover, we have \begin{equation}\label{2FsysKM} -\frac{1}{2} \xi \partial_t \langle i K_{m} \hat u, \hat u \rangle + \xi^2 \langle [K_{m} A_m]^{\rm sy} \hat u, \hat u \rangle - \xi \langle i [K_{m} L_m]^{\rm asy} \hat u, \hat u \rangle = 0, \end{equation} where \begin{equation*} K_{m}A_m = a_m^2 \left( \begin{array}{ccccc} & & & {0} & {0} \\ & \mbox{\smash{\huge\textit{O}}} & & {\vdots} & {\vdots} \\ & & & {0} & {0} \\ {0} & {\cdots} & {0} & {-1} & {0} \\ {0} & {\cdots} & {0} & {0} & {1} \\ \end{array} \right), \quad K_{m}L_m = a_{m-1}a_m \left( \begin{array}{cccccc} & & & {0} & {0} & {0}\\ & \mbox{\smash{\huge\textit{O}}} & & {\vdots} &{\vdots} & {\vdots} \\ & & & {0} &{0} & {0}\\ {0} & {\cdots} & {0} & {0} &{0} & {0} \\ {0} & {\cdots} & {0} & {-1} &{0} & {0}\\ \end{array} \right). \end{equation*} The equalities \eqref{2FsysKL} and \eqref{2FsysKM} are equivalent to \eqref{2dissipation-7'} and \eqref{2dissipation-8'}, respectively. For the rest of this subsection, we construct the desired matrices. According to the strategy of Step 3 in Subsection 3.2, we first combine \eqref{2FsysSL} and \eqref{2FsysKM}. 
More precisely, multiplying \eqref{2FsysSL} with $\ell = m-2$ and \eqref{2FsysKM} by $(1+\xi^2)$ and $\delta_1$, respectively, and combining the resultant equations, we obtain \begin{multline*} \frac{1}{2} \partial_t \big\langle \big\{(1+\xi^2)S_{m-1} - \delta_1 i \xi K_m \big\} \hat u, \hat u \big\rangle \\ +(1+\xi^2) \langle [S_{m-1}L_m]^{\rm sy} \hat u, \hat u \rangle + \delta_1 \xi^2 \langle [K_mA_m]^{\rm sy} \hat u, \hat u \rangle \\ + \xi (1+\xi^2) \langle i [S_{m-1}A_m]^{\rm asy} \hat u, \hat u \rangle - \delta_1 \xi \langle i [K_mL_m]^{\rm asy} \hat u, \hat u \rangle= 0. \end{multline*} We next multiply \eqref{2FsysKL} with $\ell = m-4$ and the above equation by $(1+\xi^2)^2$ and $\delta_2 \xi^2$, respectively, and combine the resultant equations to obtain \begin{multline*} \frac{1}{2} \partial_t \big\langle \big\{\delta_2 \xi^2((1+\xi^2)S_{m-1} - \delta_1 i \xi K_m) - i \xi (1+\xi^2)^2 K_{m-2}\big\} \hat u, \hat u \big\rangle \\ + \delta_2 \xi^2(1+\xi^2) \langle [S_{m-1}L_m]^{\rm sy} \hat u, \hat u \rangle + \xi^2 \langle [(\delta_1\delta_2 \xi^2 K_m + (1+\xi^2)^2 K_{m-2})A_m]^{\rm sy} \hat u, \hat u \rangle \\ +\delta_2 \xi^3 (1+\xi^2) \langle i [S_{m-1}A_m]^{\rm asy} \hat u, \hat u \rangle \\ - \xi \langle i [(\delta_1\delta_2 \xi^2 K_m + (1+\xi^2)^2 K_{m-2})L_m]^{\rm asy} \hat u, \hat u \rangle= 0. 
\end{multline*} Furthermore, multiplying \eqref{2FsysSL} with $\ell = m-4$ and the above equation by $(1+\xi^2)^3$ and $\delta_3$, respectively, and combining the resultant equations, we get \begin{multline}\label{2FsysSK2-1} \frac{1}{2} \partial_t \big\langle \big\{\delta_3(\delta_2 \xi^2((1+\xi^2)S_{m-1} - \delta_1 i \xi K_m) \\ - i \xi (1+\xi^2)^2 K_{m-2}) + (1+\xi^2)^3 S_{m-3}\big\} \hat u, \hat u \big\rangle \\ + (1+\xi^2) \langle [(\delta_2 \delta_3 \xi^2S_{m-1} + (1+\xi^2)^2S_{m-3} )L_m]^{\rm sy} \hat u, \hat u \rangle \\ + \delta_3 \xi^2 \langle [(\delta_1\delta_2 \xi^2 K_m + (1+\xi^2)^2 K_{m-2})A_m]^{\rm sy} \hat u, \hat u \rangle \\ +\xi (1+\xi^2) \langle i [(\delta_2 \delta_3 \xi^2S_{m-1} + (1+\xi^2)^2S_{m-3} )A_m]^{\rm asy} \hat u, \hat u \rangle \\ - \delta_3 \xi \langle i [(\delta_1\delta_2 \xi^2 K_m + (1+\xi^2)^2 K_{m-2})L_m]^{\rm asy} \hat u, \hat u \rangle= 0. \end{multline} Now, we introduce the new matrices $\mathcal{K}_{\ell}$ and $\mathcal{S}_\ell$ by $\mathcal{K}_0 = K_m$ and $$ \mathcal{K}_{\ell} = \delta_{\ell-1}\delta_{\ell} \xi^2 \mathcal{K}_{\ell -2} + (1+\xi^2)^{\ell} K_{m-\ell} $$ for even $\ell \ge 2$, and $\mathcal{S}_1 = S_{m-1}$ and $$ \mathcal{S}_{\ell} = \delta_{\ell-1}\delta_{\ell} \xi^2 \mathcal{S}_{\ell -2} + (1+\xi^2)^{\ell-1} S_{m-\ell} $$ for odd $\ell \ge 3$. Then the equation \eqref{2FsysSK2-1} is rewritten as \begin{multline*} \frac{1}{2} \partial_t \big\langle \big\{(1+\xi^2)\mathcal{S}_{3} - \delta_3 i \xi \mathcal{K}_2 \big\} \hat u, \hat u \big\rangle + (1+\xi^2) \langle [\mathcal{S}_{3}L_m]^{\rm sy} \hat u, \hat u \rangle + \delta_3 \xi^2 \langle [\mathcal{K}_{2}A_m]^{\rm sy} \hat u, \hat u \rangle \\ +\xi (1+\xi^2) \langle i [\mathcal{S}_{3} A_m]^{\rm asy} \hat u, \hat u \rangle - \delta_3 \xi \langle i [\mathcal{K}_{2}L_m]^{\rm asy} \hat u, \hat u \rangle= 0. 
\end{multline*} Consequently, by the induction argument with respect to $\ell$ in \eqref{2FsysSL} and \eqref{2FsysKL}, we arrive at \begin{multline}\label{2FsysSK2-2} \frac{1}{2} \partial_t \big\langle \big\{(1+\xi^2)\mathcal{S}_{m-5} - \delta_{m-5} i \xi \mathcal{K}_{m-6} \big\} \hat u, \hat u \big\rangle + (1+\xi^2) \langle [\mathcal{S}_{m-5}L_m]^{\rm sy} \hat u, \hat u \rangle \\ + \delta_{m-5} \xi^2 \langle [\mathcal{K}_{m-6}A_m]^{\rm sy} \hat u, \hat u \rangle +\xi (1+\xi^2) \langle i [\mathcal{S}_{m-5} A_m]^{\rm asy} \hat u, \hat u \rangle \\ - \delta_{m-5} \xi \langle i [\mathcal{K}_{m-6}L_m]^{\rm asy} \hat u, \hat u \rangle= 0. \end{multline} Applying Young's inequality to \eqref{2FsysSK2-2}, we can obtain \eqref{2eneq-4}. Moreover, we multiply \eqref{2FsysK4-1} and \eqref{2FsysSK2-2} by $(1+\xi^2)^{m-4}$ and $\delta_{m-4}\xi^2$, respectively, and combine the resultant equations. Then this yields \begin{multline*} \frac{1}{2} \partial_t \big\langle \big\{\delta_{m-4}\xi^2(1+\xi^2)\mathcal{S}_{m-5} - i \xi \mathcal{K}_{m-4} \big\} \hat u, \hat u \big\rangle \\ + \delta_{m-4}\xi^2(1+\xi^2) \langle [\mathcal{S}_{m-5}L_m]^{\rm sy} \hat u, \hat u \rangle + \xi^2 \langle [\mathcal{K}_{m-4}A_m]^{\rm sy} \hat u, \hat u \rangle \\ +\delta_{m-4}\xi^3 (1+\xi^2) \langle i [\mathcal{S}_{m-5} A_m]^{\rm asy} \hat u, \hat u \rangle - \xi \langle i [\mathcal{K}_{m-4}L_m]^{\rm asy} \hat u, \hat u \rangle= 0. \end{multline*} Similarly, we multiply \eqref{Fsys2S} and the above equation by $(1+\xi^2)^{m-3}$ and $\delta_{m-3}$, respectively, and combine the resultant equations. 
Then we get \begin{multline}\label{2FsysSK2-2'} \frac{1}{2} \partial_t \big\langle \big\{(1+\xi^2)\mathcal{S}_{m-3} - \delta_{m-3} i \xi \mathcal{K}_{m-4} \big\} \hat u, \hat u \big\rangle + (1+\xi^2) \langle [\mathcal{S}_{m-3}L_m]^{\rm sy} \hat u, \hat u \rangle \\ + \delta_{m-3} \xi^2 \langle [\mathcal{K}_{m-4}A_m]^{\rm sy} \hat u, \hat u \rangle + \xi (1+\xi^2) \langle i [\mathcal{S}_{m-3} A_m]^{\rm asy} \hat u, \hat u \rangle \\ - \delta_{m-3} \xi \langle i [\mathcal{K}_{m-4}L_m]^{\rm asy} \hat u, \hat u \rangle= 0. \end{multline} Applying Young's inequality to \eqref{2FsysSK2-2'}, we can derive \eqref{2eneq-5'}. We next employ the equality \eqref{FsysS-ell3} constructed before. Multiplying \eqref{FsysS-ell3} and \eqref{2FsysSK2-2'} by $(1+\xi^2)^{m-3}$ and $\delta_{m-2} \alpha_m \xi^{m/2-2}$, respectively, and combining the resultant equations, we get \begin{multline}\label{2FsysSK2-3} \frac{1}{2} \partial_t \big\langle \big\{(1+\xi^2)\mathcal{S}' - \alpha_{m} i \xi^{m/2-1} \mathcal{K}' \big\} \hat u, \hat u \big\rangle + (1+\xi^2) \langle [\mathcal{S}'L_m]^{\rm sy} \hat u, \hat u \rangle \\ + \alpha_{m} \xi^{m/2} \langle [\mathcal{K}' A_m]^{\rm sy} \hat u, \hat u \rangle + \xi (1+\xi^2) \langle i [\mathcal{S}' A_m]^{\rm asy} \hat u, \hat u \rangle \\ - \alpha_{m} \xi^{m/2-1} \langle i [\mathcal{K}' L_m]^{\rm asy} \hat u, \hat u \rangle= 0, \end{multline} where we have defined \begin{equation*} \begin{split} \mathcal{S}' &= \delta_{m-2} \alpha_m \xi^{m/2-2}\mathcal{S}_{m-3} + (1+\xi^2)^{m-4} \tilde{\mathcal{S}}_{m-2}, \\ \mathcal{K}' &= \delta_{m-2} \delta_{m-3} \mathcal{K}_{m-4} + (1+\xi^2)^{m-3}K_1, \end{split} \end{equation*} and where $\alpha_m = \prod_{j=2}^{m/2} a_{2j}$ is the constant defined before. From \eqref{2FsysSK2-3}, we can derive \eqref{2eneq-9}. 
Finally, multiplying \eqref{2FsysSK2-3} by $\delta_{m-1}\xi^{3m/2-6}/(1+\xi^2)^{2(m-3)}$, and combining \eqref{2eq} and the resultant equations, we can obtain \begin{multline}\label{2FsysSKfinal-1} \frac{1}{2} \partial_t \Big\langle \Big[ I + \frac{\delta_{m-1}}{(1+\xi^2)^{2(m-3)}} \big\{\xi^{3m/2-6}(1+\xi^2)\mathcal{S}' - \alpha_m i \xi^{2m-7} \mathcal{K}' \big\}\Big] \hat u, \hat u \Big\rangle \\ + \langle L_m \hat u, \hat u \rangle + \delta_{m-1} \frac{\xi^{3m/2-6}}{(1+\xi^2)^{2m-7}} \langle [\mathcal{S}'L_m]^{\rm sy} \hat u, \hat u \rangle \\ + \alpha_m \delta_{m-1} \frac{\xi^{2(m-3)}}{(1+\xi^2)^{2(m-3)}} \langle [\mathcal{K}'A_m]^{\rm sy} \hat u, \hat u \rangle - \alpha_m \delta_{m-1} \frac{\xi^{2m-7}}{(1+\xi^2)^{2(m-3)}} \langle i [\mathcal{K}' L_m]^{\rm asy} \hat u, \hat u \rangle \\ + \delta_{m-1} \frac{\xi^{3m/2-5}}{(1+\xi^2)^{2m-7}} \langle i [\mathcal{S}'A_m]^{\rm asy} \hat u, \hat u \rangle = 0, \end{multline} where $I$ denotes the identity matrix. Taking $\delta_1, \cdots, \delta_{m-1}$ suitably small, \eqref{2FsysSKfinal-1} yields the energy estimate \eqref{2eneq-11}. To be more precise, we find that \begin{equation*} \mathcal{K}_{m-4} = (1+\xi^2)^{m-4} K_{4} + \sum_{k=3}^{m/2} \prod_{j=2}^{k-1} \delta_{m-2j}\delta_{m-2j-1} \xi^{2(k-2)}(1+\xi^2)^{m-2k} K_{2k} \end{equation*} for $m \ge 6$, and hence \begin{equation}\label{2estK} \begin{split} \mathcal{K}' & = (1+\xi^2)^{m-3}K_1 + \delta_{m-2} \delta_{m-3} (1+\xi^2)^{m-4} K_{4} \\ &\qquad + \delta_{m-2} \delta_{m-3} \sum_{k=3}^{m/2} \prod_{j=2}^{k-1} \delta_{m-2j}\delta_{m-2j-1} \xi^{2(k-2)}(1+\xi^2)^{m-2k} K_{2k}. 
\end{split} \end{equation} Moreover, we find that \begin{equation*} \mathcal{S}_{m-3} = (1+\xi^2)^{m-4} S_{3} + \sum_{k=3}^{m/2} \prod_{j=2}^{k-1} \delta_{m-2j}\delta_{m-2j+1} \xi^{2(k-2)}(1+\xi^2)^{m-2k} S_{2k-1} \end{equation*} for $m \ge 6$, and $\tilde{\mathcal{S}}_{4} = \tilde{S}_{4}$, $\tilde{\mathcal{S}}_{6} = a_5 \tilde{S}_{6} + a_6 \xi \tilde{S}_4$ and \begin{equation*} \begin{split} \tilde{\mathcal{S}}_{m-2} &= \prod^{m/2 -2}_{j=2} a_{2j +1} \tilde{S}_{m-2} + \prod^{m/2 -3}_{j=1} a_{m-2j} \xi^{m/2-3} \tilde{S}_{4} \\ &\qquad + \sum_{k=2}^{m/2-3}\Big( \prod_{j=2}^{m/2-k-1} a_{2j+1} \Big) \Big( \prod_{j=1}^{k-1} a_{m-2j} \Big) \xi^{k-1} \tilde{S}_{m-2k} \end{split} \end{equation*} for $m \ge 10$, and also \begin{equation}\label{2estS} \begin{split} \mathcal{S}' & = \delta_{m-2} \alpha_m \xi^{m/2-2}(1+\xi^2)^{m-4} S_{3} \\ & + \alpha_m \sum_{k=3}^{m/2} \prod_{j=1}^{k-1} \delta_{m-2j}\delta_{m-2j+1} \xi^{m/2 + 2(k-3)}(1+\xi^2)^{m-2k} S_{2k-1} \\ &+ \prod^{m/2 -2}_{j=2} a_{2j +1}(1+\xi^2)^{m-4} \tilde{S}_{m-2} + \prod^{m/2 -3}_{j=1} a_{m-2j} \xi^{m/2-3} (1+\xi^2)^{m-4} \tilde{S}_{4} \\ &\qquad + \sum_{k=2}^{m/2-3}\Big( \prod_{j=2}^{m/2-k-1} a_{2j+1} \Big) \Big( \prod_{j=1}^{k-1} a_{m-2j} \Big) \xi^{k-1}(1+\xi^2)^{m-4} \tilde{S}_{m-2k}. \end{split} \end{equation} Therefore, by using \eqref{2estK} and \eqref{2estS}, we can estimate the dissipation terms as \begin{multline}\label{2estSK} \langle L_m \hat u, \hat u \rangle + \delta_{m-1} \frac{\xi^{3(m-4)/2}}{(1+\xi^2)^{2m-7}} \langle [\mathcal{S}'L_m]^{\rm sy} \hat u, \hat u \rangle \\ + \delta_{m-1} \frac{\xi^{2(m-3)}}{(1+\xi^2)^{2(m-3)}} \langle [\mathcal{K}'A_m]^{\rm sy} \hat u, \hat u \rangle \\ \ge c \Big\{ \frac{\xi^{2(m-3)}}{(1+\xi^2)^{m-3}} |\hat u_{1}|^2 + |\hat u_{2}|^2 + \sum_{j=2}^{m/2} \frac{\xi^{2(m+j-6)}}{(1+\xi^2)^{m+2j-7}} |\hat u_{2j-1}|^2 \\ + \sum_{j=2}^{m/2} \frac{\xi^{2(m+j-5)}}{(1+\xi^2)^{m+2j-6}} |\hat u_{2j}|^2 \Big\}, \end{multline} for suitably small $\delta_1, \cdots, \delta_{m-1}$. 
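The closed form of $\mathcal{K}_{m-4}$ recorded above can be checked against the recursion $\mathcal{K}_0 = K_m$, $\mathcal{K}_{\ell} = \delta_{\ell-1}\delta_{\ell} \xi^2 \mathcal{K}_{\ell -2} + (1+\xi^2)^{\ell} K_{m-\ell}$ by unrolling. The following plain-Python sketch (an independent sanity check, not part of the proof) does this for $m = 8, 10, 12$, treating the matrices $K_{2k}$ as formal symbols and evaluating the scalar coefficients exactly with rational arithmetic:

```python
from fractions import Fraction as F
import random

def closed_form_matches_recursion(m):
    """Check, for even m >= 8, that unrolling the recursion for Kcal_{m-4}
    reproduces the closed form stated in the text."""
    random.seed(m)
    xi = F(random.randint(1, 7), random.randint(1, 7))
    d = {j: F(random.randint(1, 7), 8) for j in range(1, m)}   # the delta_j
    w = 1 + xi**2                                              # stands for (1+xi^2)

    def single(idx):               # the matrix K_idx, kept as a formal symbol
        return {idx: F(1)}

    def comb(a, A, b, B):          # the linear combination a*A + b*B
        out = {}
        for k, v in A.items():
            out[k] = out.get(k, F(0)) + a * v
        for k, v in B.items():
            out[k] = out.get(k, F(0)) + b * v
        return out

    # recursion: Kcal_0 = K_m, Kcal_l = d_{l-1} d_l xi^2 Kcal_{l-2} + w^l K_{m-l}
    Kcal = single(m)
    for l in range(2, m - 3, 2):
        Kcal = comb(d[l - 1] * d[l] * xi**2, Kcal, w**l, single(m - l))

    # closed form of Kcal_{m-4} stated in the text
    closed = {4: w**(m - 4)}
    for k in range(3, m // 2 + 1):
        coef = F(1)
        for j in range(2, k):
            coef *= d[m - 2*j] * d[m - 2*j - 1]
        closed[2 * k] = coef * xi**(2 * (k - 2)) * w**(m - 2 * k)

    return Kcal == closed

assert all(closed_form_matches_recursion(m) for m in (8, 10, 12))
```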
We note that this estimate is the same as the dissipation part of \eqref{2eneq-11}. Consequently, we conclude that our desired symmetric matrix $S$ and skew-symmetric matrix $K$ are described as \begin{equation*} S = \frac{\xi^{3(m-4)/2}}{(1+\xi^2)^{2m-7}} \mathcal{S}', \qquad K = \frac{\xi^{2(m-3)}}{(1+\xi^2)^{2(m-3)}}\mathcal{K}'. \end{equation*} \section{Alternative approach} \subsection{General strategy} In this section, by using the Fourier energy method, we provide an alternative way to justify the dissipative structure of the linear symmetric hyperbolic system with relaxation \eqref{sys1}. The key point of the approach is to derive from the above system a new system of $m$ equations or inequalities \begin{equation*} (I_1), (I_2), \cdots, (I_j),\cdots, (I_m), \end{equation*} in the Fourier space, such that their appropriate linear combination can capture the dissipation rate of all the degenerate components over the frequency domain far from $|\xi|=0$ and $|\xi|=\infty$. Precisely, for any $0<\epsilon<M<\infty$, by considering \begin{equation} \label{aag.p1} \sum_{j=1}^mc_j I_j \end{equation} for an appropriate choice of constants $c_j>0$ $(1\leq j\leq m)$ which may depend on $\epsilon$ and $M$, we expect to obtain that for $\epsilon\leq |\xi|\leq M$, \begin{equation} \label{aag.p2} \partial_t \{|\hat u|^2 + \Re E^{int}_1 (\hat u)\} + c_{\epsilon,M} |\hat u|^2 \leq 0, \end{equation} where $c_{\epsilon,M}>0$ is a constant depending on $\epsilon$ and $M$, and $E^{int}_1 (\hat u)$ is an interactive functional such that $|\hat u|^2 + \Re E^{int}_1 (\hat u)\sim |\hat u|^2$ over $\epsilon\leq |\xi|\leq M$. To deal with the dissipation rate around $|\xi|=0$ or $|\xi|=\infty$, instead of \eqref{aag.p1}, we consider the frequency-weighted linear combination in the form of \begin{equation} \label{aag.p3} \sum_{j=1}^mc_j \frac{|\xi|^{\alpha_j}}{(1+|\xi|)^{\alpha_j +\beta_j}} I_j. 
\end{equation} Here $\alpha_j\geq 0$ and $\beta_j\geq 0$ $(1\leq j\leq m)$ are constants to be chosen such that the computations for deriving \eqref{aag.p2} can be carried out similarly so as to obtain a Lyapunov inequality of the form \begin{equation} \label{aag.p4} \partial_t \{|\hat u|^2 + \Re E^{int} (\hat u)\} +c\sum_{j=1}^m \lambda_j (\xi) |\hat u_j|^2\leq 0, \end{equation} for all $t\geq 0$ and all $\xi\in \mathbb{R}$, where $c>0$ is a constant, $\lambda_j (\xi)$ $(j=1,2,\cdots,m)$ are nonnegative rational functions of $|\xi|$, and $E^{int} (\hat u)$ is an interactive functional such that $|\hat u|^2 + \Re E^{int} (\hat u)\sim |\hat u|^2$ for all $\xi \in \mathbb{R}$. If \eqref{aag.p4} is proved, then, defining \begin{equation*} \lambda_{min} (\xi) =\min_{1\leq j\leq m} \lambda_j (\xi),\quad \xi \in \mathbb{R}, \end{equation*} it follows that \begin{equation*} |\hat u (t,\xi)|^2 \leq C e^{-c\lambda_{min} (\xi) t}|\hat u (0,\xi)|^2, \end{equation*} for all $t\geq 0$ and all $\xi\in \mathbb{R}$, which thus implies the dissipative structure of the considered system \eqref{sys1}. Observe that $\lambda_j(\xi)$ $(1\leq j\leq m)$, and hence $\lambda_{min}(\xi)$, may depend on $\alpha_j\geq 0$ and $\beta_j\geq 0$ $(1\leq j\leq m)$. In general, $\alpha_j$ and $\beta_j$ are required to satisfy a series of inequalities such that \eqref{aag.p3} can indeed be applied to deduce \eqref{aag.p4} by means of the Cauchy-Schwarz inequality. Therefore we always expect to choose the constants $\alpha_j$ and $\beta_j$ such that $\lambda_{min} (\xi)$ is optimal, in the sense that $\lambda_{min} (\xi)$ tends to zero at the slowest possible rate as $|\xi|\to 0$ or $|\xi|\to \infty$. 
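As an illustration of this optimality discussion, recall the rate $\lambda(\xi) = \xi^{3m-10}/(1+\xi^2)^{2(m-3)}$ obtained in the pointwise estimate of the previous section. Its two degeneracy rates can be read off directly: $\lambda(\xi) \sim \xi^{3m-10}$ as $|\xi| \to 0$ and $\lambda(\xi) \sim |\xi|^{-(m-2)}$ as $|\xi| \to \infty$, since $4(m-3) - (3m-10) = m-2$. A small numeric sketch (illustration only) for the case $m = 6$:

```python
def lam(xi, m):
    # the rate lambda(xi) = xi^(3m-10) / (1+xi^2)^(2(m-3))
    return xi**(3*m - 10) / (1 + xi**2)**(2*(m - 3))

m = 6
# low-frequency degeneracy: lambda(xi) ~ xi^(3m-10) = xi^8 as xi -> 0
assert abs(lam(1e-4, m) / (1e-4)**8 - 1) < 1e-6
# high-frequency degeneracy: lambda(xi) ~ xi^(-(m-2)) = xi^(-4) as xi -> infinity
assert abs(lam(1e4, m) * (1e4)**4 - 1) < 1e-6
```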
Finally, we remark that due to \eqref{aag.p2}, which holds over $\epsilon\leq |\xi|\leq M$, considering \eqref{aag.p3} is equivalent to considering $ \sum_{j=1}^mc_j|\xi|^{\alpha_j} I_j $ over $|\xi|\leq \epsilon$ with $0<\epsilon\leq 1$, and $ \sum_{j=1}^mc_j |\xi|^{-\beta_j} I_j $ over $|\xi|\geq M$ with $M\geq 1$. In this way, it is more convenient to derive the inequalities satisfied by $\lambda_j(\xi)$ $(1\leq j\leq m)$. \subsection{Revisiting Model I} By using the same strategy as in Subsections 2.2 and 2.3, one can obtain $m$ identities $(I_j)$ with $j = 1,2,\cdots,m$ as follows: \begin{eqnarray*} &&(I_1):\ \partial_t \langle i\xi \hat{u}_2,\hat{u}_1\rangle+|\xi|^2 |\hat{u}_2|^2=- \langle i\xi \hat{u}_2,\hat{u}_4\rangle + |\xi|^2 |\hat{u}_1|^2.\\ &&(I_2):\ \displaystyle \partial_t \langle -\hat u_1,\hat u_4\rangle +|\hat u_1|^2= |\hat u_4|^2 +\langle i\xi \hat u_2,\hat u_4\rangle +\langle \hat u_1, i\xi a_4 \hat u_3 +i\xi a_5 \hat u_5\rangle.\\ &&(I_3):\ \displaystyle \partial_t \{\langle i\xi a_4 \hat u_3,\hat u_4\rangle -\langle a_4 \hat u_3,\hat u_2\rangle\} +a_4^2 |\xi|^2 |\hat u_3|^2 =\\ \displaystyle &&\qquad\qquad\qquad\qquad+a_4^2 |\xi|^2 |\hat u_4|^2 +\langle i\xi a_4 \hat u_3,-i\xi a_5 \hat u_5\rangle+a_4^2 \langle i\xi \hat u_4,\hat u_2 \rangle.\\ &&(I_4):\ \partial_t \langle i \xi a_5 \hat u_{4}, \hat u_5\rangle +a_5^2 |\xi|^2 |\hat u_{4}|^2 =\langle i\xi a_5 \hat u_{4}, -i \xi a_{6} \hat u_{6} \rangle \\ &&\displaystyle \qquad\qquad\qquad\qquad+a_5^2 |\xi|^2 |\hat u_5|^2 + a_5 a_{4} |\xi|^2 \langle \hat u_{3}, \hat u_5\rangle + \langle i\xi a_5 \hat u_1, \hat u_5\rangle.\\ &&(I_{j-1}):\ \partial_t \langle i \xi a_j \hat u_{j-1}, \hat u_j \rangle +a_j^2 |\xi|^2 |\hat u_{j-1}|^2 =\langle i\xi a_j \hat u_{j-1}, -i \xi a_{j+1} \hat u_{j+1} \rangle \\ &&\displaystyle \qquad\qquad\qquad\qquad+a_j^2 |\xi|^2 |\hat u_j|^2 + a_j a_{j-1} |\xi|^2 \langle \hat u_{j -2}, \hat u_j \rangle,\quad j = 6,7,\cdots,m-1.\\ &&(I_{m-1}):\ \partial_t \langle i 
\xi a_m \hat u_{m-1},\hat u_m\rangle+ a_m^2 |\xi|^2 |\hat u_{m-1}|^2=\langle i\xi a_m \hat u_{m-1}, -\gamma \hat u_m\rangle\\ &&\displaystyle \qquad\qquad\qquad\qquad+ a_m^2 |\xi|^2 |\hat u_m|^2 +a_{m-1}a_m |\xi|^2 \langle \hat u_{m-2}, \hat u_m\rangle.\\ &&(I_m):\ \frac{1}{2} \partial_t |\hat u|^2 +\gamma |\hat u_m|^2=0. \end{eqnarray*} We note that the equations $(I_1), (I_2), (I_3), (I_4), (I_{j-1}), (I_{m-1}), (I_m)$ are parallel to \eqref{dissipation6-7'}, \eqref{dissipation6-a}, \eqref{dissipation6-8'}, \eqref{dissipation6-9'}, \eqref{dissipation-1'}, \eqref{dissipation-1'}, \eqref{eq}, respectively. Hence we omit their derivations. \medskip \noindent{\bf Step 1.} We claim that for any $0<\epsilon<M<\infty$, there is $c_{\epsilon,M}>0$ such that for all $\epsilon\leq |\xi|\leq M$, \begin{equation} \label{tm1.p2} \partial_t \{|\hat u|^2 +\Re E^{int}_1(\hat u)\} +c_{\epsilon,M}|\hat u|^2 \leq 0, \end{equation} where $E^{int}_1(\hat u)$ is an interactive functional chosen such that \begin{equation} \label{tm1.p3} |\hat u|^2 +\Re E^{int}_1(\hat u)\sim |\hat u|^2. \end{equation} \begin{proof}[Proof of claim] The key observation is that all the right-hand terms of the identities $(I_j)$ $(1\leq j\leq m)$ can be absorbed by the left-hand dissipative terms after taking an appropriate linear combination of all identities. In fact, let us define \begin{eqnarray*} E^{int}_1(\hat u)&=&c_1\langle i\xi \hat{u}_2,\hat{u}_1\rangle+c_2\langle -\hat u_1,\hat u_4\rangle \\ &&+c_3\{\langle i\xi a_4 \hat u_3,\hat u_4\rangle -\langle a_4 \hat u_3,\hat u_2\rangle\}+\sum_{j=4}^{m-1}c_j \langle i \xi a_j \hat u_{j-1}, \hat u_j\rangle. 
\end{eqnarray*} By taking the real part of each identity $(I_j)$, taking the sum $\sum_{j=1}^mc_jI_j$ with an appropriate choice of constants $c_j$ $(1\leq j\leq m)$, and applying the Cauchy-Schwarz inequality to the right-hand product terms, one can obtain \eqref{tm1.p2}, where the constants $c_j$ $(1\leq j\leq m)$, depending on $\epsilon$ and $M$, are chosen such that \begin{equation} \notag 0<c_1\ll c_2\ll \cdots \ll c_{m-2}\ll c_{m-1}\ll 1=c_m. \end{equation} The detailed proof is omitted for brevity. The equivalence \eqref{tm1.p3} holds true because $|E^{int}_1(\hat u)|\leq C_M c_{m-1} |\hat u|^2$ for some constant $C_M$ depending on $M$, together with the smallness of $c_{m-1}$. \end{proof} \noindent{\bf Step 2.} Let $|\xi|\geq M$ for $M\geq 1$. We consider the weighted linear combination of the identities $(I_j)$ $(1\leq j\leq m)$ in the form of \begin{equation} \notag I_m+\sum_{j=1}^{m-1}c_j |\xi|^{-\beta_j} I_j, \end{equation} where $c_j$ $(1\leq j\leq m-1)$ are chosen as in Step 1, and $\beta_j\geq 0$ are chosen such that all the right-hand product terms can be absorbed after using the Cauchy-Schwarz inequality. 
In fact, multiplying $(I_j)$ by $|\xi|^{-\beta_j}$, one has \begin{eqnarray*} &&(I_{\beta_1}):\ \partial_t \langle i\xi |\xi|^{-\beta_1}\hat{u}_2,\hat{u}_1\rangle+|\xi|^{2-\beta_1} |\hat{u}_2|^2=- \langle i\xi|\xi|^{-\beta_1} \hat{u}_2,\hat{u}_4\rangle + |\xi|^{2-\beta_1} |\hat{u}_1|^2.\\ &&(I_{\beta_2}):\ \displaystyle \partial_t \langle -|\xi|^{-\beta_2}\hat u_1,\hat u_4\rangle +|\xi|^{-\beta_2}|\hat u_1|^2= |\xi|^{-\beta_2}|\hat u_4|^2 +\langle i\xi|\xi|^{-\beta_2} \hat u_2,\hat u_4\rangle \\ &&\qquad\qquad+\langle \hat u_1, i\xi |\xi|^{-\beta_2}a_4 \hat u_3 +i\xi |\xi|^{-\beta_2}a_5 \hat u_5\rangle.\\ &&(I_{\beta_3}):\ \displaystyle \partial_t \{\langle i\xi |\xi|^{-\beta_3}a_4 \hat u_3,\hat u_4\rangle -\langle a_4 |\xi|^{-\beta_3}\hat u_3,\hat u_2\rangle\} +a_4^2 |\xi|^{2-\beta_3} |\hat u_3|^2 =\\ \displaystyle &&\qquad\qquad+a_4^2 |\xi|^{2-\beta_3} |\hat u_4|^2 +\langle i\xi|\xi|^{-\beta_3} a_4 \hat u_3,-i\xi a_5 \hat u_5\rangle+a_4^2 \langle i\xi |\xi|^{-\beta_3}\hat u_4,\hat u_3\rangle.\\ &&(I_{\beta_4}):\ \partial_t \langle i \xi|\xi|^{-\beta_4} a_5 \hat u_{4}, \hat u_5\rangle +a_5^2 |\xi|^{2-\beta_4} |\hat u_{4}|^2 =\langle i\xi |\xi|^{-\beta_4}a_5 \hat u_{4}, -i \xi a_{6} \hat u_{6} \rangle \\ &&\displaystyle \qquad\qquad\qquad\qquad+a_5^2 |\xi|^{2-\beta_4} |\hat u_5|^2 + a_5 a_{4} |\xi|^{2-\beta_4} \langle \hat u_{3}, \hat u_5\rangle + \langle i\xi |\xi|^{-\beta_4} a_5 \hat u_1, \hat u_5\rangle.\\ &&(I_{\beta_{j-1}}):\ \partial_t \langle i \xi |\xi|^{-\beta_{j-1}}a_j \hat u_{j-1}, \hat u_j\rangle +a_j^2 |\xi|^{2-\beta_{j-1}} |\hat u_{j-1}|^2 \\ &&\qquad\qquad=\langle i\xi |\xi|^{-\beta_{j-1}}a_j \hat u_{j-1}, -i \xi a_{j+1} \hat u_{j+1} \rangle+a_j^2 |\xi|^{2-\beta_{j-1}} |\hat u_j|^2 \\ &&\qquad\qquad\qquad\qquad+ a_j a_{j-1} |\xi|^{2-\beta_{j-1}} \langle \hat u_{j-2}, \hat u_j\rangle,\quad j=6,7,\cdots,m-1.\\ &&(I_{\beta_{m-1}}):\ \partial_t \langle i \xi |\xi|^{-\beta_{m-1}}a_m \hat u_{m-1},\hat u_m\rangle+ a_m^2 |\xi|^{2-\beta_{m-1}} |\hat 
u_{m-1}|^2\\ &&\qquad\qquad=\langle i\xi |\xi|^{-\beta_{m-1}}a_m \hat u_{m-1}, -\gamma \hat u_m\rangle+ a_m^2 |\xi|^{2-\beta_{m-1}} |\hat u_m|^2 \\ &&\qquad\qquad\qquad\qquad+a_{m-1}a_m |\xi|^{2-\beta_{m-1}} \langle \hat u_{m-2}, \hat u_m\rangle. \end{eqnarray*} We then require $\beta_j$ $(1\leq j\leq m-1)$ to satisfy the following relations. From $(I_{\beta_1})$, \begin{eqnarray*} && \beta_1-1\geq 0,\quad \beta_1-2\geq 0,\\ && 2(\beta_1-1)\geq (\beta_1-2)+(\beta_4-2),\quad \beta_1-2\geq \beta_2, \end{eqnarray*} Here, since $|\xi|\geq M$, the condition $\beta_1-1\geq 0$ guarantees that the factor $\xi |\xi|^{-\beta_1}$ in the first left-hand product term of $(I_{\beta_1})$ is bounded, and $\beta_1-2\geq 0$ guarantees that $|\xi|^{2-\beta_1}$ in the second left-hand term of $(I_{\beta_1})$ is bounded. The condition $2(\beta_1-1)\geq (\beta_1-2)+(\beta_4-2)$ guarantees that the first right-hand product term $\langle i\xi|\xi|^{-\beta_1} \hat{u}_2,\hat{u}_4\rangle$ of $(I_{\beta_1})$ can be bounded by a linear combination of the dissipative term $|\xi|^{2-\beta_1} |\hat{u}_2|^2$ in $(I_{\beta_1})$ and $|\xi|^{2-\beta_4} |\hat u_{4}|^2$ in $(I_{\beta_4})$, while $\beta_1-2\geq \beta_2$ guarantees that the second right-hand term $|\xi|^{2-\beta_1} |\hat{u}_1|^2$ of $(I_{\beta_1})$ can be bounded by the dissipative term $|\xi|^{-\beta_2}|\hat u_1|^2$ of $(I_{\beta_2})$.
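To illustrate how conditions of this type arise, consider the first right-hand product term of $(I_{\beta_1})$. A sketch of the absorption argument (our reconstruction, with $\delta>0$ a small parameter from Young's inequality):

```latex
\begin{align*}
|\langle i\xi|\xi|^{-\beta_1} \hat u_2,\hat u_4\rangle|
  \;\leq\; |\xi|^{1-\beta_1}\,|\hat u_2|\,|\hat u_4|
  \;\leq\; \frac{\delta}{2}\,|\xi|^{2-\beta_1}|\hat u_2|^2
          +\frac{1}{2\delta}\,|\xi|^{-\beta_1}|\hat u_4|^2.
\end{align*}
```

For $|\xi|\geq M$ the last term is dominated by the dissipative term $a_5^2|\xi|^{2-\beta_4}|\hat u_4|^2$ of $(I_{\beta_4})$ exactly when $-\beta_1\leq 2-\beta_4$, which is equivalent to $2(\beta_1-1)\geq(\beta_1-2)+(\beta_4-2)$.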
In exactly the same way, from $(I_{\beta_j})$ for $j=2,3,\cdots,m-1$, respectively, we require \begin{eqnarray*} && \beta_2\geq 0,\\ &&\beta_2\geq \beta_4-2,\quad 2(\beta_2-1)\geq (\beta_1-2)+(\beta_4-2),\quad \beta_2\geq \beta_3,\quad \beta_2\geq \beta_5, \end{eqnarray*} and \begin{eqnarray*} &&\beta_3-1\geq 0,\quad \beta_3\geq 0,\quad \beta_3-2\geq 0,\\ && \beta_3-2\geq \beta_4-2,\quad \beta_3\geq \beta_5,\quad 2(\beta_3-1)\geq (\beta_3-2)+(\beta_4-2), \end{eqnarray*} and \begin{eqnarray*} &&\beta_4\geq 1,\quad \beta_4\geq 2,\quad \beta_4\geq \beta_6,\quad \beta_4\geq \beta_5,\\ &&2( \beta_4-2)\geq (\beta_3-2)+(\beta_5-2),\quad 2( \beta_4- 1)\geq \beta_2+(\beta_5-2), \end{eqnarray*} and for $j=6,\cdots, m-1$, \begin{eqnarray*} && \beta_{j-1}\geq 1,\quad \beta_{j-1}\geq 2,\\ && \beta_{j-1}\geq \beta_{j+1},\quad \beta_{j-1}\geq \beta_{j},\quad 2(\beta_{j-1}-2)\geq (\beta_{j-2}-2)+(\beta_j-2), \end{eqnarray*} and \begin{eqnarray*} && \beta_{m-1}\geq 1,\quad \beta_{m-1}\geq 2,\\ && 2(\beta_{m-1} -1)\geq \beta_{m-1}-2,\quad \beta_{m-1} -2 \geq 0,\quad 2( \beta_{m-1} -2) \geq \beta_{m-2}-2. \end{eqnarray*} Let us choose \begin{equation} \notag \beta_1=4,\quad \beta_2=\beta_3=\cdots=\beta_{m-1}=2, \end{equation} which satisfies all the above inequalities for $\beta_j$ $(1\leq j\leq m-1)$. We now define \begin{eqnarray*} E^{int}_\infty(\hat u)&=&c_1\langle i\xi |\xi|^{-4}\hat{u}_2,\hat{u}_1\rangle+c_2\langle - |\xi|^{-2}\hat u_1,\hat u_4\rangle \\ &&+c_3\{\langle i\xi |\xi|^{-2}a_4 \hat u_3,\hat u_4\rangle -\langle a_4 |\xi|^{-2}\hat u_3,\hat u_2\rangle\}\\ &&+\sum_{j=4}^{m-1}c_j \langle i \xi|\xi|^{-2} a_j \hat u_{j-1}, \hat u_j\rangle. \end{eqnarray*} Then, as in Step 1, one can show that for any $M\geq 1$, there is $c_{M}>0$ such that for all $|\xi|\geq M$, \begin{equation}\notag \partial_t \{|\hat u|^2 +\Re E^{int}_\infty(\hat u)\} +c_{M}\{|\xi|^{-2}(|\hat u_1|^2+|\hat u_2|^2)+\sum_{j=3}^m|\hat u_j|^2\} \leq 0.
\end{equation} \noindent{\bf Step 3.} Let $|\xi|\leq \epsilon$ for $0<\epsilon\leq 1$. As in Step 2, we consider the weighted linear combination of identities $(I_j)$ $(1\leq j\leq m)$ in the form of \begin{equation} \notag I_m+\sum_{j=1}^{m-1}c_j |\xi|^{\alpha_j} I_j, \end{equation} where $c_j$ $(1\leq j\leq m-1)$ are chosen in terms of step 1, and $\alpha_j\geq 0$ are chosen such that all the right-hand product terms can be absorbed after using the Cauchy-Schwarz inequality. In fact, as in Step 2, multiplying $(I_j)$ by $|\xi|^{\alpha_j}$, one has \begin{eqnarray*} &&(I_{\alpha_1}):\ \partial_t \langle i\xi |\xi|^{\alpha_1}\hat{u}_2,\hat{u}_1\rangle+|\xi|^{2+\alpha_1} |\hat{u}_2|^2=- \langle i\xi|\xi|^{\alpha_1} \hat{u}_2,\hat{u}_4\rangle + |\xi|^{2+\alpha_1} |\hat{u}_1|^2.\\ &&(I_{\alpha_2}):\ \displaystyle \partial_t \langle -|\xi|^{\alpha_2}\hat u_1,\hat u_4\rangle +|\xi|^{\alpha_2}|\hat u_1|^2= |\xi|^{\alpha_2}|\hat u_4|^2 +\langle i\xi|\xi|^{\alpha_2} \hat u_2,\hat u_4\rangle \\ &&\qquad\qquad+\langle \hat u_1, i\xi |\xi|^{\alpha_2}a_4 \hat u_3 +i\xi |\xi|^{\alpha_2}a_5 \hat u_5\rangle.\\ &&(I_{\alpha_3}):\ \displaystyle \partial_t \{\langle i\xi |\xi|^{\alpha_3}a_4 \hat u_3,\hat u_4\rangle -\langle a_4 |\xi|^{\alpha_3}\hat u_3,\hat u_2\rangle\} +a_4^2 |\xi|^{2+\alpha_3} |\hat u_3|^2 =\\ \displaystyle &&\qquad\qquad+a_4^2 |\xi|^{2+\alpha_3} |\hat u_4|^2 +\langle i\xi|\xi|^{\alpha_3} a_4 \hat u_3,-i\xi a_5 \hat u_5\rangle+a_4^2 \langle i\xi |\xi|^{\alpha_3}\hat u_4,\hat u_3\rangle.\\ &&(I_{\alpha_4}):\ \partial_t \langle i \xi|\xi|^{\alpha_4} a_5 \hat u_{4}, \hat u_5\rangle +a_5^2 |\xi|^{2+\alpha_4} |\hat u_{4}|^2 =\langle i\xi |\xi|^{\alpha_4}a_5 \hat u_{4}, -i \xi a_{6} \hat u_{6} \rangle \\ &&\displaystyle \qquad\qquad\qquad\qquad+a_5^2 |\xi|^{2+\alpha_4} |\hat u_5|^2 + a_5 a_{4} |\xi|^{2+\alpha_4} \langle \hat u_{3}, \hat u_5\rangle + \langle i\xi |\xi|^{\alpha_4} a_5 \hat u_1, \hat u_5\rangle.\\ &&(I_{\alpha_{j-1}}):\ \partial_t \langle i \xi 
|\xi|^{\alpha_{j-1}}a_j \hat u_{j-1}, \hat u_j\rangle +a_j^2 |\xi|^{2+\alpha_{j-1}} |\hat u_{j-1}|^2 \\ &&\qquad\qquad=\langle i\xi |\xi|^{\alpha_{j-1}}a_j \hat u_{j-1}, -i \xi a_{j+1} \hat u_{j+1} \rangle+a_j^2 |\xi|^{2+\alpha_{j-1}} |\hat u_j|^2 \\ &&\qquad\qquad\qquad\qquad+ a_j a_{j-1} |\xi|^{2+\alpha_{j-1}} \langle \hat u_{j-2}, \hat u_j\rangle,\quad j=6,7,\cdots,m-1.\\ &&(I_{\alpha_{m-1}}):\ \partial_t \langle i \xi |\xi|^{\alpha_{m-1}}a_m \hat u_{m-1},\hat u_m\rangle+ a_m^2 |\xi|^{2+\alpha_{m-1}} |\hat u_{m-1}|^2\\ &&\qquad\qquad=\langle i\xi |\xi|^{\alpha_{m-1}}a_m \hat u_{m-1}, -\gamma \hat u_m\rangle+ a_m^2 |\xi|^{2+\alpha_{m-1}} |\hat u_m|^2 \\ &&\qquad\qquad\qquad\qquad+a_{m-1}a_m |\xi|^{2+\alpha_{m-1}} \langle \hat u_{m-2}, \hat u_m\rangle. \end{eqnarray*} As in the case of the large frequency domain, for $|\xi|\leq \epsilon$ with $\epsilon>0$, in order for all the right product terms to be bounded, from equations $(I_{\alpha_j})$ $(j=1,2,\cdots,m-1)$ above, respectively, we have to require \begin{eqnarray*} && \alpha_1+1\geq 0,\ 2(\alpha_1+1)\geq (\alpha_1+2)+(\alpha_4+2), \ \alpha_1+2\geq \alpha_2, \end{eqnarray*} and \begin{eqnarray*} && \alpha_2 \geq \alpha_4 +2,\ 2(\alpha_2+1) \geq (\alpha_1+2)+(\alpha_4+2),\ \alpha_2\geq \alpha_3,\ \alpha_2\geq \alpha_5, \end{eqnarray*} and \begin{eqnarray*} &&\alpha_3 \geq \alpha_4,\ \alpha_3\geq \alpha_5,\ 2( \alpha_3+1) \geq (\alpha_4+2)+(\alpha_1+2), \end{eqnarray*} and \begin{eqnarray*} && \alpha_4\geq \alpha_6,\ \alpha_4\geq \alpha_5,\\ &&2( \alpha_4 +2) \geq (\alpha_3+2)+(\alpha_5+2),\ 2( \alpha_4 +1) \geq \alpha_2+(\alpha_5+2), \end{eqnarray*} and for $j=6,\cdots, m-1$, \begin{eqnarray*} && \alpha_{j-1}\geq \alpha_{j+1},\ \alpha_{j-1}\geq \alpha_{j},\ 2(\alpha_{j-1}+2) \geq (\alpha_{j-2}+2)+(\alpha_j+2), \end{eqnarray*} and \begin{eqnarray*} && \alpha_{m-1}\geq 0,\ \alpha_{m-1}+2\geq 0,\ 2(\alpha_{m-1} +2) \geq \alpha_{m-2}+2. 
\end{eqnarray*} To determine the best choice of $\{\alpha_j\}_{j=1}^{m-1}$, one can see that \begin{equation} \notag \alpha_1\geq \alpha_2\geq\cdots \geq \alpha_j\geq \alpha_{j+1}\geq \cdots\geq \alpha_{m-2}\geq \alpha_{m-1}\geq \alpha_m:=0, \end{equation} with \begin{eqnarray*} && \alpha_1-\alpha_4\geq 2,\\ &&\alpha_2-\alpha_4\geq 2,\\ &&\alpha_3-\alpha_4\geq 2,\\ && \alpha_{j-1}-\alpha_{j}\leq \alpha_{j}-\alpha_{j+1},\quad 4\leq j\leq m-1. \end{eqnarray*} Therefore, the best possible choice satisfies \begin{eqnarray*} && \alpha_1-\alpha_4= 2,\\ &&\alpha_2-\alpha_4= 2,\\ &&\alpha_3-\alpha_4= 2,\\ &&2=\alpha_3-\alpha_4\leq \alpha_4-\alpha_5\leq \cdots\leq \alpha_{m-1}-\alpha_m=\alpha_{m-1}=2, \end{eqnarray*} which implies \begin{eqnarray*} && \alpha_1=\alpha_2=\alpha_3=2(m-4),\\ &&\alpha_{j}=2(m-j-1), \quad 4\leq j\leq m-1. \end{eqnarray*} We now define \begin{eqnarray*} E^{int}_0(\hat u)&=&c_1\langle i\xi |\xi|^{2(m-4)}\hat{u}_2,\hat{u}_1\rangle+c_2\langle - |\xi|^{2(m-4)}\hat u_1,\hat u_4\rangle \\ &&+c_3\{\langle i\xi |\xi|^{2(m-4)}a_4 \hat u_3,\hat u_4\rangle -\langle a_4 |\xi|^{2(m-4)}\hat u_3,\hat u_2\rangle\}\\ &&+\sum_{j=4}^{m-1}c_j \langle i \xi|\xi|^{2(m-j-1)} a_j \hat u_{j-1}, \hat u_j\rangle. \end{eqnarray*} Then, as in Step 1, one can show that for any $0<\epsilon\leq 1$, there is $c_{\epsilon}>0$ such that for all $|\xi|\leq \epsilon$, \begin{equation*} \partial_t \{|\hat u|^2 +\Re E^{int}_0(\hat u)\} +c_{\epsilon}\{|\xi|^{2m-8}|\hat u_1|^2+|\xi|^{2m-6}|\hat u_2|^2+\sum_{j=3}^m |\xi|^{2(m-j)}|\hat u_j|^2\} \leq 0, \end{equation*} which further implies that for $|\xi|\leq \epsilon$, \begin{equation} \notag \partial_t \{|\hat u|^2 +\Re E^{int}_0(\hat u)\} +c_\epsilon |\xi|^{2m-6}|\hat u|^2\leq 0.
\end{equation} \noindent{\bf Step 4.} For $\xi\in \mathbb{R}$ let us define \begin{eqnarray*} E^{int}(\hat u)&=&c_1\frac{|\xi|^{2(m-4)}}{(1+|\xi|)^{2m-4}}\langle i\xi \hat{u}_2,\hat{u}_1\rangle+c_2\frac{|\xi|^{2(m-4)}}{(1+|\xi|)^{2m-6}}\langle - \hat u_1,\hat u_4\rangle \\ &&+c_3\frac{|\xi|^{2(m-4)}}{(1+|\xi|)^{2m-6}}\{\langle i\xi a_4 \hat u_3,\hat u_4\rangle -\langle a_4 \hat u_3,\hat u_2\rangle\}\\ &&+\sum_{j=4}^{m-1}c_j \frac{|\xi|^{2(m-j-1)}}{(1+|\xi|)^{2(m-j)}} \langle i \xi a_j \hat u_{j-1}, \hat u_j\rangle. \end{eqnarray*} As in Step 2 and Step 3, we consider the weighted linear combination of identities $(I_j)$ $(1\leq j\leq m)$ in the form of \begin{multline*} I_m+c_1\frac{|\xi|^{2(m-4)}}{(1+|\xi|)^{2m-4}}I_1 +c_2\frac{|\xi|^{2(m-4)}}{(1+|\xi|)^{2m-6}}I_2\\ +c_3\frac{|\xi|^{2(m-4)}}{(1+|\xi|)^{2m-6}}I_3 +\sum_{j=4}^{m-1}c_j \frac{|\xi|^{2(m-j-1)}}{(1+|\xi|)^{2(m-j)}} I_j, \end{multline*} where $c_j$ $(1\leq j\leq m-1)$ are chosen as in Step 1. Thanks to the computations in Steps 1, 2, and 3, one can deduce in exactly the same way that for $\xi\in \mathbb{R}$, \begin{multline*} \partial_t \{|\hat u|^2 +\Re E^{int}(\hat u)\} +c\{\frac{|\xi|^{2m-8}}{(1+|\xi|)^{2m-6}}|\hat u_1|^2\\ +\frac{|\xi|^{2m-6}}{(1+|\xi|)^{2m-4}}|\hat u_2|^2+\sum_{j=3}^m \frac{|\xi|^{2(m-j)}}{(1+|\xi|)^{2(m-j)}}|\hat u_j|^2\} \leq 0, \end{multline*} which further gives \begin{equation} \notag \partial_t \{|\hat u|^2 +\Re E^{int}(\hat u)\} +c\frac{|\xi|^{2m-6}}{(1+|\xi|)^{2m-4}} |\hat u|^2\leq 0. \end{equation} Noticing $|\hat u|^2 +\Re E^{int}(\hat u)\sim |\hat u|^2$, it follows that \begin{equation} \notag |\hat u(t,\xi)|\leq C e^{-c\eta (\xi) t}|\hat u (0,\xi)|,\quad \eta (\xi)=\frac{|\xi|^{2m-6}}{(1+|\xi|)^{2m-4}}, \end{equation} for all $t\geq 0$ and all $\xi\in \mathbb{R}$. Notice that the result here is consistent with \eqref{point1} proved in Section 2.3.
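The admissibility of the exponent choices can be checked mechanically. The following short script (an independent sanity check, not part of the argument above) encodes the inequality systems of Steps 2 and 3 and verifies that $\beta_1=4$, $\beta_2=\cdots=\beta_{m-1}=2$ and $\alpha_1=\alpha_2=\alpha_3=2(m-4)$, $\alpha_j=2(m-j-1)$ satisfy all of them, with the convention $\alpha_m=\beta_m=0$ since $I_m$ enters both combinations unweighted.

```python
# Independent sanity check: encode the inequality systems of Step 2
# (exponents beta_j) and Step 3 (exponents alpha_j) and verify that the
# choices made in the text satisfy every requirement.
def check(m):
    b = {1: 4, **{j: 2 for j in range(2, m)}, m: 0}        # Step 2 choice
    a = {1: 2*(m-4), 2: 2*(m-4), 3: 2*(m-4), m: 0}         # Step 3 choice
    a.update({j: 2*(m-j-1) for j in range(4, m)})
    ineqs = [
        # from (I_{beta_1}) ... (I_{beta_4})
        b[1]-1 >= 0, b[1]-2 >= 0,
        2*(b[1]-1) >= (b[1]-2)+(b[4]-2), b[1]-2 >= b[2],
        b[2] >= 0, b[2] >= b[4]-2,
        2*(b[2]-1) >= (b[1]-2)+(b[4]-2), b[2] >= b[3], b[2] >= b[5],
        b[3]-1 >= 0, b[3] >= 0, b[3]-2 >= 0, b[3]-2 >= b[4]-2,
        b[3] >= b[5], 2*(b[3]-1) >= (b[3]-2)+(b[4]-2),
        b[4] >= 1, b[4] >= 2, b[4] >= b[6], b[4] >= b[5],
        2*(b[4]-2) >= (b[3]-2)+(b[5]-2), 2*(b[4]-1) >= b[2]+(b[5]-2),
        # from (I_{beta_{m-1}})
        b[m-1] >= 1, b[m-1] >= 2, 2*(b[m-1]-1) >= b[m-1]-2,
        b[m-1]-2 >= 0, 2*(b[m-1]-2) >= b[m-2]-2,
        # from (I_{alpha_1}) ... (I_{alpha_4}) and (I_{alpha_{m-1}})
        a[1]+1 >= 0, 2*(a[1]+1) >= (a[1]+2)+(a[4]+2), a[1]+2 >= a[2],
        a[2] >= a[4]+2, 2*(a[2]+1) >= (a[1]+2)+(a[4]+2),
        a[2] >= a[3], a[2] >= a[5],
        a[3] >= a[4], a[3] >= a[5], 2*(a[3]+1) >= (a[4]+2)+(a[1]+2),
        a[4] >= a[6], a[4] >= a[5],
        2*(a[4]+2) >= (a[3]+2)+(a[5]+2), 2*(a[4]+1) >= a[2]+(a[5]+2),
        a[m-1] >= 0, a[m-1]+2 >= 0, 2*(a[m-1]+2) >= a[m-2]+2,
    ]
    for j in range(6, m):   # the intermediate identities, j = 6, ..., m-1
        ineqs += [b[j-1] >= 1, b[j-1] >= 2, b[j-1] >= b[j+1], b[j-1] >= b[j],
                  2*(b[j-1]-2) >= (b[j-2]-2)+(b[j]-2),
                  a[j-1] >= a[j+1], a[j-1] >= a[j],
                  2*(a[j-1]+2) >= (a[j-2]+2)+(a[j]+2)]
    return all(ineqs)

print(all(check(m) for m in range(7, 13)))   # True
```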
\subsection{Revisit Model II} In this section we revisit Model II \eqref{sys1} with the coefficient matrices $A_m$ and $L_m$ defined in \eqref{2Tmat}. For simplicity of presentation, we rewrite $A_{m}$ with $m=2n$ as \begin{equation}\notag A_{2n}= \left( \begin{array}{ccccccc} 0 & a_{12} & & && & \\ a_{21} & 0 & & & && \\ & &\ 0 &\ a_{34} & && \\ & &\ a_{43} &\ 0 & & & \\ & & & &\ddots & & \\ &&& && 0& a_{2n-1,2n}\\ &&&&&a_{2n,2n-1} & 0 \end{array} \right), \end{equation} with $a_{2j-1,2j}=a_{2j,2j-1}=a_{j}$ for $1\leq j\leq n$, and also choose $L_m$ with $m=2n$ as \begin{equation}\notag L_{2n}=\left( \begin{array}{ccccccccc} 0 & & & & &&&&\\ & 1 & 1 & & &&&& \\ & -1 & 0 & & &&&& \\ & & & 0 & 1 &&&& \\ &&&-1&0&&&& \\ &&&&&\ddots&&& \\ &&&&&&0&1&\\ &&&&&&-1&0&\\ &&&&&& &&0 \end{array} \right). \end{equation} With the above notations, system \eqref{sys1} reads \begin{eqnarray*} &&\partial_t \hat u_{2j-1}+i\xi a_j \hat u_{2j} - \hat u_{2j-2}=0 \\ &&\partial_t \hat u_{2j} + i\xi a_j \hat u_{2j-1} + \hat u_{2j+1} +\delta_{2,2j}\hat u_2=0,\quad j=1,2,\cdots,n, \end{eqnarray*} with the convention that $ \hat u_{2n+1}\equiv 0$ and $\hat u_0\equiv 0$.
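As a consistency check (not part of the original text), one can verify numerically that the component equations above are exactly the entries of $\partial_t \hat u = -(i\xi A_{2n} + L_{2n})\hat u$ with the matrices just displayed; the values of $n$, $a_j$ and $\xi$ below are arbitrary test data.

```python
import numpy as np

# Arbitrary test data for the check.
rng = np.random.default_rng(0)
n = 4
a = rng.uniform(0.5, 2.0, n)                         # a_1, ..., a_n
xi = 1.3
u = rng.normal(size=2*n) + 1j*rng.normal(size=2*n)   # u[k-1] = \hat u_k

# A_{2n}: 2x2 antidiagonal blocks with entry a_j (0-based indices).
A = np.zeros((2*n, 2*n))
for j in range(n):
    A[2*j, 2*j+1] = A[2*j+1, 2*j] = a[j]
# L_{2n}: leading zero, block [[1,1],[-1,0]], then [[0,1],[-1,0]] blocks.
L = np.zeros((2*n, 2*n))
L[1, 1] = 1.0                                        # the delta_{2,2j} term
for j in range(1, n):
    L[2*j-1, 2*j] = 1.0
    L[2*j, 2*j-1] = -1.0

rhs_matrix = -(1j*xi*A + L) @ u

# Component form with the convention u_0 = u_{2n+1} = 0.
uh = lambda k: u[k-1] if 1 <= k <= 2*n else 0.0
rhs_comp = np.empty(2*n, dtype=complex)
for j in range(1, n+1):
    rhs_comp[2*j-2] = -1j*xi*a[j-1]*uh(2*j) + uh(2*j-2)
    rhs_comp[2*j-1] = -1j*xi*a[j-1]*uh(2*j-1) - uh(2*j+1) - (uh(2) if j == 1 else 0)

assert np.allclose(rhs_matrix, rhs_comp)
print("component form matches matrix form")
```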
As for Model I, we obtain the following estimates \begin{equation}\label{tm2.pe1} \frac{1}{2}\partial_t |\hat u|^2+ |\hat u_2|^2=0, \end{equation} and \begin{multline} \partial_t \Re\langle i\xi a_1\hat u_1, \sum_{j=1}^n(-i\xi)^{1-j} (\prod_{k=2}^j a_k)^{-1} \hat u_{2j}\rangle +c a_1^2 \xi^2|\hat u_1|^2\\ \lesssim (1+|\xi|)^2|\hat u_2|^2 +\Re\langle \xi^2a_1^2 \hat u_2, \sum_{j=2}^n(-i\xi)^{1-j} (\prod_{k=2}^j a_k)^{-1} \hat u_{2j}\rangle, \label{tm2.pe2a} \end{multline} and \begin{equation} \partial_t \Re\langle \hat u_{2j-1},\hat u_{2j-2}\rangle +c |\hat u_{2j-1}|^2 \lesssim |\hat u_{2j-2}|^2+\xi^2 |\hat u_{2j-3}|^2 +\Re\langle -i\xi a_j \hat u_{2j}, \hat u_{2j-2}\rangle, \label{tm2.pej1} \end{equation} and \begin{multline} \partial_t\Re \langle i\xi a_j\hat u_{2j},\hat u_{2j-1}\rangle +c a_j^2 \xi^2|\hat u_{2j}|^2 \\ \lesssim |\hat u_{2j-2}|^2+a_j^2\xi^2 |\hat u_{2j-1}|^2 +\Re\langle -i\xi a_j \hat u_{2j+1}, \hat u_{2j-1}\rangle, \label{tm2.pej2} \end{multline} for $j=2,3,\cdots,n$. Indeed, by using the equations \eqref{2eq}, \eqref{2estU2}, \eqref{2dissipation-6}, \eqref{2dissipation-7} derived in Subsection 3.3, we immediately obtain \eqref{tm2.pe1}, \eqref{tm2.pe2a}, \eqref{tm2.pej1}, \eqref{tm2.pej2}. Let us denote \eqref{tm2.pe1}, \eqref{tm2.pe2a}, \eqref{tm2.pej1}, \eqref{tm2.pej2} by $(I_1)$, $(I_2)$, $(I_{2j-1})$ and $(I_{2j})$, respectively, where $j=2,3,\cdots,n$. Consider the linear combination of all $2n$ equations \begin{equation} \notag \sum_{j=1}^n (c_{2j-1} I_{2j-1} +c_{2j} I_{2j}), \end{equation} where $c_1=1$, and $c_k>0$ $(k=2,3,\cdots,2n)$ are constants to be properly chosen.
It is straightforward to verify that for any $0<\epsilon<M<\infty$, one can choose constants $c_k$ $(1\leq k\leq 2n)$ depending on $\epsilon$ and $M$, with \begin{equation} \notag 0<c_{2n}\ll c_{2n-1}\ll \cdots \ll c_{2j}\ll c_{2j-1}\ll\cdots \ll c_3\ll c_2\ll 1=c_1, \end{equation} such that there is $c_{\epsilon,M}>0$ such that for all $\epsilon\leq |\xi|\leq M$, \begin{equation} \notag \partial_t \{|\hat u|^2 +\Re E^{int}_1(\hat u)\} +c_{\epsilon,M}|\hat u|^2 \leq 0, \end{equation} where $E^{int}_1(\hat u)$ is an interactive functional given by \begin{eqnarray} E^{int}_1(\hat u) &=&c_2 \langle i\xi a_1\hat u_1, \sum_{j=1}^n(-i\xi)^{1-j} (\prod_{k=2}^j a_k)^{-1} \hat u_{2j}\rangle\notag \\ &&+\sum_{j=2}^{n} \{c_{2j-1}\langle \hat u_{2j-1},\hat u_{2j-2}\rangle+c_{2j}\langle i\xi a_j\hat u_{2j},\hat u_{2j-1}\rangle\},\notag \end{eqnarray} satisfying \begin{equation} \notag |\hat u|^2 +\Re E^{int}_1(\hat u)\sim |\hat u|^2,\quad\text{for}\ \epsilon\leq |\xi|\leq M. \end{equation} Furthermore, we can consider the frequency weighted linear combination in the form of \begin{equation} \label{tm2.pf1} \sum_{j=1}^n \{c_{2j-1}\frac{|\xi|^{\alpha_{2j-1}}}{(1+|\xi|)^{\alpha_{2j-1}+\beta_{2j-1}}} I_{2j-1} +c_{2j} \frac{|\xi|^{\alpha_{2j}}}{(1+|\xi|)^{\alpha_{2j}+\beta_{2j}}}I_{2j}\}, \end{equation} where $\alpha_1=\beta_1=0$. As for Model I, we use the same strategy to determine the choice of constants \begin{equation}\notag \alpha_2,\alpha_3,\cdots,\alpha_{2n};\quad \beta_2,\beta_3,\cdots,\beta_{2n}.
\end{equation} In fact, by considering the low frequency domain $|\xi|\leq \epsilon$ with $\epsilon\leq 1$, $ \alpha_2,\alpha_3,\cdots,\alpha_{2n}$ are required to satisfy inequalities \begin{eqnarray*} && 2-j +\alpha_2\geq 0, j=2,3,\cdots,n,\\ && \alpha_2\geq 0,\\ &&2+\alpha_2\geq 0,\\ && \alpha_3\geq 0, 2+\alpha_3\geq 2+\alpha_2,\\ &&\alpha_4\geq 0,2+\alpha_4\geq \alpha_3,\\ &&\alpha_{2j}\geq 2+\alpha_{2j-2},2+\alpha_{2j}\geq \alpha_{2j-1},\\ &&\alpha_{2j-1}\geq 2+\alpha_{2j-2},2+\alpha_{2j-1}\geq \alpha_{2j-3},\ j=3,4,\cdots,n, \end{eqnarray*} and \begin{eqnarray*} && 2(3-j+\alpha_2)\geq \alpha_{2j}+2, j=2,\cdots,n,\\ && 1+\alpha_3 \geq \frac{2+\alpha_4}{2},\\ && \alpha_{2j}\geq \frac{1}{2}(\alpha_{2j+1} +\alpha_{2j-1})-1,\\ &&\alpha_{2j+1}\geq \frac{1}{2} (\alpha_{2j+2}+\alpha_{2j})-1, j=2,\cdots,n-1. \end{eqnarray*} One can take the best choice \begin{eqnarray*} &\displaystyle\alpha_2=4(n-2),\\ &\displaystyle \alpha_{2j-1}=\alpha_{2j}=4(n-2)+2(j-2),\ j=2,3,\cdots,n. \end{eqnarray*} Similarly, by considering the high frequency domain $|\xi|\geq M$ with $M\geq 1$, constants $\beta_2,\beta_3,\cdots,\beta_{2n}$ are required to satisfy inequalities \begin{eqnarray*} && \beta_2-2\geq 0,\\ && \beta_3\geq 0,\beta_3-2\geq \beta_2-2,\\ &&\beta_4\geq 0,\beta_4-2\geq \beta_3,\\ && \beta_{2j}\geq \beta_{2j-2}, \beta_{2j}-2\geq \beta_{2j-1},\\ && \beta_{2j-1} \geq \beta_{2j-2}-2,\beta_{2j-1}-2\geq \beta_{2j-3},\ j=3,\cdots,n, \end{eqnarray*} and \begin{eqnarray*} && 2(\beta_3-1)\geq \beta_4-2,\\ &&\beta_2+j-2\geq 0, 2(\beta_2+j-3)\geq \beta_{2j}-2,j=2,\cdots,n,\\ && 2(\beta_{2j}-1)\geq \beta_{2j+1}+\beta_{2j-1},\\ &&2 (\beta_{2j+1}-1) \geq (\beta_{2j+2}-2)+(\beta_{2j}-2),j=2,\cdots,n-1. \end{eqnarray*} One can take the best choice \begin{equation}\notag \beta_{2j}=\beta_{2j+1}=2j,\quad j=1,2,\cdots,n. 
\end{equation} Now, by \eqref{tm2.pf1}, let us define the interactive functional \begin{eqnarray*} E^{int}(\hat u) &=& c_2 \frac{|\xi|^{\alpha_2}}{(1+|\xi|)^{\alpha_2+\beta_2}} \langle i\xi a_1\hat u_1, \sum_{j=1}^n(-i\xi)^{1-j} (\prod_{k=2}^j a_k)^{-1} \hat u_{2j}\rangle\\ &&+\sum_{j=2}^n\{c_{2j-1}\frac{|\xi|^{\alpha_{2j-1}}}{(1+|\xi|)^{\alpha_{2j-1}+\beta_{2j-1}}}\langle \hat u_{2j-1},\hat u_{2j-2}\rangle\\ &&\qquad\quad+c_{2j} \frac{|\xi|^{\alpha_{2j}}}{(1+|\xi|)^{\alpha_{2j}+\beta_{2j}}}\langle i\xi a_j\hat u_{2j},\hat u_{2j-1}\rangle\}, \end{eqnarray*} that is, \begin{eqnarray*} E^{int}(\hat u) &=& c_2 \frac{|\xi|^{4n-8}}{(1+|\xi|)^{4n-6}} \langle i\xi a_1\hat u_1, \sum_{j=1}^n(-i\xi)^{1-j} (\prod_{k=2}^j a_k)^{-1} \hat u_{2j}\rangle\\ &&+\sum_{j=2}^n\{c_{2j-1}\frac{|\xi|^{4n+2j-12}}{(1+|\xi|)^{4n+4j-14}}\langle \hat u_{2j-1},\hat u_{2j-2}\rangle\\ &&\qquad\quad+c_{2j} \frac{|\xi|^{4n+2j-12}}{(1+|\xi|)^{4n+4j-12}}\langle i\xi a_j\hat u_{2j},\hat u_{2j-1}\rangle\}, \end{eqnarray*} and also define the energy dissipation rate \begin{eqnarray*} D(\hat u) &=& |\hat u_2|^2 +\frac{|\xi|^{2+\alpha_2}}{(1+|\xi|)^{\alpha_2+\beta_2}} |\hat u_1|^2 \\ &&+\sum_{j=2}^n\{\frac{|\xi|^{\alpha_{2j-1}}}{(1+|\xi|)^{\alpha_{2j-1}+\beta_{2j-1}}}|\hat u_{2j-1}|^2+\frac{|\xi|^{2+\alpha_{2j}}}{(1+|\xi|)^{\alpha_{2j}+\beta_{2j}}} |\hat u_{2j}|^2\}, \end{eqnarray*} that is, \begin{eqnarray*} D(\hat u) &=& |\hat u_2|^2 +\frac{|\xi|^{4n-6}}{(1+|\xi|)^{4n-6}} |\hat u_1|^2 \\ &&+\sum_{j=2}^n\{\frac{|\xi|^{4n+2j-12}}{(1+|\xi|)^{4n+4j-14}}|\hat u_{2j-1}|^2+\frac{|\xi|^{4n+2j-10}}{(1+|\xi|)^{4n+4j-12}} |\hat u_{2j}|^2\}. \end{eqnarray*} Then it follows that \begin{equation} \notag \partial_t \{|\hat u|^2 +\Re E^{int}(\hat u) \} +c D (\hat u)\leq 0, \end{equation} for all $t\geq 0$ and all $\xi\in \mathbb{R}$. 
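As with Model I, the two inequality systems for Model II can be verified mechanically. The script below (an independent sanity check, not part of the proof) encodes the listed low- and high-frequency requirements together with the stated best choices and the convention $\alpha_1=\beta_1=0$:

```python
# Independent sanity check for Model II: verify that the stated best choices
# of alpha_k and beta_k, with alpha_1 = beta_1 = 0, satisfy the low- and
# high-frequency requirements listed in the text.
def check_model2(n):
    a = {1: 0, 2: 4*(n-2)}
    for j in range(2, n+1):
        a[2*j-1] = a[2*j] = 4*(n-2) + 2*(j-2)
    b = {1: 0}
    for j in range(1, n+1):
        b[2*j] = 2*j
        if 2*j+1 <= 2*n:
            b[2*j+1] = 2*j
    ok = [a[2] >= 0, 2+a[2] >= 0, a[3] >= 0, 2+a[3] >= 2+a[2],
          a[4] >= 0, 2+a[4] >= a[3], 1+a[3] >= (2+a[4])/2,
          b[2]-2 >= 0, b[3] >= 0, b[3]-2 >= b[2]-2,
          b[4] >= 0, b[4]-2 >= b[3], 2*(b[3]-1) >= b[4]-2]
    ok += [2-j+a[2] >= 0 for j in range(2, n+1)]
    ok += [2*(3-j+a[2]) >= a[2*j]+2 for j in range(2, n+1)]
    ok += [b[2]+j-2 >= 0 for j in range(2, n+1)]
    ok += [2*(b[2]+j-3) >= b[2*j]-2 for j in range(2, n+1)]
    for j in range(3, n+1):
        ok += [a[2*j] >= 2+a[2*j-2], 2+a[2*j] >= a[2*j-1],
               a[2*j-1] >= 2+a[2*j-2], 2+a[2*j-1] >= a[2*j-3],
               b[2*j] >= b[2*j-2], b[2*j]-2 >= b[2*j-1],
               b[2*j-1] >= b[2*j-2]-2, b[2*j-1]-2 >= b[2*j-3]]
    for j in range(2, n):
        ok += [a[2*j] >= (a[2*j+1]+a[2*j-1])/2 - 1,
               a[2*j+1] >= (a[2*j+2]+a[2*j])/2 - 1,
               2*(b[2*j]-1) >= b[2*j+1]+b[2*j-1],
               2*(b[2*j+1]-1) >= (b[2*j+2]-2)+(b[2*j]-2)]
    return all(ok)

print(all(check_model2(n) for n in range(3, 9)))   # True
```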
Noticing \begin{equation} \notag |\hat u|^2 +\Re E^{int}(\hat u) \sim |\hat u|^2, \end{equation} and \begin{equation} \notag D (\hat u) \gtrsim \frac{|\xi|^{6n-10}}{(1+|\xi|)^{8n-12}}|\hat u|^2, \end{equation} one can see that Model II \eqref{sys1}, where the coefficient matrices $A_m$ and $L_m$ are defined in \eqref{2Tmat} with $m=2n$, enjoys the dissipative structure \begin{equation} \notag |\hat u(t,\xi)|^2 \leq C e^{-c\eta (\xi) t} |\hat u(0,\xi)|^2, \end{equation} with \begin{equation} \notag \eta (\xi)= \frac{|\xi|^{6n-10}}{(1+|\xi|)^{8n-12}}=\frac{|\xi|^{3m-10}}{(1+|\xi|)^{4m-12}}. \end{equation} Hence the derived result here is consistent with \eqref{point2} proved in Theorem \ref{thm2}. \bigskip \noindent {\sc Acknowledgments:}\ \ The first author is partially supported by Grant-in-Aid for Young Scientists (B) No.\,25800078 from Japan Society for the Promotion of Science. The second author's research was supported by the General Research Fund (Project No.~400912) from RGC of Hong Kong. The third author is partially supported by Grant-in-Aid for Scientific Research (A) No.\,22244009.
\section{Introduction} In this paper we present a framework to construct ${\mathbb Z}_2\times {\mathbb Z}_2$-graded mechanics as a one-dimensional classical theory. We extend to the ${\mathbb Z}_2\times {\mathbb Z}_2$-graded setting the results and approach employed for ordinary worldline supersymmetric \textcolor{black}{\cite{PaTo,KRT}} and superconformal \textcolor{black}{\cite{KT}} mechanics at the classical level. A separate paper is devoted to the quantization of the models introduced here.\par The theory under consideration possesses four types of time-dependent fields: ordinary bosons, two classes of fermions (fermions belonging to different classes commute among themselves) and exotic bosons which anticommute with the fermions of both classes.\par Before discussing ${\mathbb Z}_2\times {\mathbb Z}_2$-graded theories, we briefly list the main results of this work. We first introduce the basic $4$ component-field multiplets and their respective $D$-module representations (realized by $4\times 4$ matrix differential operators) of both the ${\mathbb Z}_2\times {\mathbb Z}_2$-graded superalgebra and the superconformal algebra. These results should be compared with the supermultiplets of the ${\cal N}=2$ worldline supersymmetry. In that case the basic multiplets are \textcolor{black}{\cite{PaTo}} the ``chiral" supermultiplet \cite{GatRan,GatRan2}, also known in the literature \cite{BBMO} as the ``root" supermultiplet and denoted as ``$(2,2,0)$" (in physical applications it produces $2$ propagating bosons, $2$ propagating fermions and no auxiliary field), and the real supermultiplet $(1,2,1)$ with one bosonic auxiliary field. The notation ``$(0,2,2)$" is sometimes used to denote the supermultiplet with two bosonic auxiliary fields.
In application to the ${\mathbb Z}_2\times {\mathbb Z}_2$-graded superalgebra, similar notations can be applied, but one has to discriminate between the two subcases $(1,2,1)_{[00]}$ and $(1,2,1)_{[11]}$, where the suffix denotes which bosonic field is propagating, either the ordinary one (the $(1,2,1)_{[00]}$ multiplet), or the exotic one (the $(1,2,1)_{[11]}$ multiplet). In the following we construct classical invariant actions in the Lagrangian framework for both single basic multiplets and several interacting basic multiplets. The construction relies on the ${\mathbb Z}_2\times {\mathbb Z}_2$-graded Leibniz property satisfied by the matrix differential operators closing the ${\mathbb Z}_2\times {\mathbb Z}_2$-graded superalgebra. {\textcolor{black}{Following the approach of \cite{KT}}}, ${\mathbb Z}_2\times {\mathbb Z}_2$-graded superconformal invariant actions are obtained by requiring invariance under $K$, the conformal counterpart of the time-translation generator $H$ {\textcolor{black}{(for a review of ordinary superconformal mechanics see \cite{FIL}).}} \par \textcolor{black}{ The present work is motivated by the recently introduced ${\mathbb Z}_2\times {\mathbb Z}_2$-graded} {\textcolor{black}{analogue}} \textcolor{black}{of supersymmetric and superconformal quantum mechanics \cite{Bru,BruDup,NaAmaDoi,NaAmaDoi2}. } \textcolor{black}{ It consists of models of one-particle quantum mechanics on a real line whose symmetries are described by ${\mathbb Z}_2\times{\mathbb Z}_2$-graded Lie superalgebras. Peculiar to these models is the fact that their supercharges close with commutators, instead of anticommutators; this feature, which reflects the ${\mathbb Z}_2\times{\mathbb Z}_2$-grading, implies that two types of fermions commute with each other. One should also add that in ${\mathbb Z}_2\times{\mathbb Z}_2$-graded models central elements appear naturally.
Due to properties of this type, the physical meaning of the $ {\mathbb Z}_2\times{\mathbb Z}_2$-graded models has yet to be properly understood and clarified.} {\par }\textcolor{black}{ We recall that the supersymmetry algebra with central extension naturally appears in higher dimensional Dirac actions with curved extra dimension, see \cite{Ueba}, while commuting fermions also appear in the dual double field theory \cite{BHPR} (see also \cite{CKRS} and \cite{BruIbar} for the relation to higher grading geometry). Therefore, one could expect that the $ {\mathbb Z}_2\times{\mathbb Z}_2$-graded quantum systems possess some physical relevance, at least in nonrelativistic or anyonic physics, where the spin-statistics connection does not necessarily hold. Obviously, a thorough understanding of these quantum systems is highly desirable. } \textcolor{black}{ For a better understanding of the results in \cite{Bru,BruDup,NaAmaDoi,NaAmaDoi2}, one may pose the following question: is it possible to recover and extend these given models by quantizing some classical system? This question relies on a more essential one, namely, are there examples of classical systems invariant under the $ {\mathbb Z}_2\times{\mathbb Z}_2$-graded supersymmetric transformations? And, if this is indeed so, which are the $ {\mathbb Z}_2\times{\mathbb Z}_2$-graded supersymmetric transformations? In the present work we positively answer the last two questions about classical systems. The answer to the first question about quantization is postponed to a separate paper, see \cite{akt}. } \textcolor{black}{ After the introduction \textcolor{black}{\cite{Ree,rw1,rw2,sch}} of ${\mathbb Z}_2\times{\mathbb Z}_2$-graded Lie superalgebras, a long history of investigations of $ {\mathbb Z}_2\times{\mathbb Z}_2$ (and higher) graded symmetries in physical problems began; this line of research has recently attracted renewed interest.
Several works considered enlarged symmetries in various contexts such as extensions of spacetime symmetries (beyond ordinary de-Sitter and Poincar\'e algebras), supergravity theory, quasi-spin formalism, parastatistics and non-commutative geometry, see \cite{lr,vas,jyw,zhe,Toro1,Toro2,tol2,tol,BruDup2,Me}. It was also recently revealed that the symmetries of the L\'evy-Leblond equation, which is a nonrelativistic quantum mechanical wave equation for spin 1/2 particles, are given by a ${\mathbb Z}_2\times {\mathbb Z}_2$-graded superalgebra \cite{aktt1,aktt2}. } \textcolor{black}{ Various mathematical studies of algebraic and geometric aspects of higher graded superalgebras have also been undertaken since their introduction. In this respect one of the hot topics is the geometry of higher graded manifolds, which is an extension of the geometry of supermanifolds. For those mathematical works the reader may consult the references in \cite{AiIsSe}. } The scheme of the paper is as follows: in Section {\bf 2} we review some basic features of ${\mathbb Z}_2\times{\mathbb Z}_2$-graded superalgebras and their $4\times 4$ real matrices representations. In Section {\bf 3} we introduce the $D$-module representations of the ${\mathbb Z}_2\times {\mathbb Z}_2$-graded superalgebra. In Section {\bf 4} we present the superconformal extension of the $D$-module representations. The construction of ${\mathbb Z}_2\times {\mathbb Z}_2$-graded classical invariant actions is given in Section {\bf 5}. {\textcolor{black}{We introduce in Appendix {\bf A} the scaling dimensions of the ${\mathbb Z}_2\times{\mathbb Z}_2$-graded generators and of the component fields entering the multiplets. We present in Appendix {\bf B} a derivation (that can be extended to the corresponding ${\mathbb Z}_2\times {\mathbb Z}_2$-graded case) of the ${\cal N}=2$ supersymmetric action for the real superfield. }} In the Conclusions we comment about future developments and the quantization of the models. 
\section{On ${\mathbb Z}_2\times{\mathbb Z}_2$-graded matrices} We recall at first the definition \textcolor{black}{\cite{Ree,rw1,rw2,sch}} of a ${\mathbb Z}_2\times{\mathbb Z}_2$-graded Lie superalgebra ${\cal G}$.\par It admits the decomposition \bea {\cal G}&=& {\cal G}_{00}\oplus {\cal G}_{10}\oplus{\cal G}_{01}\oplus {\cal G}_{11} \eea and is endowed with an operation $[\cdot,\cdot\}: {\cal G}\times {\cal G}\rightarrow {\cal G}$ which satisfies the following properties for any $g_a\in {\cal G}_{{\vec{\alpha}}}$, \par {1)} the ${\mathbb Z}_2\times{\mathbb Z}_2$-graded (anti)commutation relations \bea\label{anticommrel} [g_a,g_b\} &=& g_ag_b-(-1)^{{\vec{\alpha}}\cdot{\vec{\beta}}}g_bg_a, \eea \par 2) the ${\mathbb Z}_2\times{\mathbb Z}_2$-graded Jacobi identity \bea (-1)^{{\vec{\gamma}}\cdot{\vec{\alpha}}}[g_a,[g_b,g_c\} \}+ (-1)^{{\vec{\alpha}}\cdot{\vec{\beta}}}[g_b,[g_c,g_a\} \}+ (-1)^{{\vec{\beta}}\cdot{\vec{\gamma}}}[g_c,[g_a,g_b\} \}&=& 0. \eea In the above formulas $g_a, g_b, g_c$ respectively belong to the sectors ${\cal G}_{{\vec{\alpha}}},{\cal G}_{{\vec{\beta}}},{\cal G}_{{\vec{\gamma}}}$, where ${\vec{\alpha}}=(\alpha_1,\alpha_2)$ for $\alpha_{1,2}=0,1$ and ${\cal G}_{\vec\alpha}\equiv {\cal G}_{\alpha_1\alpha_2}$ (similar expressions hold for ${\vec{\beta}}$ and ${\vec{\gamma}}$). \par The scalar product ${\vec{\alpha}}\cdot{\vec{\beta}}$ is defined as \bea\label{scalarproduct} {\vec{\alpha}}\cdot{\vec{\beta}}&=& \alpha_1\beta_1+\alpha_2\beta_2. 
\eea Finally, $ [g_a,g_b\}\in {\cal G}_{{\vec{\alpha}}+{\vec{\beta}}}$ where the vector sum is defined ${\textrm{mod}}~ 2$.\par A ${\mathbb Z}_2\times{\mathbb Z}_2$-graded Lie superalgebra ${\cal G}$ can be realized by $4\times 4$, real matrices which can be accommodated into the ${\cal G}_{ij}$ sectors of ${\cal G}$ according to \bea\label{gradedmatrices} {\cal G}_{00}= \left(\begin{array}{cccc}\ast&0&0&0\\0&\ast&0&0\\0&0&\ast&0\\0&0&0&\ast\end{array}\right), &\quad& {\cal G}_{11}= \left(\begin{array}{cccc}0&\ast&0&0\\\ast&0&0&0\\0&0&0&\ast\\0&0&\ast&0\end{array}\right), \nonumber\\ {\cal G}_{10}= \left(\begin{array}{cccc}0&0&\ast&0\\0&0&0&\ast\\\ast&0&0&0\\0&\ast&0&0\end{array}\right), &\quad& {\cal G}_{01}= \left(\begin{array}{cccc}0&0&0&\ast\\ 0&0&\ast&0\\ 0&\ast&0&0\\ \ast&0&0&0\end{array}\right), \eea where the ``$\ast$" symbol denotes the non-vanishing real entries.\par The matrix generators spanning each ${\mathbb Z}_2\times{\mathbb Z}_2$-graded sector can be expressed as tensor products of the $4$ real, $2\times 2$ split-quaternion matrices $I,X,Y,A$ (see \textcolor{black}{\cite{aktt1}}) given by {\footnotesize{ \bea\label{splitquat} &I= \left(\begin{array}{cc}1&0\\0&1\end{array}\right),\quad X= \left(\begin{array}{cc}1&0\\0&-1\end{array}\right),\quad Y= \left(\begin{array}{cc}0&1\\1&0\end{array}\right),\quad A= \left(\begin{array}{cc}0&1\\-1&0\end{array}\right).& \eea }} We have \bea {\cal G}_{00}&:& \quad I\otimes I, \quad~ I\otimes X,\quad X\otimes I,\quad X\otimes X,\nonumber\\ {\cal G}_{11}&:& \quad I\otimes Y, \quad ~ I\otimes A,\quad X\otimes Y,\quad X\otimes A,\nonumber\\ {\cal G}_{10}&:& \quad Y\otimes I, \quad Y\otimes X,\quad A\otimes I,\quad A\otimes X,\nonumber\\ {\cal G}_{01}&:& \quad Y\otimes Y, \quad Y\otimes A,\quad A\otimes Y,\quad A\otimes A. 
\eea Up to an overall normalization, the most general Hermitian matrices with real coefficients and respectively belonging to the ${\cal G}_{10}$ and ${\cal G}_{01}$ sectors are \bea Q_{10}= \cos \alpha~ Y\otimes I+\sin\alpha ~Y\otimes X, &\quad& Q_{01} = \cos \beta ~Y\otimes Y +\sin\beta ~A\otimes A, \eea where $\alpha,\beta$ are arbitrary angles. If $\alpha,\beta\neq n\frac{\pi}{2}$ for $n\in {\mathbb Z}$, then both $Q_{10}^2\neq {\mathbb I}_4$ and $Q_{01}^2\neq {\mathbb I}_4$ (${\mathbb I}_4=I\otimes I$ is the $4\times 4$ identity matrix). \par Working under the assumption that $Q_{10}^2=Q_{01}^2={\mathbb I}_4$, the following choices for $Q_{10}$ are admissible: either $Q_{10}=\pm Y\otimes I$ or $Q_{10}=\pm Y\otimes X$. Up to the overall sign and without loss of generality (the second choice being recovered from the first one via a similarity transformation) we can set \bea Q_{10}&=& Y\otimes I. \eea This choice implies, up to a sign, two possible choices for $Q_{01}$: \bea {\textrm {either}}\quad Q_{01}^A =Y\otimes Y & {\textrm{or}}& Q_{01}^B = A\otimes A. \eea We explore the consequences of each of these choices.\par ~\par By assuming choice $A$ one can introduce\par ~\par $1_A$) a ${\mathbb Z}_2$-graded superalgebra spanned by the two odd generators $Q_{10}, ~Q_{01}^A$ and the two even generators ${\mathbb I}_4, ~ W= I\otimes Y$, which is defined by the non-vanishing (anti)commutators \bea \{Q_{10},Q_{10}\}=\{Q_{01}^A,Q_{01}^A\}= 2\cdot{\mathbb I}_4, &&\{Q_{10},Q_{01}^A\}= 2W.
\eea Since $\{Q_{10},Q_{01}^A\}=2W\neq 0$, this superalgebra does not correspond to the ordinary ${\cal N}=2$ worldline supersymmetry;\par ~\par $2_A$) a ${\mathbb Z}_2\times{\mathbb Z}_2$-graded superalgebra defined by the (anti)commutators \bea \relax &\{Q_{10},Q_{10}\}=\{Q_{01}^A,Q_{01}^A\}= 2\cdot{\mathbb I}_4, \quad [Q_{10},Q_{01}^A]=0.& \eea A non-vanishing ${\cal G}_{11}$ sector can be introduced by adding an operator $Z\in {\cal G}_{11}$, given by \bea Z &=& \epsilon I\otimes Y + r X\otimes Y, \quad {\textrm{with}} \quad \epsilon=0,1, \quad r \in {\mathbb R}. \eea It follows that \bea \{Q_{01}^A, Z\} = 2\epsilon Q_{10}, &\quad& \{Q_{10}, Z\} = 2\epsilon Q_{01}^A. \eea \par ~\par By assuming choice $B$ one can introduce\par ~\par $1_B$) a ${\mathbb Z}_2$-graded superalgebra given by the (anti)commutators \bea &\{Q_{10},Q_{10}\}=\{Q_{01}^B,Q_{01}^B\}= 2\cdot{\mathbb I}_4, \quad \{Q_{10},Q_{01}^B\}= 0.& \eea This superalgebra corresponds to the ${\cal N}=2$ worldline supersymmetry algebra;\par ~\par $2_B$) a ${\mathbb Z}_2\times{\mathbb Z}_2$-graded superalgebra spanned by the generators ${\mathbb I}_4,~Q_{10}, ~Q_{01}^B, ~Z=X\otimes A$ and defined by the non-vanishing (anti)commutators \bea\label{z2z2const} \relax \{Q_{10},Q_{10}\}=\{Q_{01}^B,Q_{01}^B\}= 2\cdot{\mathbb I}_4,&& [Q_{10}, Q_{01}^B] = -2Z. \eea \section{$D$-module representations and supermultiplets of the ${\mathbb Z}_2\times{\mathbb Z}_2$-graded worldline superalgebra} In analogy with the $D$-module representations \textcolor{black}{\cite{PaTo, KRT}} of ordinary worldline supersymmetry, the $D$-module representations of the ${\mathbb Z}_2\times{\mathbb Z}_2$-graded superalgebra are obtained by replacing the constant matrices with real coefficients (as those entering formula (\ref{z2z2const})) with differential matrix operators. Since these matrices with differential entries are $4\times 4$, a total of four time-dependent, real fields is required.
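The matrix relations (\ref{z2z2const}) can be verified directly. The following minimal sketch (in Python with numpy; an illustration, not part of the paper's construction) checks them for $Q_{10}=Y\otimes I$, $Q_{01}^B=A\otimes A$, $Z=X\otimes A$:

```python
import numpy as np

# split-quaternion 2x2 matrices of (splitquat): I, X, Y, A
I = np.eye(2)
X = np.diag([1., -1.])
Y = np.array([[0., 1.], [1., 0.]])
A = np.array([[0., 1.], [-1., 0.]])

Q10 = np.kron(Y, I)   # G_10 generator
Q01 = np.kron(A, A)   # G_01 generator (choice B)
Z = np.kron(X, A)     # G_11 generator
I4 = np.eye(4)

anti = lambda a, b: a @ b + b @ a   # anticommutator
comm = lambda a, b: a @ b - b @ a   # commutator

# relations (z2z2const): {Q10,Q10} = {Q01,Q01} = 2*I4, [Q10,Q01] = -2Z
assert np.allclose(anti(Q10, Q10), 2 * I4)
assert np.allclose(anti(Q01, Q01), 2 * I4)
assert np.allclose(comm(Q10, Q01), -2 * Z)
```

The same check applied to choice $A$ ($Q_{01}^A=Y\otimes Y$) would exhibit the non-vanishing anticommutator $2W$ discussed above.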
In the following the time coordinate is denoted by ``$\tau$". In the construction of the ${\mathbb Z}_2\times{\mathbb Z}_2$-graded worldline multiplets it is convenient to assume the reality conditions on all four fields. On the other hand, in the construction of sigma-models, it is sometimes convenient to assume the fields to be Hermitian and work with the Wick-rotated time coordinate $t$, given by $t=i\tau$, so that $\partial_t=-i\partial_\tau$. One can easily go back and forth from the ``Euclidean time" $\tau$ to the real time $t$ through Wick rotation. Some fields which are real in the Euclidean time version become imaginary in the real time formalism.\par The reality condition is associated with the complex conjugation (denoted by ``$\ast$"). The hermiticity condition is associated with the adjoint operator (``$\dagger$") given by a complex conjugation and a transposition (denoted by ``$T$"). The operations satisfy \bea &(A^\ast)^\ast=(A^T)^T= (A^\dagger)^\dagger=A, \quad (AB)^\ast= A^\ast B^\ast,\quad (AB)^T=B^TA^T,\quad (AB)^\dagger = B^\dagger A^\dagger.& \eea They are interrelated through \bea A^\dagger = (A^\ast)^T. 
\eea Without loss of generality, all formulas in the paper are presented for the Euclidean time $\tau$.\par A given multiplet $m^T= (x(\tau), z(\tau),\psi(\tau), \xi(\tau))$ of time-dependent fields belongs to a ${\mathbb Z}_2\times{\mathbb Z}_2$-graded vector space ${\cal V}$, \bea &x(\tau)\in{\cal V}_{00}, \quad z(\tau)\in {\cal V}_{11}, \quad\psi(\tau)\in{\cal V}_{10},\quad\xi(\tau)\in{\cal V}_{01},& \eea such that its grading is consistent with the ${\mathbb Z}_2\times{\mathbb Z}_2$-grading of the differential operators.\par A field belonging to ${\cal V}_{\epsilon_1\epsilon_2}$ is called ``even" (respectively ``odd") if $\epsilon_1+\epsilon_2=0~{\textrm{mod}} ~2$ (respectively $\epsilon_1+\epsilon_2=1~{\textrm{mod}} ~2$).\par Four types of multiplets are encountered: \bea\label{fourmultiplets} &(2,2,0), \qquad (1,2,1)_{[00]},\qquad (1,2,1)_{[11]},\qquad (0,2,2).& \eea The first multiplet corresponds to the ``root" multiplet with two propagating even fields and two propagating odd fields. The $(1,2,1)$ multiplets, just like the corresponding ${\cal N}=2$ worldline supermultiplets, correspond to one even propagating field, two odd propagating fields and one even auxiliary field. An extra piece of information has to be added. The suffix $[00]$ specifies that the even propagating field is the ordinary boson, while the $[11]$ suffix specifies that the even propagating field is the exotic boson. Finally, the $(0,2,2)$ multiplet corresponds to the case of two propagating odd and two auxiliary even fields. \par As in worldline supersymmetry, the multiplets $(1,2,1)_{[00]},~(1,2,1)_{[11]},~ (0,2,2)$ are obtained from the root multiplet $(2,2,0)$ via a dressing transformation.\par The $D$-module representations associated with the ${\mathbb Z}_2\times{\mathbb Z}_2$-graded Lie superalgebra (\ref{z2z2const}) are presented in the two following subsections.
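Whether two sectors close on a commutator or an anticommutator is dictated by the scalar product (\ref{scalarproduct}). The small sketch below (an illustration, not part of the construction) tabulates the rule:

```python
# bracket type in a Z2xZ2-graded Lie superalgebra:
# [a,b} = ab - (-1)^{alpha.beta} ba, with alpha.beta as in (scalarproduct)
def bracket_type(alpha, beta):
    s = (alpha[0] * beta[0] + alpha[1] * beta[1]) % 2
    return 'anticommutator' if s == 1 else 'commutator'

grades = {'00': (0, 0), '11': (1, 1), '10': (1, 0), '01': (0, 1)}
print(bracket_type(grades['10'], grades['10']))  # anticommutator, e.g. {Q10,Q10}
print(bracket_type(grades['10'], grades['01']))  # commutator,     e.g. [Q10,Q01]
print(bracket_type(grades['11'], grades['10']))  # anticommutator, e.g. {Z,Q10}
print(bracket_type(grades['00'], grades['11']))  # commutator,     e.g. [H,Z]
```

In particular two fields from the $10$ and $01$ sectors, despite both being odd, pair through a commutator, which is the characteristic feature of the ${\mathbb Z}_2\times{\mathbb Z}_2$ grading.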
The operators are $H\in {\cal G}_{00}, ~Z\in{\cal G}_{11}, ~Q_{10}\in {\cal G}_{10}, ~Q_{01}\in {\cal G}_{01}$. The operator $H$ is the generator of the time translation. It commutes with all algebra generators and replaces the identity ${\mathbb I}_4$ in (\ref{z2z2const}). The unnecessary label ``$B$" is dropped in the definition of the $Q_{01}$ operator. \par The ${\mathbb Z}_2\times{\mathbb Z}_2$-graded superalgebra is defined by the non-vanishing (anti)commutators \bea\label{z2z2super} \{Q_{10},Q_{10}\}=\{Q_{01},Q_{01}\}=2H, &\quad& [Q_{10},Q_{01}] = -2Z. \eea \subsection{The root multiplet} The differential operators associated with the $(2,2,0)$ root multiplet are { \footnotesize{ \bea\label{root} H= \left(\begin{array}{cccc}\partial_{\tau}&0&0&0\\0&\partial_{\tau}&0&0\\0&0&\partial_{\tau}&0\\ 0&0&0&\partial_{\tau}\end{array}\right), &\quad& Z= \left(\begin{array}{cccc}0&\partial_{\tau}&0&0\\-\partial_{\tau}&0&0&0\\0&0&0&-\partial_{\tau} \\ 0&0&\partial_{\tau}&0\end{array}\right),\nonumber\\ Q_{10}= \left(\begin{array}{cccc}0&0&1&0\\0&0&0&1\\ \partial_{\tau}&0&0&0\\ 0&\partial_{\tau}&0&0\end{array}\right), &\quad& Q_{01}= \left(\begin{array}{cccc}0&0&0&1\\0&0&-1&0\\0&-\partial_{\tau}&0&0 \\ \partial_{\tau}&0&0&0\end{array}\right). 
\eea }} The corresponding transformations of the component fields are (for simplicity, here and in the following we do not report the action of $H$, which is just the time derivative) \bea\label{rootfieldtransf} &\begin{array}{cccc}Q_{10}x={\psi},~~&Q_{10}z={\xi},~~&Q_{10}\psi={\dot x},~~&Q_{10}\xi={\dot z},~~\\ Q_{01}x={\xi},~~&Q_{01}z=-{\psi},~~&Q_{01}\psi=-{\dot z},~~&Q_{01}\xi={\dot x},~~\\ ~~Z x={\dot z},~~&~~Zz=-{\dot x},~~&~~Z \psi=-{\dot\xi},~~&~~Z \xi={\dot\psi}.~~ \end{array}& \eea \subsection{The dressed multiplets} Following the derivation \textcolor{black}{\cite{PaTo, KRT}} of the worldline supermultiplets, the remaining ${\mathbb Z}_2\times {\mathbb Z}_2$-graded multiplets are obtained from the operators given in (\ref{root}) and associated with the root multiplet, by applying the dressing transformation \bea {\frak M}&\mapsto{\frak M}'={\frak D} {\frak M} {\frak D}^{-1}. \eea In the above formula ${\frak M}$ denotes any operator in (\ref{root}), while ${\frak D}$ is a differential diagonal operator. The three consistent choices for ${\frak D}$, \bea\label{dressingmatrices} &{\frak D}_1 =diag(\partial_\tau, 1,1,1), \quad{\frak D}_2=diag(1,\partial_\tau,1,1),\quad {\frak D}_3=diag(\partial_\tau,\partial_\tau,1,1), \eea are such that the transformed operators ${\frak M}'$, despite the presence of ${\frak D}^{-1}$ in the right hand side, remain differential operators. They correspond to the $D$-module representations respectively acting on the $(1,2,1)_{[11]}$, $ (1,2,1)_{[00]}$ and $(0,2,2)$ multiplets.
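Since all entries of the root representation (\ref{root}) have constant coefficients, $\partial_\tau$ can be handled as a commuting formal symbol $d$ and the relations (\ref{z2z2super}), as well as the dressing, become polynomial matrix identities. A sketch with sympy (an illustration under this shorthand, not part of the paper's formalism; the names are hypothetical):

```python
import sympy as sp

# d stands for d/dtau; a commuting symbol suffices because all entries
# of the root representation have constant coefficients
d = sp.symbols('d')

H = sp.diag(d, d, d, d)
Z = sp.Matrix([[0, d, 0, 0], [-d, 0, 0, 0], [0, 0, 0, -d], [0, 0, d, 0]])
Q10 = sp.Matrix([[0, 0, 1, 0], [0, 0, 0, 1], [d, 0, 0, 0], [0, d, 0, 0]])
Q01 = sp.Matrix([[0, 0, 0, 1], [0, 0, -1, 0], [0, -d, 0, 0], [d, 0, 0, 0]])

# relations (z2z2super): {Q10,Q10} = {Q01,Q01} = 2H, [Q10,Q01] = -2Z
assert Q10 * Q10 == H
assert Q01 * Q01 == H
assert Q10 * Q01 - Q01 * Q10 == -2 * Z

# dressing by D1 = diag(d,1,1,1): the dressed Z stays polynomial in d
# and reproduces the (1,2,1)_[11] operator of the next subsection
D1 = sp.diag(d, 1, 1, 1)
Z_dressed = (D1 * Z * D1.inv()).applyfunc(sp.simplify)
assert Z_dressed == sp.Matrix([[0, d**2, 0, 0], [-1, 0, 0, 0],
                               [0, 0, 0, -d], [0, 0, d, 0]])
```

The inverse $1/d$ introduced by `D1.inv()` cancels in every entry, which is precisely the consistency requirement on the admissible dressings.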
They are given by: \par ~ \par {\it i}) for the $(1,2,1)_{[11]}$ multiplet the $D$-module representation is { \footnotesize{ \bea\label{12111rep} H= \left(\begin{array}{cccc}\partial_{\tau}&0&0&0\\0&\partial_{\tau}&0&0\\0&0&\partial_{\tau}&0\\ 0&0&0&\partial_{\tau}\end{array}\right), &\quad& Z= \left(\begin{array}{cccc}0&\partial_{\tau}^2&0&0\\-1&0&0&0\\0&0&0&-\partial_{\tau} \\ 0&0&\partial_{\tau}&0\end{array}\right),\nonumber\\ Q_{10}= \left(\begin{array}{cccc}0&0&\partial_\tau&0\\0&0&0&1\\ 1&0&0&0\\ 0&\partial_{\tau}&0&0\end{array}\right), &\quad& Q_{01}= \left(\begin{array}{cccc}0&0&0&\partial_{\tau}\\0&0&-1&0\\0&-\partial_{\tau}&0&0 \\ 1&0&0&0\end{array}\right). \eea } } The corresponding transformations of the component fields are \bea\label{12111transf} &\begin{array}{cccc}Q_{10}x={\dot{\psi}},~~&Q_{10}z={\xi},~~&Q_{10}\psi={x},~~&Q_{10}\xi={\dot z},~~\\ Q_{01}x={\dot{\xi}},~~&Q_{01}z=-{\psi},~~&Q_{01}\psi=-{\dot z},~~&Q_{01}\xi={x},~~\\ ~~Z x={\ddot z},~~&~~Zz=-{x},~~&~~Z \psi=-{\dot\xi},~~&~~Z \xi={\dot\psi};~~ \end{array}& \eea \par ~\par {\it ii}) for the $(1,2,1)_{[00]}$ multiplet the $D$-module representation is { \footnotesize{ \bea\label{12100rep} H= \left(\begin{array}{cccc}\partial_{\tau}&0&0&0\\0&\partial_{\tau}&0&0\\0&0&\partial_{\tau}&0\\ 0&0&0&\partial_{\tau}\end{array}\right), &\quad& Z= \left(\begin{array}{cccc}0&1&0&0\\-\partial_{\tau}^2&0&0&0\\0&0&0&-\partial_{\tau} \\ 0&0&\partial_{\tau}&0\end{array}\right),\nonumber\\ Q_{10}= \left(\begin{array}{cccc}0&0&1&0\\0&0&0&\partial_{\tau}\\ \partial_{\tau}&0&0&0\\ 0&1&0&0\end{array}\right), &\quad& Q_{01}= \left(\begin{array}{cccc}0&0&0&1\\0&0&-\partial_\tau&0\\0&-1&0&0 \\ \partial_{\tau}&0&0&0\end{array}\right). 
\eea } } The corresponding transformations of the component fields are \bea &\begin{array}{cccc}Q_{10}x={\psi},~~&Q_{10}z={\dot{\xi}},~~&Q_{10}\psi={\dot x},~~&Q_{10}\xi={ z},~~\\ Q_{01}x={\xi},~~&Q_{01}z=-{\dot{\psi}},~~&Q_{01}\psi=-{z},~~&Q_{01}\xi={\dot x},~~\\ ~~Z x={z},~~&~~Zz=-{\ddot x},~~&~~Z \psi=-{\dot\xi},~~&~~Z \xi={\dot\psi};~~ \end{array}& \eea \par ~\par {\it iii}) for the $(0,2,2)$ multiplet the $D$-module representation is { \footnotesize{ \bea\label{root2} H= \left(\begin{array}{cccc}\partial_{\tau}&0&0&0\\0&\partial_{\tau}&0&0\\0&0&\partial_{\tau}&0\\ 0&0&0&\partial_{\tau}\end{array}\right), &\quad& Z= \left(\begin{array}{cccc}0&\partial_{\tau}&0&0\\-\partial_{\tau}&0&0&0\\0&0&0&-\partial_{\tau} \\ 0&0&\partial_{\tau}&0\end{array}\right),\nonumber\\ Q_{10}= \left(\begin{array}{cccc}0&0&\partial_\tau&0\\0&0&0&\partial_\tau\\ 1&0&0&0\\ 0&1&0&0\end{array}\right), &\quad& Q_{01}= \left(\begin{array}{cccc}0&0&0&\partial_\tau\\0&0&-\partial_\tau&0\\0&-1&0&0 \\ 1&0&0&0\end{array}\right). \eea } } The corresponding transformations of the component fields are \bea &\begin{array}{cccc}Q_{10}x={\dot\psi},~~&Q_{10}z={\dot{\xi}},~~&Q_{10}\psi={x},~~&Q_{10}\xi={z},~~\\ Q_{01}x={\dot\xi},~~&Q_{01}z=-{\dot\psi},~~&Q_{01}\psi=-{z},~~&Q_{01}\xi={x},~~\\ ~~Z x={\dot z},~~&~~Zz=-{\dot x},~~&~~Z \psi=-{\dot\xi},~~&~~Z \xi={\dot\psi}.~~ \end{array}& \eea It is worth noticing that the $D$-module representation of (\ref{z2z2super}) acting on the $(0,2,2)$ multiplet can also be recovered from the $(2,2,0)$ root $D$-module representation by applying a similarity transformation. Let $g$ denote a given generator in (\ref{root}). The corresponding generator $g'$ acting on the $(0,2,2)$ multiplet can be expressed, in terms of the $2\times 2$ matrices $Y,I$ introduced in (\ref{splitquat}), as \bea\label{220simtran} g &\mapsto & g' = (Y\otimes I)\cdot g \cdot(Y\otimes I), \qquad {\textrm{where}}\quad (Y\otimes I)^2={\mathbb I}_4.
\eea \par This expression for $g'$ coincides up to a sign with the corresponding generator obtained from the ${\mathfrak D}_3$ dressing and presented in (\ref{root2}).\par Similarly, the $D$-module representations associated with the $(1,2,1)_{[00]}$ and $(1,2,1)_{[11]}$ multiplets are interrelated by a similarity transformation. Let ${\widetilde g}$ denote any generator given in (\ref{12111rep}); its associated ${\widehat g}$ operator, expressed by \bea\label{121simtran} {\widetilde g}&\mapsto {\widehat g}=(I\otimes Y) \cdot{\widetilde g}\cdot (I\otimes Y), \qquad {\textrm{where}}\quad (I\otimes Y)^2={\mathbb I}_4, \eea coincides, up to a sign, with the corresponding generator obtained from the ${\mathfrak D}_2$ dressing and presented in (\ref{12100rep}). \section{$D$-module representations of the ${\mathbb Z}_2\times {\mathbb Z}_2$-graded conformal superalgebra} A ${\mathbb Z}_2\times{\mathbb Z}_2$-graded conformal superalgebra extension of the ${\mathbb Z}_2\times{\mathbb Z}_2$-graded superalgebra (\ref{z2z2super}) is obtained by introducing the conformal partners of the generators $H, Q_{10}, Q_{01}, Z$. The minimal conformal extension ${\cal G}_{conf}$ corresponds to a superalgebra spanned by $10$ generators. The $6$ extra generators will be denoted as $D,U, S_{10}, S_{01}, K, W$. The (anti)commutators defining ${\cal G}_{conf}$ respect both the ${\mathbb Z}_2\times{\mathbb Z}_2$-grading $ij$ and the scaling dimension \textcolor{black}{(see Appendix {\bf A})} $s$ of the generators.
Scaling dimension and ${\mathbb Z}_2\times{\mathbb Z}_2$-grading of the ${\cal G}_{conf}$ generators are assigned according to the table \bea\label{table} & \relax \begin{array}{|l|c|c|c|c|}\hline {~s ~} \backslash ij&$00$&$11$&$10$&$01$ \\ \hline +1:&H&Z&&\\ \hline +\frac{1}{2}:&&&Q_{10}&Q_{01}\\ \hline ~~ 0:&D&U&&\\ \hline -\frac{1}{2}:&&&S_{10}&S_{01}\\ \hline -1:&K&{W}&&\\ \hline \end{array} & \eea The minimal conformal extension ${\cal G}_{conf}$ can be recovered from the $D$-module root representation (\ref{root}) of the superalgebra (\ref{z2z2super}) by adding an extra operator $K$ (the conformal partner of $H$), which is introduced by setting \bea\label{kroot} K &=& -\tau^2\partial_\tau {\mathbb I}_4- 2\tau\Lambda, \qquad \Lambda=diag(\lambda,\lambda,\lambda+\frac{1}{2},\lambda+\frac{1}{2}). \eea The remaining generators entering table (\ref{table}) and their (anti)commutators defining ${\cal G}_{conf}$ are recovered from repeated (anti)commutators involving the operators $Q_{10}$, $Q_{01}$ and $K$.
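This generation procedure can be illustrated by letting the operators act on a generic 4-component field: for instance, $S_{10}$ is recovered as $-[Q_{10},K]$. A sympy sketch (an illustration under the paper's conventions, not part of its formalism):

```python
import sympy as sp

tau, lam = sp.symbols('tau lam')
v = sp.Matrix([sp.Function(f'v{i}')(tau) for i in range(4)])  # generic field
dv = lambda f: sp.diff(f, tau)

# actions of Q10 (root representation) and K (kroot) on the generic field
Q10 = lambda w: sp.Matrix([w[2], w[3], dv(w[0]), dv(w[1])])
K = lambda w: sp.Matrix([
    -tau**2 * dv(w[0]) - 2 * tau * lam * w[0],
    -tau**2 * dv(w[1]) - 2 * tau * lam * w[1],
    -tau**2 * dv(w[2]) - (2 * lam + 1) * tau * w[2],
    -tau**2 * dv(w[3]) - (2 * lam + 1) * tau * w[3]])

# S10 := -[Q10, K]; the commutator closes on a first-order operator
S10 = (-(Q10(K(v)) - K(Q10(v)))).applyfunc(sp.expand)
expected = sp.Matrix([tau * v[2], tau * v[3],
                      tau * dv(v[0]) + 2 * lam * v[0],
                      tau * dv(v[1]) + 2 * lam * v[1]])
assert S10 == expected.applyfunc(sp.expand)
```

The recovered action coincides with the $S_{10}$ matrix operator of the root-multiplet representation given in the next formulas.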
\par The nonvanishing ${\cal G}_{conf}$ (anti)commutators are \bea\label{conformal220} &\begin{array}{llll} \relax [H, D]=-H,&[H, U]=2Z,& [H, S_{10}]=Q_{10},& [H, S_{01}]=Q_{01},\\ \relax [H, K]=2D,& [H, W]=-U,&[Z, D]=-Z,&[Z, U]=-2H,\\ \relax \{Z, S_{10}\}=Q_{01},&\{Z, S_{01}\}=-Q_{10},& [Z, K]=-U,&[Z, W]=-2D ,\\ \relax \{Q_{10},Q_{10}\}=2H,&[Q_{10}, Q_{01}]= -2Z,&[Q_{10}, D]=-\frac{1}{2}Q_{10}, & \{Q_{10}, U\}=-Q_{01},\\ \relax \{Q_{10}, S_{10}\}=-2D,&[Q_{10}, S_{01}]=-U,&[Q_{10}, K]=-S_{10},&\{Q_{10}, W\}=S_{01} , \\ \relax \{Q_{01}, Q_{01}\}=2H,& [Q_{01}, D]=-\frac{1}{2} Q_{01}, & \{Q_{01}, U\}=Q_{10},&[Q_{01}, S_{10}]=U ,\\ \relax \{Q_{01}, S_{01}\}=-2D,& \relax [Q_{01}, K]=-S_{01},&\{Q_{01}, W\}=-S_{10} ,& [D, S_{10}]=-\frac{1}{2} S_{10},\\ \relax [D, S_{01}]=-\frac{1}{2}S_{01},&[D, K]=-K,& [D, W]=-W,&\{U, S_{10}\}=S_{01},\\ \relax \{U, S_{01}\}=-S_{10},&[U, K]=2W,& [U, W]=-2K ,& \{S_{10}, S_{10}\}=-2K,\\ \relax [S_{10}, S_{01}]=2W,&\{S_{01}, S_{01}\}=-2K.&& \end{array}&\nonumber\\&& \eea The closure of the ${\cal G}_{conf}$ algebra is realized for any real value of the parameter $\lambda$ entering (\ref{kroot}). 
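The closure for arbitrary $\lambda$ can be spot-checked by letting the operators act on a generic 4-component field. The sketch below (an illustration under the paper's conventions, not part of its formalism) verifies $\{Q_{10},S_{10}\}=-2D$ with $\lambda$ kept symbolic:

```python
import sympy as sp

tau, lam = sp.symbols('tau lam')
v = sp.Matrix([sp.Function(f'v{i}')(tau) for i in range(4)])  # generic field
dv = lambda f: sp.diff(f, tau)

# explicit actions of Q10, S10 and D on the root multiplet, as in (220conf)
Q10 = lambda w: sp.Matrix([w[2], w[3], dv(w[0]), dv(w[1])])
S10 = lambda w: sp.Matrix([tau * w[2], tau * w[3],
                           tau * dv(w[0]) + 2 * lam * w[0],
                           tau * dv(w[1]) + 2 * lam * w[1]])
D = lambda w: sp.Matrix([
    -tau * dv(w[0]) - lam * w[0],
    -tau * dv(w[1]) - lam * w[1],
    -tau * dv(w[2]) - (lam + sp.Rational(1, 2)) * w[2],
    -tau * dv(w[3]) - (lam + sp.Rational(1, 2)) * w[3]])

# {Q10,S10} + 2D must annihilate any field, for every value of lambda
residual = (Q10(S10(v)) + S10(Q10(v)) + 2 * D(v)).applyfunc(sp.expand)
assert residual == sp.Matrix([0, 0, 0, 0])
```

The same check, applied to any entry of the list above, goes through for unconstrained $\lambda$.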
Therefore $\lambda\in {\mathbb R}$ is unconstrained.\par The $D$-module representation corresponding to the $(2,2,0)$ root multiplet is given by the operators {\footnotesize{ \bea\label{220conf} H& =&\left(\begin{array}{cccc}\partial_{\tau}&0&0&0\\0&\partial_{\tau}&0&0\\0&0&\partial_{\tau}&0\\ 0&0&0&\partial_{\tau}\end{array}\right), \nonumber\\ Z&=& \left(\begin{array}{cccc}0&\partial_{\tau}&0&0\\-\partial_\tau&0&0&0\\0&0&0&-\partial_{\tau} \\ 0&0&\partial_{\tau}&0\end{array}\right),\nonumber\\ Q_{10}&=& \left(\begin{array}{cccc}0&0&1&0\\0&0&0&1\\ \partial_\tau&0&0&0\\ 0&\partial_{\tau}&0&0\end{array}\right), \nonumber\\ Q_{01}&=& \left(\begin{array}{cccc}0&0&0&1\\0&0&-1&0\\0&-\partial_{\tau}&0&0 \\ \partial_\tau&0&0&0\end{array}\right),\nonumber\\ D& =&\left(\begin{array}{cccc}-\tau\partial_{\tau}-\lambda&0&0&0\\0&-\tau\partial_{\tau}-\lambda&0&0\\0&0&-\tau\partial_{\tau}-(\lambda+\frac{1}{2})&0\\ 0&0&0&-\tau\partial_{\tau}-(\lambda+\frac{1}{2})\end{array}\right), \nonumber\\ {U}&=& \left(\begin{array}{cccc}0&2(\tau\partial_{\tau}+\lambda)&0&0\\-2(\tau\partial_{\tau}+\lambda)&0&0&0\\0&0&0&-(2\tau\partial_{\tau}+2\lambda+1) \\ 0&0&2\tau\partial_{\tau}+2\lambda+1&0\end{array}\right),\nonumber\\ {S}_{10}&=& \left(\begin{array}{cccc}0&0&\tau&0\\0&0&0&\tau\\ \tau\partial_\tau+2\lambda&0&0&0\\ 0&\tau\partial_{\tau}+2\lambda&0&0\end{array}\right), \nonumber\\ {S}_{01}&=& \left(\begin{array}{cccc}0&0&0&{\tau}\\0&0&-\tau&0\\0&-(\tau\partial_{\tau}+2\lambda)&0&0 \\ \tau\partial_\tau+2\lambda&0&0&0\end{array}\right),\nonumber\\ K& =&\left(\begin{array}{cccc}-\tau^2\partial_{\tau}-2\tau\lambda&0&0&0\\0&-\tau^2\partial_{\tau}-2\tau\lambda&0&0\\0&0&-\tau^2\partial_{\tau}-(2\lambda+1)\tau&0\\ 0&0&0&-\tau^2\partial_{\tau}-(2\lambda+1)\tau\end{array}\right), \nonumber\\ {W}&=& \left(\begin{array}{cccc}0&-\tau^2\partial_{\tau}-2\lambda\tau&0&0\\\tau^2\partial_{\tau}+2\lambda\tau&0&0&0\\0&0&0&\tau^2\partial_{\tau}+(2\lambda+1)\tau \\ 
0&0&-\tau^2\partial_{\tau}-(2\lambda+1)\tau&0\end{array}\right). \eea }} The ${\cal G}_{conf}$ algebra contains several subalgebras. In particular, it contains the $sl(2)$ subalgebra generated by $H,D,K$, where $D$, the scaling operator, is the Cartan element. Two different $osp(1|2)$ superalgebras are recovered from the subsets of generators $\{H,D,K,Q_{10}, S_{10}\}$ and $\{H,D,K,Q_{01}, S_{01}\}$, respectively.\par \subsection{${\mathbb Z}_2\times {\mathbb Z}_2$-graded conformal superalgebra and dressed multiplets} The $D$-module representation of ${\cal G}_{conf}$ acting on the $(0,2,2)$ multiplet is obtained by extending the (\ref{220simtran}) similarity transformation to any generator of ${\cal G}_{conf}$. Let $g$ denote a given generator in (\ref{220conf}). The corresponding generator $g'$ acting on the $(0,2,2)$ multiplet is given by \bea\label{similconformal022} g &\mapsto & g' = (Y\otimes I)\cdot g \cdot(Y\otimes I), \qquad {\textrm{where}}\quad (Y\otimes I)^2={\mathbb I}_4, \eea with $Y$ and $I$ introduced in (\ref{splitquat}).\par The $D$-module representation of the minimal ${\cal G}_{conf}$ algebra acting on the $(1,2,1)_{[11]}$ multiplet is obtained by applying to the (\ref{220conf}) operators the dressing transformation generated by the diagonal matrix ${\frak D}_1$ introduced in (\ref{dressingmatrices}).\par Let $g$ be a given operator in (\ref{220conf}); the corresponding dressed operator ${\widetilde g}$ is given by \bea\label{singular} g &\mapsto & {\widetilde g}= {\frak D}_1\cdot g \cdot{{\frak D}_1}^{-1}. \eea \par The four dressed operators ${\widetilde H}, {\widetilde Z}, {\widetilde Q}_{10}, {\widetilde Q}_{01}$ are differential matrix operators.
On the other hand, due to the presence of the inverse matrix ${{\frak D}_1}^{-1}$ in (\ref{singular}), the remaining $6$ transformed matrices ${\widetilde D}, {\widetilde U}, {\widetilde S}_{10}, {\widetilde S}_{01}, {\widetilde K}, {\widetilde W}$ are differential operators only if the real parameter $\lambda$, which is unconstrained in (\ref{220conf}), is set to $0$: \bea \lambda&=&0. \eea Therefore, the minimal ${\cal G}_{conf}$ conformal algebra is recovered from the $(1,2,1)_{[11]}$ multiplet by taking repeated (anti)commutators of the operators ${\widetilde Q}_{10},~ {\widetilde Q}_{01},~ {\widetilde K}_{\lambda=0}$, given by {\footnotesize{ \bea && {\widetilde Q}_{10}= \left(\begin{array}{cccc}0&0&\partial_\tau&0\\0&0&0&1\\ 1&0&0&0\\ 0&\partial_{\tau}&0&0\end{array}\right), \qquad {\widetilde Q}_{01}= \left(\begin{array}{cccc}0&0&0&\partial_\tau\\0&0&-1&0\\0&-\partial_{\tau}&0&0 \\ 1&0&0&0\end{array}\right),\nonumber\\ &&~~~{\widetilde K}_{\lambda=0}=\left(\begin{array}{cccc}-\tau^2\partial_{\tau}-2\tau&0&0&0\\0&-\tau^2\partial_{\tau}&0&0\\0&0&-\tau^2\partial_{\tau}-\tau&0\\ 0&0&0&-\tau^2\partial_{\tau}-\tau\end{array}\right). \eea }} A nonminimal ${\mathbb Z}_2\times{\mathbb Z}_2$-graded conformal extension of ${\cal G}_{conf}$, requiring the introduction of new generators, is recovered by taking repeated (anti)commutators of the operators ${\widetilde Q}_{10}, ~{\widetilde Q}_{01}$ and $ {\widetilde K}_{\lambda}$, where $ {\widetilde K}_{\lambda}$ is defined for $\lambda\neq 0$ as \bea {\widetilde K}_{\lambda}&=& {\widetilde K}_{\lambda=0}-2\lambda\tau\cdot{\mathbb I}_4. \eea This new nonminimal algebra is denoted as ${\cal G}_{nm,conf}$. \par {\textcolor{black}{ ${\cal G}_{nm,conf}$ is finitely generated by the three generators ${\widetilde Q}_{10}$, ${\widetilde Q}_{01}$, ${\widetilde K}_\lambda$.
Each generator entering ${\cal G}_{nm,conf}$ is obtained by taking repeated (anti)commutators involving ${\widetilde Q}_{10}$, ${\widetilde Q}_{01}$, ${\widetilde K}_\lambda$. For instance, ${\widetilde H}$ is recovered from the anticommutator $\{{\widetilde Q}_{10},{\widetilde Q}_{10}\}=2{\widetilde H}$, ${\widetilde Z}$ from the commutator $[{\widetilde Q}_{10},{\widetilde Q}_{01}]=-2{\widetilde Z}$, while ${\widetilde U}$, belonging to the $11$-sector, from $[{\widetilde Z},{\widetilde K}]=-2{\widetilde U}$ and so on.}} \par {\textcolor{black}{For $\lambda\neq 0$, ${\cal G}_{nm,conf}$ is an infinite-dimensional ${\mathbb Z}_2 \times {\mathbb Z}_2$-graded Lie superalgebra, possessing an infinite number of generators. This is seen as follows: the commutator between the $11$-graded generators ${\widetilde Z}, {\widetilde U}$ produces $[{\widetilde Z},{\widetilde U}]= -2{\widetilde H}-4\lambda {\widetilde H}_2$, where ${\widetilde H}_2$ is a new $00$-graded generator, given by {\footnotesize {${\widetilde H}_2 = \left(\begin{array}{cccc}-\partial&0&0&0\\0&\partial&0&0\\ 0&0&0&0 \\0&0&0&0 \end{array}\right)$.}} The commutator $[{\widetilde Z},{\widetilde H}_2]= 2{\widetilde Z}_2$ produces the new $11$-graded generator {\footnotesize{${\widetilde Z}_2 = \left(\begin{array}{cccc}0&\partial^3&0&0\\\partial&0&0&0\\ 0&0&0&0 \\0&0&0&0 \end{array}\right)$}}. By taking repeated commutators with ${\widetilde Z}$ one generates an infinite tower of new generators ${\widetilde H}_n$, ${\widetilde Z}_n$.
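The first rung of this tower is easy to check: ${\widetilde Z}$, ${\widetilde H}_2$ and ${\widetilde Z}_2$ all have constant coefficients, so $\partial$ can be treated as a commuting symbol $d$. A sketch (an illustration, not part of the paper's formalism):

```python
import sympy as sp

# d stands for the derivative; valid since all entries below have
# constant coefficients
d = sp.symbols('d')

Zt = sp.Matrix([[0, d**2, 0, 0], [-1, 0, 0, 0],
                [0, 0, 0, -d], [0, 0, d, 0]])          # dressed Z
H2t = sp.Matrix([[-d, 0, 0, 0], [0, d, 0, 0],
                 [0, 0, 0, 0], [0, 0, 0, 0]])          # new 00-graded generator
Z2t = sp.Matrix([[0, d**3, 0, 0], [d, 0, 0, 0],
                 [0, 0, 0, 0], [0, 0, 0, 0]])          # new 11-graded generator

# [Zt, H2t] = 2*Z2t: the first step of the infinite tower
assert (Zt * H2t - H2t * Zt).applyfunc(sp.expand) == 2 * Z2t
```

Iterating the commutator with `Zt` raises the degree in $d$ at each step, which is the mechanism behind the infinite tower.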
This shows that, for $\lambda\neq 0$, ${\cal G}_{nm,conf}$ is an infinite-dimensional ${\mathbb Z}_2 \times {\mathbb Z}_2$-graded Lie superalgebra.\\ The generator {\footnotesize{${\widetilde M}=\left(\begin{array}{cccc}0&0&0&0\\0&0&0&0\\ 0&0&0&1 \\0&0&1&0 \end{array}\right)$}}, obtained from the right hand side of \bea\label{nonvanish} \relax [{\widetilde Q}_{10}, {\widetilde S}_{01}] +[{\widetilde Q}_{01}, {\widetilde S}_{10}] &=&4\lambda {\widetilde M}, \eea is another extra generator entering ${\cal G}_{nm,conf}$.}}\par We finally mention that the $D$-module representations of the conformal algebras associated with the $(1,2,1)_{[00]}$ multiplet are recovered from the $(1,2,1)_{[11]}$ representations by applying an extension of the (\ref{121simtran}) similarity transformation. Let ${\widetilde g}$ denote any conformal generator associated with the $(1,2,1)_{[11]}$ multiplet; the corresponding ${\widehat g}$ generator associated with the $(1,2,1)_{[00]}$ multiplet is given by \bea {\widetilde g}&\mapsto {\widehat g}=(I\otimes Y) \cdot{\widetilde g}\cdot (I\otimes Y), \qquad {\textrm{where}}\quad (I\otimes Y)^2={\mathbb I}_4, \eea for $Y$, $I$ introduced in (\ref{splitquat}). \section{${\mathbb Z}_2\times {\mathbb Z}_2$-graded invariant actions} In this Section we present a general framework to construct ${\mathbb Z}_2\times {\mathbb Z}_2$-graded classical invariant actions, in the Lagrangian setting, for the basic multiplets introduced in Section {\bf 3}. The approach works both for single basic multiplets and for several interacting basic multiplets. We first discuss the actions for the root multiplet. The modifications to be applied for the construction of actions for the dressed multiplets are immediate.
\subsection{Invariant actions for the $(2,2,0)$ root multiplet} We rely on the fact that the differential operators introduced in (\ref{root}) satisfy the ${\mathbb Z}_2\times {\mathbb Z}_2$-graded Leibniz rule when acting on functions of the component fields $x,z,\psi,\xi$. The actions and the Lagrangians are required to belong to the $00$-graded sector. \par Therefore, a manifestly invariant sigma-model action ${\cal S}_\sigma$ for the $(2,2,0)$ multiplet can be expressed as \bea\label{sigma220a} {\cal S}_\sigma&=& \int d\tau {\cal L_\sigma}, \quad\quad {\cal L}_\sigma= ZQ_{10}Q_{01}g(x,w), \quad{\textrm{for}} \quad w=z^2. \eea In the above formula $g(x,w)$ is an arbitrary $00$-graded prepotential of the even fields $x,z$. Due to the (\ref{z2z2super}) (anti)commutators and their explicit expression, the action of $H,Z, Q_{10}, Q_{01}$ on ${\cal L}_\sigma$ produces a time derivative, making the ${\cal S}_\sigma$ action ${\mathbb Z}_2\times {\mathbb Z}_2$-graded invariant. Up to boundary terms we have \bea\label{sigmaphi} {\cal L}_\sigma &\sim& \Phi(x,w)({\dot x}^2+{\dot z}^2-\psi{\dot\psi}+\xi{\dot \xi})+(\Phi_x{\dot z}-2\Phi_w {\dot x}z)\psi\xi, \eea where \bea \Phi(x,w) &=& g_{xx}+2g_w+4wg_{ww} \eea (here and in the following the suffix denotes the derivative with respect to the corresponding field so that, e.g., $\Phi_x(x,w) = \textcolor{black}{\frac{\partial \Phi(x,w)}{\partial x}}$).\par The invariant sigma-model defined by (\ref{sigma220a}) is not the most general one. Another manifestly invariant $(2,2,0)$ action ${\cal S}_{\overline\sigma}$ is obtained by setting \bea\label{sigma220b} {\cal S}_{\overline\sigma}&=&\int d\tau ZQ_{10}Q_{01} \left(f(x,w) z\psi\xi\right), \eea where the new prepotential $f(x,w) z\psi\xi$ also belongs to the $00$-graded sector.
Since the odd fields $\psi,\xi$ are Grassmann, the most general manifestly invariant sigma-model is produced by the linear combination ${\cal S}= {\cal S}_{\sigma}+{\cal S}_{\overline\sigma}$ for arbitrary prepotentials $ g(x,w),~f(x,w) z\psi\xi$.\par One should note that, contrary to $g(x,w)$, the $f(x,w) z\psi\xi$ prepotential in (\ref{sigma220b}) produces higher derivatives. Indeed the simplest choice, obtained by setting $f(x,w)=1$, produces the Lagrangian \bea {\overline{\cal L}} &=& ZQ_{10}Q_{01}(z\psi\xi) \sim Z(z({\dot x}^2+{\dot z}^2-\psi{\dot\psi}+\xi{\dot \xi})-{\dot x}\psi\xi) \eea which contains a term cubic in ${\dot x}$ (${\overline{\cal L}}={\dot x}^3+\ldots$) that cannot be reabsorbed by a total time derivative.\par The free kinetic action is defined by setting $\Phi(x,w)=\frac{1}{2}$ in (\ref{sigmaphi}). The corresponding Lagrangian ${\cal L}$, given by \bea {\cal L}&=& \frac{1}{2}({\dot x}^2+{\dot z}^2-\psi{\dot\psi}+\xi{\dot \xi}), \eea is invariant under the full (\ref{conformal220}) ${\mathbb Z}_2\times {\mathbb Z}_2$-graded conformal superalgebra ${\cal G}_{conf}$. This is a consequence of the relation \bea K{\cal L} &=& \frac{1}{2}\frac{d}{d\tau} \left(-\tau^2{\dot x}^2+x^2-\tau^2{\dot z}^2+z^2 +\tau^2\psi{\dot\psi}-\tau^2\xi{\dot\xi}\right), \eea for $K$ given by (\ref{220conf}) with the \bea \Lambda &=&diag (-\frac{1}{2}, -\frac{1}{2}, 0,0) \eea assignment of the scaling dimensions of the root multiplet component fields.\par We can generalize the manifestly invariant action (\ref{sigma220a}) to the case of $n$ independent root multiplets labeled by $i=1,2,\ldots, n$. In each multiplet its component fields $x_i, \psi_i,\xi_i, z_i$ transform according to (\ref{rootfieldtransf}).
An invariant sigma-model action ${\cal S}_{int}$, describing the motion of interacting multiplets, can be defined by setting \bea\label{sigmainteract} {\cal S}_{int}&=& \int d\tau ZQ_{10}Q_{01}g(x_i,w_{ij}), \eea for a generic prepotential $g(x_i, w_{ij})$, with $w_{ij}=z_iz_j$. The introduction of nontrivial interactions among multiplets requires suitably choosing the prepotential function $g(x_i,w_{ij})$. One can set, e.g., $\partial_{x_k}\partial_{x_l}g(x_i,w_{ij})$ for $k\neq l$ to be nonvanishing functions of the component fields. \subsection{Invariant actions for the $(1,2,1)_{[00]}$ multiplet} As the next case we consider the invariant actions for the dressed $(1,2,1)_{[00]}$ multiplet with an ordinary propagating boson. Its component fields $x, z,\psi,\xi$ transform according to \textcolor{black}{ \eqref{12100transf}.} \par A sigma-model type of action, the counterpart of (\ref{sigmaphi}), can be formally expressed with the same notation, but taking into account the different role of the exotic boson $z$: \bea\label{sigma12100} {\cal S}_\sigma&=& \int d\tau {\cal L_\sigma}, \quad\quad {\cal L}_\sigma= ZQ_{10}Q_{01}g(x,w), \quad{\textrm{for}} \quad w=z^2. \eea Up to boundary terms the Lagrangian ${\cal L}_\sigma$ now reads \bea {\cal L_\sigma}&\sim& (g_{xxx}-2g_{xxw}{\ddot x}) z\psi\xi+[(2 g_{xxw}w+g_{xx})-2h_x{\ddot x}](\xi{\dot \xi}-\psi{\dot\psi})+ 2h ({\dot\xi}{\ddot \xi}-{\dot\psi}{\ddot\psi})+\nonumber\\ &&(4g_{xw}+2h_x-4h_w{\ddot x})z{\dot\psi}{\dot\xi}+g_{xx}\textcolor{black}{ w}-(4g_{xw}w+g_x){\ddot x}+\nonumber\\ &&{\textcolor{black}{2h{\ddot x}^2-2g_wz{\ddot z}-2g_{xw}z({\ddot\psi}\xi+\psi{\ddot\xi}),}} \eea where $ h(x,w)$ is introduced as \bea h(x,w)&=&2g_{ww}w+g_w.
\eea Let us present several particular cases: \begin{enumerate} \item [1)] for $g(x,w) = g(x)$ we have \bea\label{sigma121special} {\cal L}_\sigma&\sim& g_{xx}({\dot x}^2+z^2-\psi{\dot\psi}+\xi{\dot \xi})+g_{xxx}z\psi\xi; \eea \item[2)] for $g(x,w)=g(w)$ we have $h(x,w)=h(w)$. The Lagrangian ${\cal L}_\sigma$ possesses higher-order time derivatives, \bea {\cal L}_\sigma&\sim& 2h({\ddot x}^2+{\dot z}^2-{\dot\psi}{\ddot\psi}+{\dot\xi}{\ddot\xi})-4h_w{\ddot x}z{\dot\psi}{\dot\xi}; \eea \item[3)] the condition $h(x,w)=0$ implies that the prepotential $g(x,w)$ has the form $$g(x,w)=a(x)\sqrt{w}+b(x).$$ Under this condition the Lagrangian ${\cal L}_\sigma$ contains at most second-order time derivatives. The contribution of the second term, $b(x)$, is recovered from the result of item $1$. The new contribution for $a(x)\neq 0$, $b(x)=0$ reads \bea {\cal L}_\sigma&\sim& a_{xx}\sqrt{w}(3{\dot x}^2+z^2-2\psi{\dot\psi}+2\xi{\dot\xi})+2a_x{\sqrt w}z{\dot\psi}{\dot\xi}+4\frac{a_x}{\sqrt w}z{\dot x}{\dot z}+\nonumber\\ &&\textcolor{black}{(a_{xxx}{\sqrt w}-\frac{a_{xx}}{\sqrt w}{\ddot x})z\psi\xi} \textcolor{black}{ - \frac{a_x}{\sqrt{w}} z (\ddot{\psi} \xi + \psi \ddot{\xi}).} \eea \end{enumerate} Let us focus our discussion on the sigma-model Lagrangian (\ref{sigma121special}).
By setting $\phi(x)= 2g_{xx}$ {\textcolor{black}{it differs from the ${\cal N}=2$ supersymmetric Lagrangian (also denoted as ${\cal L}_\sigma$) entering formula (\ref{totallag}) in the sign in front of the $\xi{\dot \xi}$ term.}} {\textcolor{black}{This difference is due to the ${\mathbb Z}_2\times{\mathbb Z}_2$ grading; in particular the operators $Q_{10}$, $Q_{01}$ entering (\ref{sigma121special}), unlike the $Q_1,Q_2$ supersymmetry operators used in the derivation of (\ref{totallag}), act on the ${\mathbb Z}_2\times{\mathbb Z}_2$-graded component fields as ${\mathbb Z}_2\times{\mathbb Z}_2$-graded Leibniz derivatives.}} \par {\textcolor{black}{It is worth mentioning that the sigma-model action (\ref{sigma121special}), when specialized to the choice $g_{xx}= x^\alpha$ for $\alpha\neq -2$, is invariant under the nonminimal ${\mathbb Z}_2\times{\mathbb Z}_2$ conformal algebra ${\cal G}_{nm, conf}$ with the identification $\lambda= -\frac{1}{\alpha+2}$. }} \par Since $\psi, \xi$ are classical Grassmann fields (satisfying, in particular, $\psi^2=\xi^2=0$), \textcolor{black}{by solving the algebraic equation of motion for $z$,} the new Lagrangian reads \bea\label{sigmaof121} {\cal L}_\sigma&\sim& \frac{1}{2}\phi(x) ({\dot x}^2 -\psi{\dot\psi}+\xi{\dot\xi}). \eea By setting, as in formula (\ref{barredfieldsapp}), \bea\label{cxnew} &{\overline x}= C(x),\quad {\overline\psi} = C_x\psi,\quad {\overline \xi}= C_x \xi,& \qquad {\textrm{where}}\quad C_x= \sqrt{\phi}, \eea we realize that the Lagrangian (\ref{sigmaof121}) corresponds to the non-interacting constant kinetic Lagrangian ${\cal L}_{kin}$ for the barred fields \bea {\cal L}_{kin}&=& \frac{1}{2} ({\dot {\overline x}}^2 -{\overline \psi}{\dot{\overline \psi}}+{\overline \xi}{\dot{\overline \xi}}). \eea The introduction of interacting terms is reached by adding, as also discussed in the Appendix, a linear potential term in $z$ to the sigma-model Lagrangian (\ref{sigma121special}).
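The bosonic sector of the field redefinition (\ref{cxnew}) can be cross-checked with a short symbolic computation. The sketch below (in Python with sympy, with our own variable names) verifies that, once $\phi$ is identified with $(C_x)^2$, the kinetic term $\frac{1}{2}\phi(x){\dot x}^2$ is exactly $\frac{1}{2}{\dot{\overline x}}^2$ by the chain rule.

```python
import sympy as sp

t = sp.symbols('t')
x = sp.Function('x')(t)
C = sp.Function('C')

# phi(x) is identified with (C_x)^2, i.e. C_x = sqrt(phi)
phi = sp.diff(C(x), x)**2

# (1/2) phi(x) xdot^2  ==  (1/2) (d/dt C(x))^2 by the chain rule
lhs = sp.Rational(1, 2)*phi*sp.diff(x, t)**2
rhs = sp.Rational(1, 2)*sp.diff(C(x), t)**2
assert sp.simplify(lhs - rhs) == 0
```

The analogous statement for the Grassmann bilinears follows from ${\overline\psi}{\dot{\overline\psi}}= (C_x)^2\psi{\dot\psi}$ up to a total derivative, since $\psi^2=0$.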
Since $z$ transforms as a time-derivative under the (\ref{12100rep}) operators, the total action ${\cal S}$ is invariant. This action is \bea {\cal S} &=& \int d\tau \frac{1}{2}\left(\phi(x)({\dot x}^2+z^2-\psi{\dot\psi}+\xi{\dot \xi})+\phi_xz\psi\xi+\mu z\right). \eea {\textcolor{black}{ In order for the action to be $00$-graded, the constant parameter $\mu$ should be $11$-graded. We recall that the classical ${\mathbb Z}_2\times {\mathbb Z}_2$-graded fields (anti)commute, satisfying the (anti)commutators (\ref{anticommrel}). This is a consistent extension of ordinary classical supermechanics whose fields are assumed to be real or complex (the bosons) and Grassmann (the fermions). We have now two types of Grassmann fields ($10$-graded and $01$-graded) and two types of bosons ($00$-graded and $11$-graded). They are time-dependent and their ordering is determined by the ${\mathbb Z}_2\times {\mathbb Z}_2$-graded structure. E.g., following the (\ref{anticommrel}) prescription, if $z(\tau)$ is $11$-graded and $\psi(\tau)$ is $10$-graded, then $z(\tau)\psi(\tau)=-\psi(\tau)z(\tau)$. The same prescription (\ref{anticommrel}) holds for ${\mathbb Z}_2\times {\mathbb Z}_2$-graded constant (not depending on $\tau$) fields. The $11$-graded coupling constant $\mu$ is such an example. It can be interpreted as a non-dynamical, constant, $11$-graded, background field. When quantizing the theory as discussed in \cite{akt}, $\mu$ becomes a constant $4\times 4$ matrix belonging to the ${\cal G}_{11}$ sector in formula (\ref{gradedmatrices}). At a classical level $\mu$ is assumed to (anti)commute with the ${\mathbb Z}_2\times{\mathbb Z}_2$-graded fields. 
}}\par By repeating the computations for the analogous case presented in the Appendix, one can solve the algebraic equation of motion for $z$ so that, up to boundary terms, the Lagrangian ${\cal L}$ can be expressed as \bea\label{intermediate11} {\cal L}&=& \frac{1}{2}({\dot {\overline x}}^2-{\overline\psi}{\dot{\overline\psi}}+{\overline \xi}{\dot{\overline\xi}}) -\frac{1}{8} (\frac{\mu}{C_x})^2-\frac{\mu}{2}\frac{C_{xx}}{(C_x)^3} {\overline \psi}{\overline \xi}, \eea where the barred fields and $C(x)$ are given in (\ref{cxnew}). After setting \bea\label{Wpot} &W({\overline x})=W(C(x))=\frac{\mu}{2C_x(x)}& \eea we can rewrite the Lagrangian as \bea\label{the12100model} {\cal L}&=& \frac{1}{2}({\dot {\overline x}}^2-{\overline\psi}{\dot{\overline\psi}}+{\overline \xi}{\dot{\overline\xi}})-\frac{1}{2}W^2({\overline x})+W_{\overline x}{\overline\psi}{\overline \xi}. \eea {\textcolor{black}{ The potential $W({\overline x})$ in (\ref{Wpot}) is proportional to $\mu$ and is $11$-graded since $C(x)$, defined in (\ref{cxnew}), is $00$-graded. The Lagrangians (\ref{intermediate11}) and (\ref{the12100model}) are $00$-graded. In particular, the cubic term $\mu{\overline\psi}{\overline \xi}$ entering the right hand side of (\ref{intermediate11}) is $00$-graded since, from (\ref{anticommrel}) and (\ref{scalarproduct}), $[00] = [11]+[10]+[01]$, where the addition is $mod~2$. The consistency of the procedure follows from respecting the ${\mathbb Z}_2\times {\mathbb Z}_2$-graded properties.}}\par This construction of the interacting action follows the second approach described in the Appendix. The first approach, which works nicely for the ${\cal N}=2$ supersymmetric action and is based on a constant kinetic term plus a potential term, cannot be repeated in the ${\mathbb Z}_2\times{\mathbb Z}_2$-graded case. The reason is that the only ${\mathbb Z}_2\times{\mathbb Z}_2$ invariant potential term is given by the linear term in $z$.
To get the nontrivial interaction one is therefore obliged to add this linear term to the sigma-model action and perform the (\ref{cxnew}) field redefinitions. \subsection{Interacting $(1,2,1)_{[00]}$ multiplets } The construction of invariant actions for interacting multiplets proceeds as for the root multiplets case. We explicitly present it for two interacting multiplets. The fields are denoted as $x_1, z_1, \psi_1,\xi_1$ and $x_2,z_2,\psi_2,\xi_2$, respectively. They transform independently; nevertheless, their interaction can be induced by the prepotential.\par The sigma-model action can be defined as before, so that \bea {\cal S}_\sigma=\int d\tau {\cal L}_\sigma= \int d\tau ZQ_{10}Q_{01} g(x_1,x_2). \eea Up to boundary terms, the Lagrangian is \bea {\cal L}_{\sigma}&=& g_{11}({\dot x}_1^2+z_1^2-\psi_1{\dot\psi}_1+\xi_1{\dot\xi}_1)+g_{22}({\dot x}_2^2+z_2^2-\psi_2{\dot\psi}_2+\xi_2{\dot\xi}_2)+\nonumber\\ &&+g_{12}(2{\dot x}_1{\dot x}_2+2z_1z_2-\psi_1{\dot\psi}_2-\psi_2{\dot\psi}_1+\xi_1{\dot \xi}_2+\xi_2{\dot \xi}_1)+g_{111}z_1\psi_1\xi_1+g_{222}z_2\psi_2\xi_2+\nonumber\\&&+g_{112}(z_2\psi_1\xi_1+z_1(\psi_1\xi_2+\psi_2\xi_1))+ g_{221}(z_1\psi_2\xi_2+z_2(\psi_1\xi_2+\psi_2\xi_1)), \eea \textcolor{black}{ where $ g_{12} := \partial_{x_1} \partial_{x_2} g(x_1,x_2)$, etc.} The condition $g_{12}\neq 0$ is necessary in order to have interacting multiplets.\par The action ${\cal S}$, obtained by adding the linear potential term, is also invariant: \bea {\cal S}&=& \int d\tau\left( {\cal L}_\sigma +{\cal L}_{lin}\right), \qquad {\textrm{where}}\quad {\cal L}_{lin}= \mu_1z_1+\mu_2z_2. \eea {\textcolor{black}{For consistency, the $\mu_{1,2}$ constants belong to the $11$-graded sector; their (anti)commutation properties with respect to the ${\mathbb Z}_2\times{\mathbb Z}_2$-graded fields are defined in accordance with this position.}}\par Extending this construction to the case of $n>2$ interacting multiplets is immediate.
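The elimination of the auxiliary field $z$ used in this Section, which converts the linear $\mu z$ term into the potential $-\frac{1}{2}W^2$ with $W=\mu/2C_x$, can be checked symbolically. The sketch below (Python with sympy, our own variable names) keeps only the bosonic, $z$-dependent part of the single-multiplet Lagrangian and drops the Grassmann bilinears, which do not affect this check.

```python
import sympy as sp

x, z, mu = sp.symbols('x z mu')
phi = sp.Function('phi')(x)

# bosonic z-dependent part of the total Lagrangian: (1/2)(phi z^2 + mu z)
V = sp.Rational(1, 2)*(phi*z**2 + mu*z)

# algebraic equation of motion for z
zsol = sp.solve(sp.diff(V, z), z)[0]
assert sp.simplify(zsol + mu/(2*phi)) == 0          # z = -mu/(2 phi)

# substituting back gives -(1/2) W^2 with W = mu/(2 C_x) and C_x = sqrt(phi)
W = mu/(2*sp.sqrt(phi))
assert sp.simplify(V.subs(z, zsol) + sp.Rational(1, 2)*W**2) == 0
```

The same computation applies to each $z_i$ of the interacting case, with $\phi$ replaced by the corresponding second-derivative block of the prepotential.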
\subsection{Invariant actions for the $(1,2,1)_{[11]}$ multiplet} The sigma-model invariant action is expressed as \bea {\cal S}&=& \int d\tau {\cal L}_\sigma = \int d\tau ZQ_{10}Q_{01} f(z), \eea where $f(z)$ is an even function of $z$. The computation produces, up to boundary terms, \bea {\cal L}_\sigma&\sim& \Phi(z) ({\dot z}^2+x^2-\psi{\dot\psi}+\xi{\dot\xi})-\textcolor{black}{\Phi_z(z)}x\psi\xi, \eea where \bea \Phi(z)&=& f_{zz}(z) \eea is also an even function of $z$. \par Just like the $(1,2,1)_{[00]}$ case, an invariant linear potential term ${\cal L}_{lin}$ can be added. For this multiplet the total Lagrangian ${\cal L}$ is \bea {\cal L}&=& \Phi(z) ({\dot z}^2+x^2-\psi{\dot\psi}+\xi{\dot\xi})-\Phi_zx\psi\xi + \mu x, \eea where $\mu$ is an ordinary real (i.e., not exotic) coupling constant. \subsection{Invariant action of the $(0,2,2)$ multiplet} The free kinetic Lagrangian {\textcolor{black}{\bea {\cal L}&=& \frac{1}{2}({x}^2-{z}^2-\psi{\dot\psi}-\xi{\dot \xi}) \eea }} defines the invariant action ${\cal S}=\int d\tau {\cal L}$ of the $(0,2,2)$ multiplet. The scaling dimension of the fields is $\lambda=\frac{1}{2}$ for the even fields $x$, $z$ and $\lambda =0$ for the odd fields $\psi$, $\xi$.\par With respect to the differential operators defined in (\ref{similconformal022}), the action ${\cal S}$ is invariant under the $10$-generator ${\mathbb Z}_2\times {\mathbb Z}_2$-graded ${\cal G}_{conf}$ conformal superalgebra (\ref{conformal220}). \section{Conclusions} \textcolor{black}{ In the supersymmetric literature the term ``supermechanics'' refers to classical systems formulated in the Lagrangian setting.} {\textcolor{black}{There are hundreds, possibly thousands, of papers devoted to this topic.
Somewhat surprisingly, after more than fifty years since the introduction (inspired by superalgebras) of ${\mathbb Z}_2\times{\mathbb Z}_2$-graded superalgebras \cite{{rw1},{rw2},{sch}}, no work has yet been presented that analyzes ${\mathbb Z}_2\times{\mathbb Z}_2$-graded symmetries in this context. Filling this gap is the main motivation of the present paper. }}\par \textcolor{black}{Our basic strategy is to mimic, as much as possible, the construction of supermechanics based on supermultiplets and their derived invariant actions. We extended to the ${\mathbb Z}_2\times{\mathbb Z}_2$-graded case the approaches of \cite{{PaTo},{KRT}} for the one-dimensional super-Poincar\'e algebras and \cite{KT} for the superconformal algebras.}\par \textcolor{black}{We derived the basic ${\mathbb Z}_2\times{\mathbb Z}_2$-graded multiplets (also in the conformal case) and presented a general framework to construct the actions. As a consequence, a plethora of ${\mathbb Z}_2\times{\mathbb Z}_2$-graded invariant actions has been obtained (for single basic multiplets, for interacting multiplets, for systems with or without higher derivatives, etc.). The simplest models with ${\mathbb Z}_2\times {\mathbb Z}_2$-graded conformal invariance have also been presented.}\par \textcolor{black}{The ${\mathbb Z}_2\times{\mathbb Z}_2$-graded invariance poses further restrictions, with respect to ordinary superalgebras, on the invariant actions and the procedures to obtain them.
As an example, only the second approach described in Appendix {\bf B} for the ${\cal N}=2$ supersymmetric model can be applied to derive its $ {\mathbb Z}_2\times{\mathbb Z}_2$ counterpart given in (\ref{the12100model}).}\par \textcolor{black}{As already mentioned in the Introduction, there has recently been a renewal of interest, which started from the works \cite{{aktt1}, {aktt2}} and \cite{{Bru},{BruDup},{NaAmaDoi},NaAmaDoi2}, in analyzing ${\mathbb Z}_2\times{\mathbb Z}_2$-graded symmetries in the context of dynamical systems. The present paper fits into this current trend. }\par \textcolor{black}{Several open questions have yet to be answered. The most relevant ones are perhaps ``what is the quantum role of the $11$-graded exotic bosons?'' and ``what is the quantum signature of a ${\mathbb Z}_2\times{\mathbb Z}_2$-graded symmetry?''. } \par \textcolor{black}{On an abstract level the ${\mathbb Z}_2\times{\mathbb Z}_2$-graded symmetry is related to a specific form of parastatistics, see \cite{tol,StVdJ}. Concretely, the existence of a ${\mathbb Z}_2\times {\mathbb Z}_2$-symmetry tells us that multiparticle states can be (anti)symmetrized according to the ${\mathbb Z}_2\times{\mathbb Z}_2$-statistics. These states do not obey the ordinary boson-fermion statistics. The use of the alternative ${\mathbb Z}_2\times {\mathbb Z}_2$ statistics has measurable consequences. It affects, e.g., the energy degeneracy of multiparticle wavefunctions, the partition function, the derived chemical potentials, etc.} \par \par ~\par ~\par \par {\Large{\bf Acknowledgments}} {}~\par{}~\par Z. K. and F. T. are grateful to the Osaka Prefecture University, where this work was completed, for hospitality. F. T. was supported by CNPq (PQ grant 308095/2017-0).
\par ~\par ~\par \renewcommand{\theequation}{A.\arabic{equation}} \setcounter{equation}{0} \textcolor{black}{ {\Large{\bf{Appendix A: the scaling dimensions}}} }\par ~\par Besides the ${\mathbb Z}_2\times {\mathbb Z}_2$-grading, a scaling dimension can be assigned to the component fields entering the (\ref{fourmultiplets}) multiplets. Let us assign to the Euclidean time $\tau$ the scaling dimension \bea [\tau]&=&-1. \eea By consistency, the scaling dimensions of the ${\mathbb Z}_2\times {\mathbb Z}_2$-graded superalgebra operators entering (\ref{z2z2super}) are \bea [H]=[Z]= 1, \quad &&\quad [Q_{10}]=[Q_{01}]= \frac{1}{2}. \eea For each multiplet a scaling dimension can be assigned to its component fields in terms of an arbitrary real parameter $\lambda\in{\mathbb R}$. The parameter $\lambda$, which coincides with the lowest scaling dimension of a component field in a given multiplet, is called the scaling dimension of the multiplet. \par ~\par The consistent assignments of scaling dimensions are\par ~\par {\it i}) for the $(2,2,0)$ root multiplet, \bea &[x] = [z]=\lambda, \quad \quad [\psi]=[\xi]=\lambda+\frac{1}{2};& \eea {\it ii}) for the $(1,2,1)_{[11]}$ ${\mathbb Z}_2\times {\mathbb Z}_2$-graded multiplet, \bea &[x] = \lambda +1, \quad\quad [z]=\lambda, \quad \quad [\psi]=[\xi]=\lambda+\frac{1}{2};& \eea {\it iii}) for the $(1,2,1)_{[00]}$ ${\mathbb Z}_2\times {\mathbb Z}_2$-graded multiplet, \bea &[x] = \lambda, \quad\quad [z]=\lambda+1, \quad \quad [\psi]=[\xi]=\lambda+\frac{1}{2};& \eea {\it iv}) for the $(0,2,2)$ ${\mathbb Z}_2\times {\mathbb Z}_2$-graded multiplet, \bea &[x] = [z]=\lambda+\frac{1}{2}, \quad \quad [\psi]=[\xi]=\lambda.& \eea \par ~\par Let us set \bea &\lambda_1=[x],\quad \lambda_2=[z],\quad \lambda_3=[\psi],\quad \lambda_4=[\xi].& \eea For each one of the four cases above a scaling operator $D$ defines the scaling dimension of the operators $H, Z, Q_{10}, Q_{01}$.
The scaling dimension is read from the commutators \bea &[D,H] =H, \quad [D,Z]= Z, \quad [D, Q_{10}]=\frac{1}{2} Q_{10}, \quad [D, Q_{01}]=\frac{1}{2} Q_{01}. \eea The operator $D$ can be introduced through the position \bea\label{scalingop} D&=& -\tau\partial_\tau\cdot {\mathbb I}_4 -\Lambda, \quad {\textrm{for}}\quad \Lambda=diag(\lambda_1,\lambda_2,\lambda_3,\lambda_4). \eea In the above formula $\Lambda$ is a diagonal operator. One should note that $D$ is an operator belonging to the ${\cal G}_{00}$ sector of the ${\mathbb Z}_2\times {\mathbb Z}_2$-graded superalgebra.\par For several applications it is important to mention that constant matrices $M$, possessing a non-vanishing scaling dimension as defined by $D$, exist in each one of the different $D$-module representations. The scaling dimension $s$ is given by \bea [D,M]&=&s M. \eea Let $E_{ij}$ denote the matrix with entry $1$ at the intersection of the $i$-th column with the $j$-th row and $0$ otherwise. The constant matrices with non-vanishing scaling dimensions are:\par ~\par {{ {\it i}) for the $(2,2,0)$ $D$-module representation, \bea s=\frac{1}{2}:&& {\textrm{for}} \quad E_{13}, E_{24}\in {\cal G}_{10}\quad {\textrm{and}}\quad E_{14}, E_{23}\in {\cal G}_{01},\nonumber\\ s=-\frac{1}{2}:&& {\textrm{for}} \quad E_{31}, E_{42}\in {\cal G}_{10}\quad {\textrm{and}}\quad E_{32}, E_{41}\in {\cal G}_{01}; \eea {\it ii}) for the $(1,2,1)_{[11]}$ $D$-module representation, \bea s=1:&& {\textrm{for}} \quad E_{21}\in {\cal G}_{11},\nonumber\\ s=\frac{1}{2}:&& {\textrm{for}} \quad E_{24}, E_{31}\in {\cal G}_{10}\quad {\textrm{and}}\quad E_{23}, E_{41}\in {\cal G}_{01},\nonumber\\ s=-\frac{1}{2}:&& {\textrm{for}} \quad E_{13}, E_{42}\in {\cal G}_{10}\quad {\textrm{and}}\quad E_{14}, E_{32}\in {\cal G}_{01},\nonumber\\ s=-1:&& {\textrm{for}} \quad E_{12}\in {\cal G}_{11}; \eea {\it iii}) for the $(1,2,1)_{[00]}$ $D$-module representation, \bea s=1:&& {\textrm{for}} \quad E_{12}\in {\cal G}_{11},\nonumber\\
s=\frac{1}{2}:&& {\textrm{for}} \quad E_{13}, E_{42}\in {\cal G}_{10}\quad {\textrm{and}}\quad E_{14}, E_{32}\in {\cal G}_{01},\nonumber\\ s=-\frac{1}{2}:&& {\textrm{for}} \quad E_{24}, E_{31}\in {\cal G}_{10}\quad {\textrm{and}}\quad E_{23}, E_{41}\in {\cal G}_{01},\nonumber\\ s=-1:&& {\textrm{for}} \quad E_{21}\in {\cal G}_{11}. \eea }} \par ~\par \renewcommand{\theequation}{B.\arabic{equation}} \setcounter{equation}{0} {\Large{\bf{Appendix B: revisiting the ${\cal N}=2$ supersymmetric action \\ $~~~~~~~~~~~~~~~~~~~~~$ for the real supermultiplet }}} \par ~\par The ${\cal N}=2$ supersymmetric action of the real supermultiplet is well known {\textcolor{black}{ \cite{{DVF},{witten},{FreTow}}.}} It consists of a constant kinetic term plus a generic potential term; it is obtained either from a superfield \textcolor{black}{\cite{BellKri}} or from a $(1,2,1)$ $D$-module approach. We present here {\textcolor{black}{for illustrative purposes}} a derivation of this action as recovered from a sigma-model Lagrangian {\textcolor{black}{plus a Fayet-Iliopoulos \cite{FayIli} linear potential term}}. {\textcolor{black}{ It is the same method which was used in Section {\bf 5} to obtain the non-trivial ${\mathbb Z}_2\times{\mathbb Z}_2$-graded invariant actions for the $(1,2,1)_{[11]}$ and $(1,2,1)_{[00]}$ multiplets. }} {\textcolor{black}{Indeed, the ${\mathbb Z}_2\times{\mathbb Z}_2$-graded symmetry differs from, and is more stringent than, ordinary supersymmetry, so that not all methods which work in the supersymmetric case have a counterpart which is applicable to the ${\mathbb Z}_2\times{\mathbb Z}_2$-graded case. In particular the ${\mathbb Z}_2\times{\mathbb Z}_2$-graded symmetry forces the potential term to be linear.}} \par The four time-dependent fields of the ${\cal N}=2$ model are denoted as $x$ (the propagating boson), $\psi $, $\xi$ (the fermionic fields) and $z$ (the auxiliary bosonic field).
Their field transformations are \bea &\begin{array}{llll} Q_1x = \psi,\quad & Q_1 z = {\dot \xi},\quad & Q_1\psi = {\dot x}, \quad& Q_1\xi = z,\\ Q_2x = \xi,\quad & Q_2 z = -{\dot \psi},\quad & Q_2\psi = -z,\quad& Q_2\xi = {\dot x}. \end{array}& \eea The one-dimensional ${\cal N}=2$ supersymmetry algebra (with generators $Q_1, Q_2, H$) satisfies \bea \{Q_i,Q_j\}=2\delta_{ij}H, &\quad & [H,Q_i]=0, \qquad {\textrm{for}}\quad i,j=1,2. \eea The standard construction of the invariant action is made through the position \bea {\cal S} = \int d\tau {\cal L},&& {\textrm{where}}\quad {\cal L} = {\cal L}_{kin} + {\cal L}_{pot}. \eea The kinetic and potential terms of the Lagrangian are \bea {\cal L}_{kin} =\frac{1}{2}\left( {\dot x}^2+z^2 -\psi{\dot\psi} - \xi{\dot \xi}\right), && {\cal L}_{pot} = W(x)z + W_x(x)\psi\xi. \eea They are both manifestly supersymmetric invariant (up to a time derivative), being given by \bea {\cal L}_{kin} = -\frac{1}{2}Q_1Q_2 ( \psi\xi), && {\cal L}_{pot} = Q_1Q_2(\Phi(x))= \Phi_xz+\Phi_{xx}\psi\xi. \eea The second equation implies the identification $W(x)=\Phi_x(x)$.\par After solving the $z= - W(x)$ algebraic equation of motion for $z$, the Lagrangian ${\cal L}$ can be expressed as \bea\label{n2lagr} {\cal L} &=& \frac{1}{2} ({\dot x}^2 -\psi{\dot\psi}-\xi{\dot\xi})-\frac{1}{2}W(x)^2+W_x\psi\xi. \eea The alternative formulation that we are presenting here can be obtained by expressing the ${\cal N}=2$ invariant action in terms of a sigma-model Lagrangian ${\cal L}_\sigma$ plus a linear in $z$ potential term ${\cal L}_{lin}$. They are \bea {\cal L}_{\sigma}= Q_1Q_2(f(x)z), && {\cal L}_{lin} = \frac{1}{2}\mu z. \eea The total Lagrangian is \bea\label{totallag} {\cal L}&=& {\cal L}_\sigma+{\cal L}_{lin}=\frac{1}{2}\phi(x)({\dot x}^2+z^2-\psi{\dot\psi}-\xi{\dot\xi})+\frac{1}{2}\phi_xz\psi\xi+\frac{1}{2}\mu z,\quad {\textrm{for}} \quad \phi(x)= 2f_x. 
\eea The algebraic equation of motion for $z$ gives \bea z&=& -\frac{1}{2\phi}\mu-\frac{\phi_x}{2\phi}\psi\xi. \eea By substituting the right hand side into the Lagrangian we obtain \bea {\cal L}&=& \frac{1}{2}\phi({\dot x}^2-\psi{\dot\psi}-\xi{\dot\xi})-\frac{1}{8\phi}\mu^2-\frac{\phi_x}{4\phi}\mu\psi\xi. \eea By performing non-linear transformations on the component fields, we can express the Lagrangian in the so-called ``constant kinetic term'' basis \textcolor{black}{\cite{HoTo}}. We set \bea\label{barredfieldsapp} &{\overline x}= C(x),\quad {\overline\psi} = C_x\psi,\quad {\overline \xi}= C_x \xi,& \qquad {\textrm{where}}\quad C_x= \sqrt{\phi}. \eea We then get, at first, the intermediate expression \bea\label{intermediate} {\cal L}&=& \frac{1}{2}({\dot {\overline x}}^2-{\overline\psi}{\dot{\overline\psi}}-{\overline \xi}{\dot{\overline\xi}}) -\frac{1}{8} (\frac{\mu}{C_x})^2-\frac{\mu}{2}\frac{C_{xx}}{(C_x)^3} {\overline \psi}{\overline \xi}. \eea The position \bea &W({\overline x})=W(C(x))=\frac{\mu}{2C_x(x)}& \eea allows one to identify (by replacing the fields $x,\psi,\xi$ with their respective barred expressions) the Lagrangian (\ref{intermediate}) with the Lagrangian (\ref{n2lagr}): \bea {\cal L}&=& \frac{1}{2}({\dot {\overline x}}^2-{\overline\psi}{\dot{\overline\psi}}-{\overline \xi}{\dot{\overline\xi}})-\frac{1}{2}W^2({\overline x})+W_{\overline x}{\overline\psi}{\overline \xi}. \eea As an example of the construction, the harmonic oscillator and the inverse-square potentials are respectively recovered from\par {\it i}) the harmonic oscillator potential, $W^2({\overline x})=A^2{\overline x}^2$, so that \bea &W({\overline x}) =A{\overline x},\qquad C(x) =\sqrt{\frac{\mu x}{A}}, \qquad \phi (x) = \frac{\mu}{4Ax};& \eea {\it ii}) the inverse square potential, $W^2({\overline x})=(\frac{g}{{\overline x}})^2$, so that \bea &W({\overline x}) = \frac{g}{{\overline x}},\qquad C(x) =e^{\frac{\mu x}{2g}}, \qquad \phi = \frac{\mu^2}{4g^2}e^{\frac{\mu x}{g}}.& \eea
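Both identifications can be verified with a short symbolic computation checking the defining relation $W(C(x))=\mu/2C_x(x)$ together with $\phi=(C_x)^2$. The sketch below uses sympy; the variable names are ours and all symbols are taken positive for simplicity.

```python
import sympy as sp

x, mu, A, g = sp.symbols('x mu A g', positive=True)

# i) harmonic oscillator: W(xbar) = A*xbar with C(x) = sqrt(mu*x/A)
C = sp.sqrt(mu*x/A)
Cx = sp.diff(C, x)
assert sp.simplify(A*C - mu/(2*Cx)) == 0            # W(C(x)) = mu/(2 C_x)
assert sp.simplify(Cx**2 - mu/(4*A*x)) == 0         # phi = (C_x)^2

# ii) inverse square: W(xbar) = g/xbar with C(x) = exp(mu*x/(2g))
C2 = sp.exp(mu*x/(2*g))
C2x = sp.diff(C2, x)
assert sp.simplify(g/C2 - mu/(2*C2x)) == 0
assert sp.simplify(C2x**2 - mu**2/(4*g**2)*sp.exp(mu*x/g)) == 0
```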
cond-mat/9706088
\section{Introduction} Bose-Einstein condensation is central to much of our understanding of phenomena in condensed matter physics \cite{Huang}. It is one of the simplest processes where quantum effects manifest themselves on the macroscopic level when a finite fraction of the non-interacting bosons in a system start to occupy the lowest energy level. Although all the particles will then be in the same quantum state at zero temperature, this condensate does not have real long-range order and does not truly represent a different phase. It was Bogoliubov \cite{Bogoliubov} who first showed that a short-range repulsion between the particles is necessary in order to have a real condensate with the particles in a new physical phase which is superfluid. His description of the condensation of interacting bosons has since then formed the basis for a much more detailed understanding of these important phenomena \cite{PN}. Until very recently the only physical Bose-Einstein system of non-relativistic particles exhibiting a phase transition at low temperatures was liquid $He^4$. But here the particle density is so high that the system is strongly interacting and perturbation theory around the free system does not work \cite{PN}. However, with the recent experimental progress made in connection with magnetically trapped bosons in the gas phase \cite{BEC}, the situation has radically changed and systems of weakly interacting bosons can now be studied. These were theoretically investigated in a series of papers by Lee, Yang and their collaborators forty years ago using methods from statistical mechanics \cite{LeeYang_2,LeeYang_1,LHY,LeeYang_3}. Their many results still represent to a large degree the state of theoretical understanding of weakly interacting boson gases. In the normal phase one can assign a definite number of particles to the system while this is impossible after the condensate has formed \cite{PWA}.
Bose-Einstein condensation of interacting particles is therefore the oldest and still probably the simplest example of spontaneous symmetry breakdown which today lies at the very heart of modern elementary particle theory \cite{Tony}. Since these theories are relativistic, it is not obvious how Bogoliubov's method can be used in this case. Instead one has developed very powerful methods based on Feynman's path integral formulation of quantum field theories \cite{RPF} which allow the calculation of the corresponding effective potentials in a very systematic way \cite{Bernard,DJ,Weinberg}. This approach to spontaneous symmetry breakdown based upon functional me\-thods has not yet been used to the same extent in the study of Bose-Einstein condensation of non-relativistic systems although the basic elements are already in a modern textbook \cite{Brown}. In two dimensions it has been used by Lozano \cite{Lozano}. It is not clear if this approach is equivalent to the Bogoliubov method or not. This is what we have set out to investigate here. A first step in this direction was taken several years ago by Kapusta \cite{Kapusta_1} who considered interacting systems of relativistic bosons at non-zero chemical potential and their condensation at low temperatures. More recently, Bernstein and Dodelson \cite{BD} and Benson, Bernstein and Dodelson \cite{BBD} extended these relativistic calculations and also considered the non-relativistic limit. We will here show that their finite-temperature results are incomplete in that they have not included the contributions from the ring or daisy diagrams which are known to be essential at non-zero temperatures \cite{DJ,Weinberg,Carrington,AE}. Since then, the non-relativistic Bose gas has also been studied by Stoof and Bijlsma \cite{SB}. The use of functional methods in quantum statistical physics of non-relativistic systems is not yet as widespread as for relativistic systems.
One of the best introductions has been given by Popov \cite{Popov}. Our approach is different and more along the lines used in relativistic quantum field theories \cite{Kapusta_2}, but we will to a large degree reproduce his results. The necessary formalism is established in the next section where we will derive the thermodynamics of a gas of free bosons in this language. We will work with the two real components of the field instead of the complex field itself and its conjugate, which is usually done in condensed matter physics \cite{FW,Griffin}. We find this choice of variables especially advantageous in the case of interacting particles at zero temperature considered in Section 3. We calculate the effective potential and free energy at non-zero chemical potential in the one-loop approximation where we include the quantum effects of the fluctuations around the classical solution. After removing the divergences in the theory by renormalization of the coupling constant and the chemical potential, we find the ground state energy of the hard-core bosons to be in exact agreement with the standard results of Lee and Yang \cite{LeeYang_2}. In Section 4 we extend the calculation of the effective potential to finite temperatures using the imaginary-time formalism. We discover that the one-loop approximation is no longer consistent with the Goldstone theorem requiring the excitation spectrum to be linear in the long wavelength limit. The problem is solved by including the so-called daisy or ring corrections to the boson propagator. These important contributions to the free energy of relativistic quantum field theories at finite temperature were first discussed by Dolan and Jackiw \cite{DJ} and Weinberg \cite{Weinberg}. They have recently become of importance in connection with the standard model of elementary particles at finite temperatures \cite{Carrington,AE}.
In the last section we discuss the obtained results and compare them with what has been obtained by other methods. This functional approach now allows in principle a systematic calculation of higher loop corrections to the thermodynamics of the interacting boson gas. \section{Functional methods for the non-relativistic boson gas} The wavefunction $\psi = \psi({\bf x},t)$ for a free, non-relativistic particle of mass $m$ satisfies the Schr\"odinger equation \begin{eqnarray} i{\partial}_t\psi = - {1\over 2m}\nabla^2 \psi \label{free.1} \end{eqnarray} when we use units so that $\hbar = 1$. In the second quantized description of a system of many such particles $\psi({\bf x},t)$ becomes the corresponding quantum field. The wave equation (\ref{free.1}) is then the classical equation of motion. It follows from the Schr\"odinger Lagrangian \begin{eqnarray} {\cal L} = i\psi^*{\partial}_t\psi - {1\over 2m}|{\bg\nabla}\psi|^2 \label{free.2} \end{eqnarray} Constructing now the Hamiltonian $H$ and the number operator $N = \int\!d^3x\,\psi^*\psi$, the grand canonical partition function for the system is then given by \begin{eqnarray} \Xi(\beta,\mu) = \mbox{Tr}\,e^{-\beta(H - \mu N)} \label{free.3} \end{eqnarray} where $\beta = 1/T$ when the system is in thermal equilibrium at temperature $T$ and chemical potential $\mu$. The Boltzmann constant is taken to be $k_B = 1$. Rewriting now the trace as a path integral \cite{RPF}, one obtains the functional integral \begin{eqnarray} \Xi(\beta,\mu) = \int\!{\cal D}\psi{\cal D}\psi^*e^{-\int_0^\beta\!d\tau\!\int\!d^3x\, {\cal L}_E(\psi^*,\psi)} \label{free.4} \end{eqnarray} where the field $\psi = \psi({\bf x},\tau)$ is now a function of imaginary time $\tau = it$. Its dynamics is governed by the Euclidean Lagrangian density \begin{eqnarray} {\cal L}_E = \psi^*{\partial}_{\tau}\psi + {1\over 2m}|{\bg\nabla}\psi|^2 - \mu\psi^*\psi \label{free.5} \end{eqnarray} where we have included the contribution from the chemical potential.
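As a consistency check, one can verify that the Euler-Lagrange equation of the Lagrangian (\ref{free.2}) reproduces the Schr\"odinger equation (\ref{free.1}). The sketch below does this in one space dimension with sympy, treating $\psi$ and $\psi^*$ as independent fields (names are ours).

```python
import sympy as sp

t, X = sp.symbols('t X')
m = sp.symbols('m', positive=True)
psi = sp.Function('psi')(X, t)
psic = sp.Function('psic')(X, t)   # stands for psi^*, treated as independent

# 1d version of the Schroedinger Lagrangian (free.2)
L = sp.I*psic*sp.diff(psi, t) - sp.Rational(1, 2)/m*sp.diff(psic, X)*sp.diff(psi, X)

# Euler-Lagrange equation obtained by varying psi^*
el = (sp.diff(L, psic)
      - sp.diff(sp.diff(L, sp.diff(psic, t)), t)
      - sp.diff(sp.diff(L, sp.diff(psic, X)), X))

schroedinger = sp.I*sp.diff(psi, t) + sp.Rational(1, 2)/m*sp.diff(psi, X, 2)
assert sp.simplify(el - schroedinger) == 0
```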
\subsection{Complex field formalism} Since the integral (\ref{free.4}) is Gaussian, it can be immediately evaluated. The result for the free energy $\Omega = - \ln\Xi/\beta$ can then be written as \begin{eqnarray} \Omega(\beta,\mu) = {1\over\beta}\mbox{Tr}\ln(\partial_\tau - {\nabla}^2/2m - \mu) \end{eqnarray} The functional trace is given by eigenvalues of the indicated operator. They are found by expanding the Bose field in plane waves \begin{eqnarray} \psi({\bf x},\tau) = \sqrt{1\over\beta V} \sum_{n=-\infty}^\infty\sum_{{\bf k}} \psi_{n{\bf k}}\,e^{i{\bf k}\cdot{\bf x} + i\omega_n\!\tau} \label{free.6} \end{eqnarray} where $\omega_n = 2\pi n/\beta$ are the corresponding Matsubara frequencies. Then we have \begin{eqnarray} \Omega(\beta,\mu) = {1\over\beta}\sum_{{\bf k}}\sum_{n=-\infty}^\infty \ln(-i\omega_n + \varepsilon_{\bf k} - \mu) \label{free.6b} \end{eqnarray} where $\varepsilon_{\bf k} = {\bf k}^2/2m$ is the single-particle energy. One can regularize the divergent sum by taking the derivative with respect to $e_{\bf k} = \varepsilon_{\bf k} - \mu$. Using then the standard sum \begin{eqnarray} \sum_{n=-\infty}^\infty {1\over \omega_n^2 + \omega^2} = {\beta\over\omega}\left[{1\over 2} + {1\over e^{\beta\omega} - 1}\right] \label{free.7} \end{eqnarray} and integrating back, we have \begin{eqnarray} \Omega(\beta,\mu) = \sum_{{\bf k}}\left[\f{1}{2} e_{\bf k} + T\ln\left(1 - e^{-\beta e_{\bf k}}\right)\right] \label{free.8} \end{eqnarray} after discarding an infinite constant. The pressure in the gas is now simply $P = -\Omega/V$. Taking the infinite-volume limit, it becomes \begin{eqnarray} P(\beta,\mu) = - \int\!{d^3k\over(2\pi)^3}\left[\f{1}{2} (\varepsilon_{\bf k} - \mu) + T\ln\left(1 - e^{-\beta(\varepsilon_{\bf k} - \mu)}\right)\right] \label{free.9} \end{eqnarray} The first term is the zero-point energy which is the only contribution at zero temperature and is usually without physical content. 
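The frequency sum (\ref{free.7}) underlying the regularization above is easy to test numerically. The snippet below compares a truncated Matsubara sum with the closed form; the truncation level and the test points are arbitrary choices of ours.

```python
import math

def matsubara_sum(omega, beta, N=200000):
    """Truncated version of sum over n in [-N, N] of 1/(omega_n^2 + omega^2),
    with omega_n = 2*pi*n/beta."""
    s = 1.0/(omega*omega)                       # n = 0 term
    for n in range(1, N + 1):
        wn = 2.0*math.pi*n/beta
        s += 2.0/(wn*wn + omega*omega)          # n and -n contribute equally
    return s

def closed_form(omega, beta):
    # (beta/omega) * (1/2 + 1/(e^{beta*omega} - 1))
    return (beta/omega)*(0.5 + 1.0/math.expm1(beta*omega))

for beta, omega in [(1.0, 1.0), (2.0, 0.7), (0.5, 3.0)]:
    assert abs(matsubara_sum(omega, beta) - closed_form(omega, beta)) < 1e-4
```

The truncation error falls off like $\beta^2/(2\pi^2 N)$, so the tolerance above is comfortably met.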
However, in the next section where we discuss the interacting theory, it will be important. In order to find the equation of state, we want the pressure as a function of the density $\rho = {\partial} P/{\partial}\mu$. Ignoring the zero-point energy in (\ref{free.9}), we get the ordinary result \begin{eqnarray} \rho = \int\!{d^3k\over(2\pi)^3}{1\over e^{\beta(\varepsilon_{\bf k} - \mu)} - 1} \label{be.1} \end{eqnarray} from which one can calculate the chemical potential $\mu$ as function of temperature and density. From the pressure one then has the equation of state. It involves a critical line, determined by $\mu = 0$, i.e. \begin{eqnarray} \rho = \int\!{d^3k\over(2\pi)^3}{1\over e^{\beta\varepsilon_{\bf k}} - 1} = \zeta(3/2) \left({mT\over 2\pi}\right)^{3/2} \label{be.2} \end{eqnarray} For densities above the critical value, the pressure in the gas is seen to be independent of the density. This condensed phase has thus an infinite compressibility which shows that it is non-physical. A short-ranged, repulsive potential between the particles will solve this problem and the Bose-Einstein condensate will become a physical superfluid \cite{Huang}. \subsection{Real field formalism} When we later consider the interacting gas, we will find it more convenient to take the two real components of the complex field $\psi$ as independent field variables, \begin{eqnarray} \psi = \sqrt{1\over 2} (\psi_1 + i \psi_2) \label{free.10} \end{eqnarray} The Euclidean Lagrangian (\ref{free.5}) can then be written as \begin{eqnarray} {\cal L}_E = {1\over 2}\psi_a \wh{M}_{ab}\psi_b \label{free.11} \end{eqnarray} where equal indices are summed from 1 to 2. We have here introduced the matrix operator \begin{eqnarray} \wh{M}_{ab} = i\epsilon_{ab}{\partial}_\tau - \left({\nabla^2\over 2m} + \mu\right)\delta_{ab} \label{free.12} \end{eqnarray} where $\epsilon_{ab}$ is the antisymmetric tensor in two dimensions with $\epsilon_{12}=1$. 
After expressing the corresponding action in terms of the complex Fourier components in (\ref{free.6}), the partition function (\ref{free.4}) is now given by the functional integral \begin{eqnarray} \Xi(\beta,\mu) = \int\!{\cal D}\psi_1{\cal D}\psi_2\,\exp{\left[-{1\over 2}\sum_{n,{{\bf k}}} (e_{1{\bf k}}\psi_1^*\psi_1 + e_{2{\bf k}}\psi_2^*\psi_2) + \sum_{n,{{\bf k}}}\omega_n(\psi_1^*\psi_2 - \psi_2^*\psi_1)\right]} \label{free.13} \end{eqnarray} The two energies $e_{1{\bf k}}$ and $e_{2{\bf k}}$ are here actually both equal to $e_{\bf k} = \varepsilon_{\bf k} - \mu$. It should also be clear that in this integral we have suppressed the Fourier indices on the field components. From the form of the partition function (\ref{free.13}) we see that in terms of the real field components, the free, non-relativistic field theory is at the fundamental level interacting. The diagonal part of the action moves the fields $\psi_1$ and $\psi_2$ at fixed time with the propagators \begin{eqnarray} \setcoordinatesystem units <1cm,1cm> \unitlength=1cm \plot -0.9 0.12 -0.1 0.12 / : D_{11}^{(0)} = {1\over e_{1{\bf k}}} \hspace{3.5cm} \setdashes <2pt> \plot -0.9 0.12 -0.1 0.12 / : D_{22}^{(0)} = {1\over e_{2{\bf k}}} \label{free.14} \end{eqnarray} while the kinetic term provides the interaction with the simple vertex \begin{eqnarray} \setcoordinatesystem units <1cm,1cm> \unitlength=1cm \plot -1.0 0.12 -0.2 0.12 / \put {\circle{0.2}} [B1] at 0.025 0.12 \setdashes <2pt> \plot 0.05 0.12 0.85 0.12 / \hspace{1cm} : \omega_n \hspace{3.5cm} \plot -1.0 0.12 -0.2 0.12 / \put {\circle{0.2}} [B1] at 0.025 0.12 \setsolid \plot 0.05 0.12 0.85 0.12 / \hspace{1cm} : -\omega_n \label{free.15} \end{eqnarray} of strength given by the Matsubara frequency. The full, free energy is now obtained in standard perturbation theory which is almost trivially solved to all orders in the interaction. 
First, we need the partition function of the free theory \begin{eqnarray} \Xi_0 & = & \int\!{\cal D}\psi_1{\cal D}\psi_2\,\exp{\left[-{1\over 2}\sum_{n,{{\bf k}}} (e_{1{\bf k}}\psi_1^*\psi_1 + e_{2{\bf k}}\psi_2^*\psi_2)\right]} \nonumber \\ & = & \mbox{det}^{-\f{1}{2}}(e_{1{\bf k}})\,\mbox{det}^{-\f{1}{2}}(e_{2{\bf k}}) \label{free.16} \end{eqnarray} Taking the logarithm, we find the non-interacting result \begin{eqnarray} \beta\Omega_0 & = & {1\over 2}\left[\mbox{Tr}\ln e_{1{\bf k}} + \mbox{Tr}\ln e_{2{\bf k}}\right] \nonumber\\ & = & -\left[\frac{1}{2}\ln \setcoordinatesystem units <1cm,1cm> \unitlength=1cm \circulararc 360 degrees from 0.9 0.12 center at 0.5 0.12 \hspace{1cm} + \frac{1}{2}\ln \setdashes <1.50pt> \circulararc 360 degrees from 0.9 0.12 center at 0.5 0.12 \hspace{1.00 cm} \right] \label{free.17} \end{eqnarray} where the closed loops denote the trace over the variables in the propagators. The kinetic interaction will now perturb the fields in these two loop diagrams. A 1-field will be converted to a 2-field and vice versa with the coupling constant $\omega_n$. Since the free propagator $\ex{\psi_1\psi_2}_0 = 0$, only loops with an even number of interactions will contribute, i.e. with the same number of 1- and 2-fields.
We then find for the full free energy \begin{eqnarray} \beta\Omega & = & -\left[\frac{1}{2}\ln \setcoordinatesystem units <1cm,1cm> \unitlength=1cm \circulararc 360 degrees from 0.9 0.12 center at 0.5 0.12 \hspace{1cm} + \frac{1}{2}\ln \setdashes <1.50pt> \circulararc 360 degrees from 0.9 0.12 center at 0.5 0.12 \hspace{1.00 cm} + \frac{1}{2} \put {\circle{0.16}} [Bl] at 0.50 0.52 \setsolid \circulararc 156 degrees from 0.412 0.510 center at 0.500 0.120 \put {\circle{0.16}} [Bl] at 0.50 -0.28 \setdashes <1.50pt> \circulararc 156 degrees from 0.588 -0.270 center at 0.500 0.120 \hspace{1.00 cm} + \frac{1}{4} \put {\circle{0.16}} [Bl] at 0.50 0.52 \setsolid \circulararc 66 degrees from 0.412 0.510 center at 0.500 0.120 \put {\circle{0.16}} [Bl] at 0.10 0.12 \setdashes <1.50pt> \circulararc 66 degrees from 0.110 0.032 center at 0.500 0.120 \put {\circle{0.16}} [Bl] at 0.50 -0.28 \setsolid \circulararc 66 degrees from 0.588 -0.270 center at 0.500 0.120 \put {\circle{0.16}} [Bl] at 0.90 0.12 \setdashes <1.50pt> \circulararc 66 degrees from 0.890 0.208 center at 0.500 0.120 \hspace{1.00 cm} + \frac{1}{6} \put {\circle{0.16}} [Bl] at 0.50 0.52 \setsolid \circulararc 36 degrees from 0.412 0.510 center at 0.500 0.120 \put {\circle{0.16}} [Bl] at 0.15 0.32 \setdashes <1.50pt> \circulararc 36 degrees from 0.118 0.239 center at 0.500 0.120 \put {\circle{0.16}} [Bl] at 0.15 -0.08 \setsolid \circulararc 36 degrees from 0.206 -0.151 center at 0.500 0.120 \put {\circle{0.16}} [Bl] at 0.50 -0.28 \setdashes <1.50pt> \circulararc 36 degrees from 0.588 -0.270 center at 0.500 0.120 \put {\circle{0.16}} [Bl] at 0.85 -0.08 \setsolid \circulararc 36 degrees from 0.882 0.001 center at 0.500 0.120 \put {\circle{0.16}} [Bl] at 0.85 0.32 \setdashes <1.50pt> \circulararc 36 degrees from 0.794 0.391 center at 0.500 0.120 \hspace{1.00 cm} + \cdots\right] \nonumber\\ & = & {1\over 2}\sum_{n,{\bf k}}\left[\ln(e_{1{\bf k}}e_{2{\bf k}}) + {\omega_n^2\over 1(e_{1{\bf k}}e_{2{\bf k}})} - 
{\omega_n^4\over 2(e_{1{\bf k}}e_{2{\bf k}})^2} + {\omega_n^6\over 3(e_{1{\bf k}}e_{2{\bf k}})^3} + \cdots\right] \nonumber\\ & = & {1\over 2}\sum_{n,{\bf k}}\ln(\omega_n^2 + e_{1{\bf k}}e_{2{\bf k}}) \label{free.18} \end{eqnarray} Again regularizing as in (\ref{free.6b}) and summing over the Matsubara frequencies with the help of (\ref{free.7}), we recover the standard Bose-Einstein free energy (\ref{free.8}). \subsection{Free propagators of real fields} The propagators (\ref{free.14}) move only the fields at fixed time. Motion in time is induced by the kinetic interaction in (\ref{free.11}). Its full effect can easily be calculated in perturbation theory. For the 1-field, when we again consider a Fourier mode with a given momentum ${\bf k}$ and Matsubara energy $\omega_n$, we find: \begin{eqnarray} \setcoordinatesystem units <1cm,1cm> \unitlength=1cm D_{11} & = & \ex{\psi_1\psi_1^*} = \plot 10 2.8 50 2.8 / \plot 10 3.0 50 3.0 / \plot 10 3.2 50 3.2 / \plot 10 3.4 50 3.4 / \plot 10 3.6 50 3.6 / \plot 10 3.8 50 3.8 /\nonumber \\ & = & \plot 5 3.3 25 3.3 / \hspace{1.05cm} + \plot 5 3.3 25 3.3 / \put {\circle{6}} [B1] at 32 3.3 \setdashes <2pt> \plot 32 3.3 53 3.3 / \put {\circle{6}} [B1] at 59 3.3 \setsolid \plot 59 3.3 79 3.3 / \hspace{2.95cm} + \plot 5 3.3 25 3.3 / \put {\circle{6}} [B1] at 32 3.3 \setdashes <2pt> \plot 32 3.3 53 3.3 / \put {\circle{6}} [B1] at 59 3.3 \setsolid \plot 59 3.3 79 3.3 / \put {\circle{6}} [B1] at 86 3.3 \setdashes <2pt> \plot 86 3.3 107 3.3 / \put {\circle{6}} [B1] at 113 3.3 \setsolid \plot 113 3.3 133 3.3 / \hspace{4.8cm} + \cdots\nonumber\\ & = & D_{11}^{(0)} + D_{11}^{(0)}(\omega_n)D_{22}^{(0)}(-\omega_n)D_{11}^{(0)}\nonumber \\ & + & D_{11}^{(0)}(\omega_n)D_{22}^{(0)}(-\omega_n)D_{11}^{(0)}(\omega_n)D_{22}^{(0)}(-\omega_n)D_{11}^{(0)} + \cdots\nonumber \\ & = & {D_{11}^{(0)}\over 1 + \omega_n^2D_{11}^{(0)}D_{22}^{(0)}} = {e_{2{\bf k}}\over\omega_n^2 + e_{1{\bf k}}e_{2{\bf k}}} \end{eqnarray} Similarly, we find \begin{eqnarray}
\setcoordinatesystem units <1cm,1cm> \unitlength=1cm D_{22} & = & \ex{\psi_2\psi_2^*} = \setdashes <2pt> \plot 10 2.6 50 2.6 / \plot 10 2.8 50 2.8 / \plot 10 3.0 50 3.0 / \plot 10 3.2 50 3.2 / \plot 10 3.4 50 3.4 / \plot 10 3.6 50 3.6 / \plot 10 3.8 50 3.8 /\nonumber \\ & = & \setdashes <2pt> \plot 5 3.3 25 3.3 / \hspace{1.05cm} + \plot 5 3.3 26 3.3 / \put {\circle{6}} [B1] at 32 3.3 \setsolid \plot 32 3.3 53 3.3 / \put {\circle{6}} [B1] at 59 3.3 \setdashes <2pt> \plot 59 3.3 79 3.3 / \hspace{2.95cm} + \plot 5 3.3 26 3.3 / \put {\circle{6}} [B1] at 32 3.3 \setsolid \plot 32 3.3 53 3.3 / \put {\circle{6}} [B1] at 59 3.3 \setdashes <2pt> \plot 59 3.3 80 3.3 / \put {\circle{6}} [B1] at 86 3.3 \setsolid \plot 86 3.3 107 3.3 / \put {\circle{6}} [B1] at 113 3.3 \setdashes <2pt> \plot 113 3.3 133 3.3 / \hspace{4.8cm} + \cdots\nonumber\\ & = & {e_{1{\bf k}}\over\omega_n^2 + e_{1{\bf k}}e_{2{\bf k}}} \end{eqnarray} and \begin{eqnarray} \setcoordinatesystem units <1cm,1cm> \unitlength=1cm D_{12} & = & \ex{\psi_1\psi_2^*} = \plot 10 2.8 30 2.8 / \plot 10 3.0 30 3.0 / \plot 10 3.2 30 3.2 / \plot 10 3.4 30 3.4 / \plot 10 3.6 30 3.6 / \plot 10 3.8 30 3.8 / \thicklines \put {\circle{6}} [B1] at 36 3.3 \setdashes <2pt> \plot 36 2.8 56 2.8 / \plot 36 3.0 56 3.0 / \plot 36 3.2 56 3.2 / \plot 36 3.4 56 3.4 / \plot 36 3.6 56 3.6 / \plot 36 3.8 56 3.8 /\nonumber \\ & = & \setsolid \plot 5 3.3 25 3.3 / \put {\circle{6}} [B1] at 32 3.3 \setdashes <2pt> \plot 32 3.3 52 3.3 / \hspace{2cm} + \setsolid \plot 5 3.3 25 3.3 / \put {\circle{6}} [B1] at 32 3.3 \setdashes <2pt> \plot 32 3.3 53 3.3 / \put {\circle{6}} [B1] at 59 3.3 \setsolid \plot 59 3.3 79 3.3 / \put {\circle{6}} [B1] at 86 3.3 \setdashes <2pt> \plot 86 3.3 106 3.3 / \hspace{4cm} + \cdots\nonumber \\ & = & {\omega_n\over\omega_n^2 + e_{1{\bf k}}e_{2{\bf k}}} \end{eqnarray} while $D_{21} = -D_{12}$. 
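The geometric series just summed can be cross-checked numerically; a minimal sketch for $D_{11}$ (with illustrative values of $e_{1{\bf k}}$, $e_{2{\bf k}}$ and $\omega_n$; the series converges for $\omega_n^2 < e_{1{\bf k}}e_{2{\bf k}}$):

```python
def d11_series(e1, e2, wn, terms=200):
    # D11 = D0 + D0*(wn)*D22^0*(-wn)*D0 + ...  (each vertex pair gives -wn^2)
    d1, d2 = 1.0 / e1, 1.0 / e2   # static propagators of (free.14)
    total, term = 0.0, d1
    for _ in range(terms):
        total += term
        term *= -wn * wn * d1 * d2  # attach one more (wn)(-wn) vertex pair
    return total

def d11_closed(e1, e2, wn):
    # the summed form e2/(wn^2 + e1*e2)
    return e2 / (wn * wn + e1 * e2)

e1, e2, wn = 2.0, 1.5, 0.8
print(d11_series(e1, e2, wn), d11_closed(e1, e2, wn))
```

The two numbers agree to machine precision, confirming the resummation.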
These results can simply be summed up in the Dyson-Schwinger equations for the full propagators \begin{eqnarray} D_{ab} = D_{ab}^{(0)} + D_{ac}^{(0)}\Pi_{cd}^{(0)}D_{db} \label{prop.5} \end{eqnarray} With the free propagator (\ref{free.14}), $D_{12}^{(0)} = 0$ and the non-diagonal self energy $\Pi_{cd}^{(0)} = \epsilon_{cd}\omega_n$, the equations are easily solved to give the same results as above. We will later use the Schwinger-Dyson equations to calculate the propagators when the bosons have a short-range interaction. Here the fields are essentially free and the propagators can be obtained directly from the matrix operator (\ref{free.12}). Its Fourier transform is \begin{eqnarray} M_{ab} = \left(\begin{array}{rr} e_{1{\bf k}} & -\omega_n \\ \omega_n & e_{2{\bf k}} \end{array}\right) \end{eqnarray} Taking the inverse, we then simply have \begin{eqnarray} D_{ab} = \ex{\psi_a\psi_b} = M_{ab}^{-1} = {1\over\omega_n^2 + e_{1{\bf k}}e_{2{\bf k}}}\left( \begin{array}{rr} e_{2{\bf k}} & \omega_n \\ -\omega_n & e_{1{\bf k}} \end{array}\right) \label{prop.6} \end{eqnarray} which is seen to agree with the previous results. \section{Hard-core bosons in the one-loop approximation} We will here consider the idealized case of bosons having only a repulsive interaction potential $V({\bf r})$ at short distances. The thermodynamics of the gas will then be mostly independent of the detailed shape of the potential which will only enter the results via the $S$-wave scattering length \cite{PN} \begin{eqnarray} a = {m\over 4\pi}\int\!d^3r\, V({\bf r}) \label{int.1} \end{eqnarray} which is positive. This is equivalent to saying that the potential is a $\delta$-function, i.e. $V({\bf r}) = 2\lambda\,\delta({\bf r})$ with the coupling constant $\lambda = 2\pi a/m$. In the second quantized theory it will correspond to an interaction term $\lambda(\psi^*\psi)^2$ in the Lagrangian. 
While the coupling constant $\lambda$ would be dimensionless in the corresponding relativistic theory, it is not in the non-relativistic description we use here. Let us comment briefly upon this point. The Lagrangian of a real relativistic scalar field $\Psi$ can be written as \begin{eqnarray} {\cal L}_0 = \frac{1}{2}{\partial}_\mu\Psi{\partial}^\mu\Psi - \frac{m^2}{2}\Psi^2 - \lambda_0\Psi^4 \end{eqnarray} in real time formalism. Here $\mu = 0\ldots 3$ and $\lambda_0$ is the relativistic coupling constant. Since the action must be dimensionless, the field takes on the same dimension as the inverse time or inverse distance in units where the velocity of light $c=1$. The coupling $\lambda_0$ is therefore dimensionless. We now take the non-relativistic limit by letting $m\rightarrow\infty$. Before doing so, we introduce the non-relativistic field $\psi$ through \begin{eqnarray} \Psi = \frac{1}{\sqrt{2m}}\left(e^{-imt}\psi + e^{imt}\psi^*\right) \end{eqnarray} This leads to a number of terms in the Lagrangian which oscillate with frequency $2m$. They may be dropped as $m\rightarrow\infty$. The resulting non-relativistic Lagrangian takes the form \begin{eqnarray} {\cal L} = i\psi^*{\partial}_t\psi - \frac{1}{2m}|{\bg\nabla}\psi|^2 - \lambda(\psi^*\psi)^2 \end{eqnarray} with a non-relativistic coupling constant $\lambda = 3\lambda_0/2m^2$. This coupling is obviously not dimensionless. \subsection{The classical ground state} Including the above interaction, the {\em Euclidean} Lagrangian (\ref{free.5}) describing the bosons is changed into \begin{eqnarray} {\cal L}_E = \psi^*{\partial}_{\tau}\psi + {1\over 2m}|{\bg\nabla}\psi|^2 - \mu\psi^*\psi + \lambda(\psi^*\psi)^2 \label{int.3} \end{eqnarray} In the classical limit at zero temperature the system will be in the lowest energy state. 
The field will then attain a constant value given by the minimum of the classical potential \begin{eqnarray} U(\psi) = - \mu\psi^*\psi + \lambda(\psi^*\psi)^2 \label{int.4} \end{eqnarray} It is invariant under the $U(1)$ phase transformation $\psi({\bf x}) \rightarrow e^{i\theta} \psi({\bf x})$ and thus depends only on the modulus $|\psi|$ of the field. In Fig.1 the classical potential is plotted for the two cases $\mu > 0$ and $\mu < 0$. We see that in the first case, we will have a ground state with spontaneous breakdown of the $U(1)$ symmetry in which the field takes the classical value $|\psi| = \sqrt{\mu/2\lambda}$. We will in later sections see that $\mu$ will be negative at high temperatures and the gas will be in the normal state with $|\psi| = 0$. \begin{figure}[htb] \begin{center} \mbox{\psfig{figure=bec3Dfig1.ps,width=11cm,angle=270,height=8cm}} \end{center} \caption[The classical potential $U(\psi)$ plotted as function of the field modulus $|\psi|$ in arbitrary units.]{\footnotesize The classical potential $U(\psi)$ plotted as function of the field modulus $|\psi|$ in arbitrary units.} \label{bec3D-fig:class} \end{figure} To be more quantitative, the symmetry-broken ground state has a classical pressure given by the minimum of the potential (\ref{int.4}), i.e. $P(\mu) = \mu^2/4\lambda$. The corresponding number density is then $\rho = \mu/2\lambda$ and therefore $P = \lambda\rho^2$. This is also the classical ground state energy density ${\cal E}(\rho)$ as follows from the thermodynamic Legendre transform \begin{eqnarray} {\cal E}(\rho) = \mu\rho - P(\mu) \label{int.5} \end{eqnarray} In this way we have already cured one of the problems of the ideal Bose-Einstein gas, namely the infinite compressibility of the condensed phase. What is needed next is the inclusion of quantum and thermal fluctuations around this classical ground state. 
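These classical relations can be confirmed by minimizing the potential numerically; a small sketch with illustrative values of $\mu$ and $\lambda$, scanning $|\psi|^2$ on a grid:

```python
def classical_minimum(mu, lam, n=200000, x2_max=5.0):
    # Scan U = -mu*x2 + lam*x2**2 over x2 = |psi|^2; assumes mu, lam > 0
    best_x2, best_u = 0.0, 0.0
    for i in range(1, n + 1):
        x2 = i * x2_max / n
        u = -mu * x2 + lam * x2 * x2
        if u < best_u:
            best_x2, best_u = x2, u
    return best_x2, -best_u  # condensate density rho and pressure P = -U_min

mu, lam = 1.3, 0.7
rho, P = classical_minimum(mu, lam)
print(rho, mu / (2 * lam))      # rho = mu/(2*lam)
print(P, mu * mu / (4 * lam))   # P = mu^2/(4*lam)
```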
\subsection{Gaussian fluctuations} Denoting the fluctuating field by $\chi$, we can write the full Bose field as \begin{eqnarray} \psi(x) = \sqrt{\f{1}{2}}\,[v + \chi(x)] \label{fluct.1} \end{eqnarray} where $v$ is the constant field of the condensate. Inserting this into the Lagrangian (\ref{int.3}), we can rewrite it as \begin{eqnarray} {\cal L}_E = - {\mu\over 2}|v|^2 + {\lambda\over 4}|v|^4 + {1\over 2}\chi_a\wh{M}_{ab}\chi_b + \lambda (v\cdot\chi)(\chi\cdot\chi) + {\lambda\over 4}(\chi\cdot\chi)^2 \label{fluct.2} \end{eqnarray} when expressed in terms of the two real components of the complex fields $\chi$ and $v$. Terms linear in $\chi$ have been dropped since they will not contribute after integration over space. The matrix operator $\wh{M}_{ab}$ is now \begin{eqnarray} \wh{M}_{ab} = i\epsilon_{ab}{\partial}_\tau - \left({\nabla^2\over 2m} -\lambda(v\cdot v) + \mu\right)\!\delta_{ab} + 2\lambda\,v_av_b \label{fluct.3} \end{eqnarray} Due to the $U(1)$ symmetry of the system, we can choose the classical field $v$ to be real. Taking the Fourier transform of the operator, it becomes \begin{eqnarray} M_{ab} = \left(\begin{array}{cc} \varepsilon_{\bf k} - \mu + 3\lambda v^2 & -\omega_n \\ \omega_n & \varepsilon_{\bf k} - \mu + \lambda v^2 \end{array}\right) \label{fluct.4} \end{eqnarray} where again $\varepsilon_{\bf k} = {\bf k}^2/2m$. The grand canonical partition function (\ref{free.4}) can now be written as a functional integral over the real fields $\chi_1$ and $\chi_2$ as \begin{eqnarray} \Xi(\beta,\mu) = e^{-\beta V(-{\mu\over 2}v^2 + {\lambda\over 4}v^4)} \int\!{\cal D}\chi_1{\cal D}\chi_2\,e^{-\int_0^\beta\!d\tau\!\int\!d^3x\, [{1\over 2}\chi_a\wh{M}_{ab}\chi_b + {\cal L}_{int}]} \label{fluct.5} \end{eqnarray} where $V$ is the volume of the system and \begin{eqnarray} {\cal L}_{int} = \lambda v\chi_1(\chi\cdot\chi) + {\lambda\over 4}(\chi\cdot\chi)^2 \label{fluct.5b} \end{eqnarray} gives the interactions of the fluctuating field.
The contribution from the first part of the Lagrangian, being quadratic in the fields, can be evaluated as for the free theory in Section 2 and will be the one-loop result. Higher loop corrections due to the cubic and quartic terms in ${\cal L}_{int}$ can then be systematically calculated in perturbation theory. These will be especially important at finite temperatures and will be considered in the next section. In the one-loop approximation, we keep only the quadratic part of the Lagrangian. The integral (\ref{fluct.5}) is then Gaussian and gives the corresponding free energy \begin{eqnarray} {1\over V}\,\Omega(\mu,v) = -{\mu\over 2}v^2 + {\lambda\over 4}v^4 + {1\over 2\beta V}\ln\mbox{det} M \label{fluct.6} \end{eqnarray} From (\ref{fluct.4}) we find the determinant $\mbox{det} M = \prod_{n,{\bf k}}(\omega_n^2 + \omega_{\bf k}^2)$ where the excitation energy is now \begin{eqnarray} \omega_{\bf k} = \sqrt{(\varepsilon_{\bf k} - \mu + \lambda v^2) (\varepsilon_{\bf k} - \mu + 3\lambda v^2)} \label{fluct.7} \end{eqnarray} Summing over the Matsubara frequencies using (\ref{free.7}) again, we obtain for the free energy (\ref{fluct.6}) \begin{eqnarray} {1\over V}\,\Omega(\mu,v) = -{\mu\over 2}v^2 + {\lambda\over 4}v^4 + {1\over V}\sum_{\bf k} \left[{1\over 2}\omega_{\bf k} + T\ln\left(1 - e^{-\beta\omega_{\bf k}}\right)\right] \label{fluct.8} \end{eqnarray} As a function of the condensate $v$, this is the effective potential $U_{eff}$ for the system in the one-loop approximation. The thermodynamical free energy is the value of the function in its minimum and equals the negative of the pressure $P(\mu,T)$. Due to the quantum fluctuations, the minimum will be slightly shifted away from its classical location at $v^2 = \mu/\lambda$.
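For a single mode, the factorization $\det M = \omega_n^2 + \omega_{\bf k}^2$ used here can be checked directly; a trivial numerical sketch with illustrative parameter values:

```python
import math

def det_M(eps, mu, lam, v, wn):
    # det of [[eps - mu + 3*lam*v^2, -wn], [wn, eps - mu + lam*v^2]]
    e1 = eps - mu + 3 * lam * v * v
    e2 = eps - mu + lam * v * v
    return e1 * e2 + wn * wn

def omega_k(eps, mu, lam, v):
    # excitation energy of (fluct.7)
    return math.sqrt((eps - mu + lam * v * v) * (eps - mu + 3 * lam * v * v))

eps, mu, lam, v, wn = 0.8, 0.4, 0.6, 1.1, 1.3
print(det_M(eps, mu, lam, v, wn), wn**2 + omega_k(eps, mu, lam, v)**2)
```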
For this value of the condensate, the dispersion relation (\ref{fluct.7}) simplifies to the Bogoliubov result \cite{Bogoliubov} \begin{eqnarray} \omega = \sqrt{\varepsilon(\varepsilon + 2\mu)} = {k\over 2m}\sqrt{k^2 + 4m\mu} \label{fluct.9} \end{eqnarray} Here, and in the following we drop the wave number index on $\omega$. In the long-wavelength limit $k\rightarrow 0$ it becomes linear and represents the phonon excitations. This is a consequence of the Goldstone theorem which requires excitations with a linear dispersion relation whenever a continuous symmetry is spontaneously broken as here \cite{PWA}. We will find in Section 4 that the quantum corrections will not change this dispersion relation except that the classical chemical potential $\mu$ is replaced by an effective potential $\td{\mu}$ which results from the quantum self energies in the field propagators. \subsection{Renormalization} The infinite sum (\ref{fluct.8}) is seen to be strongly divergent in the ultraviolet limit $k \rightarrow \infty$. A similar divergence was also found in the free theory in Section 2. We can thus remove the strongest divergence by subtracting the first term in (\ref{free.8}) so that we recover the pressure in the non-interacting gas. Taking the infinite-volume limit we then have for the effective potential \begin{eqnarray} U_{eff}(v,\mu,T) = -{\mu\over 2}v^2 + {\lambda\over 4}v^4 + \int\!{d^3k\over(2\pi)^3}\!\left[{1\over 2}(\omega - \varepsilon + \mu) + T\ln\left(1 - e^{-\beta\omega}\right)\right] \label{fluct.8b} \end{eqnarray} However, the integral representing the Gaussian fluctuations at zero temperature is still not finite. The divergences can be removed by renormalizing the coupling constant $\lambda$ and the chemical potential $\mu$. 
For this purpose we introduce the counter-terms \begin{eqnarray} {\cal L}_{ct} & = &- \delta\mu\psi^*\psi + \delta\lambda(\psi^*\psi)^2 = - {1\over 2}\delta\mu\,v^2 + {1\over 4}\delta\lambda\,v^4 + {1\over 4}\delta\lambda(\chi\cdot\chi)^2 \nonumber \\ & - & {1\over 2}(\delta\mu - 3v^2\delta\lambda)\chi_1^2 - {1\over 2}(\delta\mu - v^2\delta\lambda)\chi_2^2 \label{ct.1} \end{eqnarray} resulting directly from the classical potential (\ref{int.4}). Again we have ignored terms linear in the fluctuating field $\chi$. At zero temperature we then get for the renormalized effective potential \begin{eqnarray} U_{eff}(v,\mu,T=0) & = & -{\mu\over 2}v^2 + {\lambda\over 4}v^4 \nonumber\\ & + & {1\over 2}\int\!{d^3k\over(2\pi)^3}\!\left(\omega - \varepsilon + \mu\right) - {1\over 2}\delta\mu\,v^2 + {1\over 4}\delta\lambda\,v^4 \label{re.2} \end{eqnarray} This can now be made finite by adjusting the quantities $\delta\mu$ and $\delta\lambda$. The temperature-dependent contribution in (\ref{fluct.8b}) is finite by itself and thus the full free energy will be finite. Explicit expressions for the counter-terms can be obtained by isolating the two leading divergences in the integral (\ref{re.2}) by expanding the expression (\ref{fluct.7}) for $\omega$ for large values of the momentum $k$. Cutting off the integral at $k = \Lambda$, we can then rewrite $U_{eff}$ as \begin{eqnarray*} && U_{eff}(v,\mu,T=0) = -{\mu\over 2}v^2 + {\lambda\over 4}v^4 - {1\over 2}\delta\mu\,v^2 + {1\over 4}\delta\lambda\,v^4 \\ & + & {1\over 2}\int^\Lambda\!{d^3k\over(2\pi)^3}\! \left(\omega - \varepsilon + \mu - 2\lambda v^2 + {\lambda^2v^4\over 2\varepsilon}\right) + {1\over 2}\int^\Lambda\!{d^3k\over(2\pi)^3}\! 
\left(2\lambda v^2 - {\lambda^2v^4\over 2\varepsilon}\right) \label{re.3} \end{eqnarray*} The divergences in the last integral are removed by taking the counter-terms to be \begin{eqnarray} \delta\mu = \int^\Lambda\!{d^3k\over(2\pi)^3}\,2\lambda = {\lambda\over 3\pi^2}\Lambda^3 \label{re.4} \end{eqnarray} and \begin{eqnarray} \delta\lambda = \int^\Lambda\!{d^3k\over(2\pi)^3}{\lambda^2\over\varepsilon} = {\lambda^2\over\pi^2}m\Lambda \label{re.5} \end{eqnarray} These give the same renormalized parameters as obtained by Benson, Bernstein and Dodelson \cite{BBD}. Jackiw \cite{Jackiw} has also obtained the same renormalized coupling constant from considerations of bound states of non-relativistic particles in a $\delta$-function potential. The unrenormalized coupling constant $\lambda_0$ is thus given in terms of the cut-off by \begin{eqnarray} {1\over\lambda_0} = {1\over\lambda} - m{\Lambda\over\pi^2} \label{re.6} \end{eqnarray} For fixed, renormalized coupling $\lambda$ it increases with the cut-off. In the corresponding relativistic theory, this increase is logarithmic rather than linear. Having removed the divergences, we can now let the cut-off $\Lambda\rightarrow\infty$. We then have the finite result \begin{eqnarray} U_{eff}(v,\mu,T=0) & = & -{\mu\over 2}v^2 + {\lambda\over 4}v^4 \nonumber\\ & + & {1\over 2}\int\!{d^3k\over(2\pi)^3}\! \left(\omega - \varepsilon + \mu - 2\lambda v^2 + {\lambda^2v^4\over 2\varepsilon}\right) \label{re.7} \end{eqnarray} With definite values for the couplings $\mu$ and $\lambda$ it can be calculated by a numerical integration. As a function of the condensate value $v$ it has the same shape as in Fig. 1. It will have an imaginary part in the region below the classical minimum at $v^2 = \mu/\lambda$ whose physical significance has been discussed by Weinberg and Wu \cite{WW}.
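The two cut-off integrals (\ref{re.4}) and (\ref{re.5}) can be reproduced with a crude midpoint quadrature; a sketch with illustrative values of $\lambda$, $m$ and $\Lambda$:

```python
import math

def delta_mu(lam, Lam, n=100000):
    # integral of 2*lam over d^3k/(2*pi)^3 up to k = Lam
    dk, total = Lam / n, 0.0
    for i in range(n):
        k = (i + 0.5) * dk
        total += 4 * math.pi * k * k * 2 * lam * dk
    return total / (2 * math.pi)**3

def delta_lam(lam, m, Lam, n=100000):
    # integral of lam^2/eps with eps = k^2/(2m), up to k = Lam
    dk, total = Lam / n, 0.0
    for i in range(n):
        k = (i + 0.5) * dk
        total += 4 * math.pi * k * k * lam * lam * (2 * m / (k * k)) * dk
    return total / (2 * math.pi)**3

lam, m, Lam = 0.3, 1.0, 10.0
print(delta_mu(lam, Lam), lam * Lam**3 / (3 * math.pi**2))
print(delta_lam(lam, m, Lam), lam**2 * m * Lam / math.pi**2)
```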
\subsection{Condensate and pressure at zero temperature} Because of the quantum fluctuations, the minimum of the effective potential (\ref{re.7}) is shifted away from the classical value. In the new minimum the derivative $({\partial} U_{eff}/{\partial} v)_\mu = 0$. With \begin{eqnarray} \left({{\partial}\omega\over{\partial} v}\right)_\mu = {2\lambda v\over\omega}\,(2\varepsilon - 2\mu + 3\lambda v^2) \end{eqnarray} we find the minimum to be at the condensate value \begin{eqnarray} v_0^2 = {\mu\over\lambda} - \int\!{d^3k\over(2\pi)^3}\! \left({2\varepsilon - 2 \mu +3\lambda v^2\over\omega} - 2 + {\lambda v^2\over\varepsilon}\right) \label{re.9} \end{eqnarray} The integral represents here the effects of the quantum fluctuations. To this order in perturbation theory we can evaluate it using the classical value $v^2 = \mu/\lambda$ for the condensate. It then gives \begin{eqnarray} v_0^2 = {\mu\over\lambda} - \int\!{d^3k\over(2\pi)^3}\! \left({2\varepsilon +\mu\over\omega} +{\mu\over\varepsilon} - 2\right) \label{re.10} \end{eqnarray} where now the Bogoliubov frequency $\omega$ is given by (\ref{fluct.9}). We will in the next section show that the quantum shift of the classical minimum is due to the self energies of the interacting fields. These will effectively change the chemical potential from the classical value $\mu$ to a new value $\td{\mu}$ which is just $\lambda$ times the right-hand side of equation (\ref{re.10}) at zero temperature. With the above value for the condensate in the minimum of the effective potential, we find by insertion into (\ref{re.7}) the value for the thermodynamic pressure at zero temperature. To lowest order in the quantum correction, it can be written as \begin{eqnarray} P(\mu) & = & {\mu^2\over 4\lambda} - {1\over 2}\int\!{d^3k\over(2\pi)^3}\! \left(\omega - \varepsilon - \mu + {\mu^2\over 2\varepsilon}\right) \label{re.11} \end{eqnarray} The full density of particles is now given by the derivative $\rho={\partial} P/{\partial}\mu$, i.e.
\begin{eqnarray} \rho = {\mu\over 2\lambda} - {1\over 2}\int\!{d^3k\over(2\pi)^3}\! \left({\varepsilon\over\omega} + {\mu\over\varepsilon} - 1\right) \label{re.12} \end{eqnarray} Most of the particles are in the condensate with ${\bf k} = 0$. Their density $\rho_c = v_0^2/2$ follows directly from (\ref{re.10}). The density $\rho_e = \rho - \rho_c$ of particles in excited states with ${\bf k} \ne 0$ is thus \begin{eqnarray} \rho_e = {1\over 2}\int\!{d^3k\over(2\pi)^3}\! \left({\varepsilon + \mu\over \omega} - 1\right) \label{re.13} \end{eqnarray} It is caused by the hard-core repulsion between the particles and was also obtained by Benson, Bernstein and Dodelson \cite{BBD}. With the Bogoliubov dispersion relation (\ref{fluct.9}) for the excitation energy $\omega$, we can now evaluate the pressure in (\ref{re.11}). Besides elementary integrations, it involves the integral \begin{eqnarray} \int\!d\varepsilon \,\varepsilon\sqrt{\varepsilon + 2\mu} = {2\over 5}(\varepsilon + 2\mu)^{5/2} - {4\over 3}\mu (\varepsilon + 2\mu)^{3/2} \label{re.14} \end{eqnarray} By construction, we now get only a non-zero contribution from the lower limit $\varepsilon = 0$ of the integrals which gives \begin{eqnarray} P(\mu) = {\mu^2\over 4\lambda} - {8m^{3/2}\over 15\pi^2}\mu^{5/2} \label{re.15} \end{eqnarray} The last term represents the one-loop, quantum corrections to the classical result in the first term. This agrees with the original results of Lee and Yang \cite{LeeYang_1} who considered the same system of interacting bosons with a hard-core repulsion within the framework of quantum statistical mechanics. Equation (\ref{re.12}) enables us to relate the chemical potential to the particle density. 
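The closed form (\ref{re.15}) can be checked by evaluating the integral in (\ref{re.11}) numerically with the Bogoliubov dispersion (\ref{fluct.9}); a sketch with $m = \mu = 1$, where the accuracy is limited by the slowly decaying $1/k^2$ tail of the integrand:

```python
import math

def one_loop_correction(m, mu, k_max=2000.0, n=200000):
    # (1/2) * integral d^3k/(2*pi)^3 of (omega - eps - mu + mu^2/(2*eps))
    dk, total = k_max / n, 0.0
    for i in range(n):
        k = (i + 0.5) * dk
        eps = k * k / (2 * m)
        omega = (k / (2 * m)) * math.sqrt(k * k + 4 * m * mu)
        total += k * k * (omega - eps - mu + mu * mu / (2 * eps)) * dk
    return total / (4 * math.pi**2)

m, mu = 1.0, 1.0
val = one_loop_correction(m, mu)
print(val, 8 * m**1.5 * mu**2.5 / (15 * math.pi**2))
```

The quadrature reproduces the coefficient $8m^{3/2}\mu^{5/2}/15\pi^2$ of the quantum correction.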
The relation between the chemical potential and the density can be obtained more directly by just taking the derivative of (\ref{re.15}) with respect to $\mu$, \begin{eqnarray} \rho = {\mu\over 2\lambda} - {4\over 3\pi^2}(m\mu)^{3/2} \label{re.16} \end{eqnarray} The sign of the second term here is apparently opposite to what was obtained by Bernstein and Dodelson \cite{BD}. It must be negative, as found here, in order for the repulsion between the particles to increase the zero-temperature pressure rather than decrease it. By inversion, we then obtain for the chemical potential to lowest order in perturbation theory, \begin{eqnarray} \mu = 2\lambda\rho + {8\lambda\over 3\pi^2}(2m\lambda\rho)^{3/2} \label{re.17} \end{eqnarray} Similarly, from (\ref{re.13}) we find the density $\rho_e = (m\mu)^{3/2}/3\pi^2$ of particles in excited states. Expressed instead in terms of the full density, it is \begin{eqnarray} \rho_e = {1\over 3\pi^2}(2m\lambda\rho)^{3/2} = {8\rho\over 3}\sqrt{\rho a^3\over\pi} \label{re.18} \end{eqnarray} when the coupling constant $\lambda$ is replaced by the scattering length $a$. This is just the textbook result \cite{Huang}. With the chemical potential from (\ref{re.17}), we now get for the pressure (\ref{re.15}) \begin{eqnarray} P(\rho) & = &\lambda\rho^2 + {4m^{3/2}\over 5\pi^2}(2\lambda\rho)^{5/2} \label{re.19} \end{eqnarray} The corresponding energy density follows then from the Legendre transformation ${\cal E} = \mu\rho - P$ and is \begin{eqnarray} {\cal E}(\rho) = \lambda\rho^2 + {8m^{3/2}\over 15\pi^2}(2\lambda\rho)^{5/2} \label{re.19b} \end{eqnarray} It was first calculated by Lee and Yang \cite{LeeYang_2} using the binary collision method and also by Lee, Huang and Yang \cite{LHY} who used instead the pseudopotential method.
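The rewriting of $\rho_e$ in (\ref{re.18}) uses $\lambda = 2\pi a/m$ from (\ref{int.1}); the identity is easily confirmed numerically (with illustrative values of $m$, $a$ and $\rho$):

```python
import math

def rho_e_coupling(m, lam, rho):
    # (2*m*lam*rho)^{3/2} / (3*pi^2)
    return (2 * m * lam * rho)**1.5 / (3 * math.pi**2)

def rho_e_scattering(a, rho):
    # (8*rho/3) * sqrt(rho*a^3/pi)
    return (8 * rho / 3) * math.sqrt(rho * a**3 / math.pi)

m, a, rho = 1.0, 0.05, 2.0
lam = 2 * math.pi * a / m
print(rho_e_coupling(m, lam, rho), rho_e_scattering(a, rho))
```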
Usually, it is expressed in terms of the scattering length and takes then the form \begin{eqnarray} {\cal E} = {2\pi a\over m}\rho^2 \left[1 + {128\over 15}\sqrt{\rho a^3\over\pi}\right] \label{re.19c} \end{eqnarray} As a consistency check, we see that it reproduces the pressure (\ref{re.19}) using the standard definition $P = \rho^2 {\partial}({\cal E}/\rho)/{\partial}\rho$. \section{Effective potential at finite temperature} Ignoring the counter-terms, the effective potential in the one-loop approximation and at finite temperature was obtained in (\ref{fluct.8b}) as function of the condensate $v$. The excitation energy $\omega$ is now given by the general formula (\ref{fluct.7}). Taking the derivative with respect to $v$, we then find that the minimum of $U_{eff}$ is shifted to \begin{eqnarray} v^2 = {\mu\over\lambda} - \!\int\!{d^3k\over(2\pi)^3}{4\varepsilon - 4\mu + 6\lambda v^2\over\omega} \left[{1\over 2} + {1\over e^{\beta\omega} - 1}\right] \label{temp.3} \end{eqnarray} due to thermal fluctuations. Here, $\mu$ is a function of temperature for fixed total particle density $\rho$. The above equation then gives the condensate as an implicit function of temperature. The contribution from thermal fluctuations is finite. Therefore, the divergences are removed by the counter-terms already introduced at zero temperature. With thermal fluctuations present, $v^2$ may differ considerably from $\mu/\lambda$. From (\ref{fluct.7}) we see that the excitation energy no longer satisfies $\omega \propto k$ in the long-wavelength limit. In other words, at finite temperature when the classical field takes on a modified value, the Goldstone theorem seems to be violated in the one-loop approximation considered up to now. It is easy to see how the effective potential can be improved so that the Goldstone theorem is restored. 
At finite temperature there must be additional effects taken into consideration which change the thermodynamic chemical potential $\mu$ into an effective chemical potential $\td{\mu}$ so that the condensate is again given by $v^2 = \td{\mu}/\lambda$. This will give a linear dispersion relation at low energy. The minimum of the effective potential will then be at \begin{eqnarray} v^2 = v_0^2 - \int\!{d^3k\over(2\pi)^3}{4\varepsilon + 2\td{\mu}\over\omega} {1\over e^{\beta\omega} - 1} \label{temp.4} \end{eqnarray} where $v_0^2$ is the zero-temperature result (\ref{re.10}). This is now consistent with the Bogoliubov dispersion relation \begin{eqnarray} \omega = \sqrt{\varepsilon(\varepsilon + 2\td{\mu})} \label{temp.5} \end{eqnarray} which will be verified in the following where the ring-improved effective potential is derived. The value of the condensate decreases with increasing temperatures and becomes zero when $\td{\mu} = 0$. This defines the critical temperature for the system. At higher temperatures it is in the phase of unbroken symmetry where the dispersion relation simplifies to $\omega = \varepsilon - \td{\mu}$ as it is for a system of free particles. \subsection{Ring corrections to the effective potential} The chemical potential $\mu$ represents the energy of adding or removing a single particle from the system of interacting bosons, and plays the role of a self energy for the complex field $\psi$. We see from (\ref{fluct.4}) that in the symmetry-broken phase the two real modes of the field have self energies equal to $\mu - 3\lambda v^2$ and $\mu - \lambda v^2$, respectively, within the Gaussian approximation. Including the interactions in the full Lagrangian (\ref{fluct.2}) these values will be modified by radiative loop corrections. When the system is at a non-zero temperature, the additional contributions to the self energies will depend upon temperature. This will then enable us to define a new, temperature-dependent chemical potential $\td{\mu}$.
In the Gaussian approximation used in the previous section we found that the one-loop contribution to the effective potential (\ref{fluct.6}) followed directly from the inverse of the field propagator $D_{ab} = \ex{\chi_a\chi_b}$. After Fourier transformation we know from (\ref{fluct.4}) that it has the matrix form \begin{eqnarray} D_{ab}^{-1} = \left(\begin{array}{cc} e_{1{\bf k}} & -\omega_n \\ \omega_n & e_{2{\bf k}} \end{array}\right) \label{ring.1} \end{eqnarray} where $e_{1{\bf k}} = \varepsilon_{\bf k} - \mu + 3\lambda v^2$ and $e_{2{\bf k}} = \varepsilon_{\bf k} - \mu + \lambda v^2$. This lowest correction to the classical result is thus \begin{eqnarray} \Omega_1 = {1\over 2\beta}\mbox{Tr}\ln D^{-1} = {1\over 2\beta}\sum_{n,{\bf k}}\ln(\omega_n^2 + e_{1{\bf k}}e_{2{\bf k}}) \label{ring.4} \end{eqnarray} which gave the standard expression (\ref{fluct.8}). The matrix (\ref{ring.1}) is the inverse of the free propagator which is used in the simplest version of the one-loop approximation. It was pointed out a long time ago in connection with the effective potential for scalar, relativistic theories \cite{DJ,Weinberg} that self energy corrections to the field propagator gave important contributions to the free energy at finite temperatures. More recently these daisy corrections have been investigated in more detail \cite{Carrington} and are now often called ring corrections to the effective potential. We will here see that they also play a crucial role in the non-relativistic theory in saving the Goldstone theorem at non-zero temperatures. A technique which incorporates the use of resummed propagators in a self-consistent way is the effective action for composite operators \cite{CJT}. In the Appendix we show that this method leads to the same result as found here, to the order considered.
\begin{figure}[htb] \begin{center} \mbox{\psfig{figure=bec3Dfig2.ps,width=11cm,angle=0,height=1.5cm}} \end{center} \caption[A graphical representation of the Schwinger-Dyson equation. The thin and thick lines represent the free and interacting propagators, respectively.]{\footnotesize A graphical representation of the Schwinger-Dyson equation (\ref{ring.5a}). The thin and thick lines represent the free and interacting propagators, respectively.} \label{bec3D-fig:sd} \end{figure} Denoting the interacting propagator by $\bar{D}_{ab}$, it will in general satisfy the Schwinger-Dyson equation (\ref{prop.5}). If $\Pi_{ab}$ is the full, one-particle irreducible self energy, the equation then becomes \begin{eqnarray} \bar{D}_{ab} = D_{ab} + D_{ac}\Pi_{cd}\bar{D}_{db} \label{ring.5a} \end{eqnarray} as shown in Fig. 2. Using this propagator in the one-loop approximation, we get the modified contribution \begin{eqnarray} \bar{\Omega}_1 = {1\over 2\beta}\mbox{Tr}\ln \bar{D}^{-1} \label{ring.5b} \end{eqnarray} to the effective potential. Since we can write (\ref{ring.5a}) in the form \begin{eqnarray} \bar{D}_{ab}^{-1} = D_{ab}^{-1} - \Pi_{ab} \label{ring.6} \end{eqnarray} we have \begin{eqnarray} \bar{\Omega}_1 & = & {1\over 2\beta}\mbox{Tr}\ln D^{-1}(1 - D\Pi) \nonumber \\ & = & {\Omega}_1 - {1\over 2\beta}\mbox{Tr}\left[D\Pi + {1\over 2}(D\Pi)^2 + {1\over 3}(D\Pi)^3 + \cdots \right] \label{ring.7} \end{eqnarray} These additional contributions to the lowest order result are called ring corrections, as illustrated in Fig. 3. \begin{figure}[htb] \begin{center} \mbox{\psfig{figure=bec3Dfig3.ps,width=11cm,angle=0,height=2.2cm}} \end{center} \caption[A sketch of the lowest order ring diagrams.]{\footnotesize A sketch of the lowest order ring diagrams.} \label{bec3D-fig:ring} \end{figure} Perturbative calculations will in the following show that the off-diagonal self energies $\Pi_{12} = -\Pi_{21}$ vanish for zero external energy.
This follows in general from time-reversal invariance, which implies that the off-diagonal self energies must be odd in the Matsubara frequency $\omega_n$. The structure of the interacting propagator will thus be the same as for the free propagator (\ref{ring.1}) but with the diagonal elements changed to \begin{eqnarray} \bar{e}_{1{\bf k}} = \varepsilon_{\bf k} - \mu + 3\lambda v^2 - \Pi_{11} \label{ring.8} \end{eqnarray} and \begin{eqnarray} \bar{e}_{2{\bf k}} = \varepsilon_{\bf k} - \mu + \lambda v^2 - \Pi_{22} \label{ring.9} \end{eqnarray} The ring-corrected one-loop contribution to the effective potential (\ref{ring.5b}) is thus \begin{eqnarray} \bar{\Omega}_1 = {1\over 2\beta}\sum_{n,{\bf k}} \ln(\omega_n^2 + \bar{e}_{1{\bf k}}\bar{e}_{2{\bf k}}) \label{ring.10} \end{eqnarray} and will be of exactly the same form as the lowest order result (\ref{fluct.8}) except for the modified dispersion relation \begin{eqnarray} \bar{\omega}_{\bf k} = \sqrt{(\varepsilon_{\bf k} - \mu + 3\lambda v^2 - \Pi_{11}) (\varepsilon_{\bf k} - \mu + \lambda v^2 - \Pi_{22})} \label{ring.11} \end{eqnarray} for the elementary excitations above the ground state. The Goldstone theorem can now be satisfied also at non-zero temperatures if the chemical potential is related to the value of the field $v$ in the minimum of the effective potential by \begin{eqnarray} \mu = \lambda v^2 - \Pi_{22} \label{ring.12} \end{eqnarray} This is our form of the Pines-Hugenholtz relation \cite{PH} which is usually written in the complex field basis. A simple derivation of this theorem is given in \cite{SB}. Introducing the effective chemical potential \begin{eqnarray} \td{\mu} = \mu + \Pi_{22} \label{ring.13} \end{eqnarray} the expectation value of the field when the system is in thermal equilibrium is then given by $v = (\td{\mu}/\lambda)^{1/2}$ as it is at zero temperature. This will be demonstrated in the next section.
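The algebraic identity behind the ring resummation, $\mbox{Tr}\ln(D^{-1} - \Pi) = \mbox{Tr}\ln D^{-1} - \mbox{Tr}[D\Pi + {1\over 2}(D\Pi)^2 + \cdots]$ of (\ref{ring.7}), holds for any matrices with $\|D\Pi\| < 1$. A small numerical illustration in Python, with random matrices standing in for the propagator and self energy (all values arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)

# random symmetric positive-definite inverse propagator and a small self energy
A = rng.standard_normal((4, 4))
Dinv = A @ A.T + 4.0 * np.eye(4)      # plays the role of D^{-1}
D = np.linalg.inv(Dinv)
B = rng.standard_normal((4, 4))
Pi = 0.1 * (B + B.T)                  # small symmetric "self energy"

lhs = np.linalg.slogdet(Dinv - Pi)[1]       # Tr ln (D^{-1} - Pi)

# ring series: Tr ln D^{-1} - sum_n Tr[(D Pi)^n]/n
series = np.linalg.slogdet(Dinv)[1]
M = D @ Pi
term = M.copy()
for n in range(1, 30):
    series -= np.trace(term) / n
    term = term @ M

print(abs(lhs - series))   # tiny: the ring series resums the self energy insertions
```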
The dispersion relation (\ref{ring.11}) now becomes \begin{eqnarray} \bar{\omega}_{\bf k} = \sqrt{\varepsilon_{\bf k}(\varepsilon_{\bf k} + 2\td{\mu} + \Pi_{22} - \Pi_{11})} \label{ring.14} \end{eqnarray} and is by construction linear in the momentum $k$ in the long-wavelength limit. When the self energies are calculated in the following to the lowest order in perturbation theory, the difference $\Pi_{22} - \Pi_{11}$ will be negligible. The dispersion relation at finite temperatures is thus the same as at zero temperature (\ref{fluct.9}) when the chemical potential $\mu$ is replaced by $\td{\mu}$, i.e. it has the desired form (\ref{temp.5}). Similarly, the inverse full propagator (\ref{ring.6}) has the form \begin{eqnarray} \bar{D}_{ab}^{-1} = \left(\begin{array}{cc} \varepsilon_{\bf k} + 2\td{\mu} & -\omega_n \\ \omega_n & \varepsilon_{\bf k} \end{array}\right) \label{ring.15} \end{eqnarray} if the Goldstone theorem is to be satisfied. \begin{figure}[h] \begin{center} \mbox{\psfig{figure=bec3Dfig4.ps,width=11cm,angle=0,height=2cm}} \end{center} \caption[Four-point vertices. Solid lines represent the $D_{11}$-propagator and dashed lines the $D_{22}$-propagator.]{\footnotesize Four-point vertices. Solid lines represent the $D_{11}$-propagator and dashed lines the $D_{22}$-propagator.} \label{bec3D-fig:4ver} \end{figure} \subsection{One-loop contributions to the self energies} The interacting propagators and the corresponding self energies can be obtained from the full partition function $\Xi$ of the system. It is given by the functional integral (\ref{fluct.5}). In the exponent we see that there is the usual coupling $(-\lambda)$ between four excitations $\chi$ as shown in Fig. 4 but also a new coupling of magnitude $(-\lambda v)$ between three excitations due to the presence of the condensate $\ex{\psi} = v/\sqrt{2}$. Since we have chosen this expectation value to be real, this latter coupling will generate only the two vertices shown in Fig. 5.
\begin{figure}[htb] \begin{center} \mbox{\psfig{figure=bec3Dfig5.ps,width=6cm,angle=0,height=2cm}} \end{center} \caption[Three-point vertices due to the non-zero condensate.]{\footnotesize Three-point vertices due to the non-zero condensate.} \label{bec3D-fig:3ver} \end{figure} Separating out the free partition function $\Xi_0$, we will here calculate the part $\Xi_1 = \Xi/\Xi_0$ due to the interactions in lowest order of perturbation theory, i.e. including terms to order $\lambda$. Since the chemical potential is now defined to be $\td{\mu} = \lambda v^2$, we must include all diagrams where the four-excitation coupling occurs once or the three-excitation coupling occurs twice. Only connected diagrams will contribute to $\ln{\Xi_1}$ and they are given with the corresponding combinatorial factors in Fig. 6. The lines in the diagrams represent here the free propagator $D_{ab}$ given in (\ref{ring.1}). \begin{figure}[htb] \begin{center} \mbox{\psfig{figure=bec3Dfig6.ps,width=14cm,angle=0,height=12cm}} \end{center} \caption[Two- and three-loop diagrams contributing to $\ln\Xi_1$. Only two-loop diagrams are considered here.]{\footnotesize Two- and three-loop diagrams contributing to $\ln\Xi_1$. Only two-loop diagrams are considered here. The lines represent the free propagator $D_{ab}$.} \label{bec3D-fig:lnxi} \end{figure} We can now derive the full propagator $\bar{D}_{ab} = \ex{\chi_a\chi_b}$ from the partition function $\Xi$ by taking the derivative with respect to the free propagator $D_{ab}$ as one can see from the functional integral (\ref{fluct.5}). One then finds \begin{eqnarray} \bar{D}_{ab} = -2\frac{\delta\ln\Xi}{\delta D^{-1}_{ab}} = - 2\frac{\delta\ln\Xi_0}{\delta D^{-1}_{ab}} + 2D_{ca}D_{bd}\frac{\delta\ln\Xi_1}{\delta D_{cd}} \label{self.1} \end{eqnarray} The first term is just the free propagator while the last term gives the self energy. In terms of diagrams, the derivative is obtained directly from the bubble diagrams in Fig.
6 by opening up the corresponding lines in all of the loops. For example, $\bar{D}_{11}$ is given by the diagrams shown in Fig. 7. \begin{figure}[htb] \begin{center} \mbox{\psfig{figure=bec3Dfig7.ps,width=12cm,angle=0,height=12cm}} \end{center} \caption[The lowest order diagrams contributing to the interacting propagator $\bar{D}_{11}$. Crosses indicate that the external propagators should be included.]{\footnotesize The lowest order diagrams contributing to the interacting propagator $\bar{D}_{11}$. Crosses indicate that the external propagators should be included. The lines represent the free propagator $D_{ab}$.} \label{bec3D-fig:D11} \end{figure} \begin{figure}[hbt] \begin{center} \mbox{\psfig{figure=bec3Dfig8.ps,width=11cm,angle=0,height=8cm}} \end{center} \caption[Self energies to lowest order in the loop expansion. The external propagators should not be included.]{\footnotesize Self energies to lowest order in the loop expansion. The external propagators should not be included.} \label{bec3D-fig:Pi} \end{figure} By cutting off the external lines in these diagrams, we can read off the different self energies $\Pi_{cd}$ shown in Fig. 8. This is equivalent to using (\ref{ring.5a}) in lowest order of perturbation theory where it gives $\bar{D}_{ab} = D_{ab} + D_{ac}\Pi_{cd}D_{db}$ from which $\Pi_{cd}$ can be isolated. An infinite set of higher loop contributions to the free energy can now be obtained by replacing the free propagator $D_{ab}$ in the diagrams in Fig. 6 with the full propagator $\bar{D}_{ab}$. This iteration then generates diagrams which correspond to the ``super daisies'' of Dolan and Jackiw \cite{DJ}. They have more recently been investigated in more detail by Carrington \cite{Carrington} and others \cite{AE}. In this improved perturbation theory the self energies in Fig. 8 are then to be calculated with the full propagators.
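At the level of matrices, the iteration just described is the geometric-series solution of the Schwinger-Dyson equation (\ref{ring.5a}): repeated substitution of $\bar{D} = D + D\Pi\bar{D}$ into itself converges to $(D^{-1} - \Pi)^{-1}$. A short Python illustration with arbitrary random matrices in place of $D$ and $\Pi$:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))
Dinv = A @ A.T + 3.0 * np.eye(3)     # free inverse propagator D^{-1}
D = np.linalg.inv(Dinv)
B = rng.standard_normal((3, 3))
Pi = 0.2 * (B + B.T)                 # self energy insertion

# iterate the Schwinger-Dyson equation  Dbar = D + D Pi Dbar
Dbar = D.copy()
for _ in range(200):
    Dbar = D + D @ Pi @ Dbar

exact = np.linalg.inv(Dinv - Pi)     # the resummed propagator of (ring.6)
print(np.max(np.abs(Dbar - exact)))  # essentially zero
```

The iteration converges because $\|D\Pi\| < 1$ for these values; physically this corresponds to the self energy being a small correction to the free inverse propagator.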
In order for the Goldstone theorem to be satisfied, we know that $\bar{D}_{ab}$ must have the form (\ref{ring.15}) involving the effective chemical potential $\td{\mu}$. If this approximation method is consistent, the unknown function $\td{\mu}(T)$ can then be determined from the Pines-Hugenholtz relation (\ref{ring.13}), which now constitutes a gap equation. There are other diagrams in addition to those given in Fig. 6 and Fig. 7, but they all vanish. Tadpole diagrams, in which a loop is attached to the rest of the diagram by a single propagator line, are zero because $\ex{\chi_1} = 0 = \ex{\chi_2}$. Diagrams containing a $\bar{D}_{12}$-loop are odd in the Matsubara frequency and thus vanish upon summation. The perturbative contributions to the diagonal self energies are non-zero. They depend in general on external momentum and energy. We have assumed a spatially constant condensate and can thus take the external momenta in the self energies to be zero. We will also take the external energies to be zero. This is obviously an approximation. At sufficiently high temperature, the zero-energy Matsubara modes should dominate the partition function and the approximation should then be reasonable.
Especially for $\Pi_{22}$ which we will need in the Pines-Hugenholtz relation (\ref{ring.13}), the approximation should be good because the pole in the corresponding real-time propagator is at zero energy. The off-diagonal self energies $\Pi_{12}$ and $\Pi_{21}$ in Fig. 8 both vanish at zero external energy. As earlier stated, this follows to all orders in perturbation theory from time-reversal invariance. From the diagrams in Fig. 8 we then find the self energy \begin{eqnarray} \Pi_{11} & = & {1\over\beta}\sum_n\int{d^3k\over(2\pi)^3} \left[3(-\lambda)\bar{D}_{11}(k) + (-\lambda)\bar{D}_{22}(k) + 18(-\lambda v)^2\bar{D}_{11}(k)\bar{D}_{11}(-k) \right. \nonumber \\ & + & \left.2(-\lambda v)^2 \bar{D}_{22}(k)\bar{D}_{22}(-k)+ 12(-\lambda v)^2 \bar{D}_{12}(k)\bar{D}_{12}(-k)\right] \end{eqnarray} where we use $k \equiv (\omega_n,{\bf k})$. With the full propagators from (\ref{ring.15}) this can be simplified to \begin{eqnarray} \Pi_{11} = - {\lambda\over\beta}\sum_n\int{d^3k\over(2\pi)^3} \left[{4\varepsilon + 2\td{\mu}\over\omega^2 + \omega_n^2} - {2\lambda v^2(10\varepsilon^2 + 4\varepsilon\td{\mu}+ 4\td{\mu}^2 - 6\omega_n^2) \over(\omega^2 + \omega_n^2)^2}\right] \label{pi.4} \end{eqnarray} where $\omega^2 = \varepsilon(\varepsilon + 2\td{\mu})$. The Matsubara summations are now easy to do. From the standard sum \begin{eqnarray} \sum_{n=-\infty}^\infty {1\over \omega^2 + \omega_n^2} = {\beta\over\omega}\left[{1\over 2} + {1\over e^{\beta\omega} - 1}\right] \end{eqnarray} follows directly by differentiation \begin{eqnarray} \sum_{n=-\infty}^\infty {\omega^2\over (\omega^2 + \omega_n^2)^2} = {\beta\over 2\omega}\left[{1\over 2} + {1\over e^{\beta\omega} - 1} + \beta\omega{e^{\beta\omega}\over \left(e^{\beta\omega} - 1\right)^2}\right] \end{eqnarray} With $\omega_n^2$ instead of $\omega^2$ in the numerator on the left-hand side, we get the same result except for an opposite sign in front of the last term. 
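The standard Matsubara sum quoted above is easily checked numerically. A minimal Python sketch (truncated sum, illustrative values $\beta = 1$, $\omega = 3/2$; convergence is only $\propto 1/n$ in the partial sums of $1/n^2$ terms, so a large cutoff is used):

```python
import math

def matsubara_sum(omega, beta=1.0, nmax=200000):
    """Truncated bosonic sum  sum_n 1/(omega^2 + omega_n^2),  omega_n = 2*pi*n/beta."""
    s = 1.0 / omega**2
    for n in range(1, nmax + 1):
        wn = 2.0 * math.pi * n / beta
        s += 2.0 / (omega**2 + wn**2)        # +/- n paired together
    return s

def closed_form(omega, beta=1.0):
    """(beta/omega) * (1/2 + n_B(omega)) with the Bose occupation factor n_B."""
    return (beta / omega) * (0.5 + 1.0 / math.expm1(beta * omega))

omega = 1.5
print(matsubara_sum(omega), closed_form(omega))   # agree to high accuracy
```

Differentiating the closed form with respect to $\omega^2$ reproduces the second sum in the text, including the extra $\beta\omega\,e^{\beta\omega}/(e^{\beta\omega}-1)^2$ term.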
The zero-temperature self energy is then found to be \begin{eqnarray} \Pi_{11}(T=0) &=& -\lambda\int{d^3k\over(2\pi)^3}\left[{2\varepsilon + 4\td{\mu}\over\omega} - {\td{\mu}(5\varepsilon^2 + 2\varepsilon\td{\mu} + 2\td{\mu}^2)\over\omega^3}\right]\nonumber\\ &=&-\lambda\int{d^3k\over(2\pi)^3}\left[{2\varepsilon + \td{\mu}\over\omega} - {2\td{\mu}(\varepsilon - \td{\mu})^2\over\omega^3}\right] \label{pi.5} \end{eqnarray} In the last line we have regrouped the terms to show the difference between $\Pi_{11}$ and $\Pi_{22}$ found below. The leading divergences will be cancelled by the counter-terms. This will explicitly be demonstrated in the case of $\Pi_{22}$ which we evaluate next. From the diagrams in Fig. 8 it is given as \begin{eqnarray} \Pi_{22}& = & {1\over\beta}\sum_n\int{d^3k\over(2\pi)^3} \left[3(-\lambda)\bar{D}_{22}(k) + (-\lambda)\bar{D}_{11}(k)\right.\nonumber \\ & + & \left.4(-\lambda v)^2\bar{D}_{11}(k)\bar{D}_{22}(-k) + 4(-\lambda v)^2 \bar{D}_{12}(k)\bar{D}_{12}(-k)\right] \nonumber \\ & = & - {\lambda\over\beta}\sum_n\int{d^3k\over(2\pi)^3} \left[{4\varepsilon + 6\td{\mu}\over\omega^2 + \omega_n^2} - {4\td{\mu}(\varepsilon^2 + 2\varepsilon\td{\mu} + \omega_n^2) \over(\omega^2 + \omega_n^2)^2}\right]\nonumber\\ & = & - {\lambda\over\beta}\sum_n\int{d^3k\over(2\pi)^3} {4\varepsilon + 2\td{\mu}\over\omega^2 + \omega_n^2} \end{eqnarray} A summation over Matsubara frequencies gives at non-zero temperature \begin{eqnarray} \Pi_{22}(T) = -\lambda\int{d^3k\over(2\pi)^3}{4\varepsilon + 2\td{\mu}\over\omega} \left[{1\over 2} + {1\over e^{\beta\omega} - 1}\right] \label{pi.8} \end{eqnarray} The first part of the integral, which represents the zero-temperature self energy, is ultraviolet divergent. However, from (\ref{ct.1}) we see that we should add in the counter-terms $\delta\mu - v^2\delta\lambda$.
Taking these from (\ref{re.4}) and (\ref{re.5}), with the replacement $\mu\rightarrow\td{\mu}$, we obtain the renormalized and finite result \begin{eqnarray} \Pi_{22}(T) = - \lambda\int{d^3k\over(2\pi)^3}\left[{2\varepsilon + \td{\mu}\over\omega} + {\td{\mu}\over\varepsilon} - 2 + {4\varepsilon + 2\td{\mu}\over\omega} {1\over e^{\beta\omega} - 1}\right] \label{pi.10} \end{eqnarray} Similarly, an expansion for high momenta of the zero-temperature part of $\Pi_{11}$ in (\ref{pi.5}) shows that the counter-terms $\delta\mu - 3v^2\delta\lambda$ introduced in (\ref{ct.1}), again with $\mu\rightarrow\td{\mu}$, exactly cancel the ultraviolet divergences in $\Pi_{11}$. The assumption that the Goldstone theorem is fulfilled, i.e. the Pines-Hugenholtz relation (\ref{ring.12}), can now be shown to be equivalent to the minimization criterion (\ref{temp.4}) for a one-loop potential calculated with the full propagator $\bar{D}_{ab}$. Combining (\ref{re.10}) with $\mu \rightarrow \td{\mu}$ in the one-loop term and (\ref{temp.4}) we find \begin{eqnarray} v^2 &=& \frac{\mu}{\lambda} - \int{d^3k\over(2\pi)^3}\left[{2\varepsilon + \td{\mu}\over\omega} + {\td{\mu}\over\varepsilon} - 2 + {4\varepsilon + 2\td{\mu}\over\omega} {1\over e^{\beta\omega} - 1}\right]\nonumber\\ &=& \frac{\mu}{\lambda} + \frac{1}{\lambda}\Pi_{22}(T) \label{pi.11} \end{eqnarray} when making use of the result (\ref{pi.10}). This is the same as equation (\ref{ring.12}) which embodies the Goldstone theorem. As mentioned at the beginning of this section, the self energies are calculated using the dispersion relation $\omega = \sqrt{\varepsilon(\varepsilon + 2\td{\mu})}$, i.e. neglecting terms of ${\cal O}(\Pi_{11} - \Pi_{22})$.
Using the renormalized self energies, we see that the difference at zero temperature is \begin{eqnarray} \Pi_{11} - \Pi_{22} = 2\lambda\mu\int{d^3k\over(2\pi)^3}\left[{(\varepsilon - \mu)^2\over\omega^3} - {1\over\varepsilon}\right] \label{pi.12} \end{eqnarray} The integral is finite at high momenta due to the counter-term, but is infrared divergent. This is caused by the exchange diagram in $\Pi_{11}$ with two $\bar{D}_{22}$-lines which diverges as external energy and momenta are taken to zero. The infrared divergence signals the onset of long-distance effects which are not properly handled by the present one-loop approximation. As discussed by Kapusta \cite{Kapusta_2}, one can cure the divergence using a non-zero external energy. However, this is not a problem here. At low temperature, the self energies are of ${\cal O}(\lambda\mu^{3/2})$ and will be neglected altogether, being small compared to $\mu$. On the other hand, at high temperature, i.e. near the critical temperature $T_C$, $\Pi_{22}$ is comparable to $\mu$ and the self energies must be included. In this regime we can use the high-temperature limits of $\Pi_{11}$ and $\Pi_{22}$ which below are shown to be equal to leading order. Thus, in the temperature range where it should be considered, $\Pi_{11} - \Pi_{22} = 0$. At high temperatures $T\gg\td{\mu}$ the exchange diagrams may be neglected. The dominant terms in the remaining diagrams go like $\varepsilon/(\omega(e^{\beta\omega}-1))$, and the combinatorial prefactors are the same for the two self energies. Expanding (\ref{pi.4}) and (\ref{pi.10}) in powers of $\td{\mu}/T$ we find to leading order \begin{eqnarray} \Pi_{22}(T) = \Pi_{11}(T) = -4\lambda\zeta{(3/2)}\left({mT\over 2\pi}\right)^{3/2} \label{pi.13} \end{eqnarray} This result will be found if one uses free propagators $D_{ab}$ in the self energy loops since $\td{\mu}$ can be neglected to leading order. In the opposite limit $T\ll\td{\mu}$, we can take $\td{\mu}$ approximately equal to $\mu$.
Diagrams with a $\bar{D}_{11}$-loop are suppressed by a factor $T^2/\mu^2$ compared to the others. We then obtain for the leading terms of the self energy \begin{eqnarray} \Pi_{22}(T) = -\frac{10\lambda}{3\pi^2}(m\mu)^{3/2} - \frac{\lambda T^2m^{3/2}}{6\mu^{1/2}} \label{pi.14} \end{eqnarray} The first part is seen to be of the order ${\cal O}(\sqrt{\rho a^3})$ smaller than the chemical potential and the second term even smaller by a factor $(T/\td{\mu})^2$. To leading order, we can thus neglect the self energies at these low temperatures and set $\td{\mu} = \mu$. Thus, the effects of ring corrections on the thermodynamics of the system will only appear at high temperature. \subsection{Condensate and pressure at finite temperature} Including the effects of ring corrections, we can now write down the renormalized finite temperature effective potential \begin{eqnarray} U_{eff}(v,\mu ,T) &=& -\frac{\mu}{2}v^2 + \frac{\lambda}{4}v^4 + {1\over 2}\int\!{d^3k\over(2\pi)^3}\!\left(\omega - \varepsilon + \td{\mu} - 2\lambda v^2 + {\lambda^2v^4\over 2\varepsilon}\right) \nonumber\\ &+& T \int\!{d^3k\over(2\pi)^3}\!\ln(1 - e^{-\beta\omega}) \label{eff.1} \end{eqnarray} From (\ref{pi.11}) we know that it has a minimum at $v = (\td{\mu}/\lambda)^{1/2}$. In the minimum we then have the pressure \begin{eqnarray} P(\mu ,T) &=& -U_{eff}(\mu ,T) = \frac{1}{4\lambda}(\mu^2 - \Pi_{22}^2) - {1\over 2}\int\!{d^3k\over(2\pi)^3}\!\left[\omega - \varepsilon - \td{\mu} + \frac{\td{\mu}^2}{2\varepsilon}\right]\nonumber\\ &-& T\int\!{d^3k\over(2\pi)^3}\!\ln(1 - e^{-\beta\omega}) \label{eff.2} \end{eqnarray} The total density of particles is again found by using $\rho = \partial P/\partial\mu$ which gives \begin{eqnarray} \rho &=& {\mu\over 2\lambda} - {1\over 2}\int\!{d^3k\over(2\pi)^3}\! \left[{\varepsilon\over\omega} + {\td{\mu}\over\varepsilon} - 1 + \frac{2\varepsilon}{\omega(e^{\beta\omega}-1)}\right] \nonumber\\ &=& {\td{\mu}\over 2\lambda} + {1\over 2}\int\!{d^3k\over(2\pi)^3}\! 
\left[{\varepsilon + \td{\mu}\over\omega} - 1 + \frac{2(\varepsilon + \td{\mu})}{\omega(e^{\beta\omega}-1)}\right] \label{eff.3} \end{eqnarray} while the density of particles in the condensate is \begin{eqnarray} \rho_c = \frac{v^2}{2} = \frac{\td{\mu}}{2\lambda} = \frac{\mu}{2\lambda} -{1\over 2}\int\!{d^3k\over(2\pi)^3}\!\left[\frac{2\varepsilon + \td{\mu}}{\omega} + \frac{\td{\mu}}{\varepsilon} - 2 + \frac{4\varepsilon + 2\td{\mu}}{\omega(e^{\beta\omega}-1)}\right] \label{eff.4} \end{eqnarray} The difference \begin{eqnarray} \rho_e = {1\over 2}\int\!{d^3k\over(2\pi)^3}\!\left[\frac{\varepsilon + \td{\mu}}{\omega}\left(1 + \frac{2}{e^{\beta\omega}-1}\right) -1 \right] \label{eff.5b} \end{eqnarray} represents the density of particles in excited states with ${\bf k}\neq 0$. Except for the replacement of $\mu$ with $\td{\mu}$, this result agrees with what was obtained in \cite{BBD}. Now both the hard-core repulsion and thermal fluctuations cause the excitation of particles from the condensate. With increasing temperatures more and more particles are in excited states and at a critical temperature where $\td{\mu} = 0$, the condensate becomes zero. We then have a phase transition to the normal phase. In the following we will see that this critical temperature is the same as for a free Bose gas to the accuracy we are working. Equation (\ref{eff.3}) gives upon inversion the chemical potential as a function of the particle density. The first integral on the right-hand side was done in Section 3.4, and we can now simply replace $\mu$ with $\td{\mu}$ in Eq. (\ref{re.14}). The second integral \begin{eqnarray} I_T \equiv \int\!{d^3k\over(2\pi)^3}\frac{\varepsilon}{\omega(e^{\beta\omega}-1)} \label{eff.6} \end{eqnarray} must be done numerically. However, in the two important temperature ranges we can easily find good analytical approximations. At low temperature we again take $\td{\mu}\simeq\mu$. 
The dominant contributions to the integral come from small values of $\varepsilon$ and we can thus take $\varepsilon + 2\mu\simeq 2\mu$ in the denominator. Then we have the result \begin{eqnarray} I_T = \frac{\pi^2m^{3/2}T^4}{60\mu^{5/2}} \label{eff.7} \end{eqnarray} Since the loop corrections are small, we can set $\mu = 2\lambda\rho$ on the right-hand side of (\ref{eff.3}). Inversion then trivially gives \begin{eqnarray} \mu = 2\lambda\rho + \frac{8\lambda}{3\pi^2}(2m\lambda\rho)^{3/2} + \frac{\lambda\pi^2 m^{3/2}T^4}{30(2\lambda\rho)^{5/2}} \label{eff.8} \end{eqnarray} For comparison, the expression for the effective chemical potential takes the form \begin{eqnarray} \td{\mu} = 2\lambda\rho_c = 2\lambda\rho - \frac{2\lambda}{3\pi^2}(2m\lambda\rho)^{3/2} - \frac{\lambda m^{3/2}T^2}{6\sqrt{2\lambda\rho}} \end{eqnarray} Thus, at low $T$, the chemical potential {\em increases} with temperature, while the {\em effective} chemical potential, and thus the condensate density, decrease with temperature. This temperature dependence is the same as that obtained in the Bogoliubov approximation \cite{FW}. The pressure now follows from Eq. (\ref{eff.2}). As already explained, we can neglect the contribution coming from the self energy. Thus \begin{eqnarray} P = \frac{\mu^2}{4\lambda} - \frac{8m^{3/2}}{15\pi^2}\mu^{5/2} + P_T \label{eff.9} \end{eqnarray} where the temperature dependence is in the function \begin{eqnarray} P_T = -T\int\!{d^3k\over(2\pi)^3}\!\ln(1 - e^{-\beta\omega}) = \frac{\pi^2 m^{3/2}T^4}{90\mu^{3/2}} \label{eff.10} \end{eqnarray} evaluated in the same approximation as above.
The pressure at low temperatures is therefore \begin{eqnarray} P(\rho) = \lambda\rho^2 + \frac{4m^{3/2}}{5\pi^2}(2\lambda\rho)^{5/2} + \frac{m^{3/2}\pi^2}{90}\frac{T^4}{(2\lambda\rho)^{3/2}} \label{eff.11} \end{eqnarray} At non-zero temperature we can obtain the energy density from the thermodynamic relation \begin{eqnarray} {\cal E} = -\frac{\partial}{\partial\beta}(\beta P)_{\beta\mu} \end{eqnarray} which now gives \begin{eqnarray} {\cal E} = \frac{2\pi a}{m}\rho^2\left[1 + \frac{128}{15}\sqrt{\frac{a^3\rho}{\pi}}\,\right] + \frac{m^{3/2}}{240\pi}\frac{T^4}{\rho}\sqrt{\frac{\pi}{a^3\rho}} \label{eff.12} \end{eqnarray} It is seen that the effect of non-zero temperature comes in as a $T^4$ contribution as in the Stefan-Boltzmann law for photons. In that case the excitations are massless because of gauge invariance, while here they are Goldstone bosons resulting from a broken, continuous symmetry. In the opposite temperature range near the critical temperature $T_C$, we must include the effects of the self energies. They have previously been obtained in (\ref{pi.13}) which gives for the effective chemical potential (\ref{ring.13}) \begin{eqnarray} \td{\mu} = \mu - 4\lambda\zeta{(3/2)}\left({mT\over 2\pi}\right)^{3/2} \label{eff.14} \end{eqnarray} This quantity also enters the total density of particles in (\ref{eff.3}) where we can now set $\td{\mu} = 0$ to leading order. The temperature-dependent integral (\ref{eff.6}) then simplifies to \begin{eqnarray} I_T = \zeta{(3/2)}\left({mT\over 2\pi}\right)^{3/2} \label{eff.13} \end{eqnarray} Since the chemical potential is now $\mu = 2\lambda(\rho + I_T)$, we have the relation \begin{eqnarray} \mu = 4\lambda\rho - \td{\mu} \end{eqnarray} for temperatures just below $T_C$. The critical temperature is defined by $\td{\mu} = 0$.
Here the chemical potential takes the value $\mu_C = 4\lambda\rho$, and hence (\ref{eff.14}) gives \begin{eqnarray} T_C = \frac{2\pi}{m}\left(\frac{\mu_C}{4\lambda\zeta{(3/2)}}\right)^{2/3} = \frac{2\pi}{m}\left(\frac{\rho}{\zeta{(3/2)}}\right)^{2/3} \label{eff.17} \end{eqnarray} which is just the textbook result \cite{Huang}. As seen in Fig. 9, the thermodynamic chemical potential $\mu$ increases continuously from zero to $\mu_C$ at the critical temperature. On the other hand, the effective chemical potential $\td{\mu}$ decreases smoothly from zero temperature and is by definition zero at $T_C$. \begin{figure}[htb] \begin{center} \mbox{\psfig{figure=bec3Dfig9.ps,width=11cm,angle=270,height=8cm}} \end{center} \caption[The thermodynamic and effective chemical potentials, $\mu$ and $\td{\mu}$ plotted as functions of temperature for $ g=0.1$ and $m=1$.]{\footnotesize The thermodynamic and effective chemical potentials, $\mu$ and $\td{\mu}$ plotted as functions of temperature for $ g=0.1$ and $m=1$. The results were obtained by matching the low and high temperature expressions for the chemical potentials.} \label{bec3D-fig:mu} \end{figure} In the general formula (\ref{eff.2}) we must now keep the contributions from the self energies when we calculate the pressure for temperatures just below the critical temperature. Thus \begin{eqnarray} P = \frac{1}{4\lambda}(\mu^2 - \Pi_{22}^2) + \left(\frac{m}{2\pi}\right)^{3/2}T^{5/2}\zeta(5/2) - \left(\frac{mT}{2\pi}\right)^{3/2}\zeta(3/2)(\mu + \Pi_{22}) + {\cal O}(\td{\mu}^2) \end{eqnarray} with $\Pi_{22}$ given in (\ref{pi.13}). In order to compare with results in the literature, we write the pressure as a function of the particle density \begin{eqnarray} P = \frac{2\pi}{m}a\rho^2 - \left(\frac{m}{2\pi}\right)^2\zeta(3/2)^2aT^3 + \left(\frac{m}{2\pi}\right)^{3/2}\zeta(5/2)T^{5/2} \end{eqnarray} Comparing with the result of Lee and Yang \cite{LeeYang_3}, we find that the second term has the wrong sign. 
The reason is the following: An expansion of the ring-corrected potential in powers of $\lambda$ shows that the two-loop contribution is counted twice. Higher order diagrams are reproduced correctly. We must correct for this double counting by subtracting the two-loop contribution, as given in perturbation theory, from the obtained result. This contribution is given by the three uppermost diagrams in Fig. 6. Near $T_C$ the propagators are equal to lowest order, $D_{11} = D_{22} = \varepsilon/(\varepsilon^2 + \omega_n^2)$. The total contribution is $-2(m/2\pi)^2\zeta(3/2)^2aT^3$. Subtracting this contribution, we thus have \begin{eqnarray} P = \frac{2\pi}{m}a\rho^2 + \left(\frac{m}{2\pi}\right)^2\zeta(3/2)^2aT^3 + \left(\frac{m}{2\pi}\right)^{3/2}\zeta(5/2)T^{5/2} \label{eff.20} \end{eqnarray} which now agrees with \cite{LeeYang_3}. This result also comes out in the mean field approximation \cite{Huang,Anne}. The equation for the chemical potential is not modified by this correction at the present order of accuracy. In the same approximation we also find the energy density \begin{eqnarray} {\cal E}= \frac{2\pi}{m}a\rho^2 -\sqrt{\frac{m}{2\pi}}\zeta(3/2)^2a\rho T^{3/2} + 2\left(\frac{m}{2\pi}\right)^2\zeta(3/2)^2aT^3 + \frac{3}{2}\left(\frac{m}{2\pi}\right)^{3/2}\zeta(5/2)T^{5/2} \label{eff.22} \end{eqnarray} At the critical temperature these expressions reduce to \begin{eqnarray} P(T_C)= \frac{2\pi}{m}\zeta(3/2)^{-5/3}\zeta(5/2)\rho^{5/3} + \frac{4\pi}{m}a\rho^2 \label{eff.23} \end{eqnarray} and \begin{eqnarray} {\cal E}(T_C) = \frac{3\pi}{m}\zeta(3/2)^{-5/3}\zeta(5/2)\rho^{5/3} + \frac{4\pi}{m}a\rho^2 \label{eff.24} \end{eqnarray} which in the following are shown to equal the limits taken in the normal state, ensuring continuity at the critical temperature. The special treatment at two-loop order could of course have been introduced at an earlier stage, but there would be no effects on the results found so far.
When minimizing the effective ring-corrected potential we discarded contributions from $\partial\Pi/\partial v$, being of ${\cal O}(\lambda^2)$. Since the contribution from differentiation of the two-loop potential is of the same order, this must be discarded as well, and the minimization criterion remains unchanged. In the Appendix we briefly discuss how this is incorporated in the effective action for composite operators. \subsection{Equation of state in the normal phase} At temperatures above $T_C$, the condensate density is zero and the effective potential has its only minimum at $v=0$. The modified dispersion relation then reads ${\omega = \varepsilon - \td{\mu}}$ with effective chemical potential $\td{\mu} = \mu +\Pi_{22} < 0$. Since $v=0$, the exchange diagrams vanish and the expressions for the self energies are \begin{eqnarray} \Pi_{11} = \Pi_{22} \equiv \Pi &=& {1\over\beta}\sum_n\int{d^3k\over(2\pi)^3} \left[3(-\lambda)\bar{D}_{22}(k) + (-\lambda)\bar{D}_{11}(k)\right] + \mbox{c.t.}\nonumber\\ &=& -4\lambda\int\frac{d^3k}{(2\pi)^3}\frac{1}{e^{\beta\omega} - 1} \label{eos.1} \end{eqnarray} The Bose integral is standard and the self energy becomes \begin{eqnarray} \Pi = -4\lambda\left(\frac{mT}{2\pi}\right)^{3/2}\mbox{Li}_{3/2}(e^{\beta\td{\mu}}) \end{eqnarray} We have here introduced the polylogarithmic function \begin{eqnarray} \mbox{Li}_n(x) = \sum_{k=1}^{\infty}\frac{x^k}{k^n} \label{eos.2} \end{eqnarray} with $\mbox{Li}_n(1) = \zeta(n)$. Near $T_C$ we may approximate the exponential by unity and at $T_C$ the result (\ref{pi.13}) is reproduced. To the present accuracy we may set $\td{\mu} = \mu$ in the self energy.
With vanishing condensate the classical and zero point terms do not contribute to the pressure, which simplifies to \begin{eqnarray} P &=& \left(\frac{m}{2\pi}\right)^{3/2}T^{5/2}\mbox{Li}_{5/2}(\td{z}) + 2 \lambda \left(\frac{mT}{2\pi}\right)^3\mbox{Li}^2_{3/2}(z) \label{eos.3a} \end{eqnarray} with $z = \exp(\beta\mu)$ and $\td{z} = \exp(\beta\td{\mu})$. To lowest order in $\lambda$ the pressure can be written \begin{eqnarray} P = \left(\frac{m}{2\pi}\right)^{3/2}T^{5/2}\mbox{Li}_{5/2}(z) - 2 \lambda\left(\frac{mT}{2\pi}\right)^3\mbox{Li}^2_{3/2}(z) \label{eos.3b} \end{eqnarray} where the last term is the two-loop contribution. We have here used the following property of the polylogarithmic functions: \begin{eqnarray} \frac{d}{dx}\mbox{Li}_n(x) = \frac{1}{x}\mbox{Li}_{n-1}(x) \end{eqnarray} Expanding in powers of $\beta\mu$, we find that this agrees with the result of Popov \cite{Popov}. Again we want to write the pressure in terms of the particle density which now becomes \begin{eqnarray} \rho &=& \left(\frac{mT}{2\pi}\right)^{3/2}\mbox{Li}_{3/2}(\td{z}) + 4\lambda\left(\frac{m}{2\pi}\right)^3T^2\mbox{Li}_{1/2}(z)[\mbox{Li}_{3/2}(z) - \mbox{Li}_{3/2}(\td{z})] \label{eos.5} \end{eqnarray} Inverting this relation, we find to lowest order in $\lambda$ \begin{eqnarray} \mu = T\ln \mbox{Li}^{-1}_{3/2}(\rho\Lambda_T^3) + 4\lambda\rho \label{eos.6} \end{eqnarray} where $\mbox{Li}_n^{-1}$ is the inverse of $\mbox{Li}_n$ and $\Lambda_T = \sqrt{2\pi/mT}$ is the thermal wavelength. To the same order the self energy is $\Pi = -4\lambda\rho$. Thus, the effective chemical potential becomes \begin{eqnarray} \td{\mu} = \mu + \Pi = T\ln \mbox{Li}^{-1}_{3/2}(\rho\Lambda_T^3) \label{eos.7} \end{eqnarray} which equals the usual chemical potential for a free gas. Let us again refer to Fig. 9 where the chemical potentials $\td{\mu}$ and $\mu$ are plotted as functions of temperature.
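The polylogarithms entering the normal-phase equation of state are easy to evaluate directly from the defining series (\ref{eos.2}). The sketch below (Python; the function names are our own) checks $\mbox{Li}_n(1)=\zeta(n)$ for $n=5/2$ and verifies the derivative identity quoted above by central differences.

```python
import math

def polylog(n, x, terms=2000):
    """Li_n(x) = sum_{k>=1} x^k / k^n, from the defining series (eos.2)."""
    return sum(x ** k / k ** n for k in range(1, terms + 1))

# Li_n(1) = zeta(n): Li_{5/2}(1) should approach zeta(5/2) = 1.341487...
print(polylog(2.5, 1.0, terms=100_000))

# derivative identity d/dx Li_n(x) = Li_{n-1}(x)/x, checked numerically at x = 1/2
x, h = 0.5, 1e-6
numeric = (polylog(2.5, x + h) - polylog(2.5, x - h)) / (2 * h)
exact = polylog(1.5, x) / x
print(abs(numeric - exact))  # should be small
```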
The difference $\mu - \td{\mu} = 4\lambda\rho$, found just below the critical temperature, remains constant in the normal phase where the potentials have the same temperature dependence. As a function of density the pressure becomes \begin{eqnarray} P = \frac{4\pi}{m}a\rho^2 + \rho T\left[1 - \frac{\Lambda_T^3\rho}{2^{5/2}}\right] + {\cal O}(\rho^3) \label{eos.8} \end{eqnarray} The corresponding second virial coefficient reads \begin{eqnarray} B_2 =-\frac{\Lambda_T^3}{2^{5/2}}\left[1 -8a\sqrt{\frac{mT}{\pi}}\right] \label{eos.10} \end{eqnarray} From the relation ${\cal E} = -\partial(\beta P)/\partial\beta|_{\beta\mu}$ we similarly have for the energy density \begin{eqnarray} {\cal E} = \frac{4\pi}{m}a\rho^2 + \frac{3}{2}\rho T\left[1 - \frac{\Lambda_T^3\rho}{2^{5/2}}\right] + {\cal O}(\rho^3) \label{eos.9} \end{eqnarray} again in agreement with earlier results \cite{LeeYang_3,Huang,Anne}. At $T_C$ these expressions coincide with the limits taken from sub-critical temperatures, found in the last subsection. Summing up, Eqs. (\ref{eff.11}), (\ref{eff.20}) and (\ref{eos.8}) constitute the equation of state for the Bose gas at all temperatures. \section{Discussion and conclusion} The non-interacting Bose-Einstein gas in the condensed phase is not a thermodynamically stable system since its compressibility is infinite \cite{Huang}. Also, the phase transition from the normal phase is special in that the correlation length diverges below the transition temperature. This unphysical behavior is directly related to the absence of a real spontaneous breakdown of the $U(1)$ symmetry of the system with the accompanying Goldstone bosons which here are the phonon excitations. With the introduction of a weak repulsion between the particles, the physics of the system is well-defined in both phases. We have here described the system using modern field-theoretic methods based on functional integrals.
Instead of using the standard formalism with complex fields as used in the condensed matter literature \cite{FW}, we find it more convenient to use real fields. The divergences in the loop integrals have been regulated by introducing a physical cutoff. Since these are proportional to powers of the cutoff, we could instead have used dimensional regularization where they consequently would be absent. We have put special emphasis on enforcing the consequences of Goldstone's theorem. In the real-field formalism the important Pines-Hugenholtz theorem then takes a slightly different form. The thermodynamics is most directly obtained from the effective potential whose minimum gives the free energy of the system. At non-zero temperatures we find that the one-loop effective potential does not give a consistent description of the symmetry breakdown and that it must be improved by adding in ring corrections in a self-consistent way. It then becomes natural to introduce an effective chemical potential $\td{\mu}$ which acts as an order parameter, being positive below the critical temperature $T_C$ and giving a non-zero condensate, and passing through zero at $T_C$. In this way we find that $\td{\mu}$ is a more important variable in characterizing the phases of the system than the thermodynamic chemical potential $\mu$. In particular, for temperatures below $T_C$ where the particle number concept loses some of its meaning, the physical significance of $\mu$ is not clear. Our results are derived to lowest order in the coupling between the particles. As such, most of the thermodynamic results are already in the literature, dating back to the pioneering calculations of Lee, Yang and collaborators obtained by methods from statistical mechanics. What is new, in addition to the systematic use of the effective chemical potential, is basically a more coherent derivation in a framework which has to a large extent been developed for corresponding relativistic systems in high energy physics.
There is nothing preventing us from extending the calculations to higher orders in the interactions. One is then forced to consider the theory as an effective field theory which is non-renormalizable at very high energies, the coupling constant having dimensions of a length. As recently shown by Braaten and Nieto \cite{BN}, the zero-temperature ground state energy to next order in the coupling constant can then be obtained in a most direct way using the renormalization group for this effective field theory. With the recent introduction of magnetic traps, the experimental study of the thermodynamics of weakly interacting bosons has entered a new phase which will allow a much more detailed study of this important system. Here we have considered the particles in an open volume where the interactions dominate, while it is the confining potential which has the most important r\^ole in the trapped systems \cite{BEC}. The number of particles in the system is now finite, but one can still use quantum field theory in the grand canonical ensemble \cite{canonical}. Even with the large effort going on in the field at present, there is still a long way from experimental results to detailed verification of the theoretical properties derived here for the open system.\\\\ {\large \bf Acknowledgment}\\ We thank Mark Burgess for many useful discussions on ring-corrections of effective potentials at finite temperature.
\section{Introduction} Early in the next decade two heavy-ion accelerators, the Relativistic Heavy Ion Collider (RHIC) and the Large Hadron Collider (LHC), will create highly excited regions, comparable in size to the colliding heavy ions and with temperatures exceeding $200~MeV$. Among the most interesting speculations regarding ultra-high energy heavy ion collisions is the idea that regions of misaligned vacuum might occur\cite{Lee,RW,Gavin,AHW,AA,GGP}. In such misaligned regions, which are analogous to misaligned domains in a ferromagnet, the chiral condensate points in a different direction from that favored in the ground state. If they were produced, misaligned vacuum regions plausibly would behave as a pion laser, relaxing to the ground state by coherent pion emission\cite{Pratt,CGZ}. It is generally assumed that the fluctuation of the ratio of neutral to charged pions may be viewed as a signature of disoriented chiral condensate (DCC) phenomena. If a DCC state is formed, the production of one kind of pion should be enhanced compared to the other kinds. Furthermore, since a pion laser is formed, the mean momentum of the pions emitted from DCC regions should be much smaller. PHOBOS\cite{BW} is a compact silicon detector designed to measure particle multiplicity for all charged particles and for photons, and it will be used at the AGS and RHIC. The ability to measure photons will allow PHOBOS to study their parent $\pi^{0}$s and the fluctuation of the ratio of $\pi^{0}$ to $\pi^{\pm}$. The most interesting feature of PHOBOS for us is that it can measure low momentum particles, that is, it can detect the other signature of DCC phenomena. Two-pion Bose-Einstein correlations are widely used in high energy heavy-ion collisions to provide information on the space-time structure, degree of coherence and dynamics of the region where the pions were produced\cite{BGJ,GKW,ZAJ,LOR}, which is closely related to the single and two-pion inclusive distributions.
Coherent pion emission causes pions to be concentrated at low transverse momenta. This behavior should have explicit effects on the pion single particle spectrum and two-pion interferometry. Besides the large domain size of the disoriented chiral condensate (DCC) regions, there is also another kind of coherence length which corresponds to the wave packet length scale of the emitter. The wave packet length should also affect the pion spectrum distribution. Then the question arises: which one, the DCC size or the wave packet length, affects the pion spectrum distribution more strongly? To answer this question, in this paper we first derive the formula for the single particle spectrum, taking into account both the DCC size and the wave packet length, in Section two. As a simple example, the effects of the DCC region size and the emitter size on the pion spectrum distribution are given in Section three. Conclusions are given in Section four. \section{Pion spectrum distribution for partially coherent source} It is widely accepted that a state created by a classical pion source is described by \cite{BGJ,GKW,CGZ,ZQH1,ZQH} \begin{equation} |\phi>=exp(i\int d \vec{p} \int d^{4}x j(x) exp(ip\cdot x) c^{+}(\vec{p})) |0>, \end{equation} where $c^{+}(\vec {p})$ is the pion creation operator and $|0>$ is the pion vacuum. $j(x)$ is the current of the pion, which can be expressed as \begin{equation} j(x)=\int d^{4}x' d^{4}p j(x',p) \nu(x') exp(-ip\cdot (x-x')) . \end{equation} Here $j(x',p)$ is the probability amplitude of finding a pion with momentum $p$, emitted by the emitter at $x'$. $\nu(x')$ is a random phase factor. All emitters are uncorrelated in coordinate space under the assumption: \begin{equation} <\nu^{*}(x')\nu(x)>=\delta^{4}(x'-x) . \end{equation} This holds in the ideal case.
In a more realistic case, each chaotic emitter has a small coherent wave packet length scale and the above equation can be replaced by \begin{equation} <\nu^{*}(x')\nu(x)>=\frac{1}{\delta^{4}} \exp\{-\frac{(x_{1}-x'_{1})^{2}}{\delta^{2}} -\frac{(x_{2}-x'_{2})^{2}}{\delta^{2}} -\frac{(x_{3}-x'_{3})^{2}}{\delta^{2}} -\frac{(x_{0}-x'_{0})^{2}}{\delta^{2}} \} . \end{equation} Here $\delta$ is a parameter which determines the coherence length (time) scale of the emitter. For simplicity, the same coherence scale is taken for both space and time for the moment. The above formula means that two emitters within the range $\delta$ can be seen as one emitter, while two emitters outside this range are incoherent. For simplicity we also assume that \begin{equation} <\nu^{*}(x)>=<\nu(x)>=0 , \end{equation} which means that for each emitter the phases are randomly distributed in the range $0$ to $2 \pi$. The coherent state can be expanded in Fock space as \begin{eqnarray} |\phi>=\sum_{n=0}^{\infty} \frac{(i \int j(x) e^{ip\cdot x} c^{+}(p) d\vec{p} d^{4}x)^{n}}{n!}|0> =\sum_{n=0}^{\infty}|n>, \end{eqnarray} with \begin{equation} |n>=\frac{(i \int j(x) e^{ip\cdot x} c^{+}(p) d\vec{p} dx)^{n}}{n!}|0>. \end{equation} Here $|n>$ is the $n$-pion state. Using the relationship \begin{equation} \left[c(\vec{p_1}), c^+(\vec{p_2})\right]=\delta(\vec{p_1}-\vec{p_2}) \end{equation} we have \begin{equation} c(\vec{p})|n>=i\int d^{4}x j(x) exp(ip\cdot x) |n-1>, \end{equation} then \begin{eqnarray} I(Q,K)&=&<1|c^{+}(\vec{p}_{1})c(\vec{p}_{2})|1>= \int d^{4}x_{1} d^{4}x_{2} j^{*}(x_{1}) j(x_{2}) exp(-i(p_{1}\cdot x_{1}-p_{2}\cdot x_{2})) \nonumber\\ &=&\int d^{4}x_1 d^{4}x_2 j^{*}(x_1)j(x_2)exp(-i(K/2+Q)\cdot x_1 -i(K/2-Q)\cdot x_2) \nonumber\\ &=&\int d^{4}x_1 d^{4}x_2 j^{*}(x_1)j(x_2)exp(-iK\cdot (x_1-x_2)/2 -iQ\cdot (x_1+x_2)) \\ &=&\int d^{4}y d^{4}Y j^{*}(Y+y/2)j(Y-y/2)exp(-iK\cdot y/2 -i2Q\cdot Y) \nonumber\\ &=&\int d^{4}Y g_{w}(Y,k) exp(-i2Q\cdot Y) .
\nonumber \end{eqnarray} Here $Y=\frac{x_{1}+x_{2}}{2}, y=x_{1}-x_{2}$ are four dimensional coordinates, while $ Q=\frac{p_{1}-p_{2}}{2}$ and $K=2k=p_{1}+p_{2} $ are the corresponding four dimensional momenta. The above transformation, referred to as the Wigner transformation, can be found in the original paper of E. Wigner \cite{Wigner32}. $g_{w}(Y,k)$ is the Wigner function, which can be interpreted as the probability of finding a pion at $Y$ with momentum $k=K/2$\cite{Wigner32} and is defined as \begin{equation} g_{w}(Y,k)=\int d^{4}y j^{*}(Y+y/2)j(Y-y/2)exp(-iK\cdot y/2) . \end{equation} Inserting eq.(2) into the above equation we have \begin{eqnarray} g_{w}(Y,k)&=&\int d^{4}y exp(-ik\cdot y) \nonumber\\ &&\int d^{4}x'j^{*}(x',p_{1})dp_{1}e^{ip_{1}\cdot (Y+y/2-x')}\nu^{*}(x') \\ &&\int d^{4}x''j(x'',p_{2})dp_{2}e^{-ip_{2}\cdot (Y-y/2-x'')}\nu(x'') , \nonumber \end{eqnarray} then the single pion inclusive distribution $P_{1}^{cha}(\vec{k})$ can be expressed as (eq.10): \begin{eqnarray} P_{1}^{cha}(\vec{k})&=&<1|c^+(\vec{k})c(\vec{k})|1>=\int g_{w}(Y,k) d^{4}Y \nonumber\\ &=&\int d^{4}x' d^{4}x'' d^{4}p_{1}d^{4}p_{2} j^{*}(x',p_{1})j(x'',p_{2}) \nonumber\\ &&\nu^{*}(x')\nu(x'')\delta^{4}(k-\frac{p_{1}+p_{2}}{2}) \delta^{4}(p_{1}-p_{2}) e^{ip_{1}\cdot (Y-x')}e^{-ip_{2}\cdot (Y-x'')}\\ &=&\int d^{4}x' d^{4}x'' j^{*}(x',k)j(x'',k)\nu^{*}(x')\nu(x'')e^{-ik\cdot (x'-x'')} . \nonumber \end{eqnarray} with $k_{0}$ taken to be $k_{0}=\sqrt{\vec{k}^{2}+m_{\pi}^{2}}$. In the above equation, we have taken into account the wave packet length $\delta$ of the emitters which form a chaotic source.
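The role of the random phases $\nu(x)$ can be illustrated with a small Monte Carlo experiment (a sketch under our own toy assumptions, not part of the derivation): $N$ unit-amplitude emitters with independent random phases give a mean intensity $\langle|\sum_k e^{i\theta_k}|^2\rangle \approx N$, while a common phase, the coherent case, gives exactly $N^2$.

```python
import cmath
import math
import random

def mean_intensity(n_emitters, n_trials, coherent):
    """Average |sum_k exp(i theta_k)|^2 over trials, for unit-amplitude emitters."""
    total = 0.0
    for _ in range(n_trials):
        if coherent:
            theta = random.uniform(0.0, 2.0 * math.pi)
            phases = [theta] * n_emitters            # one phase shared by all emitters
        else:
            phases = [random.uniform(0.0, 2.0 * math.pi)
                      for _ in range(n_emitters)]    # independent random phases
        amplitude = sum(cmath.exp(1j * th) for th in phases)
        total += abs(amplitude) ** 2
    return total / n_trials

random.seed(1)
N = 100
print(mean_intensity(N, 2000, coherent=False))  # fluctuates around N
print(mean_intensity(N, 10, coherent=True))     # N**2, up to rounding
```

This is the same mechanism that makes the cross terms $<j^*(x)j_c(y)>$ vanish after phase averaging below.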
For a system with one finite size coherent source, e.g., a finite DCC region, and many other totally chaotic emitters which form a chaotic source, the state is described by \cite{CGZ} \begin{equation} |\phi>_{part}=exp(i\int d \vec{p} \int d^{4}x (j(x)+j_{c}(x)) exp(ip\cdot x) c^{+}(\vec{p}))|0>~~~ , \end{equation} where $c^{+}(p)$ is the pion creation operator; $ j_{c}(x)$ is the current of pions produced by the coherent source, which can be expressed as \begin{equation} j_{c}(x)=\int d^{4}x' d^{4}p j_{c}(x',p) \exp\{-ip\cdot (x-x')\}; \end{equation} $j(x)$ is the current of pions produced by the totally chaotic source, which can be expressed as eq.(2). The difference between $j(x)$ and $j_c(x)$ is the following: for the chaotic source, each emitter has a different phase (a different $\nu(x)$ in eq.(2)), while for $j_c(x)$ each emitter has the same phase. The state $|\phi>_{part}$ can be expanded as \begin{equation} |\phi>=\sum_{n=0}^{\infty} \frac{(i \int (j(x)+j_{c}(x))e^{ip\cdot x} c^{+}(p) d\vec{p} dx)^{n}}{n!}|0> =\sum_{n=0}^{\infty}|n>_{part}~~~~~, \end{equation} with \begin{equation} |n>_{part}=\frac{(i \int (j_{c}(x)+j(x)) e^{ip\cdot x} c^{+}(p) d\vec{p} d^4x)^{n}}{n!}|0>. \end{equation} Here $|n>_{part}$ is the $n$-pion state.
Then the pion spectrum distribution is \begin{eqnarray} P_{1}^{part}(\vec{p})&=&_{part}<1|c^{+}(\vec{p})c(\vec{p})|1>_{part} \nonumber\\ &=&\left( \int d^4 x (j(x) +j_c(x))e^{ip\cdot x} \right)^* \left( \int d^4 x (j(x) +j_c(x))e^{ip\cdot x} \right) \\ &=&\int d^4x_1d^4x_2 (j^*(x_1)j(x_2)+j^*(x_1)j_c(x_2) +j^*_c(x_1)j(x_2)+ \nonumber\\ &&j^{*}_{c}(x_1)j_c(x_2) ) e^{-ip\cdot (x_1-x_2)} \nonumber \end{eqnarray} Taking the phase average and using the relationship of eq.(5) we have \begin{equation} <j^*(x)j_c(y)>=<j_c(x)j^*(y)>=0 \end{equation} Then the above equation can be re-expressed as \begin{eqnarray} P_1(\vec{p})&=&\int d^4x_1d^4 x_2 j^*(x_1)j(x_2) e^{-i(p\cdot (x_1 - x_2))} \nonumber\\ && +\int d^4x_1d^4x_2 j^*_c(x_1)j_c(x_2)e^{-i(p\cdot (x_1- x_2))} \\ &=&\int g_w(x,\vec{p})d^4 x + |j_c(\vec{p})|^2 \nonumber \end{eqnarray} Here $j_{c}(\vec{p})$ can be expressed as \begin{equation} j_{c}(\vec{p})=\int j_{c}(x) \exp(ip\cdot x) d^{4} x . \end{equation} The above formula (eq.(20)) shows that the total pion spectrum distribution consists of two parts: one is the spectrum distribution of the chaotic source, the other is that of the coherent source. In the above derivation we have taken into account the wave packet length and the coherent pion source $j_{c}$; therefore, in our formulation it is possible to examine the effects of both the wave packet length of each chaotic emitter and the DCC radius on the pion single particle distribution, when the pions emitted from the DCC region are assumed to be coherent. \section{Coherent length effects on pion spectrum } In this section, we will give an example to investigate the effects of the wave packet length and the coherent source radius on single pion distributions. We assume that the chaotic emitter amplitude distribution is \begin{equation} j(x,k)=\exp(\frac{-x_{1}^{2}-x_{2}^{2}-x_{3}^{2}}{2R_{0}^{2}}) \delta(x_{0}) exp(-\frac{k_{1}^{2}+k_{2}^{2}+k_{3}^{2}}{2\Delta^{2}})~~~.
\end{equation} Here $R_{0}$ and $\Delta$ are parameters which represent the radius of the chaotic source and the momentum range of the pions, respectively, while $x=(x_0,x_1,x_2,x_3)$ and $k=(k_0,k_1,k_2,k_3)$ are the pion's coordinate and momentum. Inserting eq.(4) and eq.(22) into eq.(13), we can easily get the pion single particle spectrum distribution \begin{equation} P_{1}^{cha}(\vec{p})= (\frac{\frac{1}{\Delta^{2}}+\frac{R_{0}^{2}\delta^{2}}{\delta^{2}+4R_{0}^{2}} }{\pi})^{\frac{3}{2}} \exp\{-\vec{p}^{2}\cdot (\frac{1}{\Delta^{2}}+\frac{R_{0}^{2}\delta^{2}}{\delta^{2}+4R_{0}^{2}} )\} . \end{equation} From the above expression, we can see that the wave packet length of each chaotic emitter has a strong influence on the pion single particle inclusive distribution. The single particle momentum distribution is shown in fig.1. The input values of $R_{0}$ and $\Delta$ are $5~fm$ and $0.3~GeV$, respectively. The solid, dashed and dot-dashed lines correspond to $\delta = 0~fm$, $0.5~fm$ and $1~fm$, respectively. It is clearly shown that as the wave packet length $\delta$ increases, the mean momentum of the pions gets smaller, which means that the wave packet length of the chaotic emitter can cause an abundance of pions at low momentum. Now we consider the effects of a finite size coherent source on the pion single particle inclusive distribution. We assume that the emitting amplitude of coherent pions is \begin{equation} j_{c}(x,k)=\exp(\frac{-x_{1}^{2}-x_{2}^{2}-x_{3}^{2}}{2R_{c}^{2}}) \delta(x_{0}) exp(-\frac{k_{1}^{2}+k_{2}^{2}+k_{3}^{2}}{2\Delta_{c}^{2}}) . \end{equation} Here $R_{c}$ and $\Delta_{c}$ are parameters which represent the radius of the coherent source, e.g. the DCC region, and the mean momentum of the coherent pions; $x=(x_0,x_1,x_2,x_3)$ and $k=(k_0,k_1,k_2,k_3)$ are again the pion's coordinate and momentum.
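To see quantitatively how the wave packet length suppresses the mean momentum, the sketch below evaluates the Gaussian width parameter $\alpha(\delta) = 1/\Delta^2 + R_0^2\delta^2/(\delta^2+4R_0^2)$ of the chaotic spectrum above, and the resulting mean momentum $\langle|\vec{p}|\rangle = 2/\sqrt{\pi\alpha}$ of a 3D Gaussian, for the parameter values of fig.1. The unit conversion via $\hbar c = 0.1973$ GeV$\cdot$fm and the helper names are our own assumptions.

```python
import math

HBARC = 0.1973  # GeV*fm, converts lengths in fm to GeV^-1 (our convention)

def width_param(delta_fm, r0_fm=5.0, delta_mom_gev=0.3):
    """alpha = 1/Delta^2 + R0^2 d^2/(d^2 + 4 R0^2) in GeV^-2, from the chaotic spectrum."""
    r0 = r0_fm / HBARC
    d = delta_fm / HBARC
    return 1.0 / delta_mom_gev ** 2 + r0 ** 2 * d ** 2 / (d ** 2 + 4.0 * r0 ** 2)

def mean_momentum(alpha):
    """<|p|> = 2/sqrt(pi*alpha) for the 3D Gaussian spectrum exp(-alpha p^2)."""
    return 2.0 / math.sqrt(math.pi * alpha)

for delta in (0.0, 0.5, 1.0):  # the three curves of fig.1
    a = width_param(delta)
    print(f"delta = {delta:.1f} fm: <|p|> = {mean_momentum(a):.3f} GeV")
```

The mean momentum decreases monotonically with $\delta$, in line with the discussion of fig.1.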
The normalized pion single particle distribution can be expressed as \begin{equation} P_{nor}^{part}(\vec{p})= A\cdot P^{cha}_{nor}(\vec{p}) + (1-A) P^{c}_{nor}(\vec{p})~~~~~, \end{equation} where $A$ is a parameter which determines the degree of incoherence of the source and is defined by \begin{equation} A=\frac{\int d\vec{p} P^{cha}_{1}(\vec{p})} {\int d\vec{p} (P^{cha}_{1}(\vec{p}) + P^{c}_{1}(\vec{p}))}. \end{equation} For $A=1$ the source is totally chaotic, for $A=0$ the source is totally coherent, otherwise the source is partially coherent. Here $P^{cha}_{nor}(\vec{p})$ can be expressed as \begin{equation} P^{cha}_{nor}(\vec{p})= (\frac {\frac{1}{\Delta^{2}}+\frac{R_{0}^{2}\delta^{2}}{\delta^{2}+4R_{0}^{2}} }{\pi})^{\frac{3}{2}} \exp\{-\vec{p}^{2}(\frac{1}{\Delta^{2}}+ \frac{R_{0}^{2}\delta^{2}}{\delta^{2}+4R_{0}^{2}} )\} \end{equation} and $P^{c}_{nor}(\vec{p})$ can be expressed as \begin{equation} P^{c}_{nor}(\vec{p})=(\frac {\frac{1}{\Delta_{c}^{2}}+R_{c}^{2}} {\pi})^{3/2} \exp\{-\vec{p}^{2}(\frac{1}{\Delta_{c}^{2}}+R_{c}^{2}) \}~~~. \end{equation} The single particle inclusive distribution for the partially coherent source is shown in fig.2, where the input values of $A$, $R_{0}$, $\Delta$, $\Delta_{c}$ and $\delta$ are $0.5$, $5~fm$, $0.3~GeV$, $0.15~GeV$ and $0.3~fm$, respectively. The solid, dashed and dot-dashed lines correspond to $R_{c}=1~fm$, $2~fm$ and $3~fm$, respectively. It is clear that as $R_{c}$ becomes larger, that is, as the DCC region becomes larger, the pion mean momentum becomes smaller; this is consistent with the coherent nature of the source. It can be seen from fig.2 that the DCC size effect on the pion spectrum distribution is more important than the emitter size effect. So observing abundant pions at low momentum can be taken as a signature of the DCC effect. \section{Conclusions} It has been suggested that a large DCC region may be formed in relativistic heavy-ion collisions.
If a DCC region is formed, a large number of low momentum pions should be produced, which is one of the signatures of DCC phenomena. There is also another kind of coherence length which corresponds to the wave packet length of each chaotic emitter and which also affects the pion momentum distribution. Therefore it is very interesting to analyze the effects of the two kinds of lengths, namely the DCC size and the wave packet length, on pion single particle inclusive distributions and to find out which effect is more important. In this paper, as a simple example, we have derived the formula for the pion spectrum distribution, taking into account both the wave packet length and the DCC size, and analyzed the effects of the two coherence lengths on pion inclusive distributions. It has been shown that both coherence lengths can cause an abundance of pions at low momentum. Of the two, the DCC size effect on the pion spectrum distribution is the more important. Therefore observing abundant pions at low momentum may provide a signal of the DCC effect. Such a signal can be detected by PHOBOS at RHIC. \section*{{Acknowledgement}} The authors would like to express their gratitude to the referees for their helpful suggestions and comments. One of the authors (Q.H.Z.) would like to express his thanks to Dr. Pang Yang for helpful discussions. This study was partially supported by the National Natural Science Foundation of China, the Post-doctoral Science Foundation of China and the Alexander von Humboldt Foundation in Germany.
\section{Introduction}\label{sec:1} In two dimensional quantum field theory, two elegant theorems are known. Zamolodchikov showed~\cite{Zamolodchikov:1986gt} that there exists a function $c(r)$ of a length scale $r$ which monotonically decreases as $r$ is increased, and becomes constant only on conformal fixed points. Roughly speaking, this result indicates that the number of ``degrees of freedom'' monotonically decreases along renormalization group (RG) flows. This is the famous Zamolodchikov $c$-theorem. Then, Polchinski proved~\cite{Polchinski:1987dy} that all scale invariant theories (with discrete spectrum of scaling dimensions) are also conformally invariant. The result of Ref.~\cite{Zamolodchikov:1986gt} played a crucial role in the proof of Ref.~\cite{Polchinski:1987dy}. There have also been significant developments in the study of monotonically decreasing quantities in higher dimensions. In even dimensional CFT, the trace of the energy-momentum tensor has an anomaly when the CFT is coupled to an external gravitational background. It is given as~\cite{Deser:1993yx} \begin{eqnarray} T^\mu_\mu=(-1)^{\frac{d}{2}+1} a E_d+\cdots, \label{eq:eventraceanomaly} \end{eqnarray} where $E_{d} \propto \epsilon_{\mu_1\mu_2 \cdots \mu_{d-1}\mu_{d}} \epsilon^{\nu_1\nu_2 \cdots \nu_{d-1}\nu_{d}}R^{\mu_1\mu_2}_{~~~~\nu_1\nu_2}\cdots R^{\mu_{d-1}\mu_{d}}_{~~~~~~~\nu_{d-1}\nu_{d}}$ is the Euler density, and the dots indicate terms which vanish in a conformally flat background. In two dimensional CFTs, the coefficient of the Euler density $E_{d}$ in the trace anomaly (written as $a$ in Eq.~(\ref{eq:eventraceanomaly})) coincides with Zamolodchikov's $c$ function. In general even dimensional field theories, it was conjectured~\cite{Cardy:1988cwa} that $a$ decreases along RG flows in theories which interpolate between UV and IR CFTs, that is, $a_{\rm IR}<a_{\rm UV}$. The quantity $a$ may be extracted in the following way.
Let us put a CFT on a $d$-dimensional sphere $S^d$ with radius $r$ and consider the partition function on the sphere, \begin{eqnarray} Z=\int [D \varphi] e^{-S}, \end{eqnarray} where $\varphi$ denotes dynamical fields of the theory, $S$ is the action on the sphere, and $\int [D \varphi]$ is the path integral. Using the fact that the change of the radius $r$ as $r \to e^{\sigma} r$ for a constant $\sigma$ is equivalent to the Weyl rescaling of the metric as $g_{\mu\nu} \to e^{2\sigma}g_{\mu\nu}$, the free energy $F=-\log Z$ satisfies \begin{eqnarray} \frac{d F}{d \log r}=-\vev{ \int d^d x \sqrt{g}\, T^\mu_\mu} \propto (-1)^{\frac{d}{2}} a, \end{eqnarray} where we have used the fact that the metric of the sphere is conformally flat in projective coordinates, $ds^2=(2r^2/(x^2+r^2))^2dx^2$, and hence the terms denoted by the dots in Eq.~(\ref{eq:eventraceanomaly}) do not contribute. Therefore, the above conjecture may be interpreted as the conjecture that the function $(-1)^{\frac{d}{2}} d F / d \log r$ decreases as $r$ is increased. In odd dimensions, there is a similar conjecture about the free energy $F$~\cite{Jafferis:2011zi,Klebanov:2011gs}. In this case, it is $(-1)^{\frac{d+1}{2}} F$ that is conjectured to decrease. Therefore, in both even and odd dimensions, the free energy on the sphere, $F=-\log Z$, plays an important role in the study of monotonically decreasing quantities. Recently, a proof that $a$ satisfies $a_{\rm IR}<a_{\rm UV}$ was given in four dimensions~\cite{Komargodski:2011vj} and further discussed in Refs.~\cite{Komargodski:2011xv,Luty:2012ww}. Also, a monotonically decreasing quantity was constructed in three dimensions~\cite{Casini:2012ei} which coincides with $F$ in CFT~\cite{Casini:2011kv}. Completely different methods were used in the proofs in two~\cite{Zamolodchikov:1986gt}, three~\cite{Casini:2012ei} and four~\cite{Komargodski:2011vj} dimensions.
There is still no proof in general space-time dimensions, although holography suggests the existence of a monotonically decreasing function in arbitrary dimensions~\cite{Freedman:1999gp,Myers:2010xs,Myers:2010tj}. See Refs.~\cite{Elvang:2012st,Elvang:2012yc,Bhattacharyya:2012tc} for recent works in six and higher dimensions. Progress has also been made regarding the equivalence of scale and conformal invariance in four dimensions. A proof of the equivalence was given in perturbation theory~\cite{Luty:2012ww,Fortin:2012hn} (see also Refs.~\cite{Callan:1970ze,Polchinski:1987dy,Dorigoni:2009ra,Antoniadis:2011gn,Zheng:2011bp}).\footnote{ Although the existence of explicit counterexamples is discussed~\cite{Fortin:2011ks,Fortin:2011sz,Fortin:2012ic,Fortin:2012cq}, they are argued to be conformally invariant~\cite{Nakayama:2011tk,Nakayama:2012nd,Fortin:2012hn} based on the results of Refs.~\cite{Jack:1990eb,Osborn:1991gm}. } Ref.~\cite{Luty:2012ww} also gave a non-perturbative argument in favor of the equivalence. In that proof, the existence of a monotonically decreasing quantity $a$ (or more precisely the dilaton forward scattering amplitude) is essential. This is similar to the proof of the equivalence in two dimensions~\cite{Zamolodchikov:1986gt,Polchinski:1987dy}. However, much less is known in other dimensions.\footnote{ There exist free field theory counterexamples in $d >4$~\cite{ElShowk:2011gz,Jackiw:2011vz}. But there is no local current operator for scaling symmetry and only the charge is well-defined in those theories. There still remains a possibility that every scale invariant theory with a scaling current is conformally invariant.} In particular, there are many perturbative field theories in three dimensions, and there is a possibility that some of them could be scale invariant without conformal invariance by the same mechanism discussed in Refs.~\cite{Fortin:2011ks,Fortin:2011sz,Fortin:2012ic,Fortin:2012cq}.
As discussed above, one of the ways to generalize the two dimensional theorems to arbitrary dimensions may be to use the free energy on the sphere. The flow of the free energy has been studied for a CFT deformed by adding slightly marginal operators ${\cal O}$ to the Lagrangian~\cite{Cardy:1988cwa,Klebanov:2011gs}. The operators were assumed to have scaling dimensions $d-y$ with $y \ll 1$. In this paper, we study the free energy for general weakly interacting field theories with marginal interactions. A list of such theories is given in Table~\ref{table:1}. We show that $(-1)^{\frac{d+1}{2}} F$ (in odd dimensions) or $(-1)^{\frac{d}{2}} d F / d \log r$ (in even dimensions) decreases monotonically in these theories. Furthermore, following Ref.~\cite{Luty:2012ww}, we argue that scale invariance is equivalent to conformal invariance in these theories. The rest of the paper is organized as follows. In section~\ref{sec:2} we give a relation between the free energy on the sphere and the ``dilaton effective action'' which was used in the proof of the $a$-theorem in four dimensions~\cite{Komargodski:2011vj,Komargodski:2011xv,Luty:2012ww}. It enables us to compute perturbative flows of the free energy by using the method of Refs.~\cite{Komargodski:2011xv,Luty:2012ww}. We obtain the dilaton effective action in dimensional regularization. In section~\ref{sec:3}, we compute the flow of the free energy using the dilaton effective action. We check our result in two, three, four and six dimensions. Using the result, we argue for the equivalence of scale and conformal invariance. Section~\ref{sec:4} is devoted to conclusions.
\begin{table}[] \begin{center} \begin{tabular}{|c|c|} \hline Dimensions & Lagrangians \\ \hline $d=2$ & $G_b(\phi)(\partial_\mu\phi)^2 +G_f(\phi)\psi \Slash{\partial}\psi+H(\phi)\psi^4$\\ \hline $d=3$ & $ (A \partial A+\frac{2}{3}A^3)+(D_\mu \phi)^2+\psi \Slash{D}\psi+\phi^6+\phi^2 \psi^2 $ \\ \hline $d=4$ & $(F_{\mu\nu})^2+(D_\mu \phi)^2+\psi \Slash{D}\psi+\phi^4+ \phi \psi^2$ \\ \hline $d=6$ & $ (\partial_\mu \phi)^2+\phi^3$ \\ \hline \end{tabular} \caption{Schematic forms of the Lagrangians of perturbative theories with marginal interactions. The fields $\phi$ are bosons, $\psi$ are fermions, $A$ are gauge bosons, $F$ are gauge field strengths, and possible indices specifying these fields are suppressed. $G_b(\phi)$, $G_f(\phi)$ and $H(\phi)$ are arbitrary functions of scalar fields $\phi$. All the coupling constants are dimensionless in these Lagrangians.} \label{table:1} \end{center} \end{table} \section{Dilaton effective action}\label{sec:2} We define the free energy of a theory on a $d$-dimensional sphere as a dilaton effective action in the following way. We first consider the partition function as a functional of a background metric $\hat{g}_{\mu\nu}$. (The hat is used on the metric following the notation of Refs.~\cite{Komargodski:2011vj,Komargodski:2011xv,Luty:2012ww}.) It is given as \begin{eqnarray} Z &=& \int [D\varphi] \exp\left(-S[\varphi,\hat{g}_{\mu\nu}]-S_{\rm c.t.}[\hat{g}_{\mu\nu}] \right) \nonumber \\ &=& Z_0 \exp\left(-S_{\rm eff,0}[\hat{g}_{\mu\nu}]-S_{\rm c.t.}[\hat{g}_{\mu\nu}] \right), \end{eqnarray} where $\varphi$ denotes dynamical fields of the theory, and $S[\varphi,\hat{g}_{\mu\nu}]$ is the action of the fields $\varphi$ coupled to the metric $\hat{g}_{\mu\nu}$. The factor $Z_0$ is the contribution to the partition function which does not depend on the background metric, and $S_{\rm eff,0}$ is the (bare) effective action of the metric obtained as a result of the path integral. 
The counterterm $S_{\rm c.t.}$ is taken so that the functional \begin{eqnarray} S_{\rm eff}[\hat{g}_{\mu\nu}]=S_{\rm eff,0}[\hat{g}_{\mu\nu}]+S_{\rm c.t.}[\hat{g}_{\mu\nu}] , \end{eqnarray} becomes finite. We will impose a further condition on the counterterms $S_{\rm c.t.}$ later. We introduce a dilaton field $\tau$ and a new metric $g_{\mu\nu}$ as $\hat{g}_{\mu\nu}=e^{-2\tau} g_{\mu\nu}$. Then the dilaton effective action is defined as \begin{eqnarray} S[\tau, g_{\mu\nu}]=S_{\rm eff}[\hat{g}_{\mu\nu}=e^{-2\tau} g_{\mu\nu}]. \end{eqnarray} This definition of the dilaton effective action is emphasized in Ref.~\cite{Luty:2012ww}. When $g_{\mu\nu}=\eta_{\mu\nu}$, it gives the dilaton effective action in flat space, and this definition makes clear the invariance of the dilaton effective action under conformal transformations. This is because conformal transformations are just the subgroup of diffeomorphisms of the original metric $\hat{g}_{\mu\nu}$ which preserves the form $d\hat{s}^2=e^{-2\tau}dx^2$. The metric of the sphere with radius $r$ can be written using the projective coordinates as $d\hat{s}^2=\hat{g}_{\mu\nu}dx^\mu dx^\nu=[2r^2/(x^2+r^2)]^2 dx^2$. However, we may also interpret this as a flat metric $g_{\mu\nu}=\eta_{\mu\nu}$ with a nontrivial background for the dilaton, $e^{-\tau}=2r^2/(x^2+r^2)$. Then, the free energy of the theory on the sphere, $F=-\log Z$, is given as \begin{eqnarray} F(r)=-\log Z_0+S\left[ e^{-\tau}=\frac{2r^2}{x^2+r^2}, \eta_{\mu\nu} \right]. \label{eq:deffreeenergy} \end{eqnarray} By this interpretation, we can use the results of Refs.~\cite{Komargodski:2011xv,Luty:2012ww} for the dilaton effective action to compute the free energy on the sphere. It is clear that the dependence of $F(r)$ on the radius of the sphere $r$ should be contained in the second term of Eq.~(\ref{eq:deffreeenergy}). In this paper we attempt to calculate only the derivatives of $F(r)$ with respect to $r$.
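As a quick sanity check of this identification (an editorial sketch, not part of the paper; the function name is ours), one can verify numerically that integrating the flat measure against $e^{-d\tau}$ with the above dilaton profile reproduces the volume of the round sphere of radius $r$, e.g. $4\pi r^2$ for $d=2$ and $2\pi^2 r^3$ for $d=3$:

```python
import math

def sphere_volume_from_dilaton(d, r, n=20000):
    """Compute int d^dx [2 r^2/(x^2 + r^2)]^d, i.e. the flat-space integral of
    e^{-d*tau} with e^{-tau} = 2 r^2/(x^2 + r^2).  Radial reduction plus the
    substitution rho = r*tan(theta) gives a smooth integrand on [0, pi/2],
    which is evaluated by composite Simpson quadrature."""
    omega = 2*math.pi**(d/2)/math.gamma(d/2)   # area of the unit (d-1)-sphere
    f = lambda t: math.sin(t)**(d - 1)*math.cos(t)**(d - 1)
    a, b = 0.0, math.pi/2
    h = (b - a)/n
    s = f(a) + f(b) + sum((4 if k % 2 else 2)*f(a + k*h) for k in range(1, n))
    return omega * (2*r**2)**d * r**(-d) * (s*h/3)
```

For $d=2$, $r=1$ this returns $4\pi$ to high accuracy, and for $d=3$ it returns $2\pi^2 r^3$, matching ${\rm Vol.}(S^d)$ as used later in the evaluation of $I_{d_i}$.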
Then we may neglect the term $\log Z_0$, and focus on the dilaton effective action. The above definition still has an ambiguity regarding the choice of the counterterms in $S_{\rm c.t.}$. Although the divergent part of $S_{\rm c.t.}$ is determined uniquely so that it makes the metric effective action $S_{\rm eff}$ finite, the finite part of $S_{\rm c.t.}$ is not fixed. We impose the following requirement on the finite part. In this paper we only consider massless theories which do not contain dimensionful parameters (see Table~\ref{table:1}). Furthermore, we always use dimensional regularization as a regularization method. Then, by using a mass-independent renormalization scheme (such as minimal subtraction), counterterms which contain dimensionful coefficients are not necessary (see e.g. Ref.~\cite{Weinberg:1996kr}). That is, we can set all the counterterms to zero aside from counterterms with dimensionless coefficients which are schematically given as \begin{eqnarray} S_{\rm c.t.}[\hat{g}_{\mu\nu}] \sim \int d^dx \sqrt{\hat{g}} (R_{\mu\nu\rho\sigma}(\hat{g}))^{\frac{d}{2}}, \label{eq:massindependentcounterterm} \end{eqnarray} where $R_{\mu\nu\rho\sigma}$ is the Riemann tensor and indices are contracted in arbitrary ways. Therefore, we only introduce counterterms of the form (\ref{eq:massindependentcounterterm}). This is our criterion for choosing the counterterms. In the case of odd dimensions, terms like Eq.~(\ref{eq:massindependentcounterterm}) do not exist and hence we need no counterterms at all. Therefore $F(r)$ is uniquely determined by our criterion. In even dimensions, finite counterterms of the form (\ref{eq:massindependentcounterterm}) are allowed,\footnote{The finite counterterms are in fact necessary in order for the effective action $S_{\rm eff}(\hat{g}_{\mu\nu})$ to be RG invariant. Even if they are set to zero at some RG scale, they are generated along RG flows.} and hence the ambiguity in defining $F(r)$ remains.
However, one can see that the contributions coming from these finite counterterms disappear if we take the derivative of the free energy, $dF/dr$.\footnote{Strictly speaking, these contributions are not precisely zero in dimensional regularization. They are suppressed by $\epsilon$, where the space-time dimension is given by $d=({\rm integer})-2\epsilon$. Then, the contributions of the finite part of the counterterms become zero when we take $\epsilon \to 0$, but the contributions from the divergent part of the counterterms are important. } As discussed in the introduction, the important quantity in even dimensions is $dF/dr$ rather than $F$ itself, and hence the remaining ambiguity in choosing the counterterms does not matter. The above requirement on the counterterms is a little technical. A more physical requirement may be that the free energy $F$ (in odd dimensions) or its derivative $dF/dr$ (in even dimensions) becomes constant on UV/IR fixed points. This physical requirement will be satisfied by the above choice of the counterterms. Now let us study the dilaton effective action $S[\tau]$ in flat (Euclidean) space-time. The most important part of $S[\tau]$ in perturbation theory has been given in Refs.~\cite{Komargodski:2011xv,Luty:2012ww}. We use dimensional regularization where we work in $d=d_0-2\epsilon$ dimensions with $d_0$ an integer. We expand the action as \begin{eqnarray} S[\varphi, \hat{g}_{\mu\nu}= e^{-2\tau}\eta_{\mu\nu}] &=& S[\varphi, \eta_{\mu\nu}] +\int d^d x \tau T^{\mu}_{\mu}+O(\tau^2), \label{eq:mattertau} \end{eqnarray} where $T^{\mu\nu}$ is the energy-momentum tensor. The linear term in $\tau$ is proportional to the trace anomaly. We assume that interaction terms are present in the Lagrangian as $\lambda_0^i {\cal O}_i $, where $\lambda^i_0$ are bare couplings and ${\cal O}_i$ are bare operators. For example, in a four-dimensional scalar $\phi^4$ theory, we may define ${\cal O}=\frac{1}{4!} \phi^4$.
Then, if the energy-momentum tensor is improved appropriately, the trace anomaly may be given as \begin{eqnarray} T^\mu_\mu=-\sum_i {\cal B}^i [{\cal O}_i], \label{eq:tracetmtensor} \end{eqnarray} where $[{\cal O}_i]$ are the renormalized operators corresponding to the bare operators ${\cal O}_i$, and ${\cal B}^i$ are the beta functions of the renormalized coupling constants $\lambda^i$. In cases where there are many flavors of matter fields, there are ambiguities in the definition of the usual beta functions $\beta^i$, while the beta functions ${\cal B}^i$ appearing in Eq.~(\ref{eq:tracetmtensor}) are unambiguous~\cite{Jack:1990eb,Osborn:1991gm}. Following the notation of Refs.~\cite{Jack:1990eb,Osborn:1991gm}, we denote these unambiguous beta functions as ${\cal B}^i$ rather than $\beta^i$. The improvement of the energy-momentum tensor is related to the term $R \phi^2$ in the Lagrangian, where $R$ is the Ricci scalar and $\phi$ are scalar fields of the theory. For a moment, let us assume that this term is chosen so that Eq.~(\ref{eq:tracetmtensor}) holds. We will revisit this point at the end of this section. The higher order terms in $\tau$ in Eq.~(\ref{eq:mattertau}) are accompanied by additional powers of $\epsilon$ or the coupling constants~\cite{Luty:2012ww}. The reason is the following. In theories with only dimensionless parameters which are listed in Table~\ref{table:1}, the dilaton appears in the combination $\epsilon \tau$ in the bare Lagrangian after performing appropriate Weyl rescaling of matter fields. For example, in the case of a four dimensional $\phi^4$ theory, the bare Lagrangian of the theory is given as \footnote{Here we pretend as if the term $R \phi^2$ is chosen as the conformal coupling of a free scalar, $\frac{(d-2)}{8(d-1)}R \phi^2$. This is not correct at higher orders of perturbation theory~\cite{Collins:1976vm}, but the corrections occur at sufficiently high orders so that the following discussion is not violated.
} \begin{eqnarray} \sqrt{\hat{g}} \left( \frac{1}{2}\hat{g}^{\mu\nu}\partial_\mu \hat{\phi}\partial_\nu \hat{\phi}+\frac{d-2}{8(d-1)}R(\hat{g}) \hat{\phi}^2 +\frac{\lambda_0}{4!}\hat{\phi}^4 \right)= \left( \frac{1}{2}\eta^{\mu\nu} \partial_\mu \phi\partial_\nu \phi +e^{-2\epsilon \tau} \frac{\lambda_0}{4!}\phi^4 \right) \end{eqnarray} where $\hat{\phi}$ is the bare field, $\hat{g}_{\mu\nu}=e^{-2\tau}\eta_{\mu\nu}$, $\phi=e^{-(d-2)\tau/2}\hat{\phi}$, and total derivative terms have been dropped. Loop calculations give divergences which may cancel the factor $\epsilon$ in $\epsilon \tau$. However, whenever $\epsilon$ is cancelled, there is always an additional loop suppression factor $L$ (e.g., $L=\lambda/16\pi^2 $ in the $\phi^4$ theory). Therefore, $\tau$ appears only in the combination $\epsilon \tau$ or $L \tau$. Then, neglecting the higher order terms, the leading order term in the dilaton effective action is given by \begin{eqnarray} S_{\rm eff,0}[\hat{g}_{\mu\nu}=e^{-2\tau}\eta_{\mu\nu}]=-\frac{1}{2}\int d^d x d^d y \tau(x) \tau(y) \sum_{i,j}{\cal B}^i{\cal B}^j\vev{ [{\cal O}_i(x)] [{\cal O}_j(y)] }+\cdots \end{eqnarray} where dots denote higher order terms in $\epsilon$ or loop factors. At the leading order of perturbation theory, correlation functions of $[{\cal O}_i]$ are given as \begin{eqnarray} \vev{ [{\cal O}_i(x)] [{\cal O}_j(y)] }=\frac{\mu^{2(d-d_i)}c_i \delta_{ij}}{|x-y|^{2d_i}}, \label{eq:operatorcorrelation} \end{eqnarray} where $c_i$ are dimensionless constants (e.g., $c=\frac{1}{4!}(\Gamma(d/2-1)/4\pi^{d/2})^4$ for ${\cal O}=\frac{1}{4!} \phi^4$ ), $\mu$ is the unit of mass of dimensional regularization (or in other words the RG scale), and $d_i$ is the classical scaling dimension of ${\cal O}_i$. Although we are considering only marginal interactions, $d_i$ differs from $d$ by order $\epsilon$ (e.g., $d_i=2(d-2)=4-4\epsilon$ for ${\cal O}=\frac{1}{4!} \phi^4$).
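As a consistency check of the constant $c$ quoted above for ${\cal O}=\frac{1}{4!}\phi^4$ (an editorial sketch at zeroth order in the coupling, using the standard free propagator and dropping the $\mu^{2(d-d_i)}$ factor carried by the renormalized operators): with $\vev{\phi(x)\phi(y)}=\frac{\Gamma(d/2-1)}{4\pi^{d/2}}|x-y|^{-(d-2)}$, the $4!$ complete Wick contractions between the two operators give \begin{eqnarray} \vev{ {\cal O}(x)\, {\cal O}(y) } = \frac{4!}{(4!)^2} \left( \frac{\Gamma(d/2-1)}{4\pi^{d/2}} \right)^4 \frac{1}{|x-y|^{4(d-2)}} = \frac{1}{4!} \left( \frac{\Gamma(d/2-1)}{4\pi^{d/2}} \right)^4 \frac{1}{|x-y|^{2d_i}}, \nonumber \end{eqnarray} reproducing both the value of $c$ and the dimension $d_i=2(d-2)$ in Eq.~(\ref{eq:operatorcorrelation}).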
The operators $[{\cal O}_i]$ are assumed to be normalized so that $\vev{ [{\cal O}_i(x)] [{\cal O}_j(y)] } $ is proportional to $\delta_{ij}$ at the leading order. The constants $c_i$ are ensured to be positive by reflection positivity. Then the dilaton effective action becomes \begin{eqnarray} S_{\rm eff,0}[\hat{g}_{\mu\nu}=e^{-2\tau}\eta_{\mu\nu}] &=&-\frac{1}{2}\int d^d x d^d y \tau(x) \tau(y) \sum_{i}\frac{\mu^{2(d-d_i)}c_i {\cal B}_i^2}{|x-y|^{2d_i}}+\cdots \nonumber \\ &=&- \int \frac{d^d k}{(2\pi)^d} |\tilde{\tau}(k) |^2 \sum_{i}\frac{\pi^{d \over 2} 2^{d-2d_i} \Gamma(d/2-d_i)}{2\Gamma(d_i)}c_i {\cal B}_i^2\mu^{2(d-d_i)}k^{2d_i-d}+\cdots \nonumber \\ \label{eq:baredilatonaction} \end{eqnarray} where we have Fourier-transformed the dilaton as $\tilde{\tau}(k)=\int d^d x e^{-ikx}\tau(x)$. In odd dimensions, the above dilaton effective action is finite in the limit $\epsilon \to 0$. This is consistent with the fact that we need no counterterms in odd dimensions as discussed above. In even dimensions, there is a divergence coming from the factor $ \Gamma(d/2-d_i)$ and we have to renormalize it. The counterterm should be local and is given as \begin{eqnarray} S_{\rm c.t.}[\hat{g}_{\mu\nu}=e^{-2\tau}\eta_{\mu\nu}] &=& \int \frac{d^d k}{(2\pi)^d} |\tilde{\tau}(k) |^2 \left(a_0+ \sum_{i}\frac{\pi^{d \over 2} 2^{d-2d_i} \Gamma(d/2-d_i)}{2\Gamma(d_i)} c_i {\cal B}_i^2\right)\mu^{d-d_0}k^{d_0}+\cdots \nonumber \\ &=&\int d^d x \tau(x)(-\partial^2)^{d_0 \over 2}\tau(x) \left(a_0+ \sum_{i}\frac{\pi^{d \over 2} 2^{d-2d_i} \Gamma(d/2-d_i)}{2\Gamma(d_i)} c_i {\cal B}_i^2\right)\mu^{d-d_0}+\cdots \nonumber \\ \label{eq:dilatonconter} \end{eqnarray} where $a_0$ is a constant which is finite in the limit $\epsilon \to 0$. This counterterm makes the dilaton effective action finite.
Although it is not immediately evident whether the counterterm (\ref{eq:dilatonconter}) can be obtained from counterterms for the metric $S_{\rm c.t.}[\hat{g}_{\mu\nu}] $ by replacing the metric as $\hat{g}_{\mu\nu} \to e^{-2\tau}\eta_{\mu\nu}$, it is known to be possible~\cite{Elvang:2012st,Elvang:2012yc}. It may be instructive to see it explicitly in the simplest case where the space-time dimension is $d=2-2\epsilon$. There is only one candidate for the counterterm which is given by \begin{eqnarray} S_{\rm c.t.}[\hat{g}_{\mu\nu}] \propto \int d^{d} x \sqrt{\hat{g}}R(\hat{g}). \end{eqnarray} Then, the dilaton counterterm is obtained as \begin{eqnarray} S_{\rm c.t.}[\hat{g}_{\mu\nu}=e^{-2\tau}g_{\mu\nu}] & \propto & \int d^{d} x \sqrt{g} e^{2\epsilon\tau}\left[R(g)- 2\epsilon(1-2\epsilon) (\nabla \tau)^2 \right]\nonumber \\ &=& \int d^{d} x \sqrt{g}\left[R(g)+2\epsilon \left(\tau R(g) -(\nabla \tau)^2 \right) +O(\epsilon^2) \right]. \label{eq:twodimcounter} \end{eqnarray} Thus, by taking $g_{\mu\nu} \to \eta_{\mu\nu}$, the counterterm of the form $\frac{1}{\epsilon} \tau \partial^2 \tau$ (a single pole term in $\epsilon$) is obtained from $\frac{1}{\epsilon^2} \sqrt{\hat{g}}R(\hat{g})$ (a double pole term in $\epsilon$). One should also notice that the finite term in the dilaton counterterm $S_{\rm c.t.}[\hat{g}_{\mu\nu}=e^{-2\tau}\eta_{\mu\nu}] $ actually comes from the divergent term in the metric counterterm $S_{\rm c.t.}[\hat{g}_{\mu\nu}]$. In fact, this is how the Wess-Zumino action for the dilaton~\cite{Schwimmer:2010za,Komargodski:2011vj} arises in dimensional regularization. The integral of the Euler density $E_{d_0}$ is a topological quantity in $d_0$-dimensions. 
In the case of $d=d_0-2\epsilon$ dimensional space-time, this topological property is broken by $\epsilon$, and the change of the metric $\hat{g}_{\mu\nu} = e^{-2\tau}g_{\mu\nu}$ gives \begin{eqnarray} \int d^d x \sqrt{\hat{g}}E_{d_0}(\hat{g}) = \int d^d x \sqrt{g}E_{d_0}(g)+2\epsilon S_{\rm WZ}+O(\epsilon^2), \label{eq:dilatonWZ} \end{eqnarray} where \begin{eqnarray} S_{\rm WZ}=\int d^dx \sqrt{g} \left( \tau E_{d_0}(g)+\cdots \right) \end{eqnarray} is the Wess-Zumino action for the dilaton. See Eq.~(\ref{eq:twodimcounter}) for the case of $d_0=2$. In CFTs, we need a counterterm of the form $\int d^d x(a / \epsilon)E_{d_0}$ to make the energy-momentum tensor finite~\cite{Duff:1993wm}. This counterterm leads to the trace anomaly $T^\mu_\mu \sim a E_{d_0} $. One can see that the presence of this counterterm gives the finite Wess-Zumino action $a S_{\rm WZ}$ for the dilaton by using Eq.~(\ref{eq:dilatonWZ}). Let us return to the computation of the dilaton effective action. We have neglected higher order terms in $\epsilon \tau$ and loop factors. We continue to neglect the higher order corrections of the loop factors. However, for our purposes it is important to recover the higher order terms of $\epsilon \tau$. Actually, there are divergences when we compute the free energy by substituting $e^{-\tau}=2r^2/(x^2+r^2)$. It turns out that terms containing extra powers of $\tau=\log ((x^2+r^2)/2r^2)$, $\tau^k $~($k=0,1,2,\cdots$), give additional divergences $\frac{1}{\epsilon^k}$. Therefore it is necessary to retain higher order terms in $\epsilon \tau$. To recover the dependence on $\epsilon \tau$, we use the conformal invariance of the dilaton effective action. As discussed above, the dilaton effective action should be conformally invariant since the conformal transformations are just the subgroup of the diffeomorphism of the original effective action for the metric. 
We can make Eqs.~(\ref{eq:baredilatonaction}) and (\ref{eq:dilatonconter}) conformally invariant by replacing them as \begin{eqnarray} \int d^d x d^d y \tau(x) \frac{1}{|x-y|^{2d_i}} \tau(y)\to \int d^d x d^d y \left( \frac{e^{-(d-d_i)\tau(x)}}{d-d_i} \right) \frac{1}{|x-y|^{2d_i}} \left( \frac{e^{-(d-d_i)\tau(y)}}{d-d_i} \right), \label{eq:replaceaction} \end{eqnarray} and \begin{eqnarray} \int d^d x \tau(x) (-\partial^2)^{d_0 \over 2} \tau(x)\to \int d^d x \left( \frac{e^{-(\frac{d-d_0}{2})\tau(x)}}{(d-d_0)/2} \right) (-\partial^2)^{d_0 \over 2} \left( \frac{e^{-(\frac{d-d_0}{2})\tau(x)}}{(d-d_0)/2} \right). \label{eq:replacecounterterm} \end{eqnarray} By expanding in $\tau$, one can check that the linear terms in $\tau$ are absent due to the properties of dimensional regularization. The quadratic terms in $\tau$ just reproduce the original ones. Higher order terms in $\tau$ are accompanied by appropriate powers of $d-d_i \propto \epsilon$ as expected. The modified action in the right-hand side of Eq.~(\ref{eq:replaceaction}) is conformally invariant since the field $e^{-(d-d_i)\tau}$ has the scaling dimension $(d-d_i)$. By performing Fourier transformation to momentum space, the right-hand side of Eq.~(\ref{eq:replacecounterterm}) is just the same as that of Eq.~(\ref{eq:replaceaction}) by analytically continuing $d_i \to (d+d_0)/2$ (up to a field-independent factor). Therefore it is also conformally invariant. See Refs.~\cite{Elvang:2012st,Elvang:2012yc} for the construction of this counterterm from $S_{\rm c.t.}[\hat{g}_{\mu\nu}]$. By using the replacements (\ref{eq:replaceaction}) and (\ref{eq:replacecounterterm}) in Eqs.~(\ref{eq:baredilatonaction}) and (\ref{eq:dilatonconter}) respectively, we obtain the desired dilaton effective action with nonzero $\epsilon$. Before closing this section, let us discuss the term $R \phi^2$ in the matter action.
In general, the existence of this term gives an additional ambiguity in the definition of the free energy because we can introduce new parameters $\xi$ as $\xi R \phi^2$.\footnote{ In the case of six-dimensional $\phi^3$ theories, there can also exist terms linear in $\phi$, given by $ \eta R\nabla^2 \phi$ and $\zeta R^2 \phi$. Just for simplicity, we consider the case $d \leq 4$ in the following discussion.} This is related to the ambiguity of the energy-momentum tensor $T_{\mu\nu}$, since the addition of the term $\xi R \phi^2$ changes $T_{\mu\nu}=\frac{2}{\sqrt{g}}\frac{\delta S}{\delta g^{\mu\nu}}$ by an improvement term of the form $(\partial^\mu \partial^\nu-\eta^{\mu\nu}\partial^2)\xi \phi^2$. We assume the existence of a renormalization scheme for the energy-momentum tensor $T_{\mu\nu}$ such that \begin{enumerate} \item $T_{\mu\nu}$ is RG invariant, i.e., $\mu\frac{\partial}{\partial\mu} T_{\mu\nu}=0$. In other words, there is no operator mixing with $(\partial^\mu \partial^\nu-\eta^{\mu\nu}\partial^2)\phi^2$. \item The trace anomaly (with nontrivial metric) is given by \begin{eqnarray} T^{\mu}_\mu=-{\cal B}^i[{\cal O}_i]'+{\rm purely~metric~terms}, \label{eq:curvedtrace} \end{eqnarray} where $[{\cal O}_i]'$ are operators which coincide with $[{\cal O}_i]$ in the flat space limit. In particular, $T^\mu_\mu$ becomes a metric-dependent c-number when ${\cal B}_i=0$. \end{enumerate} (Actually, in $d \leq 4$ dimensions it is enough that Eq.~(\ref{eq:curvedtrace}) is satisfied in flat space, since the flat space trace anomaly combined with the Wess-Zumino consistency condition for Weyl transformations leads to Eq.~(\ref{eq:curvedtrace}) in a nontrivial metric background~\cite{Osborn:1991gm}.) We always couple the metric to the energy-momentum tensor satisfying the above assumptions. The existence of the energy-momentum tensor satisfying the above assumptions can be explicitly proved for some theories.
For example, a proof was given in Ref.~\cite{Collins:1976vm} for the case of $\phi^4$ theory in four dimensions. A large class of four-dimensional renormalizable supersymmetric theories also satisfies the assumptions. In this case there is a Ferrara-Zumino supercurrent multiplet~\cite{Ferrara:1974pz}, ${\cal J}^{\rm FZ}_{\alpha\dot{\alpha}}$ (provided the theory does not contain FI-terms~\cite{Komargodski:2009pc}; see also Refs.~\cite{Komargodski:2010rb,Dumitrescu:2011iu} for recent comprehensive discussions on supercurrents). The Ferrara-Zumino supercurrent multiplet can mix with other operators only as ${\cal J}^{\rm FZ}_{\alpha\dot{\alpha}} \to {\cal J}^{\rm FZ}_{\alpha\dot{\alpha}}+[D_\alpha , \bar{D}_{\dot{\alpha}}](\Phi+\Phi^\dagger)$, where $\Phi$ is a chiral superfield, i.e., $\bar{D}_{\dot{\alpha}}\Phi=0$, and $\Phi$ has mass dimension two. If there is no candidate for $\Phi$ which is invariant under any global or local symmetries, the multiplet ${\cal J}^{\rm FZ}_{\alpha\dot{\alpha}}$ cannot mix with any other operators. Then the energy-momentum tensor contained in ${\cal J}^{\rm FZ}_{\alpha\dot{\alpha}}$ satisfies the first assumption. It also satisfies the second assumption~\cite{Clark:1978jx,Piguet:1981mu,Piguet:1981mw}.\footnote{In higher orders of perturbation theory, there is a notorious problem called the anomaly puzzle~\cite{Grisaru:1978vx}. See Refs.~\cite{Grisaru:1985ik,Shifman:1986zi,ArkaniHamed:1997mj,Huang:2010tn,Yonekura:2010mc,Yonekura:2012uk} and references therein for discussions on this problem. } Thus a large class of supersymmetric theories has an energy-momentum tensor satisfying the above assumptions. The situation is similar in three dimensions as long as the Ferrara-Zumino multiplet exists. If we introduce additional parameters $\xi$, it is possible to construct an RG invariant energy-momentum tensor. Let ${\cal O}^{\phi^2}_a$ denote the set of operators $\phi^2$, where $a$ is the label specifying the operators.
If the energy-momentum tensor satisfies Eq.~(\ref{eq:curvedtrace}), the operator mixing is in general given by (see e.g., Ref.~\cite{Osborn:1991gm}) \begin{eqnarray} \mu \frac{\partial}{\partial \mu} \left( \begin{array}{c} ~T_{\mu\nu} \\ ~[{\cal O}_i] \\ ~[{\cal O}^{\phi^2}_a] \end{array} \right) = \left( \begin{array}{ccc} 0 & 0 & -{\cal B}^k \delta^b_k(\eta^{\mu\nu}\partial^2-\partial^\mu \partial^\nu)/(d-1) \\ 0 & -\partial_i {\cal B}^j & \delta^b_i \partial^2 \\ 0 & 0 & \gamma^b_a \end{array} \right) \left( \begin{array}{c} ~T_{\mu\nu} \\ ~[{\cal O}_j] \\ ~[{\cal O}^{\phi^2}_b]\end{array} \right), \end{eqnarray} where $\delta^a_i$ and $\gamma^b_a$ are some anomalous dimension matrices, and $\partial_i {\cal B}^j$ is the derivative of $ {\cal B}^j$ with respect to the coupling $\lambda^i$. We introduce new parameters $\xi^a_i$ which are defined to satisfy the RG equation \begin{eqnarray} \mu \frac{\partial}{\partial \mu}\xi^a_i + \gamma^a_b \xi^b_i+\partial_i {\cal B}^j \xi^a_j+\delta^a_i=0. \label{eq:additionalparameter} \end{eqnarray} Then, we define the new operators as \begin{eqnarray} [{\cal O}^{\rm (new)}_i] &=& [{\cal O}_i]+\xi^a_i \partial^2 [{\cal O}^{\phi^2}_a], \\ T^{\rm (new)}_{\mu\nu} &=& T_{\mu\nu}-\frac{1}{d-1}{\cal B}^i \xi^a_i(\eta^{\mu\nu}\partial^2-\partial^\mu \partial^\nu) [{\cal O}^{\phi^2}_a]. \label{eq:newtensor} \end{eqnarray} This new energy-momentum tensor is invariant under RG and has a trace $-{\cal B}^i [{\cal O}^{\rm (new)}_i] $. If we can find $\xi^a_i$ as a function of the coupling constants $\lambda^i$ with a well-defined perturbative expansion, the difference between $T_{\mu\nu}$ and $T^{\rm (new)}_{\mu\nu}$ is just a change of renormalization prescription and $T^{\rm (new)}_{\mu\nu}$ is the desired energy-momentum tensor. This was indeed shown to be possible in $\phi^4$ theory~\cite{Collins:1976vm}. 
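One can verify directly that $T^{\rm (new)}_{\mu\nu}$ is RG invariant (an editorial sketch; we abbreviate $P_{\mu\nu}=\eta_{\mu\nu}\partial^2-\partial_\mu \partial_\nu$ and use the total RG derivative, $\mu \frac{d}{d\mu}{\cal B}^i={\cal B}^j \partial_j {\cal B}^i$). The mixing equation above gives \begin{eqnarray} \mu \frac{d}{d\mu} T^{\rm (new)}_{\mu\nu} = -\frac{1}{d-1} P_{\mu\nu} \left[ {\cal B}^i \delta^a_i + ({\cal B}^j \partial_j {\cal B}^i)\, \xi^a_i + {\cal B}^i \left( \mu\frac{\partial \xi^a_i}{\partial \mu} \right) + {\cal B}^i \xi^b_i \gamma^a_b \right] [{\cal O}^{\phi^2}_a], \nonumber \end{eqnarray} and substituting Eq.~(\ref{eq:additionalparameter}) for $\mu \partial \xi^a_i/\partial \mu$ makes the bracket vanish: the $\delta^a_i$ and $\gamma^a_b$ terms cancel pairwise, and the two $\partial {\cal B}$ terms cancel after relabeling $i \leftrightarrow j$.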
Even if $\xi^a_i$ is not a function of $\lambda^i$, our computation of the free energy is valid as long as we can find a solution to Eq.~(\ref{eq:additionalparameter}) such that $\xi^a_i$ remain small along RG flows. One can check that $\delta^a_i$ vanish at the one loop level, and hence nonzero $\xi^a_i$ are generated only at higher orders of perturbation theory. \section{Perturbative free energy and $c$-theorems}\label{sec:3} \subsection{Free energy in $d$-dimensions} Let us summarize the result for the leading term of the dilaton effective action obtained in the previous section. The unrenormalized effective action is given by \begin{eqnarray} S_{\rm eff,0}[\hat{g}_{\mu\nu}=e^{-2\tau}\eta_{\mu\nu}] = -\frac{1}{2 }\sum_{i} c_i {\cal B}_i^2 I_{d_i} , \end{eqnarray} and the counterterm (for even $d_0$) is given by \begin{eqnarray} S_{\rm c.t.}[\hat{g}_{\mu\nu}=e^{-2\tau}\eta_{\mu\nu}]=\left(a_0+\sum_{i} \frac{\pi^{d \over 2} 2^{d-2d_i} \Gamma(d/2-d_i)}{2\Gamma(d_i)}c_i {\cal B}_i^2 \right) J, \end{eqnarray} where $I_{d_i}$ and $J$ are defined as \begin{eqnarray} I_{d_i} &=& \mu^{2(d-d_i)}\int d^d x d^d y \left( \frac{e^{-(d-d_i)\tau(x)}}{d-d_i} \right) \frac{1}{|x-y|^{2d_i}} \left( \frac{e^{-(d-d_i)\tau(y)}}{d-d_i} \right), \\ J &=&\mu^{(d-d_0)} \int d^d x \left( \frac{e^{-(\frac{d-d_0}{2})\tau(x)}}{(d-d_0)/2} \right) (-\partial^2)^{d_0 \over 2} \left( \frac{e^{-(\frac{d-d_0}{2})\tau(x)}}{(d-d_0)/2} \right). \end{eqnarray} In this section we evaluate the explicit values of $I_{d_i}$ and $J$ when we substitute the dilaton vacuum expectation value $e^{-\tau}=2r^2/(x^2+r^2)$. The computation of $I_{d_i}$ is the same as in Refs.~\cite{Cardy:1988cwa,Klebanov:2011gs}. First we rewrite $I_{d_i}$ as \begin{eqnarray} I_{d_i} &=&\frac{\mu^{2(d-d_i)}}{(d-d_i)^2}\int d^d x \sqrt{\hat{g}(x)} d^d y \sqrt{\hat{g}(y)} \frac{1}{s(x,y)^{2d_i}}, \\ s(x,y) &=& \frac{2r^2}{(x^2+r^2)^{1 \over 2}(y^2+r^2)^{1 \over 2}} |x-y|.
\end{eqnarray} where $\hat{g}_{\mu\nu}=e^{-2\tau} \eta_{\mu\nu}$ is the metric of the sphere, and $s(x,y)$ is the ``chordal distance'' between points $x$ and $y$ if the sphere is embedded in a flat Euclidean space. Then, by using the rotational invariance of the sphere, we may set $y=\infty$ to obtain \begin{eqnarray} I_{d_i} &=&\frac{\mu^{2(d-d_i)}{\rm Vol.}(S^d)}{(d-d_i)^2} \int d^d x \sqrt{\hat{g}(x)} \frac{1}{s(x,\infty)^{2d_i}} \nonumber \\ &=& \frac{\mu^{2(d-d_i)}}{(d-d_i)^2} \frac{2\pi^{\frac{d+1}{2}} r^{d} }{\Gamma(\frac{d+1}{2}) } \int d^dx \frac{(2r^2)^{d-2d_i}}{(x^2+r^2)^{(d-d_i)}} \nonumber \\ &=&(2\mu r)^{2(d-d_i)} \frac{ \pi^d \Gamma(\frac{d}{2})\Gamma(\frac{d}{2}-d_i) }{(d-d_i) \Gamma(d) \Gamma(d-d_i+1) } , \end{eqnarray} where ${\rm Vol.}(S^d)$ is the volume of the sphere with radius $r$, and in the process of the computation we have used some identities of the gamma function such as $\Gamma(\frac{d}{2})\Gamma(\frac{d+1}{2})=\pi^{1 \over 2}2^{1-d}\Gamma(d)$. It is also easy to compute $J$. By Fourier transforming to momentum space and looking at the expressions for $I_{d_i}$ and $J$ in momentum space, we find that $I_{d_i} $ and $J$ are related as \begin{eqnarray} J &=& \lim_{d_i \to {d+d_0 \over 2}} \frac{\Gamma(d_i)}{\pi^{d \over 2} 2^{d-2d_i} \Gamma(d/2-d_i)} I_{d_i} \nonumber \\ &=& (2\mu r)^{(d-d_0)} \frac{ 2^{d_0 } \pi^{d \over 2} \Gamma(\frac{d}{2}) \Gamma(\frac{d+d_0}{2}) }{(\frac{d-d_0}{2}) \Gamma(d) \Gamma(\frac{d-d_0}{2}+1) }. \end{eqnarray} By combining the above results, we finally get the free energy $F=-\log Z$ for odd dimensions or its derivative $d F/d \log r $ for even dimensions as follows.\\ {\bf Odd dimensions} \begin{eqnarray} F_{d_0={\rm odd}} &=& -\log Z_0-\frac{1}{2 }\sum_{i} c_i {\cal B}_i^2 (2\mu r)^{2(d-d_i)} \frac{ \pi^d \Gamma(\frac{d}{2})\Gamma(\frac{d}{2}-d_i) }{(d-d_i) \Gamma(d) \Gamma(d-d_i+1) } +\cdots\nonumber \\ &=& ({\rm const.})+(-1)^{d_0-1 \over 2} \frac{ 2 \pi^{d_0+1} }{ d_0 ! 
} \log(\mu r)\sum_{i} c_i {\cal B}_i^2 +\cdots. \end{eqnarray} {\bf Even dimensions} \begin{eqnarray} \frac{d F_{d_0={\rm even}}}{d \log r} &=&(2\mu r)^{(d-d_0)} \frac{ 2^{d_0 } \pi^{d \over 2} \Gamma(\frac{d}{2}) \Gamma(\frac{d+d_0}{2}) }{\Gamma(d) \Gamma(\frac{d-d_0}{2}+1) } \left(2a_0+\sum_{i} \frac{\pi^{d \over 2} 2^{d-2d_i} \Gamma(d/2-d_i)}{\Gamma(d_i)}c_i {\cal B}_i^2 \right) \nonumber \\ &&-\sum_{i} c_i {\cal B}_i^2 (2\mu r)^{2(d-d_i)} \frac{ \pi^d \Gamma(\frac{d}{2})\Gamma(\frac{d}{2}-d_i) }{\Gamma(d) \Gamma(d-d_i+1) } +\cdots \nonumber \\ &=&({\rm const.})+(-1)^{\frac{d_0}{2}+1} \frac{4 \pi^{d_0}}{d_0 !} \log(\mu r) \sum_{i} c_i {\cal B}_i^2 +\cdots \end{eqnarray} The dots denote higher order corrections in $\epsilon$ or loop factors. The constant terms in the above equations depend on $\log Z_0$ (in odd dimensions) or $a_0$ (in even dimensions) which we have not computed. However, from the above result we can obtain the flows of $F$ or $d F/d \log r$ as \begin{eqnarray} (-1)^{d_0+1 \over 2} \frac{d F_{d_0 = {\rm odd}}}{d \log r} &=& - \frac{ 2 \pi^{d_0+1} }{ d_0 ! } \sum_{i} c_i {\cal B}_i^2 +\cdots \label{eq:oddflow} \\ (-1)^{\frac{d_0}{2}} \frac{d^2 F_{d_0 = {\rm even}}}{d (\log r)^2} &=& - \frac{4 \pi^{d_0}}{d_0 !} \sum_{i} c_i {\cal B}_i^2 +\cdots . \label{eq:evenflow} \end{eqnarray} As is usual in perturbation theory, the higher order terms contain powers of the logarithm $\log (\mu r)$ and we may set the renormalization scale as $\mu \to r^{-1}$ to avoid large logarithmic corrections in perturbation theory. Then the coupling constants $\lambda (\mu)$ become functions of $r$ as $\lambda(\mu) \to \lambda(r^{-1})$. Eqs.~(\ref{eq:oddflow}) and (\ref{eq:evenflow}) are our main result. Similar results were obtained in Refs.~\cite{Cardy:1988cwa,Klebanov:2011gs} when a CFT is deformed by slightly marginal operators. Eqs.~(\ref{eq:oddflow}) and (\ref{eq:evenflow}) extend those results to theories with marginal interactions. 
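The $\epsilon \to 0$ limits above involve the duplication and reflection formulas for the gamma function. The following numerical check (an editorial sketch in stdlib Python, not part of the derivation; the function names are ours) verifies the duplication identity used in evaluating $I_{d_i}$, and that the $d_i \to d_0$ limit of $d F_{d_0 = {\rm odd}}/d \log r$ per unit $c_i {\cal B}_i^2$ (at $2\mu r = 1$) reproduces the coefficient $(-1)^{(d_0-1)/2}\, 2\pi^{d_0+1}/d_0!$:

```python
import math

def duplication_gap(d):
    """Gamma(d/2)*Gamma((d+1)/2) - pi^{1/2} 2^{1-d} Gamma(d); zero if the identity holds."""
    return math.gamma(d/2)*math.gamma((d + 1)/2) - math.sqrt(math.pi)*2**(1 - d)*math.gamma(d)

def odd_slope(d0, eps=1e-7):
    """dF/dlog r per unit c_i B_i^2 at 2*mu*r = 1, for d = d0 and d_i = d0 - eps."""
    d, di = d0, d0 - eps
    return -math.pi**d * math.gamma(d/2) * math.gamma(d/2 - di) \
           / (math.gamma(d) * math.gamma(d - di + 1))

def odd_target(d0):
    """Coefficient of log(mu r): (-1)^{(d0-1)/2} * 2 pi^{d0+1} / d0!"""
    return (-1)**((d0 - 1)//2) * 2*math.pi**(d0 + 1)/math.factorial(d0)

for d in (2.3, 3.0, 4.7):
    assert abs(duplication_gap(d)) < 1e-10
for d0 in (3, 5):
    assert abs(odd_slope(d0) - odd_target(d0)) < 1e-3
```

For $d_0=3$ the limit is $-\pi^4/3$ and for $d_0=5$ it is $+\pi^6/60$, consistent with the alternating sign in Eq.~(\ref{eq:oddflow}).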
In particular, these equations show that $(-1)^{d_0+1 \over 2}F_{d_0 = {\rm odd}}$ and $(-1)^{\frac{d_0}{2}} d F_{d_0 = {\rm even}}/d \log r $ decrease monotonically as we increase the radius of the sphere $r$ because the coefficients $c_i$ are positive. \subsection{c-theorems in various dimensions} \paragraph{Two dimensions} In two dimensions, the trace anomaly in CFT is given as~\footnote{ We do not introduce an additional factor of $2\pi$ in the definition of the energy-momentum tensor which often appears in the literature of two dimensional CFT. } \begin{eqnarray} T^\mu_\mu=\frac{c}{24 \pi} R, \label{eq:twodimcentral} \end{eqnarray} where $c$ is the central charge. The derivative of the free energy with respect to the radius of the sphere, $r$, is given by the one-point function of $T^\mu_\mu$ on the sphere $S^2$, and hence \begin{eqnarray} \frac{d F_{d_0 = 2}}{d \log r } =-\vev{\int d^2 x \sqrt{\hat{g}} T^\mu_\mu }_{S^2}=-\frac{c}{3}. \label{eq:relationfandc} \end{eqnarray} This relation is valid for CFT. Zamolodchikov's $c$-function~\cite{Zamolodchikov:1986gt} is defined as \begin{eqnarray} c(|x|)=(2\pi)^2 \left[ 2 z^4 \vev{T_{zz}(x)T_{zz}(0) }-4z^2 x^2 \vev{T_{z\bar{z}}(x)T_{zz}(0) }-6 x^4 \vev{T_{z\bar{z}}(x)T_{z\bar{z}}(0) } \right], \end{eqnarray} where $z=x^1+ix^2$. The function $c(|x|)$ coincides with the central charge $c$ in Eq.~(\ref{eq:twodimcentral}) at conformal fixed points. The flow of this function is given by \begin{eqnarray} \frac{d c(|x|)}{d \log |x|} &=&-6\pi^2 x^4\vev{T^\mu_\mu(x) T^\nu_\nu(0)} \nonumber \\ &=&-6\pi^2\sum_{i,j}{\cal B}_i {\cal B}_j x^4\vev{{\cal O}_i(x) {\cal O}_j(0)}. \label{eq:cfunctionflow} \end{eqnarray} At the leading order, the operator correlation functions are given in Eq.~(\ref{eq:operatorcorrelation}). Therefore, by comparing Eqs.~(\ref{eq:evenflow}) and (\ref{eq:cfunctionflow}), we find \begin{eqnarray} \frac{d^2 F_{d_0 = 2}(r) }{d (\log r)^2 } =-\frac{1}{3} \left.
\frac{d c(|x|)}{d \log |x|} \right|_{|x|=r}. \end{eqnarray} This is consistent with Eq.~(\ref{eq:relationfandc}). Although the two functions $d F_{d_0 = 2} / d \log r $ and $-c(r)/3$ need not precisely be the same away from conformal fixed points, the above result shows that they indeed coincide at the order of perturbation theory we are considering. In particular, our formula (\ref{eq:evenflow}) correctly reproduces the difference of UV and IR central charges $c_{\rm UV}-c_{\rm IR}$ if the UV and IR theories are conformal. \paragraph{Four dimensions} The case of four dimensions is similar to that of two dimensions. The trace anomaly in CFT is given as \begin{eqnarray} T^\mu_\mu=\frac{1}{16\pi^2} \left( -aE_4+cW_{\mu\nu\rho\sigma}W^{\mu\nu\rho\sigma} \right), \end{eqnarray} where $W_{\mu\nu\rho\sigma}$ is the Weyl tensor and $E_4=R^{\mu\nu\rho\sigma}R_{\mu\nu\rho\sigma}-4R_{\mu\nu}R^{\mu\nu}+R^2$ is the Euler density. Putting the theory on the sphere $S^4$, we obtain \begin{eqnarray} \frac{d F_{d_0 = 4}}{d \log r } =-\vev{\int d^4 x \sqrt{\hat{g}} T^\mu_\mu }_{S^4}=4a. \label{eq:relationfanda} \end{eqnarray} The change of $a$ as we vary some length scale $r$ is given by~\cite{Komargodski:2011xv,Luty:2012ww}\footnote{ More precisely, $a$ is defined by the dilaton forward scattering amplitude, and $r$ should be the inverse of the center-of-mass energy in that scattering process. See Ref.~\cite{Luty:2012ww} for details. } \begin{eqnarray} \frac{d a(r)}{d \log r}=-\frac{\pi^4}{24}\sum_i c_i {\cal B}_i^2. \label{eq:afunctionflow} \end{eqnarray} Therefore, by comparing Eqs.~(\ref{eq:evenflow}) and (\ref{eq:afunctionflow}) we get \begin{eqnarray} \frac{d^2 F_{d_0 = 4}(r) }{d (\log r)^2 } =4 \frac{d a(r)}{d \log r} . \end{eqnarray} This relation is consistent with Eq.~(\ref{eq:relationfanda}).
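The rational prefactors entering these two consistency checks can be verified mechanically. A minimal sketch (plain Python; the helper name is ours, not from any cited software) that checks the coefficient of $\pi^{d_0}\sum_i c_i {\cal B}_i^2$ read off from Eq.~(\ref{eq:evenflow}) against $-\frac{1}{3}\,dc/d\log|x|$ in two dimensions and $4\,da/d\log r$ in four dimensions:

```python
from fractions import Fraction
from math import factorial

def evenflow_coeff(d0):
    # Coefficient of pi^{d0} * sum_i c_i B_i^2 in d^2 F / d(log r)^2,
    # read off from Eq. (evenflow):
    # (-1)^{d0/2} d^2 F / d(log r)^2 = -(4 / d0!) pi^{d0} sum_i c_i B_i^2
    return (-1) ** (d0 // 2) * Fraction(-4, factorial(d0))

# d0 = 2: compare with -(1/3) dc/dlog r, where dc/dlog r = -6 pi^2 sum c B^2
assert evenflow_coeff(2) == Fraction(-1, 3) * (-6) == 2

# d0 = 4: compare with 4 da/dlog r, where da/dlog r = -(pi^4/24) sum c B^2
assert evenflow_coeff(4) == 4 * Fraction(-1, 24) == Fraction(-1, 6)
```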
\paragraph{Three dimensions} In three-dimensional ${\cal N}=2$ supersymmetric theories there are exact results for the partition functions~\cite{Kapustin:2009kz,Jafferis:2010un,Hama:2010av}.\footnote{See also Refs.~\cite{Klebanov:2011gs,Klebanov:2011td} for calculations of $F$ in non-supersymmetric theories.} We check our result for a simple case by comparing it to the exact result. Let us consider an ${\cal N}=2$ supersymmetric Chern-Simons matter theory with gauge group $\mathop{\rm U}(N_c)$. We introduce chiral superfields $(Q,\tilde{Q})$ which are in a representation of the gauge group, $R \oplus \bar{R}$, where we take $R$ as $N_f$ copies of some irreducible representation $r$, i.e., $R=N_f \times r$. The Chern-Simons level is denoted as $k$, and we take $k \gg N_c,~N_f$ so that perturbation theory is applicable. We take the superpotential as \begin{eqnarray} W= \frac{\lambda}{2} (\tilde{Q} T_a Q)(\tilde{Q} T_a Q) \end{eqnarray} where $T_a$ are generators of the gauge group $\mathop{\rm U}(N_c)$ normalized as $\mathop{\rm tr}_{\rm fund} (T_a T_b)=\delta_{ab}$ for a fundamental representation. Without loss of generality we take $\lambda$ to be real and positive. See Ref.~\cite{Gaiotto:2007qi} for details of this theory. The theory has ${\cal N}=2$ supersymmetry for a general value of the Yukawa coupling constant $\lambda$, and the supersymmetry is enhanced to ${\cal N}=3$ when $\lambda=4\pi/k$.
The RG equation for $\lambda$ is given by~\cite{Gaiotto:2007qi}~\footnote{Note that our normalization of $T_a$ is different from Ref.~\cite{Gaiotto:2007qi}, and we have also corrected some errors in the beta function of Ref.~\cite{Gaiotto:2007qi}.} \begin{eqnarray} {\cal B}_\lambda=\frac{d \lambda}{d \log \mu}=\frac{b_0}{16\pi^2} \lambda \left( \lambda^2-\left(\frac{4\pi}{k} \right)^2 \right) \end{eqnarray} where $b_0=\frac{2}{\dim R} (\mathop{\rm tr}_R (T_a T_b)\mathop{\rm tr}_R (T_a T_b)+\mathop{\rm tr}_R(T_a T_b T_a T_b))$ and $\dim R$ is the dimension of the representation $R$. Therefore, this model connects two different superconformal fixed points; $\lambda=0$ in the UV and $\lambda=4\pi/k$ in the IR. Let us apply our formula~(\ref{eq:oddflow}) to this model. First, we define the operator ${\cal O}_\lambda$ as \begin{eqnarray} {\cal O}_\lambda= \frac{1}{2} \left. (\tilde{Q} T_a Q)(\tilde{Q} T_a Q) \right|_{\theta^2}+{\rm h.c.}, \end{eqnarray} where $|_{\theta^2}$ means that we take the $\theta^2$ component of a chiral field. The Lagrangian of this theory contains the interaction term $\lambda {\cal O}_\lambda$. By computing the correlation function $\vev{{\cal O}_\lambda(x){\cal O}_\lambda(0)}$ at the leading order, we find that the constant $c_\lambda$ defined as $\vev{{\cal O}_\lambda(x){\cal O}_\lambda(0)}=c_\lambda /x^{6}$ is given by \begin{eqnarray} c_\lambda&=&\frac{6b_0 \dim R}{(4\pi)^4}, \end{eqnarray} where we have neglected $O(\epsilon)$ corrections. Then, the difference of the UV and IR free energies, $F_{{\rm UV}}=F_{d_0=3}(r \to 0)$ and $F_{{\rm IR}}=F_{d_0=3}(r \to \infty)$, is given as \begin{eqnarray} F_{{\rm IR}}- F_{{\rm UV}} &=&-\frac{\pi^4}{3} \int^\infty_0 \frac{d r}{r} c_\lambda {\cal B}_\lambda^2 \nonumber \\ &=&\frac{\pi^4}{3} \int^{4\pi \over k}_0 d \lambda c_\lambda {\cal B}_\lambda \nonumber \\ &=&-\frac{\pi^2 b_0^2 \dim R}{2^5 k^4}. 
\end{eqnarray} Now let us restrict our attention to the case that the gauge group is $\mathop{\rm U}(1)$ (i.e., $N_c=1$) and there are $N_f$ pairs of chiral fields $(Q, \tilde{Q})$ with charge $\pm 1$. In this case, the gauge group generator is $T=1_{N_f \times N_f}$, and we have $b_0=2(N_f+1)$ and $\dim R=N_f$. Therefore, we obtain \begin{eqnarray} F_{{\rm IR}}- F_{{\rm UV}} =-\frac{\pi^2 N_f(N_f+1)^2 }{8 k^4}.\label{eq:perturbativechernsimons} \end{eqnarray} On the other hand, the exact partition function for the model is explicitly obtained in Ref.~\cite{Jafferis:2010un}. Denoting the superconformal $R$-charge of $(Q,\tilde{Q})$ as $\Delta=\frac{1}{2}-a$, the real part of the free energy~\footnote{ The imaginary part is just an artifact of imaginary supergravity background; see Refs.~\cite{Festuccia:2011ws,Closset:2012vg,Closset:2012vp} for details.} is given as \begin{eqnarray} {\rm Re}F(a)=\log (2^{N_f} \sqrt{k}) -\frac{\pi^2 N_f }{2} a^2+\frac{\pi^2 N_f(N_f+1)}{16 k^2} (1+8a) + O(k^{-5}), \end{eqnarray} where we have assumed $a =O(k^{-2})$, which will be justified below. In the UV CFT ($\lambda=0$), the value of $a$ is determined by the solution of $d {\rm Re}F(a)/ d a=0$~\cite{Jafferis:2010un,Closset:2012vg} and is given by $a=\frac{N_f+1}{2k^2}$. In the IR CFT ($\lambda=4\pi/k$), we should have $a=0$ for the superpotential to be invariant under the $R$-symmetry. Then, we obtain \begin{eqnarray} {\rm Re}F_{\rm IR}-{\rm Re}F_{\rm UV} &=& {\rm Re}F(a=0)-{\rm Re}F(a=\frac{N_f+1}{2k^2}) \nonumber \\ &=&-\frac{\pi^2 N_f(N_f+1)^2}{8k^4}. \end{eqnarray} This result completely agrees with Eq.~(\ref{eq:perturbativechernsimons}). \paragraph{Six dimensions} In six dimensions, scalar field theories with $\phi^3$ interactions can be treated perturbatively. The Lagrangian is given by \begin{eqnarray} {\cal L}=\frac{1}{2} \sum_a \partial_\mu \phi_a \partial^\mu \phi_a+\frac{1}{6} \sum_{a,b,c} \lambda_{a,b,c}\phi_a \phi_b \phi_c. 
\end{eqnarray} Our result shows that $-d F_{d_0=6} / d \log r$ decreases monotonically as we increase $r$. Notice that the conformal coupling $\frac{d-2}{8(d-1)} R \phi^2$ makes the vacuum perturbatively stable when the theory is put on the sphere, and hence we need not worry about infrared divergences. This model is asymptotically free at the one-loop level~\cite{Macfarlane:1974vp}, but, unfortunately, no Banks-Zaks type IR fixed point is known. However, it is at least encouraging for the six-dimensional $a$-theorem~\cite{Elvang:2012st} (see also Ref.~\cite{Elvang:2012yc}) that $d F_{d_0=6} / d \log r$ decreases monotonically as a function of $r$. \subsection{Scale versus conformal invariance} Now we discuss the equivalence between scale and conformal invariance in the class of theories studied in this paper (see Table~\ref{table:1}). More generally, we study the possible IR (or UV) asymptotics of perturbative quantum field theories. Our discussion follows the one in Ref.~\cite{Luty:2012ww} which studied the same problem in four dimensions. First let us briefly review the mechanism by which a theory could have scale invariance without conformal invariance~\cite{Fortin:2011ks,Fortin:2011sz,Fortin:2012ic,Fortin:2012cq}. If the trace of the energy-momentum tensor is given by a total derivative, $T^\mu_\mu=\partial_\mu j^\mu_V$, where $j_V^\mu$ is some vector field called the virial current, the theory is scale invariant because we can define a conserved current of scaling transformation as $T^{\mu\nu}x_\nu-j_V^\mu$. In theories studied in this paper, this requirement is given as \begin{eqnarray} T^\mu_\mu=-\sum_i {\cal B}^i [{\cal O}_i]=\partial_\mu j_V^\mu. \label{eq:scaleinvtrace} \end{eqnarray} If $\partial_\mu j_V^\mu \neq 0$, the beta functions ${\cal B}^i$ are nonzero and the theory is not conformally invariant.
If we see the coupling constants of the theory as spurions, there is a symmetry associated with the current $j_V$ under which the coupling constants transform nontrivially. When Eq.~(\ref{eq:scaleinvtrace}) holds, the RG flow is generated by that symmetry acting on the coupling constants. Our purpose is to show that such RG flows are impossible and $T^\mu_\mu$ actually vanishes when the theory has scale invariance. The free energy $F$ in general depends on the parameters in the finite part of the counterterms $S_{\rm c.t.}[\hat{g}_{\mu\nu}]$. These new parameters are absent in the original flat space theories. However, as we discussed in section~\ref{sec:2}, there is a way to define the free energy in which $F$ in odd dimensions and $d F/ d\log r$ in even dimensions do not contain any such new parameters. In that definition, they only depend on the coupling constants $\lambda_i(\mu)$, the renormalization scale $\mu$, and the radius of the sphere $r$ as \begin{eqnarray} C &=& C \left(\lambda(\mu), \log(r/\mu) \right) =C(\lambda(r^{-1})) \label{eq:functionalform} \end{eqnarray} where we define $C$ as \begin{eqnarray} C= \left\{ \begin{array}{ll} (-1)^{d_0+1 \over 2} \displaystyle{\frac{ d_0 ! }{ 2 \pi^{d_0+1} }} F_{d_0 = {\rm odd}} & (d_0={\rm odd}) \\ \\ (-1)^{\frac{d_0}{2}} \displaystyle{ \frac{d_0 !}{4 \pi^{d_0}}} \displaystyle{ \frac{d F_{d_0 = {\rm even}}}{d \log r}} & ( d_0={\rm even}) \end{array} \right. . \label{eq:defineCfunction} \end{eqnarray} In the last equality in Eq.~(\ref{eq:functionalform}) we have used RG invariance and set $\mu=r^{-1}$. Therefore, the function $C$ defined as in Eq.~(\ref{eq:defineCfunction}) is only a function of $\lambda_i(r^{-1})$. From Eqs.~(\ref{eq:oddflow}) and (\ref{eq:evenflow}), we see that $C$ satisfies \begin{eqnarray} {d C \over d \log r}=-\sum_i c_i {\cal B}_i^2 \label{eq:Cfloweq} \end{eqnarray} with $c_i$ all positive. Suppose that the theory is weakly coupled in the IR limit $r \to \infty$. 
(The discussion is completely parallel in the UV.) Then, we can trust perturbation theory in the IR, and $C(\lambda(r^{-1}))$ remains finite in the IR since all the couplings $\lambda_i(r^{-1})$ are small. Hence, from Eq.~(\ref{eq:Cfloweq}), it is necessary that $\int ^\infty d\log r \sum_i c_i {\cal B}_i^2$ is finite, and hence $c_i {\cal B}_i^2$ should vanish faster than $1/\log r$ in the limit $r \to \infty$ for $C$ to be finite in the IR. Since $\vev{T^\mu_\mu(x) T^\nu_\nu(0)} =\sum_i c_i {\cal B}_i^2 /x^{2d} $, the trace of the energy-momentum tensor should vanish in the IR limit and hence the theory is conformal. We conclude that the IR limit of the class of theories studied in this paper is either conformal or strongly coupled so that perturbation theory breaks down. Although we have neglected higher order corrections, they do not change the conclusion. As long as the energy-momentum tensor satisfies Eq.~(\ref{eq:curvedtrace}), couplings between the dilaton and dynamical fields in Eq.~(\ref{eq:mattertau}) are always proportional to the beta functions ${\cal B}_i$. Then, higher order corrections to Eq.~(\ref{eq:Cfloweq}) are of the form $ \sum_{i,j} \delta c_{ij} {\cal B}_i {\cal B}_j$, where $\delta c_{ij}$ are suppressed by loop factors compared with $c_i$. These corrections are always smaller than the leading contribution and the above discussion is valid even if we include them. See Ref.~\cite{Luty:2012ww} for detailed discussions. Furthermore, the ambiguity of the energy-momentum tensor related to improvement discussed in section~\ref{sec:2} does not invalidate the above argument if there exists a solution to Eq.~(\ref{eq:additionalparameter}) in which $\xi^a_i$ remains small in the IR.
In particular, in a scale invariant theory in which the energy-momentum tensor and the virial current $j^\mu_V$ in Eq.~(\ref{eq:scaleinvtrace}) are eigenstates of dilatation with eigenvalues $d$ and $d-1$ respectively, there is no operator mixing and the above argument of the equivalence between scale and conformal invariance is valid. \section{Conclusions}\label{sec:4} In this paper we have studied the free energy $F=-\log Z$ on a $d$-dimensional sphere with radius $r$ for theories which have marginal interactions. Such theories are listed in Table~\ref{table:1}. If we couple the metric of the sphere to the energy-momentum tensor $T_{\mu\nu}$ satisfying $T^\mu_\mu=-\sum_i {\cal B}^i [{\cal O}_i]$, the free energy $F$ satisfies \begin{eqnarray} (-1)^{d_0+1 \over 2} \frac{d F_{d_0 = {\rm odd}}}{d \log r} &=& - \frac{ 2 \pi^{d_0+1} }{ d_0 ! } \sum_{i} c_i {\cal B}_i^2 +\cdots \label{eq:concloddflow} \\ (-1)^{\frac{d_0}{2}} \frac{d^2 F_{d_0 = {\rm even}}}{d (\log r)^2} &=& - \frac{4 \pi^{d_0}}{d_0 !} \sum_{i} c_i {\cal B}_i^2 +\cdots , \label{eq:conclevenflow} \end{eqnarray} where $d_0$ is the space-time dimension, ${\cal B}_i$ are beta functions of coupling constants, $c_i$ are positive constants defined as $\vev{[{\cal O}_i(x)][{\cal O}_j(0)]}=c_i \delta_{ij} / x^{2d}+\cdots$, and dots indicate sub-leading terms which are always smaller than the leading term. In particular, $(-1)^{\frac{d_0+1}{2}} F$ (in odd dimensions) or $(-1)^{\frac{d_0}{2}} d F / d \log r$ (in even dimensions) decreases monotonically in perturbation theory. This result extends the perturbative $c$ ($a$) theorem in two (four) dimensions to other dimensions. Using this result, we have extended the work of Refs.~\cite{Luty:2012ww,Fortin:2012hn} to other dimensions and argued that scale invariance is equivalent to conformal invariance in perturbation theory.
\section*{Acknowledgements} The author would like to thank N.~Kim, I.~Klebanov, J.~Maldacena, B.~Safdi, E.~Witten and especially T.~Nishioka and M.~Yamazaki for useful discussions. The work of KY is supported by NSF grant PHY-0969448.
\section{Introduction} The Hosoya polynomial of a graph was introduced in Hosoya's seminal paper~\cite{hosoya-1988} back in 1988 and received a lot of attention afterwards. The polynomial was later independently introduced and considered by Sagan, Yeh, and Zhang~\cite{sagan-1996} under the name {\em Wiener polynomial of a graph}. Both names are still used for the polynomial but the term Hosoya polynomial is nowadays used by the majority of researchers. The main advantage of the Hosoya polynomial is that it contains a wealth of information about distance-based graph invariants. For instance, knowing the Hosoya polynomial of a graph, it is straightforward to determine the famous Wiener index of a graph as the first derivative of the polynomial at $t=1$. Cash~\cite{cash-2002} noticed that the hyper-Wiener index can be obtained from the Hosoya polynomial in a similarly simple manner. Among others, the Hosoya polynomial has been by now investigated on (in the historical order) trees~\cite{caporossi-1999,stevanovic-1999,gutman-2005}, composite graphs~\cite{stevanovic-2001,doslic-2008,eliasi-2013}, benzenoid graphs~\cite{gutman-2001,xu-2008b}, tori~\cite{diudea-2002}, zig-zag open-ended nanotubes~\cite{xuzhdi-2007}, certain graph decorations~\cite{yan-2007}, armchair open-ended nanotubes~\cite{xuzh-2007}, zigzag polyhex nanotorus~\cite{eliasi-2008}, $TUC_4C_8(S)$ nanotubes~\cite{xu-2009}, pentachains~\cite{ali-2011}, polyphenyl chains~\cite{li-2012}, as well as on Fibonacci and Lucas cubes~\cite{klavzar-2012} and Hanoi graphs~\cite{kishori-2012}. For relations to other graph polynomials see~\cite{gutman-2006,behmaram-2011}. In this paper we consider the Hosoya polynomial on graphs that contain cut-vertices. Such graphs can be decomposed into subgraphs that we call {\em primary subgraphs}. Blocks of graphs are particular examples of primary subgraphs, but a primary subgraph may consist of several blocks.
In our main result, the Hosoya polynomial of a graph is expressed in terms of the Hosoya polynomials of the corresponding primary subgraphs. A related result for the Wiener index of a graph (in terms of the block-cut-vertex tree of the graph) was obtained in~\cite{balakrishnan-2008}. Our main result can thus be considered as an extension (and a simplification) of~\cite[Theorem 1]{balakrishnan-2008}. In the case when a graph is decomposed into two primary subgraphs, our result is a special case of~\cite[Theorem 2.1]{xu-2008a} where a formula is given for the Hosoya polynomial of the gated amalgamation of two graphs, which is in turn a generalization of the corresponding result on the Wiener index~\cite{klavzar-2005}. On the other hand, \cite[Corollary 2.1]{xu-2008a} is a special case of our main result. We point out that our formulae require the knowledge of the Hosoya polynomials of the primary subgraphs, the so-called partial Hosoya polynomials, and specific distances. In many cases these are known or easy to find; especially in the case of bouquets, chains, and links when---to make things easier---the blocks are very often identical graphs. Very often authors go through several pages of computations to find only the Wiener index of a family of graphs; one of the points of the present paper is to show that with much less effort one can find the Hosoya polynomial. We proceed as follows. In the rest of this section the Hosoya polynomial and other concepts needed are formally introduced, while in the next section the main result is stated and proved. In Section~\ref{sec:constructions} the result is applied to bouquets of graphs, circuits of graphs, chains of graphs, and links of graphs. These results are then applied in the final section to several families of graphs that appear in chemistry. The Wiener index and the hyper-Wiener index of these families are obtained as a side product.
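Throughout the paper, such formulae can be cross-checked by brute force on small graphs. A minimal sketch (plain Python; the helper names are ours and not part of any cited software) that computes the coefficients $d(G,k)$ of the Hosoya polynomial by breadth-first search and recovers the Wiener and hyper-Wiener indices from them:

```python
from collections import deque

def hosoya(adj):
    """Coefficients d(G,k), k >= 1, of the Hosoya polynomial of a connected
    graph given as an adjacency dictionary, computed by BFS from every vertex.
    Each unordered pair {u, v} is counted exactly once."""
    coeffs = {}
    for u in adj:
        dist, queue = {u: 0}, deque([u])
        while queue:
            v = queue.popleft()
            for w in adj[v]:
                if w not in dist:
                    dist[w] = dist[v] + 1
                    queue.append(w)
        for v, d in dist.items():
            if v > u:  # vertices assumed pairwise comparable
                coeffs[d] = coeffs.get(d, 0) + 1
    return coeffs

def wiener(coeffs):
    # W(G) = H'(G, 1)
    return sum(k * c for k, c in coeffs.items())

def hyper_wiener(coeffs):
    # WW(G) = (1/2) sum over pairs of (d + d^2); d + d^2 is even, so // 2 is exact
    return sum(c * (k + k * k) for k, c in coeffs.items()) // 2

# Path 1 - 2 - 3: H(G,t) = 2t + t^2, W = 4, WW = 5
p3 = {1: [2], 2: [1, 3], 3: [2]}
assert hosoya(p3) == {1: 2, 2: 1}
assert wiener(hosoya(p3)) == 4
assert hyper_wiener(hosoya(p3)) == 5
```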
Let $G$ be a connected graph and let $d(G,k)$, $k\ge 0$, be the number of vertex pairs at distance $k$. Then the {\em Hosoya polynomial}~\cite{hosoya-1988} of $G$ is defined as $$H(G,t) = \sum_{k \geq 1} d(G,k)\,t^k\,.$$ Before we continue we point out that some authors define the Hosoya polynomial by also adding the constant term $d(G,0) = |V(G)|$ to the above expression. For our purposes the present definition is more convenient. Clearly, no matter which definition is selected, the considerations are equivalent. We will write $d_G(u,v)$ for the usual shortest-path distance between $u$ and $v$ in $G$. When only one graph is in question, we will shorten the notation to $d(u,v)$. Let $H_1$ and $H_2$ be subgraphs of a connected graph $G$. Then the distance $d_G(H_1,H_2)$ between $H_1$ and $H_2$ is $\min \{d(u,v)\ |\ u\in V(H_1), v\in V(H_2)\}$. The {\em diameter} of $G$ is defined as ${\rm diam}(G) = \max_{u,v\in V(G)} d(u,v)$. For a finite set $A$ and a nonnegative integer $k$ let ${A\choose k}$ denote the set of all $k$-subsets of $A$. Note that $\left|{A\choose k}\right| = {|A|\choose k}$. With these notations in hand $H(G,t)$ can be more specifically written as $$H(G,t) = \sum_{k=1}^{{\rm diam}(G)} d(G,k)\,t^k = \sum_{\{u,v\}\in {V(G)\choose 2}} t^{d(u,v)}\,.$$ Recall that the {\em Wiener index} $W(G)$ of $G$ is defined by $$W(G) = \sum_{\{u,v\}\in {V(G)\choose 2}} d(u,v)\,,$$ and that the {\em hyper-Wiener index} $WW(G)$ is $$WW(G) = \frac{1}{2}\sum_{\{u,v\}\in {V(G)\choose 2}} \left( d(u,v) + d(u,v)^2\right)\,.$$ The relations between the Hosoya polynomial and these two indices are then $$W(G) = \frac{d H(G,t)}{dt}\big\vert_{t=1} \quad {\rm and} \quad WW(G) = \frac{d H(G,t)}{dt}\big\vert_{t=1} + \frac{1}{2}\cdot \frac{d^2 H(G,t)}{dt^2}\big\vert_{t=1}\,.$$ Finally, for a positive integer $n$ we will use the notation $[n] = \{1,2,\ldots, n\}$. \section{Main result} Let $G$ be a connected graph and let $u\in V(G)$.
Then the {\em partial Hosoya polynomial with respect to $u$} is $$H_u(G,t) = \sum_{v\in V(G) \atop v\ne u} t^{d(u,v)}\,.$$ This concept was used by Do\v sli\' c in~\cite{doslic-2008} under the name {\em partial Wiener polynomial}. Let $G$ be a connected graph constructed from pairwise disjoint connected graphs $G_1,\ldots, G_k$ as follows. Select a vertex of $G_1$, a vertex of $G_2$, and identify these two vertices. Then continue in this manner inductively. More precisely, suppose that we have already used $G_1, \ldots, G_i$ in the construction, where $2\le i\le k-1$. Then select a vertex in the already constructed graph (which may in particular be one of the already selected vertices) and a vertex of $G_{i+1}$; we identify these two vertices. Note that the graph $G$ constructed in this way has a tree-like structure, the $G_i$'s being its building stones (see Fig.~\ref{fig:construction}). We will briefly say that $G$ is obtained by {\em point-attaching} from $G_1,\ldots, G_k$ and that $G_i$'s are the {\em primary subgraphs} of $G$. A particular case of this construction is the decomposition of a connected graph into blocks. \begin{figure}[ht!] \begin{center} \begin{tikzpicture}[scale=0.6,style=thick] \def\vr{4pt} \draw (0,0) ellipse (2 and 1); \draw (0,2.5) ellipse (1 and 1.5); \draw[rotate around={50:(2.8,1.8)}] (2.8,1.8) ellipse (1.7 and 0.7); \draw (-4,0) ellipse (2 and 1); \draw[rotate around={50:(-7.1,-1.4)}] (-7.1,-1.4) ellipse (1.7 and 0.5); \draw[rotate around={-50:(-7.1,1.4)}] (-7.1,1.4) ellipse (1.7 and 0.5); \draw (-4,-3.5) ellipse (1 and 2.5); \draw (0,-3) ellipse (3 and 1); \draw[rotate around={-30:(3.5,-1)}] (3.5,-1) ellipse (1.7 and 0.7); \draw (0,-6) ellipse (1.5 and 2); \draw[dashed] (-2,0) .. controls (-2.5,0) and (-3.5,0) .. (-4,-1); \draw[dashed] (-4,-1) .. controls (-4,-1.5) and (-4,-2.5) .. (-3,-3); \draw[dashed] (-3,-3) .. controls (-2,-3) and (-1,-3.5) ..
(0,-4); \draw (0,1) [fill=white] circle (\vr); \draw (1.66,0.56) [fill=white] circle (\vr); \draw (-2,0) [fill=white] circle (\vr); \draw (-6,0) [fill=white] circle (\vr); \draw (-4,-1) [fill=white] circle (\vr); \draw (-3,-3) [fill=white] circle (\vr); \draw (1.95,-0.15) [fill=white] circle (\vr); \draw (0,-4) [fill=white] circle (\vr); \draw (0.1,-0.5) [fill=white] circle (\vr); \draw (-0.5,-6.7) [fill=white] circle (\vr); \draw (0.8,0.5) node {$G_i$}; \draw (0.6,-0.5) node {$u$}; \draw (1,-5.7) node {$G_j$}; \draw (0,-6.7) node {$v$}; \draw (-1,0) node {$x_{i\rightarrow j}$}; \draw (0,-4.7) node {$x_{j\rightarrow i}$}; \end{tikzpicture} \end{center} \caption{Graph $G$ obtained by point-attaching from $G_1,\ldots, G_k$} \label{fig:construction} \end{figure} Let $G$ be a graph obtained by point-attaching from $G_1,\ldots, G_k$. Then let $\delta_{ij} = d_G(G_i,G_j)$. This distance is realized by precisely one vertex from $G_i$ and one vertex from $G_j$; denote them by $x_{i\rightarrow j}$ and $x_{j\rightarrow i}$, respectively; see Fig.~\ref{fig:construction} where the distance between $G_i$ and $G_j$ is indicated with a dashed line. Note that if $G_i$ and $G_j$ share a vertex $x$, then $x = x_{i\rightarrow j} = x_{j\rightarrow i}$ and $\delta_{ij} = 0$. Now everything is ready for our main result. \begin{theorem} \label{thm:main} Let $G$ be a connected graph obtained by point-attaching from $G_1,\ldots, G_k$, and let $x_{i\rightarrow j}$ and $\delta_{ij}$ be as above. Then \begin{equation} \label{eq:main} H(G,t) = \sum_{i=1}^k H(G_i,t) + \sum_{\{i,j\}\in {[k]\choose 2}} \left( H_{x_{i\rightarrow j}}(G_i,t)\cdot H_{x_{j\rightarrow i}}(G_j,t)\cdot t^{\delta_{ij}}\right)\,. \end{equation} \end{theorem} \noindent{\bf Proof.\ } Let $u\ne v$ be arbitrary vertices of $G$. We need to show that their contribution to the claimed expression is $t^{d(u,v)}$. Suppose first that $u$ and $v$ belong to the same primary subgraph, say $u,v\in G_i$.
Then $d_G(u,v) = d_{G_i}(u,v)$ and hence $t^{d(u,v)}$ is included in the corresponding term of the first sum of the theorem. Assume next that $u$ and $v$ do not belong to the same primary subgraph. If $u$ or $v$ is an attaching vertex, then it belongs to more than one primary subgraph. Hence select primary subgraphs $G_i$ and $G_j$ with $u\in G_i$ and $v\in G_j$ such that $\delta_{ij} = d_G(G_i,G_j)$. By our assumption $i\ne j$ and hence $$d_G(u,v) = d_{G_i}(u, x_{i\rightarrow j}) + \delta_{ij} + d_{G_j}(x_{j\rightarrow i},v)\,,$$ cf. Fig.~\ref{fig:construction} again. It is possible that $\delta_{ij}=0$, that is, $x_{i\rightarrow j} = x_{j\rightarrow i}$, but in any case $t^{d(u,v)}$ is a term in the product $H_{x_{i\rightarrow j}}(G_i,t)\cdot t^{\delta_{ij}}\cdot H_{x_{j\rightarrow i}}(G_j,t)$. We have thus proved that for any distinct vertices $u$ and $v$, the term $t^{d(u,v)}$ is included in the claimed expression. To complete the argument we need to show that no such term is included more than once. To verify this it suffices to prove that the total number of pairs of vertices considered in~\eqref{eq:main} is equal to the total number of pairs of vertices. Set $n_i = |V(G_i)| - 1$, $1\le i\le k$, and note that then $|V(G)| = 1 + \sum_{i=1}^k n_i$. Then the first term of~\eqref{eq:main} involves \begin{equation*} A = \sum_{i=1}^k {n_i + 1\choose 2} \end{equation*} pairs of vertices, while the second sum involves \begin{equation*} B = \sum_{\{i,j\}\in {[k]\choose 2}} n_in_j \end{equation*} pairs of vertices of $G$. Then \begin{eqnarray*} 2(A + B) & = & \sum_{i=1}^k n_i^2 + \sum_{i=1}^k n_i + \sum_{\{i,j\}\in {[k]\choose 2}} 2n_in_j \\ & = & \left( \sum_{i=1}^k n_i\right)\cdot \left( 1 + \sum_{i=1}^k n_i\right) \\ & = & \left( |V(G)| -1\right)\cdot |V(G)|\,. \end{eqnarray*} We conclude that $A + B = {|V(G)|\choose 2}$, that is, the number of pairs of vertices involved in~\eqref{eq:main} is equal to the number of all pairs. 
\hfill $\square$ \bigskip As an example consider the graph \(Q(m,n)\) constructed in the following manner: denoting by \(K_q\) the complete graph with \(q\) vertices, consider the graph \(K_m\) and \(m\) copies of \(K_n\). By definition, the graph \(Q(m,n)\) is obtained by identifying each vertex of \(K_m\) with a vertex of a unique \(K_n\). The graph \(Q(6,4)\) is shown in Fig.~\ref{fig:q64}. \begin{figure}[ht!] \begin{center} \begin{tikzpicture}[scale=0.6,style=thick] \def\vr{4pt} \path (3,2) coordinate (Q1); \path (6,2) coordinate (Q2); \path (7,4) coordinate (Q3); \path (6,6) coordinate (Q4); \path (3,6) coordinate (Q5); \path (2,4) coordinate (Q6); \path (3,2) coordinate (K11); \path (2,1) coordinate (K12); \path (3,0) coordinate (K13); \path (4,1) coordinate (K14); \path (6,2) coordinate (K21); \path (5,1) coordinate (K22); \path (6,0) coordinate (K23); \path (7,1) coordinate (K24); \path (7,4) coordinate (K31); \path (8,3) coordinate (K32); \path (9,4) coordinate (K33); \path (8,5) coordinate (K34); \path (6,6) coordinate (K41); \path (7,7) coordinate (K42); \path (6,8) coordinate (K43); \path (5,7) coordinate (K44); \path (3,6) coordinate (K51); \path (4,7) coordinate (K52); \path (3,8) coordinate (K53); \path (2,7) coordinate (K54); \path (2,4) coordinate (K61); \path (1,3) coordinate (K62); \path (0,4) coordinate (K63); \path (1,5) coordinate (K64); \draw (Q1) --(Q2) -- (Q3) --(Q4) --(Q5) --(Q6) --(Q1); \draw (Q1) --(Q3) -- (Q5) --(Q1) --(Q4) --(Q6) --(Q2) -- (Q4); \draw (Q3) --(Q6); \draw (Q2) -- (Q5); \draw (K11) --(K12) -- (K13) -- (K14) --(K11) --(K13) -- (K14) -- (K12); \draw (K21) --(K22) -- (K23) -- (K24) --(K21) --(K23) -- (K24) -- (K22); \draw (K31) --(K32) -- (K33) -- (K34) --(K31) --(K33) -- (K34) -- (K32); \draw (K41) --(K42) -- (K43) -- (K44) --(K41) --(K43) -- (K44) -- (K42); \draw (K51) --(K52) -- (K53) -- (K54) --(K51) --(K53) -- (K54) -- (K52); \draw (K61) --(K62) -- (K63) -- (K64) --(K61) --(K63) -- (K64) -- (K62); \foreach \i in 
{1,...,6} { \draw (Q\i) [fill=white] circle (\vr); } \foreach \i in {1,...,4} { \draw (K1\i) [fill=white] circle (\vr); } \foreach \i in {1,...,4} { \draw (K2\i) [fill=white] circle (\vr); } \foreach \i in {1,...,4} { \draw (K3\i) [fill=white] circle (\vr); } \foreach \i in {1,...,4} { \draw (K4\i) [fill=white] circle (\vr); } \foreach \i in {1,...,4} { \draw (K5\i) [fill=white] circle (\vr); } \foreach \i in {1,...,4} { \draw (K6\i) [fill=white] circle (\vr); } \end{tikzpicture} \end{center} \caption{$Q(6,4)$} \label{fig:q64} \end{figure} Clearly, the Hosoya polynomial of \(K_q\) is \(\frac{1}{2}q(q-1)t\) and the partial Hosoya polynomial with respect to any of its vertices is \((q-1)t\). The distance between the central \(K_m\) and a \(K_n\) is 0, while the distance between any two distinct \(K_n\)'s is 1. Now, Theorem~\ref{thm:main} gives, after elementary calculations, \begin{eqnarray*} H(Q(m,n),t) & = & \frac{1}{2}m(m+n^2-n-1)t \\ & & + m(m-1)(n-1)t^2 + \frac{1}{2} m(m-1)(n-1)^2 t^3\,. \end{eqnarray*} For the Wiener index and the hyper-Wiener index we obtain \begin{eqnarray*} W(Q(m,n)) & = & \frac{1}{2}mn\left(3mn-2m-2n+1\right)\,, \\ WW(Q(m,n)) & = & \frac{1}{2}m\left(6mn^2-6mn-5n^2+m+5n-1\right)\,. \end{eqnarray*} Notice that the Wiener index \(W(Q(m,n))\) is symmetric in \(m\) and \(n\). \section{Specific constructions} \label{sec:constructions} In this section we present several constructions of graphs to which our main result can be applied. These constructions will in turn be used in the next section where chemical applications will be given. \subsection{Bouquet of graphs} Let \(G_1, G_2,\ldots, G_k\) be a finite sequence of pairwise disjoint connected graphs and let \(x_i \in V(G_i)\). By definition, the bouquet \(G\) of the graphs \(\{G_i\}_{i=1}^k\) with respect to the vertices \(\{x_i\}_{i=1}^k\) is obtained by identifying the vertices \(x_1, x_2, \ldots, x_k\) (see Fig.~\ref{fig:bouquet} for \(k=3\)). \begin{figure}[ht!]
\begin{center} \begin{tikzpicture}[scale=0.6,style=thick] \def\vr{4pt} \draw (0,0) .. controls (-1.5,0) and (-3.2,-0.3) .. (-4.5,2.5); \draw (-4.5,2.5) .. controls (-4.6,2.7) and (-4.3,2.9) .. (-4,3); \draw (0,0) .. controls (-1,2) and (-2,3) .. (-4,3); \draw (0,0) .. controls (-0.4,1) and (-1.1,2) .. (-1,3.5); \draw (-1,3.5) .. controls (-0.5,4.7) and (0.5,4.7) .. (1,3.5); \draw (0,0) .. controls (0.4,1) and (1.1,2) .. (1,3.5); \draw (0,0) .. controls (1.5,0) and (3.2,-0.3) .. (4.5,2.5); \draw (4.5,2.5) .. controls (4.6,2.7) and (4.3,2.9) .. (4,3); \draw (0,0) .. controls (1,2) and (2,3) .. (4,3); \draw (0,0) [fill=white] circle (\vr); \draw (0.0,-0.5) node {$x_1=x_2=x_3=x$}; \draw (-2.5,1.5) node {$G_1$}; \draw (0,2.5) node {$G_2$}; \draw (2.5,1.5) node {$G_3$}; \end{tikzpicture} \end{center} \caption{A bouquet of three graphs} \label{fig:bouquet} \end{figure} Clearly, we have a graph obtained by point-attaching from \(G_1, G_2,\ldots, G_k\) and formula~\eqref{eq:main} holds with \(\delta_{ij}=0\) and \(x_{i \to j}=x_{j \to i}=x\), where \(x\) is the vertex obtained from the identification of the \(x_i\)'s. Formula~\eqref{eq:main} becomes \begin{equation} \label{eq:bouquet1} H(G,t) = \sum_{i=1}^k H(G_i,t) + \sum_{\{i,j\}\in {[k]\choose 2}} H_x(G_i,t)\cdot H_x(G_j,t)\,. \end{equation} Consider the following special case of identical \(G_i\)'s. Let \(X\) be a connected graph and let \(x \in V(X)\). Take \(G_i = X\) and \(x_i = x\) for \(i\in [k]\). Formula~\eqref{eq:bouquet1} becomes \begin{equation} \label{eq:bouquet2} H(G,t) = kH(X,t) + \frac{1}{2}k(k-1)H_x^2(X,t)\,. \end{equation} \subsection{Circuit of graphs} Let \(G_1, G_2,\ldots, G_k\) be a finite sequence of pairwise disjoint connected graphs and let \(x_i \in V(G_i)\). 
By definition, the circuit \(G\) of the graphs \(\{G_i\}_{i=1}^k \) with respect to the vertices \(\{x_i\}_{i=1}^k\) is obtained by identifying the vertex \(x_i\) of the graph \(G_i\) with the \(i\)-th vertex of the cycle graph \(C_k\) (see Fig.~\ref{fig:circuit} for \(k=4\)). \begin{figure}[ht!] \begin{center} \begin{tikzpicture}[scale=0.4,style=thick] \def\vr{4pt} \draw (0,4) .. controls (-0.4,5) and (-1.1,7) .. (-1,7.5); \draw (-1,7.5) .. controls (-0.5,8.7) and (0.5,8.7) .. (1,7.5); \draw (0,4) .. controls (0.4,5) and (1.1,7) .. (1,7.5); \draw (0,0) .. controls (-0.4,-1) and (-1.1,-3) .. (-1,-3.5); \draw (-1,-3.5) .. controls (-0.5,-5.7) and (0.5,-5.7) .. (1,-3.5); \draw (0,0) .. controls (0.4,-1) and (1.1,-3) .. (1,-3.5); \draw (-2,2) .. controls (-3,1.6) and (-5,0.9) .. (-5.5,1); \draw (-5.5,1) .. controls (-6.8,1.5) and (-6.8,2.5) .. (-5.5,3); \draw (-2,2) .. controls (-3,2.4) and (-5,3.1) .. (-5.5,3); \draw (2,2) .. controls (3,1.6) and (5,0.9) .. (5.5,1); \draw (5.5,1) .. controls (6.8,1.5) and (6.8,2.5) .. (5.5,3); \draw (2,2) .. controls (3,2.4) and (5,3.1) .. (5.5,3); \draw (0,0) -- (2,2) -- (0,4) -- (-2,2) -- (0,0); \draw (0,0) [fill=white] circle (\vr); \draw (2,2) [fill=white] circle (\vr); \draw (-2,2) [fill=white] circle (\vr); \draw (0,4) [fill=white] circle (\vr); \draw (0,6.5) node {$G_1$}; \draw (0,-2.5) node {$G_3$}; \draw (-5,2) node {$G_4$}; \draw (5,2) node {$G_2$}; \draw (1,4) node {$x_1$}; \draw (2,1.2) node {$x_2$}; \draw (-1,0) node {$x_3$}; \draw (-2,2.8) node {$x_4$}; \end{tikzpicture} \end{center} \caption{A circuit of four graphs} \label{fig:circuit} \end{figure} The Hosoya polynomial of \(G\) is given by \begin{equation} \label{eq:circuit1} H(G,t) = \sum_{i=1}^k H(G_i,t) + \sum_{\{i,j\}\in {[k]\choose 2}} t^{\min(j-i,k-j+i)}\left(1+ H_{x_i}(G_i,t)\right) \left(1+H_{x_j}(G_j,t)\right)\,.
\end{equation} This can be derived from Theorem~\ref{thm:main} by viewing \(G\) as a graph obtained by point-attaching from the \(k+1\) graphs \(G_1,G_2,\ldots,G_k\), and \(C_k\). However, we prefer to give a direct proof. Let \(u \ne v\) be arbitrary vertices in \(G\). Suppose first that \(u\) and \(v\) belong to the same graph \(G_i\). In this case, \(t^{d(u,v)}\) is included in the corresponding term of the first sum in~\eqref{eq:circuit1}. Assume now that \(u \in G_i\) and \(v \in G_j\), \(i < j\). Then \(d(u,v) = d(u, x_i) + d(x_i,x_j) + d(x_j,v)\), where $d(x_i,x_j)=\min(j-i,k-j+i)$. The first and the last term of this sum may be equal to 0. It follows that \(t^{d(u,v)}\) is a term in the product under the second sum in~\eqref{eq:circuit1}. To complete the argument we need to show that no such term is included more than once. To verify this it suffices to prove that the total number of pairs of vertices considered in~\eqref{eq:circuit1} is equal to the total number of pairs of vertices. Setting \(n_i = |V(G_i)|\), the number of pairs of vertices involved in the right-hand side of~\eqref{eq:circuit1} is \(\frac{1}{2}\sum_{i=1}^k n_i(n_i-1)+ \sum_{\{i,j\}\in {[k]\choose 2}} n_i n_j = \frac{1}{2}\left(\sum_{i=1}^k n_i\right) \left(\sum_{i=1}^k n_i -1 \right) \), i.e. the number of all unordered pairs of distinct vertices in \(G\). Consider the following special case of identical \(G_i\)'s. Let \(X\) be a connected graph and let \(x \in V(X)\). Take \(G_i = X\), \(x_i = x\) for \(i\in [k]\). Then formula~\eqref{eq:circuit1} becomes \begin{equation} \label{eq:circuit2} H(G,t)=k H(X,t)+(1+H_x(X,t))^2H(C_k,t)\,. \end{equation} \subsection{Chain of graphs} Let \(G_1, G_2,\ldots, G_k\) be a finite sequence of pairwise disjoint connected graphs and let \(x_i, y_i \in V(G_i)\).
By definition (see~\cite{mansour-2009,mansour-2010}) the chain \(G\) of the graphs \(\{G_i\}_{i=1}^k \) with respect to the vertices \(\{x_i,y_i\}_{i=1}^k\) is obtained by identifying the vertex \(y_i\) with the vertex \(x_{i+1}\) for \(i\in [k-1]\) (see Fig.~\ref{fig:chain} for \(k=4\)). \begin{figure}[ht!] \begin{center} \begin{tikzpicture}[scale=0.7,style=thick] \def\vr{4pt} \draw (1.5,0) ellipse (1.5 and 0.5); \draw (4.5,0) ellipse (1.5 and 0.5); \draw (7.5,0) ellipse (1.5 and 0.5); \draw (10.5,0) ellipse (1.5 and 0.5); \draw (0,0) [fill=white] circle (\vr); \draw (3,0) [fill=white] circle (\vr); \draw (6,0) [fill=white] circle (\vr); \draw (9,0) [fill=white] circle (\vr); \draw (12,0) [fill=white] circle (\vr); \draw (0,-0.7) node {$x_1$}; \draw (3.0,-0.7) node {$y_1=x_2$}; \draw (6.0,-0.7) node {$y_2=x_3$}; \draw (9.0,-0.7) node {$y_3=x_4$}; \draw (12.0,-0.7) node {$y_4$}; \end{tikzpicture} \end{center} \caption{A chain of graphs} \label{fig:chain} \end{figure} Denoting \(d_l = d(x_l, y_l)\), we define \begin{equation} \label{eq:chain} s_{i,j}=\begin{cases} d_{i+1}+d_{i+2}+\cdots+d_{j-1} & \text{if \(j-i \geq 2\)\,,} \\ 0 & \text{otherwise}\,. \end{cases} \end{equation} Clearly, we have a graph obtained by point-attaching from \(G_1,\ldots,G_k\) and formula~\eqref{eq:main} holds with \(x_{i \to j} = y_i, x_{j \to i} = x_j\), and \(\delta_{ij}=s_{ij}\). Consequently, we have \begin{equation} \label{eq:chain1} H(G,t)= \sum_{i=1}^k H(G_i,t) + \sum_{\{i,j\}\in {[k]\choose 2}} H_{y_i}(G_i,t)H_{x_j}(G_j,t)t^{s_{i,j}}. \end{equation} Consider the following special case of identical \(G_i\)'s. Let \(X\) be a connected graph and let \(x,y \in V(X)\). Take \(G_i = X\), \(x_i = x\), \(y_i = y\) for \(i\in [k]\). Then, denoting \(d = d(x,y)\), we have \(s_{i,j} = (j-i-1)d\) and formula~\eqref{eq:chain1} becomes \begin{equation} \label{eq:chain2} H(G,t) = kH(X,t) + H_x(X,t)H_y(X,t)\frac{t^{kd}-kt^d+k-1}{(t^d-1)^2}\,. 
\end{equation} We add that in~\cite{mansour-2010} long expressions with long proofs are given for the Wiener index (pp.~86--89) and for the hyper-Wiener index (pp.~93--94) for the chain of graphs. \subsection{Link of graphs} Let \(G_1, G_2, \ldots, G_k\) be a finite sequence of pairwise disjoint connected graphs and let \(x_i, y_i \in V(G_i)\). By definition (see~\cite{ghorbani-2010}), the link \(G\) of the graphs \(\{G_i\}_{i=1}^k \) with respect to the vertices \(\{x_i,y_i\}_{i=1}^k\) is obtained by joining by an edge the vertex \(y_i\) of \(G_i\) with the vertex \(x_{i+1}\) of \(G_{i+1}\) for all \(i=1,2,\ldots,k-1\) (see Fig.~\ref{fig:link} for \(k=4\)). \begin{figure}[ht!] \begin{center} \begin{tikzpicture}[scale=0.7,style=thick] \def\vr{4pt} \draw (1.5,0) ellipse (1.5 and 0.5); \draw (5.5,0) ellipse (1.5 and 0.5); \draw (9.5,0) ellipse (1.5 and 0.5); \draw (13.5,0) ellipse (1.5 and 0.5); \draw (3,0) --(4,0); \draw (7,0) --(8,0); \draw (11,0) --(12,0); \draw (0,0) [fill=white] circle (\vr); \draw (3,0) [fill=white] circle (\vr); \draw (4,0) [fill=white] circle (\vr); \draw (7,0) [fill=white] circle (\vr); \draw (8,0) [fill=white] circle (\vr); \draw (11,0) [fill=white] circle (\vr); \draw (12,0) [fill=white] circle (\vr); \draw (15,0) [fill=white] circle (\vr); \draw (0,-0.6) node {$x_1$}; \draw (3.0,-0.6) node {$y_1$}; \draw (4.0,-0.6) node {$x_2$}; \draw (7.0,-0.6) node {$y_2$}; \draw (8.0,-0.6) node {$x_3$}; \draw (11.0,-0.6) node {$y_3$}; \draw (12.0,-0.6) node {$x_4$}; \draw (15.0,-0.6) node {$y_4$}; \end{tikzpicture} \end{center} \caption{A link of graphs} \label{fig:link} \end{figure} \noindent The Hosoya polynomial of \(G\) is given by \begin{equation} \label{eq:link1} H(G,t) = \sum_{i=1}^k H(G_i,t) + \sum_{\{i,j\}\in {[k]\choose 2}} \left(1+ H_{y_i}(G_i,t)\right) \left(1+H_{x_j}(G_j,t)\right)t^{j-i+s_{i,j}}\,, \end{equation} where \(s_{ij}\) is defined in~\eqref{eq:chain}, with \(d_l=d(x_l,y_l)\). 
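Closed forms such as~\eqref{eq:chain2} follow from summing the arithmetico-geometric series \(\sum_{m=1}^{k-1}(k-m)t^{(m-1)d}\). This summation step can be spot-checked with exact rational arithmetic; the following short Python sketch is purely illustrative and not part of the derivations above.

```python
from fractions import Fraction

def chain_sum(t, k, d):
    """Sum over pairs i < j of t^{s_{i,j}} with s_{i,j} = (j-i-1)*d,
    i.e. sum_{m=1}^{k-1} (k - m) * t^{(m-1) d}."""
    return sum((k - m) * t ** ((m - 1) * d) for m in range(1, k))

def chain_closed(t, k, d):
    """Closed form (t^{kd} - k t^d + k - 1) / (t^d - 1)^2 from eq. (chain2)."""
    return (t ** (k * d) - k * t ** d + k - 1) / (t ** d - 1) ** 2

# exact agreement on a grid of rational t and small k, d
for t in (Fraction(1, 2), Fraction(2, 3), Fraction(5)):
    for k in range(2, 7):
        for d in range(1, 4):
            assert chain_sum(t, k, d) == chain_closed(t, k, d)
```

Using `Fraction` avoids floating-point round-off, so the check confirms the identity exactly at the sampled points.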
Formula~\eqref{eq:link1} can be derived from Theorem~\ref{thm:main} by viewing \(G\) as a chain of \(2k-1\) graphs: the \(k\) \(G_i\)'s alternating with the \(k-1\) \(K_2\)'s (edges). This derivation is rather cumbersome and, consequently, we prefer to give a direct proof. Let \(u \ne v\) be arbitrary vertices in \(G\). Suppose first that \(u\) and \(v\) belong to the same graph \(G_i\). In this case, \(t^{d(u,v)}\) is included in the corresponding term of the first sum in~\eqref{eq:link1}. Assume now that \(u \in G_i\) and \(v \in G_j\), \(i < j\). We break up \(d(u,v)\) into three parts: \(d(u, y_i), d(y_i,x_j)=j-i+s_{i,j}\), and \(d(x_j,v)\). The first and the last part may be equal to 0. It follows that \(t^{d(u,v)}\) is a term in the product under the second sum in~\eqref{eq:link1}. Using the same reasoning as for the circuit of graphs we then infer that the number of pairs of vertices involved in the right-hand side in~\eqref{eq:link1} is equal to the number of all unordered pairs of distinct vertices in \(G\). Consider the following special case of identical \(G_i\)'s. Let \(X\) be a connected graph and let \(x,y \in V(X)\). Take \(G_i = X\), \(x_i = x\), \(y_i = y\) for all \(i\in [k]\). Then, denoting \(d = d(x,y)\), we have \(d_1 = d_2 = \cdots =d_k=d\) and formula~\eqref{eq:link1} becomes \begin{equation} \label{eq:link2} H(G,t) = kH(X,t) + \left(1+H_x(X,t)\right) \left(1+H_y(X,t)\right)\frac{t^{kd+k+1}-kt^{d+2}+kt-t}{(t^{d+1}-1)^2}\,. \end{equation} \section{Chemical applications} In this section we apply our previous results in order to obtain the Hosoya polynomial of families of graphs that are of importance in chemistry. As already pointed out, numerous distance-based invariants such as the Wiener and the hyper-Wiener index can then be routinely derived. \subsection{Spiro-chains} Spiro-chains are defined in~\cite[p.114]{diudea-2001}. Making use of the concept of chain of graphs, a spiro-chain can be defined as a chain of cycles.
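Such chains of cycles are easy to exercise computationally. The following Python sketch (an illustration added here, not part of the original derivations) builds the chain of \(k\) triangles with adjacent contact vertices, computes its Hosoya polynomial from breadth-first-search distances, and checks it against the expansion of~\eqref{eq:chain2} with \(X=C_3\) and \(d=1\), namely \(H=3kt+\sum_{m=1}^{k-1}4(k-m)t^{m+1}\).

```python
from collections import Counter, deque

def hosoya(adj):
    """Hosoya polynomial of a connected graph, as a dict {distance: #pairs},
    computed from a BFS out of every vertex."""
    poly = Counter()
    for s in adj:
        dist = {s: 0}
        queue = deque([s])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        for v, d in dist.items():
            if v > s:              # count each unordered pair once
                poly[d] += 1
    return dict(poly)

def triangle_chain(k):
    """Chain of k triangles: triangle i sits on vertices 2i, 2i+1, 2i+2,
    so consecutive contact vertices are adjacent (d = 1)."""
    adj = {i: set() for i in range(2 * k + 1)}
    for i in range(k):
        a, b, c = 2 * i, 2 * i + 1, 2 * i + 2
        for u, v in ((a, b), (b, c), (a, c)):
            adj[u].add(v)
            adj[v].add(u)
    return adj

# expansion of the chain formula for X = C_3, d = 1
k = 5
expected = Counter({1: 3 * k})
for m in range(1, k):
    expected[m + 1] += 4 * (k - m)
assert hosoya(triangle_chain(k)) == dict(expected)
```

The same brute-force `hosoya` routine can be pointed at any of the constructions in this section to validate a closed formula on small instances.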
We denote by \(S_{q,h,k}\) the chain of \(k\) cycles \(C_q\) in which the distance between two consecutive contact vertices is \(h\) (see \(S_{6,2,5}\) in Fig.~\ref{fig:spiro}). \begin{figure}[ht!] \begin{center} \begin{tikzpicture}[scale=0.25,style=thick] \def\vr{4pt} \draw (0,0) +(60:3) \foreach \a in {60,120,180,240,300,360} { -- +(\a:3) } -- cycle; \draw (3,5.2) +(60:3) \foreach \a in {60,120,180,240,300,360} { -- +(\a:3) } -- cycle; \draw (9.0,5.2) +(60:3) \foreach \a in {60,120,180,240,300,360} { -- +(\a:3) } -- cycle; \draw (12.0,0) +(60:3) \foreach \a in {60,120,180,240,300,360} { -- +(\a:3) } -- cycle; \draw (18.0,0) +(60:3) \foreach \a in {60,120,180,240,300,360} { -- +(\a:3) } -- cycle; \end{tikzpicture} \end{center} \caption{Spiro-chain $S_{6,2,5}$} \label{fig:spiro} \end{figure} The Hosoya polynomial of \(S_{q,h,k}\) can be easily obtained from~\eqref{eq:chain2}. We distinguish two cases: \(q\) odd and \(q\) even. Assume \(q\) is odd: \(q = 2r+1\) (\(r \geq 1\)). In~\eqref{eq:chain2} we take \(X=C_q\) and \(d=h\). We have \(H(C_q,t) = (2r+1)\sum_{j=1}^r t^j\) ~\cite{sagan-1996} and \(H_x(C_q,t) = 2\sum_{j=1}^r t^j\) for any vertex \(x\) of \(C_q\). Now,~\eqref{eq:chain2} yields \begin{equation*} H(S_{2r+1,h,k},t) = \frac{k(2r+1)t(t^r-1)}{t-1}+\frac{4t^2(t^r-1)^2(t^{k h}-k t^h+k-1)}{(t-1)^2(t^h-1)^2}\,. \end{equation*} For the Wiener index and the hyper-Wiener index we obtain \begin{eqnarray} \label{eq:oddW} W(S_{2r+1,h,k}) & = & \frac{1}{6} kr[3(r+1)(1-2r+4kr)+4rh(k-1)(k-2)]\,, \\ \nonumber WW(S_{2r+1,h,k}) & = & \frac{1}{6}kr[(r+1)(2-6r+11kr+7kr^2-5r^2) \\ \label{eq:oddhyperW} & & +2rh(k-1)(k-2)(2r+3)+rh^2(k-1)^2(k-2)]\,. \end{eqnarray} Assume \(q\) is even: \(q = 2r\) (\(r \geq 1\)). Again in~\eqref{eq:chain2} we take \(X=C_q\) and \(d=h\). We have \(H(C_q,t) = 2r\sum_{j=1}^{r-1} t^j+rt^r\) ~\cite{sagan-1996} and \(H_x(C_q,t) = 2\sum_{j=1}^{r-1} t^j + t^r\) for any vertex \(x\) of \(C_q\).
Now,~\eqref{eq:chain2} yields \begin{equation*} H(S_{2r,h,k},t) = \frac{kr(t^{r+1}+t^r-2t)}{t-1}+\frac{(t^{r+1}+t^r-2t)^2(t^{kh}-kt^h+k-1)}{(t-1)^2(t^h-1)^2}\,. \end{equation*} For the Wiener index and the hyper-Wiener index we obtain \begin{eqnarray} \label{eq:evenW} W(S_{2r,h,k}) & = & \frac{1}{6}k[h(2r-1)^2 (k-1)(k-2)+6r^2(1-r+2rk-k)]\,, \\ \nonumber WW(S_{2r,h,k}) & = & \frac{1}{6}kr[(r+1)(2-6r+11kr+7kr^2-5r^2) \\ \label{eq:evenhyperW} & & +2rh(k-1)(k-2)(2r+3)+rh^2(k-1)^2(k-2)]\,. \end{eqnarray} From Eqs.~\eqref{eq:oddW}, \eqref{eq:oddhyperW}, \eqref{eq:evenW}, \eqref{eq:evenhyperW}, setting \(q = 3,4,5,6\) and \(h \in \{1,2,\ldots,\lfloor\frac{q}{2}\rfloor\}\), we recover all the expressions in Table 4.2 of~\cite[p.115]{diudea-2001} (they occur also in~\cite{diudea-1995}). Incidentally, there is a typo in the last expression of Table 4.2 of~\cite{diudea-2001}: 847 should be changed to 874. The corresponding expression in~\cite[Eq.~(64)]{diudea-1995} is correct. \subsection{Polyphenylenes} Similarly to the above definition of the spiro-chain \(S_{q,h,k}\), we can define the graph \(L_{q,h,k}\) as the link of \(k\) cycles \(C_q\) in which the distance between the two contact vertices in the same cycle is \(h\). See Fig.~\ref{fig:l-spiro} for $L_{6,2,5}$. \begin{figure}[ht!]
\begin{center} \begin{tikzpicture}[scale=0.25,style=thick] \def\vr{4pt} \draw (0,0) +(30:3) \foreach \a in {30,90,...,330} { -- +(\a:3) } -- cycle; \draw (2.6,-1.5) -- (5,-1.5); \draw (7.6,0) +(30:3) \foreach \a in {30,90,...,330} { -- +(\a:3) } -- cycle; \draw (10.2,-1.5) -- (12.8,-1.5); \draw (15.4,0) +(30:3) \foreach \a in {30,90,...,330} { -- +(\a:3) } -- cycle; \draw (18,-1.5) -- (20.4,-1.5); \draw (23,0) +(30:3) \foreach \a in {30,90,...,330} { -- +(\a:3) } -- cycle; \draw (25.6,-1.5) -- (28,-1.5); \draw (30.6,0) +(30:3) \foreach \a in {30,90,...,330} { -- +(\a:3) } -- cycle; \end{tikzpicture} \end{center} \caption{$L_{6,2,5}$} \label{fig:l-spiro} \end{figure} We consider here only the case of hexagons (\(q=6\)), the so-called ortho-, meta-, or para-polyphenyl chains, corresponding to \(h=1, 2\) or 3, respectively (see~\cite{dara-2010,deng-2012}). If in~\eqref{eq:link2} we take \(H(X,t)=6t+6t^2+3t^3\) (the Hosoya polynomial of \(C_6\)) and \(H_x(X,t)=H_y(X,t)=2t+2t^2+t^3\) (the relative Hosoya polynomial of \(C_6\) with respect to any of its vertices), then~\eqref{eq:link2} becomes \begin{equation*} H(L_{6,h,k},t) = 3kt(2+2t+t^2)+\frac{(t+1)^2(t^2+t+1)^2(t^{kh+k+1}-kt^{h+2}+kt-t)}{(t^{h+1}-1)^2}\,. \end{equation*} The expressions obtained from here for all the possible values \(h=1,2,3\) have been obtained by a different method in~\cite{li-2012} (Theorems 2.1, 2.2, and 2.3). Now, for the Wiener index and the hyper-Wiener index we obtain \begin{eqnarray} \label{eq:6hkwiener} W(L_{6,h,k}) & = & 3k[4h-11+6k(3-h)+2k^2(1+h)]\,, \\ \nonumber WW(L_{6,h,k}) & = & \frac{3}{2}k[-2h^2+32h-69 + k(5h^2-44h+82) \\ \nonumber & & -2k^2(h+1)(2h-7)+k^3(h+1)^2] \,. \end{eqnarray} Setting \(h=1,2,3\) in~\eqref{eq:6hkwiener}, we recover the expressions given in~\cite[Corollary 3.3]{deng-2012}. The Wiener index of \(L_{6,3,k}\) is found also in~\cite[p.1233]{dara-2010}.
However, the formulation of the final result has a typo: the binomial \(\binom{n+1}{3}\) should be preceded by 144. The reader may be interested to find in the same way the Hosoya polynomial, the Wiener index, and the hyper-Wiener index of \(L_{q,h,k}\). \subsection{Nanostar dendrimers} We intend to derive the Hosoya polynomial of the nanostar dendrimer \(D_k\) defined pictorially in~\cite{ghorbani-2010a}. A better pictorial definition can be found in~\cite{ghorbani-2010b}. In order to define $D_k$, first we define recursively an auxiliary family of rooted dendrimers \(G_k\) (\(k\geq 1\)). We need a fixed graph \(F\) defined in Fig.~\ref{fig:f}; we consider one of its endpoints to be the root of \(F\). The graph \(G_1\) is defined in Fig.~\ref{fig:g1}, the leaf being its root. Now we define \(G_k\) (\(k\geq 2\)) as the bouquet of the following 3 graphs: \(G_{k-1}, G_{k-1},\) and \(F\) with respect to their roots; the root of \(G_k\) is taken to be its unique leaf (see \(G_2\) and \(G_3\) in Fig.~\ref{fig:g2-g3}). Finally, we define \(D_k\) (\(k \geq 1\)) as the bouquet of 3 copies of \(G_k\) with respect to their roots (\(D_2\) is shown in Fig.~\ref{fig:nanostarD2}, where the circles represent hexagons). \begin{figure}[ht!] \begin{center} \begin{tikzpicture}[scale=0.25,style=thick] \def\vr{8pt} \draw (-5,0) -- (-3,0); \draw (0,0) +(60:3) \foreach \a in {60,120,180,240,300,360} { -- +(\a:3) } -- cycle; \draw (3,0) -- (5,0); \draw (8,0) +(60:3) \foreach \a in {60,120,180,240,300,360} { -- +(\a:3) } -- cycle; \draw (11,0) -- (13,0); \end{tikzpicture} \end{center} \caption{Graph $F$} \label{fig:f} \end{figure} \begin{figure}[ht!] \begin{center} \begin{tikzpicture}[scale=0.25,style=thick] \def\vr{8pt} \draw (-5,0) -- (-3,0); \draw (0,0) +(60:3) \foreach \a in {60,120,180,240,300,360} { -- +(\a:3) } -- cycle; \end{tikzpicture} \end{center} \caption{Graph $G_1$} \label{fig:g1} \end{figure} \begin{figure}[ht!]
\begin{center} \begin{tikzpicture}[scale=0.4,style=thick] \def\vr{8pt} \draw (3,0) -- (3,1); \draw (3,1) -- (2,2) -- (2,3) -- (3,4) -- (4,3) -- (4,2) -- (3,1); \draw (3,4) -- (3,5); \draw (3,5) -- (2,6) -- (2,7) -- (3,8) -- (4,7) -- (4,6) -- (3,5); \draw (3,8) -- (3,9); \draw (3,9) -- (2,10); \draw (3,9) -- (4,10); \draw (2,10) -- (1,10) -- (0,11) -- (0,12) -- (1,12) -- (2,11) -- (2,10); \draw (4,10) -- (4,11) -- (5,12) -- (6,12) -- (6,11) -- (5,10) -- (4,10); \draw (5,4.5) node {$G_2$}; \draw (17,0) -- (17,1); \draw (17,1) -- (16,2) -- (16,3) -- (17,4) -- (18,3) -- (18,2) -- (17,1); \draw (17,4) -- (17,5); \draw (17,5) -- (16,6) -- (16,7) -- (17,8) -- (18,7) -- (18,6) -- (17,5); \draw (17,8) -- (17,9); \draw (17,9) -- (16,10); \draw (17,9) -- (18,10); \draw (16,10) -- (15,10) -- (14,11) -- (14,12) -- (15,12) -- (16,11) -- (16,10); \draw (14,12) -- (13,13); \draw (13,13) -- (12,13) -- (11,14) -- (11,15) -- (12,15) -- (13,14) -- (13,13); \draw (11,15) -- (10,16); \draw (10,16) -- (9,16); \draw (10,16) -- (10,17); \draw (9,16) -- (8,15) -- (7,15) -- (6,16) -- (7,17) -- (8,17) -- (9,16); \draw (10,17) -- (9,18) -- (9,19) -- (10,20) -- (11,19) -- (11,18) -- (10,17); \draw (18,10) -- (18,11) -- (19,12) -- (20,12) -- (20,11) -- (19,10) -- (18,10); \draw (20,12) -- (21,13); \draw (21,13) -- (21,14) -- (22,15) -- (23,15) -- (23,14) -- (22,13) -- (21,13); \draw (23,15) -- (24,16); \draw (24,16) -- (24,17); \draw (24,16) -- (25,16); \draw (24,17) -- (23,18) -- (23,19) -- (24,20) -- (25,19) -- (25,18) -- (24,17); \draw (25,16) -- (26,17) -- (27,17) -- (28,16) -- (27,15) -- (26,15) -- (25,16); \draw (19,4.5) node {$G_3$}; \end{tikzpicture} \end{center} \caption{Graphs $G_2$ and $G_3$} \label{fig:g2-g3} \end{figure} \begin{figure}[ht!] 
\begin{center} \begin{tikzpicture}[scale=0.35,style=thick] \def\vr{15pt} \draw (2,-7) -- (4,-9); \draw (6,-7) -- (4,-9); \draw (4,-9) -- (4,-16); \draw (4,-16) -- (-2,-22); \draw (4,-16) -- (10,-22); \draw (-4,-22) -- (-2,-22) -- (-2,-24); \draw (10,-24) -- (10,-22) -- (12,-22); \draw (2,-7) [fill=white] circle (\vr); \draw (6,-7) [fill=white] circle (\vr); \draw (4,-11) [fill=white] circle (\vr); \draw (4,-14) [fill=white] circle (\vr); \draw (2,-18) [fill=white] circle (\vr); \draw (0,-20) [fill=white] circle (\vr); \draw (6,-18) [fill=white] circle (\vr); \draw (8,-20) [fill=white] circle (\vr); \draw (-4,-22) [fill=white] circle (\vr); \draw (-2,-24) [fill=white] circle (\vr); \draw (10,-24) [fill=white] circle (\vr); \draw (12,-22) [fill=white] circle (\vr); \end{tikzpicture} \end{center} \caption{Nanostar $D_2$} \label{fig:nanostarD2} \end{figure} Let \(s\) denote the partial Hosoya polynomial of the graph \(F\) with respect to its root and let \(p\) denote the Hosoya polynomial of \(F\). Direct computation yields \begin{equation*} s=t^9+t(1+t)(1+t+t^2)(1+t^4)\,, \end{equation*} \begin{equation} \label{eq:p} p=15t+20t^2 +18t^3+12t^4+10t^5+8t^6+5t^7+2t^8+t^9\,. \end{equation} Let \(r_k\) denote the partial Hosoya polynomial of \(G_k\) with respect to its root. It is straightforward to find \(r_1=t(1+t)(1+t+t^2)\) and the recurrence relation \(r_k = s+2t^9r_{k-1}\); they lead to \begin{equation} \label{eq:rk} r_k = s\frac{(2t^9)^{k-1}-1}{2t^9-1} + (2t^9)^{k-1}t(1+t)(1+t+t^2)\,. \end{equation} Now from~\eqref{eq:bouquet1} we obtain a recurrence relation for \(H(G_k,t)\): \begin{equation*} H(G_k,t)=2H(G_{k-1},t)+p+2sr_{k-1}+r_{k-1}^2\,, \end{equation*} the initial condition being \begin{equation} \label{eq:Hg1} H(G_1,t)=7t+8t^2+5t^3+t^4\,. 
\end{equation} The solution is \begin{equation} \label{eq:Hgk} H(G_k,t) = 2^{k-1}(p+H(G_1,t))-p+\sum_{j=1}^{k-1}2^{k-1-j}r_j(2s+r_j)\,, \end{equation} where \(p\) and \(H(G_1,t)\) are given in~\eqref{eq:p} and~\eqref{eq:Hg1}, respectively. Although not required in the sequel, we give the Wiener index and the hyper-Wiener index of \(G_k\): \begin{eqnarray} \nonumber W(G_k) & = & 1323+2^{k-1}3735-2^{2k-2}12711+2^k2223k+2^{2k-2}3249k\,, \\ \nonumber WW(G_k) & = & -45867-2^{k-1}173401+2^{2k-3}1060083 - 2^{k-1}132777k \\ \nonumber & & -2^{2k-3}454347k+20007k^2 2^{k-1}+29241k^22^{2k-2}\,. \end{eqnarray} Since \(D_k\) is a bouquet of three copies of \(G_k\) with respect to their roots, from~\eqref{eq:bouquet2} we have \begin{equation*} H(D_k,t) = 3H(G_k,t) + 3r_k^2\,, \end{equation*} where the terms in the right-hand side are given in~\eqref{eq:Hgk} and~\eqref{eq:rk}. For the Wiener index and the hyper-Wiener index of \(D_k\) we obtain \begin{eqnarray} \nonumber W(D_k) & = & -9369-2^{2k-2}75411+2^{2k-2}29241k+2^{k-1}56205\,, \\ \nonumber WW(D_k) & = & 116340-2^{k-1}1429983+2^{2k-3}4790367-2^{2k-3}2685555k \\ \nonumber & & +2^{2k-2}263169k^2\,. \end{eqnarray} Incidentally, the formula given in~\cite[p.62]{ghorbani-2010a} for the Wiener index of \(D_n\) contains some misprints; for example, for \(D_1\) it does not give the correct value 666 found on p. 60. \subsection{Triangulanes} We intend to derive the Hosoya polynomial of the triangulane \(T_k\) defined pictorially in~\cite{khalifeh-2008}. We define \(T_k\) recursively in a manner that will be useful in our approach. First we define recursively an auxiliary family of triangulanes \(G_k\) (\(k \geq 1\)). Let \(G_1\) be a triangle and denote one of its vertices by \(y_1\). We define \(G_k\) (\(k \geq 2\)) as the circuit of the graphs \(G_{k-1}, G_{k-1}\), and \(K_1\) and denote by \(y_k\) the vertex where \(K_1\) has been placed. The graphs \(G_1, G_2, G_3\) are shown in Fig.~\ref{fig:g1-g2-g3}. \begin{figure}[ht!]
\begin{center} \begin{tikzpicture}[scale=0.6,style=thick] \def\vr{4pt} \draw (0,0) -- (1,2) -- (-1,2) -- (0,0); \draw (0,0) [fill=white] circle (\vr); \draw (0.7,0) node {$y_1$}; \draw (0,-1) node {$G_1$}; \draw (5,0) -- (6,2) -- (4,2) -- (5,0); \draw (4,2) -- (4.7,4) -- (3.3,4) -- (4,2); \draw (6,2) -- (6.7,4) -- (5.3,4) -- (6,2); \draw (5,0) [fill=white] circle (\vr); \draw (5.7,0) node {$y_2$}; \draw (5,-1) node {$G_2$}; \draw (12,0) -- (14,2) -- (10,2) -- (12,0); \draw (10,2) -- (11,4) -- (9,4) -- (10,2); \draw (14,2) -- (15,4) -- (13,4) -- (14,2); \draw (9,4) -- (9.5,6) -- (8.5,6) -- (9,4); \draw (11,4) -- (11.5,6) -- (10.5,6) -- (11,4); \draw (13,4) -- (13.5,6) -- (12.5,6) -- (13,4); \draw (15,4) -- (15.5,6) -- (14.5,6) -- (15,4); \draw (12,0) [fill=white] circle (\vr); \draw (12.7,0) node {$y_3$}; \draw (12,-1) node {$G_3$}; \end{tikzpicture} \end{center} \caption{Graphs $G_1$, $G_2$, $G_3$} \label{fig:g1-g2-g3} \end{figure} Now, \(T_k\) is defined as the circuit of 3 copies of \(G_k\) with respect to their vertices \(y_k\) (\(T_2\) is shown in Fig.~\ref{fig:t2}). \begin{figure}[ht!] \begin{center} \begin{tikzpicture}[scale=0.5,style=thick] \def\vr{4pt} \draw (6,-13) -- (11,-3); \draw (7,-3) -- (12,-13); \draw (7,-3) -- (12,-13); \draw (4,-9) -- (14,-9); \draw (5,-7) -- (8,-13); \draw (10,-13) -- (13,-7); \draw (6,-5) -- (12,-5); \draw (6,-5) -- (7,-3); \draw (11,-3) -- (12,-5); \draw (4,-9) -- (5,-7); \draw (6,-13) -- (8,-13); \draw (10,-13) -- (12,-13); \draw (14,-9) -- (13,-7); \end{tikzpicture} \end{center} \caption{Graph $T_2$} \label{fig:t2} \end{figure} Let \(r_k\) denote the partial Hosoya polynomial of \(G_k\) with respect to the vertex \(y_k\). It is straightforward to derive that \begin{equation} \label{eq:1rk} 1+r_k = \frac{2^{k+1}t^{k+1}-1}{2t-1}\,. 
\end{equation} Since \(G_k\) is the circuit of the graphs \(G_{k-1}, G_{k-1}\), and \(K_1\), from~\eqref{eq:circuit1} we obtain the recurrence equation \begin{equation*} H(G_k,t) = 2H(G_{k-1},t)+t(1+r_{k-1})^2+2t(1+r_{k-1})\,. \end{equation*} The initial condition is \(H(G_1,t)=3t\) and the solution is found to be \begin{equation} \label{eq:THgk} H(G_k,t) = \frac{2^{k+2}t^{k+2}+4t^2-3t}{(2t-1)^2}-\frac{2^k(4t^2+3t)}{2t^2-1}+\frac{2^{2k+1}t^{2k+3}}{(2t-1)^2(2t^2-1)}\,. \end{equation} Although not required in the sequel, we give the Wiener index and the hyper-Wiener index of \(G_k\): \begin{eqnarray*} W(G_k) & = & 2^{2k+1}(2k-5)+2^k(4k+9)+1\,, \\ WW(G_k) & = & 2^{2k+1}(2k^2-9k+16)+2^k(2k^2-6k-29)-3\,. \end{eqnarray*} Since \(T_k\) is a circuit of 3 copies of \(G_k\) with respect to the vertices \(y_k\), from~\eqref{eq:circuit2} we obtain \begin{equation*} H(T_k,t) = 3H(G_k,t) + 3t(1+r_k)^2\,, \end{equation*} where the expressions occurring in the right-hand side are given in~\eqref{eq:THgk} and~\eqref{eq:1rk}. We obtain easily \begin{equation*} H(T_k,t)=\frac{6t}{2t-1}-\frac{2^k3t(4t+3)}{2t^2-1}+\frac{2^{2k+1}3t^{2k+3}(2t+1)}{(2t-1)(2t^2-1)}\,. \end{equation*} For the Wiener index of \(T_k\) we obtain \begin{equation*} W(T_k)=2^{2k+1}3(6k-7)+2^k51-6\,. \end{equation*} This expression can be found in~\cite[Theorem 1]{khalifeh-2008} and in~\cite[p.37, Theorem 5]{cataldo-2011}. For the hyper-Wiener index of \(T_k\) we have \begin{equation*} WW(T_k)=2^{2k+1}3(6k^2-11k+20)-2^k123+6\,. \end{equation*} \section*{Acknowledgments} This work has been financed by ARRS Slovenia under the grant P1-0297. The second author is also with the Institute of Mathematics, Physics and Mechanics, Ljubljana.
\section{Introduction} \label{intr} The LMG model was introduced in nuclear physics to mimic the behavior of closed shell nuclei \cite{LMG}. It is a simple model with one quantum degree of freedom. The dimension of the Hamiltonian matrix increases linearly with the size of the system, allowing its exact diagonalization for large system sizes. As such, the model has been extremely useful to test many-body approximations to nuclear problems (see for example \cite{Schuck} and references therein). More recently the model found applications in many other areas of physics, such as quantum spin systems \cite{Botet}, ion traps \cite{Unayan}, Bose-Einstein condensates in double wells \cite{Links1} or in cavities \cite{Chen}, and circuit QED \cite{Larson}. The model has also been utilized to study quantum phase transitions (QPT) \cite{Castanos, duke1} and their relations with quantum entanglement properties \cite{Vidal1, Vidal2}, as well as to explore excited-state QPT \cite{Heiss} and quantum decoherence \cite{deco}. In a different vein, the LMG model was shown to be exactly solvable \cite{Pan} and quantum integrable \cite{NPB707} as a particular limit of the $SU(1,1)$ Richardson-Gaudin integrable models \cite{duke2, RMP}. The most important feature of the exact solution is that it provides a unique form for the wavefunction of the complete set of eigenstates of the model in terms of a set of pair energies, or pairons, obtained as a solution of the non-linear coupled Richardson equations. The distribution of pair energies in the energy space changes dramatically close to a critical point. A typical example is the exactly solvable $p_x + i p_y$ fermion superfluid derived from the $SU(2)$ hyperbolic RG model \cite{Ger1,Links2,Hyp}. The model has two interesting lines in the phase diagram of density versus coupling constant: a) the Moore-Read line, on which all pairons collapse to zero energy, and b) the Read-Green line, on which all pairons are real and negative.
While in the first case the existence of a QPT is still debated, in the second case it has been shown that the Read-Green line corresponds to a third-order QPT. The LMG model can be mapped to a two-level boson system by means of the Schwinger representation of the $SU(2)$ algebra. In this representation the LMG Hamiltonian transforms into a two-level boson pairing model associated with an $SU(1,1)\otimes SU(1,1)$ algebra \cite{Pan}. Within these models, the relation between the distribution of pair energies and the occurrence of a QPT has been discussed in Ref. \cite{PittDuk} in connection with the {\it s-d} dominance in the Interacting Boson Model of nuclear physics. More recently, a thorough analysis of the critical points of the two-site Bose-Hubbard model in terms of the roots of the Richardson equations has been presented \cite{Angela}, showing the intimate relation between quantum criticality and the rapid change in the behavior of the pairons. In this paper we will continue these studies, focusing on the generalized LMG model, which has a rather rich phase diagram with lines of first- and second-order QPT and a triple point with a third-order QPT. Moreover, the pair energies in a region of the parameter space display a behavior similar to that of the $p_x + i p_y$ superfluid model, opening the possibility of correlating the physics of both models in the critical regions. We will start by introducing the LMG model, its Schwinger boson representation leading to a two-level boson pairing model, and the mean-field phase diagram of the model in section \ref{LMGbos}. In section \ref{LMG-RG} we will introduce the $SU(1,1)$ RG models and discuss the limits leading to the LMG model. We will introduce in this section a robust numerical method to solve the Richardson equations based on their relation with the Lam\'e ordinary differential equation.
Section \ref{GSQPT} is devoted to the study of the behavior of the ground state pairons close to the phase transition, and of the region in parameter space which shows a behavior similar to the $p_x + i p_y$ model. Finally, in section \ref{excited} we will describe the RG solutions for the excited states. Concluding remarks are given in section \ref{summa}. \section{The Lipkin-Meshkov-Glick model and its bosonic representation} \label{LMGbos} The LMG model is based on the $SU(2)$ algebra, whose three elements satisfy the commutation relations $$ [S_{+},S_{-}]=2S_z ~, ~~~ [S_z,S_{\pm}]=\pm S_{\pm}. $$ The three elements can be considered as the components of the pseudo-spin operator $\bf {S}$. They commute with the Casimir operator of $SU(2)$, ${\bf S}^2=\frac{1}{2}(S_{+}S_{-}+S_{-}S_{+})+S^2_z$. In terms of these elements the LMG Hamiltonian can be written as \begin{equation} H_{L}=\epsilon S_z+\frac{\lambda}{2}\left(S_+^2+S_-^2 \right)+\frac{\gamma}{2}\left(S_+ S_-+S_-S_+ \right). \label{LipMo} \end{equation} Note that $H_{L}$ does not commute with the $z$ component $S_z$ for $\lambda\neq 0$, but it commutes with the total pseudo-spin Casimir operator ${\bf {S}}^2$. Therefore, the Hilbert space of the model can be separated into different sub-spaces labeled by the eigenvalues of the total pseudo-spin, $j(j+1)$, with basis $\mathcal{H}_j=\{ | j m\rangle : m=-j,-j+1,...,j-1,j\}$. Additionally, the LMG Hamiltonian commutes with the parity operator $\hat{P}=\exp{i\pi (S_z+j)}$, yielding, for a given $j$, two invariant sub-spaces ($P=+$ and $P=-$), which are spanned, respectively, by the bases $\mathcal{H}_{j+}= \{ | j m\rangle : m=-j,-j+2,-j+4,...\}$ and $\mathcal{H}_{j-}= \{ | j m\rangle : m=-j+1,-j+3,-j+5,...\}$. From now on, and for the sake of simplicity, we will assume integer values of $j$. The semi-integer case can be worked out along the same lines with some slight modifications.
For integer $j$ the dimensions of the invariant subspaces, $\mathcal{H}_{j+}$ and $\mathcal{H}_{j-}$, are $j+1$ and $j$ respectively. Having introduced the LMG Hamiltonian in terms of $SU(2)$ operators, a physical realization of the model requires a representation of the algebra either in terms of a collection of spins or in terms of a fermionic or bosonic system. In its original presentation the $SU(2)$ operators were expressed in terms of a collection of $2j$ fermions distributed in two levels, each having a $2j$-fold degeneracy. Instead, we will make use of the Schwinger boson representation of $SU(2)$, which allows a simple connection with the bosonic RG integrable models. The Schwinger representation of the $SU(2)$ algebra in terms of two bosons is \begin{equation} S_z=\frac{ b^\dagger b-a^\dagger a}{2}=\frac{\hat{n}_b-\hat{n}_a}{2},\ \ \ \ S_+=b^\dagger a, \ \ \ \ S_-= a^\dagger b, \label{Schwinger} \end{equation} with $a$ and $b$ boson operators satisfying the usual commutation rules $[a,a^\dagger]=[b,b^\dagger]=1$ and $[a,b]=[a,b^\dagger]=0$. Inserting the boson mapping (\ref{Schwinger}) into the Hamiltonian (\ref{LipMo}), the bosonic version of the LMG Hamiltonian reads \begin{equation} H_{L}=\frac{\gamma+\epsilon}{2} b^\dagger b + \frac{\gamma - \epsilon}{2} a^\dagger a + \frac{\lambda}{2}\left(b^{\dagger 2} a^2+a^{\dagger 2}b^2 \right)+ \gamma \left( b^\dagger a^\dagger a b \right) . \label{LMGb} \end{equation} Using the Schwinger representation, a basis for the Hilbert space with total pseudo-spin $j$ can be written in terms of boson creation operators as $ |j m\rangle= |n_a=j-m, n_b=j+m\rangle, $ where $|n_a,n_b\rangle=\frac{(a^\dagger)^{n_a} (b^\dagger)^{n_b}}{\sqrt{n_a ! n_b !} } |0\rangle$, with $|0\rangle$ the boson vacuum. Note that for a given $j$ the total number of bosons is constant, $N\equiv n_b+n_a=2 j$.
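The parity block structure described above is easy to verify numerically. The sketch below (pure Python; the coupling values are hypothetical and chosen only for illustration) builds the matrix of $H_L$ of Eq.~(\ref{LipMo}) in the $|jm\rangle$ basis, using $S_\pm|jm\rangle=\sqrt{j(j+1)-m(m\pm 1)}\,|j\,m\pm 1\rangle$, and checks that it only couples states whose $m$ values differ by an even number, so that it decomposes into blocks of dimensions $j+1$ and $j$.

```python
import math

def lmg_matrix(j, eps, lam, gam):
    """Matrix of H_L = eps*S_z + (lam/2)(S_+^2 + S_-^2)
    + (gam/2)(S_+ S_- + S_- S_+) in the basis |j,m>, m = -j, ..., j."""
    n = 2 * j + 1
    H = [[0.0] * n for _ in range(n)]
    cp = lambda m: math.sqrt(j * (j + 1) - m * (m + 1))   # S_+|j,m> coefficient
    for i in range(n):
        m = i - j
        # diagonal: eps*m from S_z plus gam*(j(j+1) - m^2) from S_+S_- + S_-S_+
        H[i][i] = eps * m + gam * (j * (j + 1) - m * m)
        if i + 2 < n:                                     # <m+2| S_+^2 |m>
            H[i + 2][i] += 0.5 * lam * cp(m) * cp(m + 1)
            H[i][i + 2] += 0.5 * lam * cp(m) * cp(m + 1)
    return H

j = 4
H = lmg_matrix(j, eps=1.0, lam=0.3, gam=0.2)   # hypothetical couplings
n = 2 * j + 1
# parity selection rule: <m'|H|m> = 0 unless m - m' is even
assert all(H[i][l] == 0.0 for i in range(n) for l in range(n) if (i - l) % 2)
# hence two invariant blocks, of dimensions j+1 (even m+j) and j (odd m+j)
assert sum(1 for i in range(n) if i % 2 == 0) == j + 1
```

Diagonalizing the two blocks separately reproduces the positive- and negative-parity spectra.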
Likewise, the positive and negative parity bases in the Schwinger representation are given by $$ \mathcal{H}_{j+}= \{ |n_a=2j, n_b=0\rangle,|n_a=2j-2,n_b=2\rangle, |n_a=2j-4,n_b=4\rangle,... \} $$ $$ \mathcal{H}_{j-}= \{ |n_a=2j-1, n_b=1\rangle,|n_a=2j-3,n_b=3\rangle, |n_a=2j-5,n_b = 5\rangle,... \}. $$ A detailed analysis of the LMG phase diagram in terms of $SU(2)$ coherent states with definite parity has been performed in Ref. \cite{Castanos}. The different phases of the model and the order of their transitions were identified. Here we repeat that analysis using the Schwinger representation and the boson coherent state \begin{equation} |z_a z_b\rangle=e^{-\frac{|z_a|^2+|z_b|^2}{2}} e^{z_a a^\dagger+z_b b^\dagger}|0\rangle, \label{coher} \end{equation} where $z_a$ and $z_b$ are c-numbers parametrized as $z=\rho e^{i \theta}$. The expectation value of the LMG Hamiltonian (\ref{LMGb}) in the coherent state $|z_a z_b\rangle$ is \begin{equation} \langle z_b z_a | H_L |z_a z_b\rangle = \frac{\gamma+\epsilon}{2}\rho_b^2 + \frac{\gamma-\epsilon}{2} \rho_a^2 + (\lambda\cos(2(\theta_a-\theta_b))+\gamma)\rho_a^2\rho_b^2. \end{equation} The constraint $n_a+n_b=2j$ implies that the coherent state parameters must fulfill $\rho_a^2+\rho_b^2=2j$. Enforcing this relation, the energy surface is \begin{equation} E[\rho_b,\theta]=\langle z_b z_a | H_L |z_a z_b\rangle= \frac{2j \epsilon }{2j-1}\left[ A+\Big(2j-1+j B_\theta\Big)\left(\frac{\rho_b^2}{2j}\right)-j B_\theta \left(\frac{\rho_b^2}{2j}\right)^2 \right], \end{equation} with $A=\frac{\gamma_x+\gamma_{y}- 2 (2j-1)}{4}$ and $B_\theta=\Big(\gamma_x+\gamma_y+(\gamma_x-\gamma_y)\cos 2\theta\Big)$, where we have used $\theta=\theta_a-\theta_b$, and the re-scaled parameters defined in \cite{Castanos} \begin{equation} (\gamma_x,\gamma_y)\equiv\frac{2j-1}{\epsilon}( \gamma+\lambda,\gamma-\lambda).
\label{gamas} \end{equation} In the thermodynamic limit, $j\rightarrow\infty$, the energy per particle, $\mathcal{E}[\rho_b,\theta]\equiv E[\rho_b,\theta]/(2j)$, simplifies to \begin{equation} \frac{2\mathcal{E}[\rho_b,\theta]}{\epsilon}+1= \Big(2+B_\theta\Big)\left(\frac{\rho_b^2}{2j}\right) - B_\theta \left(\frac{\rho_b^2}{2j}\right)^2, \label{EMF} \end{equation} where terms of order $\mathcal{O}(1/j)$ have been neglected. The phase diagram of the LMG model is obtained by minimizing the energy (\ref{EMF}) with respect to the variables $\theta\in (-\pi,\pi]$ and $\frac{\rho_b^2}{2j}\in[0,1]$ for different values of the model parameters $\gamma_x$ and $\gamma_y$. The different phases, separated by dashed lines in Fig. \ref{fig1}, are described in Table I, where we have classified the values of the parameters $\theta$ and $\rho_b$ characterizing the coherent state (\ref{coher}) at the absolute minimum. Additionally, Table I shows the energy per particle and the expectation values of the operators $S_x^2$ and $S_y^2$, which play the role of order parameters. The critical line $\gamma_x=\gamma_y<-1$ is a special case because the relative phase $\theta$ is completely undetermined, i.e. the minimum in energy is independent of $\theta$.
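The minimization can be reproduced with a brute-force scan of the energy surface (\ref{EMF}); the following sketch is our own check (grid sizes chosen so that the analytic minima of Table I fall on the grid), not part of the analysis of Ref. \cite{Castanos}:

```python
import numpy as np

# Brute-force minimization (our own check) of the scaled energy surface (EMF):
# 2*E/eps + 1 = (2 + B_theta)*x - B_theta*x**2, with x = rho_b^2/(2j).
def minimize_surface(gx, gy, n=801):
    theta = np.linspace(-np.pi, np.pi, n)
    x = np.linspace(0.0, 1.0, n)
    B = gx + gy + (gx - gy) * np.cos(2 * theta)[:, None]
    E = (2 + B) * x[None, :] - B * x[None, :] ** 2
    i, k = np.unravel_index(np.argmin(E), E.shape)
    return E[i, k], x[k]
```

For $(\gamma_x,\gamma_y)=(-2,0)$ this returns the phase B values of Table I, $x=(\gamma_x+1)/(2\gamma_x)=0.25$ and energy $(\gamma_x+1)^2/(2\gamma_x)=-0.25$, while in phase A the minimum sits at $x=0$ with zero energy.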
\begin{table} \begin{center} \begin{tabular}{|c|c|c|c|c|c|c|} \hline & & & & & & \\ phase& region & $\theta_{min}$ & $\left(\frac{\rho_b^2}{2j}\right)_{min}$ & $\left(\frac{2\mathcal{E}}{\epsilon}+1\right)_{min}$ & $\left(\frac{\langle S_x^2\rangle}{j^2}\right)_{min}$ &$\left(\frac{\langle S_y^2\rangle}{j^2}\right)_{min}$\\ & & & & & & \\ \hline A&$\gamma_y\geq-1$ and $\gamma_x\geq-1$ & 0& 0& 0 & 0 & 0 \\ B&$\gamma_y>\gamma_x {\hbox{ and }}\gamma_x<-1$ & $0$ or $\pi$ & $\frac{\gamma_x+1}{2\gamma_x}$ & $\frac{(\gamma_x+1)^2}{2\gamma_x}$ & $4 \left( \frac{\rho_a^2}{2j}\right)\left( \frac{\rho_b^2}{2j}\right)$ & 0 \\ C&$\gamma_y<\gamma_x {\hbox{ and }}\gamma_y<-1$ & $\pm\frac{\pi}{2}$ & $\frac{\gamma_y+1}{2\gamma_y}$ & $\frac{(\gamma_y+1)^2}{2\gamma_y}$ & 0 & $4 \left( \frac{\rho_a^2}{2j}\right)\left( \frac{\rho_b^2}{2j}\right)$ \\ \hline \end{tabular} \end{center} \caption{Phases and their order parameters in the LMG model.} \end{table} We are now ready to establish the phase diagram of the LMG model in the Schwinger boson representation. In complete accord with Ref. \cite{Castanos}, three phases are identified, which are distinguished by the occupation of boson $b$ and the relative phase ($\theta$) of the coherent state parameters $z_a$ and $z_b$. Alternatively, we can characterize the three phases by the order parameters $\frac{\langle S_x^2\rangle}{j^2}= (\rho_a^2/j)(\rho_b^2/j) \cos^2\theta $ and $\frac{\langle S_y^2\rangle}{j^2}=(\rho_a^2/j)(\rho_b^2/j)\sin^2\theta$, where we have neglected terms of order $\mathcal{O}(1/j)$. Phase A ($\gamma_x\geq-1$ and $\gamma_y\geq-1$) has zero occupation of the boson $b$, $\left(\frac{\rho_b^2}{2j}\right)_{min}= \frac{\langle z_a z_b| \hat{n}_b| z_a z_b\rangle}{2j} =0$. Therefore the two order parameters also vanish, $\langle S_x^2\rangle = \langle S_y^2\rangle = 0$. In phase B ($\gamma_y>\gamma_x$ and $\gamma_x<-1$), the coherent state mixes $a$ and $b$ and the order parameter $\langle S_x^2\rangle/j^2$ is finite.
Finally, phase C ($\gamma_y<\gamma_x$ and $\gamma_y<-1$) is the mirror of phase B, corresponding to an exchange of $x$ and $y$. Upon inspection of the order parameters, we can immediately recognize that the transitions between phases (A-B) and (A-C) are continuous in the order parameters, defining a second order phase transition. The transition between B and C is discontinuous in the order parameters, characterizing a first order phase transition. These facts were confirmed in Ref. \cite{Castanos} by analyzing the energy derivatives. At the triple point, $\gamma_x=\gamma_y=-1$, both order parameters converge to 0, avoiding the discontinuity of the first order critical line. As shown in \cite{Castanos}, this critical point represents a third order phase transition when it is traversed in the direction indicated by the arrow in Fig. \ref{fig1}. \begin{figure}[t*] \centering{ \includegraphics[width=0.5\textwidth]{fig1.pdf} } \caption{Phase diagram of the LMG model and Richardson-Gaudin areas of integrability in the $\gamma_x$-$\gamma_y$ parameter space. The light quadrants correspond to the hyperbolic model ($s=-1$), with light gray for $0<t<1$ and white for $t>1$. Dark gray quadrants correspond to the trigonometric model. Upper quadrants correspond to positive $g$, lower quadrants to negative $g$. The dashed lines separate the three different phases (A, B and C) of the LMG model discussed in the text. The triple point $(-1,-1)$ at the intersection of the lines is a third order transition when it is traversed in the direction indicated by the arrow. The horizontal line $\gamma_y=0$ and the vertical one $\gamma_x=0$ correspond to the rational version of the model ($s=0$). } \label{fig1} \end{figure} \section{$SU(1,1)$ Richardson-Gaudin and LMG models} \label{LMG-RG} While the exact solution of the LMG was derived in Ref.
\cite{Pan} using an algebraic approach based on the Bethe ansatz, the connection between the LMG Hamiltonian and the $SU(1,1)$ RG models was established later \cite{NPB707}. In order to make the present work self-contained and to fix the notation, we will here derive the exact solution of the LMG Hamiltonian from the more general $SU(1,1)$ RG models following a different path. The non-compact $SU(1,1)$ algebra, defined in terms of ladder ($K_+, K_-$) and weight ($K_z$) operators, resembles that of the $SU(2)$ group, differing in a sign in the commutation relations $$ \left[ K^z, K^\pm \right]=\pm K^\pm, \ \ \ \left[ K^+, K^- \right]=-2 K^z . $$ Let us now consider $N_c$ different copies of the $SU(1,1)$ algebra, and construct $N_c$ linear and quadratic hermitian combinations of the three elements of the algebra \begin{equation} R_i=K_i^z-2 g \sum_{j\not= i} \left[\frac{X(t_i,t_j)}{2} \left( K_i^+ K_j^-+K_i^-K_j^+\right)- Z(t_i,t_j)K_i^z K_j^z \right], \label{Ri} \end{equation} where $i$, $j$ label each of the $N_c$ copies and $g$ is an arbitrary parameter. The structure of the operators $R_i$ is such that they commute with the total $K^{z}$ operator ($K^z=\sum_{i} K^{z}_{i}$). It has been shown \cite{duke2} that the $N_c$ operators commute among themselves ($[R_i,R_j]=0$), defining an integrable model, if the functions $X(t_i,t_j)$ and $Z(t_i,t_j)$ are anti-symmetric functions of an arbitrary set of parameters $t_i$ \begin{equation} X(t_i,t_j)=\frac{\sqrt{(1+s t_i^2)(1+s t_j^2)}}{t_i-t_j}, \ \ \ \ Z(t_i,t_j)= \frac{1+s t_it_j}{t_i-t_j}. \label{XZ} \end{equation} The parameter $s$ can take three different values, $s=0,1,-1$, defining the rational, the trigonometric, and the hyperbolic families of $SU(1,1)$ RG models respectively. The LMG model is obtained in the limit of two $SU(1,1)$ copies, which we will label as $a$ and $b$.
In the pair boson representation of the $SU(1,1)$ algebra, the elements of the two copies are \begin{eqnarray} K_a^+=\frac{1}{2}a^\dagger a^\dagger & K_a^-=\frac{1}{2}a a, & K_a^z=\frac{1}{2}\left(a^\dagger a+\frac{1}{2}\right)\nonumber \\ K_b^+=\frac{1}{2}b^\dagger b^\dagger & K_b^-=\frac{1}{2}b b, & K_b^z=\frac{1}{2}\left(b^\dagger b+\frac{1}{2}\right). \label{su11} \end{eqnarray} The irreducible representations (irreps) of the non-compact $SU(1,1)$ algebra are infinite dimensional, but they possess a minimum weight state defined by $K_i^- |MW\rangle=0$. For the previous bosonic representation, these states are given by $|\nu_i=0\rangle\equiv |0\rangle_i$ and $|\nu_i=1\rangle\equiv |1 \rangle_i$, where $|0\rangle_i$ is the vacuum of bosons $i=a,b$. The parameters $\nu_i$ are the so-called seniorities of each of the $SU(1,1)$ copies. The seniority quantum number, $\nu_i$, counts the number of unpaired bosons $i$ and can take only two values, $0$ or $1$. If $\nu_i=0$ the number of bosons $i$ ($n_i$) is even, and odd if $\nu_i=1$. Inserting the boson pair representation of the two copies (\ref{su11}) into the integrals of motion (\ref{Ri}), we construct the two integrals of motion of the LMG model. One can verify that the sum of both integrals gives the conserved quantity $K^z$. Taking the difference between both we obtain: \begin{eqnarray} R_b-R_a= K_b^z-K_a^z-2 g X_{ba} \left[ K_b^+ K_a^-+K_a^+K_b^-\right]+ 4 gZ_{ba} K_b^z K_a^z\nonumber\\ = \frac{1}{2} \left(b^\dagger b-a^\dagger a\right)- g\frac{X_{ba}}{2} \left(b^{\dagger 2} a^2 +a^{\dagger 2}b^2 \right)+ g Z_{ba}\left( b^\dagger b + \frac{1}{2}\right)\left( a^\dagger a+ \frac{1}{2} \right), \nonumber \end{eqnarray} with $Z_{ba}\equiv Z(t_b,t_a)$ and $X_{ba}\equiv X(t_b,t_a)$.
Comparing with the LMG Hamiltonian in the Schwinger representation (\ref{LMGb}), one finds the following relation between the LMG model and the integrals of motion of the $SU(1,1)$ RG models \begin{equation} H_L = \epsilon(R_b-R_a)- \frac{\gamma}{4}, {\hbox{\ \ \ \ \ with \ \ \ \ \ }} g X_{ba}=-\frac{\lambda}{\epsilon}, \ \ {\hbox{ and }} \ g Z_{ba}=\frac{\gamma}{\epsilon}. \label{LipRG} \end{equation} Without any loss of generality, we choose the parameters entering in $X_{ba}$ and $Z_{ba}$ as $t_b=-t_a\equiv t$, with $t\geq 0$. The functions $X_{ba}$ and $Z_{ba}$ reduce to $X_{ba}=\frac{1+st^2}{2 t}$ and $Z_{ba}=\frac{1-st^2}{2t}$. The relations between the LMG Hamiltonian parameters ($\lambda$, $\gamma$, $\epsilon$) and those of the $SU(1,1)$ RG models are then $$ \frac{\lambda}{\epsilon}=- g \frac{1+st^2}{2t},\ \ \ \ \ \frac{\gamma}{\epsilon}= g \frac{1-st^2}{2t} . $$ Equivalently, in terms of the $\gamma_x$ and $\gamma_y$ parameters (\ref{gamas}) we have \begin{equation} (\gamma_x,\gamma_y)\equiv\frac{2j-1}{\epsilon}( \gamma+\lambda,\gamma-\lambda) = (2j-1)g\left(-st,\frac{1}{t}\right) . \label{gxgy} \end{equation} The relation between the $\gamma_x$ and $\gamma_y$ parameters and those of the RG model classifies the quadrants of the phase diagram of Fig. \ref{fig1} in terms of the hyperbolic ($s=-1$) and trigonometric ($s=1$) RG models. The first ($s=-1$, $g>0$) and third ($s=-1$, $g<0$) quadrants correspond to the hyperbolic RG model, whereas the second ($s=1$, $g>0$) and fourth ($s=1$, $g<0$) are associated with the trigonometric model. These regions are indicated in Figure \ref{fig1} by dark gray zones for the trigonometric model and light zones for the hyperbolic one. The rational RG model is limited to the $\gamma_x=0$ and $\gamma_y=0$ lines. The phase transition lines, discussed in section \ref{LMGbos} and shown in Fig. \ref{fig1} by dashed lines, are translated to the RG parameters in table \ref{tabla}.
\begin{table} \begin{center} \begin{tabular}{|c|c|ccc|} \hline transition & line & \multicolumn{3}{c|}{ RG parameters} \\ \hline first-order&$\gamma_x=\gamma_y$ , $\gamma_x< -1$ & $s=-1$ & $t=1$& $(2j-1)g= -1$ \\ \hline second-order& $\gamma_x=-1$, $\gamma_y>-1$ & $s=+1$& & $(2j-1)g= +\frac{1}{t}$ \\ & & $s=-1$ & $t>1$& $(2j-1)g= -\frac{1}{t} $\\ \hline second-order&$\gamma_x>-1$, $\gamma_y=-1$ & $s=+1$& & $(2j-1)g= -t$\\ & & $s=-1$& $t<1$& $(2j-1)g = -t$\\ \hline \end{tabular} \end{center} \caption{Transition lines in the phase diagram $\gamma_x$, $\gamma_y$ and their translation to the RG parameters.} \label{tabla} \end{table} The LMG model has symmetries that relate the spectra of systems at two different points in the parameter space. The first of these symmetries is a point reflection through the origin, and relates systems obtained by a simple change of sign of the parameters, $(\gamma_x, \gamma_y)\rightarrow (\gamma'_x, \gamma'_y)=(-\gamma_x,- \gamma_y)$. This change of sign is equivalent to a global sign change of the Hamiltonian, implying that the spectrum of a system is minus the spectrum of the transformed system. In terms of the RG parameters, this transformation corresponds to $g\rightarrow g'=-g$. A second symmetry of the LMG model is a reflection across the line $\gamma_y=\gamma_x$. Two mirror points of the phase diagram symmetrically located around this line have exactly the same energy spectrum, as a result of the invariance of the $SU(2)$ algebra under the canonical transformation $S_+\rightarrow -i S_+,~ S_{-}\rightarrow i S_{-},~S_z \rightarrow S_{z}$, corresponding to $b\rightarrow i b, a\rightarrow a$ in the Schwinger boson realization \cite{Van}. In terms of the RG parameters, this symmetry implies that systems with parameters related by \begin{equation} (g,t) \rightarrow (g',t')=(-sg,1/t), \label{mirror} \end{equation} have the same spectrum.
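Both symmetries follow directly from the mapping (\ref{gxgy}) and can be checked in a few lines (our own sketch; the function name is ours):

```python
# Scaled couplings (gamma_x, gamma_y) from the RG parameters (s, g, t),
# Eq. (gxgy); used here only to check the two spectral symmetries above.
def gammas(j, s, g, t):
    return (-(2 * j - 1) * g * s * t, (2 * j - 1) * g / t)

gx, gy = gammas(15, -1, 0.1, 0.5)
mx, my = gammas(15, -1, 0.1, 2.0)   # mirror (g, t) -> (-s*g, 1/t), s = -1
```

The mirror transformation (\ref{mirror}) indeed returns $(\gamma'_x,\gamma'_y)=(\gamma_y,\gamma_x)$, a reflection across the line $\gamma_y=\gamma_x$, while $g\rightarrow -g$ gives $(-\gamma_x,-\gamma_y)$, the point reflection through the origin.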
The $\gamma_y=\gamma_x$ line ($s=-1$ and $t=1$ in terms of RG parameters), located within the hyperbolic regions of the phase diagram, has the peculiarity that the RG solutions display a singular behavior, as will be shown in section \ref{GSQPT}. However, the eigenstates along this line can be easily obtained in closed form by resorting, for instance, to the LMG Hamiltonian in terms of pseudo-spin operators. The condition $\gamma_x=\gamma_y$ implies $\lambda=0$ and results in the following LMG Hamiltonian \begin{eqnarray} H_{L}&=&\epsilon \left(S_z+\frac{\gamma_x}{2(2j-1)}\left(S_+ S_-+S_-S_+ \right)\right)=\epsilon \left(S_z+g\left(S^2-S_z^2\right)\right), \end{eqnarray} which commutes with both $\bf S^2$ and $S_z$, and has eigenstates $|jm\rangle$ and eigenvalues \begin{equation} E_m=\epsilon \left(m+\frac{\gamma_x}{(2j-1)}\left(j(j+1)-m^2\right)\right)=\epsilon \left(m+g\left(j(j+1)-m^2\right)\right). \label{diagE} \end{equation} The conservation of the $S_z$ operator along this critical line allows the existence of real crossings between states of the same parity. These crossings take place when $E_{m}=E_{m\pm 2}$. For instance, the energy $E_{m=-j}$ of the $g=0$ $P=+$ ground state crosses the first $P=+$ excited state energy, $E_{m=-j+2}$, when $\gamma_x=\gamma_y=-(2j-1)/(2j-2)$, or in terms of the RG model parameters when $s=-1$, $t=1$ and $g=-1/(2j-2)$. Having related the two-level $SU(1,1)$ RG models with the LMG model, we can now explore the exact RG solutions in each subspace defined by the number of boson pairs $M$ and seniorities $\nu_a$, $\nu_b$. For integer $j$ the number of Schwinger bosons, $N=n_a+n_b=2j$, is even, the seniorities are equal, $\nu_a=\nu_b\equiv\nu=0$ or $1$, and the number of boson pairs is $M=j-\nu$. The seniority sectors $\nu=0$ and $\nu=1$ correspond to the two invariant sub-spaces $P=+$ and $P=-$ respectively.
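The closed-form spectrum (\ref{diagE}) makes the crossing quoted above immediate to verify numerically; the short check below is ours:

```python
# Eigenvalues E_m of Eq. (diagE) along the gamma_x = gamma_y line, where
# S_z is conserved; a small numerical check (ours) of the level crossing.
def E_m(j, g, m, eps=1.0):
    return eps * (m + g * (j * (j + 1) - m**2))

j = 15
g_cross = -1.0 / (2 * j - 2)   # i.e. gamma_x = gamma_y = -(2j-1)/(2j-2)
gap = E_m(j, g_cross, -j + 2) - E_m(j, g_cross, -j)
```

At $g=g_{cross}$ the gap between the two lowest $P=+$ levels vanishes; away from it the gap, $2+g(4j-4)$, is linear in $g$ and equals $2\epsilon$ at $g=0$.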
The eigenvalues $r_i$ of the $R_i$ integrals of motion are \cite{NPB707} $$ r_i=d_i\left( 1+ 2 \sum_{\alpha}^M Z(t_i,e_\alpha) + 2\sum_{j\not=i}d_j Z(t_i,t_j)\right), $$ where $d_i=\frac{1}{2}\left(\nu_i+\frac{1}{2}\right)$, $Z$ is the function defined in (\ref{XZ}), and $e_\alpha$ are the so-called spectral parameters or pairons. Each particular eigenstate is completely defined by a particular set of $M$ pairons solving the coupled Richardson equations $$ 1+2\sum_{i} d_i Z(e_\alpha,t_i)+2\sum_{\beta\not=\alpha}^M Z(e_\alpha,e_\beta)=0. $$ For the particular case of the LMG model, with two $SU(1,1)$ copies, $t_b=-t_a=t$, and integer $j$ ($\nu_a=\nu_b=\nu=0,1$), the Richardson equations reduce to \begin{equation} \frac{1-2gs\left[M+\nu-\frac{1}{2} \right]e_\alpha}{1+s e_\alpha^2}+g\left(\nu+\frac{1}{2}\right)\left(\frac{1}{e_\alpha +t}+\frac{1}{e_\alpha-t}\right)+2g \sum_{\beta\not=\alpha}^M\frac{1}{e_\alpha-e_\beta}=0. \end{equation} The eigenvalues of the LMG Hamiltonian are given by \begin{eqnarray} E_L&=&\epsilon(r_b-r_a)-\frac{\gamma}{4}= g\epsilon\frac{(1-s t^2)}{2t}\nu(\nu+1)+ 2g\epsilon \left(\nu+\frac{1}{2}\right)t \sum_{\alpha}\frac{1+s e_\alpha^2}{t^2-e_\alpha^2}. \label{enerLip} \end{eqnarray} The unnormalized eigenvectors common to the two integrals of motion ($R_i$) and, consequently, to the LMG Hamiltonian are \begin{equation} \prod_{\alpha=1}^M \left(\frac{a^\dagger a^\dagger}{e_\alpha+t}+\frac{b^\dagger b^\dagger}{e_\alpha-t}\right)|\nu_a \nu_b\rangle . \label{wavfun} \end{equation} The Richardson equations can be interpreted as an electrostatic problem in two dimensions \cite{Elect1, Elect2}.
In order to make this connection explicit, we rewrite the Richardson equations as \begin{equation} \frac{Q_C}{e_\alpha- P_C}+ \frac{Q_D}{e_\alpha-P_D}+ \frac{\nu+\frac{1}{2}}{2}\left(\frac{1}{e_\alpha+t}+\frac{1}{e_\alpha-t}\right)+\sum_{\beta\not=\alpha}^M\frac{1}{e_\alpha-e_\beta}=0, \label{RichEqsEl} \end{equation} with the effective charges $Q_C$ and $Q_D$ \begin{eqnarray} Q_C&=&\frac{1}{4g\sqrt{-s}}-\frac{2 j-1}{4}\nonumber \\ Q_D&=&-\frac{1}{4g\sqrt{-s}}-\frac{2 j-1}{4} \label{chargesEff}, \end{eqnarray} located at positions $P_C=-1/\sqrt{-s}$ and $P_D=1/\sqrt{-s}$ respectively. The pairons have a positive unit charge and they are located at positions $e_\alpha$ in the complex plane. Eq. (\ref{RichEqsEl}) describes the electrostatic interaction of a set of $M$ pairons with positive unit charge in a two dimensional space. The first two terms in (\ref{RichEqsEl}) describe the electrostatic interaction of the pairons with the two charges $Q_C$ and $Q_D$. For the trigonometric case ($s=1$) the effective charges are complex, $Q_C=-\frac{2 j-1}{4}-\frac{i}{4 g}$, $Q_D=Q_C^*$, and they are located at $P_C=i$, $P_D=-i$. In the hyperbolic case ($s=-1$), both the charges and their positions are real, $Q_C=-\frac{2 j-1}{4}+\frac{1}{4g}$ and $Q_D= -\frac{2 j-1}{4}-\frac{1}{4g}$, located at $P_C=-1$ and $P_D=1$ respectively. The third term in (\ref{RichEqsEl}) represents the interaction of the pairons with two charges $\frac{\nu+\frac{1}{2}}{2}$ at positions $\mp t$. Finally, the fourth term corresponds to the mutual repulsion between pairons. Each independent solution of the Richardson equations determines the equilibrium positions of the pairons in the complex plane. The electrostatic mapping will be useful to interpret the pairon distributions in each of the quantum phases of the LMG model and the structural changes that take place close to the quantum phase transitions.
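A short sketch (ours) of the charges (\ref{chargesEff}) makes the two cases explicit:

```python
import cmath

# Effective charges Q_C, Q_D and positions P_C, P_D of Eq. (chargesEff);
# our own illustration of the trigonometric vs hyperbolic cases.
def charges(j, g, s):
    r = cmath.sqrt(-s)                   # r = 1 for s = -1, r = i for s = +1
    QC = 1 / (4 * g * r) - (2 * j - 1) / 4
    QD = -1 / (4 * g * r) - (2 * j - 1) / 4
    return QC, QD, -1 / r, 1 / r         # charges, then positions P_C, P_D
```

For $s=1$ one recovers $Q_D=Q_C^*$ at $P_C=i$, $P_D=-i$; for $s=-1$ everything is real, with total effective charge $Q_C+Q_D=-(2j-1)/2$.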
\subsection{The Richardson solution as the roots of a generalized Heine-Stieltjes polynomial} The standard way to solve the Richardson equations is to start from the weak coupling limit where the solution is known [see Eq. (\ref{smallg}) below]. The coupling strength $g$ is increased gradually, using the previous solution as an initial guess to solve the equations for the updated $g$ by means of a standard Newton-Raphson method. A recursive use of this strategy allows one to reach the solution for an arbitrary value of $g$, provided one is able to develop a method to treat the numerical instabilities appearing when two or more pairons converge to the same point (at the position of the $t_i$ parameters in this case), generating singularities in the equations \cite{Romb}. Recently, two related methods for solving the Richardson equations have been presented (\cite{solvPan, solvPan2} and \cite{solvGritsev}). Both methods exploit the relation between the Richardson equations and the Lam\'e ordinary differential equation, which has a generalized form of the Heine-Stieltjes polynomials as a solution. The roots of these polynomials are precisely the spectral parameters or pairons. These methods have the advantage of being numerically more stable for systems of moderate size. For larger systems they suffer instabilities due to the large precision needed to calculate the roots of a polynomial of high degree. Here we follow the method of references \cite{solvPan, solvPan2}, which is more adequate for systems with a small number of levels \cite{solvLinks2}, as is the case for the LMG model. We begin with the Richardson equations in the form (\ref{RichEqsEl}), which can be written as \begin{equation} \sum_{\beta\not= \alpha}\frac{1}{e_\alpha-e_\beta}= - \sum_{k=1}^4 \frac{\rho_k}{e_\alpha-\eta_k}, \label{REF} \end{equation} with $(\rho_k, \eta_k)=(Q_C,-1/\sqrt{-s}), (Q_D,1/\sqrt{-s}), ((2\nu+1)/4,-t),$ and $((2\nu+1)/4,t)$ for $k=1,2,3,4$ respectively.
Let us now define the polynomial $P(x)=\prod _{\alpha=1}^M (x-e_\alpha)$, which can be expanded in powers of $x$ as \begin{equation} P(x)=\sum_k a_k x^k, \label{P} \end{equation} whose roots are the set $\{e_\alpha\}$ for a particular solution of the Richardson equations. This polynomial is a generalized Heine-Stieltjes polynomial that satisfies the following Lam\'e ordinary differential equation (see \ref{app1}): \begin{equation} A(x) P''(x)+ B(x) P'(x)-V(x)P(x)=0, \label{edo} \end{equation} where the functions $A$, $B$ and $V$ are polynomials defined as \begin{equation} A(x)= \prod_{k=1}^4(x-\eta_k),\ \ \ \ \ B(x)= A(x)\sum_{k=1}^4 \frac{2\rho_k}{x-\eta_k},\ \ \ \ {\hbox{and}}\ \ \ \ V(x)= \sum_{k=1}^4 2 \rho_k \Lambda(\eta_k)\prod_{l\not=k}(x-\eta_l), \label{ABV} \end{equation} with \begin{equation} \Lambda(x)\equiv \frac{P'(x)}{P(x)}=\sum_\alpha \frac{1}{x-e_\alpha}. \label{V2} \end{equation} The polynomials $A(x)$ and $B(x)$, of degree $4$ and $3$ respectively, depend only on the parameters of the LMG Hamiltonian (\ref{gxgy}). $V(x)$, the so-called Van Vleck polynomial, is at most of third order and depends on the values $\Lambda(\eta_k)$, which in turn depend on the set of pairons $e_\alpha$, \begin{equation} V(x)=\sum_{i=0}^3 b_i x^i. \label{V} \end{equation} For a general problem \cite{solvPan}, one can insert the polynomials $V(x)$ (\ref{V}) and $P(x)$ (\ref{P}) into the ordinary differential equation (\ref{edo}) and, by equating to zero the coefficients at each order in $x$, one obtains two systems of equations for the coefficients $b_i$ and $a_i$. The first set of equations is linear, allowing a solution in which the coefficients $b_i$ are expressed in terms of the $a_i$, leaving a second set of non-linear equations for the $a_i$ coefficients. Finally, the $a_i$ coefficients determine the polynomial (\ref{P}) whose roots are the pairons $e_{\alpha}$ of the Richardson equations.
For the particular case of the LMG model, the first system of linear equations allows one to determine the coefficients $b_i$ of the Van Vleck polynomial (except $b_0$) directly in terms of the parameters of the problem (i.e. they are independent of the $a_i$ coefficients). As a consequence, the second set of equations is linear in the coefficients $a_i$. From Eq. (\ref{ABV}) we obtain: \begin{eqnarray} A(x)&=&-s t^2+(s-t^2)x^2+x^4\nonumber\\ B(x)&=& \frac{-s t^2}{g}+\left[ t^2(2j-1)+s(2\nu + 1)\right]x+\frac{s}{g}x^2-2(M-1)x^3. \end{eqnarray} After substitution of these polynomials into the differential equation (\ref{edo}), and from the terms of order $M+3$, $M+2$ and $M+1$, we obtain $b_3=0$, $b_2=-M(M-1)$ and $b_1=s M/g$, i.e. the Van Vleck polynomial is completely determined except for the zeroth order coefficient: $$ V(x)=b_0+\frac{s M}{g}x-M(M-1)x^2. $$ We can derive the $b_0$ coefficient and the coefficients $a_i$ from the orders $0$ to $M$ of the differential equation (\ref{edo}). The result is an eigenvalue equation: $$ \sum_{k'=0}^M D_{kk'}a_{k'}= b_0 a_{k} \ \ \ {\hbox {with }} k=0,1,...,M, $$ where the matrix $D_{k k'}$ is completely determined by the parameters of the model ($t,g,M,\nu,s$). Its non-zero matrix elements are given by \begin{eqnarray} D_{k\ k-2}&=& (k-2)(k-1-2 M)+M(M-1)\nonumber\\ D_{k\ k-1}&=& s(k-M-1)/g\nonumber\\ D_{k\ k} &=& k((2j-k)t^2+s(2\nu +k))\nonumber\\ D_{k\ k+1}&=& -s(k+1)t^2/g\nonumber\\ D_{k\ k+2}&=& -s(k+2)(k+1)t^2. \end{eqnarray} Therefore, the coefficients $a_k$ of the polynomial $P(x)$ are the elements of each eigenvector of the matrix $D$. Once the coefficients $a_k$ are known, the roots $e_\alpha$ of the Richardson equations are obtained by finding the roots of the polynomial $P(x)$. Each of the $M+1$ eigenvectors of the matrix $D$ defines a polynomial whose roots $e_{\alpha}$ correspond to a particular eigenstate of the LMG Hamiltonian.
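The whole procedure fits in a short routine. The sketch below is our own implementation (dense linear algebra plus \texttt{numpy.roots}, hence adequate only for small $M$): it builds $D$, takes the coefficients $a_k$ from each eigenvector, extracts the pairons as the roots of $P(x)$, and evaluates the energy formula (\ref{enerLip}); for small $j$ the $\nu=0$ energies can be checked against the $P=+$ eigenvalues of (\ref{LipMo}):

```python
import numpy as np

# Sketch (ours) of the Heine-Stieltjes method of the text: eigenvectors of D
# give the coefficients a_k of P(x), whose roots are the pairons e_alpha;
# the LMG energies then follow from Eq. (enerLip).
def lmg_spectrum_hs(j, g, t, s, nu=0, eps=1.0):
    M = j - nu
    D = np.zeros((M + 1, M + 1))
    for k in range(M + 1):
        if k >= 2:
            D[k, k - 2] = (k - 2) * (k - 1 - 2 * M) + M * (M - 1)
        if k >= 1:
            D[k, k - 1] = s * (k - M - 1) / g
        D[k, k] = k * ((2 * j - k) * t**2 + s * (2 * nu + k))
        if k + 1 <= M:
            D[k, k + 1] = -s * (k + 1) * t**2 / g
        if k + 2 <= M:
            D[k, k + 2] = -s * (k + 2) * (k + 1) * t**2
    energies = []
    for a in np.linalg.eig(D)[1].T:       # one eigenvector per eigenstate
        e = np.roots(a[::-1])             # pairons: roots of sum_k a_k x^k
        E = (g * eps * (1 - s * t**2) / (2 * t) * nu * (nu + 1)
             + 2 * g * eps * (nu + 0.5) * t * np.sum((1 + s * e**2) / (t**2 - e**2)))
        energies.append(E.real)
    return np.sort(energies)
```

Note that the spectrum comes out without any iteration in $g$; the parameters $(\lambda,\gamma)$ of the equivalent LMG Hamiltonian follow from $\lambda/\epsilon=-g(1+st^2)/(2t)$ and $\gamma/\epsilon=g(1-st^2)/(2t)$.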
It is important to remark here that the drawback, or bottleneck, of the method resides in the last step. As is well known, finding the roots of high degree polynomials requires a high precision in the determination of the coefficients. Therefore, the number of pairs $M$ is limited to $\sim 10^2$--$10^3$. Conversely, the method allows one to find the pairon roots directly, without resorting to the iterative method of gradually increasing the coupling strength with the burden of having to deal with the singularities of the Richardson equations. \section{Numerical Results for the ground state} \label{GSQPT} In this section we will present and discuss the numerical solution of the Richardson equations for the ground state in the different phases, using the trigonometric or the hyperbolic model as required for each particular phase. \subsection{Trigonometric quadrants} In order to illustrate typical results for the trigonometric regions ($s=1$), we will consider the line with $t=1$ as a function of $g$. This line corresponds to $\gamma=0$, which cancels the third term in the LMG Hamiltonian (\ref{LipMo}). The resulting Hamiltonian is the one most frequently used in the literature, also known as the Lipkin Hamiltonian. In terms of the scaled parameters this line corresponds to $\gamma_y=-\gamma_x=(2j-1)g$. For increasing values of $g$, we move along the line from the fourth to the second quadrant of the phase diagram. According to this diagram, a second order phase transition takes place in the thermodynamic limit for $\gamma_y=\pm 1$, or equivalently for $g=g_{cr}=\pm 1/(2j-1)$ (see table \ref{tabla}). The $e_\alpha$ pairons for the ground state as a function of the ratio $g/g_{cr}=g(2j-1)$ are shown in Fig. \ref{fig2} for a system with $j=15$.
\begin{figure}[t*] \centering{ \includegraphics[width=0.6\textwidth]{fig2.pdf} } \caption{ Ground state pairons as a function of $g(2j-1)$ for the trigonometric case ($s=1$) with $t=1$ and $j=15$, corresponding to $\gamma_y=-\gamma_x$ as indicated by the arrow line in the inset. Here the arrow corresponds to increasing values of $g$. The vertical dashed line indicates the critical value of $g=g_{cr}=-1/(2j-1)$. Pairons for positive and negative values of $g$ are related by $e_\alpha\rightarrow 1/e_\alpha$.} \label{fig2} \end{figure} As can be seen in the figure, for $g\sim 0$ all the pairons are located close to $t=-1$. As the strength of $g$ is increased they expand along the real axis. For negative $g$, the pairons are constrained to the interval $[-t,t]$, and they behave in a very similar way to the rational case already discussed in the context of the IBM model \cite{PittDuk}. The second order phase transition can be interpreted as a localization-delocalization transition. The pairons, initially localized close to $t=-1$, expand to the entire interval $[-t,t]$ at the transition point. For positive $g$ the solutions are, except for a sign in the wave function, entirely equivalent to the negative $g$ case. As discussed in section \ref{LMG-RG}, the Richardson solutions for two mirror points symmetrically located around the $\gamma_x=\gamma_y$ line [$(g,t)$ and $(-s g,1/t)$ in terms of RG parameters] have the same spectrum, and the wave functions are related by the canonical transformation $b\rightarrow ib$. This symmetry is reflected by a simple relation between the pairons with negative and positive $g$, given by $e_\alpha \rightarrow 1/e_\alpha$.
It is straightforward to show that the energy in Eq. (\ref{enerLip}) is invariant under the transformation $(g,t,e_\alpha)\rightarrow (-s g,1/t,1/e_\alpha)$, and that this transformation produces a change in the relative sign of the two terms appearing in the product wave function (\ref{wavfun}), in agreement with the canonical transformation $b\rightarrow ib$. In the trigonometric quadrants, the dynamics of the pairons as a function of the control parameters $[\gamma_x,~\gamma_y]$ takes place entirely on the real axis, very much like the already known dynamics of pairons in the rational boson pairing models \cite{PittDuk}. \subsection{Hyperbolic quadrants} The hyperbolic regions ($s=-1$) of the phase diagram offer much richer structures than the trigonometric regions. In order to illustrate this point, we will study a system with $j=15$ and $t=1/2$, which corresponds to the line $\gamma_y= 4 \gamma_x$ in the phase diagram of Fig. \ref{fig1}. The line traverses the third and first quadrants from below for increasing values of $g$, and has a critical point of a second order phase transition at $\gamma_{y}=-1$, corresponding to $g(2j-1)=-t=-1/2$ (see table \ref{tabla}). \begin{figure}[t*] \centering{ \includegraphics[width=0.6\textwidth]{fig3.pdf} } \caption{ Real part of the ground state pairons as a function of $g(2j-1)$ for the hyperbolic case ($s=-1$) with $t=1/2$ and $j=15$, corresponding to $\gamma_y=4\gamma_x$ as indicated by the arrow line in the inset. The arrow corresponds to increasing values of $g$. The vertical dashed line indicates the critical value $g=g_{cr}=-1/(4j-2)$ corresponding to the line $\gamma_y=-1$. For positive $g$, successive collapses of pairons at the position $P_C=-1$ (horizontal dashed line) of the effective charge $Q_C$ can be seen.} \label{fig3} \end{figure} Fig. \ref{fig3} shows the ground state pairon roots as a function of $g(2j-1)$. Similarly to the trigonometric results, the pairons converge to $-t$ for $g\rightarrow 0$.
For negative values of $g$ the pairons are constrained to the interval $[-t,t]$, with the phase transition (vertical dashed line) signaled by the delocalization of the pairons over this interval. For positive $g$ (where no phase transition is expected) an interesting behavior of the pairons takes place. As the coupling $g$ is increased, the pairons collapse successively to the position ($P_C=-1$) of the effective charge $Q_C$. In \ref{app2} it is shown that a necessary condition to have $N_C$ pairons collapsing to $P_C=-1$ is \begin{equation} g=g^c_{N_C}\equiv\frac{1}{2j+1-2 N_C}, \label{cond1} \end{equation} with $N_C=1,...,M$ and $0<g^c_{N_C=1}<g^c_{N_C=2}<...<g^c_{N_C=M}$. According to this expression, the first collapse occurs for $N_C=1$ (one collapsing pairon) at $g(2j-1)=g^c_1 (2j-1)=1$, then $N_C=2$ pairons converge to $P_C=-1$ at $g(2j-1)=g^c_2(2j-1)=(2j-1)/(2j-3)$, and so on. After the collapse of an even number of pairons a new complex conjugated pair of pairons is created. In figure \ref{fig4}, the real and imaginary parts of the pairons are shown for positive $g$ values. The successive collapses and the creation of complex conjugated pairs can be clearly seen. This behavior was completely unexpected in bosonic RG models, where pairons had always been constrained to the real axis. \begin{figure}[t*] \centering{ \begin{tabular}{lr} \includegraphics[width=0.415\textwidth]{fig4a.pdf}& \includegraphics[width=0.43\textwidth]{fig4b.pdf} \end{tabular} } \caption{ The collapses of the ground state pairons as a function of $g(2j-1)$ for the hyperbolic case of figure \ref{fig3} ($t=1/2$ and $j=15$). Left and right panels show the real and imaginary parts of the pairons respectively. The collapses take place at the position $P_C=-1$ (horizontal dashed line in the left panel) of the effective charge $Q_C$, at the values of $g$ given in (\ref{cond1}). The dashed vertical lines indicate the number of pairons $N_C=1,...,8$ involved in the collapse.
After the collapse of an even number of pairons a complex conjugated pair is created, as can be seen in the right panel of the figure. } \label{fig4} \end{figure} A particular situation occurs when all pairons collapse at $P_C=-1$ for $ g=g^c_{N_C=j}=1$. At this particular point the exact ground state eigenstate (\ref{wavfun}) takes the simple form: $$ |\Psi\rangle_{MR}=\left( \frac{a^\dagger a^\dagger}{t-1}-\frac{b^\dagger b^\dagger}{t+1} \right)^j |0\rangle, $$ which would be the boson version of the Moore-Read state found for the $p_x + i p_y$ model \cite{Ger1,Links2,Hyp}, derived from the fermionic hyperbolic RG model. Likewise, as in the fermionic model, the energy of this state is $E=0$. However, within the LMG model it can be shown by exact diagonalization of very large systems that the collapse of all pairons at $P_C=-1$ is not associated with a ground state phase transition, a subject still under debate for the $p_x + i p_y$ fermionic model \cite{Ger1,Links2,Hyp}. It is worth mentioning here that the condition (\ref{cond1}) of pairons converging to the value $P_C=-1$ applies equally to the excited states, and that similar collapses at the position ($P_D=+1$) of the effective charge $Q_D$ occur for excited states in the $g<0$ interval at values given by \begin{equation} g=g^c_{N_D}=-\frac{1}{2j+1-2 N_D}, \label{cond2} \end{equation} with $N_D=1,...,M$ and $g^c_{N_D=M}<g^c_{N_D=(M-1)}<...<g^c_{N_D=1}<0$. These issues will be discussed in section \ref{excited}, where it will be shown that, even if the collapses are not associated with a ground state phase transition, they are related to the crossings of excited states of different parities. \subsection{The triple point $\gamma_y=\gamma_x=-1$} The triple point in the phase diagram of the Lipkin model, $(\gamma_x,\gamma_y)=(-1,-1)$, constitutes one of the rare examples of a third order phase transition in quantum many-body systems.
As such, it deserves a thorough study because it could shed light on other third order QPTs, such as the one taking place in the $p_x+i p_y$ model \cite{Hyp}. The third order character of this phase transition, reported in \cite{Castanos}, is observed when the critical point is crossed, for instance, along the line $\gamma_y=-\gamma_x-2$. In figure \ref{paironsCR}.a the behavior of the ground state pairons close to the triple point is examined for a system of size $j=10$, moving in the phase diagram along the lines $\gamma_y=-\gamma_x+b$, for three values of $b$ ($b=-2.0$ grey solid line, $b_{cr}= -2.\overline{1}$ dashed line, and $b=-2.32$ black solid line). We move along these lines using the parameter $t$ of the RG model (\ref{gxgy}). The value $t=1$ corresponds to the point $\gamma_y=\gamma_x=b/2$. The critical value of $b$ is $b_{cr}=-2 (2 j-1)/(2j-2)$, which satisfies $b_{cr}<-2$ for finite systems and $b_{cr}\rightarrow -2$ in the thermodynamic limit. As already discussed, at the points $\gamma_x=\gamma_y$ the LMG Hamiltonian is diagonal in the basis $|jm=(-j + 2k)\rangle\propto (a^ \dagger a^\dagger)^ {(j-k)} (b^\dagger b^\dagger)^{k}|0\rangle$, with eigenvalues given by (\ref{diagE}). The energies $E_{jm=-j}$ and $E_{j m=(-j+2)}$ cross at $\gamma_x=\gamma_y=-(2 j-1)/(2j-2)$. Therefore, the line $\gamma_y=-\gamma_x+b_{cr}$ traverses a point at which the positive parity states $|jm=-j\rangle$ and $|j,m=-j+2\rangle$ are degenerate. \begin{figure}[t*] \centering{ \begin{tabular}{lr} \includegraphics[width=0.455\textwidth]{fig5a.pdf}& \includegraphics[width=0.45\textwidth]{fig5b.pdf} \end{tabular}} \caption{Pairons close to the triple point $(\gamma_x,\gamma_y)=(-1,-1)$ for a $j=10$ system as a function of $t$. $t=1$ corresponds to the line $\gamma_x=\gamma_y$ in the phase diagram. The different lines are associated with three values of $b$ along the line $\gamma_y=-\gamma_x+b$.
The values $b=-2.0$, $b_{cr}\approx -2.11$ and $b=-2.32$ are represented by grey, dashed, and black lines respectively, with $b_{cr}=-2 (2j-1)/(2j-2)$. Panel ({\bf a}) describes the behavior of the ten pairons in the interval $0.5\leq t \leq1.0$. Panel ({\bf b}) is a close-up of the tenth pairon around $t=1.0$ showing the discontinuous jump for $b=b_{cr}$. } \label{paironsCR} \end{figure} Fig. \ref{paironsCR}.a shows the behavior of the 10 pairons in the interval $0.5\leq t \leq1.0$ for the three values of $b$. In the three cases the behavior of the nine lowest pairons is similar, all of them converging to $-1$ in the limit $t\rightarrow 1$. However, the last pairon close to $t=1$ clearly distinguishes the three cases studied. For $b>b_{cr}$ the last pairon converges to $-1$ like the other nine pairons, whereas for $b<b_{cr}$ the last pairon converges to $+1$. The critical value $b=b_{cr}$ separates both regions. In this case the last pairon converges to $e_-= \frac{\sqrt{j(2j-1)}-1}{\sqrt{j(2j-1)}+1} \approx 0.865$ for $t=1$. The second panel, Fig. \ref{paironsCR}.b, shows more clearly the behavior of the last pairon near $t=1$ for $b=b_{cr}$ and $b<b_{cr}$. Here, the horizontal scale has been extended to $t>1$ using the symmetry transformation (\ref{mirror}) [$(g,t,e_\alpha)\rightarrow (g,1/t,1/e_\alpha)$]. While for $b<b_{cr}$ the last pairon changes continuously across the value $t=1$, for $b=b_{cr}$ a discontinuity in the last pairon occurs at $t=1$ due to the crossing of positive parity states, jumping from $e_-\approx 0.865$ to $e_+=(1/e_-)\approx 1.156$. The degeneracy at the point $\gamma_x=\gamma_y=b_{cr}/2$ is associated with the states $|jm=-j\rangle\propto (a^\dagger a^\dagger)^j|0\rangle$ and $|j m=(-j+2)\rangle\propto (a^\dagger a^\dagger)^{(j-1)} (b^\dagger b^\dagger) |0\rangle$.
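As a quick numerical cross-check of the critical values quoted above, the following short sketch (not part of the formalism of the paper; the helper names are ours) evaluates $b_{cr}=-2(2j-1)/(2j-2)$ and the limits $e_-$ and $e_+=1/e_-$ of the last pairon for the $j=10$ system:

```python
import math

def b_critical(j):
    """Critical offset of the line gamma_y = -gamma_x + b: b_cr = -2(2j-1)/(2j-2)."""
    return -2.0 * (2 * j - 1) / (2 * j - 2)

def last_pairon_limits(j):
    """Limits e_- (t -> 1^-) and e_+ = 1/e_- (t -> 1^+) of the last pairon at b = b_cr."""
    r = math.sqrt(j * (2 * j - 1))
    e_minus = (r - 1.0) / (r + 1.0)
    return e_minus, 1.0 / e_minus

j = 10
print(b_critical(j))                  # -2.111..., i.e. b_cr < -2 for finite j
e_m, e_p = last_pairon_limits(j)
print(round(e_m, 3), round(e_p, 3))   # 0.865 and 1.156, the jump of the last pairon at t = 1
print(b_critical(10**6))              # approaches -2 in the thermodynamic limit
```

The last line illustrates that $b_{cr}\rightarrow-2$ for large $j$, recovering the triple point $\gamma_x=\gamma_y=-1$.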
As can be inferred from the exact wave function (\ref{wavfun}), the limit $t\rightarrow 1^-$ with $b=b_{cr}$ produces the right eigenstate $$|\Psi\rangle=(a^\dagger a^\dagger)^{(j-1)} \left(\frac{a^\dagger a^\dagger}{e_-+1} + \frac{b^\dagger b^\dagger}{e_--1} \right)|0\rangle\propto |\Psi_-\rangle\equiv \frac{ |j m=-j\rangle - |j m=(-j+2)\rangle}{\sqrt{2}}, $$ which is a linear combination of the two degenerate states. The limit $t\rightarrow 1^+$ (with the last pairon converging to $e_+\approx 1.156$) produces the other degenerate state, $|\Psi_+\rangle\equiv(1/\sqrt{2})(|j m=-j\rangle+|j m=(-j+2)\rangle)$, orthogonal to the previous one. In summary, when the system traverses the line $\gamma_y=\gamma_x$ along the line $\gamma_y=-\gamma_x+ b_{cr}$ the ground state wave function presents a discontinuity, changing from $|\Psi_-\rangle$ to $|\Psi_+\rangle$. In general, the first order phase transition along the line $\gamma_y=\gamma_x$ (with $\gamma_x<1$) is due to the crossing of the states $|jm\rangle$ and $|j (m+2)\rangle$. As illustrated above for the particular case of the states $|jm=-j\rangle$ and $|j m=(-j+2)\rangle$, the behavior of the pairons near this line reflects these crossings by a discontinuity in their values. As a result, crossing the line $\gamma_y=\gamma_x$ (with $\gamma_x<1$) implies a jump from a $|\Psi_{-m}\rangle$ ground state to a $|\Psi_{+m}\rangle$ ground state, where $|\Psi_{\pm m}\rangle\equiv \frac{1}{\sqrt{2}}(|j m\rangle \pm |j (m+2)\rangle)$. This first order phase transition is signaled by a discontinuous change in the order parameters $\langle S^2_x\rangle$ and $\langle S^2_y\rangle$.
It can be shown that in the thermodynamic limit the order parameters for these states are \begin{eqnarray} \frac{\langle \Psi_{\pm m}| S_x^2 |\Psi_{\pm m}\rangle}{j^2}&=& \frac{2\pm 1}{4}\left( 1-\left(\frac{m}{j}\right)^2\right) \nonumber\\ \frac{\langle \Psi_{\pm m}| S_y^2 |\Psi_{\pm m}\rangle}{j^2}&=&\frac{2\mp 1}{4}\left( 1-\left(\frac{m}{j}\right)^2\right). \end{eqnarray} Therefore, a jump from $|\Psi_{- m}\rangle$ to $|\Psi_{+m}\rangle$, for $m\not=-j$, produces a discontinuity in the order parameters characterizing a first order phase transition, in complete accord with the analysis of section 1. For the particular case of figure \ref{paironsCR} ($m=-j$) both order parameters vanish at the critical point, preventing a first order phase transition. The critical value for this continuous phase transition is $b_{cr}=-2 (2 j-1)/(2j-2)\rightarrow -2$ in the thermodynamic limit, corresponding to the triple point $\gamma_x=\gamma_y=-1$, in complete agreement with the thermodynamic results of reference \cite{Castanos}. \section{Excited states in the hyperbolic LMG model} \label{excited} We will study in this section the excited states for the hyperbolic regions ($s=-1$) of the LMG model in terms of the pairon dynamics of the RG model. A similar description for the rational bosonic RG model was performed in \cite{PittDuk}. Pairons for the rational as well as for the trigonometric bosonic RG models are always real. The hyperbolic model has the particular feature that the pairons can take complex values. For the sake of clarity, let us assume the specific value $t=1/2$, the results being generic for the cases $t<1$. The effective charges $Q_C$ and $Q_D$ of the Richardson equations (\ref{RichEqsEl}), located at positions $P_C=-1$ and $P_D=1$, lie outside the interval $[-t,t]$. The cases with $t>1$ can be inferred from those with $t<1$ through the mirror transformation of Eq. (\ref{mirror}).
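The collapse conditions (\ref{cond1}) and (\ref{cond2}) at these charge positions can be checked with a few lines of arithmetic. The following sketch (illustrative helper names, not code from the paper) lists the collapse couplings for the $j=15$ example:

```python
def g_collapse_C(j, n):
    """Coupling at which n pairons may collapse at P_C = -1, Eq. (cond1)."""
    return 1.0 / (2 * j + 1 - 2 * n)

def g_collapse_D(j, n):
    """Coupling at which n pairons may collapse at P_D = +1, Eq. (cond2)."""
    return -1.0 / (2 * j + 1 - 2 * n)

j = 15                        # the example system of the text (M = j pairons)
gs = [g_collapse_C(j, n) for n in range(1, j + 1)]

print(gs[0] * (2 * j - 1))    # first collapse at g(2j-1) = 1
print(gs[1] * (2 * j - 1))    # second at (2j-1)/(2j-3) = 29/27
print(gs[-1])                 # all j pairons collapse at g = 1 (Moore-Read-like state)
print(gs == sorted(gs))       # ordering 0 < g^c_1 < ... < g^c_M holds
```

The last print confirms the ordering of the collapse couplings stated after Eq.~(\ref{cond1}), and the symmetric negative values $g^c_{N_D}$ follow from `g_collapse_D`.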
\begin{figure}[t] \centering{ \includegraphics [width=0.5 \textwidth]{fig6.pdf} } \caption{ Pairons for the $k=10$-th positive parity excited state of a $j=15$ system. The perturbative results (\ref{smallg}) are shown by dashed lines and the exact ones with solid lines. The total number of pairons is $M=j=15$. The $k=10$-th state is characterized by having $5$ pairons close to $t_a=-t$ and $10$ pairons close to $t_b=+t$ at weak coupling. Note that the lower pairons for negative $g$ are inside the interval $[-t,t]$, and the upper ones outside. The opposite happens for positive $g$, where the lower pairons are outside the interval and the upper pairons are inside.} \label{otra} \end{figure} Let us first analyze the limit $g=0$. In this limit the LMG Hamiltonian reduces to a one-body Hamiltonian $H=\epsilon S_z=\epsilon \frac{b^\dagger b-a^\dagger a}{2}$ with eigenvalues $\epsilon(n_b-n_a)/2$ and eigenstates $|\Psi\rangle= |n_b=\nu + 2k, n_a=\nu+2(M-k) \rangle$. Here, the seniorities are the number of unpaired bosons $\nu=0,1$, $M=(j-\nu)$ is the total number of boson pairs, and $k=0,...,M$. The positive (negative) parity sector corresponds to $n_a$ and $n_b$ even (odd), or in terms of the seniorities it corresponds to $\nu=0$ ($\nu=1$). Independently of the parity, the ground state has $M$ boson pairs occupying the $a$ level. The excited states are obtained by promoting boson pairs from the $a$ to the $b$ level. In this way the $k$-th excited state for a given parity has $M-k$ boson pairs occupying the $a$ level, and $k$ in the $b$ level. From the wave function (\ref{wavfun}), it can be seen that in the $k$-th excited state $M-k$ pairons converge to $t_a=-t$ and $k$ pairons to $t_b=+t$.
For finite but small $g$, it was shown in Ref.~\cite{NPB707} that the pairons $e_\alpha$ can be approximated by \begin{equation} e_\alpha\approx t_i -g (1+st^2)r_l, \ \ \ \ (s=-1) \label{smallg} \end{equation} where $t_i=t_a=-t$ or $t_i=t_b=+t$, and $r_l$ are the positive roots of the Laguerre polynomial $L_{N_i}^{\nu-1/2}(x)$, with $N_i$ ($i=a,b$) the number of $e_\alpha$ pairons converging to $t_i$ for $g=0$, i.e., $N_a=M-k$ and $N_b=k$ for the $k$-th excited state. Hence, for small $g$, the entire set of states can be classified by the number of pairons distributed close to $t_a$ and close to $t_b$. For the $k$-th excited state in the limit of small negative $g$, a group of $M-k$ pairons lies close to and above $t_a=-t$, implying they are in the interval $[-t,t]$, which will turn out to be very relevant for their behavior at larger $g$. A second group of $k$ pairons lies close to and above $t_b=+t$, i.e. outside the interval $[-t,t]$. For small and positive $g$ the situation is reversed: $M-k$ pairons are outside the interval $[-t,t]$ and close to $t_a=-t$, whereas the remaining $k$ pairons are inside the interval and close to $t_b=+t$. In Figure \ref{otra} we illustrate this behavior for the positive parity $10$-th excited state of a system with $M=j=15$ pairons. In the figure, $k=10$ pairons are close to $t_b=+t$ and the remaining $M-k=5$ sit close to $t_a=-t$. Even if the perturbative result is not valid for large $g$, the behavior of the pairons as a function of $g$ is strongly dependent on their values at weak coupling, in particular on whether they are inside or outside the interval $[-t,t]$. The pairons inside the interval $[-t,t]$ expand on the real axis but remain constrained to it for any value of $g$. On the contrary, the pairons outside the interval expand on the real axis, moving away from the interval.
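The weak-coupling bookkeeping just described can be summarized in a tiny sketch (the function name is ours, for illustration only): for $g<0$ the $M-k$ pairons near $t_a=-t$ are trapped inside $[-t,t]$ and the $k$ pairons near $t_b=+t$ are not, with the roles reversed for $g>0$.

```python
def pairon_census(M, k, g_sign):
    """Return (trapped, non_trapped) pairon counts for the k-th excited state,
    following the weak-coupling classification of the text."""
    if g_sign < 0:
        return M - k, k       # M-k near t_a are inside [-t,t]; k near t_b are outside
    return k, M - k           # for g > 0 the situation is reversed

# Example of the text: j = M = 15, 10-th positive-parity excited state.
print(pairon_census(15, 10, -1))  # (5, 10): ten pairons can collapse for g < 0
print(pairon_census(15, 0, +1))   # (0, 15): ground state, all collapse for g > 0
print(pairon_census(15, 15, +1))  # (15, 0): most excited state, no collapse
```

Only the non-trapped pairons participate in the collapses at the effective charges, so the census directly predicts how many collapses a given state undergoes.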
For negative $g$ this expansion takes place above the interval $[-t,t]$ until the pairons start to collapse at the position $P_D=1$ of the effective charge $Q_D$ for values of $g$ given by (\ref{cond2}). Similar collapses occur for positive $g$ at values given by (\ref{cond1}), but here the collapses take place at the position $P_C=-1$ of the effective charge $Q_C$. \begin{figure}[t*] \centering{ \begin{tabular}{lcr} \includegraphics[angle=0,width=0.46\textwidth]{fig7a.pdf} & &\includegraphics[width=0.44\textwidth]{fig7b.pdf} \end{tabular} } \caption{Real ({\bf a}) and imaginary ({\bf b}) parts of the 15 pairons of the ($k=10$) state of figure \ref{otra} as a function of $g(2j-1)$. Vertical dashed lines indicate the $g$ values where pairon collapses occur according to Eq.(\ref{cond2}). The pairons close to $t_a=-t$ in the weak coupling limit remain trapped in the real interval $[-t,t]$, whereas the non trapped pairons expand in the complex plane after collapsing at the position $P_D=1$ [horizontal gray line in panel (a)] of the effective charge $Q_D$. The number of non trapped pairons for this $10$-th excited state is $10$; therefore collapses of 1 to 10 pairons occur at $g$ values given, respectively, by $g=g^c_{N_D}$ with $N_D=1,...,10$. The results corresponding to $g^c_{N_D=10}$ are indicated in the upper scale of the panels. For $g<g^c_{N_D=10}$ no more collapses occur, and the pairon set consists of 5 real pairons in the interval $[-t,t]$ and $5$ complex conjugated pairon pairs. } \label{repa} \end{figure} For positive values of $g$, the $k$-th excited state has $M-k$ pairons not trapped in the interval $[-t,t]$. Therefore, for the ground state ($k=0$) all the pairons will successively collapse at $P_C=-1$ for increasing $g$, as can be seen in figure \ref{fig3}. The first excited state has one pairon trapped in the interval $[-t,t]$, while the other ($M-1$) non trapped pairons successively collapse at $P_C=-1$ for increasing values of $g$.
The same reasoning extends to the rest of the excited states. For the most excited state, with all its pairons trapped in the interval $[-t,t]$, no collapse occurs. The situation is somewhat reversed for negative values of $g$. The $k$-th excited state has $k$ non trapped pairons. Therefore, the higher the excited state, the more collapses will occur at the position of the effective charge $Q_D$, at $g$ values given by Eq.~({\ref{cond2}}). For the ground state ($k=0$), since all the pairons are trapped in the interval $[-t,t]$, no collapse occurs, as can be seen in figure \ref{fig3}. For the $k$-th excited state successive collapses of $1$ to $k$ pairons occur at $g$ values given, respectively, by $g^c_{N_D=1}, g^c_{N_D=2},...,g^c_{N_D=k}$. In Fig. \ref{repa} we illustrate this behavior for the same system and state of figure \ref{otra} ($P=+$, $j=15$ and $10$-th excited state), and for negative values of $g$. For small $g$, five pairons are located close to and above $t_a=-t$ and $k=10$ pairons are close to and above $t_b=t$. As $g$ is increased the trapped pairons expand in the interval $[-t,t]$, but remain constrained to it for any negative $g$, whereas the other ten pairons expand outside this interval until they begin to collapse at the position ($P_D=1$) of the effective charge $Q_D$ at $g$ values given by Eq.(\ref{cond2}). These $g$ values are indicated by vertical dashed lines in the panels of Fig. \ref{repa}. Immediately after the collapse of an even number of pairons two complex conjugated pairons are created, and they expand in the complex plane until the next collapse. For the case illustrated in the figure, since the number of non trapped pairons is $10$, the last collapse at $P_D=1$ takes place at $g=g^c_{N_D=10}$. From there on, five complex conjugated pairon pairs expand in the complex plane for $g<g^c_{N_D=10}$. A better insight into the pairon dynamics can be gained by plotting the pairon positions in the two dimensional complex plane. Fig.
\ref{purpa} shows the pairon positions in the complex plane of the complete set of positive parity states for a system of $j=30$ and two different negative values of $g$. The first one [panel (a)] is $g=g^c_{N_D=22}$, where $N_D=22$ pairons collapse at $P_D=1$. The second one [panel (b)] is an intermediate value of $g$ between two collapses ($g^c_{N_D=23}<g<g^c_{N_D=22}$). The complex conjugated pairs of pairons of a given state are distributed along arcs around $P_D=1$. In panel (a), the radius of these arcs goes to zero for states with a large enough number of non trapped pairons, i.e. for those states with $k=22$ to $k=j=30$ non trapped pairons, corresponding to the 22nd to 30th excited states. In both cases the outer arcs correspond to less excited states. \begin{figure} \centering{ \begin{tabular}{lr} \includegraphics[angle=0,width=0.45\textwidth]{fig8a.pdf} \includegraphics[angle=0,width=0.45\textwidth]{fig8b.pdf} \end{tabular} } \caption{Pairons in the complex plane for the complete set of positive parity states for a $j=30$ system. Panel ({\bf a}) corresponds to $g=g_{22}^c$, where collapses of $22$ pairons are expected according to Eq. (\ref{cond2}). Panel ({\bf b}) corresponds to a value of $g$ where collapses are not expected. Horizontal dotted lines indicate the values $t=-1/2$, $t=1/2$ and $P_D=1$. The complex pairons of each state arrange in arcs around $P_D=1$. Outer arcs correspond to lower energy states. In panel ({\bf a}) the arcs of the $k=22$ to $k=j=30$ excited states collapse at the position $P_D=1$, with the outer arcs corresponding to lower energy excited states.} \label{purpa} \end{figure} Before closing this section, it is interesting to note that the condition (\ref{cond2}) of $N_D$ pairons converging to $P_D=1$ in the hyperbolic region ($s=-1$) of the phase diagram defines hyperbolas in the $\gamma_y$-$\gamma_x$ plane given by \begin{equation} \gamma_y \gamma_x =\left(\frac{2j-1}{2j+1-2N_{D}}\right)^2.
\label{hyperbolas} \end{equation} For $N_D=1$, the resulting hyperbola ($\gamma_y\gamma_x=1$) is the same as that reported in \cite{Castanos} for the crossing of the ground states of positive and negative parities in the third quadrant. Moreover, in \cite{Castanos2} it is argued that the rest of the hyperbolas ($N_{D}=2,...,M$) define the points where there are crossings between excited states of the two parity sectors. For instance, for $N_D=2$ the ground and first excited states of positive parity cross, respectively, those of the negative parity sector. In general, for arbitrary $N_D$ the corresponding hyperbola defines the points where the first $N_D$ states of positive parity cross, respectively, the first $N_D$ negative parity states. This result, already confirmed in \cite{Castanos2,Chen2} for the ground state, is numerically confirmed here for the ground and excited states in figure \ref{dife}. The figure displays the absolute value of the difference ($|E_{P+}-E_{P-}|$) between $P+$ and $P-$ states for a system with $j=10$ and $t=1/2$ as a function of negative $g$ (third quadrant in the phase diagram $\gamma_x-\gamma_y$). The differences between the ground, first, second, third and fourth excited states of every parity sector are shown on a logarithmic scale in order to make clear the crossings between positive and negative parity states. Every line is divided into continuous and dashed segments indicating whether the difference $E_{P+}-E_{P-}$ is negative or positive, respectively. The points where this difference changes sign indicate a crossing between positive and negative parity states. The vertical dashed lines signal the points where the hyperbolas (\ref{hyperbolas}) are traversed, i.e. when $g (2j-1)=g^c_{N_D} (2j-1)=-\frac{2j-1}{2j+1-2 N_D}$.
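The positions of these vertical lines for the $j=10$ system can be reproduced directly from the formulas above; the following sketch (illustrative helper names) evaluates the scaled couplings $g^c_{N_D}(2j-1)$ and the hyperbola constant of Eq.~(\ref{hyperbolas}):

```python
def g_scaled(j, n_d):
    """Scaled coupling g(2j-1) = -(2j-1)/(2j+1-2*N_D) at the N_D-th crossing."""
    return -(2 * j - 1) / (2 * j + 1 - 2 * n_d)

def hyperbola_const(j, n_d):
    """Right-hand side of gamma_y * gamma_x = ((2j-1)/(2j+1-2*N_D))**2."""
    return ((2 * j - 1) / (2 * j + 1 - 2 * n_d)) ** 2

j = 10
vals = [g_scaled(j, n) for n in range(1, 6)]
print(vals[0])                 # -1.0: first parity crossing at g(2j-1) = -1
print(hyperbola_const(j, 1))   # 1.0: the gamma_y*gamma_x = 1 hyperbola of N_D = 1
print(all(v2 < v1 for v1, v2 in zip(vals, vals[1:])))  # successive lines move left
```

The final check confirms that the vertical lines for $N_D=2,\ldots,5$ lie at increasingly negative couplings, as seen in the figure.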
As can be seen in the figure, for $N_D=1$ [where $g(2j-1)=-1$] the ground states of the two parity sectors cross, whereas for the leftmost vertical lines ($N_D=2,3,4,5$), in addition to the ground states, more and more excited states cross. It is worth mentioning that the phase transition in the thermodynamic limit is expected at $g (2j-1)=-t=-1/2$ (see table \ref{tabla}). We can appreciate in the figure a dramatic change in the difference between the energies of the positive and negative parity ground states around this value. However, the first crossing occurs at $g(2j-1)=-1$. \begin{figure}[t*] \centering{ \includegraphics[width=.6\textwidth]{fig9.pdf} } \caption{Absolute value of the difference between the ground to fourth excited state energies of positive and negative parity states for the hyperbolic LMG model with $j=10$ and $t=1/2$. Continuous segments indicate that the $P=+$ state is lower in energy with respect to the corresponding $P=-$ state, whereas the dashed segments correspond to the opposite situation, $E_{P-}<E_{P+}$. The vertical dashed lines indicate the values $g^c_{N_D}$ (with $N_D=1,...,5$ signaled in the upper scale).} \label{dife} \end{figure} A preliminary view of the relation between collapses and crossings of different parity states indicates that the states participating in a given crossing do not have their pairons collapsing at $P_D=+1$. Conversely, the pairon collapses of a given state prevent it from having a crossing. As a result, the ground state, having no pairon collapses, has crossings at all the values $g_{N_D}^c$. By contrast, the most excited state, with pairon collapses at every value $g_{N_D}^c$, does not have crossings. Intermediate states present crossings at $g=g_{N_D}^c$ only if their pairons do not collapse at this particular value. This condition occurs if $N_D$ is greater than the number of their non trapped pairons ($k$ for the $k$-th excited state).
For instance, the $10$-th $P=+$ excited state of Figure \ref{repa} crosses the $10$-th excited $P=-$ state only for $g_{N_D=11}^c, g_{N_D=12}^c,...,g_{N_D=j}^c$. Further research into this relation is desirable to establish a deeper connection between the two phenomena, crossings and pairon collapses. Likewise, the hyperbolic quadrants of the phase diagram have a region where avoided crossings between states of the same parity take place. This region, already identified in \cite{vidal} by studying the density of states in the thermodynamic limit, appears in the hyperbolic case ($s=-1$) when $|g(2j-1)|>1/t$. The relation between avoided crossings and the pairon behavior in the $SU(1,1)$ RG models is beyond the scope of this contribution, and deserves more work for a complete clarification. \section{Summary} \label{summa} The Schwinger boson representation of the SU(2) algebra makes it possible to connect the LMG model with the two-level SU(1,1) RG pairing models. We have exploited this relation to classify the entire parameter space of the LMG model in terms of the three RG families: the rational, the trigonometric and the hyperbolic. This classification sheds new light on the LMG phase diagram and its quantum phase transitions. Moreover, the electrostatic mapping of the trigonometric and hyperbolic models provides new insights into the structure of the different phases. We explored the LMG model from the perspective of the RG models, where the eigenstates are completely determined by the spectral parameters (pairons) of a particular solution of the non-linear set of Richardson equations. We proposed a numerically robust method to solve the Richardson equations which generalizes that of reference \cite{solvPan2}. The method was proven to be suitable for obtaining the pairons of the complete set of eigenstates for moderate system sizes. Using boson coherent states, we have re-derived the phase diagram of the LMG model and the characteristics of its different phase transitions.
The second order phase transitions were interpreted in terms of the RG solution as a localization-delocalization of the ground state pairons, which takes place when the pairons, concentrated around the value $t_a=-t$ at weak coupling, expand over the whole interval $[t_a,t_b]=[-t,t]$. On the other hand, the first order phase transition was related to the discontinuity of the pairons when the transition line is traversed. This discontinuity was related, in turn, to the crossings between states of the same parity. We have confirmed that the dynamics of pairons in the rational and trigonometric RG models takes place entirely on the real axis. However, it was unexpected to find complex pairon solutions in the hyperbolic regions of the phase diagram. For the ground state, complex values of the pairons are obtained after the collapses of an even number of pairons at the position $P_C=-1$ of the effective charge $Q_C$. It was numerically verified, by diagonalizing very large systems, that no phase transition is associated with this singular pairon behavior, even for the particular case in which all the pairons collapse at $P_C=-1$. For the latter case the ground state wave function has a particularly simple form which is equivalent to the Moore-Read state of the $p_x + i p_y$ model \cite{Ger1, Links2, Hyp}. A complete classification of the excited states for the hyperbolic regions was given in terms of their pairon positions. For negative couplings it was found that the $k$-th excited state is characterized by a set of $(M-k)$ pairons trapped in the real interval $[-t,t]$ while the other $k$ pairons lie outside this interval. The dynamics of these non trapped pairons shows collapses of $N_D$ pairons at the position $P_D=1$ of the effective charge $Q_D$, at $g$ values given by $g_{N_D}^c=-1/(2j+1-2N_D)$.
This singular behavior of the excited state pairons was found to be connected to the crossings of the $N_D$ lowest positive parity states with the $N_D$ lowest negative parity states, at exactly the same values $g_{N_D}^c$. As discussed in Ref. \cite{Angela}, the pairon dynamics can help to identify significant physical phenomena. The relation between collapses and crossings is an example of this connection that deserves a deeper study. The collapse of pairons at the positions of the effective charges, obtained in the hyperbolic boson RG models discussed here, is a feature also found in the $p_x+ip_y$ fermion pairing realization of the hyperbolic $SU(2)$ RG model \cite{Ger1,Links2,Hyp}. In this latter model the collapses were related to another singular phenomenon: a third order phase transition. The insight gained in the study of the LMG model, where the set of pairons of every state in the spectrum is easily accessible, can help to elucidate more intricate mechanisms in other integrable models where the numerical access to the pairon sets is more demanding. \noindent \vskip 0.5truecm {\bf Acknowledgements}\\ J. D. acknowledges support from the Spanish Ministry of Economy and Competitiveness under grant FIS2009-07277. \noindent \vskip 0.5truecm
\section{Introduction} An over-arching goal of cosmology is the reconstruction of how the minuscule cosmic microwave background (CMB) anisotropies grow into the intricate cosmic web that we observe in galaxy surveys. The key challenge in this process is a deeper understanding of the elusive dark energy component in the consensus $\Lambda$-Cold Dark Matter ($\Lambda$CDM) model, which suppresses the growth rate of structure and speeds up the cosmic expansion rate at low redshifts. Among the most direct observational tests of dark energy are the weak secondary CMB anisotropies generated by the evolving low-$z$ cosmic web. The dominant signal comes from the late-time integrated Sachs-Wolfe effect \citep[ISW,][]{Sachs1967} at linear scales, while subdominant contributions from the non-linear Rees-Sciama effect \citep[RS,][]{Rees1968} remain at the $\sim10\%$ level compared to the ISW term \citep[see e.g.][]{Cai2010}. The linear ISW temperature shift along direction $\hat{\mathbf{n}}$ can be calculated from the time-dependent gravitational potential $\dot{\Phi}\neq0$ based on the line-of-sight integral \begin{equation} \label{eq:ISW_definition} \frac{\Delta T_\rmn{ISW}}{\overline{T}}(\boldsymbol{\hat n}) = 2\int_0^{z_\rmn{LS}} \frac{a}{H(z)}\dot\Phi\left(\boldsymbol{\hat n},\chi(z)\right)\,\rmn{d}z\;, \end{equation} with scale factor $a=1/(1+z)$, Hubble parameter $H(z)$, and co-moving distance $\chi(z)$, extending to the redshift of last scattering, $z_\mathrm{LS}$. In the linear approximation, density perturbations grow as $\delta(z)=D(z)\,\delta_0$, so that $\dot\delta/\delta=\dot D/D$, where $D(z)$ is the linear growth factor.
Combined with the Poisson equation for the potential $\Phi$, one can obtain the following ISW formula: \begin{equation} \label{eq:ISW_definition2} \frac{\Delta T_\rmn{ISW}}{\overline{T}}(\boldsymbol{\hat n}) = -2\int_0^{z_\rmn{LS}} a\left(1-f(z)\right)\Phi\left(\boldsymbol{\hat n},z\right)\,\rmn{d}z\;, \end{equation} where $f= \mathrm{d}\ln D /\mathrm{d}\ln a$ is the linear growth rate of structure. Throughout the matter-dominated epoch in $\Lambda$CDM at $z\gtrsim1.5$, the delicate balance of the cosmological expansion rate and the growth of structure virtually guarantees constant gravitational potentials ($f\approx1$, $\dot{\Phi}\approx0$) in the linear regime of shallow ($|\bar{\delta}| \mathrel{\hbox{\rlap{\hbox{\lower4pt\hbox{$\sim$}}}\hbox{$<$}}} 0.3$) density fluctuations averaged over $R\mathrel{\hbox{\rlap{\hbox{\lower4pt\hbox{$\sim$}}}\hbox{$>$}}} 100~h^{-1}\mathrm{Mpc}$ scales. CMB photons may traverse hills and wells in the gravitational potential, but their temperatures are not altered as long as the underlying potentials themselves do not change ($\Delta T_\mathrm{ISW}\approx0$). However, at low redshifts ($z \mathrel{\hbox{\rlap{\hbox{\lower4pt\hbox{$\sim$}}}\hbox{$<$}}} 1.5$) the balance is broken due to the emerging dominance of the dark energy component and its extra space-stretching effects (i.e. sub-critical matter density, $\Omega_{m}<1$). Large-scale potentials decay ($\dot{\Phi}<0$), which slightly changes the energies of photons traversing extended matter density perturbations in the low-$z$ Universe at $\mathrm{\sim 100~h^{-1}\mathrm{Mpc}}$ scales \citep[see e.g.][]{Cai2010}. The late-time $\Delta T_\mathrm{ISW}$ signal is, however, calculated to be at the $\mu\mathrm{K}$ level, which presents observational challenges. The large uncertainty is due to the primary CMB temperature signal, which represents an uncorrelated noise term for such secondary anisotropies.
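The suppression factor $(1-f)$ in the integrand above can be illustrated with a short sketch. The code below is not from the paper: it assumes a flat $\Lambda$CDM background with a fiducial $\Omega_{m,0}=0.3$ and uses the standard approximation $f(z)\approx\Omega_m(z)^{0.55}$ (Linder's fitting formula) in place of the exact growth rate.

```python
def omega_m(z, om0=0.3):
    """Matter density parameter at redshift z in flat LCDM (fiducial om0 = 0.3)."""
    a3 = (1.0 + z) ** 3
    return om0 * a3 / (om0 * a3 + (1.0 - om0))

def isw_factor(z, om0=0.3):
    """ISW kernel factor 1 - f(z), with f approximated by Omega_m(z)**0.55."""
    return 1.0 - omega_m(z, om0) ** 0.55

print(round(isw_factor(0.0), 3))  # about 0.484: strong potential decay today
print(round(isw_factor(5.0), 3))  # about 0.006: essentially no linear ISW at high z
```

The factor $(1-f)$ vanishes deep in matter domination and switches on only once dark energy starts to dominate, which is why the linear ISW signal probes the low-$z$ Universe.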
A common method to detect the ISW signal has been the measurement of the projected 2-point cross-correlation function of matter density fluctuations and CMB temperature anisotropies \citep[see e.g.][and references therein]{Fosalba2003,ho,gian,Stolzner2018}. However, focusing on the most extreme environments, where most of the signal is generated, we also expect that the ISW effect is accessible by cross-correlating CMB temperature maps with individually identified $\mathrm{\sim 100~h^{-1}\mathrm{Mpc}}$ scale galaxy \emph{superstructures}. \subsection{ISW anomalies} In the $\Lambda$CDM model, CMB photons gain net energy traversing superclusters because the potential well is shallower on exit than on entry ($\Delta T_\mathrm{ISW} > 0$). In contrast, CMB photons lose energy in large negative density fluctuations, or supervoids ($\Delta T_\mathrm{ISW} < 0$). By stacking the CMB map on the positions of hundreds of superstructures, the ISW signal emerges as noise fluctuations cancel. In turn, the fine details of the measured ISW imprint can constrain the physical properties of dark energy in an alternative way \citep[see e.g.][]{Nadathur2012,CaiEtAl2014,Kovacs2018,Adamek2020}. An important aspect is that the measured amplitude of the ISW signal ($A_\mathrm{ISW}\equiv \Delta T_\mathrm{obs}/\Delta T_\mathrm{\Lambda CDM}$) is often significantly higher than expected in the concordance model ($A_\mathrm{ISW}=1$). Such excess ISW signals were first found by \cite{Granett2008} using luminous red galaxies (LRG) from the Sloan Digital Sky Survey (SDSS) data set. It was then confirmed by several follow-up measurements and simulation analyses that the observed signal from the superstructures is about $A_{\rm ISW}\approx5$ times higher than expected from the $\Lambda$CDM model \citep[see e.g.][]{Nadathur2012,Flender2013,Hernandez2013,Aiola}. 
Overall, these ISW results are considered anomalous because projected 2-point correlation analyses do not find significant excess ISW signals \citep[see e.g.][]{PlanckISW2015,Hang20212pt}. An influential development was the ``re-mapping'' of the SDSS superstructures using more accurate spectroscopic redshifts from the Baryon Acoustic Oscillations Survey (BOSS). Yet, the results were mixed. The anomalous ISW signals were re-detected if the merging of smaller voids into larger encompassing under-densities was allowed in the void finding process \citep{Cai2017,Kovacs2018}. In contrast, no significant excess signals have been reported from the same data set when using matched filters and definitions without void merging \citep[][]{NadathurCrittenden2016}. Then, \cite{Kovacs2016} used photo-$z$ catalogues of LRGs from the Dark Energy Survey Year-1 data set (DES Y1) and reported an excess signal, similar to the original SDSS detection by \cite{Granett2008}. This analysis was extended to the DES Year-3 data set and the excess ISW signals were confirmed \citep{Kovacs2019}. These findings were crucial, because they independently detected ISW anomalies using a \emph{different} part of the sky. In combination with the BOSS results using similarly defined supervoids \citep{Kovacs2018}, the ISW amplitude from BOSS and DES Y3 data is $A_\mathrm{ISW}\approx5.2\pm1.6$ in the $0.2<z<0.9$ redshift range and its origin is unexplained. \subsection{Alternative hypotheses} A proposed explanation for the stronger-than-expected ISW imprints is the AvERA (Average Expansion Rate Approximation) model \citep{Racz2017}. It is a minimally modified N-body simulation algorithm that uses the separate universe hypothesis to construct an approximation of the emerging curvature models \citep[see e.g.][]{Rasanen2011}. 
The local expansion rate is calculated on a grid from the Friedmann equations using the \emph{local} matter density, and the expansion rate is averaged over the volume to estimate the zeroth order expansion rate of the simulation box. This treatment of inhomogeneities results in slightly different $H(z)$ expansion and $D(z)$ growth histories compared to a baseline $\Lambda$CDM evolution, including faster low-$z$ expansion rates and a higher $H_0$ value. In particular, the enhanced ISW signals from superstructures may be accommodated as a consequence of an even more suppressed growth rate $f(z)$ than in $\Lambda$CDM at $z\mathrel{\hbox{\rlap{\hbox{\lower4pt\hbox{$\sim$}}}\hbox{$<$}}} 1.5$, at least at the largest scales \citep[see][for further details]{Beck2018, Kovacs2020}. A counter-argument was presented by \cite{Hang20212pt} who measured the CMB-galaxy 2-point cross-correlation signal ($C_{\rm gT}$) using the Dark Energy Spectroscopic Instrument (DESI) Legacy Survey photo-$z$ data set \citep{Dey2019}. They found that the AvERA model over-predicts the overall $C_{\rm gT}$ signal at the expense of raising the ISW amplitude in superstructures. Thus they claim that the AvERA model cannot be the final solution for ISW anomalies. In a subsequent observational analysis of superstructures detected in the same DESI Legacy Survey photo-$z$ catalogue, \cite{Hang2021} also questioned the validity of the anomalous ISW signals themselves. They reported a null detection of ISW signals from their supervoid sample, and a marginal detection with no tension from superclusters. These new ISW results further complicate the picture, and they certainly warrant further studies and a better understanding of this important problem in cosmology. \subsection{Testing the evolution of the ISW signal} In this study, we put the $\Lambda$CDM and AvERA models to test in a novel way to potentially exacerbate, or resolve, the existing ISW anomalies. 
Extending the redshift range of the observations, we identified supervoids in the un-probed $0.8<z<2.2$ range using the eBOSS DR16 QSO catalogue \citep{Ross2020} and then measured their ISW imprint. This is a key redshift range in the sense that, while the $\Lambda$CDM model predicts a gradually fading signal towards $z\approx2$ (see Figure \ref{fig:figure_1} for a preview of our simulated results), the AvERA model has a characteristically different high-$z$ evolution. In comparison to $\Lambda$CDM, the more suppressed growth rate and faster expansion rate at $z\lesssim1.5$ are compensated by a stronger gravitational growth ($f\gtrsim1$) and a slower expansion rate at $1.5 \mathrel{\hbox{\rlap{\hbox{\lower4pt\hbox{$\sim$}}}\hbox{$<$}}} z\mathrel{\hbox{\rlap{\hbox{\lower4pt\hbox{$\sim$}}}\hbox{$<$}}} 5$ in the AvERA model \citep[see][for further details]{Beck2018}. Therefore, the subtle balance of growth and expansion expected in $\Lambda$CDM at $z\gtrsim1.5$ does not occur in AvERA. We thus formulated a new hypothesis and tested a bold prediction: \emph{if} cosmic expansion and growth are affected by inhomogeneities and this is the true source of the ISW excess signals at low-$z$, \emph{then} there must also be an additional ISW signal observable at $1.5 \mathrel{\hbox{\rlap{\hbox{\lower4pt\hbox{$\sim$}}}\hbox{$<$}}} z\mathrel{\hbox{\rlap{\hbox{\lower4pt\hbox{$\sim$}}}\hbox{$<$}}} 5$, which is absent in $\Lambda$CDM. In particular, this high-$z$ ISW signal in AvERA is sourced by the \emph{growth} of the potentials ($\dot{\Phi} > 0$), not by their decay as at low-$z$ ($\dot{\Phi} < 0$), and thus it is of opposite sign. This is in stark contrast with $\Lambda$CDM's fading ISW signal. We thus argue that the reconstruction of the ISW signal's evolution in the $0.8<z<2.2$ range can be used to discriminate between these two hypotheses. The first observational study of these high-$z$ ISW signals is the main goal of this paper, which is organized as follows.
In Section \ref{sec:section_2}, we describe the data sets that are used to measure and model the high-$z$ ISW signals. In Section \ref{sec:section_3}, we provide a summary of our methodology, followed by the presentation of our results in Section \ref{sec:section_4}. Finally, Section \ref{sec:section_5} contains a discussion and interpretation of our main findings. \section{Data sets} \label{sec:section_2} \subsection{eBOSS quasars} Our observational analysis of supervoids is based on the Data Release 16 (DR16) QSO sample from the eBOSS survey. The CORE QSO target selection is described by \cite{Myers2015}, using both optical imaging data from SDSS and mid-infrared data from the Wide-field Infrared Survey Explorer \citep[WISE,][]{wise}. The complete DR16 QSO catalogue is presented by \cite{Lyke2020}, while the QSO clustering catalogue, which we use in this analysis, is described by \cite{Ross2020}. The eBOSS DR16 sample contains 343,708 quasars covering a sky area of 4,808 $\mathrm{deg}^2$ and spanning the redshift range $0.8<z<2.2$. This sample bridges the gap between BOSS CMASS (constant mass) galaxies at $z<0.7$ and the high-redshift quasars at $z>2.2$ that probe the Lyman-$\alpha$ forest fluctuations. This state-of-the-art quasar catalogue has been used for various cosmological analyses \citep[see e.g.][]{Neveux2020,Hou2021,Zhang2021}, including measurements of the growth rate of structure from cosmic voids \citep{Aubert2020}. \subsection{CMB data} We estimated the CMB imprint of large-scale structures using a foreground-cleaned CMB temperature map based on the local-generalized morphological component analysis (LGMCA) method \citep{Bobin2014}. It combines the Wilkinson Microwave Anisotropy Probe 9-year data set \citep[WMAP9,][]{bennett2012} and the \emph{Planck} data products \citep{Planck2020_1}. At the scales of interest, this temperature map provides sufficiently accurate results and guarantees very low foreground contamination.
We also performed tests using the \emph{Planck} 2018 temperature map and our results were fully consistent with this fiducial choice. \subsection{QSO mock catalogue} The planned cross-correlations require a catalogue of supervoids and a reconstructed ISW map from the same simulation. Our analysis was based on the Millennium-XXL (MXXL) dark-matter-only $\Lambda$CDM N-body simulation by \cite{Angulo2012}. The MXXL is an upgraded version of the earlier Millennium run \citep{Springel2005}, covering a co-moving volume of ($3h^{-1}$ Gpc)$^{3}$ with $6720^3$ particles of mass $8.456 \times 10^9 \, M_\odot$. It adopted a cosmology consistent with the WMAP-1 mission results \citep{Spergel2003}. \begin{figure} \begin{center} \includegraphics[width=88mm]{ISW_LCDMprofiles_compare.pdf} \caption{\label{fig:figure_1} In the MXXL $\Lambda$CDM simulation, the magnitude of the ISW signal from supervoids (measured by re-scaling to the $R_v$ void radius) decreases towards higher redshift bins. While the shape of the signals can be reliably estimated from the simulated ISW-only temperature maps, observational analyses are limited by stronger fluctuations from primary CMB anisotropies.} \end{center} \end{figure} In particular, we utilized a mock QSO catalogue, based on the halo occupation distribution (HOD) framework, selected from the publicly available full-sky MXXL halo light-cone catalogue by \cite{Smith2017}. This mock catalogue covers the $0.8<z<2.2$ redshift range of the eBOSS DR16 QSO data set used in our real-world measurements. In the construction of the QSO mock catalogue from the MXXL halos, we followed the HOD prescription presented by \cite{Smith2020}. We note that their estimation of the most realistic HOD parameters to match the eBOSS quasar sample was based primarily on the OuterRim N-body simulation \citep{Heitmann2019}. That simulation has different cosmological parameters from MXXL, but the expected differences are not significant for our ISW analyses.
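As a quick consistency check on the quoted MXXL specifications, the particle mass follows from the box volume, the particle number, and the mean matter density; assuming the WMAP-1-like parameters of the Millennium runs ($\Omega_m=0.25$, $h=0.73$, our assumption here), a few lines reproduce the quoted $8.456 \times 10^9 \, M_\odot$:

```python
RHO_CRIT = 2.775e11    # critical density [h^2 Msun / Mpc^3]
OMEGA_M = 0.25         # WMAP-1-like matter density (assumption)
LITTLE_H = 0.73        # WMAP-1-like Hubble parameter (assumption)
BOX_SIDE = 3000.0      # MXXL box side [h^-1 Mpc]
N_PARTICLES = 6720**3

# particle mass = Omega_m * rho_crit * V / N, first in h^-1 Msun ...
m_p_h = OMEGA_M * RHO_CRIT * BOX_SIDE**3 / N_PARTICLES
# ... then converted to Msun by dividing by h
m_p = m_p_h / LITTLE_H
print(f"particle mass: {m_p:.3e} Msun")  # ~8.46e9 Msun, as quoted
```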
As discussed by \cite{Smith2020}, the HOD approach describes the average number of central and satellite QSOs residing in halos as a function of halo mass, $M$. The total number of QSOs in a dark matter halo is the sum of the central and satellite quasars, expressed as \begin{equation} \langle N_\mathrm{tot}(M) \rangle = \langle N_\mathrm{cen}(M) \rangle + \langle N_\mathrm{sat}(M) \rangle. \end{equation} We note that the probability of finding more than one QSO within the same dark matter halo is low because quasars are rare tracers of the underlying density field. Formally, the probability that a halo contains a central QSO is given by the smooth step function \begin{equation} \langle N_\mathrm{cen}(M) \rangle = \tau \frac{1}{2} \left[1 + \mathrm{erf} \left( \frac{\log M - \log M_\mathrm{cen}}{\log \sigma_\mathrm{m}} \right) \right], \end{equation} where the position of this step is set by $M_\mathrm{cen}$. A halo with mass $M \ll M_\mathrm{cen}$ hosts no central quasar, which then transitions to a $\tau$ probability for $M \gg M_\mathrm{cen}$ halos. The quasar duty cycle $\tau$ takes into account that not all central black holes are active. It is defined as the fraction of halos which host an active central galaxy, setting the height of the step function, and the width of the transition is set by the parameter $\log \sigma_\mathrm{m}$ \cite[see][for details]{Smith2020}. \begin{figure*} \begin{center} \includegraphics[width=162mm]{ISW_images_eBOSS_LCDM_lowz.pdf} \caption{\label{fig:figure_2a} Stacked ISW signals from supervoids at $0.8<z<1.2$ are compared for the MXXL simulation (left) and the eBOSS QSO data set (right). $R/R_{v}=1$ marks the supervoid radius in re-scaled units, while $R/R_{v}=0$ is the centre where the highest signal is expected. 
The data shows a moderately significant enhanced ISW amplitude that is similar to previous detections from DES and BOSS.} \end{center} \end{figure*} \begin{figure*} \begin{center} \includegraphics[width=162mm]{ISW_images_eBOSS_LCDM.pdf} \caption{\label{fig:figure_2} Stacked ISW signals from supervoids at $1.5<z<2.2$ are compared for the MXXL simulation (left) and the eBOSS QSO data set (right). $R/R_{v}=1$ marks the supervoid radius in re-scaled units, while $R/R_{v}=0$ is the centre where the highest signal is expected. We found evidence for a sign-change in the observed ISW imprints at $z\approx1.5$, as predicted by the AvERA model.} \end{center} \end{figure*} This form of HOD is similar to what is used for galaxies \citep[][]{Tinker2012}, with the addition of the duty cycle parameter, and it is motivated by the expectation that the brightest QSOs occupy the most massive halos. The number of satellite quasars in each halo is Poisson distributed, with a mean value given by a power law, \begin{equation} \langle N_\mathrm{sat}(M) \rangle = \left( \frac{M}{M_\mathrm{sat}} \right)^{\alpha_\mathrm{sat}} \exp \left( - \frac{M_\mathrm{cut}}{M} \right), \label{eq:hod_satellite_power_law} \end{equation} with $\alpha_\mathrm{sat}$ as the slope, $M_\mathrm{sat}$ as a normalisation, and $M_\mathrm{cut}$ to apply a cutoff at low masses. These satellite QSOs are randomly positioned in the halo following a Navarro-Frenk-White \citep[NFW,][]{NFW1996} profile. We followed the ``HOD0'' prescription presented by \cite{Smith2020} which showed the most realistic description of the eBOSS DR16 QSOs. The MXXL halo light-cone catalogue was populated with QSOs using the following parameters: $f_\mathrm{sat}$=0.19, $\tau$=0.012, $\log M_\mathrm{cen}$=12.13, $\log \sigma_M$=0.2, $\log M_\mathrm{sat}$=15.29, $\log M_\mathrm{cut}$=11.61, and $\alpha_\mathrm{sat}$=1.0. 
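The occupation functions above, together with the quoted HOD0 parameters, can be written down directly; the following is a minimal sketch of the mean occupations (function and variable names are ours, following the equations rather than any public code):

```python
import math

# quoted HOD0 parameters (log10 masses in h^-1 Msun)
TAU = 0.012          # quasar duty cycle
LOG_M_CEN = 12.13    # position of the central step
LOG_SIGMA_M = 0.2    # width of the transition
LOG_M_SAT = 15.29    # satellite normalisation
LOG_M_CUT = 11.61    # low-mass satellite cutoff
ALPHA_SAT = 1.0      # satellite power-law slope

def mean_n_central(log_m):
    # smooth step: probability that a halo hosts an active central QSO
    return TAU * 0.5 * (1.0 + math.erf((log_m - LOG_M_CEN) / LOG_SIGMA_M))

def mean_n_satellite(log_m):
    # power law with exponential low-mass cutoff (satellite counts are
    # Poisson distributed around this mean in the mock construction)
    m = 10.0 ** log_m
    return (m / 10.0 ** LOG_M_SAT) ** ALPHA_SAT * math.exp(-(10.0 ** LOG_M_CUT) / m)

def mean_n_total(log_m):
    return mean_n_central(log_m) + mean_n_satellite(log_m)
```

A halo far above the step mass hosts a central QSO with probability $\tau=0.012$, reflecting the low quasar duty cycle.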
The corresponding average tracer density $\bar{n}\approx2\times10^{-5}h^{3} \mathrm{Mpc^{-3}}$ is comparable to that of the eBOSS DR16 quasars. Given the large-scale nature of our supervoid analysis, possible small differences in observed and simulated tracer density are not expected to significantly affect our ISW results. \subsection{Simulated ISW map} We used the publicly available ISW map reconstruction code by \cite{Beck2018} to produce an ISW map from the MXXL simulation. The same map was used in a previous analysis of MXXL supervoids by \cite{Kovacs2020}. We also followed the simulation analysis presented by \cite{Kovacs2019} who reported that large-scale modes add extra noise to the stacked profile and potentially introduce biases in the measured ISW profiles if measured in smaller patches. We therefore removed the contributions from the largest modes with multipoles $2\leq \ell \leq10$. \begin{figure*} \begin{center} \includegraphics[width=83mm]{ISW_LCDMprofiles_QSO_zbin1.pdf} \includegraphics[width=83mm]{ISW_LCDMprofiles_QSO_zbin4.pdf}\\ \includegraphics[width=83mm]{ISW_LCDMprofiles_QSO_zbin2.pdf} \includegraphics[width=83mm]{ISW_LCDMprofiles_QSO_zbin3.pdf} \caption{\label{fig:figure_3}Measured temperature profiles from stacking analyses in the MXXL simulation and using eBOSS DR16 data. The top-right panel shows our detection of a stronger-than-expected ISW signal at $0.8<z<1.2$ with the sign predicted by the $\Lambda$CDM model. At redshifts $1.2<z<1.5$ (bottom-right), the observed excess ISW signal fades in the centre, but we found traces of an unexpected signal outside the void radius ($R/R_{v}>1$). The two panels on the left show our detection of an \emph{opposite-sign} ISW signal in our two higher-$z$ bins at $1.5<z<1.9$ and at $1.9<z<2.2$. 
Overall, the observed ISW signals are inconsistent with the $\Lambda$CDM model predictions at all redshifts although the significance of these deviations remains at the moderate $\sim2\sigma$ level.} \end{center} \end{figure*} \section{Methods} \label{sec:section_3} In general, cosmic voids are highly hierarchical objects in the cosmic web with two main classes. Voids-in-clouds tend to be surrounded by an over-dense environment, while voids-in-voids, or supervoids, consist of several sub-voids \citep[see e.g.][]{Sheth2004,Lares2017}. Such large-scale supervoid structures are of high interest in ISW measurements, as they are expected to account for most of the observable signal. In our analysis, we focused on the identification of such $R\geq100~h^{-1}\mathrm{Mpc}$ supervoids, and statistically measured their ISW imprint in the CMB. \subsection{Supervoid identification} While various algorithms exist to define cosmic voids, we used the so-called 2D void finding algorithm \citep{Sanchez2016,Davies2021}. The heart of the method is a restriction to tomographic slices of galaxy data, and analyses of the projected density field around void centre candidates defined by minima in the smoothed density field. The algorithm includes measurements of galaxy density in annuli about void centre candidates until the mean density is reached, which in turn defines the void radius. Large samples of 2D voids have been used in previous DES void lensing and ISW measurements, showing robust detections from both observed data and simulations \citep[see e.g.][]{Vielzeuf2019,Kovacs2019,Fang2019}. \begin{figure} \begin{center} \includegraphics[width=80mm]{likelihoods_comp_LCDM_eBOSS.pdf} \caption{\label{fig:figure_4} Likelihood functions of the $\Lambda$CDM model multiplied by an $A_{\rm ISW}$ amplitude in the light of our eBOSS measurements in different redshift bins (shaded regions mark the $A_\mathrm{ISW}^\mathrm{best-fit}\pm1\sigma$ range). 
The blue band marks the best-fit $A_\mathrm{ISW}$ results from supervoids in previous low-$z$ measurements. The dotted line shows the overall likelihood of the $\Lambda$CDM model \emph{without} redshift binning (consistent with $A_\mathrm{ISW}\approx1$ and also with zero ISW signal).} \end{center} \end{figure} A free parameter in the 2D void finding process is the thickness of the tomographic slices. It was found that an $s\approx100~h^{-1}\mathrm{Mpc}$ line-of-sight slicing effectively leads to the detection of independent, and individually significant under-densities \citep{Sanchez2016,Kovacs2019}. We thus sliced the MXXL mock and the eBOSS quasar data set into 18 shells of $100~h^{-1}\mathrm{Mpc}$ thickness at $0.8<z<2.2$. Another void finder parameter is the Gaussian smoothing scale applied to the tracer density map to define the density minima, which also controls the merging of smaller voids into larger supervoids. In practical terms, while for example a $\sigma=20~h^{-1}\mathrm{Mpc}$ smoothing allows one to detect more voids, with $\sigma=50~h^{-1}\mathrm{Mpc}$ smoothing a higher number of extended $R_{\rm v}\gtrsim100$ $h^{-1}\mathrm{Mpc}$ supervoids are expected in the catalogues as a result of void merging. For ISW measurements, such a sample of large voids is beneficial since they carry most of the signal. They better trace the large-scale fluctuations in the gravitational potential which naturally varies on larger scales than the galaxy density field. Motivated also by the large mean inter-tracer separation of quasars, we therefore followed \cite{Kovacs2019} and used $\sigma=50~h^{-1}\mathrm{Mpc}$ as a smoothing parameter in our MXXL and eBOSS analyses to detect supervoids. A third parameter is the minimum central under-density that is considered as a void centre ($\delta_{c}$, measured in the innermost $25\%$ region of voids). We again followed \cite{Kovacs2019} and selected supervoids with $\delta_{c}<-0.3$. 
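The radius-growing criterion of the 2D finder can be sketched on a flat grid (a schematic stand-in for the \texttt{HEALPix}-based implementation, assuming an already-smoothed projected density-contrast map; the published algorithm's exact density criterion, measured in annuli, may differ in detail):

```python
import numpy as np

def void_radius(delta_map, cy, cx, cell_size=2.0, max_radius=300.0):
    """Grow circles around a density minimum at pixel (cy, cx) until the
    enclosed mean density contrast rises back to the mean (delta = 0),
    which defines the void radius. cell_size is in h^-1 Mpc per pixel."""
    ny, nx = delta_map.shape
    yy, xx = np.indices((ny, nx))
    dist = np.hypot(yy - cy, xx - cx) * cell_size
    for r in np.arange(cell_size, max_radius, cell_size):
        if delta_map[dist <= r].mean() >= 0.0:
            return r
    return max_radius
```

For instance, a $\delta=-0.5$ top-hat under-density embedded in a $\delta=+0.5$ surrounding reaches the enclosed mean density at $\sqrt{2}$ times the top-hat radius.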
As a possible consequence of the low QSO tracer density \citep[see e.g.][]{Hawken2020}, we observed that a small fraction of the voids appear completely empty in their centre ($\delta_{c}\approx-1.0$). For our fiducial analysis, we removed these potentially spurious objects from our eBOSS and MXXL samples, but confirmed that they do not change the main conclusions if included in the stacking analysis. To create a binary \texttt{HEALPix} \citep{healpix} mask for the void finder, we constructed an eBOSS survey mask from the QSO catalogue following a similar eBOSS void analysis by \cite{Aubert2020}. In our MXXL analysis, a full-sky mock catalogue was available, but we split it into octants to more faithfully model the eBOSS void identification process that makes use of a 4,808 $\mathrm{deg}^2$ sky area. \subsection{The eBOSS and MXXL supervoid samples} With the above methodology, we identified 8,609 supervoids of radii $R_\mathrm{v}\gtrsim100~h^{-1}\mathrm{Mpc}$ at redshifts $0.8<z<2.2$ by combining the results from the 8 octants in the full-sky MXXL mock. In eBOSS, which covers approximately $10\%$ of the sky, we identified 838 supervoids (about $10\%$ of the number of supervoids in the MXXL full-sky map). We then compared the average properties of supervoids in the eBOSS and MXXL catalogues and found good agreement. The maximum supervoid radius in both samples is about $R_\mathrm{v}^\mathrm{max}\approx350~h^{-1}\mathrm{Mpc}$, while the mean radius is slightly larger for the eBOSS sample ($\bar{R}_\mathrm{v}\approx197~h^{-1}\mathrm{Mpc}$) compared to the MXXL mock ($\bar{R}_\mathrm{v}\approx174~h^{-1}\mathrm{Mpc}$). We also found that the eBOSS supervoids are on average about $10\%$ deeper in their central regions ($\bar{\delta}_{c}\approx-0.64$) than supervoids in MXXL ($\bar{\delta}_{c}\approx-0.53$).
Transforming these under-density values from galaxy density to matter density given the previously estimated linear QSO bias factor $b_{Q}\approx2.45\pm0.05$ \citep[][]{Laurent2017}, we obtained $\bar{\delta}_{c}^{m}\approx-0.26$ for eBOSS supervoids and $\bar{\delta}_{c}^{m}\approx-0.22$ for MXXL supervoids. These relatively shallow structures are certainly rare fluctuations at such large scales in a $\Lambda$CDM model, but they are consistent with the largest known supervoids in terms of their combined size and emptiness \citep[see e.g.][]{Jeffrey2021,Shimakawa2021}. We note that neither the mock catalogues and the HOD algorithms nor the eBOSS QSO data set were optimized for the sort of large-scale ISW measurements that we developed in this paper. The observed trends for slightly deeper and larger supervoids in the eBOSS data are consistent with possible imperfections in our HOD modeling, and do not significantly affect our ISW measurements. Alternatively, these differences may correspond to genuine physical differences in the evolution of these large-scale structures compared to the baseline $\Lambda$CDM model. This interesting possibility should be better understood in a more detailed analysis of voids, including other void definitions \citep[see e.g.][]{Aubert2020}, and thus we leave these additional tests for future work. \subsection{Stacking measurement} Given the supervoid parameters in the catalogues we constructed, we first cut out square-shaped patches from the CMB temperature maps aligned with supervoid positions using the \texttt{gnomview} projection method of \texttt{HEALPix} \citep{healpix}. In our initial tests, we determined that a $\sigma=1^{\circ}$ Gaussian smoothing applied to the CMB maps helps to suppress strong small-scale fluctuations from degree-scale primary CMB anisotropies, and we applied this consistently in our measurements and simulations.
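The cut-out, re-scale, and stack procedure based on these patches can be illustrated with a flat-sky toy (standing in for the \texttt{gnomview} projections; the nearest-neighbour re-scaling and grid resolution are our illustrative choices, not the pipeline's):

```python
import numpy as np

def stack_on_voids(cmb_map, centres, radii_pix, n_grid=64, r_max=5.0):
    """Cut a square patch of half-size r_max * R_v around each void centre,
    re-scale it to a common n_grid x n_grid grid in R/R_v units, and
    average. Flat-sky toy for the HEALPix cut-out-and-stack measurement."""
    stack = np.zeros((n_grid, n_grid))
    for (cy, cx), r in zip(centres, radii_pix):
        half = int(round(r_max * r))
        patch = cmb_map[cy - half:cy + half, cx - half:cx + half]
        # nearest-neighbour re-scaling onto the common R/R_v grid
        iy = np.arange(n_grid) * patch.shape[0] // n_grid
        ix = np.arange(n_grid) * patch.shape[1] // n_grid
        stack += patch[np.ix_(iy, ix)]
    return stack / len(radii_pix)
```

Because each patch is re-scaled by its own $R_\mathrm{v}$ before averaging, voids of different angular sizes contribute coherently to the central signal.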
We then stacked the cut-out patches to provide a simple and informative way to statistically study the mean imprints (see Figures \ref{fig:figure_2a} and \ref{fig:figure_2} for examples of stacked ISW images). From the stacked images, we also measured radial ISW profiles in re-scaled radius units using 16 bins of $\Delta (R/R_{v})=0.3$ up to five times the supervoid radius ($R/R_{v}=5$). For completeness, we explored the role of the duty cycle of quasars ($\tau$) in our measurements. We analyzed 6 random realisations of our MXXL QSO mock by activating a different subset of halos. At the catalogue level, we detected small changes in the total number of objects, with $N\approx$8,600$\pm$100 supervoids. Concerning the ISW signal amplitude, we found about $10\%$ fluctuations in the stacked imprints. While the overall consistency of these results was good, we decided to take the mean imprint of these 6 realisations as our estimate of the ISW imprints from MXXL supervoids for more accurate results. We also tested the MXXL ISW signals by applying a simple $\log M>12.0$ halo mass cut which provides a denser tracer catalogue. Compared to a sub-sampled QSO catalogue, we found $\sim20\%$ stronger central ISW imprints, presumably due to a higher precision in identifying the centres of the supervoids, where the signal is strongest. These results confirmed the intuition that future QSO catalogues with higher object density will provide a better chance to measure these signals \citep[see e.g.][]{DESI}. \section{Results} \label{sec:section_4} In order to study the expected redshift evolution of the ISW signal, we decided to split our MXXL and eBOSS supervoid catalogues into the following 4 redshift bins: $0.8<z<1.2$, $1.2<z<1.5$, $1.5<z<1.9$, and $1.9<z<2.2$. This choice results in a fairly even distribution of the 838 eBOSS supervoids, with about 200 of them placed in each redshift bin.
A similar split was applied to the 8,609 MXXL supervoids with approximately 2,000 objects in each bin. This analysis setup provides sufficient statistical power to explore the expected trends in the data. Importantly, it was also expected to provide new insights about the hypothesized \emph{sign-change} in the ISW signal at $z\approx1.5$. \subsection{ISW in the MXXL mock} In our simulations, we observed the expected trend in the evolution of the $\Lambda$CDM ISW amplitude. The amplitude of the signal decreases with increasing redshift, as a result of the transition towards the Einstein-de Sitter-like matter dominated universe with $\dot{\Phi}\approx0$ from about $z\approx2$. In Figure \ref{fig:figure_1}, we show the estimated ISW-only signals in the 4 redshift bins. We calculated the corresponding ``theoretical'' uncertainties of the full-sky ISW-only signals from 500 random stacking measurements. Given the ISW auto power spectra \cite{Beck2018} calculated from the MXXL mock, we generated 500 realisations of ISW maps using the \texttt{synfast} routine of \texttt{HEALPix}. We then used the MXXL supervoids for stacking measurements on these uncorrelated maps to estimate the uncertainties of the ISW profile reconstruction itself due to fluctuations in the map. \subsection{ISW from eBOSS supervoids} Next, we measured the imprint of the real-world eBOSS supervoids on the observed CMB temperature anisotropy map. In Figures \ref{fig:figure_2a} and \ref{fig:figure_2}, we compare two stacked images that we created using MXXL and eBOSS supervoids located at $0.8<z<1.2$ and at $1.5<z<2.2$, respectively. While the low-$z$ eBOSS data shows an enhanced cold imprint, the high-$z$ part provides a strong visual impression of qualitatively different ISW imprints with a central cold spot in the $\Lambda$CDM model, and a \emph{hot spot} imprint from the eBOSS data. 
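The role of such random-realisation ensembles in the error budget can be seen in a stripped-down noise model (numbers illustrative; the actual measurement stacks full \texttt{synfast} maps with the correct power spectrum and preserves the void overlaps): averaging $N_\mathrm{v}$ independent noise patches suppresses the scatter of the stacked amplitude by $\sqrt{N_\mathrm{v}}$:

```python
import numpy as np

rng = np.random.default_rng(12345)

def stacked_noise_scatter(sigma_pix, n_voids, n_realisations=500):
    # one stacked central amplitude per realisation: the mean of n_voids
    # independent Gaussian patch values; return the scatter over realisations
    stacked = rng.normal(0.0, sigma_pix, size=(n_realisations, n_voids)).mean(axis=1)
    return stacked.std()
```

For example, a per-patch scatter of $100~\mu$K averaged over 400 supervoids leaves a residual scatter close to $100/\sqrt{400}=5~\mu$K.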
Following a standard approach in ISW measurements \citep[see e.g.][]{Kovacs2019}, the errors of these stacking measurements were estimated based on 500 random CMB map realisations using the \texttt{synfast} routine. We used the CMB angular power spectrum estimated from the \emph{Planck} data set \citep{Planck2018_cosmo} to generate these random CMB maps, and then stacked the eBOSS supervoids on them. We did not randomize the supervoid positions, in order to preserve their overlaps and internal correlations. We then calculated the covariance matrix ($C$) of our real-world measurements from this ensemble of random measurements. Given these observed ($\Delta T^\mathrm{o}$) and simulated ($\Delta T^\mathrm{s}$) results, we evaluated a chi-square statistic with \begin{equation} {\chi}^2 = \sum_{ij} (\Delta T_{i}^\rmn{o}-A_{\rm ISW}\Delta T_{i}^\rmn{s})C_{ij}^{-1} (\Delta T_{j}^\rmn{o}-A_{\rm ISW}\Delta T_{j}^\rmn{s})\;, \end{equation} where indices $i,j$ correspond to radial bins measured from the stacked image in a given redshift slice. Under the reasonable assumption of a Gaussian likelihood, we then looked for the maximum of the $\mathcal{L}\propto\exp(-\chi^{2}/2)$ function in each redshift bin to determine the best-fit $A_{\rm ISW}$ amplitude and its uncertainties. \subsection{Main findings} We further examined the ISW signals in our 4 redshift bins, and present our main results in Figure \ref{fig:figure_3}. We fitted the $\Lambda$CDM template profile to the data with a varying $A_\mathrm{ISW}$ amplitude, and kept the shape of the ISW imprint profiles fixed. We made the following observations on the $A_\mathrm{ISW}$ amplitudes from the measured ISW profiles: \begin{itemize} \item at redshifts $0.8<z<1.2$, we found an excess ISW signal with an $A_\mathrm{ISW}\approx3.6\pm2.1$ amplitude. This appears to be consistent with the $A_\mathrm{ISW}\approx5.2\pm1.6$ amplitude constrained from the DES Y3 and BOSS data at $0.2<z<0.9$.
\item the observed eBOSS signal is consistent with the $\Lambda$CDM expectation ($A_\mathrm{ISW}\approx-0.9\pm2.9$) at $1.2<z<1.5$, where the $\Lambda$CDM and AvERA models predict similar amplitudes. The potentially spurious signals that are seen outside the supervoids ($R/R_{v}>1$) do not significantly affect the overall best-fit amplitude, which is also consistent with zero signal. \item we measured $A_\mathrm{ISW}\approx-4.3\pm3.8$ at $1.5<z<1.9$, where the sign of the ISW signal is expected to start changing in the AvERA model. While the significance of the measured signal is low, the data appear to show characteristic features at the centre where the highest signal is expected. \item finally, we found $A_\mathrm{ISW}\approx-8.5\pm4.4$ in the fourth redshift bin at $1.9<z<2.2$, which provides further evidence for a sign-change in the ISW amplitude. \end{itemize} \begin{figure} \begin{center} \includegraphics[width=84mm]{likelihoods_comp_LCDM_summary.pdf} \caption{\label{fig:figure_6} Measured ISW amplitudes with emerging trends. At $z\lesssim1.5$, multiple observational results point to an enhanced positive ISW amplitude. In contrast, our new eBOSS results showed a large \emph{negative} ISW amplitude at $z\gtrsim1.5$. The bottom panel shows the estimated tension with the baseline $\Lambda$CDM model predictions (shaded bands correspond to $1\sigma$ and $2\sigma$).} \end{center} \end{figure} An interesting aspect of our ISW measurement is that the total stacked signal is formally consistent with zero \emph{without} redshift binning, as shown in Figure \ref{fig:figure_4}. The high-$z$ bins favor large negative $A_\mathrm{ISW}$ values, the $1.2<z<1.5$ bin is consistent with zero and also the $\Lambda$CDM prediction, while the $0.8<z<1.2$ bin alone constrains a large positive ISW amplitude.
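Since $A_\mathrm{ISW}$ enters the $\chi^2$ statistic linearly, the best-fit amplitude and its $1\sigma$ uncertainty have a closed form, which the following sketch of the fitting step reproduces (our illustration, not the exact pipeline code):

```python
import numpy as np

def fit_a_isw(dt_obs, dt_sim, cov):
    """Best-fit ISW amplitude and 1-sigma error from the chi-square fit.
    For a linear amplitude the Gaussian likelihood gives a closed form:
    A = (s^T C^-1 o) / (s^T C^-1 s), sigma_A = (s^T C^-1 s)^(-1/2)."""
    cinv = np.linalg.inv(cov)
    denom = dt_sim @ cinv @ dt_sim
    a_best = (dt_sim @ cinv @ dt_obs) / denom
    sigma_a = denom ** -0.5
    return a_best, sigma_a
```

Maximizing $\mathcal{L}\propto\exp(-\chi^{2}/2)$ in $A_\mathrm{ISW}$ is equivalent to this weighted least-squares solution.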
In a stacking measurement without binning, the overall imprint is consistent with zero ISW signal due to the cancellation of the high-$z$ and low-$z$ signals which contribute to the mean imprint with different sign. This feature highlights the importance of considering alternative hypotheses and of executing deeper explorations of the available data. In Figure \ref{fig:figure_6}, we provide comparisons to existing low-$z$ results from BOSS and DES Y3 data, and visualize the redshift trends in ISW anomalies. For completeness, we also include the formal ISW amplitude enhancement ($A_\mathrm{ISW}\approx5.5$) required to fully explain the CMB \emph{Cold Spot} as an ISW imprint from the Eridanus supervoid it is aligned with \citep[see e.g.][]{SzapudiEtAl2014,KovacsJGB2015,Kovacs2020}. Interestingly, it also follows the same ISW anomaly trend at very low redshifts ($z\approx0.15$). \begin{figure} \begin{center} \includegraphics[width=89mm]{ISW_LCDMprofiles_octants.pdf} \caption{\label{fig:figure_5b} A comparison of the anomalous ISW signals from eBOSS data with possible fluctuations in the expected $\Lambda$CDM imprints from simulations. Measured in different octants and considering 6 different random realisations of the QSO activation in the HOD modeling, the observed excess signal is stronger than the most significant fluctuations.} \end{center} \end{figure} \begin{figure} \begin{center} \includegraphics[width=86mm]{CMBkappa_profiles_lowz_highz.pdf} \caption{\label{fig:figure_5} CMB lensing convergence ($\kappa$) profiles are measured for the high-$z$ (top) and low-$z$ (bottom) parts of the eBOSS supervoid catalogue. 
Both populations show negative $\kappa$ imprints aligned with the interior of the supervoids, indicating true under-densities.} \end{center} \end{figure} \subsection{Fluctuations in the expected ISW signal} While our main analysis is based on a more accurate full-sky estimation of the ISW imprints expected in the $\Lambda$CDM model, we also tested the strength of possible ``cosmic variance'' fluctuations in the expected signal. Given the size of the eBOSS survey footprint and the randomness in QSO detection from the observed volume, we thus measured the stacked cut-sky ISW signals from supervoids in the 8 octants of the MXXL mock using the 6 different realisations of a random QSO activation (48 slightly different patches). In Figure \ref{fig:figure_5b}, we show the corresponding results for the $0.8<z<1.2$ redshift bin (other bins show consistent results). In comparison to the full-sky result, we found that there are considerable variations in the reconstructed locally measured MXXL ISW signals in eBOSS DR16-like observational windows. However, we concluded that even the most extreme fluctuations are insufficient to explain the discrepancy with the observations. \subsection{CMB lensing tests} In the light of the anomalous opposite-sign ISW signals from the $z\gtrsim1.5$ range, we decided to further test the validity of the eBOSS supervoid sample. We stacked the \emph{Planck} CMB \emph{lensing} convergence ($\kappa$) map \citep{Planck2018_cosmo} on the positions of supervoids, by splitting the data into two bins at $z=1.5$. As demonstrated in Figure \ref{fig:figure_5}, we found that both halves of the catalogue show a generally negative $\kappa$ imprint at $R/R_{v}<1$. This finding suggests that despite the evidence for opposite-sign ISW signals from $z>1.5$ eBOSS supervoids, these objects also correspond to genuine under-densities in the cosmic web. 
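The stacking measurements described above (both for CMB temperature and for lensing convergence) reduce to averaging map values in rescaled $R/R_{v}$ annuli around catalogue positions. A minimal flat-sky sketch of this step, assuming pixelised inputs; the function and array names are illustrative and are not the actual analysis pipeline:

```python
import numpy as np

def stacked_profile(sky_map, centres, radii, n_bins=10, r_max=2.0):
    """Mean radial profile of a 2-D map in rescaled R/R_v bins,
    averaged over all stacked objects (flat-sky approximation)."""
    ny, nx = sky_map.shape
    yy, xx = np.mgrid[0:ny, 0:nx]
    edges = np.linspace(0.0, r_max, n_bins + 1)
    prof = np.zeros(n_bins)
    for (cy, cx), rv in zip(centres, radii):
        r = np.hypot(yy - cy, xx - cx) / rv      # rescaled distance R/R_v
        for i in range(n_bins):
            sel = (r >= edges[i]) & (r < edges[i + 1])
            if sel.any():
                prof[i] += sky_map[sel].mean()
    return prof / len(radii)
```

Stacking on random positions (or on rotated maps) then provides the noise baseline against which the measured profile is compared.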
\section{Discussion and Conclusions} \label{sec:section_5} Motivated by recent detections of anomalous ISW signals from the low-$z$ Universe, we extended the redshift range of the relevant observations using the eBOSS DR16 QSO catalogue \citep[][]{Ross2020}. We modeled our measurement with the Millennium XXL simulation \citep[][]{Angulo2012,Smith2017}, and estimated the $\Lambda$CDM ISW signal in the $0.8<z<2.2$ redshift range. We then compared the observed signal from eBOSS supervoids to this $\Lambda$CDM expectation by fitting an $A_{\rm ISW}$ amplitude to the data as a consistency test. These measurements revealed a new ISW anomaly associated with supervoids identified at redshifts higher than before. Considering possible systematic effects, the cross-correlation nature of our CMB stacking measurements using positions of distant supervoids minimizes the chance of confusing the expected ISW signal with remnant local contamination in the foreground-cleaned CMB data. We also note that the observed redshift dependence of the ISW amplitude, in particular the hot spot signals from $z>1.5$ eBOSS supervoids, is inconsistent with hypothetical contamination from density-dependent dust emission \citep[see e.g.][for a similar low-$z$ analysis]{Hernandez2013}. \subsection{An opposite-sign ISW signal} We also considered an alternative hypothesis. Guided by the nature of the known ISW tensions at $0.2<z<0.9$ and the proposed solution provided by the AvERA model \citep[][]{Racz2017,Beck2018}, we specifically looked for a sign-change in the ISW signal at about $z\approx1.5$. Here we note that ISW analyses were not among the key projects in the eBOSS survey, and thus the QSO catalogue was not optimised for such measurements. The expected signal-to-noise is relatively low due to significant noise from the primary CMB fluctuations.
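The $A_{\rm ISW}$ amplitude fit referred to above is, in its simplest form, a one-parameter template fit. A minimal sketch assuming a diagonal data covariance (the full analysis uses the complete covariance; all names here are illustrative):

```python
import numpy as np

def fit_amplitude(data, template, sigma):
    """Best-fit A minimising chi^2 = sum((data - A*template)^2 / sigma^2),
    returned together with its 1-sigma uncertainty."""
    data, template = np.asarray(data), np.asarray(template)
    w = 1.0 / np.asarray(sigma) ** 2
    denom = np.sum(w * template ** 2)
    A = np.sum(w * data * template) / denom
    return A, 1.0 / np.sqrt(denom)
```

An opposite-sign imprint then simply shows up as a significantly negative $A$ with respect to the $\Lambda$CDM template.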
Yet, we did find evidence for such an opposite-sign ISW signal from the eBOSS supervoids which, instead of a cold spot signal, showed a hot spot imprint in the $1.5<z<1.9$ and $1.9<z<2.2$ redshift bins. Combining these two high-$z$ bins, the eBOSS data provided a moderate $2.4\sigma$ detection of an \emph{opposite-sign} ISW imprint at $1.5<z<2.2$. Formally, the tension with the $\Lambda$CDM model prediction is $2.7\sigma$ (see Figure \ref{fig:figure_4} for more detailed likelihood analysis results). Considering our new eBOSS results and the excess ISW signal from supervoids at $0.2<z<0.9$ detected from BOSS and DES Y3 data \citep[][]{Kovacs2019}, an emerging trend is seen in the data as displayed in Figure \ref{fig:figure_6}. At $z\lesssim1.5$, multiple observational results point to an enhanced positive ISW amplitude. In contrast, our new eBOSS results showed a large \emph{negative} ISW amplitude at $z\gtrsim1.5$. With additional tests, we confirmed that these $z\gtrsim1.5$ supervoids with the most anomalous hot spot ISW signal are also aligned with negative CMB lensing convergence ($\kappa<0$), indicating genuine under-densities. Moreover, we showed that the discrepancy is not resolved by considering possible ``cosmic variance'' fluctuations in the expected $\Lambda$CDM signal given the eBOSS survey window (see Figure \ref{fig:figure_5b}). \subsection{Interpretation $\&$ future prospects} Taken at face value, these moderately significant $2-3\sigma$ detections of ISW anomalies in the entire observed redshift range suggest an alternative growth rate of structure, at least in low-density environments at $\sim100~h^{-1}\mathrm{Mpc}$ scales.
As shown in Figure \ref{fig:figure_7}, the AvERA model appears to provide a framework to interpret these anomalies, if the observed excess ISW amplitudes are understood as an enhancement compared to the expected $\Lambda$CDM growth rate of structure with $\Delta T_\mathrm{ISW}(z)\sim [1-f^{obs}(z)]\equiv A_\mathrm{ISW}(z)\times[1-f^{\Lambda CDM}(z)]$ following Equation \ref{eq:ISW_definition2}. Furthermore, we also tested the consistency of our ISW-based results with other relevant constraints on the growth rate of structure in the redshift range probed by the eBOSS survey. In the case of the FastSound results at $z\approx1.4$ \citep[][]{Okumura2016}, eBOSS DR16 consensus results from clustering analyses \citep{Alam2021}, and constraints from the eBOSS QSO voids \citep[][]{Aubert2020}, we \emph{converted} the measured values of the growth parameter combination $f\sigma_{8}(z)$ to a constraint on $f(z)$ by calculating $f^{*}(z)=f\sigma_{8}(z)/\sigma_{8}^{Pl}(z)$ with the assumption of a \emph{Planck} 2018 cosmology. We stress that this simplistic comparison and scaling are \emph{not} sufficient to draw conclusions. Nonetheless, they are useful to examine possible common trends in different observations of the growth rate of structure. \begin{figure} \begin{center} \includegraphics[width=89mm]{cosmology_modelX_paper_v4.pdf} \caption{\label{fig:figure_7} Growth rate of structure as a function of redshift in the standard $\Lambda$CDM model (assuming a \emph{Planck} 2018 cosmology) and in a variant of the AvERA model \citep[as approximated by][]{Hang20212pt}. Measurements indicated with an asterisk are based on $f\sigma_{8}(z)$ constraints divided by a fiducial \emph{Planck} $\sigma_{8}(z)$ value. If the measured excess amplitudes are formally expressed as re-scaled $\Lambda$CDM growth rate values ($A_{\rm ISW}(z)\times[1-f^{\Lambda CDM}(z)]$), then the ISW anomalies from DES, BOSS, and eBOSS data follow a consistent trend.
We note that the ISW amplitude constraints from CMASS data \citep[][]{NadathurCrittenden2016} and eBOSS LRGs do not show significant anomalies. However, eBOSS ELGs, QSOs, and other high-$z$ constraints from the FastSound and SDSS Ly$\alpha$ data are consistent with the results of this paper.} \end{center} \end{figure} As shown in Figure \ref{fig:figure_7}, the eBOSS emission line galaxies (ELG) show good agreement with our results from our lowest redshift bin at $0.8<z<1.2$, although the $z$ range overlap is only partial between them. At $1.2<z<1.5$, the growth rate constraint from the FastSound survey is again perfectly consistent with our ISW-based estimation of $f$. Most relevantly, we also see good agreement with the consensus results from the eBOSS QSO growth rate analyses \citep{Hou2021,Neveux2020} and the less constraining but methodologically more similar result from cosmic voids in the eBOSS QSO survey at $z\approx1.5$ \citep{Aubert2020}. Here we note that these results are based on a single-bin analysis of the eBOSS QSO catalogue, and therefore they are not sensitive to changes in $f(z)$ that we observed at $z\approx1.5$ compared to the expected $\Lambda$CDM evolution. At even higher redshifts ($z\approx3$), the $f=1.46\pm0.29$ constraint from the SDSS Lyman-$\alpha$ forest also appears to be consistent with a stronger gravitational growth at ``cosmic noon'' \citep{McDonald2005}, which may provide further insight into this problem if measured with higher precision. We conclude that, despite the criticism by \cite{Hang20212pt}, the AvERA toy-model approach might provide valuable insights at least in the context of the evolution of supervoids; even though it might not be the final answer for the ISW anomalies in general. Certainly, upcoming data from the Euclid \citep[][]{euclid}, DESI \citep[][]{DESI}, and J-PAS \citep[][]{jpas2014} surveys will provide tighter constraints on these anomalous ISW signals from supervoids.
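The $f\sigma_{8}\rightarrow f$ scaling used for the asterisked points can be reproduced with the linear growth factor of the fiducial cosmology. A sketch assuming flat $\Lambda$CDM with the standard growth-factor integral; the parameter values are placeholder \emph{Planck}-like numbers, not the exact fiducial ones:

```python
import numpy as np

OM, SIGMA8_0 = 0.315, 0.811          # placeholder Planck-2018-like values

def _E(a):                           # H(a)/H0 in flat LCDM
    return np.sqrt(OM / a ** 3 + 1.0 - OM)

def growth_D(z, n=4000):
    """Unnormalised linear growth factor D(z), standard integral form."""
    a = 1.0 / (1.0 + z)
    x = np.linspace(1e-4, a, n)
    y = 1.0 / (x * _E(x)) ** 3
    integral = np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0   # trapezoid rule
    return 2.5 * OM * _E(a) * integral

def f_star(fsigma8, z):
    """Scale a measured f*sigma8(z) to f(z) using the fiducial sigma8(z)."""
    sigma8_z = SIGMA8_0 * growth_D(z) / growth_D(0.0)
    return fsigma8 / sigma8_z
```

As stressed above, this rescaling assumes the fiducial $\sigma_{8}(z)$ throughout and is only meant for a visual comparison of trends.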
Furthermore, complementary analyses using superclusters, and more detailed measurements of the CMB lensing imprint of superstructures \citep[see e.g.][]{Vielzeuf2019,Raghunathan2019} at various redshifts may uncover additional details about the ISW anomalies, and their possible relation to other interesting puzzles in cosmology \citep[see e.g.][]{Riess2019,Heymans2021}. \section*{Acknowledgments} The authors thank Marie Aubert, Julian Moore, and Carlos Hern\'andez-Monteagudo for their insightful comments and suggestions which improved the clarity of the manuscript. AK has been supported by a Juan de la Cierva fellowship from MINECO with project number IJC2018-037730-I, and funding for this project was also available in part through SEV-2015-0548 and AYA2017-89891-P. IC and GR acknowledge support from National Research, Development and Innovation Office of Hungary through grant OTKA NN 129148. IS acknowledges support from the National Science Foundation (NSF) award 1616974. \section*{Data availability} The eBOSS QSO data\footnote{https://www.sdss.org/dr16/}, the MXXL halo mock catalogues\footnote{https://tao.asvo.org.au/tao/}, and the CMB temperature\footnote{https://www.cosmos.esa.int/web/planck} and lensing\footnote{http://www.cosmostat.org/products} maps are publicly available. The simulated ISW analysis software\footnote{https://github.com/beckrob/AvERA\_ISW} and the AvERA simulation tools\footnote{https://github.com/eltevo/avera} which provided important foundations for this article are also available from public websites, or will be shared on reasonable request to the corresponding author. \bibliographystyle{mnras}
gr-qc/9304016
\section{Introduction} Since the famous work of von Neumann [1], the statistical operator $\rho$ has been used to represent the general quantum state of a given quantum system. The statistical interpretation of $\rho$ is given within the framework of quantum measurement theory. In recent years, serious efforts have been made to derive the statistical interpretation from the (slightly modified) quantum dynamics itself [2]. To achieve a similar goal, other authors have proposed a certain history-formulation [3] of quantum mechanics instead of the ordinary one. We do not intend to discuss any of the preceding proposals. Rather we are going to propose a unique and exact history-interpretation for a particular quantum system. Although we use the terminology of works [3], we shall express our results in the conservative (standard) language of quantum mechanics as well (cf.Ref.[4]). Lessons learned from the works [2] are essential even if not made explicit in the present paper. If one ignores the measurement theory, it is still possible to infer a certain statistical content from a general state $\rho$. We can always decompose $\rho$ into the weighted sum of pure state statistical operators: $$ \rho = \sum_{\al}\wal\psial\psial^{\dagger}.\eqno(1.1) $$ Accordingly, we can interpret the given (mixed) quantum state $\rho$ as follows: the state of the system is just a pure state singled out at random from the set $\{\psial\}$, with probabilities $\{\wal\}$, respectively. Hence, for mixed quantum states a genuine statistical interpretation is possible, even without referring to the concept of quantum measurement. The pure states $\psial$ may be called {\it consistent states} because the above statistical interpretation is fully consistent with what is expected of a usual statistical ensemble, cf.Refs.[3]. For the consistent states $\{\psial\}$ we can introduce the notion of {\it decoherence} [3]. Instead of strict decoherence, usually we find a weaker one, e.g.
an asymptotic decoherence: $$ \psi_{\al_1}^\dagger\psi_{\al_2}\rightarrow\ 0\eqno(1.2a) $$ if labels $\al_1$ and $\al_2$ become "very" different. To be precise, assume the existence of a Euclidean norm on the space of labels. Thus the condition for the limit (1.2a) reads: $$ \Vert\al_1-\al_2\Vert\rightarrow\ \infty.\eqno(1.2b) $$ We wish to emphasize that, at least in our work, decoherence is not a logical necessity to assign consistent probabilities to the terms of the decomposition (1.1) of mixed states. No doubt, asymptotic decoherence (1.2ab) shows up as a characteristic feature of our consistent states, as will be seen in Par.III. The decomposition (1.1) is trivial and unique if the state $\rho$ is already pure. Pure states have no classical statistical content independent of further assumptions like, e.g., performing measurements on the pure state. In general, a pure state becomes mixed if we ignore variables belonging to a certain factor space of the system's Hilbert space. Technically, one has to take a trace of the original statistical operator $\rho$ over the factor space of the ignored variables. In recent works this has been termed {\it coarse graining} [3]. Coarse graining can thus produce mixed quantum states which, in turn, will possess classical statistical content in terms of consistent states and their probabilities, as is seen from Eq.(1.1). For mixed states, unfortunately, the decomposition (1.1) is not unique. We shall propose a certain way to obtain unique results in the case of a well-known model of coarse graining. The proposal relies upon the fact that certain decompositions are distinguished by the dynamics of the coarse grained system. \section{Consistent histories in coarse grained systems} For a given coarse grained (i.e. reduced) dynamics, the statistical operator satisfies a {\it linear} evolution equation of the general form $$ \rho(t)=J(t)\rho(0),\ \ t>0,\eqno(2.1) $$ where $J$ is the evolution {\it superoperator}.
A basic feature of $J$ is that Eq.(2.1) continually generates mixed states from pure ones. The formal generalization of Eq.(1.1) reads: $$ \rho(t) = \sum_{\al}\wal(t)\psial(t)\psial^{\dagger}(t).\eqno(2.2) $$ This equation must conform to Eq.(2.1), regarding especially the linearity of the superoperator $J$. This condition is easy to meet if the unnormalized states $\sqrt{\wal(t)}\psial(t)$, too, satisfy linear evolution equations. Hence, we shall assume that $$ \psial(t)={1\over\sqrt{\wal(t)}}\Cal(t)\psial(0)\eqno(2.3) $$ where $\Cal(t)$ is a time dependent linear evolution operator for the consistent state $\psial(t)$. Observe that the normalized state $\psial(t)$ satisfies a nonlinear equation, though the nonlinearity is caused only by the normalizing prefactor, which is fixed by the normalization condition: $$ \wal(t)=\Vert\Cal(t)\psial(0)\Vert^2.\eqno(2.4) $$ Let us substitute Eq.(2.3) into Eq.(2.2) and compare the result with Eq.(2.1). Then, the evolution superoperator can be written in terms of the evolution operators of the consistent states: $$ J(t)=\sum_{\al}\Cal(t)\otimes\Cal^\dagger(t).\eqno(2.5) $$ This superoperator is, as expected, linear. Let us summarize our proposal. Assume a coarse grained system is given, with known linear evolution superoperator $J$ in Eq.(2.1). Single out a certain dyadic decomposition (2.5) of $J$ in terms of linear evolution operators $\Cal$. Once the operators $\Cal$ have been specified, the coarse grained dynamics can be described in terms of {\it consistent histories} $\{\psial(\tau),\tau\in[0,t]\}$ generated by the history operators $\Cal$ via the nonlinear evolution equation (2.3). A given history is realized with probability $\wal(t)$ as expressed by Eq.(2.4). It is most important to realize that the operators $\Cal$ determine the dynamics as well as the statistics of the consistent histories. Still the choice of the $\Cal$ is not unique since Eq.(2.5) offers little constraint on it.
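The relations (2.2)-(2.5) can be verified numerically for any finite family of history operators $\Cal$: propagating a pure state with each operator, computing the weights (2.4), and re-assembling the mixture reproduces the action of $J$. A minimal sketch with random matrices standing in for the $\Cal$ (an illustrative assumption, not a physical model):

```python
import numpy as np

rng = np.random.default_rng(0)
d, N = 4, 3
# random operators standing in for the history operators C_alpha
C = [rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d)) for _ in range(N)]

psi0 = rng.normal(size=d) + 1j * rng.normal(size=d)
psi0 /= np.linalg.norm(psi0)
rho0 = np.outer(psi0, psi0.conj())

# direct action of the superoperator J = sum_a C_a (.) C_a^dagger, Eq. (2.5)
rho_t = sum(c @ rho0 @ c.conj().T for c in C)

# consistent-history form: weights (2.4) and normalised states (2.3)
w = [np.linalg.norm(c @ psi0) ** 2 for c in C]
psi = [c @ psi0 / np.sqrt(wa) for c, wa in zip(C, w)]
rho_hist = sum(wa * np.outer(p, p.conj()) for wa, p in zip(w, psi))

print(np.allclose(rho_t, rho_hist))   # the two forms agree
```

For a trace-preserving evolution one needs $\sum_{\al}\Cal^\dagger\Cal=1$, in which case the weights $\wal$ also sum to unity; generic operators only illustrate the algebraic identity.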
\section{Consistent histories in the Caldeira-Leggett model} Nontrivial, i.e. nonunitary evolution equations of type (2.1) are usually not easy to derive. A well-known exception, calculable explicitly, is the evolution of the state $\rho$ of a Brownian particle interacting with a bosonic reservoir in thermal equilibrium [5]. Coarse graining here means tracing out the reservoir variables. Then standard calculations lead to the exact form of the evolution superoperator: $$ J(\xfi,\xpf,\xin,\xpi,t)\eqno(3.1) $$ as expressed by Eqs.(A.6) and (A.7) in coordinate representation. Following the proposal of Par.II, the evolution superoperator (3.1) will be decomposed into a specific dyadic form (2.5). We shall exploit the fact that $J$ is expressed by double Gaussian integrals over the {\it paths} $x(\tau)$ and $\xp(\tau),\ \tau\in[0,t]$. Also the label $\alpha$ of the operators $\Cal$ will actually be a path $\xb(\tau)$ rather than a number and, consequently, the summation in Eq.(2.5) will be replaced by functional integration over $\xb$. In coordinate representation one writes Eq.(2.5) in the form: $$ J(\xfi,\xpf,\xin,\xpi,t)= \int D\xb C_{[\xb]}(\xfi,\xin,t)C_{[\xb]}^\ast(\xpf,\xpi,t).\eqno(3.2) $$ If we choose a Gaussian form for the operator kernel $C_{[\xb]}(\xfi,\xin,t)$ then the superoperator functional $J$, too, will be Gaussian. With a suitable choice, we can obtain exactly the required form (A.6). Let us assume the following Gaussian expression for the history operators: $$ C_{[\xb]}(\xfi,\xin,t)=\int Dx exp\left({i\over\hbar}S[x]\right) \Phi_{[\xb]}[x]\eqno(3.3a) $$ where \begin{eqnarray}~~~~~~~~~~ \Phi_{[\xb]}[x]=exp\biggl( &&\!\!\!\!\!\! -{2i\over\hbar}\int_0^t d\tau\int_0^\tau d\s x(\tau)\eta(\tau-\s)\xb(\s)\nonumber\\ &&\!\!\!\!\!\!
-{1\over\hbar}\int_0^t d\tau\int_0^t d\s [x(\tau)-\xb(\tau)]\tilde\nu (\tau-\s)[x(\s)-\xb(\s)] \biggr)~~~~~~~~~~~~~~~~(3.3b)\nonumber\end{eqnarray} and $\tilde\nu$ is a certain modification of the noise kernel (A.9a), specified below. Let us substitute Eqs.(3.3ab) into Eq.(3.2) and perform the Gaussian functional integration over $\xb$. The resulting expression will coincide with the form given by Eqs.(A.6) and (A.7), provided the following constraint is fulfilled [cf.Eq.(3.11) of Ref.6]: $$ \nu = \tilde\nu + \eta^r\tilde\nu^{-1}\eta^a\eqno(3.4) $$ where we applied symbolic notation for the convolution of the kernels on the RHS. The retarded dissipation kernel is defined by $\eta^r(\tau)\equiv\theta(\tau)\eta(\tau)$, and $\eta^a(\tau) \equiv\eta^r(-\tau)$. It can be shown that, in general, the implicit equation (3.4) possesses two solutions for $\tilde\nu$. To write Eq.(3.3b) in a compact form, observe that the dissipation term simulates an external potential $V_{[\xb]}(x,\tau)=2x\int_0^{\tau} \eta(\tau-\s)\xb(\s)d\s$ as a retarded function of the label path $\xb$. This term leads to a (label-)path dependent contribution to the action: $$ S_{[\xb]}[x]\equiv -\int_0^t d\tau V_{[\xb]}(x,\tau).\eqno(3.5) $$ Furthermore, let us introduce the following norm on the space of paths: $$ \Vert x\Vert^2\equiv{1\over\hbar}\int_0^t d\tau \int_0^t d\s x(\tau)\tilde\nu(\tau-\s)x(\s).\eqno(3.6) $$ Using Eqs.(3.5) and (3.6), the compact form of $\Phi_{[\xb]}[x]$ will be the following: $$ \Phi_{[\xb]}[x]=exp \left({i\over\hbar}S_{[\xb]}[x]-\Vert x-\xb\Vert^2\right).\eqno(3.7) $$ As we have shown above, in the Caldeira-Leggett model an exact statistical decomposition of the evolution superoperator $J$ can explicitly be constructed.
Given the initial wave function $\psi(x,0)$, invoke Eqs.(2.3), (3.3a) and (3.7); then introduce {\it path dependent histories} as follows: \begin{eqnarray}~~~~~~~~ \psi_{[\xb]}(\xfi,t)&\equiv&{1\over\sqrt {w_{[\xb]}(t)}} \int C_{[\xb]}(\xfi,\xin,t)\psi(\xin,0)d\xin\nonumber\\ &=&{1\over\sqrt {w_{[\xb]}(t)}}\int Dx exp \left({i\over\hbar}S[x]+{i\over\hbar}S_{[\xb]}[x] -\Vert x-\xb\Vert^2\right) \psi(\xin,0).~~~~~~~~~~(3.8)\nonumber\end{eqnarray} Let us observe that this expression differs from the usual unitary Feynman integrals by the presence of the factor $exp\left(-\Vert x-\xb\Vert^2\right)$. This factor discards all paths from the functional integration except for those which are close to the label path $\xb$. Recall that the role of the functional metric, specifying a distance between two paths, is played by the modified noise kernel $\tilde\nu$. We expect that a typical history (3.8) is depicted by a wave packet propagating along a certain label path $\xb$. The probability distribution $w_{[\xb]}$ will mostly be concentrated on classical trajectories, hence the most likely label paths will fluctuate around classically allowed trajectories of the central (damped) oscillator. We have seen that to each possible path $\xb$ and to each path dependent history $\psi_{[\xb]}$ a certain probability can be attributed in a consistent way. It is not at all necessary that these consistent histories be fully decoherent. Nevertheless, two different histories will tend to decohere: $$ \int\psi^\ast_{[\xb_1]}(\xfi,t)\psi_{[\xb_2]}(\xfi,t)d\xfi \rightarrow 0\eqno(3.9) $$ if $\xb_1,\xb_2$ are two distant paths, i.e.: $\Vert\xb_1-\xb_2\Vert\rightarrow\infty$ [cf.Eqs.(1.2ab)]. Heuristically, decoherence becomes significant when the distance between the two paths is large enough to exclude overlaps between the relevant Feynman paths concentrated along $\xb_1$ or $\xb_2$, respectively.
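The asymptotic decoherence (3.9) can be illustrated in the simplest possible setting: for two normalised Gaussian packets of width $\s$ centred a distance $d$ apart, the overlap is $\exp(-d^2/8\s^2)$, vanishing rapidly once the packets stop overlapping. A numerical sketch in arbitrary units (an illustration, not the full path-integral computation):

```python
import numpy as np

def packet(x, centre, sigma):
    """Normalised Gaussian wave packet with position spread sigma."""
    return (2 * np.pi * sigma ** 2) ** -0.25 * np.exp(-(x - centre) ** 2 / (4 * sigma ** 2))

x = np.linspace(-40.0, 40.0, 40001)
dx = x[1] - x[0]
sigma = 1.0
for d in (0.0, 2.0, 8.0):
    overlap = np.sum(packet(x, 0.0, sigma) * packet(x, d, sigma)) * dx
    print(d, overlap)      # decays like exp(-d**2 / (8 * sigma**2))
```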
Explicit calculations are possible for the scalar product (3.9) since the Caldeira-Leggett model possesses exact solutions. For pedagogical reasons, we shall consider the very high temperature regime where the history equations are relatively simple. \section{Markovian Consistent histories} It is known from, e.g., Refs.[5] that in the high temperature limit the memory kernels $\eta,\nu$ become, to a good approximation, local kernels. To consider the simplest nontrivial case we assume high temperature and {\it small velocities} $\dot x$. Then the noise term will dominate and we shall ignore the frictional term proportional to $\eta$. In fact we take $\eta=0$, therefore $\nu=\tilde\nu$ holds due to Eq.(3.4). The norm (3.6) on path space simplifies as follows: $$ \Vert x\Vert^2 ={\gamma\over\lambda_{dB}^2}\int_0^t d\tau x^2(\tau). \eqno(4.1) $$ The path dependent history (3.8) takes the following simple form: $$ \psi_{[\xb]}(\xfi,t)={1\over\sqrt {w_{[\xb]}(t)}}\int Dx exp \left({i\over\hbar}S_R[x]-\Vert x-\xb\Vert^2\right) \psi(\xin,0).\eqno(4.2) $$ This expression is well-known from the theory of continuous quantum measurements [7,6]. It is known, first of all, that the path dependent history (4.2) is a $\psi$-valued Markovian process. Recall the summary in Par.II, according to which the quantity $w_{[\xb]}(t)$ in the normalizing factor yields the probability of the given path $[\xb]$ and of the corresponding history (4.2).
It has been shown in Refs.[8] that this process can be described by the following Ito stochastic differential equations: \eject \begin{eqnarray}~~~~~~~~~~~~~~~~~~~~~~~ \dot\psi_{[\xb]}(x,\tau)&=&-{i\over\hbar}(H_R\psi_{[\xb]})(x,\tau) -{\gamma\over2\lambda_{dB}^2}(x-<x>)^2\psi_{[\xb]}(x,\tau)+\nonumber\\ &&+(x-<x>)\psi_{[\xb]}(x,\tau)f(\tau),~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~(4.3a)\nonumber\\ \xb(\tau)&=&<x>-{\lambda_{dB}^2\over2\gamma}f(\tau)~~~~~~~~~~~~~~~~~~~~~~~~~ ~~~~~~~~~~~~~~~~~~~~~~~~~(4.3b)\nonumber\end{eqnarray} where $f$ is an auxiliary white noise of correlation $$ <f(\tau)f(0)>={\gamma\over\lambda_{dB}^2}\delta(\tau).\eqno(4.3c) $$ For the history expectation value of the coordinate $$ <x>_{[\xb],\tau}\ \equiv\int dx\,x\vert\psi_{[\xb]}(x,\tau)\vert^2 \eqno(4.3d) $$ the shorthand notation $<x>$ has been introduced. We can see that the wave function of the path dependent history and the path itself satisfy coupled stochastic differential equations. From Eq.(4.3b) it follows that the ordinary quantum expectation value $<x>$ of the coordinate operator and the label path coordinate $\xb$ will coincide in stochastic mean. Incidentally, it is perhaps instructive to write Eq.(4.3a) in an equivalent Ito form for the pure state statistical operator $P_{[\xb]}(x,\xp)\equiv\psi_{[\xb]}(x)\psi^\dagger_{[\xb]}(\xp)$: \begin{eqnarray}~~~~~~~~~~~~ \dot P_{[\xb]}(x,\xp,\tau)&=&-{i\over\hbar}[H_R,P_{[\xb]}](x,\xp,\tau) -{\gamma\over2\lambda_{dB}^2}(x-\xp)^2P_{[\xb]}(x,\xp,\tau)\nonumber\\ &&~~~~~~+(x+\xp-2<x>)P_{[\xb]}(x,\xp,\tau)f(\tau). ~~~~~~~~~~~~~~~~~~~~~~~~(4.4)\nonumber\end{eqnarray} Taking the stochastic average of both sides, the nonlinear term cancels and we obtain the well-known linear Markovian master equation (A.11). The Markovian history equations (4.3ab) can be solved exactly in the long time limit. In Refs.[8,9] the following result was obtained for the special case when the renormalized central oscillator is just a free particle, i.e. $\Omega_R=0$.
The wave function stabilizes at a Gaussian shape, i.e. $$ \psi_{[\xb]}(x,\tau)\sim (2\pi\s^2)^{-1/4} exp\left(ix<p>-{1-i\over4\s^2}(x-<x>)^2\right) \eqno(4.5a) $$ of width $\s=(\hbar/2)^{3/4}(\gamma k_BT)^{-1/4}M^{-1/2}$, while the quantum expectation values \hbox{$<x>,<p>$} of the coordinate and momentum, resp., satisfy the following stochastic equations: \begin{eqnarray}~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ {d<x>\over d\tau}&=&{<p>\over M}+2\sigma^2f,\nonumber\\ {d<p>\over d\tau}&=&\hbar f.~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ (4.5b)\nonumber\end{eqnarray} We note that qualitatively similar results would be obtained from more general Ito differential equations (cf.Ref.[10]), had we retained the dissipation term proportional to $\eta$ (A.9b). \section{Summary} Starting from the conservative statistical interpretation of mixed quantum states, we have proposed a family of quantum histories possessing a consistent probability distribution. The proposal has been realized for the Caldeira-Leggett model of Brownian motion. Our history expansion is exact and needs {\it no particular tuning of decoherence and coarse graining}. In the Markovian regime, we have obtained analytic localized solutions for the Brownian particle's wave function. \section*{A. Coarse graining in the Caldeira-Leggett model [5]} Our central system is a harmonic oscillator of mass $M$ and frequency $\Omega$, with action $$ S[x]={M\over2}\int_0^t d\tau(\xd^2-\Omega^2 x^2).\eqno(A.1) $$ The initial quantum state will be denoted by $\rho(\xin,\xpi,0)$. Consider a reservoir modeled by a set of harmonic oscillators with masses $m_n$ and with frequencies $\omega_n$. Its action is: $$ S_{res}[q]= \sum_n {m_n\over2}\int_0^t d\tau(\dot q_n^2-\omega_n^2 q_n^2).\eqno(A.2) $$ At $t=0$ the state of the reservoir is a thermal equilibrium state at some temperature $T$. Consider a certain linear combination $Q=\sum_nc_nq_n$ of the reservoir coordinates.
Introduce the complex correlation function of the Heisenberg operators $Q(\tau)$: $$ \nu(\tau)+i\eta(\tau)\equiv{1\over\hbar}<Q(\tau)Q(0)>_T\eqno(A.3) $$ where $<...>_T$ stands for the expectation value taken in the thermal equilibrium state of the reservoir. The real and imaginary parts $\nu,\eta$ are called the noise (or fluctuation) and the dissipation kernels, respectively. For $t>0$, a linear coupling is introduced between the central oscillator and the reservoir, represented by the action \footnote{The printed version contained the typo $qQ$ instead of the correct $xQ$.} $$ S_{int}[x,q]=-\int_0^td\tau xQ.\eqno(A.4) $$ The usual coarse graining of the above system consists of tracing out the variables of the reservoir. Then the statistical operator of the central oscillator obeys a linear evolution equation: $$\rho(\xfi,\xpf,t)= \int J(\xfi,\xpf,\xin,\xpi,t) \rho(\xin,\xpi,0)d\xin d\xpi.\eqno(A.5) $$ The superoperator $J$ takes the following general form: $$ J(\xfi,\xpf,\xin,\xpi,t)=\int Dx \int D\xp exp\left({i\over\hbar}S[x]-{i\over\hbar}S[\xp]\right)F[x,\xp] \eqno(A.6) $$ with the decoherence functional $F$ defined by \begin{eqnarray}~~~~~~~~~~~~~~~~~~ F[x,\xp]=exp\biggl(&&\!\!\!\!\!\!\!\!-{i\over\hbar}\int_0^t d\tau \int_0^\tau d\tau^\prime [x(\tau)-\xp(\tau)]\eta(\tau-\tau^\prime)[x(\tau^\prime)+\xp(\tau^\prime)]\nonumber\\ &&\!\!\!\!\!\!\!\!-{1\over2\hbar}\int_0^t d\tau \int_0^t d\tau^\prime [x(\tau)-\xp(\tau)]\nu (\tau-\tau^\prime)[x(\tau^\prime)-\xp(\tau^\prime)]\biggr). ~~~(A.7)\nonumber\end{eqnarray} For the Caldeira-Leggett reservoir, the noise and dissipation kernels have the following particular forms, respectively: $$ \nu(\tau)=\ {\gamma M\over \pi}\int_0^{\omega_{max}}d\omega \omega\coth{\hbar\omega\over 2k_BT}\cos(\omega\tau) \eqno(A.8a) $$ $$ \eta(\tau)=-{\gamma M\over \pi}\int_0^{\omega_{max}}d\omega \omega\sin(\omega\tau).
\eqno(A.8b) $$ For high temperatures, the Markovian approximation can be applied to the noise and dissipation kernels: $$ \nu(\tau)={\gamma\hbar\over \lambda_{dB}^2}\delta(\tau),\eqno(A.9a) $$ $$ \eta(\tau)=\gamma M\delta^\prime(\tau)\eqno(A.9b) $$ where $\lambda_{dB}=\hbar/\sqrt{2Mk_BT}$ is the thermal de Broglie length. In addition, the frequency $\Omega$ of the central oscillator must be replaced by its renormalized value, defined by $\Omega_R^2=\Omega^2-2\gamma\Omega/\pi$. Consequently, the action $S$ on the RHS of Eq.(A.6) will be replaced by the renormalized action: $$ S_R[x]={M\over2}\int_0^t d\tau(\xd^2-\Omega_R^2x^2).\eqno(A.10) $$ In the Markovian approximation, the evolution superoperator (A.6) becomes local in time. Hence the evolution equation (A.5) can equivalently be written in the form of a linear differential equation. We do not quote the general result but a simplified version valid for small velocities $\xd$: $$ \dot\rho(x,\xp,\tau) =-{i\over\hbar}[H_R,\rho](x,\xp,\tau) -{\gamma\over2\lambda_{dB}^2}(x-\xp)^2 \rho(x,\xp,\tau).\eqno(A.11) $$ The general master equation contains additionally a certain dissipation term proportional to the momentum $p$, and a further fluctuation term [11] as well. This work was supported by the Hungarian Scientific Research Fund under Grant No 1822/1991.
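The long-time behaviour (4.5ab) is easy to probe numerically: Eq.(4.5b) is a pair of stochastic equations driven by the white noise $f$ of correlation (4.3c), so an Euler-Maruyama integration should reproduce the diffusive growth of the momentum variance, ${\rm Var}<p>\ \approx\hbar^2(\gamma/\lambda_{dB}^2)\,t$. A sketch in arbitrary units, with all constants set to illustrative values:

```python
import numpy as np

hbar, M, gamma, lam = 1.0, 1.0, 0.5, 1.0   # illustrative unit constants
sigma2 = 0.3                               # squared packet width, cf. Eq. (4.5a)
dt, n_steps, n_traj = 1e-3, 2000, 4000
rng = np.random.default_rng(1)

x = np.zeros(n_traj)                       # <x> for an ensemble of histories
p = np.zeros(n_traj)                       # <p>
amp = np.sqrt(gamma / lam ** 2)            # strength of the white noise f, Eq. (4.3c)
for _ in range(n_steps):
    dW = rng.normal(scale=np.sqrt(dt), size=n_traj)
    x += (p / M) * dt + 2.0 * sigma2 * amp * dW   # d<x> = (<p>/M) dtau + 2 sigma^2 f dtau
    p += hbar * amp * dW                          # d<p> = hbar f dtau

T = n_steps * dt
print(np.var(p), hbar ** 2 * gamma / lam ** 2 * T)   # diffusive momentum spread
```

The simulated momentum variance matches the diffusive prediction up to sampling noise, which is the behaviour expected from the localized-packet solution (4.5ab).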
hep-ph/9304239
\loop\ifnum\mscount>\@ne \sp@n\repeat% \def\|{% &\omit\widevline&% }% \ruledtabl \long\def\ruledtable#1\endtable{% \offinterlineski \tabskip 0p \def\widevline{\vrule width\thicksize \def\endrow{\@mpersand\omit\hfil\crnorm\@mpersand}% \def\crthick{\@mpersand\crnorm\thickrule\@mpersand}% \def\crthickneg##1{\@mpersand\crnorm\thickrule \noalign{{\skip0=##1\vskip-\skip0}}\@mpersand}% \def\crnorule{\@mpersand\crnorm\@mpersand}% \def\crnoruleneg##1{\@mpersand\crnorm \noalign{{\skip0=##1\vskip-\skip0}}\@mpersand}% \let\nr=\crnorul \def\endtable{\@mpersand\crnorm\thickrule}% \let\crnorm=\c \edef\cr{\@mpersand\crnorm\tablerule\@mpersand}% \def\crneg##1{\@mpersand\crnorm\tablerule \noalign{{\skip0=##1\vskip-\skip0}}\@mpersand}% \let\ctneg=\crthickneg \let\nrneg=\crnoruleneg \the\tableLETtoken \tabletokens={&#1 \countROWS\tabletokens\into\nrows% \countCOLS\tabletokens\into\ncols% \advance\ncols by -1% \divide\ncols by 2% \advance\nrows by 1% \iftableinfo % \immediate\write16{[Nrows=\the\nrows, Ncols=\the\ncols]}% \fi% \ifcentertables \ifhmode \par\f \line \hs \else % \hbox{% \fi \vbox{% \makePREAMBLE{\the\ncols \edef\next{\preamble \let\preamble=\nex \makeTABLE{\preamble}{\tabletokens \ifcentertables \hss}\else }\f \endgrou \tablewidth=-\maxdime \spreadwidth=-\maxdime \def\makeTABLE#1#2 \let\ifmath \let\header \let\multispan \ncase=0% \ifdim\tablewidth>-\maxdimen \ncase=1\fi% \ifdim\spreadwidth>-\maxdimen \ncase=2\fi% \rela \ifcase\ncase % \widthspec={}% \or % \widthspec=\expandafter{\expandafter t\expandafter o% \the\tablewidth}% \else % \widthspec=\expandafter{\expandafter s\expandafter p\expandafter r% \expandafter e\expandafter a\expandafter d% \the\spreadwidth}% \fi % \xdef\next \halign\the\widthspec{% # \noalign{\hrule height\thicksize depth0pt \the#2\endtabl \nex \def\makePREAMBLE#1 \ncols=# \begingrou \let\ARGS= \edef\xtp{\widevline\ARGS\tabskip\tabskipglue% &\ctr{\ARGS}\vrule height 4.3ex depth 2.7ex width 0pt \advance\ncols by - \loo \ifnum\ncols>0 % \advance\ncols 
by -1% \edef\xtp{\xtp&\vrule width\thinsize\ARGS&\ctr{\ARGS}}% \repeat \xdef\preamble{\xtp&\widevline\ARGS\tabskip0pt% \crnorm \endgrou \def\countROWS#1\into#2 \let\countREGISTER=#2% \countREGISTER=0% \expandafter\ROWcount\the#1\endcount% }% \def\ROWcount{% \afterassignment\subROWcount\let\next= % }% \def\subROWcount{% \ifx\next\endcount % \let\next=\relax% \else% \ncase=0% \ifx\next\cr % \global\advance\countREGISTER by 1% \ncase=0% \fi% \ifx\next\endrow % \global\advance\countREGISTER by 1% \ncase=0% \fi% \ifx\next\crthick % \global\advance\countREGISTER by 1% \ncase=0% \fi% \ifx\next\crnorule % \global\advance\countREGISTER by 1% \ncase=0% \fi% \ifx\next\crthickneg % \global\advance\countREGISTER by 1% \ncase=0% \fi% \ifx\next\crnoruleneg % \global\advance\countREGISTER by 1% \ncase=0% \fi% \ifx\next\crneg % \global\advance\countREGISTER by 1% \ncase=0% \fi% \ifx\next\header % \ncase=1% \fi% \relax% \ifcase\ncase % \let\next\ROWcount% \or % \let\next\argROWskip% \else % \fi% \fi% \next% \def\counthdROWS#1\into#2{% \dvr{10}% \let\countREGISTER=#2% \countREGISTER=0% \dvr{11}% \dvr{13}% \expandafter\hdROWcount\the#1\endcount% \dvr{12}% }% \def\hdROWcount{% \afterassignment\subhdROWcount\let\next= % }% \def\subhdROWcount{% \ifx\next\endcount % \let\next=\relax% \else% \ncase=0% \ifx\next\cr % \global\advance\countREGISTER by 1% \ncase=0% \fi% \ifx\next\endrow % \global\advance\countREGISTER by 1% \ncase=0% \fi% \ifx\next\crthick % \global\advance\countREGISTER by 1% \ncase=0% \fi% \ifx\next\crnorule % \global\advance\countREGISTER by 1% \ncase=0% \fi% \ifx\next\header % \ncase=1% \fi% \relax% \ifcase\ncase % \let\next\hdROWcount% \or% \let\next\arghdROWskip% \else % \fi% \fi% \next% }% {\catcode`\|=13\letbartab \gdef\countCOLS#1\into#2{% \let\countREGISTER=#2% \global\countREGISTER=0% \global\multispancount=0% \global\firstrowtrue \expandafter\COLcount\the#1\endcount% \global\advance\countREGISTER by 3% \global\advance\countREGISTER by -\multispancount }% 
\gdef\COLcount{% \afterassignment\subCOLcount\let\next= % }% {\catcode`\&=13% \gdef\subCOLcount{% \ifx\next\endcount % \let\next=\relax% \else% \ncase=0% \iffirstrow \ifx\next& % \global\advance\countREGISTER by 2% \ncase=0% \fi% \ifx\next\span % \global\advance\countREGISTER by 1% \ncase=0% \fi% \ifx\next| % \global\advance\countREGISTER by 2% \ncase=0% \fi \ifx\next\| \global\advance\countREGISTER by 2% \ncase=0% \fi \ifx\next\multispan \ncase=1% \global\advance\multispancount by 1% \fi \ifx\next\header \ncase=2% \fi \ifx\next\cr \global\firstrowfalse \fi \ifx\next\endrow \global\firstrowfalse \fi \ifx\next\crthick \global\firstrowfalse \fi \ifx\next\crnorule \global\firstrowfalse \fi \ifx\next\crnoruleneg \global\firstrowfalse \fi \ifx\next\crthickneg \global\firstrowfalse \fi \ifx\next\crneg \global\firstrowfalse \fi \f \rela \ifcase\ncase % \let\next\COLcount% \or % \let\next\spancount% \or % \let\next\argCOLskip% \else % \fi % \fi% \next% }% \gdef\argROWskip#1{% \let\next\ROWcount \next% \gdef\arghdROWskip#1{% \let\next\ROWcount \next% \gdef\argCOLskip#1{% \let\next\COLcount \next% \def\spancount#1 \nspan=#1\multiply\nspan by 2\advance\nspan by -1% \global\advance \countREGISTER by \nspan \let\next\COLcount \next}% \def\dvr#1{\relax}% \def\header#1{% \dvr{1}{\let\cr=\@mpersand% \hdtks={#1}% \counthdROWS\hdtks\into\hdrows% \advance\hdrows by 1% \ifnum\hdrows=0 \hdrows=1 \fi% \dvr{5}\makehdPREAMBLE{\the\hdrows}% \dvr{6}\getHDdimen{#1}% {\parindent=0pt\hsize=\hdsize{\let\ifmath0% \xdef\next{\valign{\headerpreamble #1\crnorm}}}\dvr{7}\next\dvr{8}% }% }\dvr{2} \def\makehdPREAMBLE#1 \dvr{3}% \hdrows=# \let\headerARGS=0% \let\cr=\crnorm% \edef\xtp{\vfil\hfil\hbox{\headerARGS}\hfil\vfil}% \advance\hdrows by - \loo \ifnum\hdrows>0% \advance\hdrows by -1% \edef\xtp{\xtp&\vfil\hfil\hbox{\headerARGS}\hfil\vfil}% \repeat% \xdef\headerpreamble{\xtp\crcr}% \dvr{4} \def\getHDdimen#1{% \hdsize=0pt% \getsize#1\cr\end\cr% \def\getsize#1\cr{% \endsizefalse\savetks={#1}% 
\expandafter\lookend\the\savetks\cr% \relax \ifendsize \let\next\relax \else% \setbox\hdbox=\hbox{#1}\newhdsize=1.0\wd\hdbox% \ifdim\newhdsize>\hdsize \hdsize=\newhdsize \fi% \let\next\getsize \fi% \next% }% \def\lookend{\afterassignment\sublookend\let\looknext= }% \def\sublookend{\relax% \ifx\looknext\cr % \let\looknext\relax \else % \relax \ifx\looknext\end \global\endsizetrue \fi% \let\looknext=\lookend% \fi \looknext% }% \def\tablelet#1{% \tableLETtokens=\expandafter{\the\tableLETtokens #1}% }% \catcode`\@=1 \newskip\zatskip \zatskip=0pt plus0pt minus0pt \def\mathsurround=0pt{\mathsurround=0pt} \def\mathrel{\mathpalette\atversim<}{\mathrel{\mathpalette\atversim<}} \def\mathrel{\mathpalette\atversim>}{\mathrel{\mathpalette\atversim>}} \def\atversim#1#2{\lower0.7ex\vbox{\baselineskip\zatskip\lineskip\zatskip \lineskiplimit 0pt\ialign{$\mathsurround=0pt#1\hfil##\hfil$\crcr#2\crcr\sim\crcr}}} \referenceminspace=10pc \hyphenation{brems-strahlung} \def\NPrefmark#1{\attach{\scriptstyle #1 )}} \def\mapright#1{\smash{\mathop{\longrightarrow}\limits^{#1}}} \def\rightarrow{\rightarrow} \defa\hskip-6pt/{a\hskip-6pt/} \def\raise2pt\hbox{$\chi$}{\raise2pt\hbox{$\chi$}} \defe\hskip-6pt/{e\hskip-6pt/} \def\epsilon\hskip-6pt/{\epsilon\hskip-6pt/} \defE\hskip-7pt/{E\hskip-7pt/} \def{J}{{J}} \defJ\hskip-7pt/{J\hskip-7pt/} \defk\hskip-6pt/{k\hskip-6pt/} \defp\hskip-6pt/{p\hskip-6pt/} \defq\hskip-6pt/{q\hskip-6pt/} \def0\hskip-6pt/{0\hskip-6pt/} \def{\cal R \mskip-4mu \lower.1ex \hbox{\it e}}{{\cal R \mskip-4mu \lower.1ex \hbox{\it e}}} \def{\cal I}\mskip-5mu\lower.1ex\hbox{\it m}{{\cal I \mskip-5mu \lower.1ex \hbox{\it m}}} \def{\rm tr\,}{{\rm tr\,}} \def{\rm Tr\,}{{\rm Tr\,}} \def{\it et~al}.{{\it et~al}.} \def{\it e.g.}{{\it e.g.}} \def\langle{\langle} \def\rangle{\rangle} \def\rangle \! \langle{\rangle \! \langle} \def\! + \!{\! 
+ \!} \def\scriptstyle{\rightharpoonup\hskip-8pt{\leftharpoondown}}{\scriptstyle{\rightharpoonup\hskip-8pt{\leftharpoondown}}} \def\textstyle{1\over 2}{\textstyle{1\over 2}} \def{\cal I}\mskip-5mu\lower.1ex\hbox{\it m}{{\cal I}\mskip-5mu\lower.1ex\hbox{\it m}} \def\partial\hskip-8pt\raise9pt\hbox{$\rlh$}{\partial\hskip-8pt\raise9pt\hbox{$\scriptstyle{\rightharpoonup\hskip-8pt{\leftharpoondown}}$}} \def\crcr\noalign{\vskip -6pt}{\crcr\noalign{\vskip -6pt}} \newtoks\Pubnumtwo \newtoks\Pubnumthree \catcode`@=11 \def\p@bblock{\begingroup\tabskip=\hsize minus\hsize \baselineskip=1.5\ht\strutbox\hrule height 0pt depth 0pt \vskip-2\baselineskip \halign to \hsize{\strut ##\hfil\tabskip=0pt\crcr \the\Pubnum\cr \the\Pubnumtwo\cr \the \Pubnumthree\cr \the\date\cr \the\pubtype\cr}\endgroup} \catcode`@=12 \Pubnum{\bf FSU-HEP-930322} \Pubnumtwo{\bf UIUC-HEP-93-01} \Pubnumthree{\bf DTP/93/14} \date={March 1993} \pubtype={} \titlepage \singl@false\doubl@true\spaces@t \title{\fourteenrm Ratios of $W^\pm\gamma$ and $Z\gamma$ Cross Sections: \break New Tools in Probing the Weak Boson Sector at the Tevatron} \singl@false\doubl@false\spaces@t \vskip .2in \author{\fourteenrm U.~Baur\rlap,$^1$ S.~Errede\rlap,$^2$ and J.~Ohnemus$^3$} \vskip 2.mm \centerline{\it $^1$Physics Department, Florida State University, Tallahassee, FL 32306, USA} \centerline{\it $^2$Physics Department, University of Illinois, Urbana, IL 61801, USA} \centerline{\it $^3$Physics Department, University of Durham, DH1 3LE, England} \singl@false\doubl@false\spaces@t \vskip .7in \vskip\frontpageskip\centerline{\fourteenrm ABSTRACT} The ratios ${\cal R}_{\gamma ,\ell}=B(Z\rightarrow\ell^+\ell^-)\cdot\sigma(Z\gamma) /\allowbreak B(W\rightarrow\ell\nu)\cdot\sigma(W^\pm\gamma)$, ${\cal R}_{\gamma ,\nu}=B(Z\rightarrow\bar\nu\nu)\cdot\sigma(Z\gamma)/\allowbreak B(W\rightarrow\ell\nu)\cdot\sigma(W^\pm\gamma)$, ${\cal R}_{W\gamma}=\sigma(W^\pm\gamma)/\allowbreak \sigma(W^\pm)$, and ${\cal
R}_{Z\gamma}=\sigma(Z\gamma)/\allowbreak\sigma(Z)$ are studied as tools to probe the electroweak boson self-interactions. As a function of the minimum photon transverse momentum, ${\cal R}_{\gamma ,\ell}$ and ${\cal R}_{\gamma ,\nu}$ are found to directly reflect the radiation zero present in $W^\pm\gamma$ production in the Standard Model. All four ratios are sensitive to anomalous $WW\gamma$ and/or $ZZ\gamma/Z\gamma\gamma$ couplings. The sensitivity of the cross section ratios to the cuts imposed on the final state particles, as well as the systematic uncertainties resulting from different parametrizations of parton distribution functions, the choice of the factorization scale $Q^2$, and from higher order QCD corrections are explored. Taking into account these uncertainties, sensitivity limits for anomalous three gauge boson couplings, based on a measurement of the cross section ratios with an integrated luminosity of 25~pb$^{-1}$ at the Tevatron, are estimated. \vfil\break \def\PL #1 #2 #3 {Phys. Lett.~{\bf#1}, #2 (#3)} \def\NP #1 #2 #3 {Nucl. Phys.~{\bf#1}, #2 (#3)} \def\ZP #1 #2 #3 {Z.~Phys.~{\bf#1}, #2 (#3)} \def\PR #1 #2 #3 {Phys. Rev.~{\bf#1}, #2 (#3)} \def\PRD #1 #2 #3 {Phys. Rev.~D {\bf#1}, #2 (#3)} \def\PP #1 #2 #3 {Phys. Rep.~{\bf#1}, #2 (#3)} \def\PRL #1 #2 #3 {Phys. Rev.~Lett.~{\bf#1}, #2 (#3)} \REF\BB{U.~Baur and E.~L.~Berger, \PR D41 1476 1990 .} \REF\BaBe{U.~Baur and E.~L.~Berger, FSU-HEP-921030, CERN-TH.6680/92 preprint, October 1992, to appear in Phys.~Rev.~{\bf D}.} \REF\Rat{F.~Halzen and K.~Mursula, \PRL 51 857 1983 ;\unskip\nobreak\hskip\parfillskip\break K.~Hikasa, \PR D29 1939 1984 ;\unskip\nobreak\hskip\parfillskip\break N.~G.~Deshpande {\it et~al}., \PRL 54 1757 1985 ;\unskip\nobreak\hskip\parfillskip\break A.~D.~Martin, R.~G.~Roberts, and W.~J.~Stirling, \PL 189B 220 1987 ; \unskip\nobreak\hskip\parfillskip\break E.~L.~Berger, F.~Halzen, C.~S.~Kim, and S.~Willenbrock, \PR D40 83 1989 . 
} \REF\Rexp{C.~Albajar {\it et~al}.\ (UA1 Collaboration), \PL 253B 503 1991 ; \unskip\nobreak\hskip\parfillskip\break J.~Alitti {\it et~al}.\ (UA2 Collaboration), \PL 276B 365 1992 ;\unskip\nobreak\hskip\parfillskip\break F.~Abe {\it et~al}.\ (CDF Collaboration), \PR D44 29 1991 \ and \PRL 69 28 1992 .} \REF\Barg{V.~Barger, T.~Han, J.~Ohnemus, and D.~Zeppenfeld, \PRL 62 1971 1989 ; \PR D40 2888 1989 ; \PR D41 1715 1990 (E).} \REF\BZ{U.~Baur and D.~Zeppenfeld, \NP B308 127 1988 .} \REF\JO{J.~Ohnemus, \PR D47 940 1993 .} \REF\RADZ{Zhu Dongpei, \PR D22 2266 1980 ; \unskip\nobreak\hskip\parfillskip\break C.~J.~Goebel, F.~Halzen, and J.~P.~Leveille, \PR D23 2682 1981 ; \unskip\nobreak\hskip\parfillskip\break S.~J.~Brodsky and R.~W.~Brown, \PRL 49 966 1982 ; \unskip\nobreak\hskip\parfillskip\break R.~W.~Brown, K.~L.~Kowalski, and S.~J.~Brodsky, \PR D28 624 1983 ; \unskip\nobreak\hskip\parfillskip\break M.~A.~Samuel, \PR D27 2724 1983 .} \REF\jjg{F.~A.~Berends {\it et~al}.\ , \PL 103B 124 1981 ;\unskip\nobreak\hskip\parfillskip\break P.~Aurenche {\it et~al}.\ , \PL 140B 87 1984 \ and \NP B286 553 1987 ;\unskip\nobreak\hskip\parfillskip\break V.~Barger, T.~Han, J.~Ohnemus, and D.~Zeppenfeld, \PL 232B 371 1989 .} \REF\CDF{F.~Abe {\it et~al}.\ (CDF Collaboration), \PR D45 3921 1992 .} \REF\priv{H.~Wahl, private communication.} \REF\John{J.~Womersley, private communication;\unskip\nobreak\hskip\parfillskip\break R.~J.~Madaras, FERMILAB-Conf-92/365-E, to appear in the Proceedings of the ``7th Meeting of the American Physical Society Division of Particles and Fields (DPF92)'', Fermilab, Batavia, IL, November 1992;\unskip\nobreak\hskip\parfillskip\break M.~Cobal, FERMILAB-Conf-92/358-E, to appear in the Proceedings of the ``4th Topical Seminar on the Standard Model and Just Beyond'', San Miniato, Italy, June~1992.} \REF\BC{S.~Bethke and S.~Catani, Proceedings of the XXVIIth Rencontre de Moriond, ``QCD and High Energy Hadronic Interactions'', Les Arcs, France, March~22~--~28, 
1992, p.~203.} \REF\HMRS{P.~N.~Harriman, A.~D.~Martin, R.~G.~Roberts, and W.~J.~Stirling, \PR D42 798 1990 .} \REF\Cort{J.~Cortes, K.~Hagiwara, and F.~Herzog, \NP B278 26 1986 ; \unskip\nobreak\hskip\parfillskip\break J.~Stroughair and C.~Bilchak, Z.~Phys.~{\bf C26}, 415 (1984); \unskip\nobreak\hskip\parfillskip\break J.~Gunion, Z.~Kunszt, and M.~Soldate, \PL 163B 389 1985 ; \unskip\nobreak\hskip\parfillskip\break J.~Gunion and M.~Soldate, \PR D34 826 1986 ; \unskip\nobreak\hskip\parfillskip\break W.~J.~Stirling {\it et~al}., \PL 163B 261 1985 .} \REF\Wpt{F.~Abe {\it et~al}.\ (CDF Collaboration), \PRL 66 2951 1991 .} \REF\MRS{A.~D.~Martin, W.~J.~Stirling, and R.~G.~Roberts, \PR D47 867 1993 .} \REF\GRV{M.~Gl\"uck, E.~Reya, and A.~Vogt, \ZP C53 127 1992 .} \REF\MT{J.~Morfin and W.~K.~Tung, \ZP C52 13 1991 .} \REF\Jeff{J.~F.~Owens, \PL 266B 126 1991 .} \REF\NMC{P.~Amaudruz {\it et~al}.\ (NMC Collaboration), \PL 295B 159 1992 .} \REF\CCFR{S.~R.~Mishra {\it et~al}.\ (CCFR Collaboration), NEVIS-1459 preprint (June 1992).} \REF\Willy{W.~L.~van Neerven and E.~B.~Zijlstra, \NP B382 11 1992 .} \REF\BR{H.~Baer and M.~H.~Reno, \PR D43 2892 1991 \ and \PR D45 1503 1992 .} \REF\FNR{S.~Frixione, P.~Nason, and G.~Ridolfi, \NP B383 3 1992 .} \REF\Rl{F.~Abe {\it et~al}.\ (CDF Collaboration), \PRL 64 152 1990 .} \REF\Wmass{F.~Abe {\it et~al}.\ (CDF Collaboration), \PRL 65 224 1990 \ and \PR D43 2070 1991 .} \REF\LHC{The LHC Study Group, Design Study of the Large Hadron Collider, CERN 91-03, 1991.} \REF\BHO{U.~Baur, T.~Han, and J.~Ohnemus, in preparation.} \REF\SDC{E.~L.~Berger {\it et~al}.\ (SDC Collaboration), SDC Technical Design Report, SDC-92-201, April~1992.} \REF\BHL{R.~Barbieri, H.~Harari, and M.~Leurer, \PL 141B 455 1985 .} \REF\unit{U.~Baur and D.~Zeppenfeld, \PL 201B 383 1988 .} \REF\Hagi{W.~J.~Marciano and A.~Queijeiro, \PR D33 3449 1986 ;\unskip\nobreak\hskip\parfillskip\break F.~Boudjema, K.~Hagiwara, C.~Hamzaoui, and K.~Numata, \PR D43 2223 1991 .} \REF\Pet{J.~Alitti 
{\it et~al}.\ (UA2 Collaboration), \PL 277B 194 1992 .} \REF\Benj{D.~Benjamin, talk given at the ``XXVIIIth Rencontre de Moriond: Electroweak Interactions and Unified Field Theories'', Les Arcs, France, March~13 --~20, 1993.} \REF\HHPZ{K.~Hagiwara {\it et~al}., \NP B282 253 1987 .} \REF\Yang{C.~N.~Yang, \PR 77 242 1950 .} \REF\Jog{J.~M.~Cornwall, D.~N.~Levin, and G.~Tiktopoulos, \PRL 30 1268 1973 ; \PR D10 1145 1974 ;\unskip\nobreak\hskip\parfillskip\break C.~H.~Llewellyn Smith, \PL 46B 233 1973 ;\unskip\nobreak\hskip\parfillskip\break S. D. Joglekar, Ann. of Phys. {\bf 83}, 427 (1974).} \REF\CDFm{F.~Abe {\it et~al}.\ (CDF Collaboration), \PRL 69 28 1992 .} \REF\JR{F.~James and M.~Roos, \NP B172 475 1980 .} \REF\BDV{J.~Bagger, S.~Dawson, and G.~Valencia, FERMILAB-PUB-92/75-T preprint (revised August 1992).} \REF\De{A.~De Rujula {\it et~al}., \NP B384 31 1992 ;\unskip\nobreak\hskip\parfillskip\break P.~Hern\'andez and F.~J.~Vegas, CERN-TH.6670/92, preprint.} \REF\BL{C.~Burgess and D.~London, \PRL 69 3428 1992 , McGill-92/04, McGill-92/05 preprints (March 1992).} \REF\HISZ{K.~Hagiwara, S.~Ishihara, R.~Szalapski, and D.~Zeppenfeld, \PL 283B 353 1992 , and MAD/PH/737 preprint (March 1993).} \REF\muon{P.~M\'ery, S.~E.~Moubarik, M.~Perrottet, and F.~M.~Renard, \ZP C46 229 1990 .} \REF\PT{M.~E.~Peskin and T.~Takeuchi, \PRL 65 964 1990 \ and \PR D46 381 1992 .} \REF\Alt{G.~Altarelli and R.~Barbieri, \PL 253B 161 1991 ; \unskip\nobreak\hskip\parfillskip\break G.~Altarelli, R.~Barbieri, and S.~Jadach, \NP B369 3 1992 .} \REF\Foxl{P.~M\'ery, M.~Perrottet and F.~M.~Renard, \ZP C38 579 1988 .} \REF\Boud{G.~Gounaris {\it et~al}.\ , Proceedings of ``$e^+e^-$ Collisions at 500~GeV: The Physics Potential'' edt. P.~Zerwas, Vol.~B, p.~735; \unskip\nobreak\hskip\parfillskip\break F.~Boudjema, Proceedings of ``$e^+e^-$ Collisions at 500~GeV: The Physics Potential'' edt. 
P.~Zerwas, Vol.~B, p.~757.} \REF\GG{E.~Yehudai, \PR D41 33 1990 \ and \PR D44 3434 1991 ;\unskip\nobreak\hskip\parfillskip\break S.~Y.~Choi and F.~Schrempp, \PL 272B 149 1991 ;\unskip\nobreak\hskip\parfillskip\break O.~Philipsen, \ZP C54 643 1992 ;\unskip\nobreak\hskip\parfillskip\break S.~Godfrey and K.~A.~Peterson, OCIP/C 92-7, preprint (November 1992).} \FIG\one{a) The ratio ${\cal R}_{\gamma ,\ell} = B(Z\rightarrow\ell^+\ell^-)\cdot\sigma(Z\gamma)/\allowbreak B(W\rightarrow\ell\nu)\cdot\sigma(W^\pm\gamma)$ as a function of the minimum transverse momentum of the photon, $p_T^{\rm min}(\gamma)$, at the Tevatron for the cuts summarized in Eqs.~(2.4) -- (2.7) (solid line). The dashed line shows the corresponding ratio of $Zj$ to $W^\pm j$ cross sections, ${\cal R}_{j,\ell}$ [see Eq.~(2.10)], versus $p_T^{\rm min}(j)$. The dotted line, finally, gives the result of ${\cal R}_{\gamma ,\ell}$ for $p_T(\ell)$, $p\hskip-6pt/_T>25$~GeV, instead of the value listed in Eq.~(2.4). \unskip\nobreak\hskip\parfillskip\break b) Sensitivity of ${\cal R}_{\gamma ,\ell}$ at the Tevatron to the cuts imposed. The variation of the cross section ratio, normalized to ${\cal R}_{\gamma ,\ell}$ obtained for the cuts of Eq.~(2.4), is shown versus $p_T^{\rm min}(\gamma)$. Only one cut at a time is varied.} \FIG\two{a) The ratio ${\cal R}_{\gamma ,\ell}$ as a function of the minimum weak boson -- photon invariant mass, $m_{\rm min}$, at the Tevatron for the cuts summarized in Eqs.~(2.4) -- (2.7). The solid line shows the ratio for the true $W\gamma$ invariant mass, whereas the dashed line gives the result if both solutions of the reconstructed longitudinal neutrino momentum are used with equal probabilities. \unskip\nobreak\hskip\parfillskip\break b) Sensitivity of ${\cal R}_{\gamma ,\ell}$ at the Tevatron to the cuts imposed. The variation of the cross section ratio, normalized to ${\cal R}_{\gamma ,\ell}$ obtained for the cuts of Eq.~(2.4), is shown versus $m_{\rm min}$. 
Only one cut at a time is varied.} \FIG\three{The ratios ${\cal R}_{W\gamma} = \sigma(W^\pm\gamma) /\allowbreak \sigma(W^\pm)$ (solid line) and ${\cal R}_{Z\gamma} = \sigma(Z\gamma)/\allowbreak \sigma(Z)$ (dashed line) a) as a function of the minimum photon transverse momentum, $p_T^{\rm min}(\gamma)$, and b) as a function of the minimum weak boson -- photon invariant mass, $m_{\rm min}$, at the Tevatron. The cuts summarized in Eqs.~(2.4) -- (2.7) are imposed. } \FIG\four{Dependence of ${\cal R}_{\gamma ,\ell}$ on the parametrization of the parton structure functions. The variation $\Delta {\cal R}_{\gamma ,\ell}$, normalized to ${\cal R}_{\gamma ,\ell}$ obtained with the HMRSB parametrization, is shown a) versus $p_T^{\rm min}(\gamma)$ and b) versus $m_{\rm min}$ for five representative parametrizations. The cuts used are summarized in Eqs.~(2.4) -- (2.7).} \FIG\five{ Dependence of a) ${\cal R}_{W\gamma}$ and b) ${\cal R}_{Z\gamma}$ on the parametrization of the parton structure functions. The variation of the cross section ratios is shown versus $p_T^{\rm min}(\gamma)$ for five representative fits, normalized to the cross section ratio obtained with the HMRSB parametrization. The cuts imposed are summarized in Eqs.~(2.4) -- (2.7).} \FIG\six{ Dependence of a) ${\cal R}_{W\gamma}$, b) ${\cal R}_{Z\gamma}$, and c) ${\cal R}_{\gamma ,\ell}$ on the choice of the factorization scale $Q^2$ in the parton distribution functions versus $p_T^{\rm min}(\gamma)$. The variation of the cross section ratios with $Q^2$ is shown for $Q^2=m_W^2$ (solid lines) and $Q^2=100\cdot m_W^2$ (dashed lines), normalized to the cross section ratio obtained with $Q^2=\hat s$. The cuts used are summarized in Eqs.~(2.4) -- (2.7).} \FIG\seven{Sensitivity of a) ${\cal R}_{\gamma ,\ell}$ versus $p_T^{\rm min}(\gamma)$ and b) ${\cal R}_{\gamma ,\ell}$ versus $m_{\rm min}$ to higher order QCD corrections. 
The variation of the cross section ratio, normalized to the result obtained in the Born [leading log (LL)] approximation, is shown for the full next-to-leading log QCD corrections (solid lines), and for the zero-jet requirement of Eq.~(2.14) (dashed lines). The cuts imposed are listed in Eqs.~(2.12) and~(2.13).} \FIG\eight{Sensitivity of a) ${\cal R}_{V\gamma}$ versus $p_T^{\rm min}(\gamma)$ and b) ${\cal R}_{V\gamma}$ versus $m_{\rm min}$ ($V=W,\,Z$) to higher order QCD corrections. The variation of the cross section ratio, normalized to the result obtained in the Born [leading log (LL)] approximation, is shown for the full next-to-leading log QCD corrections and for the zero-jet requirement of Eq.~(2.14). The cuts used are summarized in Eqs.~(2.12) and~(2.13).} \FIG\nine{The ratio ${\cal R}_{\gamma ,\ell}$ as a function of the minimum transverse momentum of the photon, $p_T^{\rm min}(\gamma)$, at the LHC (dashed line) and SSC (solid line) for the cuts summarized in Eq.~(3.1). The dotted and dash-dotted lines show the corresponding ratio of $Zj$ to $W^\pm j$ cross sections, ${\cal R}_{j,\ell}$, versus $p_T^{\rm min}(j)$.} \FIG\ten{Feynman rule for the general $V_1\gamma V_2$, $V_1=W,\,Z$, $V_2=W,\, Z,\,\gamma$ vertex. $e$ is the charge of the proton.} \FIG\eleven{The inverse cross section ratio ${\cal R}^{-1}_{\gamma ,\ell}$ at the Tevatron a) versus $p_T^{\rm min}(\gamma)$ and b) versus $m_{\rm min}$. The cuts imposed are listed in Eqs.~(2.4) -- (2.7). The curves are for the SM (solid), $\Delta\kappa_0=2.6$ (dashed), and $\lambda_0=1.7$ (dotted). A dipole form factor ($n=2$) with $\Lambda=750$~GeV is used to obtain the curves for non-standard couplings. The error bars indicate the expected statistical errors for an integrated luminosity of 25~pb$^{-1}$ for $W\rightarrow e\nu$ and $Z\rightarrow e^+e^-$ decays. Only one $WW\gamma$ coupling is varied at a time. All $ZZ\gamma$ and $Z\gamma\gamma$ couplings are assumed to vanish identically.
} \FIG\twelve{The cross section ratio ${\cal R}_{\gamma ,\ell}$ at the Tevatron a) versus $p_T^{\rm min}(\gamma)$ and b) versus $m_{\rm min}$. The cuts used are summarized in Eqs.~(2.4) -- (2.7). The curves are for the SM (solid), $h^Z_{30}=1$ (dashed), and $h^Z_{40}=0.075$ (dotted). For the form factor parameters [see Eq.~(3.9)] we assume $n=3$ ($n=4$) for $h^Z_{30}$ ($h^Z_{40}$) with $\Lambda=750$~GeV. The error bars indicate the expected statistical errors for an integrated luminosity of 25~pb$^{-1}$ for $W\rightarrow e\nu$ and $Z\rightarrow e^+e^-$ decays. Only one $ZZ\gamma$ coupling is varied at a time. Anomalous $WW\gamma$ and $Z\gamma\gamma$ couplings are assumed to vanish identically. } \FIG\thirteen{a) The inverse cross section ratio ${\cal R}^{-1}_{\gamma ,\nu}$ at the Tevatron versus $p_T^{\rm min}(\gamma)$. The curves are for the SM (solid), $\Delta\kappa_0=2.6$ (dashed), and $\lambda_0=1.7$ (dotted). A dipole form factor ($n=2$) with $\Lambda=750$~GeV is used to obtain the curves for non-standard couplings. Only one $WW\gamma$ coupling is varied at a time. All $ZZ\gamma$ and $Z\gamma\gamma$ couplings are assumed to vanish identically. \unskip\nobreak\hskip\parfillskip\break b) The cross section ratio ${\cal R}_{\gamma ,\nu}$ at the Tevatron versus $p_T^{\rm min}(\gamma)$. The curves are for the SM (solid), $h^Z_{30}=1$ (dashed), and $h^Z_{40}=0.075$ (dotted). For the form factor parameters [see Eq.~(3.9)] we assume $n=3$ ($n=4$) for $h^Z_{30}$ ($h^Z_{40}$) with $\Lambda=750$~GeV. Only one $ZZ\gamma$ coupling is varied at a time. Anomalous $WW\gamma$ and $Z\gamma\gamma$ couplings are assumed to vanish identically. \unskip\nobreak\hskip\parfillskip\break The cuts imposed are summarized in Eqs.~(2.4) and~(2.7). The error bars indicate the expected statistical errors for an integrated luminosity of 25~pb$^{-1}$ for $W\rightarrow e\nu$ decays. 
} \FIG\fourteen{The inverse cross section ratio ${\cal R}^{-1}_{\gamma ,\nu}$ at the Tevatron versus $p_T^{\rm min}(\gamma)$. The curves are for the SM (solid), $\Delta\kappa_0=2.6$, $h^Z_{40}=0.075$ (dashed), and $\lambda_0=1.7$, $h^Z_{30}=1.5$ (dotted). The cuts imposed are summarized in Eqs.~(2.4) and~(2.7). For anomalous $WW\gamma$ couplings a dipole form factor ($n=2$) is used. For non-standard $ZZ\gamma$ couplings we assume $n=3$ ($n=4$) for $h^Z_{30}$ ($h^Z_{40}$). The form factor scale is assumed to be $\Lambda=750$~GeV. The error bars indicate the expected statistical errors for an integrated luminosity of 25~pb$^{-1}$ for $W\rightarrow e\nu$ decays. } \FIG\fifteen{a) The cross section ratio ${\cal R}_{W\gamma}$ at the Tevatron versus $p_T^{\rm min}(\gamma)$. The curves are for the SM (solid), $\Delta\kappa_0=2.6$ (dashed), and $\lambda_0=1.7$ (dotted). A dipole form factor ($n=2$) with $\Lambda=750$~GeV is used to obtain the curves for non-standard couplings. Only one $WW\gamma$ coupling is varied at a time. All $ZZ\gamma$ and $Z\gamma\gamma$ couplings are assumed to vanish identically. \unskip\nobreak\hskip\parfillskip\break b) The cross section ratio ${\cal R}_{Z\gamma}$ at the Tevatron versus $p_T^{\rm min}(\gamma)$. The curves are for the SM (solid), $h^Z_{30}=1$ (dashed), and $h^Z_{40}=0.075$ (dotted). For the form factor parameters we assume [see Eq.~(3.9)] $n=3$ ($n=4$) for $h^Z_{30}$ ($h^Z_{40}$) with $\Lambda=750$~GeV. Only one $ZZ\gamma$ coupling is varied at a time. Anomalous $WW\gamma$ and $Z\gamma\gamma$ couplings are assumed to vanish identically. \unskip\nobreak\hskip\parfillskip\break The cuts imposed are summarized in Eqs.~(2.4) -- (2.7). The error bars indicate the expected statistical errors for an integrated luminosity of 25~pb$^{-1}$ for $W\rightarrow e\nu$ and $Z\rightarrow e^+e^-$ decays. 
} \pagenumber=1 \chapter{Introduction} The present run of the Tevatron $p\bar p$ collider is expected to result in a substantial increase of the integrated luminosity. The increase in statistics will make it possible to observe new reactions such as $W^\pm\gamma$ and $Z\gamma$ production, and to probe previously untested sectors of the Standard Model (SM) of electroweak interactions, in particular, the vector boson self-interactions. Within the SM, at tree level, these self-interactions are completely fixed by the $SU(2)\times U(1)$ gauge theory structure of the model. Their observation is thus a crucial test of the model. In contrast to low energy and high precision experiments at the $Z$ peak, collider experiments offer the possibility of a direct, and essentially model independent, measurement of the three vector boson vertices. For a detailed investigation at the Tevatron, based on differential cross section distributions, an integrated luminosity of at least 100~pb$^{-1}$ is required\rlap.\refmark{\BB,\BaBe} For smaller data samples the total cross section is also useful. In hadron collider experiments, cross section measurements are usually plagued by large experimental systematic and theoretical errors. These errors, however, can often be significantly reduced by considering ratios of cross sections. A well known example is the ratio $$ {\cal R}_\ell={\sigma(W^\pm\rightarrow\ell^\pm\nu) \over\sigma(Z\rightarrow\ell^+\ell^-)}= {B(W\rightarrow\ell\nu)\cdot\sigma(W^\pm)\over B(Z\rightarrow\ell^+\ell^-) \cdot\sigma(Z)} \eqno (1.1) $$ of the observable $W^\pm$ and $Z$ cross sections\rlap.\refmark{\Rat} Here, $\ell=e,\,\mu$, $B(W\rightarrow\ell\nu)$ and $B(Z\rightarrow\ell^+\ell^-)$ denote the leptonic branching ratios of the $W$ and $Z$ boson, respectively, and $\sigma(W^\pm)$ [$\sigma(Z)$] is the $W^\pm$ [$Z$] production cross section in $p\bar p$ collisions. 
The systematic error of ${\cal R}_\ell$ is less than half that of the individual cross sections $B(W\rightarrow\ell\nu)\cdot\sigma(W^\pm)$ and $B(Z\rightarrow\ell^+\ell^-)\cdot\sigma(Z)$\rlap.\refmark{\Rexp} Using the SM expectation for the cross section ratio $\sigma(W^\pm)/\sigma(Z)$ together with information on the leptonic branching ratio of the $Z$ boson from LEP, $B(W\rightarrow\ell\nu)$ can be determined from ${\cal R}_\ell$; in turn, this value of $B(W\rightarrow\ell\nu)$ can be translated into a model independent lower limit on the top quark mass of $m_t>55$~GeV (95\%~CL)\rlap. \refmark{\Rexp} It is natural to consider cross section ratios similar to that of Eq.~(1.1) for $W^\pm\gamma$ and $Z\gamma$ production, and to use them to extract information on $WW\gamma$, $ZZ\gamma$, and $Z\gamma\gamma$ couplings. Four different ratios can be formed: $$ \eqalignno{ {\cal R}_{\gamma ,\ell} &= {B(Z\rightarrow\ell^+\ell^-)\cdot\sigma(Z\gamma)\over B(W\rightarrow\ell\nu)\cdot\sigma(W^\pm\gamma)}\,, & (1.2)\cr & & \cr {\cal R}_{\gamma ,\nu} &= {B(Z\rightarrow\bar\nu\nu)\cdot\sigma(Z\gamma)\over B(W\rightarrow\ell\nu)\cdot\sigma(W^\pm\gamma)}\,, & (1.3)\cr & & \cr {\cal R}_{W\gamma} &= {B(W\rightarrow\ell\nu)\cdot\sigma(W^\pm\gamma)\over B(W\rightarrow\ell\nu)\cdot\sigma(W^\pm)} = {\sigma(W^\pm\gamma)\over\sigma(W^\pm)}\,, & (1.4)\cr \noalign{\hbox{and}} {\cal R}_{Z\gamma} &= {B(Z\rightarrow\ell^+\ell^-)\cdot\sigma(Z\gamma)\over B(Z\rightarrow\ell^+\ell^-)\cdot\sigma(Z)} = {\sigma(Z\gamma)\over\sigma(Z)}\,. & (1.5)\cr } $$ Similar ratios have also been proposed for $W^\pm +n$~jet and $Z+n$~jet, $n=1\dots 3$, production\rlap.\refmark{\Barg} The $W^\pm\gamma$ and $Z\gamma$ cross section ratios are related to ${\cal R}_\ell$ of Eq.~(1.1) through the sum rule $$ {\cal R}_\ell\cdot{\cal R}_{\gamma ,\ell}={{\cal R}_{Z\gamma}\over {\cal R}_{W\gamma}}~. \eqno (1.6) $$ Experimentally, the ratios of Eqs.~(1.2) -- (1.5) can be determined from independent data samples.
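As a cross-check of the definitions above, the ratios of Eqs.~(1.1) -- (1.5) and the sum rule (1.6) can be sketched in a few lines of code; the cross sections and branching ratios below are arbitrary placeholder numbers, not values taken from this paper:

```python
# Illustrative sketch of the cross section ratios of Eqs. (1.1)-(1.5).
# All numerical inputs are placeholders, not measured or predicted values.

def r_l(sig_w, sig_z, b_w, b_z_ll):
    """Eq. (1.1): ratio of observable W and Z cross sections."""
    return (b_w * sig_w) / (b_z_ll * sig_z)

def ratios(sig_w, sig_z, sig_wg, sig_zg, b_w, b_z_ll, b_z_nn):
    """Eqs. (1.2)-(1.5): the four W gamma / Z gamma cross section ratios."""
    r_gl = (b_z_ll * sig_zg) / (b_w * sig_wg)   # R_{gamma,l},  Eq. (1.2)
    r_gn = (b_z_nn * sig_zg) / (b_w * sig_wg)   # R_{gamma,nu}, Eq. (1.3)
    r_wg = sig_wg / sig_w                       # R_{W gamma},  Eq. (1.4)
    r_zg = sig_zg / sig_z                       # R_{Z gamma},  Eq. (1.5)
    return r_gl, r_gn, r_wg, r_zg
```

For any positive inputs the branching ratios cancel in the combination ${\cal R}_\ell\cdot{\cal R}_{\gamma ,\ell}$, so the sum rule (1.6) holds identically; it tests the mutual consistency of the independently measured ratios rather than any dynamical input.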
${\cal R}_{\gamma ,\ell}$ can be measured from an event sample with at least one isolated, high transverse momentum electron (muon) and one isolated high $p_T$ photon. ${\cal R}_{\gamma ,\nu}$ can be determined from a data sample extracted with a missing transverse energy trigger and an additional isolated hard photon. Finally, ${\cal R}_{W\gamma}$ and ${\cal R}_{Z\gamma}$ can be obtained from the inclusive sample of $W$ and $Z$ boson candidates, respectively. Many experimental uncertainties, for example those associated with lepton and photon detection efficiencies, or the uncertainty in the integrated luminosity, are expected to cancel, at least partially, in the cross section ratios. ${\cal R}_{W\gamma}$ and ${\cal R}_{Z\gamma}$ are independent of the vector boson branching ratios, and thus represent directly the ratio of $W^\pm\gamma$ to $W^\pm$, and $Z\gamma$ to $Z$, cross sections. Since the cross section for $W/Z$ production is much larger than the rate for $W\gamma / Z\gamma$ production, the statistical error of ${\cal R}_{W\gamma}$ and ${\cal R}_{Z\gamma}$ is expected to be significantly smaller than that of ${\cal R}_{\gamma ,\ell}$ and ${\cal R}_{\gamma ,\nu}$. In this paper we study the theoretical aspects of the cross section ratios shown in Eqs.~(1.2) -- (1.5). Our calculations are based on results presented in Refs.~\BB,~\BaBe,~\BZ, and~\JO. Cross sections in the Born approximation are obtained by calculating helicity amplitudes for the complete processes $q\bar q'\rightarrow W^\pm\gamma\rightarrow\ell^\pm\nu\gamma$, $q\bar q\rightarrow Z\gamma\rightarrow\ell^+\ell^-\gamma$, and $q\bar q\rightarrow Z\gamma\rightarrow\nu\bar\nu\gamma$, including the effects of timelike photon exchange diagrams and bremsstrahlung from the final state lepton line. Finite $W/Z$ width effects, and correlations between the final state leptons originating from $W/Z$ decay, are also fully incorporated in our calculations. 
In contrast, next-to-leading log QCD corrections to $W\gamma$ and $Z\gamma$ production are at present only known in the limit of stable, on-shell weak bosons\rlap.\refmark{\JO} In Section~2 we consider the cross section ratios (1.2) -- (1.5) within the framework of the SM at Tevatron energies. Experimentally, one measures the ratios $$\eqalignno{ \widetilde{\cal R}_{\gamma ,\ell} &= {\sigma(\ell^+\ell^-\gamma) \over\sigma(\ell^\pm\nu\gamma)}\, , & (1.7a) \cr \widetilde{\cal R}_{\gamma ,\nu} &= {\sigma(\bar\nu\nu\gamma) \over\sigma(\ell^\pm\nu\gamma)}\, , & (1.7b) \cr \widetilde{\cal R}_{W\gamma} &= {\sigma(\ell^\pm\nu\gamma) \over\sigma(\ell^\pm\nu)}\, , & (1.7c) \cr \noalign{\hbox{and}} \widetilde{\cal R}_{Z\gamma} &= {\sigma(\ell^+\ell^-\gamma) \over\sigma(\ell^+\ell^-)}\, , & (1.7d)\cr} $$ rather than ${\cal R}_{\gamma ,\ell}$, ${\cal R}_{\gamma ,\nu}$, and ${\cal R}_{V\gamma}$ ($V=W,\,Z$) directly. In order to isolate the cross section ratios of Eqs.~(1.2) -- (1.5), appropriate cuts have to be imposed to suppress the contributions of final state bremsstrahlung (radiative $W/Z$ decays) to the $\ell\nu\gamma$ and $\ell\ell\gamma$ final states. These cuts are described in Section~2.1, together with other details of our calculation. In Section~2.2, the ratios are studied as a function of the minimum photon transverse momentum, $p_T^{\rm min}(\gamma)$, and the minimum $V\gamma$ ($V=W,\,Z$) invariant mass, $m_{\rm min}$. As a function of $p_T^{\rm min}(\gamma)$, ${\cal R}_{\gamma ,\ell}$ and ${\cal R}_{\gamma ,\nu}$ are shown to directly reflect the radiation zero which is present in $W\gamma$ production in the SM\rlap.\refmark{\RADZ} In Section~2.2 we also investigate how the ratios depend on the cuts imposed on the final state particles.
The systematic and theoretical uncertainties of the cross section ratios originating from the parametrization of the parton distribution functions, the choice of the factorization scale $Q^2$, and higher order QCD corrections are studied in Section~2.3. The size of the QCD corrections can be reduced significantly by imposing a central jet veto cut. The theoretical and systematic uncertainties in the cross section ratios are found to be well under control. ${\cal R}_{\gamma ,\ell}$ and ${\cal R}_{\gamma , \nu}$ are significantly less sensitive to these uncertainties than ${\cal R}_{W\gamma}$ and ${\cal R}_{Z\gamma}$. The $W^\pm\gamma$ and $Z\gamma$ cross section ratios thus possess the same advantages which make the ratio of $W$ to $Z$ boson cross sections, Eq.~(1.1), a powerful tool for probing new physics, {\it e.g.}, the extraction of a model independent limit on the top quark mass\rlap.\refmark{\Rat,\Rexp} In Section~3 we study how non-standard three gauge boson couplings affect the cross section ratios. We also estimate the sensitivity limits for anomalous three vector boson couplings which one can hope to achieve from data accumulated in the current Tevatron run, taking into account the systematic uncertainties in the ratios. Section~4, finally, contains our conclusions. \chapter{Standard Model $W^\pm\gamma$ and $Z\gamma$ Cross Section Ratios} \section{Preliminaries} The signal in $p\bar p\rightarrow W^\pm\gamma /Z\gamma$ consists of an isolated high transverse momentum ($p_T$) photon and a $W^\pm$ or $Z$ boson which may decay either hadronically or leptonically. The hadronic $W$ and $Z$ decays will be difficult to observe due to the QCD 2~jet + $\gamma$ background\rlap.\refmark{\jjg} In the following we therefore focus on the leptonic decay modes of the weak bosons.
The signal for $W^\pm\gamma$ production is then $$ p\bar p\rightarrow\ell^\pm p\hskip-6pt/_T\gamma, \eqno (2.1) $$ where $\ell=e,\,\mu$ (we neglect the $\tau$ decay mode of the $W/Z$) and the missing transverse momentum $p\hskip-6pt/_T$ results from the nonobservation of the neutrino from the $W$ decay. The signal for $Z\gamma$ production is $$ p\bar p\rightarrow\ell^+\ell^-\gamma \eqno (2.2) $$ if the $Z$ boson decays into a pair of charged leptons, and $$ p\bar p\rightarrow p\hskip-6pt/_T\gamma \eqno (2.3) $$ if the $Z$ boson decays into a pair of neutrinos. Besides the standard Feynman diagrams for $q\bar q'\rightarrow W\gamma$ and $q\bar q\rightarrow Z\gamma$, final state bremsstrahlung diagrams contribute to (2.1) and (2.2). We incorporate their effects, together with those from timelike photon exchange diagrams contributing to (2.2), and finite $W/Z$ width effects, in our numerical simulations of the lowest order cross sections. All cross sections and dynamical distributions are evaluated using parton level Monte Carlo programs. In order to simulate the finite acceptance of detectors we impose, unless stated otherwise, the following set of transverse momentum, pseudorapidity ($\eta$), and separation cuts: $$\matrix{ p_T(\gamma)> 10~{\rm GeV}, & \qquad & |\eta(\gamma)|<3, \cr p_T(\ell)> 15~{\rm GeV}, & \qquad & |\eta(\ell)|<3.5, \cr p\hskip-6pt/_T> 15~{\rm GeV}, & \qquad & \Delta R(\ell\gamma) > 0.7. \cropen{10pt}} \eqno (2.4) $$ Here, $$ \Delta R(\ell\gamma)=\left[\left(\Delta\Phi_{\ell\gamma}\right)^2 + \left(\Delta\eta_{\ell\gamma}\right)^2\right]^{1/2} \eqno (2.5) $$ is the charged lepton--photon separation in the pseudorapidity--azimuthal angle plane. The cuts listed in Eq.~(2.4) approximate the phase space region covered by the CDF and D$0\hskip-6pt/$ detectors at the Tevatron\rlap.
\refmark{\CDF,\priv} Due to the large separation cut, contributions from the final state bremsstrahlung (radiative $W/Z$ decay) diagrams to (2.1) and (2.2) are strongly suppressed. They can be eliminated almost completely by imposing the following additional cuts on the invariant mass of the lepton pair and the $\ell\ell\gamma$ system: $$ m_{\ell\ell} > 50~{\rm GeV,} \hskip 1.cm m_{\ell\ell\gamma}>100~{\rm GeV} \eqno (2.6) $$ in reaction~(2.2) (Ref.~\BaBe) and $$ m_T(\ell\gamma;p\hskip-6pt/_T)>90~{\rm GeV} \eqno (2.7) $$ in reaction~(2.1) (Ref.~\BB) where $$ m^2_T(\ell\gamma;p\hskip-6pt/_T)=\left [\left (m^2_{\ell\gamma}+|\bold p_T(\gamma) +\bold p_T(\ell)|^2\right)^{1/2}+p\hskip-6pt/_T\right ]^2-\left | \bold p_T(\gamma)+\bold p_T(\ell) + \bold{p\hskip-6pt/}_T\right |^2 \eqno (2.8) $$ is the square of the cluster transverse mass. In Eq.~(2.8), $m_{\ell\gamma}$ denotes the invariant mass of the $\ell\gamma$ pair. The cuts listed in Eqs.~(2.6) and~(2.7) ensure that the experimentally measured cross section ratios of Eq.~(1.7) virtually coincide with the ratios listed in Eqs.~(1.2) -- (1.5). Therefore, we shall not discriminate between the two sets of ratios subsequently. Uncertainties in the energy measurements of the charged leptons and the photon are taken into account in our numerical simulations by Gaussian smearing of the particle momenta with $$ {\sigma\over E}=\cases{ 0.135/\sqrt{E_T}~\oplus~0.02 & for $|\eta| < 1.1$ \cropen{10pt} 0.28/\sqrt{E}~\oplus~0.02 & for $ 1.1<|\eta|<2.4 $ \cropen{10pt} 0.25/\sqrt{E}~\oplus~0.02 & for $2.4<|\eta|<4.2$} \eqno (2.9) $$ corresponding to the CDF detector resolution\rlap.\refmark{\CDF} In Eq.~(2.9), $E$ ($E_T$) is the energy (transverse energy) of the particle and the symbol $\oplus$ signifies that the constant term is added in quadrature in the resolution. 
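For concreteness, the acceptance cuts of Eq.~(2.4), the separation of Eq.~(2.5), the cluster transverse mass of Eq.~(2.8), and the Gaussian smearing of Eq.~(2.9) can be sketched as follows; this is a minimal illustration of the formulas, not the Monte Carlo code used for our calculations:

```python
import math
import random

def delta_r(eta1, phi1, eta2, phi2):
    """Eq. (2.5): separation in the pseudorapidity-azimuthal angle plane."""
    dphi = abs(phi1 - phi2) % (2.0 * math.pi)
    if dphi > math.pi:
        dphi = 2.0 * math.pi - dphi
    return math.hypot(eta1 - eta2, dphi)

def passes_acceptance(pt_gam, eta_gam, pt_lep, eta_lep, pt_miss, dr_lg):
    """Transverse momentum, pseudorapidity, and separation cuts of Eq. (2.4)."""
    return (pt_gam > 10.0 and abs(eta_gam) < 3.0 and
            pt_lep > 15.0 and abs(eta_lep) < 3.5 and
            pt_miss > 15.0 and dr_lg > 0.7)

def cluster_mt(m_lg, pt_lg_vec, pt_miss_vec):
    """Cluster transverse mass, Eq. (2.8); pt_lg_vec is the vector sum
    p_T(gamma) + p_T(l), and pt_miss_vec the missing transverse momentum."""
    et_lg = math.sqrt(m_lg**2 + pt_lg_vec[0]**2 + pt_lg_vec[1]**2)
    pt_miss = math.hypot(pt_miss_vec[0], pt_miss_vec[1])
    px = pt_lg_vec[0] + pt_miss_vec[0]
    py = pt_lg_vec[1] + pt_miss_vec[1]
    mt2 = (et_lg + pt_miss)**2 - px**2 - py**2
    return math.sqrt(max(mt2, 0.0))

def smeared_energy(e, e_t, eta, rng=random.Random(1)):
    """Gaussian smearing with the CDF-like resolution of Eq. (2.9); the
    constant term is added in quadrature to the sampling term."""
    if abs(eta) < 1.1:
        frac = math.hypot(0.135 / math.sqrt(e_t), 0.02)
    elif abs(eta) < 2.4:
        frac = math.hypot(0.28 / math.sqrt(e), 0.02)
    else:  # 2.4 < |eta| < 4.2
        frac = math.hypot(0.25 / math.sqrt(e), 0.02)
    return rng.gauss(e, frac * e)
```

A quick sanity check: for a massless $\ell\gamma$ pair recoiling exactly against the missing momentum, the cluster transverse mass reduces to twice the common transverse momentum.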
The overall resolution of the electromagnetic calorimeter of the D$0\hskip-6pt/$ detector\refmark{\John} ($\approx 0.15/\sqrt{E}$) is better than that of the CDF detector. Smearing effects are therefore less pronounced if the D$0\hskip-6pt/$ parametrization for $\sigma/E$ is used. The SM parameters used in our calculations are $\alpha=\alpha(m_Z^2) =1/128$, $\alpha_s(m_Z^2)=0.12$ (Ref.~\BC), $m_Z=91.1$~GeV, and $\sin^2\theta_W=0.23$. For the parton distribution functions we use the HMRSB set\refmark{\HMRS} with the scale $Q^2$ given by the parton center of mass energy squared, $\hat s$, unless stated otherwise. \section{Basic Properties of the Cross Section Ratios} Using the results obtained in Refs.~\BB\ and~\BaBe\ it is straightforward to calculate the cross section ratios (1.2) -- (1.5) within the SM. If the ratios are considered as a function of the minimum transverse momentum of the photon, $p_T^{\rm min}(\gamma)$, or the minimum weak boson -- photon invariant mass, $m_{\rm min}$, they reflect information carried by the $p_T(\gamma)$ and $m_{V\gamma}$ ($V=W,\,Z$) distributions. In the following we shall therefore study the cross section ratios listed in Eqs.~(1.2) -- (1.5) as a function of these parameters. We shall also investigate in detail how the ratios are influenced by the cuts imposed on the final state particles. Figure~1a shows ${\cal R}_{\gamma ,\ell}$ at the Tevatron as a function of $p_T^{\rm min}(\gamma)$ for the cuts summarized in Eqs.~(2.4) -- (2.7). The ratio of $Z\gamma$ to $W^\pm\gamma$ cross sections (solid line) is seen to increase rapidly with the minimum photon transverse momentum from ${\cal R}_{\gamma ,\ell}\approx 0.3$ at $p_T^{\rm min}(\gamma)=10$~GeV to ${\cal R}_{\gamma ,\ell}\approx 1.2$ at $p_T^{\rm min}(\gamma)=200$~GeV. 
This is in sharp contrast to the ratio $$ {\cal R}_{j,\ell} = {B(Z\rightarrow\ell^+\ell^-)\cdot\sigma(Zj)\over B(W\rightarrow\ell\nu)\cdot\sigma(W^\pm j)}~, \eqno (2.10) $$ which is shown versus the minimum jet transverse momentum, $p_T^{\rm min}(j)$, for the same cuts [with the photon replaced by the jet in Eq.~(2.4)] by the dashed line in Fig.~1a. ${\cal R}_{j,\ell}$ remains in the range from 0.10 to 0.15 over the whole range of $p_T^{\rm min}(j)$ considered. The slight increase with the minimum jet transverse momentum is due to the different $x$ behavior of the up- and down-type quark distribution functions. The ratio of $Zj$ to $W^\pm j$ cross sections is thus very similar to ${\cal R}_\ell$ [see Eq.~(1.1)], with the $Zj$ production rate suppressed by approximately a factor~10 with respect to the $W^\pm j$ cross section. On the other hand, the $Z\gamma$ production rate is at most a factor~3 smaller than the $W^\pm\gamma$ cross section in the SM. At large photon transverse momenta, the rates for $W^\pm\gamma$ and $Z\gamma$ production are similar in magnitude. The enhancement of the $Z\gamma$ cross section relative to the $W^\pm\gamma$ production rate can be understood as a consequence of the radiation zero present in the SM $q\bar q'\rightarrow W\gamma$ matrix elements\rlap,\refmark{\RADZ} which suppresses $W\gamma$ production. For $u\bar d\rightarrow W^+\gamma$ ($d\bar u\rightarrow W^-\gamma$) all contributing helicity amplitudes vanish for $\cos\Theta=-1/3$ (+1/3), where $\Theta$ is the angle between the quark and the photon in the parton center of mass frame. As a result, the photon rapidity distribution, $d\sigma/dy^*_\gamma$, in the $W\gamma$ rest frame develops a dip at zero rapidity when one sums over the $W$ charges\rlap,\refmark{\BB,\BZ} thus reducing the cross section in the central rapidity region. 
In contrast, there is no radiation zero present in $Z\gamma$ production, and the $y^*_\gamma$ distribution peaks at $y^*_\gamma=0$ for $q\bar q\rightarrow Z\gamma$. For increasing photon transverse momenta, events become more central in rapidity. The reduction of the $W^\pm\gamma$ cross section for small rapidities originating from the radiation zero thus becomes more pronounced at high $p_T(\gamma)$. This causes the photon transverse momentum distribution of $q\bar q'\rightarrow W^\pm\gamma$ to fall significantly faster than the $p_T(\gamma)$ spectrum of $q\bar q\rightarrow Z\gamma$, which immediately translates into a sharp increase of ${\cal R}_{\gamma ,\ell}$ with $p_T^{\rm min}(\gamma)$. As mentioned before, the cuts of Eqs.~(2.4) -- (2.7) have been used in order to obtain ${\cal R}_{\gamma ,\ell}$ shown in Fig.~1a. It is important to know how the slope of ${\cal R}_{\gamma ,\ell}$ versus $p_T^{\rm min}(\gamma)$ changes if the geometrical acceptances are varied. In Fig.~1b we display the variation of the cross section ratio, normalized to the ratio obtained with the cuts of Eqs.~(2.4) -- (2.7), $\Delta {\cal R}_{\gamma ,\ell}/{\cal R}_{\gamma ,\ell}$, if these cuts are changed. Only one parameter is varied at a time. The sensitivity of ${\cal R}_{\gamma ,\ell}$ to the cuts imposed in general decreases for increasing values of $p_T^{\rm min}(\gamma)$. Due to the radiation zero, the $W^\pm\gamma$ cross section is reduced more significantly than the $Z\gamma$ production rate, and ${\cal R}_{\gamma ,\ell}$ increases, if the photon is required to be more central. This is illustrated by the solid line in Fig.~1b, which shows the variation of ${\cal R}_{\gamma ,\ell}$ if the pseudorapidity cut is changed from $|\eta(\gamma)|<3$ to $|\eta(\gamma)|<1$. The shoulder in the region between $p_T^{\rm min}(\gamma)\approx 30$~GeV and $p_T^{\rm min}(\gamma) \approx 70$~GeV can also be traced back to the radiation zero. 
For small values of the photon transverse momentum, the $\eta(\gamma)$ distribution is very flat in the $W^\pm\gamma$ case. At large $p_T(\gamma)$, the photon rapidity spectrum develops a slight dip at $\eta(\gamma)=0$ qualitatively similar to that in $d\sigma/dy^*_\gamma$. This leads to a shoulder in $\Delta {\cal R}_{\gamma ,\ell}/{\cal R}_{\gamma ,\ell}$ if the photon rapidity cut is reduced from $|\eta(\gamma)|<3$ to $|\eta(\gamma)|<1$. If the photon rapidity range is reduced even further, this shoulder progresses into a local maximum, located at $p_T^{\rm min}(\gamma)\approx 50$~GeV. On the other hand, a more stringent cut on the charged lepton pseudorapidity, $|\eta(\ell)|<2$, slightly reduces the cross section ratio (dashed line). Changes in the lepton--photon separation affect ${\cal R}_{\gamma ,\ell}$ very little, as demonstrated by the dot-dashed line in Fig.~1b. The dotted line in Fig.~1b, finally, shows the effect of increasing the $p_T(\ell)$ and $p\hskip-6pt/_T$ cuts from 15~GeV to 25~GeV. It exhibits an interesting structure in the region around $p_T^{\rm min}(\gamma)=m_W/2\approx 40$~GeV, where $m_W$ is the $W$ boson mass, which originates from the difference in the coupling of the leptons to $W$ and $Z$ bosons, and the Jacobian peak in the lepton $p_T$ distribution. Due to the $V-A$ coupling of the leptons to the $W$ boson, the charged lepton tends to be emitted in the direction of the parent $W$, thus picking up most of its momentum. Hence, the $p_T(\ell)$ distribution is significantly harder than the $p\hskip-6pt/_T$ spectrum in $W^\pm\gamma$ production, whereas the transverse momentum distributions of the leptons in $Z\gamma$ production, as a result of the almost pure axial vector coupling of the charged leptons to the $Z$ boson, almost coincide.
Increasing the $p\hskip-6pt/_T$ and $p_T(\ell)$ cut from 15~GeV to 25~GeV therefore reduces the $W^\pm\gamma$ cross section more than the $Z\gamma$ production rate, leading to an increase of ${\cal R}_{\gamma ,\ell}$. In the region $p_T(\gamma)\mathrel{\mathpalette\atversim>} m_W/2$, the photon tends to recoil against (one of) the charged lepton(s). Because of the Jacobian peak in the $p_T(\ell)$ distribution, the sensitivity of ${\cal R}_{\gamma ,\ell}$ is strongly enhanced around $p_T^{\rm min}(\gamma) =40$~GeV. In the cross section ratio, the effect described above leads to a rather well defined kink in ${\cal R}_{\gamma, \ell}$ versus the minimum photon transverse momentum at $p_T^{\rm min}(\gamma)\approx m_W/2$, as demonstrated by the dotted line in Fig.~1a. At large values of $p_T^{\rm min}(\gamma)$, ${\cal R}_{\gamma ,\ell}$ is almost independent of the cuts imposed on the final state fermions. This ensures that the steep rise of ${\cal R}_{\gamma ,\ell}$ with $p_T^{\rm min}(\gamma)$ is not an artifact of the specific set of cuts applied. Although we have varied only one cut at a time, the curves in Fig.~1b correctly reflect the global sensitivity of ${\cal R}_{\gamma ,\ell}$ to the cuts imposed. For example, changing the lepton rapidity cut from $|\eta(\ell)|<3.5$ to $|\eta(\ell)|<2$, and the $p_T(\ell)$ and $p\hskip-6pt/_T$ cut from 15~GeV to 25~GeV at the same time, gives a result for $\Delta{\cal R}_{\gamma ,\ell}/{\cal R}_{\gamma ,\ell}$ which is quite similar to that represented by the dotted line in Fig.~1b. For increasing lepton transverse momenta, events are automatically more central in rapidity. A more stringent rapidity cut in addition to an increased $p_T$ cut therefore changes the result only slightly. The cross section ratio ${\cal R}_{\gamma ,\ell}$ as a function of the minimum invariant mass of the weak boson -- photon system, $m_{\rm min}$, for Tevatron energies and the cuts of Eqs.~(2.4) -- (2.7) (solid line) is shown in Fig.~2a.
Due to threshold effects originating from the $W/Z$ mass difference, ${\cal R}_{\gamma ,\ell}$ drops first, before it starts to slowly rise. For most $W^\pm\gamma$ events with large $W\gamma$ invariant mass, the photon transverse momentum is fairly small, whereas $|\eta(\gamma)|$ is large. The radiation zero therefore does not manifest itself in ${\cal R}_{\gamma ,\ell}$ if the cross section ratio is considered as a function of $m_{\rm min}$. At hadron colliders the $W\gamma$ invariant mass cannot be determined unambiguously because the neutrino from the $W$ decay is not observed. If the transverse momentum of the neutrino is identified with the missing $p_T$ of a given $W\gamma$ event, the unobservable longitudinal neutrino momentum can be reconstructed, albeit with a twofold ambiguity, by imposing the constraint that the neutrino and the charged lepton four-momenta combine to form the $W$ rest mass\rlap.\refmark{\Cort} On an event by event basis it is impossible to determine which of the two solutions corresponds to the actual neutrino longitudinal momentum. In the following we therefore use both solutions with equal probability when we consider cross section ratios as a function of the $W\gamma$ invariant mass. This is the most conservative approach possible. The cross section ratio ${\cal R}_{\gamma ,\ell}$ for the reconstructed $W\gamma$ invariant mass is shown by the dashed line in Fig.~2a. Only in the threshold region are the ratios for the true and reconstructed mass similar. Figure~2b displays the variation of ${\cal R}_{\gamma ,\ell}$ versus $m_{\rm min}$, using the reconstructed $W\gamma$ invariant mass, if the cuts of Eqs.~(2.4) -- (2.7) are changed, normalized to the cross section ratio obtained with these cuts. As demonstrated by the dashed and dash dotted curves, a more stringent rapidity cut on the charged leptons and a less severe separation cut have little influence on the cross section ratio. 
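As an aside, the $W$ mass constraint just described amounts to solving a quadratic equation for the longitudinal neutrino momentum. A minimal sketch, assuming a massless charged lepton and a fixed nominal $W$ mass (inputs chosen for illustration, not specific to this paper):

```python
import math

def neutrino_pz(plx, ply, plz, pnx, pny, m_w=80.4):
    """Solve (p_l + p_nu)^2 = m_W^2 for the longitudinal neutrino momentum.
    Returns the two solutions of the quadratic (the twofold ambiguity);
    when the discriminant is negative (transverse mass above m_W) the
    degenerate real solution is returned twice."""
    e_l = math.sqrt(plx**2 + ply**2 + plz**2)   # massless lepton
    pt_l2 = plx**2 + ply**2
    a = 0.5 * m_w**2 + plx * pnx + ply * pny
    disc = a**2 - pt_l2 * (pnx**2 + pny**2)
    root = e_l * math.sqrt(max(disc, 0.0))
    return ((a * plz - root) / pt_l2, (a * plz + root) / pt_l2)
```

Both solutions reproduce the imposed invariant mass exactly, which is why, absent further information, they are used with equal probability.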
Changes in the transverse momentum and photon rapidity cuts, on the other hand, have a larger effect. If the $p_T(\ell)$ and $p\hskip-6pt/_T$ cut of Eq.~(2.4) is increased to 25~GeV, the relative change in ${\cal R}_{\gamma ,\ell}$ grows very rapidly with $m_{\rm min}$ (dotted line). Increasing the minimum lepton $p_T$ selects a phase space region where the two solutions of the longitudinal neutrino momentum tend to be closer together, so that ${\cal R}_{\gamma ,\ell}$ resembles more closely the cross section ratio obtained for the true $W\gamma$ invariant mass. Reducing the photon rapidity range covered increases the cross section ratio by 50~--~70\% (solid line). The results presented in Figs.~1b and~2b have been based on the lowest order matrix elements of the contributing processes. As a result, the $W\gamma$ and $Z\gamma$ system is produced with zero transverse momentum. Higher order QCD corrections give the $W\gamma /Z\gamma$ system a finite $p_T$, and thus may change how the cross section ratio is affected when the $p_T(\ell)$ and $p\hskip-6pt/_T$ cuts are varied. In order to take these effects properly into account, a complete calculation of the $W\gamma /Z\gamma$ transverse momentum distribution, including soft gluon resummation effects, is needed. At present, such a calculation is not available. However, one expects that the shapes of the $W\gamma$ and $Z\gamma$ transverse momentum distributions are similar to those of the $W$ and $Z$ boson $p_T$ distributions.
To roughly estimate how our predictions may change if the finite $p_T$ of the weak boson -- photon system is taken into account, we have recalculated $\Delta {\cal R}_{\gamma ,\ell}/{\cal R}_{\gamma ,\ell}$, smearing the transverse momentum components of the final state particles using the experimental $p_T$ distribution of the $W$ boson\rlap.\refmark{\Wpt} Possible differences in the shapes of $d\sigma/dp_T(W\gamma)$ and $d\sigma/dp_T(Z\gamma)$, and the sensitivity to details of the $p_T$ spectrum, are simulated by using different fits to the observed $W$ transverse momentum distribution. Each fit, appropriately normalized, is then identified with one of the transverse momentum distributions. The non-zero transverse momentum of the $W\gamma /Z\gamma$ system turns out to shift the dotted curves in Figs.~1b and~2b by typically a few percent. The shapes of the curves, however, remain almost unchanged. So far, we have only considered the ratio of $Z\gamma$ to $W^\pm\gamma$ cross sections for $Z$ decays into charged leptons, ${\cal R}_{\gamma , \ell}$. The cuts of Eqs.~(2.6) and~(2.7) efficiently suppress photon radiation from final state leptons, and for equal photon $p_T$ and rapidity cuts $$ {\cal R}_{\gamma ,\nu}\approx {B(Z\rightarrow\bar\nu\nu)\over B(Z\rightarrow\ell^+\ell^-)}\cdot{\cal R}_{\gamma ,\ell}\approx 6\cdot {\cal R}_{\gamma ,\ell}\,. \eqno (2.11) $$ The basic properties of ${\cal R}_{\gamma ,\nu}$ and ${\cal R}_{\gamma ,\ell}$ are thus the same. In particular, ${\cal R}_{\gamma ,\nu}$ also rises steeply with the minimum photon $p_T$, reflecting the radiation zero present in $W\gamma$ production in the SM. The lowest order prediction for ${\cal R}_{V\gamma}$ ($V=W,\, Z$) at the Tevatron is shown in Fig.~3, using the cuts summarized in Eqs.~(2.4) -- (2.7). The solid lines give ${\cal R}_{W\gamma}$, whereas the dashed curves display the corresponding ratio for the $Z\gamma$ case. 
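The factor of roughly six in Eq.~(2.11) is nothing but the ratio of $Z$ branching fractions; with the familiar values $B(Z\rightarrow\bar\nu\nu)\approx 20\%$ (summed over the three neutrino flavors) and $B(Z\rightarrow\ell^+\ell^-)\approx 3.4\%$ per charged lepton flavor -- standard numbers, not inputs quoted in this paper -- the arithmetic is:

```python
# Ratio of Z branching fractions entering Eq. (2.11).  The branching
# fractions below are the familiar values (invisible width ~20%, ~3.4%
# per charged lepton flavor), not numbers quoted in this paper.
b_z_inv = 0.200   # B(Z -> nu nubar), summed over three neutrino flavors
b_z_ll = 0.0337   # B(Z -> l+ l-), one charged lepton flavor
factor = b_z_inv / b_z_ll   # approximately 5.9, the "6" of Eq. (2.11)
```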
In order to calculate the $Z$ boson cross section, $\sigma(Z)$, in ${\cal R}_{Z\gamma}$, we have assumed the lepton pair invariant mass to be in the range $65~{\rm GeV}<m_{\ell\ell}<115$~GeV. Photon exchange contributions and finite $Z$ width effects are fully included in our calculation. Figure~3a shows the two ratios versus $p_T^{\rm min}(\gamma)$. Due to the radiation zero present in the SM, ${\cal R}_{W\gamma}$ is considerably smaller than ${\cal R}_{Z\gamma}$, and drops faster with increasing values of the minimum photon $p_T$. In Fig.~3b the cross section ratios are plotted versus the minimum weak boson -- photon invariant mass $m_{\rm min}$. As a result of the twofold ambiguity in the reconstruction of the longitudinal neutrino momentum, ${\cal R}_{W\gamma}$ decreases more slowly with $m_{\rm min}$ than ${\cal R}_{Z\gamma}$. The shape of ${\cal R}_{V\gamma}$ versus $p_T^{\rm min}(\gamma)$ and $m_{\rm min}$ changes very little if the cuts on the final state leptons are varied. The cross section ratios typically vary by 10 -- 30\%. For small values of $p_T^{\rm min}(\gamma)$ and $m_{\rm min}$ the changes in the cross sections cancel almost exactly in the ratio. \section{Theoretical and Systematic Uncertainties} Higher order QCD corrections, and the choice of the parametrization of the parton distribution functions and the factorization scale $Q^2$, are the primary sources of uncertainties in the calculation of cross sections in hadronic collisions. It is therefore vital to investigate their impact on the cross section ratios (1.2) -- (1.5). The sensitivity of the ratios to the parametrization of the parton distribution functions is illustrated in Figs.~4 and~5 for five representative sets: the MRSS0 and MRSD-- distributions of Ref.~\MRS, the GRVLO\rlap, \refmark{\GRV} the MTLO\rlap,\refmark{\MT} and the DO1.1\refmark{\Jeff} parametrizations.
The MRSS0 and MRSD-- sets take into account new NMC\refmark{\NMC} and CCFR\refmark{\CCFR} data, which suggest valence and sea quark distributions at low $x$ that lead to considerably larger cross sections than previous fits. Figure~4 shows the variation of ${\cal R}_{\gamma ,\ell}$ versus $p_T^{\rm min}(\gamma)$ (Fig.~4a) and $m_{\rm min}$ (Fig.~4b), normalized to the cross section ratio obtained with the HMRSB set of distribution functions. Figure~5 displays $\Delta {\cal R}_{W\gamma}/{\cal R}_{W\gamma}$ (Fig.~5a) and $\Delta {\cal R}_{Z\gamma}/{\cal R}_{Z\gamma}$ versus $p_T^{\rm min}(\gamma)$ (Fig.~5b). Although the $Z\gamma$ and $W^\pm\gamma$ total cross sections vary individually by up to 25\% with the parametrization of the parton distributions, ${\cal R}_{\gamma ,\ell}$ is found to be very stable. For ${\cal R}_{\gamma ,\ell}$ as a function of $p_T^{\rm min}(\gamma)$ ($m_{\rm min}$), the changes are at most 8\% (13\%) in magnitude for the parametrizations used (see Fig.~4). ${\cal R}_{W\gamma}$ and ${\cal R}_{Z\gamma}$ are somewhat more sensitive. Here the ratios vary by up to 18\% and 12\%, respectively, if considered as a function of the minimum photon transverse momentum (see Fig.~5). The variation of ${\cal R}_{V\gamma}$ ($V=W,\, Z$) versus $m_{\rm min}$ with the parametrization of the parton distribution functions is qualitatively and quantitatively similar to that of ${\cal R}_{\gamma ,\ell}$, and is therefore not shown.
In tree level calculations, one usually chooses a typical energy scale of the hard scattering process, such as the parton center of mass energy squared, $\hat s$, for $Q^2$. If the cross section ratios are calculated to all orders in $\alpha_s$, the result is expected to be independent of $Q^2$. For small values of the minimum transverse momentum of the photon and the weak boson -- photon invariant mass, all cross section ratios are quite insensitive to variations in $Q^2$. At large $p_T^{\rm min}(\gamma)$, however, the changes can be quite large for ${\cal R}_{W\gamma}$ and ${\cal R}_{Z\gamma}$, as illustrated by the solid lines in Figs.~6a and~6b. The variations of the individual cross sections nevertheless cancel to a very good approximation in ${\cal R}_{\gamma ,\ell}$ (Fig.~6c). For $Q^2=100\cdot m_W^2$, the changes in the cross section ratios with respect to $Q^2=\hat s$ are always smaller than 10\%. Results similar to those shown in Fig.~6 are also obtained for ${\cal R}_{W\gamma}$ and ${\cal R}_{Z\gamma}$ as a function of $m_{\rm min}$. ${\cal R}_{\gamma ,\ell}$ is somewhat more sensitive to the choice of $Q^2$ if considered as a function of $m_{\rm min}$ than the $Z\gamma$ to $W^\pm\gamma$ cross section ratio versus $p_T^{\rm min}(\gamma)$ shown in Fig.~6c. The sensitivity of ${\cal R}_{W\gamma}$ and ${\cal R}_{Z\gamma}$ to the choice of $Q^2$ is expected to be reduced if next-to-leading log (NLL) QCD corrections are taken into account. NLL QCD corrections to $q\bar q\rightarrow Z\gamma$ and $q\bar q'\rightarrow W\gamma$ have been calculated recently in the framework of the SM in the limit of a stable, on-shell $W/Z$ boson\rlap.\refmark{\JO} Naively one might expect that the cross section ratios of Eqs.~(1.2) -- (1.5) change very little if higher order QCD corrections are incorporated, similar to the ratio of $W^\pm$ and $Z$ cross sections, ${\cal R}_\ell$, of Eq.~(1.1) (Ref.~\Willy).
Using the results of Refs.~\JO\ and~\BR, we have investigated the influence of NLL QCD corrections on the cross section ratios. Our results are shown in Figs.~7 and~8. In order to perform a meaningful comparison, the cross sections for $q\bar q'\rightarrow W^\pm\gamma$ and $q\bar q\rightarrow Z\gamma$ in the Born approximation are also calculated in the limit of a stable, on-shell $W/Z$ boson. To roughly simulate detector response, the following transverse momentum and rapidity cuts are imposed: $$ p_T(\gamma)>10~{\rm GeV},\hskip 3.mm |\eta(\gamma)|<1, \hskip 3.mm {\rm and} \hskip 3.mm |y(V)|<2.5. \eqno (2.12) $$ Here, $y(V)$ ($V=W,\,Z$) is the $W/Z$ rapidity. We also require the photon to be isolated by imposing a cut on the total hadronic energy in a cone of size $\Delta R=0.7$ about the direction of the photon of $$ \sum_{\Delta R<0.7} E_{\hbox{\tenrm had}}<0.15\, E_\gamma, \eqno (2.13) $$ where $E_\gamma$ is the photon energy. This requirement strongly reduces photon bremsstrahlung from final state quarks and gluons. The results shown in Figs.~7 and~8 demonstrate that, in contrast to ${\cal R}_\ell$, the NLL QCD corrections to the cross sections only partially cancel in ${\cal R}_{\gamma ,\ell}$, ${\cal R}_{W\gamma}$, and ${\cal R}_{Z\gamma}$, in particular when these cross section ratios are considered as a function of $p_T^{\rm min}(\gamma)$. Since the QCD corrections tend to wash out the SM radiation zero in $q\bar q'\rightarrow W^\pm\gamma$, ${\cal R}_{W\gamma}$ is significantly more sensitive to NLL order effects than ${\cal R}_{Z\gamma}$ (see Fig.~8a). At large values of $p_T^{\rm min}(\gamma)$, ${\cal R}_{W\gamma}$ increases by as much as 40\% if QCD corrections are included. On the other hand, ${\cal R}_{\gamma ,\ell}$ is reduced by typically 15~--~20\% by ${\cal O}(\alpha_s)$ corrections (solid line in Fig.~7a).
Higher order QCD effects are known to change the shapes of the $p_T(\gamma)$ and invariant mass distributions in $W\gamma$ and $Z\gamma$ production\rlap.\refmark{\JO} This effect can be traced to the quark gluon fusion processes $qg\rightarrow W\gamma q'$ and $qg\rightarrow Z\gamma q$, which carry an enhancement factor $\log^2(p_T^2(\gamma)/m_V^2)$ ($V=W,\,Z$) at large values of $p_T(\gamma)$. This enhancement factor arises from the kinematic region where the photon is produced at large transverse momentum and recoils against the quark, which radiates a soft $W/Z$ almost collinear with the quark\rlap.\refmark{\FNR} The shape of the photon $p_T$ distribution is therefore significantly affected by higher order QCD corrections, and the corrections to the cross section ratios as a function of $p_T^{\rm min}(\gamma)$ depend strongly on the minimum photon $p_T$. Since ${\cal O}(\alpha_s)$ corrections result in a harder $p_T(\gamma)$ distribution, the corrections to the cross section ratios grow with $p_T^{\rm min}(\gamma)$. The shape of the $Z\gamma$ and the reconstructed $W\gamma$ invariant mass distribution, on the other hand, is only slightly affected by higher order QCD corrections. Away from the threshold region, the corrections to the cross section ratios as a function of $m_{\rm min}$ are approximately constant. {}From the discussion above it is clear that the size of the ${\cal O}(\alpha_s)$ QCD corrections to the cross section ratios of Eqs.~(1.2) -- (1.5) can be significantly reduced by vetoing hard jets in the central rapidity region. Requiring $$ {\rm no~jets~with}\hskip 1.cm p_T(j)>10~{\rm GeV}, \hskip 1.cm |\eta(j)|<2.5 \eqno (2.14) $$ in the event, we obtain the results shown by the dashed line (dotted and dash-dotted lines) in Fig.~7 (Fig.~8).
A ``zero-jet'' cut similar to that in Eq.~(2.14) has been imposed in the CDF measurement of the ratio of $W$ to $Z$ cross sections, ${\cal R}_\ell$\rlap,\refmark{\Rl} and the $W$ mass measurement\rlap.\refmark{\Wmass} Imposing the jet veto of Eq.~(2.14) reduces the corrections to the cross section ratios from higher order QCD effects to the few percent level in the $p_T^{\rm min}(\gamma)$ and $m_{\rm min}$ range studied. The results shown in Figs.~7 and~8 have been obtained for on-shell $W/Z$ bosons. No qualitative changes to these results are expected if decay correlations, finite $W/Z$ width effects, and photon exchange diagrams are taken into account. At present, a calculation of NLL QCD corrections to both $W^\pm\gamma$ and $Z\gamma$ production which fully takes these effects into account does not exist. As mentioned before, ${\cal R}_{\gamma ,\nu}$ is approximately proportional to ${\cal R}_{\gamma ,\ell}$ for the cuts imposed [see Eq.~(2.11)]. The results shown in Figs.~4a, 6c, and~7a therefore apply also to ${\cal R}_{\gamma ,\nu}$. \chapter{Measuring Three Vector Boson Couplings in Cross Section Ratios} \section{${\cal R}_{\gamma ,\ell}$, ${\cal R}_{\gamma ,\nu}$, and the Standard Model Radiation Zero} In Section~2.2 we have seen that the strong increase of the ratios of $Z\gamma$ to $W^\pm\gamma$ cross sections, ${\cal R}_{\gamma ,\ell}$ and ${\cal R}_{\gamma ,\nu}$, as a function of the minimum transverse momentum of the photon can be traced to the radiation zero which is present in the SM $q\bar q'\rightarrow W\gamma$ differential cross section. In the last Section we have shown that ${\cal R}_{\gamma ,\ell}$ and ${\cal R}_{\gamma ,\nu}$ are quite insensitive to changes in the parametrization of the parton structure functions. Furthermore, at tree level the two ratios vary little with a change of the factorization scale $Q^2$.
Finally, when a central jet veto is imposed, the ${\cal O}(\alpha_s)$ QCD corrections change ${\cal R}_{\gamma ,\ell}$ and ${\cal R}_{\gamma ,\nu}$ by only a few percent. The steep rise of ${\cal R}_{\gamma ,\ell}$ and ${\cal R}_{\gamma ,\nu}$ with $p_T^{\rm min}(\gamma)$ in the framework of the SM as a signal of the radiation zero, combined with small systematic and theoretical uncertainties, makes these quantities excellent tools to probe the three vector boson vertices. As we shall see below, anomalous $WW\gamma$ couplings tend to decrease the two ratios, in particular at large $p_T^{\rm min}(\gamma)$. Non-standard $ZZ\gamma$ and $Z\gamma\gamma$ couplings, on the other hand, lead to an increase of ${\cal R}_{\gamma ,\ell}$ and ${\cal R}_{\gamma ,\nu}$ to values much larger than predicted by the~SM. In contrast to other quantities which are sensitive to the radiation zero, ${\cal R}_{\gamma ,\ell}$ is fairly simple to measure experimentally. The prospects for ${\cal R}_{\gamma ,\nu}$ depend on how well the $p\bar p\rightarrow\gamma p\hskip-6pt/_T$ signal can be isolated\rlap. \refmark{\BaBe} As we have mentioned before, the photon rapidity distribution, $d\sigma/dy^*_\gamma$, in the $W\gamma$ center of mass system is a quantity which is sensitive to the radiation zero. The measurement of $d\sigma/dy^*_\gamma$ is complicated by the fact that the neutrino is not observed, which leads to a twofold ambiguity in the reconstruction of the $W\gamma$ center of mass system\rlap.\refmark{\Cort} On an event by event basis it is impossible to decide which of the two solutions is the correct one. As a result, the radiation zero is partially washed out. On the other hand, the measurement of ${\cal R}_{\gamma ,\ell}$ and ${\cal R}_{\gamma ,\nu}$ versus $p_T^{\rm min}(\gamma)$ is relatively easy, and essentially involves only counting the number of $W^\pm\gamma$ and $Z\gamma$ events as a function of the minimum photon transverse momentum.
The ratio ${\cal R}_{\gamma ,\ell}$ may also be very useful in observing the radiation zero at the LHC [$pp$ collisions at $\sqrt{s}=15.4$~TeV (Ref.~\LHC)] and SSC ($pp$ collisions at $\sqrt{s}=40$~TeV). At these center of mass energies the higher order QCD corrections to $W^\pm\gamma$ production completely obscure the radiation zero in $d\sigma/dy^*_\gamma$ (Ref.~\JO), even when a rather tight central jet veto is imposed\rlap.\refmark{\BHO} The ${\cal O}(\alpha_s)$ QCD corrections to ${\cal R}_{\gamma ,\ell}$, on the other hand, are found to be well under control if one requires that no jets with $p_T(j)>50$~GeV and $|\eta(j)|<3$ are present in the event. The tree level prediction of ${\cal R}_{\gamma ,\ell}$ at hadron supercolliders as a function of the minimum photon transverse momentum is shown in Fig.~9. To simulate detector response, we have imposed the following set of cuts: $$\matrix{ p_T(\gamma)>100~{\rm GeV}, & \qquad & |\eta(\gamma)|<3, \cr p_T(\ell),~p\hskip-6pt/_T>20~{\rm GeV}, & \qquad & |\eta(\ell)|<3, \cr m_{\ell\ell}>50~{\rm GeV}, & \qquad & \Delta R(\ell\gamma)>0.7. } \eqno (3.1) $$ Energy mismeasurements in the detector were simulated by Gaussian smearing of the charged lepton and photon momenta using the expected resolution of the SDC detector\rlap. \refmark{\SDC} At the LHC and SSC, ${\cal R}_{\gamma ,\ell}$ grows with increasing values of $p_T^{\rm min}(\gamma)$, similar to the situation encountered for Tevatron energies. Due to the smaller center of mass energy available at the LHC, ${\cal R}_{\gamma ,\ell}$ rises faster than at SSC energies (solid line). For example, a minimum photon $p_T$ of 1~TeV at LHC energies corresponds to $p_T^{\rm min}(\gamma)\approx 2.6$~TeV at $\sqrt{s}=40$~TeV. For these values of $p_T^{\rm min}(\gamma)$, ${\cal R}_{\gamma ,\ell}$ is approximately equal for both energies. 
The ratio of $Zj$ to $W^\pm j$ cross sections, ${\cal R}_{j,\ell}$, on the other hand, stays approximately constant (${\cal R}_{j,\ell}\approx 0.12$) over the entire range of $p_T^{\rm min}(j)$ values considered (dotted and dash-dotted lines). Therefore, ${\cal R}_{\gamma ,\ell}$ reflects the radiation zero also at supercollider energies. Compared to ${\cal R}_{V\gamma}$ ($V=W,\, Z$), ${\cal R}_{\gamma ,\ell}$ and ${\cal R}_{\gamma ,\nu}$ have the advantage of reflecting the SM radiation zero. Moreover, at tree level, systematic and theoretical errors are significantly smaller for these cross section ratios than for ${\cal R}_{V\gamma}$. On the other hand, due to the large total $W$ and $Z$ cross section, statistical errors are expected to be considerably smaller in ${\cal R}_{V\gamma}$. Furthermore, cancelations between anomalous $WW\gamma$ and $ZZ\gamma/Z\gamma\gamma$ couplings may occur in ${\cal R}_{\gamma ,\ell}$ and ${\cal R}_{\gamma ,\nu}$ (see below). This is not possible in ${\cal R}_{V\gamma}=\sigma(V\gamma)/\sigma(V)$. The various $W^\pm\gamma$ and $Z\gamma$ cross section ratios listed in Eqs.~(1.2) -- (1.5) therefore yield complementary information on the structure of three vector boson vertices. \section{Probing Three Vector Boson Vertices via Cross Section Ratios} We shall now discuss the impact of non-standard three vector boson couplings on the $W^\pm\gamma$ and $Z\gamma$ cross section ratios in more detail. The couplings of $W$ and $Z$ bosons to quarks and leptons are assumed to be given by the SM. We shall also assume that there are no non-standard couplings of the $Z\gamma$ pair to two gluons\rlap. \refmark{\BHL} The $W$ and $Z$ bosons entering the Feynman diagrams for $q\bar q'\rightarrow W\gamma$ and $q\bar q\rightarrow Z\gamma$ couple to essentially massless fermions, which ensures that effectively $\partial_\mu V^\mu=0$ ($V=W,\, Z$). 
This together with gauge invariance of the on-shell photon restricts the tensor structure of the $WW\gamma$, $ZZ\gamma$, and $Z\gamma\gamma$ vertices, each of which is then described by just four free parameters. The $WW\gamma$ vertex function for the process $q\bar q'\rightarrow W^\pm\gamma$ is then given by\refmark{\unit} (see Fig.~10 for notation) $$ \eqalign{ \Gamma^{\alpha\beta\mu}_{W\gamma W}(q_1,q_2,P) = \mp{1\over 2} \biggl\{ & (1+\kappa)(q_1-q_2)^\mu g^{\alpha\beta} + {\lambda\over m_W^2}\, (q_1-q_2)^\mu\,(P^2g^{\alpha\beta}-2P^\alpha P^\beta) \cropen{10pt} & - 4\,P^\beta g^{\mu\alpha} +2\, (1+\kappa+\lambda)\,P^\alpha g^{\mu\beta} + 2\, (\tilde\kappa+\tilde\lambda)\,\epsilon^{\mu\alpha\beta\rho}q_{2\rho} \cropen{10pt} & +{\tilde\lambda\over m_W^2}\, (q_1-q_2)^\mu\epsilon^{\alpha\beta\rho\sigma}P_\rho\,(q_1-q_2)_\sigma\biggr\}~. \cropen{10pt}} \eqno (3.2) $$ The parameters $\kappa$ ($\tilde\kappa$) and $\lambda$ ($\tilde\lambda$) are related to the magnetic (electric) dipole moment $\mu_W$ ($d_W$) and the electric (magnetic) quadrupole moment $Q_W$ ($\widetilde Q_W$) of the $W$ boson by: $$\eqalignno{\mu_W &= {e\over 2m_W} \left( 1 + \kappa + \lambda \right)\>, &(3.3a) \cr Q_W &= -{e\over m^2_W} (\kappa -\lambda) \>, &(3.3b) \cr d_W &= {e\over 2m_W} (\tilde\kappa + \tilde \lambda) \>, &(3.3c)\cr \widetilde Q_W &= -{e\over m^2_W} (\tilde\kappa - \tilde\lambda) \>.&(3.3d) \cr}$$ While the $\kappa$ and $\lambda$ terms do not violate any discrete symmetries, the $\tilde\kappa$ and $\tilde\lambda$ terms are $P$ odd and $CP$ violating. Within the SM, at tree level, $$\eqalign{\kappa =1 \>,~~~~& \lambda =0 \>, \cropen{10pt} \tilde\kappa =0 \>,~~~~& \tilde\lambda =0 \>. \cr} \eqno(3.4) $$ The $CP$ violating couplings $\tilde\kappa$ and $\tilde\lambda$ are constrained by measurements of the electric dipole moment of the neutron to be smaller than $\sim 10^{-3}$ in magnitude\rlap.\refmark{\Hagi} Therefore, they will not be discussed subsequently.
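The relations (3.3a) -- (3.3d) are simple enough to evaluate directly. The following sketch, in units where $e=1$, reproduces them; the numerical value of $m_W$ is illustrative only and not taken from the text.

```python
def w_moments(kappa, lam, kappa_t=0.0, lam_t=0.0, m_w=80.2):
    """Static moments of the W boson from Eqs. (3.3a)-(3.3d), in units e = 1.
    kappa, lam are the CP-conserving couplings; kappa_t, lam_t the
    CP-violating ones (tilde kappa, tilde lambda).  m_w in GeV is illustrative."""
    mu_w = (1.0 + kappa + lam) / (2.0 * m_w)      # magnetic dipole moment, Eq. (3.3a)
    q_w = -(kappa - lam) / m_w**2                 # electric quadrupole moment, Eq. (3.3b)
    d_w = (kappa_t + lam_t) / (2.0 * m_w)         # electric dipole moment, Eq. (3.3c)
    qt_w = -(kappa_t - lam_t) / m_w**2            # magnetic quadrupole moment, Eq. (3.3d)
    return mu_w, q_w, d_w, qt_w
```

At the SM tree level point of Eq.~(3.4) ($\kappa=1$, $\lambda=\tilde\kappa=\tilde\lambda=0$) this gives $\mu_W=e/m_W$, $Q_W=-e/m_W^2$, and vanishing $CP$-odd moments.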
The $CP$ conserving couplings $\kappa$ and $\lambda$ have been measured recently by the UA2 Collaboration in the process $p\bar p\rightarrow e^\pm\nu\gamma X$ at the CERN $p\bar p$ collider\rlap: \refmark{\Pet} $$ \kappa=1\matrix{+2.6\crcr\noalign{\vskip -6pt} -2.2}~~({\rm for}~\lambda=0)\>,\hskip 1.cm\lambda=0\matrix{+1.7\crcr\noalign{\vskip -6pt} -1.8}~~({\rm for}~\kappa=1)\>, \eqno (3.5) $$ at the 68\% CL. The analysis of the 1988--89 CDF $W\gamma$ (and $Z\gamma$) data is still in progress\rlap.\refmark{\Benj} The most general anomalous $Z\gamma Z$ vertex function (see Fig.~10 for notation) is given by\refmark{\HHPZ} $$ \eqalign{ \Gamma^{\alpha\beta\mu}_{Z\gamma Z}(q_1,q_2,P) = {P^2-q_1^2\over m_Z^2}~ & \Biggl\{h_1^Z\left(q_2^\mu g^{\alpha\beta}-q_2^\alpha g^{\mu\beta}\right) \cropen{10pt} & + {h_2^Z\over m_Z^2}\,P^\alpha\left(\left(P\cdot q_2\right) g^{\mu\beta}-q_2^\mu P^\beta\right) \cropen{10pt} & + h_3^Z\epsilon^{\mu\alpha\beta\rho}q_{2\rho} \cropen{10pt} & + {h_4^Z\over m_Z^2}\,P^\alpha\epsilon^{\mu\beta\rho\sigma}P_\rho q_{2\sigma}\Biggr\}~,\cropen{10pt} } \eqno (3.6) $$ where $m_Z$ is the $Z$ boson mass. The most general $Z\gamma\gamma$ vertex function can be obtained from Eq.~(3.6) by the following replacements: $$ {P^2-q_1^2\over m_Z^2}~\rightarrow ~{P^2\over m_Z^2}\hskip 1.cm {\rm and}\hskip 1.cm h_i^Z\rightarrow h_i^\gamma,~~i=1\dots 4. \eqno (3.7) $$ Terms proportional to $P^\mu$ and $q_1^\alpha$ have been omitted in Eq.~(3.6) since they do not contribute to the cross section. The overall factor $(P^2-q_1^2)$ in Eq.~(3.6) is a result of Bose symmetry, whereas the factor $P^2$ in the $Z\gamma\gamma$ vertex function originates from electromagnetic gauge invariance. As a result, the $Z\gamma\gamma$ vertex function vanishes identically if both photons are on-shell\rlap.\refmark{\Yang} All $ZZ\gamma$ and $Z\gamma\gamma$ couplings are $C$ odd; $h_1^V$ and $h_2^V$ ($V=Z,\gamma$) violate $CP$.
Combinations of $h_3^V$ ($h_1^V$) and $h_4^V$ ($h_2^V$) correspond to the electric (magnetic) dipole and magnetic (electric) quadrupole transition moment of the $Z$ boson. At tree level in the SM, all couplings $h_i^V$ vanish. Presently, there are no limits on $h^V_i$ from hadron collider experiments. LEP~I data give only very loose constraints of $h^V_i\sim{\cal O}(10-100)$ (Ref.~\BaBe). Without loss of generality we have chosen the overall $WW\gamma$, $ZZ\gamma$, and $Z\gamma\gamma$ coupling constant to be, $$ g_{WW\gamma}=g_{ZZ\gamma}=g_{Z\gamma\gamma}=e\, , \eqno (3.8) $$ where $e$ is the charge of the proton. Tree level unitarity restricts the $WW\gamma$, $ZZ\gamma$, and $Z\gamma\gamma$ couplings uniquely to their SM values at asymptotically high energies\rlap. \refmark{\Jog} This implies that the $WW\gamma$ and $Z\gamma V$ couplings $a=\kappa-1,\dots,\tilde\lambda$ and $h_i^V$ have to be described by form factors $a(q_1^2,q_2^2,P^2)$ and $h_i^V(q_1^2,q_2^2,P^2)$ which vanish when $q_1^2$, $q_2^2$, or $P^2$ becomes large. Following Refs.~\BaBe\ and~\BZ, we shall use generalized dipole form factors of the form $$ \eqalignno{ a(m_W^2,0,\hat s) = & {a_0\over \left(1+\hat s/\Lambda^2\right)^n}~, & (3.9a)\cr \noalign{\hbox{and}} h_i^V(m_Z^2,0,\hat s) = & {h_{i0}^V\over \left(1+\hat s/\Lambda^2\right) ^n}~. & (3.9b)} $$ In order to guarantee unitarity, $n$ must satisfy $n>1/2$ for $a=\Delta\kappa=\kappa-1,\, \tilde\kappa$, $n>1$ for $a=\lambda,\,\tilde\lambda$ (Ref.~\BZ), and $n>3/2$ ($n>5/2$) for $h^V_{1,3}$ ($h^V_{2,4}$) (Ref.~\BaBe). In Eq.~(3.9) $\Lambda$ represents the scale at which new physics becomes important in the weak boson sector. In the following, we choose $\Lambda=750$~GeV, $n=2$ for $WW\gamma$ couplings, and $n=3$ ($n=4$) for $h^V_{1,3}$ ($h^V_{2,4}$). The influence of anomalous $WW\gamma$ couplings on the ratio of $Z\gamma$ to $W^\pm\gamma$ cross sections is shown in Fig.~11 for the cuts summarized in Eqs.~(2.4) -- (2.7).
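The dipole form factors of Eq.~(3.9) are easy to evaluate; with the parameters used in the text ($\Lambda=750$~GeV, $n=2$ for $WW\gamma$ couplings), a coupling is suppressed by $(1+\hat s/\Lambda^2)^n$, i.e. by a factor of $2^n$ at $\hat s=\Lambda^2$. A minimal sketch:

```python
def form_factor(a0, s_hat, Lam=750.0, n=2):
    """Generalized dipole form factor of Eq. (3.9):
    a(s_hat) = a0 / (1 + s_hat/Lam^2)^n, with s_hat in GeV^2 and Lam in GeV.
    Defaults correspond to the WW-gamma choice in the text (Lam = 750 GeV, n = 2)."""
    return a0 / (1.0 + s_hat / Lam**2) ** n
```

At low subprocess energies the form factor returns the low-energy coupling $a_0$ unchanged, while at large $\hat s$ it falls off fast enough to respect the unitarity conditions on $n$ quoted above.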
For presentational reasons we display the inverse cross section ratio $${\cal R}^{-1}_{\gamma ,\ell}={B(W\rightarrow\ell\nu)\cdot\sigma(W^\pm\gamma) \over B(Z\rightarrow\ell^+\ell^-)\cdot\sigma(Z\gamma)}\,. \eqno(3.10) $$ The solid curves show the SM result. The error bars indicate the statistical errors, corresponding to the 68.3\% confidence level (CL) interval, expected for an integrated luminosity of $\int\!{\cal L}dt=25$~pb$^{-1}$ and considering only $W\rightarrow e\nu$ and $Z\rightarrow e^+e^-$ decays. If the muon final states of the weak boson decays are taken into account as well, the statistical errors may be significantly reduced. Details depend strongly on the rapidity coverage for muons, which is quite different for CDF\refmark{\CDFm} and D$0\hskip-6pt/$\rlap.\refmark{\John} When estimating the errors of the cross section ratios, care must be taken for large values of $p_T^{\rm min}(\gamma)$ and $m_{\rm min}$ where the number of events in both the numerator and denominator can be very small. To estimate the statistical errors in these regions, we have used the method described in Ref.~\JR. For an integrated luminosity of $\int\!{\cal L}dt=100$~pb$^{-1}$, as foreseen by the end of 1994, the error bars in Fig.~11 and all subsequent figures are reduced by a factor~1.5 --~2. The dashed and dotted curves show ${\cal R}_{\gamma ,\ell}$ for $\Delta\kappa_0=2.6$ and $\lambda_0=1.7$, the present UA2 68\%~CL limits on the $CP$ conserving $WW\gamma$ couplings\rlap.\refmark{\Pet} Only one coupling is varied at a time. For the form factor parameters used ($n=2$ and $\Lambda=750$~GeV), the values of the two couplings are about a factor~5 and~4 below the unitarity bound, respectively\rlap.\refmark{\BZ} The anomalous $ZZ\gamma$ and $Z\gamma\gamma$ couplings, $h_{i0}^V$, are assumed to be zero in Fig.~11. All numerical results shown in this Section are obtained using the tree level calculations of Refs.~\BB\ and~\BaBe.
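For large event samples, the statistical error of a ratio of two independent counts can be propagated in the usual Gaussian way. The sketch below illustrates that simple approximation only; it does not reproduce the binomial treatment of Ref.~\JR\ that the text uses for the small samples at large $p_T^{\rm min}(\gamma)$ and $m_{\rm min}$.

```python
import math

def ratio_with_error(n_num, n_den):
    """Ratio of two independent Poisson event counts with Gaussian error
    propagation: dR/R = sqrt(1/N_num + 1/N_den).  Valid only for large
    counts; small samples require a binomial likelihood treatment instead."""
    r = n_num / n_den
    dr = r * math.sqrt(1.0 / n_num + 1.0 / n_den)
    return r, dr
```

For instance, 100 events over 400 events gives $R=0.25$ with a relative error of about 11\%, dominated by the smaller of the two counts.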
Since the anomalous terms in the helicity amplitudes grow like $\sqrt{\hat s}/m_W$ for $\Delta\kappa$ and $\hat s/m_W^2$ for $\lambda$, non-standard $WW\gamma$ couplings lead to an excess of events at large values of the photon transverse momentum and the $W\gamma$ invariant mass. As a result, ${\cal R}^{-1}_{\gamma ,\ell}$ is larger than in the SM if anomalous $WW\gamma$ couplings are present. Due to the radiation zero one expects ${\cal R}^{-1}_{\gamma ,\ell}$ to fall with increasing $p_T^{\rm min}(\gamma)$ in the SM (see Fig.~11a). For anomalous couplings, on the other hand, the inverse cross section ratio rises very rapidly with the minimum photon transverse momentum. Fig.~11 shows that it should be possible to measure ${\cal R}^{-1}_{\gamma ,\ell}$ for minimum photon transverse momenta up to 40~GeV, and values of $m_{\rm min}$ up to 200~GeV, with 25~pb$^{-1}$. Comparing Figs.~11a and~11b, it is obvious that ${\cal R}^{-1}_{\gamma ,\ell}$ as a function of $p_T^{\rm min}(\gamma)$ is more sensitive to anomalous couplings than the inverse cross section ratio versus $m_{\rm min}$. The reduced sensitivity in ${\cal R}^{-1}_{\gamma ,\ell}$ as a function of the minimum weak boson photon invariant mass is mostly due to the ambiguity in the reconstructed longitudinal neutrino momentum, $p_{\nu L}$. As before, we have used both solutions for $p_{\nu L}$ with equal weight in Fig.~11b. The sensitivity of ${\cal R}^{-1}_{\gamma ,\ell}$ versus $m_{\rm min}$ would clearly improve if one could discriminate between the two solutions on a statistical basis. Finally, Fig.~11 demonstrates that the UA2 limits on $\Delta\kappa$ and $\lambda$ can be considerably improved at the Tevatron with an integrated luminosity of 25~pb$^{-1}$. The impact of anomalous $ZZ\gamma$ couplings on ${\cal R}_{\gamma ,\ell}$ is shown in Fig.~12 for $h^Z_{30}=1$ and $h^Z_{40}=0.075$.
For the form factor parameters used ($n=3$ [$n=4$] for $h^Z_3$ [$h^Z_4$], and $\Lambda=750$~GeV), these values are approximately a factor~2 below the limit allowed by unitarity\rlap.\refmark{\BaBe} The $WW\gamma$ and $Z\gamma\gamma$ vertex functions are assumed to have their SM form in Fig.~12. For equal coupling strengths, the numerical results obtained for the $Z\gamma\gamma$ couplings $h_3^\gamma$ and $h_4^\gamma$ are about 20\% below those obtained for $h_3^Z$ and $h_4^Z$, in the region where anomalous coupling effects dominate over the SM cross section. Results for the $CP$ violating couplings $h_{1,2}^V$ ($V=Z,\,\gamma$) are virtually identical to those obtained for the same values of $h_{3, 4}^V$. Anomalous $ZZ\gamma$ and $Z\gamma\gamma$ couplings are seen to increase ${\cal R}_{\gamma ,\ell}$ dramatically, especially at large values of the minimum photon~$p_T$. In contrast to the situation for anomalous $WW\gamma$ couplings, the sensitivity of ${\cal R}_{\gamma ,\ell}$ to $ZZ\gamma/Z\gamma\gamma$ couplings is not degraded substantially if the ratio is considered as a function of $m_{\rm min}$ (see Fig.~12b). For an integrated luminosity of 25~pb$^{-1}$, the sensitivity of ${\cal R}_{\gamma ,\ell}$ to non-standard three vector boson vertices is limited mostly by statistical errors. From the results of Section~2.3 we estimate the systematic errors for ${\cal R}_{\gamma ,\ell}$ to be approximately 10\%. Due to the larger branching ratio of the decay $Z\rightarrow\bar\nu\nu$, the statistical error in the cross section ratio of Eq.~(1.3), ${\cal R}_{\gamma ,\nu}$, is reduced by a factor~1.4~--~1.7. The cross section ratio ${\cal R}_{\gamma ,\nu}$ and its inverse are shown in Fig.~13 as a function of the minimum photon transverse momentum for the cuts summarized in Eqs.~(2.4) and~(2.7).
The photon transverse momentum cut in Fig.~13 has been increased to $p_T(\gamma)>30$~GeV, in order to suppress backgrounds from $p\bar p\rightarrow\gamma j$, with the jet rapidity outside the range covered by the detector and thus ``faking'' missing transverse momentum, and two jet production where one of the jets is misidentified as a photon while the other disappears through the beam hole\rlap.\refmark{\BaBe} Comparing Fig.~13 with Figs.~11a and~12a, the increased sensitivity of ${\cal R}_{\gamma ,\nu}$ to anomalous three vector boson couplings is evident. So far, we have only varied either $WW\gamma$ or $ZZ\gamma/Z\gamma\gamma$ couplings. If the three boson vertices contributing to $W\gamma$ and $Z\gamma$ production simultaneously deviate from the SM, cancelations may occur between the contributions to $\sigma(W^\pm\gamma)$ and $\sigma(Z\gamma)$. Couplings corresponding to operators of different dimension in the effective Lagrangian have a different high energy behavior, and thus do not cancel at a substantial level in the cross section ratios. On the other hand, the effects of $WW\gamma$ and $ZZ\gamma/Z\gamma\gamma$ couplings of equal dimension may cancel almost completely in ${\cal R}_{\gamma ,\ell}$ and ${\cal R}_{\gamma ,\nu}$, if the couplings are similar in magnitude. This is illustrated in Fig.~14, where we show ${\cal R}^{-1}_{\gamma ,\nu}$ versus $p_T^{\rm min}(\gamma)$ for the SM (solid line), and two combinations of anomalous $WW\gamma$ and $ZZ\gamma$ couplings. The error bars in Fig.~14 display the statistical errors expected for $\int\!{\cal L}dt=25$~pb$^{-1}$ and $W\rightarrow e\nu$ decays. The dashed line shows the expected result for $\lambda_0=1.7$ and $h^Z_{30}=1.5$. Both couplings correspond to operators of dimension~6 in the effective Lagrangian. It is clear that, for these couplings and with the integrated luminosity expected from the current Tevatron run, the deviation from the SM cannot be seen. 
The dotted line in Fig.~14 shows ${\cal R}^{-1}_{\gamma ,\nu}$ for $\Delta\kappa_0=2.6$ and $h^Z_{40}=0.075$. $\Delta\kappa_0$ corresponds to a dimension~4 operator, whereas $h^Z_{40}$ originates from an operator of dimension~8 in the effective Lagrangian. At small minimum photon transverse momenta, the effects of the anomalous $WW\gamma$ coupling dominate, and the inverse cross section ratio is larger than expected in the SM. For larger values of $p_T^{\rm min}(\gamma)$, the influence of the higher dimensional coupling on the $Z\gamma$ cross section increases, and ${\cal R}^{-1}_{\gamma ,\nu}$ drops below the SM value. Only for $p_T^{\rm min}(\gamma)\approx 100$~GeV do the effects of the two non-standard contributions cancel. Although no substantial cancelations over an extended region of $p_T^{\rm min}(\gamma)$ occur between $\Delta\kappa$ and $h^Z_4$, the error bars in Fig.~14 indicate that it will be difficult to discriminate between the SM prediction and the dotted curve at a statistically significant level with the data expected from the current Tevatron run. Possible cancelations between anomalous $WW\gamma$ and $ZZ\gamma/Z\gamma\gamma$ couplings in ${\cal R}_{\gamma ,\ell}$ and ${\cal R}_{\gamma ,\nu}$ can be excluded through a measurement of the ratios ${\cal R}_{W\gamma}$ and ${\cal R}_{Z\gamma}$. Since the three vector boson vertices do not enter the denominator, ${\cal R}_{W\gamma}$ (${\cal R}_{Z\gamma}$) is sensitive only to $WW\gamma$ ($ZZ\gamma /Z\gamma\gamma$) couplings, and cancelations between the effects of non-standard $WW\gamma$ and $ZZ\gamma /Z\gamma\gamma$ couplings cannot occur in these cross section ratios. The SM result for ${\cal R}_{W\gamma}$ [${\cal R}_{Z\gamma}$] versus $p_T^{\rm min}(\gamma)$ is compared to $\sigma(W\gamma)/\sigma(W)$ [$\sigma(Z\gamma)/\sigma(Z)$] in the presence of anomalous $WW\gamma$ [$ZZ\gamma$] couplings in Fig.~15a [Fig.~15b].
The error bars indicate the statistical errors expected for 25~pb$^{-1}$, taking only the decays $W\rightarrow e\nu$ and $Z\rightarrow e^+e^-$ into account. Due to the large number of $W$ bosons expected, the statistical error of ${\cal R}_{W\gamma}$ is considerably smaller than that of ${\cal R}_{\gamma ,\ell}$ and ${\cal R}_{\gamma ,\nu}$. In the current Tevatron run it should be possible to measure ${\cal R}_{W\gamma}$ for minimum photon transverse momenta of up to $p_T^{\rm min}(\gamma)\approx 50$~GeV. The sensitivity of ${\cal R}_{V\gamma}$ ($V=W,\, Z$) to anomalous couplings is quite similar to that of ${\cal R}_{\gamma ,\nu}$. Similar to the situation encountered for ${\cal R}_{\gamma ,\ell}$, deviations from the SM predictions are less pronounced in ${\cal R}_{W\gamma}$ versus $m_{\rm min}$ than for the cross section ratio as a function of $p_T^{\rm min}(\gamma)$. \section{Sensitivity Limits} As we have demonstrated so far, the cross section ratios listed in Eqs.~(1.2) -- (1.5) are sensitive indicators of anomalous couplings. We now want to make this statement more quantitative by deriving those values of $\Delta\kappa_0$, $\lambda_0$, and $h^V_{i0}$ ($V=\gamma,\,Z$) which would give rise to a deviation from the SM at the level of one or two standard deviations in the various cross section ratios. We assume an integrated luminosity of 25~pb$^{-1}$ at the Tevatron and the cuts listed in Eqs.~(2.4) -- (2.7). For ${\cal R}_{\gamma ,\nu}$ the photon transverse momentum cut is increased to $p_T(\gamma)>30$~GeV, in order to reduce backgrounds from prompt photon and two jet production. Sensitivity limits are calculated for form factors of the form given in Eq.~(3.9) with $\Lambda=750$~GeV, $n=2$ for the $WW\gamma$ couplings $\Delta\kappa_0$ and $\lambda_0$, and $n=3$ ($n=4$) for $h^V_{10,30}$ ($h^V_{20,40}$) ($V=\gamma,\, Z$). 
Our analysis is based on cross section ratios obtained in the Born approximation and takes into account the expected theoretical and systematic uncertainties. Based on the results presented in Section~2.3, we roughly estimate the combined theoretical and systematic uncertainties from the parametrization of the parton distribution functions, the choice of the factorization scale $Q^2$, and higher order QCD corrections to be about 10\% for ${\cal R}_{\gamma ,\nu}$ and ${\cal R}_{\gamma ,\ell}$, 20\% for ${\cal R}_{W\gamma}$, and approximately 15\% for ${\cal R}_{Z\gamma}$ for the range of photon transverse momenta accessible in the current Tevatron run. In order to obtain these numbers we have added the various contributions in quadrature. Possible systematic errors originating from background processes are ignored. In estimating the uncertainties from higher order QCD corrections, we have assumed that the photon isolation cut of Eq.~(2.13) and the central jet veto of Eq.~(2.14) are imposed in addition to the cuts of Eqs.~(2.4) -- (2.7). {}From the discussion in Section~3.2 it is clear that, in most cases, the best sensitivity limits are obtained if the ratios are viewed as functions of the minimum photon transverse momentum. In the following we therefore derive bounds only for cross section ratios viewed as a function of $p_T^{\rm min}(\gamma)$. In the ratios of $Z\gamma$ to $W^\pm\gamma$ cross sections we vary either $WW\gamma$ or $ZZ\gamma$ couplings. However, interference effects between $\Delta\kappa_0$ and $\lambda_0$, and between the various $ZZ\gamma$ couplings $h^Z_{i0}$, are fully taken into account in our analysis. Interference effects between $ZZ\gamma$ and $Z\gamma\gamma$ couplings are expected to be small\refmark{\BaBe} and are ignored. Sensitivity limits for $h^\gamma_{i0}$ are nearly identical to those derived for $h^Z_{i0}$. Furthermore, bounds for the $CP$ violating couplings $h^Z_{10,20}$ virtually coincide with those for $h^Z_{30,40}$.
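Adding independent contributions in quadrature, as done above for the combined theoretical and systematic uncertainty, amounts to the following one-liner. The component numbers in the comment are purely illustrative, not the paper's individual error budget.

```python
import math

def combine_in_quadrature(*errors):
    """Combine independent fractional uncertainties in quadrature:
    sigma_total = sqrt(sum of sigma_i^2)."""
    return math.sqrt(sum(e * e for e in errors))

# illustrative only: three ~5-6% components combine to roughly 10%,
# e.g. combine_in_quadrature(0.06, 0.05, 0.06)
```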
We therefore concentrate on $\Delta\kappa_0$, $\lambda_0$, $h^Z_{30}$, and $h^Z_{40}$ in the following. To estimate the sensitivity bounds which can be achieved at the Tevatron, we use the maximum likelihood technique. The likelihood function is calculated using binomial probability distributions for the cross section ratios\rlap.\refmark{\JR} The minimum photon transverse momentum is increased in steps of at least 5~GeV, starting at $p_T^{\rm min}(\gamma)=10$~GeV for ${\cal R}_{V\gamma}$ and ${\cal R}_{\gamma ,\ell}$, and at $p_T^{\rm min}(\gamma)=30$~GeV for ${\cal R}_{\gamma ,\nu}$. For smaller steps in $p_T^{\rm min}(\gamma)$, the cross section ratios for different minimum photon transverse momenta are strongly correlated, resulting in overly optimistic sensitivity limits. The resulting bounds for $\Delta\kappa_0$, $\lambda_0$, and $h^Z_{30,40}$ are presented in Table~1. Due to the larger statistical errors in ${\cal R}_{\gamma ,\ell}$, the limits achievable from this ratio are about 20 -- 30\% weaker than those from the other cross section ratios. The 95\% CL bounds from ${\cal R}_{\gamma ,\nu}$ and ${\cal R}_{V\gamma}$ ($V=W,\, Z$) are quite similar. The larger statistical errors in ${\cal R}_{\gamma ,\nu}$ are almost completely compensated by the smaller systematic and theoretical errors. Table~1 clearly demonstrates the advantage of ${\cal R}_{\gamma ,\nu}$ due to the larger branching ratio of the $Z\rightarrow\bar\nu\nu$ decay. The limits on the $WW\gamma$ couplings $\Delta\kappa_0$ and $\lambda_0$ depend only slightly on the form factor scale, whereas the bounds on $h^Z_{30,40}$ can easily change by a factor~3 --~6 if $\Lambda$ is varied by a factor~2 (Ref.~\BaBe). At Tevatron energies, non-negligible interference effects are found between $\Delta\kappa$ and $\lambda$, and $h^Z_3$ and $h^Z_4$.
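The binomial likelihood underlying such a fit can be sketched as follows: conditioning on the total number of observed events, the number of $Z\gamma$ events out of the combined $Z\gamma$ plus $W\gamma$ sample is binomially distributed, with a success probability fixed by the predicted cross section ratio. This is an illustration of the general idea only, not the exact procedure of Ref.~\JR.

```python
import math

def binomial_log_likelihood(ratio, n_z, n_w):
    """Log-likelihood of observing n_z Z-gamma events out of n_z + n_w total,
    given a predicted cross section ratio R = sigma_Z / sigma_W.
    Conditioning on the total, n_z is binomial with p = R / (1 + R)."""
    p = ratio / (1.0 + ratio)
    n = n_z + n_w
    return (math.lgamma(n + 1) - math.lgamma(n_z + 1) - math.lgamma(n_w + 1)
            + n_z * math.log(p) + n_w * math.log(1.0 - p))
```

The likelihood is maximized at the observed ratio $R=n_z/n_w$; scanning it over the anomalous couplings (which determine the predicted $R$ at each $p_T^{\rm min}(\gamma)$ step) yields confidence intervals.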
As a result, different anomalous contributions to the helicity amplitudes may cancel partially, resulting in weaker bounds than if only one coupling at a time is allowed to deviate from its SM value. These effects are fully taken into account in Table~1. If only one coupling is varied at a time, the limits of Table~1 for $\Delta\kappa_0$ and $\lambda_0$ improve by 10 --~30\%. For example, one finds $$ \Delta\kappa_0=0\matrix{+0.9\crcr\noalign{\vskip -6pt} -0.7}~~({\rm for}~\lambda_0=0),~~{\rm and}~~\lambda_0=0\matrix{+0.28\crcr\noalign{\vskip -6pt} -0.29}~~({\rm for}~\kappa_0=1), \eqno (3.11) $$ at the $1\sigma$ level from ${\cal R}_{\gamma ,\nu}$. With $\int\!{\cal L}dt=25$~pb$^{-1}$, the present UA2 limit for $\kappa$ ($\lambda$) [see Eq.~(3.5)] thus may be improved by up to a factor~3~(5). For the form factor parameters used, the bounds for $h^Z_{30}$ and $h^Z_{40}$ in Table~1 improve by a factor 1.6 --~2 if only one coupling is varied at a time. The sensitivity to anomalous couplings stems from regions of phase space where the anomalous contributions to the cross sections are considerably larger than the SM expectation. As a result, the bounds scale essentially like $\left(\int\!{\cal L}dt\right)^{1/4}$. Therefore, increasing the integrated luminosity at the Tevatron to 100~pb$^{-1}$, as foreseen by the end of 1994, will improve the sensitivity limits of Table~1 by about a factor~1.4. Due to the smaller experimental, theoretical, and systematic uncertainties of the cross section ratios, the resulting bounds may be considerably better than those expected from analyzing the $p_T(\gamma)$ distribution\rlap.\refmark{\BB,\BaBe} The bounds listed in Table~1 have been obtained for a generic set of cuts [Eqs.~(2.4) -- (2.7)]. They also depend somewhat on the exact procedure used to extract the limits. For example, increasing $p_T^{\rm min}(\gamma)$ in steps of 30~GeV weakens the bounds by 20 -- 30\%.
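The quoted $\left(\int\!{\cal L}dt\right)^{1/4}$ scaling can be checked directly: going from 25~pb$^{-1}$ to 100~pb$^{-1}$ improves the bounds by $4^{1/4}=\sqrt{2}\approx 1.4$, the factor stated in the text. A one-line sketch:

```python
def bound_scaling(lumi_new, lumi_old):
    """Improvement of coupling bounds with integrated luminosity.  When the
    sensitivity comes from the high-pT tail, where anomalous terms dominate
    the cross section, bounds scale roughly like (luminosity)^(1/4)."""
    return (lumi_new / lumi_old) ** 0.25
```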
Our limits thus should be regarded as guidelines, illustrating the capabilities of CDF and D$0\hskip-6pt/$ in improving our knowledge of $WW\gamma$ and $ZZ\gamma /Z\gamma\gamma$ couplings within the immediate future. As we have mentioned before, for 25~pb$^{-1}$ the sensitivity of the cross section ratios to anomalous couplings is limited mostly by statistical errors. For this situation, a calculation of the ratios at tree level is completely sufficient. For larger integrated luminosities, the theoretical and systematic errors become more important in limiting the sensitivity bounds which can be achieved. These errors could be improved substantially if a full ${\cal O}(\alpha_s)$ calculation of the ratios for general $WW\gamma$ and $ZZ\gamma /Z\gamma\gamma$ couplings is carried out. This would in particular reduce the uncertainty originating from the choice of the factorization scale $Q^2$, which dominates the systematic and theoretical errors in ${\cal R}_{W\gamma}$ and ${\cal R}_{Z\gamma}$ at large $p_T^{\rm min}(\gamma)$. \chapter{Summary and Conclusions} In this paper we have studied the theoretical aspects of cross section ratios for the processes $p\bar p\rightarrow W^\pm\gamma$ and $p\bar p\rightarrow Z\gamma$ at Tevatron energies. Four different ratios can be formed, which are listed in Eqs.~(1.2) -- (1.5). Compared to direct measurements of cross sections, experimental, theoretical, and systematic errors are expected to be significantly reduced in ratios of cross sections. 
Our main results can be summarized as follows: \item{1)} The ratios ${\cal R}_{\gamma ,\ell}=B(Z\rightarrow\ell^+\ell^-)\cdot \sigma(Z\gamma)/\allowbreak B(W\rightarrow\ell\nu)\cdot\sigma(W^\pm\gamma)$ and ${\cal R}_{\gamma ,\nu}=B(Z\rightarrow\bar\nu\nu)\cdot\sigma(Z\gamma)/\allowbreak B(W\rightarrow\ell\nu)\cdot\sigma(W^\pm\gamma)$ as a function of the minimum photon transverse momentum, $p_T^{\rm min}(\gamma)$, increase sharply with $p_T^{\rm min}(\gamma)$ in the SM, reflecting the radiation zero which is present in the lowest order $q\bar q'\rightarrow W^\pm\gamma$ helicity amplitudes. \item{2)} The systematic and theoretical errors of ${\cal R}_{\gamma ,\ell}$ and ${\cal R}_{\gamma ,\nu}$ are significantly smaller than those of ${\cal R}_{V\gamma}=\sigma(V\gamma)/\sigma(V)$ ($V=W^\pm,Z$). Theoretical and systematic uncertainties are well under control for all cross section ratios. \item{3)} Higher order QCD corrections only partially cancel in the cross section ratios, in particular at large photon transverse momenta. The imperfect cancelations can be traced to a phase space region where a high $p_T$ photon is balanced by a quark jet which emits a $W$ or $Z$ boson almost collinear with the quark. By applying a modest central jet veto requirement [see Eq.~(2.14)], the residual QCD corrections cancel almost completely in the cross section ratios over a wide range of photon transverse momenta. \item{4)} The $W^\pm\gamma$ and $Z\gamma$ cross section ratios listed in Eqs.~(1.2) -- (1.5) constitute powerful new tools which can be used to set new limits on physics beyond the SM. We have studied in detail the impact of non-standard $WW\gamma$ and $ZZ\gamma/Z\gamma\gamma$ couplings on the cross section ratios and have derived sensitivity limits (see Table~1) based on an integrated luminosity of 25~pb$^{-1}$ expected from the current Tevatron run. For anomalous $WW\gamma$ couplings, these limits improve present hadron collider bounds up to a factor~3~--~5. 
The various cross section ratios yield complementary information on the three vector boson~couplings. The bounds listed in Table~1 should be compared with theoretical expectations, existing low energy limits, and constraints obtained from LEP~I data. In models based on chiral perturbation theory, for example, one typically expects deviations from the SM of ${\cal O}(10^{-2})$ (Ref.~\BDV). Although bounds can be extracted from low energy and high precision measurements at the $Z$ pole, there are ambiguities and model dependencies in the results\rlap.\refmark{\De - \HISZ} From loop contributions to $(g-2)_\mu$ one estimates\refmark{\muon} limits which are typically of ${\cal O}(1-10)$. No rigorous bounds on $WW\gamma$ couplings can be obtained from LEP~I data, if correlations between different contributions to the anomalous couplings are fully taken into account. Without serious cancelations among various one loop contributions, one finds\refmark{\HISZ} $$ |\Delta\kappa|,~|\lambda|\mathrel{\mathpalette\atversim<} 0.5-1.5 \eqno (4.1) $$ at the 90\% CL from present data on $S$, $T$, and $U$ (Ref.~\PT) [or, equivalently, $\epsilon_1$, $\epsilon_2$, and $\epsilon_3$ (Ref.~\Alt)]. The limits which can be obtained from data expected in the current Tevatron run are already competitive with the bounds of Eq.~(4.1). Constraints on $ZZ\gamma$ and $Z\gamma\gamma$ couplings from $S$, $T$, and $U$ have not been calculated so far. 
LEP~I data on radiative $Z$ decays provide only very little information on the structure of the $ZZ\gamma /Z\gamma\gamma$ vertex\rlap.\refmark{\BaBe} Significant improvements of the bounds derived in Table~1 can be expected if an integrated luminosity of 100~pb$^{-1}$ is accumulated at the Tevatron, as foreseen by the end of 1994, and from $W$ pair and $Z\gamma$ production at LEP~II\rlap.\refmark{\HHPZ,\Foxl} Finally, the LHC and SSC\rlap,\refmark{\BZ} and a linear $e^+e^-$ collider with $\sqrt{s}=500$~GeV (Refs.~\Boud,\GG) will enable a measurement of the $WW\gamma$ and $ZZ\gamma /Z\gamma\gamma$ couplings at the 1\% level. In view of our present poor knowledge of the self interactions of $W$ bosons, $Z$ bosons, and photons, the limits which can be obtained from a measurement of the $W^\pm\gamma$ and $Z\gamma$ cross section ratios with the data accumulated in the current Tevatron run will represent a major step forward towards a high precision measurement of the three vector boson vertices. \vskip 0.35in \ack We would like to thank F.~Halzen, S.~Keller, G.~Landsberg, and D.~Zeppenfeld for stimulating discussions, and encouragement. We are also grateful to H.~Baer for providing us with the FORTRAN code of Ref.~\BR. One of us (UB) would like to thank the Fermilab Theory Group, where this work was completed, for its warm hospitality. This research was supported in part by the UK Science and Engineering Research Council, and in part by the U.~S.~Department of Energy under Grant No.~DE-FG02-91ER40677 and Contract No.~DE-FG05-87ER40319. 
\vfil\break \refout \vfil\break \centerline{TABLE~1} \vskip 3.mm \noindent Sensitivities achievable at the $1\sigma$ and $2\sigma$ confidence levels (CL) for the anomalous $WW\gamma$ and $ZZ\gamma$ couplings $\Delta\kappa_0$, $\lambda_0$, $h_{30}^Z$, and $h_{40}^Z$ from the cross section ratios ${\cal R}_{\gamma ,\ell}$, ${\cal R}_{\gamma ,\nu}$, ${\cal R}_{W\gamma}$, and ${\cal R}_{Z\gamma}$, for an integrated luminosity of 25~pb$^{-1}$ at the Tevatron. The procedure used to extract the sensitivity bounds is described in the text. The limits for $\Delta\kappa_0$ ($h_{30}^Z$) apply for arbitrary values of $\lambda_0$ ($h_{40}^Z$) and vice versa. For the form factors we use Eq.~(3.9) with $\Lambda=750$~GeV, $n=2$ for $WW\gamma$ couplings, and $n=3$ ($n=4$) for $h_{30}^Z$ ($h_{40}^Z$), respectively. The $W$ and $Z$ decay channels into muons are not included in deriving the sensitivity limits. Anomalous $Z\gamma\gamma$ couplings are assumed to be zero. \vskip 0.5in \tablewidth=5.5in \def\vrule height 4.3ex depth 2.7ex width 0pt{\vrule height 4.3ex depth 2.7ex width 0pt} \begintable coupling | CL | ${\cal R}_{\gamma ,\ell}$ | ${\cal R}_{\gamma ,\nu}$ | ${\cal R}_{W\gamma}$ \crthick $\Delta\kappa_0$ | $2\sigma$ | $\matrix{+2.5 \crcr\noalign{\vskip -6pt} -2.0}$ | $\matrix{+1.7 \crcr\noalign{\vskip -6pt} -1.3}$ | $\matrix{+1.7 \crcr\noalign{\vskip -6pt} -1.3}$ \nr | $1\sigma$ | $\matrix{+1.8 \crcr\noalign{\vskip -6pt} -1.3}$ | $\matrix{+1.2 \crcr\noalign{\vskip -6pt} -0.9}$ | $\matrix{+1.5 \crcr\noalign{\vskip -6pt} -1.1}$ \cr $\lambda_0$ |$2\sigma$ | $\matrix{+0.84 \crcr\noalign{\vskip -6pt} -0.98}$ | $\matrix{+0.49 \crcr\noalign{\vskip -6pt} -0.57}$ | $\matrix{+0.52 \crcr\noalign{\vskip -6pt} -0.60}$ \nr | $1\sigma$ | $\matrix{+0.54 \crcr\noalign{\vskip -6pt} -0.69}$ | $\matrix{+0.32 \crcr\noalign{\vskip -6pt} -0.40}$ | $\matrix{+0.44 \crcr\noalign{\vskip -6pt} -0.55}$ \crthick coupling | CL | ${\cal R}_{\gamma ,\ell}$ | ${\cal R}_{\gamma ,\nu}$ | ${\cal R}_{Z\gamma}$ 
\crthick $h_{30}^Z$ | $2\sigma$ |$\pm 1.0$ | $\pm 0.8$ | $\pm 0.9$ \nr | $1\sigma$ |$\pm 0.7$ | $\pm 0.5$ | $\pm 0.7$ \cr $h_{40}^Z$ | $2\sigma$ | $\pm 0.16$ | $\pm 0.13$ | $\pm 0.14$ \nr | $1\sigma$ | $\pm 0.11$ | $\pm 0.09$ | $\pm 0.11$ \endtable \vfil\break \figout \vfil\break \bye
\section{Introduction} The formulation of a string theory for hadrons has been an attractive proposal ever since the phenomenological success of the Veneziano formula~\cite{Veneziano:1968yb}, even before the formulation of quantum chromodynamics (QCD). Despite the difficulties encountered in quantizing a fundamental string theory, the proposal to describe the long-distance dynamics of the strong interactions inside hadrons by a low-energy effective string~\cite{Luscherfr,Luscher:2002qv} has remained an alluring conjecture. String formation is realized in many strongly correlated systems and is not an exclusive property of QCD color tubes~\cite{PhysRevB.36.3583,PhysRevB.78.024510,2007arXiv0709.1042K,Nielsen197345,Lo:2005xt}. The renormalization group equations imply that the system flows towards a roughening phase where the string is no longer rigid and oscillates transverse to the classical world sheet, with predictable, measurable effects that can be verified in numerical simulations of lattice gauge theories (LGT) in the rough phase. In the leading Gaussian approximation of the NG action, the quantum fluctuations of the string bring forth a universal quantum correction to the linearly rising potential, well known as the L\"uscher term in the meson, and geometry-dependent L\"uscher-like terms~\cite{Jahn2004700,deForcrand:2005vv} in baryonic configurations. The width due to the quantum delocalization of the string grows logarithmically~\cite{Luscher:1980iy} as the two color sources are pulled apart. Logarithmic broadening is expected for the baryonic junction~\cite{PhysRevD.79.025022} as well. Precise lattice measurements of the $Q\bar{Q}$ potential in the $SU(3)$ gauge model are consistent with the L\"uscher subleading term for color-source separations commencing from distance $R = 0.5$ fm~\cite{Luscher:2002qv}.
This is typically the distance scale where the effects of the intrinsic thickness of the flux tube, $1/T_c$~\cite{Caselle:2012rp}, diminish and the effective description is expected to hold. Many gauge models have unambiguously identified the L\"uscher correction to the potential with unprecedented accuracy~\cite{Juge:2002br,HariDass2008273,Caselle:2016mqu,caselle-2002,Pennanen:1997qm,Brandt:2016xsp}. The string model predicts, in addition, a logarithmic broadening~\cite{Luscher:1980iy} of the width profile of the string delocalization. This has also been observed in lattice simulations of several different gauge groups~\cite{Caselle:1995fh,Bonati:2011nt,HASENBUSCH1994124,Caselle:2006dv,Bringoltz:2008nd,Athenodorou:2008cj,Juge:2002br,HariDass:2006pq,Giudice:2006hw,Luscher:2004ib,Pepe:2010na,Bicudo:2017uyy}. An overlapping spectrum of the string's excited states manifests itself in the high-temperature regime of lattice gauge models. The free string approximation implies a decrease in the slope of the $Q\bar{Q}$ potential or, in other words, a temperature-dependent effective string tension~\cite{PhysRevD.85.077501, Kac}. The leading-order correction to the mesonic potential turns into a logarithm of the Dedekind $\eta$ function, which encompasses the L\"uscher term as its zero-temperature ($T=0$) limit~\cite{Luscher:2002qv,Gao,Pisarski}. For large source separation distances the string's broadening exhibits a linear pattern as the deconfinement point is approached from below~\cite{allais,Gliozzi:2010zv,Caselle:2010zs,Gliozzi:2010jh}. Nevertheless, the simplified picture of the bosonic string derived from the leading-order formulation of the NG action poorly describes the numerical data at intermediate distances and high temperatures.
For instance, substantial deviations~\cite{PhysRevD.82.094503,Bakry:2010sp,Bakry:2011zz,Bakry:2012eq} from the free string behavior have been found in the lattice data at temperatures very close to the deconfinement point. A comparison with lattice Monte Carlo data showed the validity of the leading-order approximation at source separations larger than $R=0.9$ fm~\cite{PhysRevD.82.094503,Bakry:2010sp,Bakry:2012eq} for both the $Q\bar{Q}$ potential and the color-tube width profile. This region extends beyond the source separation distances at which the leading-order string model predictions are valid~\cite{Luscher:2002qv} at zero temperature. In the baryon~\cite{Bakry:2014gea,Bakry:2016uwt,Bakry:2011kga}, taking into account the length of the Y-string between any two quarks, we found a similar behavior~\cite{refId0}. The fact that the lattice data substantially deviate from the free string description at intermediate distances and high temperatures has motivated many numerical experiments to test the validity of higher-order, model-dependent corrections to the NG action~\cite{Caselle:2004jq,Caselle:2004er}. Even in the Nambu-Goto (NG) framework there is no reason to believe that all orders of the power expansion are relevant to the correct behavior of QCD strings~\cite{Giudice:2009di}. For example, a first-order term deviating from the universal behavior has been determined unambiguously in the 3D percolation model~\cite{Giudice:2009di}, and no numerical evidence indicating universal features of the corrections beyond the L\"uscher term has been encountered among the $Z(2)$, $SU(2)$ and $SU(3)$ confining gauge models~\cite{Caselle:2004jq}. Numerical simulations of different gauge models in different dimensions may thus end up describing the intermediate and long string behavior by different effective strings~\cite{Caselle:2004jq}. The pure $SU(3)$ Yang-Mills theory in four dimensions is the closest approximation to full QCD.
Even so, we lack a detailed understanding of the string behavior at high temperatures and intermediate distance scales. In this region the deviations from the free string behavior occur on scales that are relevant to full QCD before the string breaks~\cite{Bali:2005bg}. The nature of QCD strings at finite temperature can be very relevant to many descriptions of high-energy phenomena~\cite{Caselle:2015tza,GIDDINGS198955} such as mesonic spectroscopy~\cite{Bali:2013fpo,Kalashnikova:2002zz,Grach:2008ij}, glueballs~\cite{Caselle:2013qpa,Johnson:2000qz} and string fireballs~\cite{Kalaydzhyan:2014tfa}, for example. This calls for a discussion of the validity of higher-order string effects at temperature scales near the QCD critical point, which is the target of this report. The paper is organized as follows: In Section~II, we review the string models most relevant to QCD and compare the lattice data for the Casimir energy with different approximation schemes. In Section~III, the profile of the density distribution is compared to the mean-square width of the string fluctuations of both the Nambu-Goto (NG) and Polyakov-Kleinert (PK) strings. Concluding remarks are provided in the last section. \section{String phenomenology and lattice data} \subsection{String actions and Casimir energy} The linearly rising confining potential advocated the conjecture that the Yang-Mills (YM) vacuum admits the existence of a very thin string object~\cite{Luscherfr} transmitting the strongly interacting forces between the color sources. This intuition is consistent~\cite{Olesen:1985pv} with the dual superconductor picture~\cite{Mandelstam76, Bali1996, DiGiacomo:1999a, DiGiacomo:1999b, Carmona:2001ja, Caselle:2016mqu} of the QCD vacuum, in which the color fields are squeezed into a confining thin string, dual to the Abrikosov line, by virtue of the dual Meissner effect.
The formation of the string condensate spontaneously breaks down the translational and rotational symmetries of the YM vacuum and entails the manifestation of $(D-2)$ massless transverse Goldstone modes, in addition to their interactions. To establish an effective string description, a string action can be constructed from a derivative expansion of the collective string coordinates satisfying Poincar\'e and parity invariance. One particular form of this action is that of L\"uscher and Weisz~\cite{Luscherfr,Luscher:2002qv}, which encompasses built-in surface/boundary terms to account for the interaction of an open string with boundaries. The L\"uscher-Weisz (LW) effective action~\cite{Luscher:2004ib} up to four-derivative terms reads \begin{widetext} \begin{equation} S^{LW}[X]=\sigma A+\dfrac{\sigma}{2} \int d\zeta_{1} \int d\zeta_{2} \left(\dfrac{\partial X}{\partial \zeta_{\alpha}} \cdot \dfrac{\partial X}{\partial \zeta_{\alpha}}\right) +\sigma \int d\zeta_{1} \int d\zeta_{2} \left[\kappa_2\left(\dfrac{\partial X}{\partial \zeta_{\alpha}} \cdot \dfrac{\partial X}{\partial \zeta_{\alpha}}\right)^2 +\kappa_3\left(\dfrac{\partial X}{\partial \zeta_{\alpha}} \cdot \dfrac{\partial X}{\partial \zeta_{\beta}}\right)^2\right]+S_b. \label{LWaction} \end{equation} \end{widetext} Invariance under parity transformations keeps only terms with an even number of derivatives. The vector $X^{\mu}(\zeta_{1},\zeta_{2})$ maps the region ${\cal C}\subset \mathbb{R}^{2}$ into $\mathbb{R}^{4}$, and the couplings $\kappa_2$, $\kappa_3$ are effective low-energy parameters. The boundary term $S_b$ describes the interaction of the effective string with the Polyakov loops at the fixed ends of the string and is given by \begin{equation} S_{b}=\int d\zeta_0 \left[b_1\, \partial_2 X_i\cdot \partial_2 X^{i}+b_2\,\partial_2\partial_1 X_i\cdot\partial_2\partial_1X^{i}+\cdots\right].
\end{equation} Consistency with the open-closed string duality~\cite{Luscher:2004ib} implies a vanishing value of the first boundary coupling, $b_1=0$; the leading-order corrections due to the second boundary term with coupling $b_2$ appear at higher order than the four-derivative terms in the bulk. For the next-to-leading-order terms in $D$ dimensions, the open-closed duality~\cite{Luscher:2004ib} imposes a further constraint on the kinematically-dependent couplings, \begin{equation} (D-2)\kappa_2+\kappa_3=\left( \dfrac{D-4}{2\sigma} \right). \label{couplings1} \end{equation} Moreover, it has been shown~\cite{Billo:2012da} through a nonlinear Lorentz transformation of the string collective variables $X_i$~\cite{Aharony:2009gg} that the action is invariant under $SO(1,D-1)$. By this symmetry, the couplings of the four-derivative terms in the L\"uscher-Weisz (LW) action Eq.~\eqref{LWaction} are not arbitrary but are fixed in any dimension $D$ by \begin{equation} \kappa_2 + \kappa_3 = \dfrac{-1}{8\sigma}. \label{couplings2} \end{equation} Condition Eq.~\eqref{couplings2} implies that all the terms with only first derivatives in the effective string action Eq.~\eqref{LWaction} coincide with the corresponding ones in the derivative expansion of the Nambu-Goto action. The Nambu-Goto action is the simplest string action, proportional to the area of the world sheet, \begin{equation} S^{NG}[X] =\sigma \int d\zeta_{1} \int d\zeta_{2} \, \sqrt{g}, \label{NGaction1} \end{equation} where $g$ is the determinant of the two-dimensional induced metric on the world sheet embedded in the background $\mathbb{R}^{4}$. On the quantum level the Weyl invariance of the NG action is broken in four dimensions; however, the anomaly is known to vanish at large distances~\cite{Olesen:1985pv}. The physical gauge $X^{1}=\zeta_{1}, X^{4}=\zeta_{2}$ restricts the string fluctuations to the directions transverse to the world-sheet plane ${\cal C}$.
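As a quick consistency check (our own sketch, not part of the original text), the four-derivative couplings obtained by expanding the NG action, $\kappa_2=1/8$ and $\kappa_3=-1/4$ in units of the string tension, can be verified to satisfy both Eq.~\eqref{couplings1} and Eq.~\eqref{couplings2} in $D=4$:

```python
from fractions import Fraction

D = 4
# Four-derivative couplings from expanding the NG determinant (a sketch,
# in units where the string tension sigma = 1):
kappa2 = Fraction(1, 8)
kappa3 = Fraction(-1, 4)

# Lorentz-invariance constraint, Eq. (couplings2): kappa2 + kappa3 = -1/8
assert kappa2 + kappa3 == Fraction(-1, 8)
# open-closed duality constraint, Eq. (couplings1): (D-2)kappa2 + kappa3 = (D-4)/2
assert (D - 2) * kappa2 + kappa3 == Fraction(D - 4, 2)
print("NG couplings satisfy both constraints in D =", D)
```

In $D=4$ the duality constraint degenerates to zero, which the NG values reproduce identically.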
The action after gauge fixing reads \begin{align} S^{NG}[X]& = \sigma \,\int_{0}^{L}\, d\zeta_{1} \,\int_{0}^{R}d\zeta_{2} \,\sqrt{1+(\partial_{\zeta_{1}} {X}_{\perp})^{2}+(\partial_{\zeta_{2}} {X}_{\perp})^{2}}. \label{NGaction2} \end{align} The expansion of the NG action up to leading and next-to-leading order, the former coinciding with the Gaussian part of the LW action Eq.~\eqref{LWaction}, gives \begin{align} S^{NG}_{\ell o}[X]=\sigma A+\dfrac{\sigma}{2} \int d\zeta_{1} \int d\zeta_{2} \left(\dfrac{\partial X}{\partial \zeta_{\alpha}} \cdot \dfrac{\partial X}{\partial \zeta_{\alpha}}\right), \label{NGLO} \end{align} and \begin{align} S^{NG}_{n\ell o}[X]= \dfrac{\sigma}{8} \int d\zeta_{1} \int d\zeta_{2} \left[\left(\dfrac{\partial X}{\partial \zeta_{\alpha}} \cdot \dfrac{\partial X}{\partial \zeta_{\alpha}}\right)^2 -2\left(\dfrac{\partial X}{\partial \zeta_{\alpha}} \cdot \dfrac{\partial X}{\partial \zeta_{\beta}}\right)^2\right], \label{NGNLO} \end{align} respectively. A simple generalization of the Nambu-Goto string~\cite{Arvis:1983fp,Alvarez:1981kc,Olesen:1985pv} has been proposed by Polyakov~\cite{POLYAKOV19} and Kleinert~\cite{kleinert} to stabilize the NG action; it was first investigated in the context of fluid membranes. This is a bosonic string with a term proportional to the extrinsic curvature of the surface as the next operator after the NG action~\cite{kleinert,POLYAKOV19}. The action of the bosonic (Polyakov) string with the extrinsic curvature term reads \begin{equation} S^{PK}=\dfrac{\sigma}{2} \int d^{2}\zeta \sqrt{g}\, g^{\alpha\beta}\, \partial_\alpha X\cdot\partial_\beta X +S^{Ext}[X]. \label{PKaction} \end{equation} $S^{Ext}$ is defined as \begin{equation} S^{Ext}[X]=\alpha \int d^2\zeta \sqrt{g}\, {\cal K}^2, \label{Ext} \end{equation} with the extrinsic curvature ${\cal K}=\triangle X$, where \begin{equation} \triangle=\dfrac{1}{\sqrt{g}}\, \partial_\alpha \left[\sqrt{g}\, g^{\alpha\beta}\partial_\beta\right] \end{equation} is the Laplace-Beltrami operator on the world sheet.
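The quartic coefficients of the NG expansion can be checked numerically. The sketch below (our own illustration) compares the exact gauge-fixed integrand built from the full induced-metric determinant with its truncation at quadratic plus quartic order, for small transverse gradients in $D=4$ (two transverse components):

```python
import math
import random

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def sqrt_g(u1, u2):
    """Exact sqrt(det g) of the gauge-fixed NG world sheet; u1, u2 are the
    transverse gradients of X_perp along zeta_1, zeta_2."""
    g = (1 + dot(u1, u1)) * (1 + dot(u2, u2)) - dot(u1, u2) ** 2
    return math.sqrt(g)

def series(u1, u2):
    """1 + quadratic (free) term + quartic terms (1/8)[(tr)^2 - 2 sum]."""
    quad = 0.5 * (dot(u1, u1) + dot(u2, u2))
    tr = dot(u1, u1) + dot(u2, u2)                  # dX.dX traced over alpha
    sq = dot(u1, u1) ** 2 + dot(u2, u2) ** 2 + 2 * dot(u1, u2) ** 2
    return 1 + quad + (tr ** 2 - 2 * sq) / 8

random.seed(1)
eps = 1e-3                                          # small-gradient regime
for _ in range(100):
    u1 = [eps * random.uniform(-1, 1) for _ in range(2)]
    u2 = [eps * random.uniform(-1, 1) for _ in range(2)]
    # the neglected terms are O(eps^6), far below the quartic O(eps^4) pieces
    assert abs(sqrt_g(u1, u2) - series(u1, u2)) < 1e-14
```

The agreement to $O(\epsilon^6)$ confirms the relative weight $1/8$ and $-1/4$ of the two quartic structures.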
The term could also be considered as an additional contribution, consistent with Poincar\'e and parity invariance, to the LW action~\eqref{LWaction}. The extrinsic-curvature term filters out sharply curved string configurations. The rigidity parameter provides the relative weight between the term proportional to the surface area and the smoothing term in the effective string~\cite{Kleinert:1996ry,Ambjorn:2014rwa}. In non-abelian gauge theories this ratio remains constant when taking the continuum limit~\cite{Caselle:2014eka}. Rigid-string effects may manifest themselves in the IR region of $SU(N)$ non-abelian gauge theories and could perhaps account for some fine-structure deviations from the simple NG string. Before comparing with the numerical Yang-Mills lattice data, we review in the following the corresponding expression for the Casimir energy of each string action. Generally, the Casimir potential is extracted from the string partition function as \begin{equation} V(R,T)=-\dfrac{1}{T} \log(Z(R,T)). \label{Casmir} \end{equation} The partition function of the NG model in the physical gauge is a functional integral over all the world-sheet configurations swept by the string, \begin{equation} Z(R,T)= \int_{{\cal C}} [D\, X ] \,\exp(\,-S^{NG}( X )). \label{PI} \end{equation} With a periodic boundary condition along the time direction, of extent equal to the inverse temperature $L_{T}=\frac{1}{T}$, \begin{align} X (\zeta^{0}=0,\zeta^{1})= X (\zeta^{0}=L_T=\frac{1}{T},\zeta^{1}), \label{bc1} \end{align} and a Dirichlet boundary condition at the positions of the sources, \begin{align} X(\zeta^{0},\zeta^{1}=0)= X(\zeta^{0},\zeta^{1}=R)=0.
\label{bc2} \end{align} the path integral of Eq.~\eqref{PI} and Eq.~\eqref{Casmir} yields the static potential for the leading-order contribution Eq.~\eqref{NGLO} of the NG action, using the $\zeta$-function regularization scheme~\cite{PhysRevD.27.2944}, as \begin{equation} \label{LO} V_{\ell o}(R,T)= \sigma R+(D-2)T\, \ln \eta \left(i\tau \right)+\mu(T), \end{equation} \noindent where $\mu(T)$ is a renormalization parameter and $\eta$ is the Dedekind eta function defined according to \begin{equation} \eta(\tau)=q^{\frac{1}{24}} \prod_{n=1}^{\infty}(1-q^{n});\quad q=e^{\frac{-\pi L_T}{R}}, \end{equation} where $\tau=\frac{L_{T}}{2 R}$ is the modular parameter of the cylinder. The second term on the right-hand side encompasses the L\"uscher term of the interquark potential. This term signifies a universal quantum effect which is characteristic of the CFT in the infrared free-string limit and is independent of the interaction terms of the effective theory. Dietz and Filk~\cite{PhysRevD.27.2944} extracted the second model-independent correction~\cite{PhysRevLett.67.1681} to the Casimir energy from an explicit calculation at two-loop order, with the same regularization scheme, as \begin{widetext} \begin{equation} \label{NLO} V_{n\ell o}(R,T)= \sigma R+(D-2)T\, \ln \eta \left(i\tau \right)- T \ln \left(1-\dfrac{(D-2)\pi^{2}T}{1152 \sigma_{0}R^{3}}\left[2 E_4(\tau) +(D-4)E_{2}^{2}(\tau)\right] \right)+\mu(T). \end{equation} \end{widetext} \noindent $E_{2k}$ is the Eisenstein series, given by \begin{equation} E_{2k}(\tau)=1+(-1)^{k}\dfrac{4k}{B_{k}} \sum^{\infty}_{n=1} \dfrac{n^{2k-1}q^{n}}{1-q^{n}}. \end{equation} The temperature-dependent string tension is given by the slope of the leading linear term of the potential.
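A minimal numerical sketch of Eq.~\eqref{LO} (our own, in illustrative units $\sigma=1$): the Dedekind $\eta$ function is evaluated from its $q$-product, and in the zero-temperature limit the $\eta$ term reduces to the L\"uscher correction $-(D-2)\pi/(24R)$.

```python
import math

def log_eta(tau_im, terms=200):
    """log of the Dedekind eta(i*tau_im) via the q-product, q = exp(-2*pi*tau_im)."""
    logq = -2 * math.pi * tau_im
    q = math.exp(logq)      # underflows to 0.0 for large tau_im; the sum then vanishes
    return logq / 24 + sum(math.log1p(-q ** n) for n in range(1, terms + 1))

def V_lo(R, T, sigma=1.0, D=4, mu=0.0):
    """Leading-order potential, Eq. (LO): sigma*R + (D-2)*T*ln eta(i*tau) + mu,
    with the cylinder modular parameter tau = L_T/(2R) = 1/(2*R*T)."""
    return sigma * R + (D - 2) * T * log_eta(1.0 / (2 * R * T)) + mu

# T -> 0: the eta term reproduces the Luscher correction -(D-2)*pi/(24*R)
R = 1.0
luscher = -(4 - 2) * math.pi / (24 * R)
assert abs(V_lo(R, 1e-3) - 1.0 * R - luscher) < 1e-9
```

At higher temperatures the product terms supply the temperature-dependent corrections that deform the L\"uscher coefficient.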
From Eq.~\eqref{NLO} the string tension up to next-to-leading order is given by \begin{equation} \sigma(T)=\sigma_{0}-\dfrac{\pi(D-2)}{6}T^{2}-\dfrac{\pi^2(D-2)^2}{72 \sigma_{0}}T^{4}+O(T^6). \label{tensionNG} \end{equation} The coefficient of the next higher-order correction, of order $T^{6}$, can be deduced~\cite{Arvis:1983fp,Giudice:2009di} and leads to the string tension \begin{align} \sigma(T)&=\sigma_{0}-\dfrac{\pi(D-2)}{6}T^{2}-\dfrac{\pi^2(D-2)^{2}}{72 \sigma_{0}}T^{4}\\\notag &-\frac{(D-2)^{3}\pi^{3}T^{6}}{432\sigma_{0}^{2}}+O(T^8). \end{align} In addition to the consecutive expansion terms of the NG action, the boundary term $S_b$ in the L\"uscher-Weisz action encodes the interactions of the string with the boundaries. In Refs.~\cite{Aharony:2010cx,Billo:2012da} the first nonvanishing Lorentz-invariant boundary-term contribution has been calculated. The modification of the potential arising from this interaction, for Dirichlet boundary conditions, is given by \begin{equation} V_b=(D-2)b_2 \dfrac{\pi^{3} L_T}{60 R^{4}}E_{4}\left(q\right), \label{boundary} \end{equation} where $b_2$ is a fit parameter. Apart from the corrections of the area expansion of the world sheet in the NG action, the static potential can also be given as a function of geometrical characteristics such as the extrinsic curvature, Eq.~\eqref{PKaction}. The Casimir and free-energy contributions due to an extrinsic curvature term were evaluated in Refs.~\cite{German:1989vk,German:1991tc,Braaten:1987gq,Nesterenko:1991qp} for stiff strings. Employing $\zeta$-function regularization~\cite{Elizalde:1993af}, the finite-temperature contribution was calculated~\cite{Nesterenko:1997ku,Viswanathan:1988ad} to one loop. Recalling that the rigid string action is defined as an extrinsic curvature term added to the ordinary NG action, the quark-antiquark potential can be considered in conjunction with two subsequent orders of the NG perturbative expansion.
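The truncated series above can be compared with the closed NG (Arvis) form $\sigma(T)=\sigma_0\sqrt{1-(D-2)\pi T^{2}/(3\sigma_0)}$, from which the expansion coefficients follow. A short check (our own sketch, in units $\sigma_0=1$):

```python
import math

def sigma_series(T, sigma0=1.0, D=4):
    """NG string tension truncated at order T^6, as in the expansion above."""
    c = (D - 2) * math.pi
    return (sigma0 - c / 6 * T ** 2 - c ** 2 / (72 * sigma0) * T ** 4
            - c ** 3 / (432 * sigma0 ** 2) * T ** 6)

def sigma_exact(T, sigma0=1.0, D=4):
    """Closed NG (Arvis) form: sigma0 * sqrt(1 - (D-2)*pi*T^2 / (3*sigma0))."""
    return sigma0 * math.sqrt(1 - (D - 2) * math.pi * T ** 2 / (3 * sigma0))

# well below the NG vanishing point T_c = sqrt(3*sigma0/((D-2)*pi)) ~ 0.69
T = 0.05
assert abs(sigma_series(T) - sigma_exact(T)) < 1e-8
```

The residual at this temperature is of order $T^{8}$, confirming the $T^{6}$ coefficient term by term.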
In four dimensions the total static potential of the rigid string at leading order reads \begin{widetext} \begin{align} V_{\ell o}^{Stiff}(R,T)=\sigma R+ T\, \ln \eta \left(i\tau \right)+ T \sum^{\infty}_{n=0} \ln \left( 1-e^{-2R \sqrt{\Omega_n^{2} + \omega_0^{2}}}\right) - \frac{\pi RT^2}{6} + \frac{T}{ 4 } \ln\left(\frac{1}{2 T R}\right)+\mu(T), \label{StiffLO} \end{align} with $\Omega_n =2 \pi n T$. The above expression treats the oscillations as those of a two-dimensional scalar field of mass $\omega_0=\sigma_0/\alpha_{rig}^{2}$; in the massless limit~\cite{Nesterenko:1997ku} Eq.~\eqref{StiffLO} reduces to Eq.~\eqref{LO}. The potential of the stiff string including the next-to-leading-order NG contribution is \begin{eqnarray} V_{n\ell o}^{Stiff}(R,T)= V^{\ell o}_{Stiff}(R,T)-T \ln \left(1-\dfrac{(D-2)\pi^{2}T}{1152 \sigma_{o}R^{3}}\left[2 E_4(\tau)+(D-4)E_{2}^{2}(\tau)\right] \right). \label{StiffNLO} \end{eqnarray} \end{widetext} In the limit of short distances and low temperatures the coefficient of the L\"uscher term is doubled. The string tension of the rigid string is given in the large-$D$ limit in Ref.~\cite{German:1991tc}. One way to address the dependence of the string tension on the temperature is to consider a universal relation that utilizes an overlap formalism of the two-point correlators discussed in Ref.~\cite{Caselle:2011vk}. The partition function of a closed string with Dirichlet boundaries can be related to a tower of energy states~\cite{Luscher:2004ib},
\begin{equation} V_{Q\bar{Q}}=-T{\rm{ln}}\left( \sum_{n} c_{n}K_{0}(R E_n)\right)+\mu(T), \label{eq:SP_CaselleEn} \end{equation} where the Bessel functions are evaluated at the energy levels of the closed NG string~\cite{Caselle:2011vk,Luscher:2004ib}, \begin{align} K_{0}(RE_n)=\sqrt{\frac{\pi}{2E_{n}^{c}(\sigma,R,T)}}(1+\frac{4n^{2}-1}{8E_{n}^{c}(\sigma,R,T)}+\\\notag \frac{16n^{4}-40n^{2}+9}{128(E_{n}^{c})^{2}(\sigma,R,T)})e^{-E_{n}^{c}(\sigma,R,T)}, \label{eq:SP_Caselle_Kn} \end{align} with \begin{equation} E_{n}^{c}(\sigma,R,T)=\dfrac{\sigma R}{T}\sqrt{1+\frac{8\pi T^{2}}{\sigma}\left(n-\frac{1}{12} \right)}. \label{En} \end{equation} The parameterization of the static potential through this expression is very relevant to the large-distance behavior of the static potential. On the assumption that this behavior is dominated by the lowest state in the sum of Bessel functions, which is justified for $RT>1$, the static potential is \begin{equation} V_{Q\bar{Q}}=-T {\rm{ln}}\left( K_{0}(R E_0)\right)+\mu(T). \label{eq:SP_CaselleE0} \end{equation} In the following, we numerically measure the Polyakov-loop two-point correlators and explore to what extent each of the above string actions is a sufficiently good description of the potential between two static color sources. \subsection{Fit Analysis of the $Q\bar{Q}$ potential} At fixed temperature $T$, the Polyakov-loop correlator measures the free energy of a system of two static color charges coupled to a heat bath~\cite{polyakov:78}. Within the transfer matrix formalism~\cite{Luscher:2002qv} the two-point Polyakov-loop correlator is the partition function of the string. The Monte Carlo evaluation of the temperature-dependent quark-antiquark potential at each $R$ is obtained through the expectation value of the Polyakov-loop correlator, \begin{align} \label{POT} \mathcal{P}_{\rm{2Q}} =& \int d[U] \,P(0)\,P^{\dagger}(R)\, \mathrm{exp}(-S_{w}), \notag\\ =& \quad\mathrm{exp}(-V(R,T)/T).
\end{align} with the Polyakov loop defined as \begin{equation} P(\vec{r}_{i}) = \frac{1}{3}\mbox{Tr} \left[ \prod^{N_{t}}_{n_{t}=1}U_{\mu=4}(\vec{r}_{i},n_{t}) \right]. \end{equation} Making use of the space-time symmetries of the torus, the above correlator is evaluated at each point of the lattice and then averaged. We perform simulations on large lattices to gain high statistics in a gauge-independent manner~\cite{Bali}, and to reduce correlations across the boundaries. The two lattices employed in this investigation have a typical spatial size of $3.6^3$ $\rm{fm^3}$ with a lattice spacing $a=0.1$ fm. We chose to perform our analysis on lattices with temporal extents of $N_t = 8$ and $N_t = 10$ slices at a coupling value of $\beta = 6.00$. The two lattices correspond to temperatures $T/T_c =0.9$, just below the deconfinement point, and $T/T_c = 0.8$, near the end of the QCD plateau~\cite{Doi2005559}. \begin{figure*}[!hptb] \centering \includegraphics[scale=0.84]{QQpotentialT0908Caselle_fixedSigmaFlipErrBars_respad_uncpad_fitResults} \caption{The quark-antiquark $Q\bar{Q}$ potential measured at temperatures $T/T_c=0.8$ and $T/T_c=0.9$. The lines correspond to the potential according to Eq.~\eqref{eq:SP_CaselleE0} for the depicted fit ranges; the returned values of the string tension are listed in Table~\ref{T1}.}\label{CasellePot} \end{figure*} \begin{table}[!hpt] \begin{center} \begin{ruledtabular} \begin{tabular}{ccccccccccc} \multirow{2}{*}{} &Fit Range &\multirow{2}{*}{$\sigma a^{2}$} &\multirow{2}{*}{$\chi^{2}/N_{DOF}$}\\ &$n=R/a$ &&&\\ \hline \\ \multirow{7}{*}{\begin{turn}{90}\hspace{1cm}$V_{~T/T_{c}=0.8}$ \end{turn}} &7-12 &0.051855$\pm$$7.3921\times10^{-4}$ &2.7581/6\\ \multicolumn{1}{c}{} &8-12 &0.050386$\pm$$7.4579\times10^{-4}$ &0.8985/5 \\ \multicolumn{1}{c}{} &9-12 &0.048738$\pm$$7.0381\times10^{-4}$ &0.1842/4 \\ \hline \hline
\multirow{7}{*}{\begin{turn}{90} $V_{~T/T_{c}=0.9}$ \end{turn}} &7-12 &0.049810$\pm$$5.9853\times10^{-4}$ &28.0791/6\\ \multicolumn{1}{c}{} &8-12 &0.048975$\pm$$5.4340\times10^{-4}$ &8.8142/5 \\ \multicolumn{1}{c}{} &9-12 &0.048122$\pm$$4.7426\times10^{-4}$ &1.5495/4 \\ \end{tabular} \end{ruledtabular} \end{center} \caption{ The returned values of the string tension and the corresponding $\chi^{2}$ from the fit to Eq.~\eqref{eq:SP_CaselleE0} at $T/T_{c}=0.8$ and $T/T_{c}=0.9$. }\label{T1} \end{table} The gauge configurations were generated using the standard Wilson gauge action employing a pseudo-heatbath algorithm~\cite{Fabricius,Kennedy} updating the three $SU(2)$ subgroup elements~\cite{1982PhLB..119..387C}. Each update step/sweep consists of one heatbath and 5 micro-canonical reflections. The gauge configurations are thermalized following 2000 sweeps. The measurements are taken on 500 bins. Each bin consists of 4 measurements separated by 100 sweeps of updates. The correlator Eq.~\eqref{COR} is evaluated after analytically averaging the time links~\cite{Parisi}, \begin{align} \label{LI} \bar{U_t}=\frac{\int dU U e^{-Tr(Q\,U^{\dagger}+U\,Q^{\dagger}) }}{\int dU e^{-Tr(Q\,U^{\dagger}+U\,Q^{\dagger}) }}. \end{align} The temporal links are integrated out analytically by evaluating the equivalent contour integral of Eq.~\eqref{LI} as detailed in Ref.~\cite{1985PhLB15177D}. The lattice data of the $Q\bar{Q}$ potential are extracted from the two-point Polyakov correlator Eq.~\eqref{POT}, \begin{equation} V_{Q\bar{Q}}(R)=-T \log \langle P(x)P^{\dagger}(x+R)\rangle. \label{COR} \end{equation} We look first at the fit and parameterization behavior according to the overlap of the Bessel functions Eq.~\eqref{eq:SP_CaselleEn} for the $Q\bar{Q}$ potential.
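Because the fits below rely on the truncated expansion Eq.~\eqref{eq:SP_Caselle_Kn}, it is worth noting how accurate that truncation is. For the lowest state ($n=0$) it reduces to the standard large-argument asymptotic expansion of the modified Bessel function $K_{0}$, which can be checked against the integral representation $K_{0}(z)=\int_{0}^{\infty}e^{-z\cosh t}\,dt$. A minimal sketch (the sample argument $z=5$ and the tolerance are illustrative choices, not taken from the fits):

```python
import math

def k0_asymptotic(z):
    """Truncated large-z expansion, the n=0 case of Eq. (SP_Caselle_Kn):
    K_0(z) ~ sqrt(pi/(2z)) e^{-z} (1 - 1/(8z) + 9/(128 z^2))."""
    return math.sqrt(math.pi / (2.0 * z)) * math.exp(-z) * (
        1.0 - 1.0 / (8.0 * z) + 9.0 / (128.0 * z * z))

def k0_integral(z, t_max=12.0, steps=120000):
    """Direct midpoint-rule evaluation of K_0(z) = int_0^inf exp(-z cosh t) dt;
    the integrand decays double-exponentially, so a finite t_max suffices."""
    h = t_max / steps
    return sum(math.exp(-z * math.cosh((i + 0.5) * h)) for i in range(steps)) * h

z = 5.0  # illustrative argument; RT > 1 puts the physical case in this regime
exact = k0_integral(z)
approx = k0_asymptotic(z)
rel_err = abs(approx - exact) / exact
print(rel_err)  # the truncated expansion is accurate to well below 1% here
```

The rapid decay of the relative error with $z$ is what justifies keeping only the lowest state and a few expansion terms for $RT>1$.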
Fig.~\ref{CasellePot} shows the lattice data of the $Q\bar{Q}$ potential, normalized to the value retrieved at $R=1.2$, at both temperatures $T/T_c=0.8$ and $T/T_c=0.9$, together with the corresponding fits to the lowest-state overlap potential Eq.~\eqref{eq:SP_CaselleE0}. The values of $\chi^{2}_{dof}$ in Table~\ref{T1} indicate good fit behavior over all considered fit regions at $T/T_c=0.8$, with the string tension set as a free fit parameter. Nevertheless, the plots depict that the best fit is attained when considering the whole fit region. At the temperature $T/T_c=0.9$ a good fit is returned only at large distances, commencing from $R \geq 1.0$ fm (Table~\ref{T1}). However, the fits return the same value of the string tension as that obtained at the other temperature $T/T_c =0.8$ on the same fit interval. The returned value of the zero-temperature string tension, $\sigma_{0}=0.049$, is consistent with the standard value of the string tension $\sqrt{\sigma}=440$ MeV~\cite{PhysRevD.47.661}. As a consistency check of our lattice data, we reproduce the value of the string tension, taken as a fit parameter, reported in Refs.~\cite{PhysRevD.85.077501} and \cite{Kac} with the corresponding static-potential function~\cite{Gao,Luscher:2002qv} and fit domain. To compare with the string-model predictions, we fit the $Q\bar{Q}$ potential to that derived from the NG string, Eqs.~\eqref{LO} and \eqref{NLO}, for the exact expressions at leading and next-to-leading order, separately. Similarly, we set the string tension $\sigma\, a^{2}$ and the renormalization constant $\mu(T)$ as free fit parameters. Table~\ref{T2} lists the returned values of the string tension $\sigma\, a^{2}$ and $\chi_{\rm{dof}}^2$ for various source separations commencing from $R=0.4, 0.5, 0.6$ and $0.7$ fm and extending to $R=1.2$ fm. A large value of $\chi^2$ is returned for fits over color-source separations commencing from $R=0.4$ fm.
For separation distances $R\leq 0.4$ fm the NG string description shows increasingly significant deviations from the LGT data, owing to short-distance physics and the one-dimensional idealization of the NG string. In Ref.~\cite{Caselle:2012rp} the intrinsic thickness of the flux-tube has been discussed. Excluding the point $R=0.4$ fm dramatically decreases the returned value of $\chi^{2}$ for both the leading-order (LO) and next-to-leading-order (NLO) approximations, Eqs.~\eqref{LO} and \eqref{NLO}, respectively. The returned values of the string-tension parameter quickly stabilize, even upon the exclusion of further short-distance points $R=0.5$ fm and $R=0.6$ fm from the fit range. At this temperature, the string tension settles at a stable value of $\sigma\, a^{2}=0.0445$ measured in lattice units. \begin{figure}[hptb!] \includegraphics[scale=0.42]{QQpotentialT08_fitResults} \caption{ The quark-antiquark potential measured at temperature $T/T_c=0.8$; the solid and dashed lines correspond to fits to the LO and NLO string potentials of Eqs.~\eqref{LO} and \eqref{NLO}, respectively. }\label{POT08} \end{figure} \begin{figure*}[!hptb] \centering \includegraphics[scale=0.84]{QQpotentialT09_fixedSigmaFlipErrBars_respad_uncpad_fitResults} \caption{The quark-antiquark $Q\bar{Q}$ potential measured at temperature $T/T_c=0.9$; the left and right plots correspond to the fits to the LO and NLO perturbative expansions of the Nambu-Goto string, Eq.~\eqref{LO} and Eq.~\eqref{NLO}, respectively.}\label{POT09} \end{figure*} \begin{figure}[!hptb] \centering \subfigure[]{\includegraphics[scale=0.8]{ChiLO.eps}} \subfigure[]{\includegraphics[scale=0.8]{ChiNLO.eps}} \caption{ (a) The returned $\chi^{2}_{dof}$ versus the string tension $\sigma a^2$, scaled by a factor of $10^{2}$, from the fits of the $Q\bar{Q}$ potential to the leading-order approximation of the Nambu-Goto string Eq.~\eqref{LO} at $T/T_c=0.9$.
(b) Similar to (a); however, the fits are to the next-to-leading-order approximation Eq.~\eqref{NLO}. }\label{ChiNG} \end{figure} \begin{table}[!hpt] \begin{center} \begin{ruledtabular} \begin{tabular}{ccccccccccc} \multirow{2}{*}{$T/T_{c}=0.8$} &Fit Range &\multirow{2}{*}{$\sigma a^{2}$} &\multirow{2}{*}{$\chi^{2}/N_{DOF}$}\\ &$n=R/a$ &&&\\ \hline \multirow{7}{*}{\begin{turn}{90} $V_{\ell o}$ \end{turn}} &4-12 &0.043555$\pm$$2.9772\times10^{-4}$ &18.6977/9\\ \multicolumn{1}{c}{} &5-12 &0.044589$\pm$$3.8418\times10^{-4}$ &1.5966/8\\ \multicolumn{1}{c}{} &6-12 &0.044572$\pm$$4.6472\times10^{-4}$ &1.5950/7\\ \multicolumn{1}{c}{} &7-12 &0.044102$\pm$$5.3562\times10^{-4}$ &1.0899/6\\ \multicolumn{1}{c}{} &8-12 &0.043221$\pm$$5.8316\times10^{-4}$ &0.4754/5\\ \hline \hline \multirow{7}{*}{\begin{turn}{90}$V_{n\ell o}$ \end{turn}} &4-12 &0.042235$\pm$$3.3247\times10^{-4}$ &120.703/9\\ \multicolumn{1}{c}{} &5-12 &0.044899$\pm$$3.3302\times10^{-4}$ &2.9297/8\\ \multicolumn{1}{c}{} &6-12 &0.045387$\pm$$4.3017\times10^{-4}$ &1.2004/7\\ \multicolumn{1}{c}{} &7-12 &0.045104$\pm$$5.1431\times10^{-4}$ &1.0079/6\\ \multicolumn{1}{c}{} &8-12 &0.044295$\pm$$5.6790\times10^{-4}$ &0.4679/5\\ \end{tabular} \end{ruledtabular} \end{center} \caption{ The returned values of the string tension and the corresponding $\chi^{2}$; the table compares the values for fits to the leading-order (LO) Eq.~\eqref{LO} and next-to-leading-order (NLO) Eq.~\eqref{NLO} approximations. } \label{T2} \end{table} Considering the fit of the same $Q\bar{Q}$ potential data to the two-loop expression of the NG string Eq.~\eqref{NLO}, the value of $\chi_{\rm{dof}}^2$ does not reveal significant differences for source-separation distances commencing from $R=0.5$ fm. As shown in Table~\ref{T2}, for different fit ranges with a fixed end point at $R=1.2$ fm, the fits return acceptable values of $\chi^2$ with subtle changes in the free fit parameter $\sigma a^{2}$.
\begin{figure}[!hptb] \centering \includegraphics[scale=0.92]{StringTension.eps} \caption{The temperature dependence of the string tension for the Nambu-Goto string action at LO, NLO and NNLO perturbative expansion. The dashed lines correspond to $\sigma_{0}a^{2}=0.044$ and the solid line corresponds to $\sigma_{0}a^{2}=0.039$.}\label{TensionT} \end{figure} The numerical data for the $Q\bar{Q}$ potential match both the free leading-order NG string Eq.~\eqref{LO} and the NLO self-interacting form of Eq.~\eqref{NLO}. Approximately the same value of the string tension is retrieved for fit domains involving short to large $Q\bar{Q}$ separation distances. The inclusion of the fourth-derivative term of the NG/LW action appears, therefore, neither to alter the parameterization behavior nor to indicate significant changes in the value of the string tension shown in Table~\ref{T2}. The absence of a mismatch between Eq.~\eqref{NLO} and the numerical data at this temperature scale points to the minor role of the higher-order modes at the end of the QCD plateau, $T/T_{c}=0.8$. The fading of thermal effects, together with a flat plateau region at this temperature, is present as well in the string-tension measurements~\cite{PhysRevD.85.077501} and the more recent Monte Carlo measurements~\cite{Koma:2017hcm}, which reproduce the same value of $0.044$ for the zero-temperature string tension.
\begin{table}[!hpt] \begin{center} \begin{ruledtabular} \begin{tabular}{cc|cc|ccccccc} \multirow{2}{*}{$T/T_{c}=0.9$} &String &\multicolumn{2}{c}{Fit Range[5-12]} &\multicolumn{2}{c}{Fit Range[8-12]}\\ &tension &\multicolumn{1}{c}{$\chi^{2}$} &\multicolumn{1}{c}{$\mu$} &\multicolumn{1}{c}{$\chi^{2}$} &\multicolumn{1}{c}{$\mu$}\\ \hline \multirow{10}{*}{\begin{turn}{90}$V_{{\ell o}}$ \end{turn}} &0.045 & 10624.1 &-0.446 & 1056.66 &-0.451376\\ \multicolumn{1}{c}{} &0.044 & 7561.95 &-0.438 & 822.14 &-0.442819\\ \multicolumn{1}{c}{} &0.043 & 5027.01 &-0.431 & 617.16 &-0.434261\\ \multicolumn{1}{c}{} &0.042 & 3019.32 &-0.423 & 441.72 &-0.425704\\ \multicolumn{1}{c}{} &0.041 & 1538.88 &-0.415 & 295.8 &-0.417147\\ \multicolumn{1}{c}{} &0.040 & 585.7 &-0.407 & 179.44 &-0.40859 \\ \multicolumn{1}{c}{} &0.039 & 159.77 &-0.400 & 92.60 &-0.400033\\ \multicolumn{1}{c}{} &0.038 & 261.09 &-0.392 & 35.3 &-0.391476\\ \multicolumn{1}{c}{} &0.037 & 890 &-0.384 & 7.53 &-0.382919\\ \multicolumn{1}{c}{} &0.036 & 2045 &-0.376 & 9.29 &-0.374361\\ \multicolumn{1}{c}{} &0.035 & 3728 &-0.376 & 40.59 &-0.362\\ \hline \hline \multirow{10}{*}{\begin{turn}{90}$V_{{n\ell o}}$ \end{turn}} &0.045 &4267.65 &-0.425272 &543.13 &-0.432\\ \multicolumn{1}{c}{} &0.044 &2367.67 &-0.41708 &373.12 &-0.422\\ \multicolumn{1}{c}{} &0.043 &1041.34 &-0.408868 &235.11 &-0.412\\ \multicolumn{1}{c}{} &0.042 &291.52 &-0.400637 &129.24 &-0.402\\ \multicolumn{1}{c}{} &0.041 &121.28 &-0.392385 &55.67 &-0.392\\ \multicolumn{1}{c}{} &0.040 &533.95 &-0.384109 &14.57 &-0.382\\ \multicolumn{1}{c}{} &0.039 &1533.16 &-0.375809 &6.12 &-0.371\\ \multicolumn{1}{c}{} &0.038 &3122.85 &-0.367482 &30.5 &-0.361\\ \multicolumn{1}{c}{} &0.037 &5307.33 &-0.367482 &87.99 &-0.351\\ \end{tabular} \end{ruledtabular} \end{center} \caption{ The $\chi^{2}$ values returned from fits for each corresponding value of the string tension; the table compares the values for fits to the leading-order (LO) Eq.~\eqref{LO} and next-to-leading-order (NLO)
Eq.~\eqref{NLO}. } \label{TableT09NG} \end{table} Thermal effects are more noticeable in the present SU(3) Yang-Mills model~\cite{PhysRevD.85.077501,Doi2005559} when the temperature is raised to $T/T_{c}=0.9$, close to the critical point. The lattice data corresponding to the measured $Q\bar{Q}$ potential are depicted in Fig.~\ref{POT09}. We follow a different scheme to disclose the fit behavior of the lattice data at this temperature scale with respect to the LO and NLO approximations. We systematically inspect the returned values of $\chi^{2}$ for an interval of selected values of the string tension $\sigma a^{2} \in [0.035, 0.046]$. The residuals and normalization constant $\mu(T)$ for the corresponding $\sigma a^{2}$ are listed in Table~\ref{TableT09NG}. The fits to the $Q\bar{Q}$ potential are performed over two fit ranges. For the next-to-leading approximation, a gradual decrease of the string-tension parameter from 0.045 to 0.041 dramatically reduces the values of $\chi^{2}$ listed in Table~\ref{TableT09NG}, until a minimum is reached at $\sigma a^{2}=0.041$ for the fit interval $R\in[0.5,1.2]$ fm. Even so, excluding points at short distance, i.e., considering the fit interval $R \in [0.9,1.2]$ fm, results in a smaller value of $\chi^{2}$ with a minimum shifted to $\sigma_{0} a^{2}=0.039$, as depicted in Fig.~\ref{ChiNG}. The fit of the numerical data to the leading-order approximation Eq.~\eqref{LO} produces a similar reduction in the residuals upon excluding short-distance points. The fits return a minimum of $\chi^{2}$ at $\sigma a^{2}=0.039$ on $R \in [0.5,1.2]$ fm and $\sigma_{0} a^{2}=0.037$ on $R \in [0.9,1.2]$ fm. However, the values of $\chi^{2}$ are substantially higher than the corresponding values returned for the next-to-leading approximation Eq.~\eqref{NLO}.
In spite of some improvement upon extending the fits to include the string's self-interactions beyond the Gaussian approximation, the inclusion of the NLO terms does not provide an acceptable fit of the potential data. Nevertheless, the higher-order terms in the free energy provide fine corrections to the free parameter $\sigma_{0}\, a^{2}$, interpreted as the zero-temperature string tension, i.e., the value returned from fits at $T/T_c=0.8$ or measured at zero temperature~\cite{Koma:2017hcm}. A possible role played by even higher-order terms of the power expansion of the NG string action may be discussed in the context of the string-tension dependence on the temperature. Equation~\eqref{tensionNG} sets out the perturbative temperature dependence of the string tension at both the fourth and sixth powers of the temperature. In Fig.~\ref{TensionT} each theoretical curve corresponds to the respective order in the NG power expansion. A single data point fixes the string-tension curve corresponding to each temperature. A correct string-tension behavior versus the temperature would entail that the other data point falls on the same line. The values of $\sigma a^{2}$ measured at $T/T_c=0.8$ making use of the fits to the LO and NLO approximations are the same within the standard deviation of the measurements. We take this value of the string tension as a reference value for the zero-temperature string tension, $\sigma_0=0.044$, measured also in Ref.~\cite{Koma:2017hcm}. \begin{figure}[!hptb] \centering \subfigure[]{\includegraphics[scale=0.82]{Chibound.eps}} \caption{(a) The returned $\chi^{2}_{dof}$ versus the string tension $\sigma a^2$, scaled by a factor of $10^{2}$, from the fits of the $Q\bar{Q}$ potential to the Nambu-Goto string with boundary terms Eq.~\eqref{NGbound} at $T/T_c=0.9$.
(b) The corresponding fits to strings with the extrinsic-curvature term added to the leading-order and next-to-leading-order terms of the NG action, Eq.~\eqref{StiffLO} and Eq.~\eqref{StiffNLO}.}\label{Chibound} \end{figure} \begin{figure*}[!hptb] \centering \includegraphics[scale=0.84]{QQpotentialNesterenkoLONLO.eps} \caption{ The quark-antiquark $Q \bar{Q}$ potential at temperatures $T/T_c=0.8$ and $0.9$; the lines correspond to the fits to the stiff-string model, Eq.~\eqref{TStiffLO} and Eq.~\eqref{TStiffNLO}, at the depicted string tension.}\label{StiffPot} \end{figure*} \begin{table}[!hpt] \begin{center} \begin{ruledtabular} \begin{tabular}{cc|cc|ccccccc} \multirow{2}{*}{$T/T_{c}=0.9$} &String &\multicolumn{2}{c}{Fit Range[5-12]} &\multicolumn{2}{c}{Fit Range[7-12]}\\ &tension &\multicolumn{1}{c}{$\chi^{2}$} &\multicolumn{1}{c}{$b_2$} &\multicolumn{1}{c}{$\chi^{2}$} &\multicolumn{1}{c}{$b_2$}\\ \hline \multirow{10}{*}{\begin{turn}{90}\hspace{1cm}$V_{{n\ell o}}+V_{bound}$ \end{turn}} &0.043 & 717.74 & -1.89 & 332.49 & -40.92 \\ \multicolumn{1}{c}{} &0.042 & 261.90 & -0.57 & 169.42 & -21.15 \\ \multicolumn{1}{c}{} &0.041 & 70.49 & 0.75 & 63.98 & -1.34 \\ \multicolumn{1}{c}{} &0.040 & 145.07 & 2.07 & 16.45 & 18.51 \\ \multicolumn{1}{c}{} &0.0397& 249.74 & 2.60 & 13.53 & 24.8 \\ \multicolumn{1}{c}{} &0.039 & 487.30 & 3.40 & 27.19 & 38.40 \\ \multicolumn{1}{c}{} &0.038 & 1099.03 & 4.73 & 96.54 & 58.34 \\ \end{tabular} \end{ruledtabular} \end{center} \caption{The $\chi^{2}$ values returned from fits for each corresponding value of the string tension; the table compares the values for fits to the next-to-leading-order (NLO) static potential with boundary terms, Eq.~\eqref{NGbound}.
} \label{Tboundary} \end{table} \begin{table}[!hpt] \begin{center} \begin{ruledtabular} \begin{tabular}{cc|cc|ccccccc} \multirow{2}{*}{$T/T_{c}=0.9$} &String &\multicolumn{2}{c}{Fit Range[5-12]} &\multicolumn{2}{c}{Fit Range[7-12]}\\ &tension &\multicolumn{1}{c}{$\chi^{2}$} &\multicolumn{1}{c}{$\alpha$} &\multicolumn{1}{c}{$\chi^{2}$} &\multicolumn{1}{c}{$\alpha$}\\ \hline \multirow{10}{*}{\begin{turn}{90}$V_{{\ell o}}+V_{ Stiff}$ \end{turn}} &0.046 & 60.89 & 0.73 & 60.7 & 0.73 \\ \multicolumn{1}{c}{} &0.045 & 42.15 & 0.78 & 34.87 & 0.79 \\ \multicolumn{1}{c}{} &0.044 & 43.01 & 0.84 & 20.25 & 0.86 \\ \multicolumn{1}{c}{} &0.043 & 58.45 & 0.89 & 13.92 & 0.92 \\ \multicolumn{1}{c}{} &0.042 & 84.60 & 0.95 & 13.74 & 0.99 \\ \multicolumn{1}{c}{} &0.041 & 118.41 & 1.02 & 18.07 & 1.06 \\ \multicolumn{1}{c}{} &0.040 & 157.51 & 1.08 & 25.67 & 1.14 \\ \multicolumn{1}{c}{} &0.039 & 200.03 & 1.15 & 35.55 & 1.21 \\ \multicolumn{1}{c}{} &0.038 & 244.46 & 1.22 & 46.96 & 1.29 \\ \multicolumn{1}{c}{} &0.037 & 289.64 & 1.29 & 59.28 & 1.38 \\ \end{tabular} \end{ruledtabular} \end{center} \caption{ The $\chi^{2}$ values returned from fits for each corresponding value of the string tension; the table compares the values for fits to the leading-order (LO) approximation of the stiff string, Eq.~\eqref{TStiffLO}.
} \label{Tstiff} \end{table} \begin{table}[!hpt] \begin{center} \begin{ruledtabular} \begin{tabular}{cc|cc|ccccccc} \multirow{2}{*}{$T/T_{c}=0.9$} &String &\multicolumn{2}{c}{Fit Range[5-12]} &\multicolumn{2}{c}{}\\ &tension &\multicolumn{1}{c}{$\chi^{2}$} &\multicolumn{1}{c}{$\alpha$} &\multicolumn{1}{c}{$b$} &\multicolumn{1}{c}{$\mu$}\\ \hline \multirow{10}{*}{\begin{turn}{90}$V_{{n\ell o}}+V_{b}+V_{ Stiff}$ \end{turn}} &0.046 & 19.41 &0.92 & -2.67 &-0.393 \\ \multicolumn{1}{c}{} &0.045 & 11.47 &1.00 & -3.42 &-0.391 \\ \multicolumn{1}{c}{} &0.044 & 7.95 &1.08 & -4.09 &-0.388 \\ \multicolumn{1}{c}{} &0.0435& 7.46 &1.12 & -4.68 &-0.385 \\ \multicolumn{1}{c}{} &0.043 & 7.66 &1.16 & -5.21 &-0.382 \\ \multicolumn{1}{c}{} &0.042 & 9.70 &1.25 & -5.68 &-0.375 \\ \multicolumn{1}{c}{} &0.041 & 13.36 &1.35 & -6.11 &-0.367 \\ \multicolumn{1}{c}{} &0.040 & 18.12 &1.45 & -6.49 &-0.356 \\ \multicolumn{1}{c}{} &0.039 & 23.56 &1.56 & -6.49 &-0.344 \\ \multicolumn{1}{c}{} &0.038 & 29.34 &1.69 & -6.83 &-0.331 \\ \multicolumn{1}{c}{} &0.037 & 35.23 &1.83 & -7.13 &-0.315 \\ \end{tabular} \end{ruledtabular} \end{center} \caption{The $\chi^{2}$ values returned from fits for each corresponding value of the string tension; the table compares the values for fits to the stiff string with boundary terms, Eq.~\eqref{TStiffNLO}.} \label{TstiffNLO} \end{table} The lattice point at $T/T_c=0.8$ fixes the string-tension curve as depicted in Fig.~\ref{TensionT}. The measurement of the string tension using formula Eq.~\eqref{tensionNG} for the lattice data at $T/T_c=0.9$ produces a minimum of the residuals at $\sigma_0=0.041$ for the zero-temperature string tension. The corresponding curve is shown in Fig.~\ref{TensionT} as a solid line. The value of the string tension is $\sigma(T)a^{2}=0.0192051$ for $\sigma_0=0.039$ and $\sigma(T)a^{2}=0.0245952$ for $\sigma_0=0.044$ at the fourth power of the temperature.
However, only a very small correction is received from the term proportional to the sixth power of the temperature. At this order the string tension is $\sigma(T)a^{2}=0.0234638$ at $\sigma_0=0.044$ and $\sigma(T)a^{2}=0.017765$ for $\sigma_0=0.039$. \begin{figure}[!hptb] \centering \subfigure[]{\includegraphics[scale=0.82]{ChiStiffLO.eps}} \subfigure[]{\includegraphics[scale=0.82]{ChiStiffNLO.eps}} \caption{ (a) Plot of $\chi^{2}_{dof}$ versus the string tension $\sigma a^2$, scaled by a factor of $10^{2}$, from the fits of the $Q\bar{Q}$ potential to the Nambu-Goto string with boundary terms Eq.~\eqref{NGbound} at $T/T_c=0.9$. (b) The corresponding fits to strings with extrinsic curvature, Eq.~\eqref{TStiffNLO}.}\label{ChiboundnStiff} \end{figure} The deviations of the theoretical lines from the standard measured value at $\sigma_{0}a^{2}=0.044$ indicate that the model-dependent corrections received even from the sixth-derivative terms of the NG action do not provide a precise match to the correct behavior of the temperature-dependent string tension in the present four-dimensional Yang-Mills model. Effects such as the interaction of the string with the boundaries may be relevant to the discrepancy in the string description at intermediate distances and to the correct string-tension dependence on the temperature. The contribution to the Casimir energy due to the leading nonvanishing term $S_b$ in the L\"uscher-Weisz action Eq.~\eqref{LWaction} is given by Eq.~\eqref{boundary}.
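The thermally corrected string-tension values quoted above can be reproduced numerically. As a minimal sketch we assume the expansion behind Eq.~\eqref{tensionNG} takes the standard NG form $\sigma(T)=\sigma_{0}-\frac{\pi}{3}T^{2}-\frac{\pi^{2}}{18\,\sigma_{0}}T^{4}-\frac{\pi^{3}}{54\,\sigma_{0}^{2}}T^{6}$ in lattice units for $D=4$ (this closed form is an assumption inferred from the quoted numbers, not a transcription of Eq.~\eqref{tensionNG}), with $Ta=1/N_{t}=1/8$ at $T/T_{c}=0.9$:

```python
import math

def sigma_T4(sigma0, T):
    """String tension truncated at the fourth power of the temperature
    (assumed NG form; see lead-in)."""
    return sigma0 - math.pi * T**2 / 3.0 - math.pi**2 * T**4 / (18.0 * sigma0)

def sigma_T6(sigma0, T):
    """Same expansion, including the sixth-power term."""
    return sigma_T4(sigma0, T) - math.pi**3 * T**6 / (54.0 * sigma0**2)

T = 1.0 / 8.0  # temperature in lattice units at N_t = 8 (T/T_c = 0.9)
for s0 in (0.044, 0.039):
    print(s0, round(sigma_T4(s0, T), 7), round(sigma_T6(s0, T), 7))
# reproduces the quoted values:
#   sigma0 = 0.044: 0.0245952 (fourth power), 0.0234638 (sixth power)
#   sigma0 = 0.039: 0.0192051 (fourth power), 0.0177650 (sixth power)
```

That all four quoted numbers follow from one two-parameter formula illustrates how a single lattice point at a given $\sigma_{0}$ fixes the entire $\sigma(T)$ curve of Fig.~\ref{TensionT}.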
Since the leading nonvanishing boundary term of the L\"uscher-Weisz action Eq.~\eqref{LWaction} appears at the fourth order in the derivatives, Eq.~\eqref{boundary}, it is more suitable to discuss its effects in conjunction with the next-to-leading-order approximation of the NG action, Eq.~\eqref{NLO}. The quark-antiquark potential data at temperature $T/T_c=0.9$ are fit to the static potential \begin{equation} V_{Q\bar{Q}}=V_{n\ell o}+V_{b}, \label{NGbound} \end{equation} with $V_{n\ell o}$ and $V_{b}$ given by Eq.~\eqref{NLO} and Eq.~\eqref{boundary}, respectively. The returned values of $\chi^{2}$ for two fit intervals are listed in Table~\ref{Tboundary}. The boundary fit parameter $b_{2}$ is negative valued and is sensitive to the considered fit interval. The values of $\chi^{2}$ are still high when considering the fit interval $R \in [0.5,1.2]$ fm. Even so, nontrivial improvements in the values of $\chi^{2}$ are retrieved, as shown in Table~\ref{Tboundary} and Fig.~\ref{Chibound}, compared to those obtained by merely considering the NG action, Eqs.~\eqref{LO} and \eqref{NLO} (Table~\ref{TableT09NG}). Moreover, the fits to the static potential with the boundary term produce acceptable $\chi^{2}_{dof}$ values for fit intervals commencing from $R \in [0.7,1.2]$ fm. Despite the reductions in the values of $\chi^{2}_{dof}$, the consideration of the first boundary term does not significantly alter the value of the string tension at the minimum of $\chi^{2}$. As shown in Fig.~\ref{Chibound} and Table~\ref{Tboundary}, an acceptable value of $\chi^{2}$ returns a zero-temperature string tension $\sigma_{0}a^{2}=0.0397$ on the fit interval $[0.7,1.2]$ fm. The absence of a correct description of the $Q\bar{Q}$ potential data and of the thermodynamic behavior of the string tension on the basis of string models near the deconfinement point has been a long-standing issue.
Effects such as the rigidity of the QCD flux tube may become noticeable in the confined phase at high temperatures. In order to clearly appreciate the changes in the fits when stiffness effects are taken into account, we discuss the static potential modified by virtue of the rigidity in conjunction with both the leading and next-to-leading approximations to the NG action, separately. That is, in the fit to the $Q\bar{Q}$ potential data we consider Eqs.~\eqref{StiffLO}, \eqref{StiffNLO} and \eqref{boundary}, such that \begin{equation} V_{Q\bar{Q}}= V^{\ell o}_{stiff}, \label{TStiffLO} \end{equation} and \begin{equation} V_{Q \bar{Q}}= V^{n\ell o}_{stiff}+V_{b}. \label{TStiffNLO} \end{equation} We proceed to fit the data in a similar way by varying the string tension, leaving the rigidity factor $\alpha_{rig}$, which weighs the extrinsic-curvature term in the Polyakov-Kleinert action, and the ultraviolet cutoff $\mu$ as free fit parameters. Table~\ref{Tstiff} summarizes the values of $\chi^{2}$ obtained from fits to Eq.~\eqref{TStiffLO} and Eq.~\eqref{TStiffNLO} at the temperature $T/T_c=0.9$, close to the deconfinement point. We remark the following points: \begin{itemize} \item Significant improvement in the fit behavior is observed when considering stiffness effects together with both the leading-order Eq.~\eqref{TStiffLO} and the next-to-leading-order (NLO) Eq.~\eqref{TStiffNLO} approximations. \item The consideration of the stiffness correction with only the leading-order NG potential Eq.~\eqref{TStiffLO} does provide a smaller $\chi^{2}$ compared to the fits to the LW static potential (the NLO NG potential in addition to the leading nonvanishing boundary term), Eqs.~\eqref{NGbound} and \eqref{boundary}. In Fig.~\ref{ChiboundnStiff} the $\chi^{2}_{dof}$ curves show sharper minima around the corresponding values of the string tension.
\item The returned values of the rigidity parameter approach a stable value of $\alpha_{rig}=1.12$ with good $\chi^{2}$ for fits to Eq.~\eqref{TStiffNLO}, and flow towards this value for shorter intervals $R\in[0.8,1.2]$ fm for fits to Eq.~\eqref{TStiffLO}. \item The residuals are minimized at values of the string tension that are shifted from those obtained by considering merely the static potential of the NG action, for both the LO and NLO approximations. The fit to Eq.~\eqref{TStiffNLO} results in the value of the string tension produced at $T/T_{c}=0.8$ using all the approximation schemes, Eqs.~\eqref{LO}, \eqref{NLO}, \eqref{TStiffLO} and \eqref{TStiffNLO}. \item The above points, together with the comparison of the fit behavior of Eq.~\eqref{NGbound} and Eq.~\eqref{TStiffLO} with Eq.~\eqref{TStiffNLO}, indicate that the improvement in the fit obtained by allowing for possible stiffness of the flux tube is not merely due to acquiring additional adjustable parameters.\\ \end{itemize} At the temperature $T/T_c=0.8$ and a string-tension value of $\sigma_{0} a^{2}=0.044$, fits to Eq.~\eqref{TStiffNLO} return acceptable values of $\chi^2$ for the same value of the rigidity parameter $\alpha_{rig}=1.12$. This is consistent with the fact that at relatively lower temperatures the string's smooth fluctuations are dominant. The reduction in the residuals from the fits to Eq.~\eqref{TStiffNLO}, and the subsequent retrieval of the correct behavior of the string tension, suggest a string with suppressed sharp fluctuations, with higher-order effects such as self-interactions and interactions with the boundaries manifesting significantly in the QCD string at high temperature.
\newpage \section{The String Width Profile} \subsection{Width of Nambu-Goto string} The mean-square width of the string is defined as the second moment of the transverse fluctuations \begin{align} W^{2}(\zeta;\zeta^{0}) = & \quad \langle \, X^{2}(\zeta;\zeta^{0})\,\rangle \nonumber\\ = &\quad \dfrac{\int_{\mathcal{C}}\,[D\,X]\, X^2 \,\mathrm{exp}(-S^{NG}[X])}{\int_{\mathcal{C}}[D\,X] \, \mathrm{exp}(-S^{NG}[X])}, \label{StringWidth} \end{align} \noindent where $\zeta=(\zeta^{1},i\zeta^{0})$ is a complex parameterization of the world sheet, such that $\zeta^{1}\in [-R/2,R/2]$ and $\zeta^{0} \in [-L_T/2,L_T/2]$. L\"uscher, M\"unster and Weisz~\cite{Luscher:1980iy} showed long ago the famed property of the logarithmic divergence of the mean-square width of the free NG string at the middle plane in the zero-temperature limit, \begin{equation} \label{Wid} W^{2} \sim \frac{1}{\pi\sigma}\log(\dfrac{R}{R_{0}}), \end{equation} \noindent where $R_{0}$ is an ultraviolet scale. Considering the NG action in the leading-order limit, Caselle et al.~\cite{Caselle:1995fh} employed the point-split technique to regularize the divergence of the quadratic operator~\eqref{StringWidth}. The expectation value~\cite{allais,Gliozzi:2010zv} of the mean-square width corresponds to the Green function correlator of the free bosonic string theory in two dimensions.
In $D$ dimensions and for the cylindrical boundary conditions Eq.~\eqref{bc1} and Eq.~\eqref{bc2} the mean-square width reads \begin{equation} \label{WidthLO} W^{2}_{{\ell o}}(\zeta,\tau) = \frac{D-2}{2\pi\sigma}\log\left(\frac{R}{R_{0}(\zeta)}\right)+\frac{D-2}{2\pi\sigma}\log\left| \,\dfrac{\theta_{2}(\pi\,\zeta/R;\tau)} {\theta_{1}^{\prime}(0;\tau)} \right|, \end{equation} where the $\theta$ are Jacobi elliptic theta functions \begin{eqnarray} \theta_{1}(\zeta;\tau)=2 \sum_{n=0}^{\infty}&(-1)^{n}q_1^{n(n+1)+\frac{1}{4}}\sin((2n+1)\,\zeta),\nonumber\\ \theta_{2}(\zeta;\tau)=2 \sum_{n=0}^{\infty}&q_1^{n(n+1)+\frac{1}{4}}\cos((2n+1)\zeta), \end{eqnarray} \noindent with $q_1=e^{-\frac{\pi}{2}\tau}$, where $\tau=\frac{L_T}{R}$ is the modular parameter of the cylinder, $L_T=1/T$ is the temporal extent setting the inverse temperature, and $R_{0}(\zeta)$ is the UV cutoff, which has been generalized to depend on the distance from the sources. The second logarithmic term in Eq.~\eqref{WidthLO} encodes the dependence of the width on the modular parameter of the cylinder and implies an increase of the width with the temperature and the color-source separation. In the limit of large separation distances $R>L_T$, a modular transform of Eq.~\eqref{WidthLO} yields a linear broadening pattern in the string's width~\cite{allais,Gliozzi:2010zv}. Moreover, this term signifies a geometrical fine structure of the free string due to the non-constant width along the transverse planes. The curvature becomes more pronounced as the temperature and the string's length increase. F.~Gliozzi, M.
Pepe and Wiese~\cite{Gliozzi:2010zv,Pepe:2010na} computed analytically the width of the string at next-to-leading order, \begin{equation} W^2(\zeta)=W^2_{\ell o}(\zeta)+W^{2}_{n\ell o}(\zeta), \end{equation} with the leading-order term $W^2_{\ell o}$ in accord with Eq.~\eqref{WidthLO}, and the next-to-leading term given by \begin{widetext} \begin{align} \label{WidthNLO} W^{2}_{n\ell o}(\zeta)=\frac{\pi}{12 \sigma R^2}\left[E_2(i\tau)-4E_2(2i\tau)\right]\left(W_{\ell o}^2(\zeta)-\frac{D-2}{4\pi \sigma}\right)+\dfrac{(D-2)\pi}{12\sigma^2 R^2}\Big\{\tau \left(q \frac{d}{dq}-\frac{D-2}{12}E_2(i\tau)\right)& \\ \nonumber \left[E_2(2i\tau)-E_2(i\tau)\right]-\frac{D-2}{8 \pi} E_2(i\tau)\Big\}, \end{align} \end{widetext} where $q=e^{-\pi \frac{L_T}{R}}$. The form of $W^2_{\ell o}$ in terms of the Dedekind $\eta$ function given in Ref.~\cite{Gliozzi:2010zv} is equivalent to Eq.~\eqref{WidthLO} through the standard relations of elliptic functions. \subsection{The mean-square width of smooth string} Smooth configurations of the quantum fluctuations swept out in Euclidean space-time by the Nambu-Goto string are favored by adding a new term resulting from the geometrical second fundamental form, the so-called extrinsic-curvature (rigidity/stiffness) term of the world-sheet. The second fundamental form (or shape tensor), in differential-geometry notation, defines a quadratic form on the tangent plane of a smooth surface in three-dimensional Euclidean space. With a smooth choice of the unit normal vector at each point, this quadratic form generalizes to a smooth hypersurface in a Riemannian manifold.
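Before turning to the rigid string, the leading-order broadening encoded in Eq.~\eqref{WidthLO} can be evaluated directly from the truncated theta series above. The following sketch uses purely illustrative values ($D=4$, $\sigma a^{2}=0.044$, $L_{T}=8$, $R_{0}=a$, width evaluated at the quarter point $\zeta=R/4$); none of these choices are taken from the simulations:

```python
import math

def theta2(z, q, nmax=40):
    # theta_2(z; tau) = 2 * sum_n q^{n(n+1)+1/4} cos((2n+1) z)
    return 2.0 * sum(q**(n*(n+1) + 0.25) * math.cos((2*n+1)*z) for n in range(nmax))

def theta1p(q, nmax=40):
    # theta_1'(0; tau) = 2 * sum_n (-1)^n q^{n(n+1)+1/4} (2n+1)
    return 2.0 * sum((-1)**n * q**(n*(n+1) + 0.25) * (2*n+1) for n in range(nmax))

def width2_lo(R, LT, sigma=0.044, D=4, R0=1.0, zeta_frac=0.25):
    # Eq. (WidthLO) with the nome q1 = exp(-pi*tau/2), tau = LT/R,
    # evaluated at zeta = zeta_frac * R (so the theta argument is pi*zeta_frac)
    q1 = math.exp(-0.5 * math.pi * LT / R)
    z = math.pi * zeta_frac
    pref = (D - 2) / (2.0 * math.pi * sigma)
    return pref * (math.log(R / R0) + math.log(abs(theta2(z, q1) / theta1p(q1))))

LT = 8  # temporal extent in lattice units
w6, w12 = width2_lo(6, LT), width2_lo(12, LT)
print(w6, w12)  # the mean-square width grows with the source separation
```

The increase of the second logarithm with $R$ at fixed $L_T$ is the modular-parameter dependence discussed above: the nome $q_1$ grows toward unity as the string lengthens, enhancing the broadening beyond the zero-temperature logarithm.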
The mean-squared width of the Polyakov-Kleinert string is given by \begin{equation} W^2(\zeta) = \dfrac{\int DX\, ( X(\zeta, \zeta^{0})- X_0 )^2 \exp(-S^{PK}[ X])}{\int DX \exp(-S^{PK}[X])}, \end{equation} with the action defined as \begin{equation} S^{PK}[X]= S^{NG}_{\ell o}[X]+S^{NG}_{n \ell o}[X]+S^{Ext}[X], \end{equation} and $S^{Ext}[X]$, $S^{NG}_{\ell o}[X]$ and $S^{NG}_{n \ell o}[X]$ are in accord with Eq.~\eqref{Ext}, Eq.~\eqref{NGaction2}, \eqref{NGLO} and Eq.~\eqref{NGNLO}, respectively. The field is replaced by $X(x,t) \longrightarrow X(x,t)+ \gamma\, \partial_\mu \partial_\mu X(x, t)$ at the next-to-leading order, where $\gamma$ is a low-energy parameter~\cite{Gliozzi:2010zt,Gliozzi:2010zv}. Expanding around the free-string action Eq.~\eqref{NGLO}, the squared width of the string is then \begin{widetext} \begin{align} W^2(\zeta) = W_{\ell o}^2(\zeta)+ \langle X(\zeta, \zeta^{0})^2 (S^{NG}_{n \ell o} + S^{Ext}) \rangle_0 + 2 \gamma \langle (\partial_{\mu} X(\zeta, \zeta^{0}))^2 \rangle_0 + \gamma^2 \langle (\partial_\mu \partial_\mu X(\zeta,\zeta^{0}))^2 \rangle_0 \notag\\ - \beta r \gamma^2 \int d\zeta^{0}~d\zeta~d\zeta^{0\prime}~d\zeta' \langle \partial_\mu \partial_\mu X(\zeta,\zeta^{0})\cdot\partial_{\mu^\prime} \partial_{\mu^\prime} X(\zeta',\zeta^{0\prime}) \rangle_0, \notag \\ \label{TwoLoopExpansion} \end{align} \end{widetext} where $\langle \cdot \rangle_0$ represents the vacuum expectation value with respect to the free-string action. In the following, we calculate the width of the rigid string up to one-loop order.
The extrinsic-curvature term and the corresponding contribution to the mean-square width are accordingly \begin{align} S^{Ext}= \alpha_{rig} \int_{0}^{L_T} d\zeta^{0} \int_{0}^{R} d\zeta\,[(\partial_\zeta\partial_\zeta X )^2 +(\partial_{\zeta^{0}}\partial_{\zeta^{0}} X)^2 ]. \nonumber \end{align} The two-loop expansion Eq.~\eqref{TwoLoopExpansion} is then \begin{equation} W^2(\zeta)= W^2_{\ell o}(\zeta)+ W^2_{n\ell o}(\zeta)+ W^2_{Ext}, \label{TWExt} \end{equation} with the modification of the mean-square width by virtue of the rigidity \begin{equation} W^2_{Ext}= \left\langle X(\zeta,\zeta^{0})^2 S^{Ext} \right\rangle_0. \end{equation} Let us define the Green function $G(\zeta,\zeta^{0};\zeta',\zeta^{0 \prime})=\left \langle X(\zeta,\zeta^{0}) X(\zeta',\zeta^{0 \prime}) \right \rangle$ as the two-point propagator. The last term in Eq.~\eqref{TWExt}, representing the additional perturbation due to the rigidity of the string, is then given in terms of the corresponding Green functions by \begin{widetext} \begin{align} \left\langle X(\zeta,\zeta^{0})^2 S^{Ext} \right\rangle_0 =(D-2) \lim_{\epsilon, \epsilon' \to 0 }\int_{0}^{R}d\zeta'\int_{0}^{L_T}d\zeta^{0 \prime} \big(G(\zeta,\zeta^{0};\zeta',\zeta^{0 \prime}) \ \partial_{\mu } &\partial_{\mu } \partial_{\mu'} \partial_{\mu'} G(\zeta,\zeta^{0};\zeta',\zeta^{0 \prime}) + \label{Integral}\\ &\partial_{\mu} \partial_{\mu} G(\zeta,\zeta^{0};\zeta',\zeta^{0\prime}) \ \partial_{\mu'} \partial_{\mu '} G(\zeta,\zeta^{0};\zeta',\zeta^{0\prime})\big). \notag \end{align} \end{widetext} The limit $\epsilon, \epsilon' \to 0 $ is to be understood such that the integral is regularized using the point-split method.
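The closed-form rigidity contribution quoted below in Eq.~\eqref{WExt} involves the Eisenstein series $E_{2}$, which can be evaluated from its $q$-expansion. The following is a hedged numerical sketch: the identification $T=1/L_{T}$, the nome convention $q=e^{-\pi L_{T}/R}$, and all parameter values are illustrative assumptions, not fitted quantities from this work.

```python
import math

def eisenstein_E2(q, nmax=200):
    """Truncated q-expansion E_2(q) = 1 - 24 * sum_{n>=1} n q^n / (1 - q^n)."""
    return 1.0 - 24.0 * sum(n * q**n / (1.0 - q**n) for n in range(1, nmax + 1))

def width_ext(R, LT, sigma, alpha_rig, D=4):
    """Rigidity term W^2_Ext = (D-2) alpha_rig pi^2 T E_2(q)^2 / (24^2 R^3 sigma^2).

    T is taken as the inverse temporal extent 1/L_T (assumption), and the
    nome is q = exp(-pi * L_T / R) in lattice units (assumption)."""
    q = math.exp(-math.pi * LT / R)
    T = 1.0 / LT
    return (D - 2) * alpha_rig * math.pi**2 * T * eisenstein_E2(q) ** 2 \
        / (24**2 * R**3 * sigma**2)
```

Since the correction scales as $E_{2}^{2}(q)/R^{3}$, it is most relevant at short and intermediate separations, consistent with the fits discussed later.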
The Gaussian correlator $ G(\zeta,\zeta^{0};\zeta',\zeta^{0\prime})$ on a cylindrical sheet of surface area $RL_{T}$, with Dirichlet boundary conditions in $\zeta$ and periodic boundary conditions in $\zeta^{0}$ of period $L_{T}$ according to Eqs.~\eqref{bc1} and \eqref{bc2}, is \begin{align} G(\zeta,\zeta^{0};\zeta',\zeta^{0 \prime})&= \frac{1}{\pi \sigma }\sum _{n=1}^{\infty } \frac{1}{n \left(1-q^n\right)}\sin \left(\frac{\pi n \zeta}{R}\right) \sin \left(\frac{\pi n \zeta'}{R}\right)\\\notag & \left(q^n e^{\frac{\pi n (\zeta^{0}-\zeta^{0 \prime})}{R}}+e^{-\frac{\pi n (\zeta^{0}-\zeta^{0 \prime})}{R}}\right). \end{align} The nonlocal contribution to the mean-square width from the extrinsic-curvature component of the generalized smooth string can be calculated in detail~\cite{BakryINPRE} and is given by \begin{align} W_{Ext}^2 =\left(D-2 \right)\dfrac{\alpha_{rig} \pi^{2} T }{24^{2}R^{3}\sigma^{2}}E^{2}_{2}(q), \label{WExt} \end{align} where $E_{2n}(q)$ is the Eisenstein series. \section{Action Density on the Lattice} \subsection{Width of the Action Density} In the following, we measure the mean-square width of the action density in SU(3) gluonic configurations. The action density is related to the chromo-electromagnetic fields via $\frac{1}{2}(E^{2}-B^{2})$ and is evaluated via a three-loop improved lattice field-strength tensor~\cite{Bilson}.
We construct a color-averaged infinitely-heavy static quark-antiquark $Q\bar{Q}$ state by means of two Polyakov lines, \begin{align*} \mathcal{P}_{2Q}(\vec{r}_{1},\vec{r}_{2}) = P(\vec{r}_{1})P^{\dagger}(\vec{r}_{2}). \end{align*} A scalar field characterizing the action-density distribution in the Polyakov vacuum or in the presence of color sources~\cite{Bissey} can then be defined as \begin{equation} \mathcal{C}(\vec{\rho};\vec{r}_{1},\vec{r}_{2} )= \frac{\langle\mathcal{P}_{2Q}(\vec{r}_{1},\vec{r}_{2}) \, S(\vec{\rho})\,\rangle } {\langle\, \mathcal{P}_{2Q}(\vec{r}_{1},\vec{r}_{2})\,\rangle\, \,\langle S(\vec{\rho})\, \rangle}, \label{Actiondensity} \end{equation} with the vector $\vec{\rho}$ referring to the spatial position of the energy probe with respect to some origin, and the bracket $\langle \cdots \rangle$ standing for averaging over gauge configurations and lattice symmetries. We make use of the symmetry of the four-dimensional torus; that is, the measurements taken at a fixed color-source separation $R$ are repeated at each point of the three-dimensional torus and each time slice, and then averaged. The lattice size is sufficiently large to avoid mirror effects or correlations from the other side of the finite periodic lattice. The characterization Eq.~\eqref{Actiondensity} yields $\mathcal{C} \rightarrow 1$ away from the quarks by virtue of the cluster decomposition of the operators. \begin{figure}[!hpt] \begin{center} \includegraphics[scale=0.35]{stringWidth_fitting_20sw_2Func_comb} \caption{The density distribution $ \mathcal{C}(r,\theta,z=R/2)$ at the center of the tube $z=R/2$ for source separations $R\,=0.5$ fm and $R\,=1.1$ fm at temperature $T/T_{c} \approx 0.9$. The solid and dashed lines correspond to the fit to Eq.~\eqref{conGE} with $\sigma_1\neq\sigma_2$ and $\sigma_1=\sigma_2$, respectively.
}\label{action} \end{center} \end{figure} \begin{figure*}[!hpt] \begin{center} \includegraphics[scale=0.62]{NGMidT09.eps} \caption{The mean-square width of the density distribution in the middle of the tube $z=R/2$ at the temperature $T/T_c \approx 0.9$. The solid and dashed lines correspond to the free and self-interacting NG string, Eqs.~\eqref{WidthLO} and \eqref{WidthNLO}, respectively.}\label{MT09} \end{center} \end{figure*} \begin{table*}[!hptb] \begin{center} \begin{ruledtabular} \begin{tabular}{ccccccccccc} \multirow{1}{*}{Fit} &\multirow{1}{*}{$Q\bar{Q}$ distance,} &\multicolumn{2}{c}{Width of the action density $W^{2}(z)$} &\multicolumn{2}{c}{$\chi_{\rm{dof}}^{2}$} &\multicolumn{1}{c}{Relative}\\ \multirow{1}{*}{Range} &\multicolumn{1}{c}{$Ra^{-1}$} &$\sigma_1=\sigma_2$ &$\sigma_1\neq \sigma_2$ &$\sigma_1=\sigma_2$& $\sigma_1\neq \sigma_2$ &difference \\ \hline \multirow{8}{*}{7--28} &5 &10.1524$\pm$0.0563 &13.1490$\pm$0.0826 &63.2042 &0.7365 &22.79\%\\ &6 &11.5883$\pm$0.0832 &14.9462$\pm$0.1336 &39.2565 &0.2515 &22.47\%\\ &7 &13.4276$\pm$0.1277 &17.2725$\pm$0.2028 &21.7959 &0.0382 &22.26\%\\ &8 &15.4602$\pm$0.1985 &19.8105$\pm$0.3578 &10.2421 &0.0052 &21.96\%\\ &9 &17.4607$\pm$0.2986 &22.0775$\pm$0.6263 &4.6967 &0.0043 &20.91\%\\ &10 &19.6764$\pm$0.4636 &24.0357$\pm$1.1423 &1.9736 &0.0007 &18.14\%\\ &11 &21.9126$\pm$0.7022 &25.7155$\pm$1.9140 &1.1621 &0.0094 &14.79\%\\ &12 &24.3741$\pm$1.1399 &27.2671$\pm$3.2948 &0.8326 &0.0249 &10.61\%\\ \end{tabular} \end{ruledtabular} \end{center} \caption{ The mean-square width of the action density $W^{2}(z)$ and the corresponding $\chi^{2}$ at the temperature $T/T_{c}=0.9$ in the middle transverse plane intersecting the $Q\bar{Q}$ line $z=R/2$.
The width estimates and the relative differences are obtained in accord with Eq.~\eqref{conGE}, with $\sigma_{1}=\sigma_{2}$ corresponding to the standard Gaussian.} \label{CompWidth} \end{table*} To reduce statistical fluctuations while leaving the physical observables intact, only 20 sweeps of UV filtering using an over-improved algorithm~\cite{Morningstar,Moran} have been applied to all gauge configurations. Different UV-filtering schemes can be calibrated~\cite{PhysRevD.82.094503,Bonnet} in terms of the corresponding radius of the Brownian motion. The prescribed number of stout-link sweeps is the equivalent of 10 sweeps of the APE~\cite{Albanese} algorithm~\cite{PhysRevD.82.094503,Bonnet} with an averaging parameter $\alpha=0.7$. A careful analysis performed earlier~\cite{PhysRevD.82.094503} shows that for the prescribed number of sweeps no effects are detectable on either the quark-antiquark $Q\bar{Q}$ potential or the energy-density profile for color-source separation distances $R \geq 0.5$ fm, which is the distance region under scrutiny here.
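The characterization Eq.~\eqref{Actiondensity} is a ratio of ensemble averages that tends to unity for decorrelated operators (cluster decomposition). The following sketch evaluates the same ratio on synthetic, statistically independent ``measurements''; the data here are fabricated purely for illustration and do not represent gauge configurations.

```python
import random

def correlator(P, S):
    """C = <P*S> / (<P> <S>) over paired per-configuration measurements."""
    n = len(P)
    mean_ps = sum(p * s for p, s in zip(P, S)) / n
    mean_p = sum(P) / n
    mean_s = sum(S) / n
    return mean_ps / (mean_p * mean_s)

# Synthetic, independent "Polyakov-loop" and "action" samples (illustrative):
random.seed(0)
P = [1.0 + 0.01 * random.gauss(0, 1) for _ in range(20000)]
S = [2.0 + 0.01 * random.gauss(0, 1) for _ in range(20000)]
# For statistically independent P and S the ratio approaches 1,
# mirroring C -> 1 far from the sources.
```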
\begin{table*}[!hptb] \begin{center} \begin{ruledtabular} \begin{tabular}{ccccccccccc} \multirow{2}{*}{$T/T_{c}=0.9$} &Fit Range &&\multicolumn{5}{c}{$\chi^{2}$ }\\ &$n=R/a$ &&$z=1$ &$z=2$ &$z=3$ &$z=4$ &$z=R/2$\\ \hline \multirow{9}{*}{\begin{turn}{90}Free String (LO)\end{turn}} &5-9 &&445.232 &425.601 &-- &-- &79.5311\\ &6-9 &&115.849 &149.802 &91.0467 &-- &50.6209\\ &7-9 &&24.3641 &30.7772 &26.5945 &11.6436 &14.2874\\ &4-12 &&1220.73 &620.346 &1018.86 &-- &84.183\\ \multicolumn{1}{c}{} &5-12 &&481.859 &467.078 &-- &-- &82.9888\\ \multicolumn{1}{c}{} &6-12 &&137.744 &178.265 &161.094 &39.245 &53.3045\\ \multicolumn{1}{c}{} &7-12 &&36.8014 &48.0687 &78.9119 &38.7756 &15.7182\\ \multicolumn{1}{c}{} &8-12 &&10.9884 &14.6037 &39.1034 &22.4106 &1.9209\\ \multicolumn{1}{c}{} &10-12 &&0.3363 &1.7424 &6.3758 &4.7030 &0.0071\\ \hline \multirow{9}{*}{\begin{turn}{90}2 Loops (NLO)\end{turn}} &5-9 &&199.681 &374.824 &-- &-- &79.6823\\ &6-9 &&37.0773 &86.6316 &56.1896 &-- &28.6494\\ &7-9 &&5.4318 &11.6294 &12.0872 &-- &5.4264\\ &4-12 &&692.334 &1025.6 &615.994 &-- &326.3\\ \multicolumn{1}{c}{} &5-12 &&211.424 &397.556 &-- &-- &80.8514\\ \multicolumn{1}{c}{} &6-12 &&42.9046 &99.4396 &92.2507 &-- &29.2459\\ \multicolumn{1}{c}{} &7-12 &&8.3665 &18.4657 &37.3376 &15.8853 &5.5947\\ \multicolumn{1}{c}{} &8-12 &&2.4583 &5.1658 &18.1117 &10.064 &0.2685\\ \multicolumn{1}{c}{} &10-12 &&0.1072 &0.8282 &3.54221 &2.8005 &0.0145\\ \end{tabular} \end{ruledtabular} \end{center} \caption{ The returned values of $\chi^{2}$ for fits to the free string (LO), Eq.~\eqref{WidthLO}, and the self-interacting NG string (NLO), Eq.~\eqref{WidthNLO}, at each selected transverse plane $z_{i}$ at $T/T_{c}=0.9$; the last column lists the retrieved $\chi^{2}$ at the middle of the string $z=R/2$.
} \label{LO&NLO_fits_PlanesXT09} \end{table*} \begin{figure}[!hpt] \begin{center} \includegraphics[scale=0.56]{stringWidth_TTc09_GaussGaussC_fitResults_20sw_6up_sigma045_combset_Planes1234_1const_bin8to12} \caption{ The mean-square width $ W^{2}(z) $ of the string versus the $Q\overline{Q}$ separation $R$ at temperature ~$ T/T_{c} \approx 0.9$ in lattice units. Measurements are taken at consecutive planes $z=1$, $z=2$, $z=3$ and $z=4$ from the top to the bottom. The solid and dashed lines correspond to the one-parameter fits to the string model, Eqs.~\eqref{WidthLO} and \eqref{WidthNLO}, respectively.}\label{PlanesID1234T09} \end{center} \end{figure} We estimate the mean-square width of the gluonic action density along each plane transverse to the quark-antiquark axis. Taking into consideration the axial cylindrical symmetry of the tube, we choose a double-Gaussian function of the same amplitude, $A$, and mean value $\mu=0$, \begin{equation} G(r,\theta;z)= A (e^{-r^2/\sigma_1^2}+e^{-r^2/\sigma_2^2})+\kappa. \label{conGE} \end{equation} In the above form the constraint $\sigma_1=\sigma_2$ corresponds to the standard Gaussian distribution. Table~\ref{CompWidth} compares the returned values of $\chi^{2}$ for both optimization ans\"atze, namely, the constrained form $\sigma_1=\sigma_2$ and the unconstrained form $\sigma_1 \neq \sigma_2$. The fits of the double-Gaussian form return acceptable values of $\chi^{2}$ at intermediate distances. Good $\chi^{2}$ values are returned as well when fitting the action-density profile to a convolution of a Gaussian with an exponential~\cite{PhysRevD.88.054504,Bakry:2014gea}; however, considering the statistical uncertainties at large distances (see Fig.~\ref{action}) we opt for the form Eq.~\eqref{conGE} with $\sigma_1 \neq \sigma_2$ for stable fits.
\begin{figure}[!hpt] \begin{center} \subfigure[$R=0.8$ fm]{\includegraphics[scale=0.4]{stringWidth_TTc09_midPlane_GaussGaussC_fitResults_2040sw_4up_sigma045_strWidthDiff_widepanel_Ra08}} \subfigure[$R=0.9$ fm]{\includegraphics[scale=0.4]{stringWidth_TTc09_midPlane_GaussGaussC_fitResults_2040sw_4up_sigma045_strWidthDiff_widepanel_Ra09}} \subfigure[$R=1.2$ fm]{\includegraphics[scale=0.43]{T09R12.eps}} \subfigure[$R=1.4$ fm]{\includegraphics[scale=0.43]{T09R14.eps}} \caption{The changes in the width from the middle plane $z=R/2$ at temperature~$ T/T_{c} \approx 0.9$. The coordinates $z$ are lattice coordinates (lattice units) and are measured from the quark position $z=0$. The solid and dashed lines represent the free-string model (LO) and the self-interacting (NLO) string, Eqs.~\eqref{WidthLO} and \eqref{WidthNLO}, respectively.}\label{DIFFT09} \end{center} \end{figure} A measurement of the width of the string's action density may be taken by fitting the density distribution $\mathcal{C}(\vec{\rho};z)$, through each plane transverse to the cylinder's axis $z$, to Eq.~\eqref{conGE}, \begin{equation} \label{width1} \mathcal{C}(r,\theta;z)=1-G(r,\theta;z), \end{equation} \noindent with $r^2=x^2+y^2$ in each selected transverse plane $\vec{\rho}(r,\theta;z)$. The second moment of the action-density distribution with respect to the cylinder's axis $z$ joining the two quarks, \begin{equation} \label{widthg} W^{2}(z)= \dfrac{\int \, dr\,r^{3}\,G(r,\theta;z)} {\int \,dr \,r \: G(r,\theta;z) }, \end{equation} defines the mean-square width of the tube on the lattice. The loci of the color sources correspond to $z=0$ and $z=R$, respectively. In Table~\ref{CompWidth} the numerical values of the mean-square width of the string at the middle plane between the two color sources are listed.
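For the ansatz Eq.~\eqref{conGE} with the constant offset $\kappa$ dropped (a constant background renders the moment integrals divergent), the second moment Eq.~\eqref{widthg} admits a closed form: since $\int_{0}^{\infty} r\,e^{-r^{2}/\sigma^{2}}dr=\sigma^{2}/2$ and $\int_{0}^{\infty} r^{3}e^{-r^{2}/\sigma^{2}}dr=\sigma^{4}/2$, the amplitude $A$ cancels and $W^{2}=(\sigma_{1}^{4}+\sigma_{2}^{4})/(\sigma_{1}^{2}+\sigma_{2}^{2})$. A minimal sketch checking this against direct numerical integration:

```python
import math

def width_sq_closed(s1, s2):
    """Closed-form second moment of the kappa-free double Gaussian."""
    return (s1**4 + s2**4) / (s1**2 + s2**2)

def width_sq_numeric(s1, s2, rmax=50.0, steps=100001):
    """Direct trapezoidal evaluation of Eq. (widthg) for the same profile."""
    h = rmax / (steps - 1)
    num = den = 0.0
    for i in range(steps):
        r = i * h
        g = math.exp(-r * r / s1**2) + math.exp(-r * r / s2**2)
        fac = 0.5 if i in (0, steps - 1) else 1.0
        num += fac * r**3 * g
        den += fac * r * g
    return num / den
```

In the constrained case $\sigma_{1}=\sigma_{2}=\sigma$ the expression reduces to the familiar $W^{2}=\sigma^{2}$ of a single Gaussian.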
The percentage differences between the widths measured with the two ans\"atze in Table~\ref{CompWidth} indicate an almost constant shift, amounting to approximately $22\%$ of the width measured using the unconstrained optimization $\sigma_{1} \neq \sigma_{2}$. Further measurements of the mean-square width at the consecutive transverse planes $z=1$ to $z=4$ are listed in Table~\ref{WidthPlanesT09} of Appendix~A. The width is estimated in accord with Eqs.~\eqref{conGE} and \eqref{widthg} at each selected plane $z_i$ fixed with respect to one color source. We found that the unconstrained optimization Eq.~\eqref{conGE} returns $\sigma_{1} \neq \sigma_{2}$ at all color-separation distances. The numerical values in Table~\ref{WidthPlanesT09} indicate a broadening of the mean-square width of the string at all transverse planes $z_{i}$ as the color sources are pulled apart. The plot of the width at consecutive planes in Fig.~\ref{PlanesID1234T09} more clearly depicts an increasing slope in the pattern of growth as one considers planes farther from the quark sources, up to a maximum slope in the middle plane. \subsection{Width Profile and Nambu-Goto String} The broadening of the width at each selected transverse plane can be compared to the corresponding width of the quantum string, Eqs.~\eqref{WidthLO} and \eqref{WidthNLO}. The fit analysis of the two-Polyakov-loop correlator Eq.~\eqref{COR} discussed in the previous section suggests that the LO and NLO approximations return substantial differences closer to the deconfinement point, at the temperature $T/T_{c}\simeq0.9$.
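The relative-difference column of Table~\ref{CompWidth} can be reproduced from the width estimates themselves. A small sketch, taking the shift relative to the unconstrained value (our reading of the table's convention):

```python
def relative_difference(w_constrained, w_unconstrained):
    """Percentage shift of the constrained (sigma_1 = sigma_2) width
    estimate relative to the unconstrained (sigma_1 != sigma_2) one."""
    return 100.0 * (w_unconstrained - w_constrained) / w_unconstrained

# First two rows of Table (CompWidth), R = 5a and R = 6a:
row_r5 = relative_difference(10.1524, 13.1490)  # tabulated as 22.79 %
row_r6 = relative_difference(11.5883, 14.9462)  # tabulated as 22.47 %
```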
\begin{table*}[!hpt] \begin{center} \begin{ruledtabular} \begin{tabular}{ccccccccccc} \multirow{2}{*}{$T/T_{c}=0.8$} &Fit Range &&\multicolumn{5}{c}{$\chi^{2}$ }\\ &$n=R/a$ &&$z=1$ &$z=2$ &$z=3$ &$z=4$ &$z=R/2$\\ \hline \multirow{7}{*}{\begin{turn}{90}Free String (LO)\end{turn}} &4-12 &&578.806 &453430 &-- &-- &880374\\ \multicolumn{1}{c}{} &5-12 &&60.5188 &36402.6 &266299 &-- &168150\\ \multicolumn{1}{c}{} &6-12 &&44.1409 &31.0112 &126.596 &-- &189.559\\ \multicolumn{1}{c}{} &7-12 &&19.8978 &10.4049 &4.0910 &3.0459 &2.7246\\ \multicolumn{1}{c}{} &8-12 &&10.7254 &5.3255 &2.2323 &2.6146 &2.7243\\ \multicolumn{1}{c}{} &10-12 &&2.5551 &2.0697 &0.7103 &0.6643 &0.6747\\ \hline \multirow{6}{*}{\begin{turn}{90}2 Loops (NLO)\end{turn}} &4-12 &&241.135 &157140 &-- &-- &69091.1\\ \multicolumn{1}{c}{} &5-12 &&93.9843 &7415.33 &-- &-- &9118.51\\ \multicolumn{1}{c}{} &6-12 &&37.3147 &18.86 &26.4692 &-- &53.3679\\ \multicolumn{1}{c}{} &7-12 &&12.6149 &14.91 &9.0537 &2.8830 &4.1864\\ \multicolumn{1}{c}{} &8-12 &&7.0765 &4.6559 &2.1293 &2.5598 &2.6349\\ \multicolumn{1}{c}{} &10-12 &&2.0170 &1.8054 &0.5687 &0.5212 &0.5480\\ \end{tabular} \end{ruledtabular} \end{center} \caption{Listed are the returned values of $\chi^{2}$ corresponding to the fits to the Nambu-Goto string in the leading-order (LO), Eq.~\eqref{WidthLO}, and next-to-leading-order (NLO), Eq.~\eqref{WidthNLO}, formulations at each selected transverse plane $z_{i}$; the last column corresponds to the resultant fit at the middle plane of the string $z=R/2$.} \label{LO&NLO_fits_PlanesXT08} \end{table*} \begin{figure*}[!hpt] \begin{center} \includegraphics[scale=0.6]{stringWidth_TTc08_midPlane_GaussC_fitResults_20sw_4up_sigma045_widepanel_hole_bin8to12} \caption{ The broadening of the mean-square width of the density distribution at the center of the tube $z = R/2$ versus the $Q\bar{Q}$ separation distance $R$ at temperature $T/T_c=0.8$.
The solid and dashed lines correspond to the free and self-interacting Nambu-Goto (NG) string, (LO) Eq.~\eqref{WidthLO} and (NLO) Eq.~\eqref{WidthNLO}, respectively.}\label{MT08} \end{center} \end{figure*} \begin{figure}[!hpt] \begin{center} \subfigure{ \includegraphics[scale=0.42]{stringWidth_TTc08_midPlane_GaussC_fitResults_40sw_4up_sigma045_widepanel_hole_bin8to12} } \caption{Similar to Fig.~\ref{MT08}, except that the width measurements have been taken on gluonic configurations after $n_{sw}=40$ sweeps of UV filtering.}\label{MT0840} \end{center} \end{figure} \begin{figure}[!hpt] \begin{center} \subfigure{ \includegraphics[scale=0.55]{stringWidth_TTc08_GaussC_fitResults_20sw_4up_sigma045_combset_Planes1234_1const_bin8to12} } \caption{The mean-square width of the string $W^{2}(z)$ at ~$ T/T_{c} \approx 0.8$ versus the $Q\overline{Q}$ separation, measured in the planes $z=1$, $z=2$, $z=3$, and $z=4$ from the top to the bottom. The dashed and solid lines denote the leading and next-to-leading order Nambu-Goto string model, Eq.~\eqref{WidthLO} and Eq.~\eqref{WidthNLO}, respectively.}\label{PlanesID1234} \end{center} \end{figure} \begin{figure}[!hpt] \begin{center} \subfigure[The width differences from the middle plane of the tube at $R=0.8$ fm]{ \includegraphics[scale=0.4]{stringWidth_TTc08_midPlane_GaussC_fitResults_2040sw_4up_sigma045_strWidthDiff_widepanel_Ra08} } \subfigure[Same as subfigure (a) at $R=0.9$ fm]{ \includegraphics[scale=0.4]{stringWidth_TTc08_midPlane_GaussC_fitResults_2040sw_4up_sigma045_strWidthDiff_widepanel_Ra09} } \subfigure[The action density in the quark plane at $R=1.2$ fm]{\includegraphics[scale=0.45]{T08R12.eps}} \subfigure[Same as subfigure (c) at $R=1.4$ fm]{\includegraphics[height=6.2cm, width=5.2cm]{T08R14.eps}} \caption{The density distribution exhibits a nonuniform pattern along the transverse planes, even though the tube's width is constant at all source separations.}\label{DIFFT08} \end{center} \end{figure} \begin{figure*}[!hpt]
\includegraphics[scale=0.65]{./MiddleStiff.eps} \caption{ The mean-square width of the string $ W^{2}(z) $ versus the $Q\overline{Q}$ separation measured in the middle plane $z=R/2$ at ~$ T/T_{c} \approx 0.9$. The solid and dashed lines denote fits to the Nambu-Goto and stiff string, Eqs.~\eqref{WidthNLO} and \eqref{WExt}, on the interval $R\in[0.5,1.2]$ fm, respectively.}\label{MidPlane} \end{figure*} \begin{figure}[!hpt] \includegraphics[scale=0.5]{./PlanesStiff.eps} \caption{ The mean-square width of the string $ W^{2}(z) $ versus the $Q\overline{Q}$ separation measured in the planes $z=1$, $z=2$, $z=3$, and $z=4$, respectively from the top to the bottom, at ~$ T/T_{c} \approx 0.9$. The solid and dashed lines are the fits to the Nambu-Goto and stiff string, Eqs.~\eqref{WidthNLO} and \eqref{WExt}, on the interval $R\in[0.5,1.2]$ fm, respectively.}\label{FitsPlanes} \end{figure} In Table~\ref{LO&NLO_fits_PlanesXT09}, summarized are the resultant values of the fits considering various ranges of source separations. For this fit the string tension is fixed to its value returned at $T/T_{c}\simeq 0.8$ from fits of the $Q\bar{Q}$ data. The leading-order approximation shows a strong dependency on the fit range if the data points at small source separations are considered. The first three entries in Table~\ref{LO&NLO_fits_PlanesXT09} compare the values of $\chi^{2}$ for both approximations at source separations from $R=0.5$ fm up to $R=0.9$ fm, that is, excluding the last three points. The free-string picture poorly describes the lattice data at short distances. With the data points at short distances excluded from the fit, the values of $\chi^{2}$ decrease gradually. For example, with the first four points excluded from the fit, the returned $\chi^2$ is smaller, indicating that only the data points at large source separations are parameterized by the string-model formula. With the consideration of the next-to-leading-order solution of the NG action the values of $\chi^{2}$ are reduced.
Nevertheless, the values of $\chi^{2}$ are still significantly too large to precisely match the numerical data at intermediate distances. The highest values of $\chi^{2}$ are retrieved if the whole range of source separations $R=4a$ to $R=12a$ is included, for both the LO and NLO approximations. The fits in Table~\ref{LO&NLO_fits_PlanesXT09} reveal a strong dependency on the fit range as the points at small source separations are successively excluded. Indeed, a gradual decrease in the values of $\chi^{2}$ with the consideration of larger source separations manifests itself. This is relatively in favor of the string description in the two-loop approximation. The free-string (LO) Eq.~\eqref{WidthLO} and self-interacting (NLO) Eq.~\eqref{WidthNLO} solutions are one-parameter fit functions in the ultraviolet cutoff $R(\xi)$. While in the LO formula the ultraviolet cutoff has the effect of a constant shift, this is not the case for the NLO formula. Figures~\ref{MT09} and \ref{PlanesID1234T09} show the mean-square width at the middle plane of the tube, $z=R/2$, together with the corresponding fits to the free string Eq.~\eqref{WidthLO} and the self-interacting NLO form Eq.~\eqref{WidthNLO}. The fit range for the free string Eq.~\eqref{WidthLO} is chosen for color-source separations extending from $R=1.0$ fm to $R=1.2$ fm; for the NLO self-interacting string Eq.~\eqref{WidthNLO}, however, the fit range includes two additional points, $R=0.8$--$1.2$ fm. The fit regions in Figs.~\ref{MT09} and \ref{PlanesID1234T09} are chosen so that both approximations give almost the same behavior in the asymptotic region at large color-source separations $R \geq 10a$. Thus, in order to approach the NLO approximation in the asymptotic region, fits to the leading-order approximation Eq.~\eqref{WidthLO} should only be considered at large distances.
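The constant-shift role of the ultraviolet cutoff in the LO formula can be sketched assuming the familiar logarithmic long-string form of the leading-order width; this form is an assumption standing in for the full Eq.~\eqref{WidthLO}, and the parameter values are illustrative only.

```python
import math

def w2_lo_log(R, Rc, sigma, D=4):
    """Assumed logarithmic LO form W^2_lo(R) ~ (D-2)/(2 pi sigma) ln(R/R_c)."""
    return (D - 2) / (2.0 * math.pi * sigma) * math.log(R / Rc)

# Changing the cutoff R_c -> R_c' shifts W^2_lo by
# (D-2)/(2 pi sigma) * ln(R_c/R_c'), independently of R:
shifts = [w2_lo_log(R, 0.3, 0.045) - w2_lo_log(R, 0.2, 0.045)
          for R in (0.5, 0.8, 1.2)]
# All entries of `shifts` agree: the cutoff only moves the curve vertically,
# unlike the NLO formula, where the cutoff enters nonlinearly.
```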
The string fluctuations have an almost constant cross-section at intermediate distances $0.8 < R < 1.1$ fm, which is not what is expected from the free-string approximation Eq.~\eqref{WidthLO}~\cite{allais,PhysRevD.82.094503,Bakry:2010sp} at this distance scale. The analysis of the lattice data has revealed curvatures along the planes transverse to the quark-antiquark line at large distances~\cite{PhysRevD.82.094503,Bakry:2010sp}. At intermediate distances the profile along the transverse planes is geometrically more flat than the free-string picture would imply. A re-rendering of the mean-square width of the lattice data together with fits to Eqs.~\eqref{WidthLO} and \eqref{WidthNLO} discloses the geometrical effects of the inclusion of the NLO terms. The width at the middle plane $z=R/2$ is subtracted from that at the plane $z_i$, $W^2(z_i) a^{-2}$, as shown in Fig.~\ref{DIFFT09} for two typical $Q\bar{Q}$ configurations at $R=0.8$ fm and $R=0.9$ fm. The string profile obtained when considering the NLO terms in the effective action Eq.~\eqref{WidthNLO} shows improvements in the match with the lattice data. The suppression of the tube curvature and the constant-width property in the intermediate region can be conceived as geometrical features due to the higher loops in the string interactions. Although the statistical fluctuations increase with the decrease of the temperature (see Fig.~\ref{HIST0809}), the width estimates obtained through fitting the action density to Eq.~\eqref{conGE} can be stabilized at the temperature $T/T_c=0.8$ with the use of the standard Gaussian form, $\sigma_{1}=\sigma_{2}$, in Eq.~\eqref{conGE} instead. Our expectation from the fit behavior of the $Q\bar{Q}$ potential at $T/T_{c}=0.8$ to both the LO and NLO formulas is that higher-order effects are negligible at this temperature scale. Most of the considerations concerning the validity of both approximations to the $Q\bar{Q}$ potential at the temperature $T/T_{c}=0.8$ seem to hold for the string profile.
Considering the same fit range, the solid and dashed lines corresponding to the approximations Eqs.~\eqref{WidthLO} and \eqref{WidthNLO} in Fig.~\ref{MT08} coincide, with the exception of a subtle mismatch at the end point $R=0.5$ fm, in both the asymptotic region and at intermediate distances, regardless of the adopted fit range. This mismatch at $R=0.5$ fm is less obvious when considering fits at transverse planes other than the middle one, as can be seen in Fig.~\ref{PlanesID1234}. This can be attributed to the high value of $\chi^{2}_{dof}$ (only at $R=0.5$ and $R=0.6$ fm) when measuring the width through the standard Gaussian distribution, i.e., $\sigma_1=\sigma_2$ in Eq.~\eqref{conGE}. The increase of the uncertainties in the string's width for color-source separations $R>1.0$ fm does not weaken our argument that the string model in both approximation schemes provides a good description of the string profile at this temperature. Indeed, considering 40 sweeps of UV filtering~\cite{ahmed,Bakry:2010sp} on the gauge links before the evaluation of the correlator Eq.~\eqref{Actiondensity}, inspection of the plot in Fig.~\ref{MT0840} for fits to the non-normalized width~\cite{Bakry:2010sp} at the middle plane yields the same reasoning. In Table~\ref{LO&NLO_fits_PlanesXT08} the fits to the LO approximation unveil good values of $\chi^2$ for color-source separations up to $R=0.6$ fm; the next-to-leading-order fits, however, improve with respect to the fit range when including the source separations $R=0.5$ fm and $R=0.6$ fm. This manifests at the middle plane and at the other consecutive planes $z$ as well.
\begin{table*}[!hptb] \begin{center} \begin{ruledtabular} \begin{tabular}{ccccccccccc} \multirow{2}{*}{$T/T_{c}=0.8$} &Fit Range &&\multicolumn{5}{c}{$\chi^{2}$ }\\ &$n=R/a$ &&$z=1$ &$z=2$ &$z=3$ &$z=4$ &$z=R/2$\\ \hline \multirow{5}{*}{\begin{turn}{90}Rigid String\end{turn}} &5-9 &&199.681 &374.824 &-- &-- &79.6823\\ \multicolumn{1}{c}{} &6-9 &&4.57 &7.79266 &-- &-- &--\\ \multicolumn{1}{c}{} &7-12 &&9.3854 &1.112 &1.92674 &-- &6.10655\\ \multicolumn{1}{c}{} &8-12 &&3.61765 &6.2292 &0.89879 &1.057 &1.55448\\ \multicolumn{1}{c}{} &10-12 &&0.123321 &0.529496 &0.293811 &0.286096 &0.300564\\ \hline \multirow{2}{*}{$T/T_{c}=0.9$} & &&\multicolumn{5}{c}{ }\\ &5-9 &&71.53 &10.8 &-- &-- &0.92\\ \multicolumn{1}{c}{} &5-12 &&87.56 &15.60 &87.60 &-- &3.26\\ \multicolumn{1}{c}{} &6-12 &&6.27 &3.66 &3.90 &-- &3.17\\ \multicolumn{1}{c}{} &7-12 &&1.05 &0.67 &3.17 &3.17 &1.54\\ \end{tabular} \end{ruledtabular} \end{center} \caption{The $\chi^{2}$ retrieved from the fits to the Polyakov-Kleinert string Eq.~\eqref{TWExt} for temperatures $T/T_c=0.8$ and $T/T_{c}=0.9$ at each selected transverse plane $z_{i}$; the last column summarizes the fit results at the middle of the string $z=R/2$.} \label{StiffFits} \end{table*} At the temperature $T/T_{c}=0.8$, a re-rendering of the mean-square width differences $\delta W^2=\vert W^2(z_i)-W^{2}(R/2)\vert$ of the lattice data and the corresponding fits to the string model is displayed in Fig.~\ref{DIFFT08}. The width subtracted from that at the middle plane $z=R/2$ unveils an almost constant width along the planes transverse to the line connecting the color sources. The curvatures induced by thermal effects~\cite{Bakry:2010sp,PhysRevD.82.094503} manifest only at temperatures closer to the deconfinement point and at large distances.
This shows the diminishing of the geometrical effects on the flux-tube profile near the end of the QCD plateau, $T/T_{c}=0.8$, in contrast to the changes observed at $T/T_{c}=0.9$ as mentioned in Ref.~\cite{Giataganas}. The expectation of an almost flat width geometry along the transverse planes is consistent with the analysis shown in Fig.~\ref{DIFFT09}, which indicates that the thermal effects strongly diminish near the QCD-plateau region~\cite{PhysRevD.85.077501}. In Fig.~\ref{DIFFT08} the renders of the action densities corresponding to the two temperatures $T/T_{c}=0.8$ and $T/T_{c}=0.9$ unveil a prolate-shaped action density in the color map. These are two typical instances where the string's width profile exhibits a constant width along the tube: the first is due to the diminishing of thermal effects near the end of the QCD plateau; the second manifests at intermediate color-source separations and at temperatures close to the deconfinement point, as a result of the role played by the string self-interactions, which culminates in a squeeze/suppression along the transverse planes. Figures~\ref{DIFFT09} and \ref{DIFFT08} disclose that the geometry of the density isolines is quite independent of the width profile. The values of $\chi^2$ in Tables~\ref{LO&NLO_fits_PlanesXT08} and \ref{LO&NLO_fits_PlanesXT09} from the fits to the LO and the NLO approximations of the NG string compare favorably at the temperature near the end of the QCD plateau, indicating that the NG string successfully models QCD strings in the thermal regime up to temperatures as high as $T/T_{c}=0.8$.
In spite of the improvements in the parameterization behavior at the high temperature $T/T_{c}=0.9$ in the intermediate-distance region, the still-large values of $\chi^{2}$ raise the question of whether it is sufficient to consider the NG action up to next-to-leading order, or whether a more general string action, encompassing the leading terms of the NG action as a limiting approximation, ought to be considered. Nevertheless, the resolution of our lattice data is enough to disclose the roughness of the NG action, expanded up to four-derivative terms, in the precise description of the stringy color-tube profile. \subsection{Polyakov-Kleinert String and the Width Profile} Similar to the Nambu-Goto string, the broadening of the width at each selected transverse plane can be set into comparison with the corresponding width of the quantum stiff string, Eqs.~\eqref{TWExt} and \eqref{WExt}. Our expectation, based on the fit analysis of the $Q\bar{Q}$ potential data for the ordinary Nambu-Goto action Eq.~\eqref{NGaction1} and the Polyakov-Kleinert action Eq.~\eqref{PKaction}, is that substantial improvements in the fit behavior unveil at the temperature very close to the deconfinement point, $T/T_{c}\simeq0.9$. This improvement manifests also in the case of the compact $U(1)$ flux-tube~\cite{Caselle:2016mqu}. The resultant fits to the smooth-string width Eq.~\eqref{TWExt}, consisting of the stiffness term Eq.~\eqref{WExt} in addition to the next-to-leading-order solution of the NG action Eq.~\eqref{WidthNLO}, are listed in Table~\ref{StiffFits}. The values of $\chi^{2}$ in the second panel of Table~\ref{StiffFits} indicate a significant reduction at the temperature $T/T_c=0.9$ compared to the returned residuals (Table~\ref{LO&NLO_fits_PlanesXT09}) considering only the NG string Eq.~\eqref{WidthNLO}.
Moreover, the fits to the stiff string Eq.~\eqref{TWExt} return good values of $\chi^{2}$ over the whole intermediate source-separation distances at all transverse planes along the tube. The solid and dashed lines in Figs.~\ref{MidPlane} and \ref{FitsPlanes}, corresponding to the NG string in the self-interacting approximation Eq.~\eqref{WidthNLO} and the stiff string Eqs.~\eqref{TWExt} and \eqref{WExt}, show the dramatic improvement brought by the stiff string when considering the whole fit range $R \in [0.5,1.2]$ fm. At the temperature $T/T_c=0.8$, the fit results of the LO and NLO versions of the NG string summarized in Table~\ref{LO&NLO_fits_PlanesXT08} return very close parameterization behavior in both the asymptotic and intermediate-distance regions, regardless of the selected fit range. Indeed, higher-order effects are almost suppressed at this temperature scale. The fit to the NG approximation Eq.~\eqref{WidthNLO} returns good values of $\chi^2$ for the mean-square width of the string in the middle plane. However, the fit to the stiff string Eq.~\eqref{WExt} exhibits improvements with respect to the planes near the color sources; this could indicate the relevance of stiffness effects near the quark sources. The resultant values of the fits considering various ranges of source separations at $T/T_c=0.8$ are summarized in Table~\ref{StiffFits}. We conclude that the free NG string can only be a good approximation to QCD strings up to temperatures near the end of the QCD plateau, $T/T_{c}=0.8$. However, at higher temperatures other effects implied in the LW and PK actions, such as self-interactions, interactions with boundaries, and rigidity, come into play and must be taken into account to extrapolate to the correct description of the QCD string. \section{Summary and Conclusion} In this work we discussed the effective bosonic string model of confinement in the vicinity of the critical phase-transition point~\cite{talk}.
The corrections received from the Nambu-Goto (NG) action, expanded up to next-to-leading-order terms, have been compared with the corresponding $SU(3)$ Yang-Mills lattice data in four dimensions. The effects of boundary terms in the L\"uscher-Weisz (LW) action and of the extrinsic curvature in the Polyakov-Kleinert (PK) action have also been considered. The region under scrutiny is the source-separation interval from $R=0.5$ to $R=1.2$ fm, for two temperature scales near the end of the QCD plateau and just before the critical point. The theoretical predictions laid down by both the LO and the NLO approximations of the Nambu-Goto string show a good fit behavior for the data corresponding to the $Q\bar{Q}$ potential near the end of the QCD plateau region at $T/T_{c}=0.8$. The fits return almost the same parameterization behavior, with negligible differences in the measured zero-temperature string tension $\sigma_{0}a^{2}$. The returned value of this fit parameter is in agreement with the measurements at zero temperature~\cite{Koma:2017hcm}. At a higher temperature near the deconfinement point, $T/T_{c}=0.9$, the fits of the $Q\bar{Q}$ potential data to the Nambu-Goto string model, in either of its approximation schemes, return large values of $\chi^{2}$ if the fit region spans the whole range of source separations from $R=0.5$ fm to $R=1.2$ fm. Nevertheless, the data still compare more favorably to the next-to-leading-order approximation of the NG string on each corresponding fit interval. The values of the residuals decrease upon exclusion of the data points at short distances for both approximations. An effective description based only on the Nambu-Goto model does not accurately describe the $Q\bar{Q}$ potential data; this manifests as a deviation from the standard value of the string tension and from the static potential data. 
The fit to the Casimir energy of the self-interacting string returns a value of the zero-temperature string tension $\sigma_{0} a^{2}=0.041$, which deviates by $11\%$ from that measured at $T/T_{c}=0.8$ and at zero temperature. This motivated discussing other effects, such as the interaction with the boundaries and the stiffness of the flux tube. The inclusion of the leading boundary term of the L\"uscher-Weisz action in the approximation scheme improves the fits at all the considered source separations; however, the deviations from the value of the zero-temperature string tension $\sigma_{0}a^{2}$ do not diminish. Near the deconfinement point, the fits of the static potential considering the boundary terms of the LW action and the contributions from the extrinsic curvature of the PK action show a significant improvement, compared to fits considering merely the ordinary Nambu-Goto string, over the intermediate and asymptotic color-source separation distances $R \in [0.5,1.2]$ fm. The fits reproduce an acceptable value of $\chi^{2}$ and a zero-temperature string tension $\sigma_{0}a^{2}$ consistent with that measured at $T/T_c=0.8$ or at $T=0$~\cite{Koma:2017hcm}, thus indicating a correct temperature dependence of the string tension. Similarly, we considered the implications of the NG string for the mean-square width profile near the end of the QCD plateau region. We find that the mean-square width of the NG string in both the LO and NLO approximations fits the lattice data well. Negligible differences are observed at intermediate distances and in the asymptotic long-string limit. We conclude that the potential and the mean-square width extracted from the string partition function up to next-to-leading order are consistent with the lattice data at intermediate distances for temperature scales up to $T/T_{c}=0.8$. At the higher temperature $T/T_c=0.9$, the color tube exhibits a suppressed growth profile in the intermediate region. 
The fits considering both intermediate and asymptotic color-source separation distances show noticeable improvement for the self-interacting (NLO) string picture compared to that obtained on the basis of the free-string approximation. Nevertheless, the next-to-leading-order approximation does not provide an accurate match to the numerical data. This manifests as significantly large values of the returned $\chi^{2}$ when considering distances $R < 0.8$ fm. However, we found that the rigid-string width profile accurately matches the width measured from the numerical lattice data near the deconfinement point. This suggests that rigidity effects can be very relevant to the correct description of Monte Carlo data of the field density, and motivates scrutinizing the stiffness physics of the QCD flux tube in other frameworks~\cite{Cea:2014uja}. The oscillations of a free NG string, fixed at the ends by Dirichlet boundaries, trace out a nonuniform width profile with a geometrically curved fine structure. This is detectable~\cite{PhysRevD.82.094503} at source separations $R>1.0$ fm and near the critical temperature. However, in the intermediate region the lattice data are not consistent with the curved width of the freely fluctuating string. The fits to the mean-square width extracted from the NLO expansion of the NG string, however, indicate that self-interactions flatten the width profile in the intermediate region. The string's self-interactions account for the constant width along consecutive transverse planes of the tube, in addition to the decreased slope of the suppressed width broadening. At the end of the QCD plateau region, at temperature $T/T_c=0.8$, the constant-width property manifests at all source-separation distances and is consistent with the pure NG action. 
These results indicate not only the fading of thermal effects at this temperature, but also a form of the action-density map that is independent of the geometrical changes induced by the temperature. That is, the main features of the density map would persist at lower and zero temperature. In this investigation, we found that considering higher-order loops of the string self-interactions together with geometrically smooth string configurations successfully eliminates the deviations of the string model from the Yang-Mills lattice data at high temperature. \begin{figure*}[!hpt] \begin{center} \subfigure{\includegraphics[scale=0.28]{ActionDensity_4up_20sw_TTc08_histErr}} \subfigure{\includegraphics[scale=0.28]{ActionDensity_6up_20sw_TTc09_histErr}} \subfigure{\includegraphics[scale=0.28]{ActionDensity_4up_80sw_TTc08_R12_histErr}} \caption{The action-density profile for quark-antiquark $Q\bar{Q}$ separation distances $R=5,7,9,11$ at the center of the tube, $z=R/2$. The pads below show the uncertainty distributions of the corresponding action densities. 
Profiles are shown for the depicted temperatures $T/T_{c}=0.8$ and $T/T_{c}=0.9$.}\label{HIST0809} \end{center} \end{figure*} \begin{table*}[!hpt] \begin{center} \begin{ruledtabular} \begin{tabular}{ccccccccccc} \multirow{1}{*}{Fit} &\multirow{1}{*}{$Q\bar{Q}$ distance,} &\multicolumn{5}{c}{Width of the action density $W_{z}^{2}(x)$}\\ \multirow{1}{*}{Range} &\multicolumn{1}{c}{$Ra^{-1}$} &$z=1$ &$z=2$ &$z=3$ &$z=4$ &$z=R/2$\\ \hline \multicolumn{7}{c}{$T/T_{c}=0.9$}\\ \hline \multirow{10}{*}{7--28} &5 &14.1199$\pm$0.098 &13.149$\pm$0.083 &13.149$\pm$0.083 &14.1199$\pm$0.098 &13.149$\pm$0.083\\ &6 &16.1792$\pm$0.153 &15.223$\pm$0.128 &14.9462$\pm$0.124 &15.223$\pm$0.128 &14.9462$\pm$0.134\\ &7 &17.96$\pm$0.239 &17.3606$\pm$0.203 &17.2725$\pm$0.203 &17.2725$\pm$0.203 &17.2725$\pm$0.203\\ &8 &19.3835$\pm$0.363 &19.1211$\pm$0.316 &19.5429$\pm$0.336 &19.8105$\pm$0.358 &19.8105$\pm$0.358\\ &9 &20.7768$\pm$0.541 &20.5916$\pm$0.477 &21.3668$\pm$0.532 &22.0775$\pm$0.626 &22.0775$\pm$0.626\\ &10 &22.5249$\pm$0.805 &22.3885$\pm$0.722 &23.2364$\pm$0.817 &23.9204$\pm$1.019 &24.0357$\pm$1.142\\ &11 &24.1116$\pm$1.166 &24.8579$\pm$1.112 &26.2653$\pm$1.303 &26.6043$\pm$1.618 &25.7155$\pm$1.914\\ &12 &23.7865$\pm$1.495 &26.9632$\pm$1.649 &29.2494$\pm$0.628 &30.9147$\pm$0.804 &27.2671$\pm$3.295\\ &13 &21.4501$\pm$1.711 &26.587$\pm$2.094 &31.8025$\pm$0.829 &34.7752$\pm$1.073 &31.3649$\pm$5.516\\ &14 &19.7353$\pm$2.013 &23.8117$\pm$2.319 &32.5125$\pm$1.067 &38.3333$\pm$1.394 &35.9662$\pm$10.677\\ \end{tabular} \end{ruledtabular} \end{center} \caption{ The mean-square width of the action density $W^{2}(z)$ measured at the temperature $T/T_{c}=0.9$. The width is estimated with $\sigma_1 \neq \sigma_2$ in Eq.\eqref{conGE} at five consecutive transverse planes $z_i$ along the $Q\bar{Q}$ line. 
} \label{WidthPlanesT09} \end{table*} \begin{table*}[!hpt] \begin{center} \begin{ruledtabular} \begin{tabular}{ccccccccccc} \multirow{1}{*}{Fit} &\multirow{1}{*}{$Q\bar{Q}$ distance,} &\multicolumn{5}{c}{Width of the action density $W_{z}^{2}(x)$}\\ \multirow{1}{*}{Range} &\multicolumn{1}{c}{$Ra^{-1}$} &$z=1$ &$z=2$ &$z=3$ &$z=4$ &$z=R/2$\\ \hline \multicolumn{7}{c}{$T/T_{c}=0.8$}\\ \hline \multirow{10}{*}{7--28} &5 &7.3822$\pm$0.042 &7.2984$\pm$0.038 &7.3822$\pm$0.042 &7.3822$\pm$0.042 &7.2984$\pm$0.038\\ &6 &8.2016$\pm$0.068 &8.1932$\pm$0.062 &8.2032$\pm$0.063 &8.1932$\pm$0.062 &8.2032$\pm$0.063\\ &7 &8.9061$\pm$0.113 &9.0557$\pm$0.105 &9.2114$\pm$0.109 &9.2114$\pm$0.109 &9.2114$\pm$0.109\\ &8 &9.5545$\pm$0.191 &9.8594$\pm$0.178 &10.2916$\pm$0.192 &10.4928$\pm$0.204 &10.4928$\pm$0.204\\ &9 &10.3981$\pm$0.321 &10.7443$\pm$0.298 &11.4887$\pm$0.332 &12.1310$\pm$0.378 &12.131$\pm$0.378\\ &10 &11.8279$\pm$0.553 &11.9383$\pm$0.631 &12.4012$\pm$0.681 &13.7030$\pm$0.853 &14.5621$\pm$0.975\\ &11 &14.0864$\pm$0.992 &14.0718$\pm$1.201 &13.7877$\pm$1.175 &15.0550$\pm$1.447 &17.2586$\pm$1.909\\ &12 &17.4755$\pm$1.863 &18.5431$\pm$1.702 &17.0604$\pm$2.460 &18.2364$\pm$1.898 &20.8867$\pm$2.542\\ &13 &21.2485$\pm$3.256 &24.1686$\pm$3.066 &26.6827$\pm$3.341 &25.0955$\pm$3.766 &28.1915$\pm$5.456\\ &14 &22.462$\pm$4.675 &26.9322$\pm$4.510 &31.9809$\pm$5.093 &39.9434$\pm$7.260 &50.1043$\pm$16.622\\ \end{tabular} \end{ruledtabular} \end{center} \caption{Similar to Table.~\ref{WidthPlanesT09}, the width of the action density $W_{z}^{2}(x)$ measured at the temperature $T/T_{c}=0.8$ with the use of the fit formula Eq.\eqref{conGE} setting $\sigma_1 \neq \sigma_2$. } \label{WidthPlanesT08} \end{table*} \begin{acknowledgments} We thank Thomas Filk for useful comments. 
This work has been funded by the Chinese Academy of Sciences President's International Fellowship Initiative grants No.2015PM062 and No.2016PM043, the Recruitment Program of Foreign Experts, NSFC grants (Nos.~11035006,~11175215,~11175220) and the Hundred Talent Program of the Chinese Academy of Sciences (Y101020BR0). \end{acknowledgments}
\section{Introduction} The goal of computational protein design is to identify a protein sequence that has a specified structure and, thereby, function \cite{Frappier2021}. Protein design has been used to design enzymes that catalyze important organic reactions \citep{Siegel2010} and to identify miniproteins that bind to the spike protein of SARS-CoV-2 and inhibit infection \citep{Cao2020}. However, protein design methods such as Rosetta, which are rooted in approximations of molecular physics, are too inaccurate to be reliably used without multiple rounds of expensive experimental trial-and-error \citep{Ingraham2019, Frappier2021}. Recently developed deep learning methods based on rigid coordinate featurizations have been applied to this problem and have achieved remarkable success using architectures like graph neural networks \citep{Ingraham2019} or 3D equivariant neural networks \citep{Jing2021}. An alternative featurization of proteins using Tertiary Motifs (TERMs) has recently shown success quantifying the sequence-structure relationships needed for design using purely statistical methods \citep{Zhou2020}. TERMs are small, compact structural units that recur frequently in unrelated proteins and have been shown to cover a large portion of protein space \citep{Mackenzie2016}. A scoring function can be derived by defining TERMs within a given protein, searching for the closest matches across the PDB, and evaluating the sequence statistics of the resulting matches. TERMs are fuzzier than coordinate-based features, since they do not need to match exactly across different proteins. Statistical models based on TERMs can capture sequence-structure relationships \citep{Zheng2015}, predict mutational changes in protein stability \citep{Zheng2017}, and be applied directly to protein design \citep{Zhou2020, Frappier2019}. 
The success of statistical models based on fuzzy TERMs suggests that these models capture information that can potentially augment models that consider only rigid coordinates. In this work, we designed a deep neural network that takes both TERM data and coordinate data as inputs and returns a scoring function that can be applied to evaluate any sequence on a given structure. Our method, named TERMinator, outperforms previous state-of-the-art methods on native sequence recovery tasks and we show through ablation studies that the use of TERM data is essential for the best performance. Our results suggest that TERMs provide a useful featurization of proteins for deep learning models. \section{Methods} \subsection{Dataset} Following \citet{Ingraham2019} and \citet{Jing2021} we use the CATH 4.2 split curated by \citet{Ingraham2019}. For every chain, we describe the structure using a set of TERMs that includes sequence-local singleton TERMs (usually 3 contiguous residues) and sequence non-local pair TERMs (usually 2 interacting 3-residue segments). For each TERM, we perform substructure lookup \citep{Zhou2020} against a non-redundant database generated from the Protein Data Bank (PDB) on Jan. 22, 2019. Self-matches with the protein itself are discarded. For each match, we record the sequence along with a number of geometric features of the backbone. For more details about the TERM data, see the Appendix. \subsection{Architecture} \begin{figure}[t] \centering \includegraphics[width=\linewidth]{img/arch_overview.png} \caption{Model Architecture. The TERM Information Condenser extracts information from structural matches to TERMs in the target protein to construct node and edge embeddings. The GNN Potts Model Encoder takes in TERM data and coordinate features and outputs a Potts model over positional amino acid labels (see Appendix: Potts Model for functional form). 
We use MCMC simulated annealing to generate optimal sequences given the Potts model.} \label{fig:architecture} \end{figure} The network, shown in Figure \ref{fig:architecture}, can be broken into two sections. The first section, the TERM Information Condenser, learns local structure via graph convolutions on small, local TERM graphs. The second section, the GNN Potts Model Encoder, learns global structure via graph convolutions over nearest neighbors across the entire chain. Further details about the network architecture can be found in the Appendix. \paragraph{TERM Information Condenser} We represent TERMs as bidirectionally fully-connected graphs. Per TERM, we take the top 50 matches in the PDB with lowest RMSD. Each residue in each TERM match is converted to a set of geometric features describing backbone geometry and residue accessibility along with a one-hot encoding of the match residue identity. Node embeddings are generated from these initial features via the Matches Condenser, an attention-based pooling network that operates on the same residue across all matches per TERM. Edge embeddings, representing residue interactions within a TERM, are computed by constructing weighted cross-covariance matrices between the two residues' initial features, flattening them to vector form, and compressing them with a feedforward network. We feed these preliminary TERM graph embeddings through a 3-layer message-passing network known as the TERM MPNN, which operates on the local graph for each TERM. This results in a collection of residue embeddings and edge embeddings per TERM. We stitch together a full structure embedding by taking the mean of all replicate embeddings for both nodes and edges, since all residues and edges are covered by at least one TERM. 
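The stitching step just described, which mean-pools replicate TERM node embeddings into one full-structure embedding, can be sketched as follows. This is an illustrative sketch, not the authors' implementation; the function and argument names are hypothetical.

```python
import numpy as np

def stitch_embeddings(term_residue_idx, term_node_emb, n_residues):
    """Average replicate TERM node embeddings into a full-structure embedding.

    term_residue_idx : list of 1-D int arrays, residue indices covered by each TERM
    term_node_emb    : list of (len(term), d) arrays, per-TERM residue embeddings
    n_residues       : total number of residues in the chain
    """
    d = term_node_emb[0].shape[1]
    summed = np.zeros((n_residues, d))
    counts = np.zeros(n_residues)
    for idx, emb in zip(term_residue_idx, term_node_emb):
        summed[idx] += emb   # accumulate each TERM's embedding for its residues
        counts[idx] += 1     # count how many TERMs cover each residue
    # every residue is covered by at least one TERM, so counts > 0
    return summed / counts[:, None]
```

The same scatter-mean applies to edge embeddings, indexed by residue pairs instead of single residues.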
\paragraph{GNN Potts Model Encoder} The GNN Potts Model Encoder combines the structure embedding produced by the TERM Information Condenser and target protein backbone coordinate features to produce a Potts model over the structure's sequence space. We use the coordinate-based structure embedding presented in \citet{Ingraham2019} and concatenate the TERM-based structure embedding, where it overlaps with the global $k$-nearest neighbors graph of residues, to generate the initial full-chain representation (see the Appendix for further details). The GNN Potts Model Encoder is another message-passing network that is identical to the TERM MPNN in architecture, but instead operates on the global $k$-NN graph, including self-edges. We produce a Potts model from the output of this network by projecting each edge embedding into a matrix of residue-pair interaction energies. Self-energies are defined as the diagonal of this matrix, while pair-energies are defined by the entire matrix. \subsection{Training} The loss function is the negative log \textit{composite pseudo-likelihood} averaged across residue pairs. The composite pseudo-likelihood is the probability that any pair of interacting residues has the same identity as that pair of residues in the target sequence, given the remainder of the target sequence. Unlike the classic pseudo-likelihood, the composite pseudo-likelihood depends on all values in the Potts model and thus penalizes pathological Potts models that only display reasonable energies on observed values. A mathematical definition of composite pseudo-likelihood in the context of a Potts model can be found in the Appendix. \subsection{Performance Metrics} We evaluate performance by native sequence recovery, following \citet{Ingraham2019} and \citet{Jing2021}. For each Potts model, we sample $100$ sequences via MCMC-simulated annealing, reducing the temperature from $kT = 1$ to $kT = 0.1$. 
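The Potts energy and the annealing procedure just described can be sketched as follows. This is a minimal single-site Metropolis sampler under a geometric cooling schedule; the function names, the step count, and the schedule shape are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

ALPHABET = 20  # canonical amino acids

def potts_energy(seq, h, J):
    """E(S) = sum_i h_i(s_i) + sum_{i<j} J_ij(s_i, s_j)."""
    L = len(seq)
    e = sum(h[i, seq[i]] for i in range(L))
    for i in range(L):
        for j in range(i + 1, L):
            e += J[i, j, seq[i], seq[j]]
    return e

def anneal(h, J, steps=2000, kT_start=1.0, kT_end=0.1, rng=None):
    """Metropolis single-site moves while cooling from kT_start to kT_end."""
    if rng is None:
        rng = np.random.default_rng(0)
    L = h.shape[0]
    seq = rng.integers(ALPHABET, size=L)
    e = potts_energy(seq, h, J)
    for step in range(steps):
        kT = kT_start * (kT_end / kT_start) ** (step / steps)
        pos = rng.integers(L)
        old = seq[pos]
        seq[pos] = rng.integers(ALPHABET)      # propose a single-site mutation
        e_new = potts_energy(seq, h, J)
        if e_new > e and rng.random() > np.exp((e - e_new) / kT):
            seq[pos] = old                     # reject uphill move
        else:
            e = e_new                          # accept move
    return seq, e
```

Running this sampler many times and keeping the lowest-energy sequence mirrors the paper's procedure of drawing 100 annealed samples per Potts model.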
The lowest energy sequence is used to compute sequence recovery, and the sequence recovery for the entire test set is reported as the median recovery over all chains in the test set. We run all of our models in triplicate and report mean $\pm$ standard deviation. \section{Results} \subsection{Model performance and ablation studies} \begin{table}[ht] \caption{Ablation Studies of TERMinator. Native sequence recovery is listed as mean $\pm$ standard deviation for triplicate train/test runs on the same data split, where triplicate data are available.} \label{tab:results} \centering \begin{tabular}{ll} \toprule \textbf{Model version} & \textbf{Percent Sequence Recovery} \\ \midrule TERM Information Condenser + GNN Potts Model Encoder & $41.73 \pm 0.27$ \\ Ablate TERM Information Condenser & $40.29 \pm 0.26$ \\ \quad Ablate TERM MPNN & $41.19 \pm 0.05$ \\ \quad \textbf{Ablate Singleton Features} & $\textbf{42.22} \bm{\pm} \textbf{0.06}$ \\ \quad Ablate Pairwise Features & $41.53 \pm 0.16$ \\ Ablate GNN Potts Model Encoder & $29.94 \pm 1.10$\\ \quad Ablate Coordinate-based Features, Retain $k$-NN graph & $35.87 \pm 0.18$\\ GNN Potts Model Encoder Alone (no TERM information) & $39.66 \pm 0.30$ \\ dTERMen & 24.32 \\ \midrule \citet{Jing2021} & 40.2\\ \citet{Ingraham2019} (as reported by \citet{Jing2021}) & 37.3\\ \bottomrule \end{tabular} \end{table} We train TERMinator, alongside several ablated versions, to understand how the different modules and input features affect performance on native sequence recovery. Results are shown in Table \ref{tab:results} and descriptions of the ablated models are in the Appendix. Our best models outperform state-of-the-art methods. Interestingly, our best performing model is TERMinator with the singleton features from the TERM matches ablated. This could be attributed to overfitting by the Matches Condenser, which uses an expressive attention-based mechanism on a relatively small number (50) of TERM matches. 
Our ablation studies suggest that while TERM-based features and coordinate-based features are largely redundant, neither serves as a full replacement for the other. A TERMinator model trained purely on coordinate data can achieve a respectable sequence recovery of 39.7\%. Similarly, training on TERM data with no coordinate information (but with the benefit of the global $k$-NN graph, an inherently fuzzy feature) also achieves a respectable recovery of 35.87\%. However, when TERMinator utilizes both sources of data, it outperforms both models, as well as published state-of-the-art models, on native sequence recovery. We also note that when we ablate the GNN Potts Model Encoder, TERMinator effectively acts as a better version of dTERMen, a statistical framework which utilizes TERM information to compute Potts models over proteins \citep{Zhou2020}. The ablated TERMinator and dTERMen share access to essentially the same TERM input data (see the Appendix for a more detailed discussion), but the learned model makes better use of this information and is likely a better choice for future design tasks. \subsection{Confusion Matrices} \begin{figure}[ht] \centering \includegraphics[width=0.45\linewidth]{img/confusion_matrix.png} \caption{Confusion matrix reporting percent confusion with respect to true residue identity. Values report aggregate performance by the singleton-features-ablated TERMinator model across triplicate runs. `X' is an out-of-alphabet label used to represent non-canonical residues.} \label{fig:confusion_matrix} \end{figure} To better understand the performance of TERMinator, we examine the amino-acid confusion matrix. A representative matrix is shown in Figure \ref{fig:confusion_matrix}. The strong diagonal in the matrix reflects the high overall native sequence recovery; glycine (G) and proline (P) are particularly well recovered. Interestingly, the ``mistakes'' the model makes are physically realistic. 
We see confusion within the EKR block, which contains charged amino acids glutamate, lysine, and arginine, with the switch between charges potentially attributable to the model reversing the direction of salt bridges. Other blocks include: ST, with highly similar hydroxyl sidechains; VI, with sterically similar branched aliphatic sidechains; FWY, encompassing aromatic residues; and DN, with isosteric side chains. These substitutions are highly plausible and illustrate that TERMinator learns physicochemically realistic representations of proteins, lending confidence in the model's utility for design applications. \section{Conclusion} In this work, we present TERMinator, a graph-based network that utilizes both TERM and coordinate features to produce a Potts model over sequence space for any given protein chain. This model outperforms state-of-the-art neural models on the native sequence recovery task, and we attribute this performance boost to the inclusion of TERM data. Additionally, confusion matrices suggest that the model learns physically realistic representations. In future work, we hope to apply this model to real-world protein design tasks, as well as extend the model to be trainable over a variety of target outputs, such as binding affinity. \begin{ack} This work was funded by an award from the National Institutes of General Medical Sciences to G.G. and A.K., 5R01GM132117. The authors acknowledge Dartmouth Anthill, MGHPCC C3DDB, and MIT SuperCloud for providing high-performance computing resources used to generate research results for this paper. A.L. acknowledges funding from MIT UROP and would like to thank Sebastian Swanson for teaching him how to use the dTERMen software suite. V.S. acknowledges funding from the Fannie and John Hertz Foundation. \end{ack} \newpage \section*{Appendix} \subsection*{Potts Model} Our model outputs a Potts model over positional amino acid labels, commonly known as an energy table in the protein design community. 
A Potts model describes a mapping from sequence $S$ of length $L$ to energy $E(S)$ with the functional form $$ E(S) = \sum_{i=1}^L h_i(s_i) + \sum_{i=1}^L \sum_{j>i}^L J_{ij}(s_i, s_j) $$ where \begin{itemize} \item Singleton terms $h_i(s_i)$ describe the energy contribution of position $i$ in $S$. \item Pairwise interaction terms $J_{ij}(s_i, s_j)$ describe the energy contribution from the interaction between positions $i$ and $j$ in $S$. \end{itemize} In our Potts model, the singleton term takes the form $h_i(s_i) = E_s(R_i = m)$, where $E_s$ is a lookup table of energies for placing residue $m$ at position $i$. The pairwise interaction term takes the form $J_{ij}(s_i, s_j) = E_p(R_i=m, R_j=n)$, where $E_p$ is a lookup table of energies of placing residue $m$ at position $i$ and residue $n$ at position $j$. This functional form is attractive because it can be used to rapidly evaluate any sequence's energy, and is easy to optimize via MCMC-based methods. \subsection*{TERM data} Our goal was to feed into our neural network similar data as would normally be mined by dTERMen \citep{Zhou2020}. This would enable us to differentiate the limitations of TERM data themselves from the limitations associated with the specific statistical approach in dTERMen. To this end, we modified an in-house version of the dTERMen program with the ability to output TERM match information for all of the motifs used in the standard procedure (as described in \citet{Zhou2020}). Briefly, dTERMen defines three types of TERMs in the input structural template: singleton, near-backbone, and pair TERMs. Singleton TERMs are defined around each residue $i$ via the contiguous fragment between residues $(i-n)$ and $(i+n)$, where $n$ is a parameter ($n=1$ was used in this study). 
Near-backbone TERMs combine the local backbone around residue $i$ (i.e., the singleton fragment) with local backbone fragments around each residue $j$ whose backbone is geometrically poised to interfere with amino-acid sidechains at $i$. Finally, pair TERMs are defined around each pair of residues $i$ and $j$ that are geometrically poised to affect each other's amino-acid choice. As described in Zhou et al. \citep{Zhou2020}, it is frequently the case that the full near-backbone TERM around a residue (i.e., the generally multi-segment motif that captures all relevant surrounding backbone fragments) does not contain sufficient structural matches in the database, in which case the dTERMen procedure seeks to optimally partition the overall near-backbone contribution into as few sub-motifs as possible. This step adds considerable search time. We reasoned that a learning-based approach may be better at extracting relevant statistical couplings between residue sites, such that a detailed breakdown of sidechain-to-sidechain versus backbone-to-sidechain coupling statistics may not be necessary. Therefore, we omitted near-backbone TERMs in this study for computational efficiency. In finding close structural matches, dTERMen uses a motif complexity-based empirical RMSD cutoff (defined in Mackenzie et al. \citep{Mackenzie2016}), with additional settings used to control the minimal and maximal number of matches. In this study, we set these limits to lower values than previously reported \cite{Zhou2020} for computational efficiency. Specifically, the minimal/maximal match counts were 200/500 for singleton TERMs, and 400/500 for pair TERMs. Under these settings, dTERMen takes roughly 4 minutes per residue (single-core, 8GB RAM). The native sequence recovery rate of dTERMen was estimated on the basis of energy tables produced with these settings. As input into the neural-network models, only data from the top 50 TERM matches were used. 
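The singleton-TERM construction described above, a contiguous fragment from residue $(i-n)$ to $(i+n)$ with $n=1$ in this study, can be sketched as index windows over the chain. The clipping behavior at chain termini is our assumption; the paper does not specify it.

```python
def singleton_terms(length, n=1):
    """Residue-index windows for singleton TERMs: positions (i-n)..(i+n),
    clipped at the chain ends (assumed truncation behavior)."""
    return [list(range(max(0, i - n), min(length, i + n + 1)))
            for i in range(length)]
```

Each window would then be queried against the structure database to retrieve its closest matches.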
\begin{figure}[ht] \centering \fbox{\begin{minipage}{0.9\linewidth} \fontsize{9pt}{9pt}\selectfont \texttt{ * TERM k \\ list of position indices covered by the TERM (suppose N of them)\\ sequence of match 0; RMSD; N phi values; N psi values; N environment values\\ sequence of match 1; RMSD; N phi values; N psi values; N environment values } \begin{center} \texttt{...} \end{center} \end{minipage}} \caption{TERM matches information structure.} \label{fig:term_data} \end{figure} For each considered TERM, we output which positions of the structural template it covers along with information on each of its matches in the database of known structures, i.e., the match sequence, best-fit backbone RMSD from the query, backbone $\phi$ and $\psi$ values at each residue, and the ``environment'' of each residue---a scalar ranging from 0 to 1 that describes how solvent-exposed the residue is (the freedom metric defined in \citep{Zheng2017}); see Figure \ref{fig:term_data}. Additionally, for every residue in a TERM we compute a ``contact index'' that specifies the sequence distance from a central residue in the TERM. TERMs are constructed either around a single central residue (for a singleton TERM) or a pair of residues (for a pair TERM). We assign central residues an index of 0 and define the contact index for the remaining residues as the directional sequence distance to the closest intra-chain center residue. More specifically, non-central residues closer to the N-terminus than their corresponding central residue are assigned a negative integer contact index, while non-central residues closer to the C-terminus than their corresponding central residue are assigned a positive integer contact index. \subsection*{Matches Condenser} The Matches Condenser serves to compress the per-residue matches information across all TERM matches into one latent residue representation. 
The initial featurization of each residue per TERM match consists of: \begin{itemize} \item a one-hot encoding of the match residue's identity \item the residue's torsion angles lifted to the $3$-torus $\{\sin, \cos \} \times \{\phi, \psi, \omega\}$ \item the residue's environment value \item the TERM match's overall RMSD. \end{itemize} The Matches Condenser operates using an attention-based pooling mechanism, reminiscent of the CLS token used in BERT \citep{Devlin2018}. Along with every set of TERM matches, we generate and feed in a ``pool token'' per residue as an additional match. The intuition behind the pool token is that the Matches Condenser is able to update the pool token with the most important summary information from the TERM matches through multiple rounds of self-attention updates. The pool token is intended to capture the most important information about TERM matches, which we can then take as node embeddings for the TERM graph. For every residue in the TERM, we concatenate the residue's true torsion angles (lifted to the 3-torus) and its environment value, and feed this vector through a linear layer to create a pool token. We also create an associated set of ``target vectors'', which are derived from the same set of features as the pool tokens but are instead used in the attention mechanism itself. First, the base features per TERM match are fed through a two-layer dense network. Then, across all TERM matches as well as the pool token, we perform MatchAttention, four rounds of alternating multi-headed self-attention ($n=4$) and feedforward updates. For the attention computation, key and value vectors are computed by concatenating the target vector and the current match embedding and projecting using a linear layer, while query vectors are computed solely by projecting the current match embedding using a linear layer. 
In other words, for residue $n$ in TERM $t$, with the $i$th match embedding $h_{t,n,i}$ and target structure information embedding $\tau_{t,n}$, we compute query $q$, key $k$, and value $v$ as \begin{align*} q &= W_Q [h_{t,n,i}]\\ k &= W_K [h_{t,n,i}; \tau_{t,n}]\\ v &= W_V [h_{t,n,i}; \tau_{t,n}]\\ \end{align*} where $W_Q, W_K, W_V$ are linear layers and $[;]$ is the concatenation operator. \subsection*{Weighted Cross-Covariance Matrix Features} For each pair of residues, we compute the weighted cross-covariance matrix between the initial featurization of the TERMs across all matches. Let $r_i$ represent the RMSD of match $i$. Then, the weight of a match is computed as $$w_i = \frac{e^{-r_i}}{\sum_{j} e^{-r_j}}$$ The weighted cross-covariance matrix is then flattened into a vector and its dimensionality is reduced using a two-layer feedforward network with ReLU activations. This output is used as the edge feature between the two residues of concern in the TERM graph. \subsection*{TERM MPNN} We define the following notation: \begin{itemize} \item $h_{i,t}$: the embedding for residue $i$ in TERM $t$ \item $h_{i \rightarrow j, t}$: the embedding for directional edge $i \rightarrow j$ in TERM $t$ \item $f_n, f_e$: three-layer feedforward networks with ReLU activations \item $g_n, g_e$: two-layer feedforward networks with ReLU activations \item $\mathcal{N}_t$: the set of residues in TERM $t$ \item $s_{i,t}$: the sinusoidal embedding of the contact index of residue $i$ in TERM $t$ (see positional embedding in \citet{Vaswani2017}) \item $[;]$ represents the concatenation operation \end{itemize} The TERM MPNN utilizes alternating edge-update and node-update layers. 
The update for a directional edge $i \rightarrow j$ in TERM $t$ is computed as \begin{align*} h_{i \rightarrow j,t,\text{update}} = \frac{1}{2} \Big[ f_e([h_{i,t}; s_{i,t}; h_{i\rightarrow j, t}; h_{j,t}; s_{j,t}]) + f_e([h_{j,t}; s_{j,t}; h_{j\rightarrow i, t}; h_{i,t}; s_{i,t}]) \Big] \end{align*} This update is applied as follows: \begin{align*} h_{i\rightarrow j, t} &\leftarrow \text{LayerNorm}(h_{i \rightarrow j,t} + \text{Dropout}(h_{i \rightarrow j,t,\text{update}}))\\ h_{i\rightarrow j, t} &\leftarrow \text{LayerNorm}(h_{i \rightarrow j,t} + \text{Dropout}(g_e(h_{i \rightarrow j,t}))) \end{align*} The update for a node is computed as \begin{align*} h_{i,t,\text{update}} = \frac{1}{\left| \mathcal{N}_t \right|} \sum_{j \in \mathcal{N}_t} f_n([h_{i,t}; s_{i,t}; h_{i\rightarrow j, t}; h_{j,t}; s_{j,t}]) \end{align*} This update is applied as follows: \begin{align*} h_{i, t} &\leftarrow \text{LayerNorm}(h_{i,t} + \text{Dropout}(h_{i,t,\text{update}}))\\ h_{i, t} &\leftarrow \text{LayerNorm}(h_{i,t} + \text{Dropout}(g_n(h_{i,t}))) \end{align*} The TERM MPNN contains three layers, each consisting of an edge update followed by a node update. After these updates, all bidirectional edges are merged into undirected edges by taking the mean of the two edge embeddings. \subsection*{GNN Potts Model Encoder} The GNN Potts Model Encoder is another message-passing network, identical to the TERM MPNN in architecture but operating on different input features. The GNN Potts Model Encoder operates on a $k$-NN graph rather than a fully-connected graph, meaning node updates are computed over a residue's $k$ nearest neighbors. Additionally, because there is no notion of ``contact index'' for global structure, the update function does not take such features as inputs. Before running message passing, the GNN must first stitch together the TERM-based structure embeddings and the coordinate-based structure embeddings.
Node embeddings for the GNN are computed by concatenating the coordinate-based features from \citet{Ingraham2019} with the TERM-based features and feeding that vector through a linear layer to compress it back to the original dimensionality. Edge embeddings are formed analogously, by computing the coordinate-based edge embedding from \citet{Ingraham2019}, concatenating the corresponding TERM edge embedding, and feeding that vector through a linear layer to compress it back to the original dimensionality. In the case that a TERM edge embedding does not exist for a particular $k$-NN graph edge, a zero-vector of equal dimensionality is used instead. After message passing is completed, the edge embeddings are projected to a $400$-dimensional vector by a feedforward network and reshaped to form a matrix containing interaction energies between pairs of interacting residues. These interaction energy matrices give the pair energies of the Potts model. Due to the inclusion of self-edges in the $k$-NN graph, we can also compute self-energies for the Potts model by taking the diagonal of the self-interaction matrix produced by the self-edge for each residue. \begin{figure}[t] \centering \includegraphics[width=\linewidth]{img/arch_details.png} \caption{TERMinator Submodule Architectures.} \label{fig:arch_detas} \end{figure} \subsection*{Loss Function and Training} Our loss function is derived from a quantity known as ``composite pseudo-likelihood'': the probability that any pair of interacting residues has the same identity as that pair of residues in the target sequence, given the remainder of the target sequence. Composite pseudo-likelihood can be defined using the energies described by the Potts model as follows.
As stated in Appendix: Potts Model, a Potts model is defined by two functions: the self-energy function $E_s(R_i = m)$ evaluates the energy of residue $i$ with identity $m$, and the pair-energy function $E_p(R_i = m, R_j = n)$ evaluates the energy of residue $i$ with identity $m$ interacting with residue $j$ with identity $n$. From the Potts model, we compute the contextual pairwise energy $E_{cp}$ for a sequence with residue $i$ having identity $m$ and residue $j$ having identity $n$, given all other residues $R_u$ with identities $r_u$, as: \begin{align*} &E_{cp}(R_i=m, R_j=n, \{R_u = r_u\}) = \\ &\quad\quad E_s(R_i=m) + E_s(R_j=n)\\ &\quad\quad + E_p(R_i=m, R_j=n)\\ &\quad\quad + \sum_{u \neq i,j} \left( E_p(R_i=m, R_u=r_u) + E_p(R_u = r_u, R_j=n) \right) \end{align*} From this energy, we can compute the composite pseudo-likelihood $p(R_i = m, R_j = n \mid \{R_u = r_u\})$: $$ p(R_i = m, R_j = n \mid \{R_u = r_u\}) = \frac{\exp[-E_{cp}(R_i=m, R_j=n, \{R_u = r_u\})]}{\sum_{k,l} \exp[-E_{cp}(R_i=k, R_j=l, \{R_u = r_u\})]}$$ We train TERMinator to minimize the negative log composite pseudo-likelihood, averaged across all pairs of interacting residues. Training occurs over 100 epochs using the Noam learning rate scheduler \citep{Vaswani2017}. Training time ranges from 3--5 days, depending on the particular ablation of TERMinator (using 2 NVIDIA Tesla V100 GPUs). Inference time on the Ingraham test set was on average 97 ms, or 61 $\mu$s/residue (using 1 NVIDIA Tesla V100 GPU). \subsection*{Hyperparameters} TERMinator uses two different hidden dimensionalities: the TERM Information Condenser uses a hidden dimension of 32, while the GNN Potts Model Encoder uses a hidden dimension of 128. This is largely due to GPU memory constraints, as raw TERM data are much larger than coordinate data. Given more compute power, one direction to explore is how the model's performance is affected by varying the hidden dimensionality of both portions of the network.
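As a concrete illustration, the composite pseudo-likelihood above can be evaluated directly from tables of self- and pair-energies. Below is a minimal sketch with a toy alphabet and random energies; the actual model uses the 20 amino acids and learned Potts parameters, so all sizes and values here are illustrative only:

```python
import numpy as np

def contextual_pair_energy(E_s, E_p, seq, i, j, m, n):
    """E_cp for residues i, j taking identities m, n, with all
    other residues fixed to the target sequence `seq`."""
    e = E_s[i, m] + E_s[j, n] + E_p[i, j, m, n]
    for u in range(len(seq)):
        if u in (i, j):
            continue
        e += E_p[i, u, m, seq[u]] + E_p[u, j, seq[u], n]
    return e

def composite_pseudolikelihood(E_s, E_p, seq, i, j):
    """p(R_i, R_j | rest of target sequence): softmax of -E_cp over
    all identity pairs (k, l), evaluated at the target identities."""
    A = E_s.shape[1]  # alphabet size
    energies = np.array([[contextual_pair_energy(E_s, E_p, seq, i, j, k, l)
                          for l in range(A)] for k in range(A)])
    logits = -energies
    logits -= logits.max()  # numerical stability before exponentiating
    probs = np.exp(logits) / np.exp(logits).sum()
    return probs[seq[i], seq[j]]

rng = np.random.default_rng(0)
L, A = 6, 4                       # toy sizes: 6 residues, 4-letter alphabet
E_s = rng.normal(size=(L, A))     # random stand-in self-energies
E_p = rng.normal(size=(L, L, A, A))  # random stand-in pair-energies
seq = rng.integers(0, A, size=L)
p = composite_pseudolikelihood(E_s, E_p, seq, 1, 3)
loss = -np.log(p)                 # one per-pair term of the training loss
```

Summing the same quantity over all identity pairs at a fixed position pair returns 1, which is a useful sanity check when implementing the loss.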
\subsection*{Description of Ablated Models} The following list provides a brief description of the TERMinator ablation models: \begin{itemize} \item \textbf{TERM Information Condenser + GNN Potts Model Encoder:} All neural modules included. \item \textbf{Ablate TERM Information Condenser:} The TERM Information Condenser is reduced to a series of linear transformations. \begin{itemize} \item \textbf{Ablate TERM MPNN:} Singleton and pairwise TERM features are directly passed to the GNN Potts Model Encoder. \item \textbf{Ablate Singleton Features:} Singleton TERM features are set to 0. \item \textbf{Ablate Pairwise Features:} Pairwise TERM features are set to 0. \end{itemize} \item \textbf{Ablate GNN Potts Model Encoder:} Outputs of the TERM Information Condenser are embedded on a $k$-NN graph and then projected to form a Potts model. \begin{itemize} \item \textbf{Ablate Coordinate-based Features, Retain $k$-NN graph:} Coordinate-based features are set to 0. The $k$-NN graph is still retained. \end{itemize} \item \textbf{GNN Potts Model Encoder Alone (no TERM information):} Outputs of the TERM Information Condenser are set to 0. \end{itemize} \textit{Ablate GNN Potts Model Encoder vs. dTERMen} In the main text, we claim that the version of TERMinator with the GNN Potts Model Encoder ablated has access to essentially the same features as dTERMen. To make a fair comparison, it is important to acknowledge a few slight differences in the precise inputs. While this particular ablation of TERMinator does not have access to coordinates directly, it does have access to a $k$-NN graph, which dTERMen does not. However, when we ablate the GNN Potts Model Encoder, the model cannot perform graph operations over the $k$-NN graph; instead, the $k$-NN graph is only used to restrict pair interactions in the Potts model to those present in the graph.
Because TERMs are constructed from small sets of spatially proximal residues ($<7$ residues), it is almost always the case that all TERM edges are included in the $k$-NN graph (in this work, $k=30$), leading to negligible utilization of the $k$-NN graph itself. On the other hand, the version of dTERMen used here computes the Potts model from more matches than TERMinator, which was restricted to the top 50 matches. We also note that the published version of dTERMen uses near-backbone TERMs, which were not included in the dTERMen sequence recovery results reported here; this also means that TERMinator did not have access to these TERMs either. All things considered, it is reasonable to assume that this ablation of TERMinator and dTERMen effectively have access to the same types of information, with dTERMen performing worse despite having access to more matches. \newpage \bibliographystyle{plainnat}
2003.09851
\section{Introduction} A very large orbital angular momentum (OAM) can be generated in peripheral heavy-ion collisions in the direction perpendicular to the reaction plane. Such a large OAM can be converted into the spin polarization of hadrons along the direction of the OAM through spin-orbit couplings \cite{Liang:2004ph}; see, e.g., Ref. \cite{Wang:2017jpl} for a recent review. This effect is called the global polarization and differs from the polarization of a particle with respect to its production plane, which depends on the particle's momentum. Non-vanishing global polarization of $\Lambda$ and $\overline{\Lambda}$ hyperons has been measured by the STAR collaboration \cite{STAR:2017ckg,Adam:2018ivw}. Various theoretical models have been proposed to describe the global polarization effect \cite{Liang:2004ph,Liang:2004xn,Voloshin:2004ha,Betz:2007kg,Becattini:2007sr,Gao:2007bc,Huang:2011ru,Becattini:2013fla, Florkowski:2017ruc,Fang:2016vpj,Weickgenannt:2019dks}. Recently, we proposed a microscopic model for the global polarization from particle scatterings \cite{Zhang:2019xya}. The model does not rely on the assumption that the spin degree of freedom has reached local equilibrium. The spin-vorticity coupling naturally emerges from scatterings of particles at different space-time points, which incorporate polarized scattering amplitudes with the spin-orbit coupling \cite{Gao:2007bc}. \section{Polarization rate for spin-1/2 particles} In this note, we apply the result of Ref. \cite{Zhang:2019xya} to calculate the global polarization in heavy-ion collisions and compare it with the STAR data at $\sqrt{s_{NN}}=200$ GeV \cite{Adam:2018ivw}. We consider all 2-to-2 parton scattering processes $A+B\rightarrow 1+2$ with at least one quark or anti-quark in the final state. Here $A$ and $B$ represent incoming partons, and 1 and 2 represent outgoing partons, with parton 2 chosen to be the quark or anti-quark.
In this section, we will denote a quantity in the center of mass (CMS) frame with a subscript 'c'. There is no 'c' index for a quantity in the laboratory frame. The spin asymmetry rate for the quark (as parton 2) per unit volume at the space-time point $X$ is given by Eqs. (21,22) of Ref. \cite{Zhang:2019xya}, \begin{eqnarray} \frac{d^{4}\mathbf{P}_{AB\rightarrow 1q}(X)}{dX^{4}} & = & -\frac{1}{(2\pi)^{4}}\int\frac{d^{3}p_{A}}{(2\pi)^{3}2E_{A}}\frac{d^{3}p_{B}}{(2\pi)^{3}2E_{B}}\frac{d^{3}p_{c,1}}{(2\pi)^{3}2E_{c,1}}\frac{d^{3}p_{c,2}}{(2\pi)^{3}2E_{c,2}}\nonumber \\ & & \times|v_{c,A}-v_{c,B}|\int d^{3}k_{c,A}d^{3}k_{c,B}d^{3}k_{c,A}^{\prime}d^{3}k_{c,B}^{\prime}\nonumber \\ & & \times\phi_{A}(\mathbf{k}_{c,A}-\mathbf{p}_{c,A})\phi_{B}(\mathbf{k}_{c,B}-\mathbf{p}_{c,B})\phi_{A}^{*}(\mathbf{k}_{c,A}^{\prime}-\mathbf{p}_{c,A})\phi_{B}^{*}(\mathbf{k}_{c,B}^{\prime}-\mathbf{p}_{c,B})\nonumber \\ & & \times\delta^{(4)}(k_{c,A}^{\prime}+k_{c,B}^{\prime}-p_{c,1}-p_{c,2})\delta^{(4)}(k_{c,A}+k_{c,B}-p_{c,1}-p_{c,2})\nonumber \\ & & \times\frac{1}{2}\int d^{2}\mathbf{b}_{c}\exp\left[i(\mathbf{k}_{c,A}^{\prime}-\mathbf{k}_{c,A})\cdot\mathbf{b}_{c}\right]\mathbf{b}_{c,j}[\Lambda^{-1}]_{\;j}^{\nu}\frac{\partial(\beta u_{\rho})}{\partial X^{\nu}}\nonumber \\ & & \times\left( p_{A}^{\rho}-p_{B}^{\rho}\right) f_{A}\left(X,p_{A}\right)f_{B}\left(X,p_{B}\right)\nonumber \\ & & \times\sum_{s_{A},s_{B},s_{1},s_{2} }\sum_{\text{color}}2 s_{2} \mathbf{n}_{c} \mathcal{M}\left(\{s_{A},k_{c,A};s_{B},k_{c,B}\} \rightarrow\{s_{1},p_{c,1};s_{2},p_{c,2}\}\right)\nonumber \\ & & \times\mathcal{M}^{*}\left(\{s_{A},k_{c,A}^{\prime};s_{B},k_{c,B}^{\prime}\} \rightarrow\{s_{1},p_{c,1};s_{2},p_{c,2}\}\right). 
\label{eq: N2} \end{eqnarray} Here the term $\epsilon^{0j\rho\nu}\partial(\beta u_{\rho})/\partial X^{\nu}\,\mathbf{e}_{j}$ takes the 3D form $2\nabla_{X}\times(\beta\mathbf{u})$, with $\mathbf{e}_{j}$ ($j=x,y,z$) being the basis vectors in the laboratory frame, $\beta\equiv1/T(X)$ the inverse temperature, $u^{\rho}$ the fluid four-velocity, and $\mathbf{u}$ the spatial part of $u^{\rho}$. For two incoming partons located at $x_{A}=(t_{A},\mathbf{x}_{A})$ and $x_{B}=(t_{B},\mathbf{x}_{B})$, we define $X \equiv \frac{1}{2}(x_{A}+x_{B})$ and $y \equiv x_{A}-x_{B}$, and $\mathbf{b}_{c}$ is the transverse part of $y_c=(0,\mathbf{b}_{c})$. The vector $\mathbf{n}_c=\hat{\mathbf{b}}_{c}\times\hat{\mathbf{p}}_{c,A}$ is the normal direction of the reaction plane of the scattering, and $s_2=\pm 1/2$ denotes the spin state of the quark or anti-quark along $\mathbf{n}_c$. The longitudinal velocities in the CMS are $v_{c,A}=|\mathbf{p}_{c,A}|/E_{c,A}$ and $v_{c,B}=-|\mathbf{p}_{c,B}|/E_{c,B}$ (with $\mathbf{p}_{c,A}=-\mathbf{p}_{c,B}$), and $f_{A}$ and $f_{B}$ are the phase-space Boltzmann distributions of the incident particles $A$ and $B$, respectively. The two Gaussian wave packets for the incoming particles are given by \begin{equation} \phi_{i}(\mathbf{k}_{i}-\mathbf{p}_{i})=\frac{(8\pi)^{3/4}}{\alpha_{i}^{3/2}}\exp\left[-\frac{(\mathbf{k}_{i}-\mathbf{p}_{i})^{2}}{\alpha_{i}^{2}}\right],\label{eq:wave-packet-gs} \end{equation} where $\alpha_{i}$ ($i=A,B$) denotes the width of the wave packet of particle $i$. The definitions of other quantities can be found around Eqs. (18,21,22) of Ref. \cite{Zhang:2019xya}. We see in Eq. (\ref{eq: N2}) that $\mathbf{P}$ is actually the difference between the number of quarks (parton 2) with spin up and that with spin down at $X$.
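With the prefactor $(8\pi)^{3/4}/\alpha^{3/2}$, the Gaussian wave packet above is normalized such that $\int d^{3}k\,|\phi(\mathbf{k}-\mathbf{p})|^{2}=(2\pi)^{3}$, independently of the width. A quick numerical sketch of this check (the value $\alpha=0.28$ GeV is the one adopted later in the text; the grid is an illustrative choice):

```python
import numpy as np

alpha = 0.28  # wave-packet width in GeV

# |phi(k - p)|^2 = (8*pi)^{3/2} / alpha^3 * exp(-2 (k - p)^2 / alpha^2)
# factorizes into three identical 1D Gaussians, so the 3D integral is
# the cube of a 1D integral, evaluated here on a uniform grid.
k = np.linspace(-8 * alpha, 8 * alpha, 4001)
dk = k[1] - k[0]
one_d = np.exp(-2.0 * k**2 / alpha**2).sum() * dk
total = (8.0 * np.pi) ** 1.5 / alpha**3 * one_d**3

print(total / (2.0 * np.pi) ** 3)  # ratio to (2*pi)^3, close to 1
```

The analytic value of the 1D integral is $\alpha\sqrt{\pi/2}$, which indeed gives $(8\pi)^{3/2}(\pi/2)^{3/2}=(2\pi)^{3}$ exactly.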
The polarization rate at $X$ is given by \begin{equation} \frac{d\overline{\mathbf{P}}}{dt} = \frac{1}{n_q(X)}\sum_{A,B,1=\{q_{a},\bar{q}_{a},g\}}\frac{d^{4}\mathbf{P}_{AB\rightarrow1q}(X)}{dX^{4}} =\frac{1}{n_q(X)} \frac{\partial(\beta u_{\rho})}{\partial X^{\nu}}\mathbf{W}^{\rho\nu} , \label{eq:polar-2} \end{equation} where $n_q(X)$ denotes the quark number density at $X$, given by \begin{equation} n_q(X) = 6 \int \frac{d^3\mathbf{p}}{(2\pi )^3} \exp \left(-\beta \sqrt{m_q^2+\mathbf{p}^2}\right) = \frac{3}{\pi ^2} m^2_q T K_2\left(\frac{m_q}{T} \right), \end{equation} where we have neglected the quark chemical potential and $K_2(z)$ is the modified Bessel function of the second kind. We note that using the thermal distribution for the particle number (not for the spin) is just to estimate the polarization magnitude and compare with data. This does not contradict the fact that our formalism does not assume thermal equilibrium for the spin degrees of freedom. Our numerical results show that the tensor $\mathbf{W}^{\rho\nu}$ has the form \begin{equation} \mathbf{W}^{\rho\nu}=W\epsilon^{0\rho\nu j}\mathbf{e}_{j} , \end{equation} i.e., $\rho$ and $\nu$ must be spatial indices, and $\mathbf{W}^{0\nu}=\mathbf{W}^{\rho0}=\mathbf{0}$. Then Eq. (\ref{eq:polar-2}) simplifies to \begin{equation} \frac{d\overline{\mathbf{P}}}{dt} = 2\overline{W}\nabla_{X}\times(\beta\mathbf{u}), \end{equation} where $\overline{W}\equiv W/n_{\mathrm{q}}(X)$. To illustrate the mechanism of the polarization effect in a fluid moving along the $z$ direction, we propose a toy model in which particles follow the Boltzmann distribution \begin{equation} f = e^{-\beta u\cdot p}=\exp\left[-\beta\gamma\left(\sqrt{m^{2}+\mathbf{p}^{2}}-axp_{z}\right)\right],\label{eq:dis} \end{equation} where $u(\mathbf{x})=\gamma(1,0,0,ax)$ denotes the fluid four-velocity with $a$ being a small positive number, and $\gamma=1/\sqrt{1-a^{2}x^{2}}$ is the Lorentz factor.
Note that the velocity $u_{z}=\gamma ax$ depends on $x$. We set the particle mass $m=0.2\;\text{GeV}$, $a=0.2\;\text{fm}^{-1}$, and the phase space volume to a box of $\left[-5\;\text{fm},5\;\text{fm}\right]^{3}\times\left[-2\;\text{GeV},2\;\text{GeV}\right]^{3}$. For a fluid velocity along the $z$ direction with an $x$-gradient, the vorticity points in the $-y$ direction. We sample three-momenta of particles at each space-time point. We randomly choose a pair of particles and transform to their CMS frame according to their momenta (we use the index 'c' to indicate quantities in the CMS frame of the two particles). We limit the time difference and longitudinal distance between the two particles to be small, i.e. $|\Delta t_{c}|<\Delta t_{\mathrm{cut}}$ and $|\Delta z_{c}|<\Delta z_{\mathrm{cut}}$, where $\Delta t_{\mathrm{cut}}\sim\Delta z_{\mathrm{cut}}\sim 0$. We also require that their distance in the transverse direction be smaller than a cutoff, i.e. $|\Delta\mathbf{x}_{c,T}|<b_{0}$. We can then determine the direction of the orbital angular momentum of the pair in the CMS frame, which we denote as $\mathbf{n}_{c}=(n_{c,x},n_{c,y},n_{c,z})$. Figure \ref{fig:distributions-nxnynz} shows the distribution of the $y$ component of $\mathbf{n}_{c}$. Note that only the $n_{c,y}$ distribution is asymmetric with respect to negative and positive $n_{c,y}$, while both the $n_{c,x}$ and $n_{c,z}$ distributions are symmetric. This gives rise to a negative $\left\langle n_{c,y}\right\rangle $, which indicates that $\mathbf{n}_{c}$ favors the $-y$ direction, i.e. the vorticity direction. \begin{figure} \begin{centering} \includegraphics[scale=0.35]{pz_x} \includegraphics[scale=0.35]{n_y} \caption{Left panel: Distribution of $p_z$ as a function of $x$. Right panel: Distribution of $n_{c,y}$, the $y$ component of the orbital angular momentum direction.
\label{fig:distributions-nxnynz}} \end{centering} \end{figure} \section{Numerical results for polarization} In order to evaluate $\overline{W}$ numerically, we need to set the values of the parameters: the quark mass $m_{q}=0.2$ GeV for quarks of all flavors ($u,d,s,\bar{u},\bar{d},\bar{s}$), the gluon mass $m_{g}=0$ for external gluons, the internal gluon mass (Debye screening mass) $m_{g}=m_{D}=0.2$ GeV in gluon propagators in the $t$ and $u$ channels to regulate possible divergences, the width of the Gaussian wave packet $\alpha=0.28$ GeV, and the temperature $T=0.3$ GeV. The values of these parameters are all generic ones for a parton system in high energy heavy-ion collisions \cite{Zhang:2019uor}. Figure \ref{fig:-as-functions} shows $\overline{W}$ as a function of the cutoff $b_{0}$ for the impact parameter, which corresponds to the coherence length of the colliding partons. \begin{figure} \begin{centering} \includegraphics[scale=0.3]{Wp} \caption{$\overline{W}$ as a function of the cutoff $b_{0}$. \label{fig:-as-functions}} \end{centering} \end{figure} The STAR collaboration has measured the global polarization of $\Lambda$ and $\overline{\Lambda}$ in heavy-ion collisions. We note that the polarization of $\Lambda$ and $\overline{\Lambda}$ comes from $s$ and $\overline{s}$ quarks, respectively \cite{Yang:2017sdk}. We can estimate the polarization of quarks or anti-quarks (here $s$ or $\overline{s}$) from $\overline{W}$ and compare with the STAR data at $\sqrt{s_{NN}}=200$ GeV. Assuming an average vorticity $(1/2)\langle \nabla_{X}\times(\beta\mathbf{u})\rangle _y \sim 0.04$, a polarization time interval of about 5 fm, and $b_0=2.5$ fm, we obtain $\overline{\mathbf{P}}_y \sim 0.3\% $, which agrees with the STAR data \cite{Adam:2018ivw}: $(0.277\pm 0.040 \pm 0.039)\,\%$ for $\Lambda$ and $(0.240\pm 0.045\pm 0.061)\,\%$ for $\overline{\Lambda}$.
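The pair-OAM mechanism behind the toy model of Eq. (\ref{eq:dis}) can be caricatured with a short Monte Carlo: sample particle pairs from a shear flow $u_z = ax$, form the pair orbital-angular-momentum direction $\mathbf{n} = \hat{\mathbf{b}}\times\hat{\mathbf{p}}$, and check the sign of its $y$ component. The non-relativistic Gaussian momentum sampling, the temperature $T=0.2$ GeV, and the omission of the pairing cutoffs are illustrative simplifications of this sketch, not the calculation of the text:

```python
import numpy as np

rng = np.random.default_rng(42)

m, a, T = 0.2, 0.2, 0.2   # mass [GeV], shear a [fm^-1], temperature [GeV]
N = 50_000                # number of sampled pairs

def sample(n):
    """Positions uniform in the box; momenta thermal (non-relativistic
    Gaussian) plus a z-drift m*a*x mimicking the flow u_z = a*x."""
    x = rng.uniform(-5.0, 5.0, size=(n, 3))            # fm
    p = rng.normal(scale=np.sqrt(m * T), size=(n, 3))  # GeV
    p[:, 2] += m * a * x[:, 0]                         # local flow drift
    return x, p

xA, pA = sample(N)
xB, pB = sample(N)

p_rel = 0.5 * (pA - pB)   # momentum of particle A in the pair CMS
dr = xA - xB
phat = p_rel / np.linalg.norm(p_rel, axis=1, keepdims=True)
# impact parameter: part of the separation transverse to the relative momentum
b = dr - np.sum(dr * phat, axis=1, keepdims=True) * phat
bhat = b / np.linalg.norm(b, axis=1, keepdims=True)
n_c = np.cross(bhat, phat)  # orbital-angular-momentum direction of the pair

print(n_c[:, 1].mean())  # negative: n_c favors -y, the vorticity direction
```

Because a particle at larger $x$ carries a larger mean $p_z$, the separation and relative momentum are correlated, and the mean of $n_{c,y}$ comes out negative while the $x$ and $z$ components average to zero, mirroring Fig. \ref{fig:distributions-nxnynz}.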
\textbf{Acknowledgement.} QW and RHF are supported in part by the National Natural Science Foundation of China (NSFC) under Grants No. 11535012, 11890713, and 11847220. \bibliographystyle{elsarticle-num}
1906.08877
\section{Introduction} Over the past decades, several correlations between the mass of supermassive black holes (SMBHs) and the properties of their host spheroids (spheroidal mass, luminosity, stellar velocity dispersion) have pointed towards a co-evolution between SMBHs and their host galaxies \citep[see][and references therein]{1998AJ....115.2285M,2000ApJ...539L...9F,2009ApJ...698..198G,2013ARA&A..51..511K,2016arXiv161107872B}. Several observational studies also point towards a relationship between black hole activity and the star-formation rate in the hosts of active galactic nuclei \citep[AGN, see e.g.][]{2006NewAR..50..677H}. In addition, the observed bimodality in the colours of local galaxies \citep{2001AJ....122.1861S,2003MNRAS.346.1055K,2004ApJ...600..681B,2004ApJ...615L.101B} points towards a correlation between the star formation activity and the morphological type of a galaxy. The more massive elliptical and lenticular galaxies are preferentially located in the massive red cloud, while less massive spiral and irregular galaxies are located in the blue cloud. The intermediate region, the so-called ``green valley'', is far less occupied \citep{2007ApJ...665..265F}, and the ``green'' sources belong to mixed types such as red spirals. This implies that AGN are involved in the process of star formation quenching, which is commonly referred to as AGN feedback in galaxy evolution theories and simulations. A useful tool for distinguishing galaxies with different prevailing photoionization sources is the Baldwin, Phillips, and Terlevich diagram \citep[BPT,][]{1981PASP...93....5B}, in which the source location is determined by a pair of low-ionization, emission-line intensity ratios. A commonly used pair is [\ion{N}{ii}]/H$\alpha$ and [\ion{O}{iii}]/H$\beta$ (see Fig.~\ref{fig_redshift_flux_surplot_parent}, bottom panel), but ratios of other line intensities at similar wavelengths can be used as well.
BPT diagrams include forbidden lines, which for AGN are associated with the narrow-line region (NLR) and hence with type 2 AGN. The emission-line ratios depend on the strength and shape of the ionizing radiation field, as well as on the physical properties of the line-emitting gas, including gas density, metal abundances, dust, and cloud thickness \citep[e.g.][]{1987ApJS...63..295V,1997A&A...323...31K,1997A&A...327..909B,2004ApJS..153....9G,2004ApJS..153...75G,2016MNRAS.458..988R}. Systematic trends and correlations in different sections of the diagnostic diagrams have been traced back to systematic changes in the ionization parameter, the shape of the ionizing continuum, the fraction of matter-bound clouds, and/or the role of metal abundances. Based on a given set of ratios, four spectral classes of galaxies are commonly distinguished. In star-forming galaxies (SF), the ionizing flux is provided mostly by hot, massive, young stars and associated supernovae that are surrounded by HII regions. They have lower [OIII]/H$\beta$ and [NII]/H$\alpha$ ratios than pure AGN sources (see also Figs.~\ref{fig_redshift_flux_surplot_parent} and \ref{fig_effelsberg_sample}). In between SF and AGN sources are composite (COMP) galaxies, with a mixed contribution from star formation (HII regions) and AGN. The AGN spectral class is further subdivided into Seyfert 2 sources (Sy) and low-ionization nuclear emission regions (LINERs). LINERs are characterized by a lower [OIII]/H$\beta$ ratio in comparison with Seyfert 2 AGN sources and a higher [NII]/H$\alpha$ ratio with respect to star-forming sources (both SF and COMP).
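In practice, the boundaries between these classes in the [NII]-based diagram are conventionally drawn with the empirical star-forming curve of Kauffmann et al. (2003) and the theoretical maximum-starburst line of Kewley et al. (2001). A classification sketch using these standard demarcations (the specific curves and thresholds are conventional choices, not prescriptions from this paper):

```python
def kauffmann(x):
    """Empirical SF/composite boundary (Kauffmann et al. 2003),
    defined for log([NII]/Ha) < 0.05."""
    return 0.61 / (x - 0.05) + 1.3

def kewley(x):
    """Theoretical maximum-starburst line (Kewley et al. 2001),
    defined for log([NII]/Ha) < 0.47."""
    return 0.61 / (x - 0.47) + 1.19

def bpt_class(log_nii_ha, log_oiii_hb):
    """Classify a source in the [NII]-based BPT diagram."""
    x, y = log_nii_ha, log_oiii_hb
    if x < 0.05 and y < kauffmann(x):
        return "SF"
    if x < 0.47 and y < kewley(x):
        return "COMP"
    return "AGN"  # splitting Seyfert vs. LINER needs further line ratios

print(bpt_class(-0.5, -0.3))  # SF
print(bpt_class(-0.2,  0.0))  # COMP
print(bpt_class( 0.0,  1.0))  # AGN
```

Separating the AGN branch into Seyferts and LINERs requires additional diagnostics (e.g. [SII]- or [OI]-based diagrams), as noted above.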
Systematic changes of line ratios across different regimes, from the HII regime across the composite region and into the AGN regime, are seen in spatially resolved spectroscopy of nearby Seyfert galaxies \citep[e.g.][]{2006A&A...459...55B,2006A&A...456..953B,2011AJ....142...43S}, while \citet{1994A&A...291..713S} speculated about an ionization sequence from the AGN into the LINER regime, based on systematic continuum-dilution processes. \begin{figure}[tbh] \centering \includegraphics[width=0.5\textwidth]{redshift_flux.jpg}\\ \includegraphics[width=0.5\textwidth]{surplot_parent_sample.jpg} \caption{Distribution of the low-flux and high-flux samples in the redshift-flux plane and in the BPT diagram. \textbf{Top panel:} Two samples of radio galaxies in the redshift-flux density plane: a high-flux sample of 119 sources previously analysed by \citet{2015A&A...573A..93V} and a sample extended towards lower radio flux densities at $1.4\,{\rm GHz}$, $10\,{\rm mJy}\leq F_{1.4} \leq 100\,{\rm mJy}$ (black points). The grey points represent a subset of the parent sample in the redshift range of $0.04\leq z \leq 0.4$. \textbf{Bottom panel:} Selected radio galaxies for the Effelsberg observations in the [NII]-based BPT diagnostic diagram (black points). The sample of radio galaxies with $F_{1.4}\geq 100\,{\rm mJy}$ analysed by \citet{2015A&A...573A..93V} is denoted by green crosses. The parent sample is colour coded by the density per bin in the [OIII]-[NII] plane (bin size is set to $0.1$ along both ratios).} \label{fig_redshift_flux_surplot_parent} \end{figure} Radio galaxies are generally considered to be active galaxies that are luminous in the radio bands. \citet{1980A&A....88L..12S} argued that the radio loudness of quasars, typically defined as the ratio of the radio to the optical flux densities, shows a certain degree of bimodality.
\citet{1989AJ.....98.1195K} confirmed the bimodality in the radio loudness, in the sense that radio-quiet quasars (those with radio emission comparable to their optical emission) are five to ten times more frequent than radio-loud quasars. A similar result was also obtained by \citet{2016ApJ...831..168K}. Recent studies based on deep radio surveys, namely FIRST (Faint Images of the Radio Sky at Twenty centimeters) and NVSS (the NRAO Very Large Array Sky Survey) \citep{1995ApJ...450..559B,1998AJ....115.1693C}, in combination with the sensitive optical surveys SDSS (Sloan Digital Sky Survey) and 2dF (Two Degree Field Survey) \citep{2000AJ....120.1579Y,2001MNRAS.322L..29C}, demonstrate a large scatter in the radio loudness. They are, however, generally inconclusive about the bimodal character of the distribution \citep{2000ApJS..126..133W,2002AJ....124.2364I,2003MNRAS.346..447C,2003MNRAS.341..993C,2003ApJ...590...86L}. For a sample of galaxies, it was possible to obtain estimates of black hole masses and hence study the dependence of the radio loudness, $R=F_{\nu_{\rm R}}/F_{\nu_{\rm O}}$, on the Eddington ratio, $\eta \equiv L_{\rm Bol}/L_{\rm Edd}$, where $F_{\nu_{\rm R}}$ and $F_{\nu_{\rm O}}$ are monochromatic radio and optical flux densities at frequencies $\nu_{\rm R}$ and $\nu_{\rm O}$, respectively, and $L_{\rm Bol}$ and $L_{\rm Edd}$ are the bolometric luminosity and the Eddington limit of the AGN. The general trend found is that the radio loudness increases with decreasing Eddington ratio, that is, there seems to be an anti-correlation between $R$ and $\eta$ with a large scatter \citep{2002ApJ...564..120H,2007ApJ...658..815S}.
This trend is consistent with the hardness-luminosity diagram for X-ray binaries \citep{2004MNRAS.355.1105F,2017A&A...603A.127S} and leads to the conclusion that accretion at lower Eddington rates results in smaller cooling rates, and hence in hot, geometrically thick, optically thin, radiatively inefficient accretion flows \citep[RIAFs,][]{1977ApJ...214..840I,1982Natur.295...17R}. RIAFs are often modelled as advection-dominated flows \citep[ADAFs,][]{1995ApJ...444..231N}, which can launch jets more effectively \citep{1982Natur.295...17R} than the colder, optically thick, geometrically thin accretion discs that are associated with larger Eddington ratios due to efficient cooling \citep[e.g.][]{1973A&A....24..337S}. This yields the trend of larger radio loudness for sources with lower Eddington ratios. However, since $R\propto 1/L_{\rm O}$, where $L_{\rm O}$ is the optical luminosity, and $\eta\propto L_{\rm O}$, an anti-correlation is generally expected, and one should be careful when interpreting the results. Moreover, optical emission as a tracer of the accretion rate only makes sense for type I AGN, since for type II sources the optical emission is affected by obscuration and continuum dilution \citep{1994A&A...291..713S}. Therefore, adding more parameters to the study of trends, such as spectral slopes in the radio domain, may shed more light on the radio-optical properties of galaxies and the physical processes involved, namely the formation, acceleration, and collimation of jets \citep{2007ApJ...658..815S} or the effect of mergers on the radio loudness of the AGN \citep{2015ApJ...806..147C}.
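The caveat that $R\propto 1/L_{\rm O}$ while $\eta\propto L_{\rm O}$ builds in an anti-correlation can be demonstrated with synthetic data: even when the radio and optical fluxes are statistically independent, $\log R$ and $\log\eta$ anti-correlate. A sketch (all distributions and numbers below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 10_000

# independent synthetic monochromatic flux densities (arbitrary units)
log_F_opt = rng.normal(0.0, 0.5, N)  # optical, proxy for L_Bol and hence eta
log_F_rad = rng.normal(0.0, 0.5, N)  # radio, independent of the optical

log_R = log_F_rad - log_F_opt        # log radio loudness, R = F_R / F_O
log_eta = log_F_opt                  # log Eddington ratio, up to constants

r = np.corrcoef(log_R, log_eta)[0, 1]
print(r)  # close to -1/sqrt(2): an anti-correlation despite independence
```

For equal log-variances the induced correlation coefficient is $-1/\sqrt{2}\approx-0.71$, so any physical $R$--$\eta$ anti-correlation must be established against this built-in baseline.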
At frequencies $\lesssim 10\,{\rm GHz}$, the radio-continuum spectra are dominated by non-thermal synchrotron emission with the characteristic power-law slope, $S_{\nu} \propto \nu^{\alpha}$, while thermal bremsstrahlung (free-free) emission is negligible \citep{1988AJ.....96...81D}, contributing less than 10\% at 10 GHz \citep{1982A&A...116..164G}, and becomes more prominent towards mm-wavelengths. Concerning the synchrotron spectral slope of radio galaxies with jets, primary components (cores) are generally self-absorbed with positive slopes of $\alpha\sim 0.4$, while secondary components have negative spectral indices with mean values of $\alpha \sim -0.7$ \citep{1986A&A...168...17E}, consistent with optically thin synchrotron emission. The integrated radio spectrum of radio jets is typically associated with a flat spectral slope due to the superposition of self-absorbed synchrotron spectra. The role of radio galaxies in galaxy evolution at low to intermediate redshifts $(z<0.7)$ is unclear \citep{2016A&ARv..24...10T}. In general, the star formation activity in their hosts is expected to be one order of magnitude smaller than at the peak of quasar and star-formation activity at $z\approx 1.9$ \citep{2014ARA&A..52..415M}. However, the AGN in these radio galaxies are expected to go through phases of intermittent accretion activity \citep{2009ApJ...698..840C}, as well as mergers that influence the overall host properties, and are expected to prevent the host gas reservoirs from cooling and forming stars. The contribution of different sources to the overall radio luminosity of galaxies remains largely unclear. The value of the radio spectral index $\alpha$ helps to distinguish between the prevalence of optically thin and optically thick emission mechanisms. \citet{2019MNRAS.482.5513L} made use of high-resolution Very Large Array (VLA) observations at 5 and 8.4 GHz of optically selected radio-quiet (RQ) Palomar-Green (PG) quasars.
They determined the corresponding spectral slopes $\alpha_{5/8.4}$ for 25 RQ PG sources and found a significant correlation between the slope value and the Eddington ratio. Specifically, high Eddington-ratio quasars ($L/L_{\rm Edd}>0.3$) have steep spectral slopes, $\alpha_{5/8.4}<-0.5$, while lower Eddington-ratio sources ($L/L_{\rm Edd}<0.3$) have flat to inverted slopes, $\alpha_{5/8.4}>-0.5$. A correlation is also found with the Eigenvector I (EV1) set of properties \citep[\ion{Fe}{ii}/H$\beta$, H$\beta_{\rm asym}$, X-ray slope $\alpha_{\rm X}$; see ][]{1992ApJS...80..109B,2000ApJ...536L...5S,2001ApJ...558..553M}: the flat to inverted RQ PG sources have low \ion{Fe}{ii}/H$\beta$ and a flat soft X-ray slope. \citet{2019MNRAS.482.5513L} found a dichotomy between radio-quiet PG quasars and 16 radio-loud (RL) PG quasars, which, in contrast with the RQ sources, do not exhibit the correlations with EV1; their radio slope is instead determined by the black hole mass, which implies a different radiation mechanism for RL sources. These findings provide a motivation for investigating further correlations between radio slopes and optical emission-line properties for a larger sample of radio galaxies. Previously, we performed radio continuum observations of intermediate-redshift $(0.04 \leq z \leq 0.4)$ SDSS-FIRST sources at $4.85\,{\rm GHz}$ and $10.45\,{\rm GHz}$ to determine their spectral index and curvature distributions \citep{2015A&A...573A..93V}. This sample included star-forming, composite, Seyfert, and LINER galaxies that obeyed a flux density cut of $\geq 100\,{\rm mJy}$ at $1.4\,{\rm GHz}$ (see Fig.~\ref{fig_redshift_flux_surplot_parent}, green crosses). \citet{2015A&A...573A..93V} searched for radio spectral index trends in the BPT diagnostic diagrams as well as for relations between the optical and radio properties of the sources.
For the limited sample of 119 sources, they found a rather weak trend of spectral index flattening in the [NII]-based diagnostic diagram along the star-forming--composite--AGN Seyfert branch. This flattening trend motivated the study of additional sources with lower radio continuum flux densities at $1.4\,{\rm GHz}$. The sample was extended by 381 additional sources towards lower radio flux densities at $1.4\,{\rm GHz}$, with integrated flux densities $10\,{\rm mJy} \leq F_{\rm 1.4}\leq 100\,{\rm mJy}$. Using the cross-scan observations conducted with the 100-m Effelsberg radio telescope, for point-like sources we determined flux densities at two frequencies, $4.85\,{\rm GHz}$ and $10.45\,{\rm GHz}$, which enabled us to determine the spectral indices $\alpha_{[1.4-4.85]}$ (for 298 sources) and $\alpha_{[4.85-10.45]}$ (for 90 sources). In this paper, we present the findings based on the radio-optical properties of the low-flux sample along with the radio-brighter sources previously reported in \citet{2015A&A...573A..93V}. We searched for any trends of the radio spectral slope in the low-ionization diagnostic diagrams. In other words, we were interested in whether the changes in emission-line ratios across the BPT diagram are systematically reflected in the radio spectral index. Furthermore, the relation between the spectral index and the radio loudness of the sources had not been explored. The paper is structured as follows. In Sect.~\ref{radio_optical_samples} we introduce the optical (SDSS survey) and radio (FIRST) samples used in our study. Subsequently, in Sect.~\ref{effelsberg_sample_observation} we present the selection of radio sources for follow-up observations with the Effelsberg 100-m telescope at two higher frequencies. The basic statistical properties of spectral index distributions are presented in Sect.~\ref{analysis_results} in combination with the optical properties of the sources.
Subsequently, we describe the fundamental trends of the radio spectral slope in optical diagnostic diagrams as well as with respect to the radio loudness in Sect.~\ref{sec_trends}. We continue with the interpretation of the results in Sect.~\ref{interpretation}. Finally, we summarize the main results in Sect.~\ref{conclusions}. Additional materials, specifically radio flux densities for new low-flux sources, are included in Appendix~\ref{appa}. \section{Radio and optical samples} \label{radio_optical_samples} \subsection{Sloan Digital Sky Survey} The Sloan Digital Sky Survey (SDSS) is a photometric and spectroscopic survey of celestial sources that covers one quarter of the north Galactic hemisphere \citep{2000AJ....120.1579Y,2002AJ....123..485S}. The spectra and magnitudes have been obtained by a $2.5$-m wide field-of-view (FOV) telescope at Apache Point in New Mexico, USA. The spectra have an instrumental resolution of $\sim 65\,{\rm km\,s^{-1}}$ in the wavelength range of $380$--$920\,{\rm nm}$. The identified galaxies have a median redshift of $0.1$. The spectra were obtained by fibers with 3'' diameter (the linear scale of $5.7\,{\rm kpc}$ at $z=0.1$), which makes the sample sensitive to aperture effects, that is, low-redshift galaxies are dominated by nuclear emission \citep[see e.g.][]{2015A&A...580A.113T}. The seventh data release of SDSS \citep[SDSS DR7, ][]{2009ApJS..182..543A} contains parameters of $\sim 10^6$ galaxies inferred from the spectral properties based on the Max Planck Institute for Astrophysics (MPIA) and Johns Hopkins University (JHU) emission-line analysis. The stellar synthesis continuum spectra \citep{2003MNRAS.344.1000B} were applied for the continuum subtraction, after which emission line characteristics were derived. 
In particular, SDSS DR7 contains the emission-line characteristics of low-ionization lines that are used to distinguish star-forming galaxies from AGN (Seyfert galaxies and LINERs or high-excitation and low-excitation systems, respectively) in the diagnostic diagrams. DR7 also contains the source images as well as stellar masses inferred from the broad-band fitting of spectral energy distributions by stellar population models. \subsection{Faint Images of the Radio Sky at Twenty-centimeters Survey} The Faint Images of the Radio Sky at Twenty-centimeters Survey \citep[FIRST;][]{1995ApJ...450..559B} was performed with the Very Large Array (VLA) in its B-configuration at $1.4\,{\rm GHz}$. The FIRST survey covers $\sim 10\,000\,{\rm deg^2}$ in the north Galactic cap, partially overlapping the region mapped by SDSS. The sky brightness was measured with a beam size of $5.4''$ and an rms sensitivity of $\sim 0.15\,{\rm mJy/beam}$. At the sensitivity level of $\sim 1\,{\rm mJy}$, the FIRST survey contains $\sim 10^6$ sources, of which about a third are resolved with structures on the angular scale of $2''$--$30''$ \citep{2002AJ....124.2364I}. The survey contains both the peak and the integrated flux densities, which allows us to distinguish resolved and unresolved sources. The flux density measurements have uncertainties smaller than $8\%$. The images for each source are provided on the survey website. \subsection{SDSS-FIRST cross-identification} As described in \citet{2015A&A...573A..93V}, we performed a cross-identification of the SDSS DR7 and FIRST source catalogues using SDSS DR7 CasJobs, with a matching radius set to 1'' \citep{2005cs........2072O}. This results in a total of $37\,488$ radio-optical emitters as a basis for further studies. This initial sample constitutes $\sim 4\%$ of SDSS sources and contains mostly active, metal-rich galaxies \citep[see also][for details]{2012A&A...546A..17V}.
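The 1'' cross-identification described above can be illustrated with a short, self-contained sketch in pure Python. The brute-force nearest-neighbour search and the function names are ours for illustration only; they are not part of the CasJobs pipeline, which performs the match server-side:

```python
import math

def ang_sep_arcsec(ra1, dec1, ra2, dec2):
    """Angular separation (arcsec) between two sky positions given in degrees."""
    r1, d1, r2, d2 = map(math.radians, (ra1, dec1, ra2, dec2))
    # Spherical law of cosines; adequate at the arcsecond scales relevant here.
    cossep = (math.sin(d1) * math.sin(d2)
              + math.cos(d1) * math.cos(d2) * math.cos(r1 - r2))
    return math.degrees(math.acos(min(1.0, cossep))) * 3600.0

def cross_match(optical, radio, radius_arcsec=1.0):
    """Return (i_optical, i_radio) index pairs separated by less than the radius.

    Both inputs are lists of (ra_deg, dec_deg) tuples.
    """
    matches = []
    for i, (ra_o, dec_o) in enumerate(optical):
        best = min(range(len(radio)),
                   key=lambda j: ang_sep_arcsec(ra_o, dec_o, *radio[j]))
        if ang_sep_arcsec(ra_o, dec_o, *radio[best]) <= radius_arcsec:
            matches.append((i, best))
    return matches
```

For catalogues of the size used here ($\sim 10^6$ sources), a spatially indexed match (as done by CasJobs) replaces the brute-force inner loop.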
As was already done in \citet{2015A&A...573A..93V} for the high-flux sample, we apply the following selection criteria: \begin{itemize} \item the redshift limits, $0.04 \leq z \leq 0.4$, \item the signal-to-noise lower limit of $S/N>3$ on the equivalent width $EW$ of the emission lines used in the low-ionization optical diagnostic diagrams. \end{itemize} The lower redshift limit of $0.04$ is imposed because nearby sources have angular sizes larger than the optical fiber used for the SDSS survey, and hence their spectra are dominated by nuclear emission \citep[see e.g.][]{2003AAS...20311901K}. The upper redshift limit ensures that the emission-line diagnostics concerning the [\ion{N}{ii}] and H$\alpha$ lines are reliable, meaning that these lines fall into the observable spectral window. By imposing the redshift constraints, we obtain our parent sample of 9951 sources, which are shown as grey points in the redshift-flux plot (see Fig.~\ref{fig_redshift_flux_surplot_parent}). There is no evident dependency of the flux density on the redshift, apart from the expected tendency of having more radio galaxies towards lower radio flux densities. Out of nearly $10\,000$ radio sources, only $\sim 1\%$ of the sources have an integrated flux density at $1.4\,{\rm GHz}$ above $100\,{\rm mJy}$. The combined radio-optical properties of these brightest sources were investigated in \citet{2015A&A...573A..93V}. By decreasing the lower boundary of the flux density cut by one order of magnitude to $10\,{\rm mJy}$, the number of sources increases to $5.6\%$, that is, by about a factor of five. Most of the sources of the parent sample, $93.5\%$, have flux densities at $1.4\,{\rm GHz}$ below $10\,{\rm mJy}$.
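The selection cuts above translate into a simple filter. The following sketch (with illustrative function names of our own) applies the redshift window, the $S/N>3$ criterion, and the flux-density binning that separates the parent, high-flux, and low-flux samples:

```python
def passes_selection(z, line_snr, z_min=0.04, z_max=0.4, snr_min=3.0):
    """Redshift window plus S/N > 3 on every diagnostic emission line."""
    return z_min <= z <= z_max and all(snr > snr_min for snr in line_snr)

def flux_class(f14_mjy):
    """Bin a source by its integrated 1.4 GHz flux density (in mJy)."""
    if f14_mjy >= 100.0:
        return "high-flux"    # >= 100 mJy: the Vollmer et al. (2015) sample
    if f14_mjy >= 10.0:
        return "low-flux"     # 10-100 mJy: the extension presented here
    return "parent-only"      # below the Effelsberg follow-up range
```

For example, `flux_class(50.0)` returns `"low-flux"`, placing the source in the new Effelsberg follow-up sample.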
Under the assumption that many of these sources have steep to flat spectra, $S_{\nu} \propto \nu^\alpha$, where $\alpha \leq 0$, the detection of their flux density at frequencies larger than $1.4\,{\rm GHz}$ would be beyond the detection limit of the Effelsberg telescope, which is $\sim 5\,{\rm mJy}$. In Fig.~\ref{fig_effelsberg_sample}, the subset of the parent sample in the redshift range $0.04<z<0.4$ is plotted in the [\ion{N}{ii}]-based diagnostic diagram with the aim of showing the source density in the $[\ion{O}{iii}]-[\ion{N}{ii}]$ plane. We see that most of the sources are located in the star-forming--composite branch, where AGN are supposed to turn on. \begin{figure*}[tbh] \centering \begin{tabular}{ccc} \includegraphics[width=0.33\textwidth]{diagnostic_diagram_NII.jpg} & \hspace{-0.5cm} \includegraphics[width=0.33\textwidth]{diagnostic_diagram_S2.jpg}& \hspace{-0.5cm} \includegraphics[width=0.33\textwidth]{diagnostic_diagram_O1.jpg} \end{tabular} \caption{The low- and high-flux sample distribution in different optical diagnostic (BPT) diagrams. \textbf{Left:} [NII]-based diagnostic diagrams of the parent (grey) and Effelsberg samples: a low-flux sample represented by black points and a high-flux sample denoted by green crosses. Demarcation lines were derived by \citet{2001ApJ...556..121K} to set an upper limit for the star-forming galaxies and by \citet{2003MNRAS.346.1055K} to distinguish purely star-forming galaxies. The dividing line between Seyferts and LINERs was derived by \citet{2007MNRAS.382.1415S}. The new Effelsberg sample (black points), extended towards lower radio fluxes, covers the whole diagnostic diagram. \textbf{Middle:} The same samples as in the left panel in the [SII]-based diagnostic diagram.
\textbf{Right:} High- and low-flux samples in the [OI]-based diagram.} \label{fig_effelsberg_sample} \end{figure*} \section{Effelsberg sample and observations} \label{effelsberg_sample_observation} The aim of the previous study by \citet{2015A&A...573A..93V} was to analyse the radio-optical properties across the parent sample. The radio information was complemented by the radio flux density measurements at two higher frequencies -- $4.85$ and $10.45\,{\rm GHz}$ -- using the 100-m Effelsberg radio telescope. \citet{2015A&A...573A..93V} selected the sources from the parent sample using the lower limit for the integrated flux density of $F_{1.4}\geq 100\,{\rm mJy}$ at $1.4\,{\rm GHz}$. In total, 119 sources selected according to the criteria above were observed with the Effelsberg radio telescope at two additional frequencies. Thus, for these sources it was possible to determine the spectral slopes $\alpha_{[1.4-4.85]}$ and $\alpha_{[4.85-10.45]}$. This sample is denoted as the high-flux sample and the main properties of the sources are summarized in Table A.1 of \citet{2015A&A...573A..93V}. In Fig.~\ref{fig_redshift_flux_surplot_parent}, the sample occupies the upper part of the redshift-integrated flux plot. Due to the large radio flux, the sources are dominated by galaxies with an AGN. In the [\ion{N}{ii}]-based optical diagnostic diagram (BPT diagram), the high-flux radio-optical sample is dominated by composite sources, Seyfert, and LINER galaxies (see green crosses in Fig.~\ref{fig_effelsberg_sample}). There are only a few metal-rich star-forming sources whose radio emission generally originates in the shocks from supernovae and in (re)ignited AGN and jet activity. Therefore, the high-flux sample is certainly not complete in a statistical sense and the results published in \citet{2015A&A...573A..93V} are not representative of the whole radio-optical parent sample.
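The demarcation lines drawn in the [\ion{N}{ii}]-based diagram follow the standard functional forms from the cited papers (Kewley et al. 2001; Kauffmann et al. 2003; Schawinski et al. 2007). A minimal classifier can be sketched as follows; the four-way decision logic is our simplification of the published curves:

```python
def kewley_2001(x):
    """Maximum-starburst line (Kewley et al. 2001); valid for x < 0.47."""
    return 0.61 / (x - 0.47) + 1.19

def kauffmann_2003(x):
    """Empirical pure star-formation line (Kauffmann et al. 2003); x < 0.05."""
    return 0.61 / (x - 0.05) + 1.3

def schawinski_2007(x):
    """Seyfert/LINER dividing line (Schawinski et al. 2007)."""
    return 1.05 * x + 0.45

def bpt_class(x, y):
    """Classify a source from x = log([NII]/Halpha), y = log([OIII]/Hbeta)."""
    if x < 0.05 and y < kauffmann_2003(x):
        return "star-forming"
    if x < 0.47 and y < kewley_2001(x):
        return "composite"
    return "Seyfert" if y > schawinski_2007(x) else "LINER"
```

For instance, a source at $(\log([\ion{N}{ii}]/{\rm H}\alpha), \log([\ion{O}{iii}]/{\rm H}\beta)) = (-0.3, -0.3)$ falls between the Kauffmann and Kewley curves and is classified as composite.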
This was the main motivation for the extension of the Effelsberg sample towards lower radio flux densities; the integrated radio flux density at $1.4\,{\rm GHz}$ was considered in the interval $10\,{\rm mJy} < F_{1.4} < 100\,{\rm mJy}$, that is, we decreased the upper and the lower flux limit of the high-flux sample by one order of magnitude. As shown in Figs.~\ref{fig_redshift_flux_surplot_parent} and \ref{fig_effelsberg_sample}, the low-flux sample (black points) covers the whole [\ion{N}{ii}]-based diagnostic diagram and its coverage is also more uniform. It should therefore better represent the radio-optical properties of the parent sample. However, the bias towards the AGN and LINER sources is partially maintained, as can be inferred from the [\ion{N}{ii}]-, [\ion{S}{ii}]-, and [\ion{O}{i}]-based diagrams in Fig.~\ref{fig_effelsberg_sample}. By imposing the redshift limits $0.04\leq z \leq 0.4$, as well as the signal-to-noise criterion on the equivalent width of the optical emission lines, $S/N>3$, the low-flux sample initially consisted of $381$ galaxies with integrated flux densities $10\,{\rm mJy} \leq F_{1.4} \leq 100\,{\rm mJy}$. These sources were first observed at $4.85\,{\rm GHz}$ with the 100-m telescope in Effelsberg. Observations were performed between April 2014 and June 2015. The receiver at $4.85\,{\rm GHz}$ is mounted at the secondary focus of the Effelsberg antenna. It has multi-feed capabilities with two horns, which allow real-time sky subtraction in every subscan measurement. The total intensity of each source was determined via scans in the azimuth and the elevation. Depending on the brightness of the source, between 6 and 24 subscans were used. In the data reduction process, the subscans were averaged to produce the final scans used for further processing. Each scan had a length of 3.5 times the beam size at the corresponding frequency to ensure the correct subtraction of linear baselines.
Before combining the subscans, we checked each for possible radio interference, bad weather effects, or detector instabilities. During each observational run, we observed standard bright calibration sources, such as 3C286, 3C295, and NGC7027, which were used for correcting gain instabilities and elevation-dependent antenna sensitivity. Finally, we used these sources for the absolute flux calibration. The whole data reduction was performed using a set of Python and Fortran scripts. The flux density was obtained by fitting Gaussian functions to the signal in the averaged single-dish cross-scans. Further details on the data reduction can be found in \citet{2015A&A...573A..93V}, who applied the same routines to brighter sources. Of the 381 sources, we determined reliable flux densities at $4.85\,{\rm GHz}$ for 298 sources. The flux densities range between $350\,{\rm mJy}$ and $4\,{\rm mJy}$, with mean and median values of $30\,{\rm mJy}$ and $17\,{\rm mJy}$, respectively. The remaining sources were too faint or extended in at least one direction, and therefore a reliable integrated flux density could not be determined. For the $10.45\,{\rm GHz}$ observations, 256 sources out of 298 were scheduled based on the flux density extrapolated from the non-simultaneous $1.4\,{\rm GHz}$ and $4.85\,{\rm GHz}$ flux densities. At $10.45\,{\rm GHz}$, some sources were too faint or extended in at least one direction, or a reliable flux density determination was not possible due to the higher sensitivity to weather effects at $10.45\,{\rm GHz}$. In the end, we obtained flux densities at $10.45\,{\rm GHz}$ for 90 sources. The maximum and minimum flux densities are $206\,{\rm mJy}$ and $6\,{\rm mJy}$, respectively. The mean and median values are $32\,{\rm mJy}$ and $19\,{\rm mJy}$, respectively.
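The Gaussian-fitting step of the reduction can be sketched as follows. This is a minimal illustration assuming NumPy/SciPy; the actual pipeline consists of dedicated Python and Fortran scripts, and the function names here are ours:

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss_plus_baseline(x, amp, x0, sigma, a, b):
    """Gaussian source profile on top of a linear baseline a*x + b."""
    return amp * np.exp(-0.5 * ((x - x0) / sigma) ** 2) + a * x + b

def fit_cross_scan(offset_arcsec, intensity):
    """Fit one averaged cross-scan; return (peak amplitude, FWHM in arcsec)."""
    amp0 = intensity.max() - np.median(intensity)
    p0 = [amp0, offset_arcsec[np.argmax(intensity)],
          60.0, 0.0, np.median(intensity)]
    popt, _ = curve_fit(gauss_plus_baseline, offset_arcsec, intensity, p0=p0)
    amp, sigma = popt[0], abs(popt[2])
    return amp, 2.3548 * sigma  # FWHM = 2 sqrt(2 ln 2) sigma
```

A fitted FWHM close to the HPBW at the observing frequency indicates a point source; significantly broader profiles flag the extended sources that were excluded from the flux determination.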
The three non-simultaneous flux densities for 90 sources in the low-flux sample, $F_{1.4}$, $F_{4.85}$, and $F_{10.45}$, are listed in Table~\ref{tab_three_freq1} in Appendix~\ref{appa}, along with the radio spectra for each source as well as the mean radio spectrum for each galaxy spectral class (star-forming, composite, AGN Seyfert, and LINER galaxies). In the following analysis, unless otherwise stated, we use the low-flux sample in combination with the high-flux sample of \citet{2015A&A...573A..93V}. For the study of radio continuum properties between $1.4$ and $4.85\,{\rm GHz}$, we have $298$ low-flux sources and $119$ high-flux sources available. Between $4.85$ and $10.45\,{\rm GHz}$, there are $90$ low-flux and $119$ high-flux sources. \section{Spectral index properties} \label{analysis_results} \subsection{General properties of radio spectral index distributions} We present the radio flux densities at $1.4$ (FIRST), $4.85$, and $10.45\,{\rm GHz}$ (both Effelsberg) for $90$ low-flux sources in the table in Appendix~\ref{appa}. The flux densities at $1.4\,{\rm GHz}$ (FIRST) and $4.85\,{\rm GHz}$ (Effelsberg) are non-simultaneous (more than one year apart from each other), while the Effelsberg observations at $4.85\,{\rm GHz}$ and $10.45\,{\rm GHz}$ were performed within one year of each other. For high-flux sources, the analysis as well as the optical and the radio images were presented in \citet{2015A&A...573A..93V}. The catalogue of the high-flux sources is available in \citet{2015yCat..35730093V}, where the (quasi)-simultaneous flux densities (obtained during a single observing session) are listed. For the flux density in the radio domain, we assume the power-law dependency on frequency, using the notation $F(\nu) \propto \nu^{+\alpha}$, where $\alpha$ is the spectral index. 
Based on the non-simultaneous measurements of the flux densities $F_{1.4}$ (FIRST) and $F_{4.85}$ (Effelsberg), we calculated the spectral index $\alpha_{[1.4-4.85]}$ using \begin{equation} \alpha_{[1.4-4.85]}=\frac{\log{(F_{1.4}/F_{4.85})}}{\log{(1.4/4.85)}}\,. \label{eq_spec_index_low} \end{equation} We note that there is a large beam-size difference between the VLA in the B-configuration and the Effelsberg telescope at $4.85\,{\rm GHz}$: the half-power beam-width (HPBW) at $20\,{\rm cm}$ for the VLA is $\theta_{\rm HPBW}^{1.4}\approx 4.3''$, whereas for the Effelsberg telescope at $4.85\,{\rm GHz}$, $\theta_{\rm HPBW}^{4.85}\approx 2.4'$. This could have led to the exclusion of extended structures from the VLA measurements for approximately one third of the sources, which are clearly extended on scales of $2''-30''$ \citep{2002AJ....124.2364I}; for such extended sources, this affects the integrated flux densities and hence the spectral indices $\alpha_{[1.4-4.85]}$ as well. Moreover, the observations were more than one year apart, and are thus possibly affected by source variability. We include the distribution of $\alpha_{[1.4-4.85]}$ for completeness; however, due to the potential beam-size effect, we exclude it from further analysis. For the spectral index at higher frequencies, $\alpha_{[4.85-10.45]}$, which is determined analogously to Eq.~\eqref{eq_spec_index_low} as \begin{equation} \alpha_{[4.85-10.45]}=\frac{\log{(F_{4.85}/F_{10.45})}}{\log{(4.85/10.45)}}\,, \label{eq_spec_index_high} \end{equation} the effect of excluding extended structures is largely diminished, since for the analysis in this paper we considered only point sources that were consistent with the HPBWs of $2.4'$ and $1.1'$ at $4.85\,{\rm GHz}$ and $10.45\,{\rm GHz}$, respectively.
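The two-point spectral index of Eqs.~\eqref{eq_spec_index_low} and \eqref{eq_spec_index_high}, together with its Gaussian error propagation, can be computed as in the following sketch (function names are ours):

```python
import math

def spectral_index(f_low, f_high, nu_low, nu_high):
    """Two-point spectral index alpha, with F(nu) proportional to nu**alpha."""
    return math.log(f_low / f_high) / math.log(nu_low / nu_high)

def spectral_index_error(f1, sig_f1, f2, sig_f2, nu1, nu2):
    """1-sigma uncertainty of alpha propagated from the flux-density errors."""
    rel = (sig_f1 / f1) ** 2 + (sig_f2 / f2) ** 2
    return math.sqrt(rel) / abs(math.log(nu1 / nu2))
```

For example, a source that drops from 20 to 10 mJy between 4.85 and 10.45 GHz has `spectral_index(20.0, 10.0, 4.85, 10.45)` of about $-0.90$, a steep spectrum in the classification used below.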
Moreover, the observations at $4.85\,{\rm GHz}$ and $10.45\,{\rm GHz}$ were performed within one year of each other, hence the spectral index $\alpha_{[4.85-10.45]}$ captures the integrated radio continuum better than the spectral index $\alpha_{[1.4-4.85]}$, which can be influenced by the core emission due to the small VLA beam width. The uncertainty of the spectral index $\sigma_{\alpha}$ was calculated by propagating the measurement errors of flux densities at the corresponding frequencies, \begin{equation} \sigma_{\alpha}=\frac{1}{|\log{(4.85/10.45)}|}\sqrt{(\sigma_{4.85}/F_{4.85})^2+(\sigma_{10.45}/F_{10.45})^2}\,, \end{equation} where $\sigma_{4.85}$ and $\sigma_{10.45}$ are the measurement uncertainties of flux densities at the corresponding frequencies. We show exemplary error bars in Fig.~\ref{fig_2D_spectrindex}. The median value of $\sigma_{\alpha}$ at higher frequencies for the joint sample is $\sigma_{\alpha}=0.1$. The distributions of spectral indices $\alpha_{[1.4-4.85]}$ and $\alpha_{[4.85-10.45]}$ for all observed sources are plotted in Fig.~\ref{fig_specindex_histograms_allsources} in the left and the right panel, respectively. For the lower frequencies, the mean spectral index is $\overline{\alpha}_{[1.4-4.85]}=-0.25\pm 0.54$ (median $-0.36$). For the higher frequencies, the mean spectral index is $\overline{\alpha}_{[4.85-10.45]}=-0.51\pm 0.63$ (median $-0.58$). The two-dimensional distribution of spectral indices at both lower and higher frequencies is in Fig.~\ref{fig_2D_spectrindex}. \begin{figure*}[tbh] \centering \includegraphics[width=0.45\textwidth]{histogram_specindex_1_4-4_85GHz.pdf} \includegraphics[width=0.45\textwidth]{histogram_specindex_4_85-10_45GHz.pdf} \caption{Distributions of spectral indices $\alpha_{[1.4-4.85]}$ and $\alpha_{[4.85-10.45]}$. \textbf{Left panel:} A two-point spectral index ($\alpha_{[1.4-4.85]}$) distribution for the combined low-flux and high-flux sample (in total 417 sources). 
\textbf{Right panel:} A two-point spectral index ($\alpha_{[4.85-10.45]}$) distribution for the combined low-flux and high-flux sample (in total 209 sources).} \label{fig_specindex_histograms_allsources} \end{figure*} For the general classification of radio spectra with the power-law shape $F(\nu) \propto \nu^{+\alpha}$, we use the following categories based on the spectral slope $\alpha$: \begin{itemize} \item[(i)] $\alpha<-0.7$, for short denoted as steep, which are typical for optically thin synchrotron structures, where electrons have cooled off, such as radio lobes; \item[(ii)] $-0.7 \leq \alpha \leq -0.4$, denoted as flat, with the mixed contribution of optically thin and self-absorbed synchrotron emission, typical for jet emission; \item[(iii)] $ \alpha > -0.4$, denoted as inverted, which are characteristic for sources where synchrotron self-absorption becomes important, such as in AGN core components. \end{itemize} This division reflects the distributions of the spectral index for lower and higher frequencies as found for samples of radio-loud galaxies, such as in the S5 polar-cap sample \citep{1986A&A...168...17E}. We adopt it for all the histograms of the radio spectral index, from Fig.~\ref{fig_specindex_histograms_allsources} onwards. \begin{figure}[h!] \centering \includegraphics[width=0.5\textwidth]{jointplot_spectrindex.pdf} \caption{Two-dimensional distribution of spectral indices at lower ($x$-axis) and higher frequencies ($y$-axis). The green points mark the high-flux sources, while the red crosses depict the low-flux sample. Some of the sources have error bars for $\alpha_{[4.85-10.45]}$ to show typical uncertainties of the spectral index.} \label{fig_2D_spectrindex} \end{figure} The mean and median radio spectra calculated for different spectral classes according to the BPT diagram -- star-forming, composites, Seyferts, LINERs -- are scaled for comparison in Fig.~\ref{fig_mean_radio_spectra}. 
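The three-way slope classification introduced above translates into a simple helper (a minimal sketch; the function name is ours):

```python
def radio_class(alpha):
    """Three-way classification of the power-law slope used in the histograms."""
    if alpha < -0.7:
        return "steep"      # optically thin structures with cooled electrons
    if alpha <= -0.4:
        return "flat"       # mixed optically thin / self-absorbed jet emission
    return "inverted"       # self-absorption dominated, e.g. AGN core components
```

The boundary values $-0.7$ and $-0.4$ are assigned to the flat class, matching the interval $-0.7 \leq \alpha \leq -0.4$ given above.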
The values of the spectral indices for the mean spectra are $0.37$, $0.28$, $0.39$, $-0.01$ at lower frequencies ($1.4\,{\rm GHz}-4.85\,{\rm GHz}$) and $-0.71$, $-0.41$, $-0.81$, $-0.23$ at higher frequencies ($4.85\,{\rm GHz}-10.45\,{\rm GHz}$) for star-forming, composite, Seyfert, and LINER sources, respectively. The spectral indices for the median spectra, which were calculated from the spectra of individual sources after normalization with respect to the mid-frequency, are $0.46$, $-0.22$, $-0.19$, $0.02$ at lower frequencies and $-0.88$, $0.0$, $-0.59$, $0.13$ at higher frequencies for star-forming, composite, Seyfert, and LINER sources, respectively. \begin{figure}[h!] \centering \includegraphics[width=0.5\textwidth]{flux_median_mean.jpg} \caption{Median and mean spectra for different spectral classes scaled to a common value at the mid-frequency of $4.85\,{\rm GHz}$.} \label{fig_mean_radio_spectra} \end{figure} The distribution of spectral indices is very reminiscent of the distribution found for higher-redshift quasars. The S5 survey \citep{1981A&AS...45..367K, 1984AJ.....89..323G} shows the transition between the lobe- and jet-dominated structures, which are more prominent at lower radio frequencies, and the flat to inverted cores that are dominant at higher radio frequencies. Unfortunately, the synchrotron emission of supernova remnants, which are a clear tracer of high-mass star formation, has a spectral slope similar to that of the radio lobe-jet structures. The radio emission from star-forming regions can be expected to be relevant for objects on the division lines between star-forming and Seyfert/LINER objects. In the Seyfert/LINER domain of the diagnostic diagrams, objects with starburst-AGN mixing can be found \citep[e.g.][]{2006MNRAS.372..961K, 2016MNRAS.462.1616D}.
However, a detailed and sensitive structural investigation at high angular resolution is required to differentiate between the presumably extended radio contribution from star formation and that of the core-jet-nucleus structure of a radio-active AGN. This will be possible with instruments dedicated to low-surface-brightness investigations, such as the Square Kilometre Array \citep[SKA; see e.g.][]{2015aska.confE..93A}. \subsection{Radio spectral index between $1.4$ and $4.85$ GHz} The mean and median values of the spectral index $\alpha_{[1.4-4.85]}$ for the whole low-flux and high-flux sample are given in Table~\ref{tab_spectralindex_1.4_4.85}. LINER sources have the flattest median and mean radio spectral indices in comparison with galaxies in the other spectral classes. This is also apparent in the histogram in Fig.~\ref{fig_histogram_1.4-4.85GHz} (left panel), where we plot the two-point spectral index distribution for each spectral class. Another way of representing this trend is shown in Fig.~\ref{fig_histogram_1.4-4.85GHz} (right panel), which combines the radio classification (steep, flat, inverted) and the optical spectral classification of the galaxies (SF, Comp, Sy, LINER). While among the composites $50.5\%$ are inverted sources and among the Seyferts $44.5\%$ are inverted sources, most of the LINERs ($58.8\%$) have inverted spectra. The fraction of steep sources $(\alpha<-0.7)$ among LINER sources is only $17.6\%$, which is comparable to the composite sources ($17.8\%$). The fraction of steep Seyferts is $28.2\%$. In terms of the mean and median spectral slopes, LINERs have the flattest index $\alpha_{[1.4-4.85]}$, with a mean of $-0.22$ and a median of $-0.24$, followed by the composites with a mean and median of $-0.25$ and $-0.40$, respectively, and the Seyferts with mean and median values of $-0.31$ and $-0.49$, respectively. \begin{table}[h!]
\centering \resizebox{\linewidth}{!}{ \begin{tabular}{cccccc} \hline \hline Spectral class & Mean $\alpha_{[1.4-4.85]}$ & $\sigma$ & Median $\alpha_{[1.4-4.85]}$ & $16\%\,P$ & $84\%\,P$\\ \hline SF & $-0.25$ & $0.45$ & $-0.33$ & $-0.66$ & $0.24$\\ COMP & $-0.25$ & $0.54$ & $-0.40$ & $-0.73$ & $0.22$\\ SY & $-0.31$ & $0.61$ & $-0.49$ & $-0.83$ & $0.21$\\ LINER & $-0.22$ & $0.50$ & $-0.24$ & $-0.72$ & $0.26$\\ \hline Total & $-0.25$ & $0.54$ & $-0.36$ & $-0.76$ & $0.24$\\ \hline \end{tabular}} \caption{Mean, standard deviation, median, $16\%$-, and $84\%$- values of the radio spectral index $\alpha_{[1.4-4.85]}$, respectively, for each optical spectral class of galaxies and the overall sample.} \label{tab_spectralindex_1.4_4.85} \end{table} \begin{figure*}[tbh] \centering \includegraphics[width=0.45\textwidth]{histogram_specindex_1_4-4_85GHz_classif.pdf} \includegraphics[width=0.45\textwidth]{specindex_alpha1_4-4_85_classif_allgalaxies.pdf} \caption{Distribution of the spectral slope $\alpha_{[1.4-4.85]}$ at lower frequencies among different spectral classes of galaxies. \textbf{Left:} The two-point spectral index distribution $\alpha_{[1.4-4.85]}$ for different optical spectral classes of galaxies (SF, composites, Seyferts, and LINERs). The mean and median values are listed in Table~\ref{tab_spectralindex_1.4_4.85}. \textbf{Right:} Fractional distribution of steep, flat, and inverted sources among star-forming, composite, Seyfert, and LINER galaxies.} \label{fig_histogram_1.4-4.85GHz} \end{figure*} \subsection{Radio spectral index between $4.85$ and $10.45$ GHz} In an analogous way to frequencies $1.4-4.85\,{\rm GHz}$, we determine the spectral index $\alpha_{[4.85-10.45]}$ between the non-simultaneous Effelsberg measurements at $4.85\,{\rm GHz}$ and $10.45\,{\rm GHz}$. In this case, the primary beam size is comparable, hence the resolution effects should not be so significant as for $1.4\,{\rm GHz}$ obtained from the FIRST survey. 
Those sources whose emission profiles in the cross-scans were clearly extended were excluded from further analysis. \begin{figure*}[tbh] \centering \includegraphics[width=0.45\textwidth]{histogram_specindex_4_85-10_45GHz_classif.pdf} \includegraphics[width=0.45\textwidth]{specindex_alpha4_85-10_45_classif_allgalaxies.pdf} \caption{Distribution of the spectral slope $\alpha_{[4.85-10.45]}$ at higher frequencies among different spectral classes of galaxies. \textbf{Left:} The two-point spectral index distribution $\alpha_{[4.85-10.45]}$ for different optical spectral classes of galaxies (SF, composites, Seyferts, and LINERs). The mean and median values are listed in Table~\ref{tab_spectralindex_4.85_10.45}. \textbf{Right:} Fractional distribution of steep, flat, and inverted sources among star-forming, composite, Seyfert, and LINER galaxies.} \label{fig_histogram_4.85-10.45GHz} \end{figure*} \begin{figure*}[tbh] \centering \includegraphics[width=\textwidth]{hist_spectral_index_flattening.pdf} \caption{ The low-flux density sample shows an increasing influence of flat-spectrum sources. \textbf{Top and bottom left:} $\alpha_{1.4/4.85}$ and $\alpha_{4.85/10.45}$ index distributions for the high-flux density sample (HF). \textbf{Top and bottom middle:} $\alpha_{1.4/4.85}$ and $\alpha_{4.85/10.45}$ index distributions for the low-flux density sample (LF) (two sources, not shown here, have $\alpha_{4.85/10.45}$ below $-3$). \textbf{Top and bottom right:} $\alpha_{1.4/4.85}$ index distribution for the low-flux sources (LF) without and with 10.45~GHz measurements. } \label{fig_new-histograms} \end{figure*} The distribution of the spectral index $\alpha_{[4.85-10.45]}$ for each optical spectral class is shown in Fig.~\ref{fig_histogram_4.85-10.45GHz} (left panel).
The fractions of the three radio classes -- steep, flat, and inverted -- are calculated for each spectral class in the right panel of Fig.~\ref{fig_histogram_4.85-10.45GHz}, where composites have the largest fraction of inverted sources $(\alpha>-0.4)$, $42.9\%$, followed by LINERs, $37.9\%$. In terms of the overall fraction of sources with a spectral index larger than $-0.7$ (non-steep spectra), composites have the largest fraction with $73.9\%$, followed by LINERs $(66.3\%)$ and Seyferts $(58.3\%)$. In the higher frequency range $4.85-10.45\,{\rm GHz}$, the composite sources have the flattest spectral slope $\alpha_{[4.85-10.45]}$, with a mean of $-0.42$ and a median of $-0.43$, followed by LINERs with mean and median values of $-0.46$ and $-0.59$, respectively, and Seyferts with a mean and median spectral slope of $-0.63$ and $-0.64$, respectively. \begin{table}[h!] \centering \resizebox{\linewidth}{!}{ \begin{tabular}{cccccc} \hline \hline Spectral class & Mean $\alpha_{[4.85-10.45]}$ & $\sigma$ & Median $\alpha_{[4.85-10.45]}$ & $16\%\,P$ & $84\%\,P$\\ \hline SF & $-0.61$ & $0.65$ & $-0.65$ & $-1.02$ & $0.05$\\ COMP & $-0.42$ & $0.80$ & $-0.43$ & $-0.90$ & $0.19$\\ SY & $-0.63$ & $0.52$ & $-0.64$ & $-0.89$ & $-0.29$\\ LINER & $-0.46$ & $0.59$ & $-0.59$ & $-0.95$ & $0.02$\\ \hline Total & $-0.51$ & $0.63$ & $-0.58$ & $-0.92$ & $0.00$\\ \hline \end{tabular}} \caption{Mean, standard deviation, median, $16\%$-, and $84\%$-values of the spectral index $\alpha_{[4.85-10.45]}$, respectively, for each optical spectral class of galaxies and the overall sample.} \label{tab_spectralindex_4.85_10.45} \end{table} \begin{figure*}[h!] \centering \includegraphics[width=\textwidth]{BPT_L14GHz.pdf} \includegraphics[width=\textwidth]{BPT_L14GHz_10_1000.pdf} \includegraphics[width=\textwidth]{BPT_L14GHz_eff.pdf} \caption{Distribution of the radio luminosity at $1.4$ GHz, $L_{\rm 1.4GHz}$, in optical diagnostic diagrams.
\textbf{Top row:} The distribution of the radio luminosity $L_{\rm 1.4GHz}$ for the parent sample. The contours correspond to the distribution of the sources for the luminosity bins, $\log{(L_{\rm 1.4GHz})}<40.5$, $40.5<\log{(L_{\rm 1.4GHz})}<41.2$, and $\log{(L_{\rm 1.4GHz})}>41.2$, with the radio luminosity increasing from left to right. \textbf{Middle row:} The distribution of the radio luminosity $L_{\rm 1.4GHz}$ as in the top row but for a flux-limited subsample of the parent sample, $10\,{\rm mJy} \leq F_{\rm 1.4GHz} \leq 1000\,{\rm mJy}$. The luminosity bins are the same as in the top row. \textbf{Bottom row:} The distribution of the radio luminosity $L_{\rm 1.4GHz}$ for the Effelsberg sample only (low$+$high flux sources). The luminosity bins are the same as in the top row.} \label{fig_1_4_luminosity} \end{figure*} The spectral-index distributions for the high-flux density sample are dominated by sources with steep spectral indices, as can be seen in the left column of Fig.~\ref{fig_new-histograms}, where we denote high-flux sources as HF and low-flux sources as LF. The $\alpha_{1.4/4.85}$ index distribution has a median of $-0.53$ and a median width of 0.50\footnote{For the uncertainty in the median we quote the median deviation from the median, or twice this value if we refer to it as the width.}. Towards higher frequencies the index decreases further and the distribution becomes narrower. The $\alpha_{4.85/10.45}$ index has a median of $-0.67$ and a median width of 0.20. However, the distributions have weak tails towards flatter spectra, indicating that several sources contain flat-spectrum components at flux density levels below those of the steep components. Therefore, the situation changes for the low-flux density sample. The flux densities are lower and, for more sources, they are close to the typical fluxes of the flat components.
Hence, the portion of the sources with a flat spectral index increases, leading to a more prominent shoulder towards the flat side of the distributions (see the middle and right columns of Fig.~\ref{fig_new-histograms}). This is very pronounced for the $\alpha_{1.4/4.85}$ index. For about 2/3 of the sources (i.e. 199 out of 289) the 10.45~GHz flux drops below the detection limit. Correspondingly, the $\alpha_{4.85/10.45}$ distribution is biased towards flat-spectrum sources. For the low-flux density sample, the $\alpha_{1.4/4.85}$ index distribution has a median of -0.26 and a median width of 0.33. The distribution is highly skewed to the steep side and has a peak at -0.6. The $\alpha_{4.85/10.45}$ index distribution has a median of -0.25 and a width of 0.86, with a similar but more pronounced 0.3 wide secondary peak around $\alpha_{4.85/10.45}$=-0.10. Separate plots of the $\alpha_{1.4/4.85}$ index for the sources with (median -0.11, median width 0.38) and without (median -0.36, width 0.31) a 10.45~GHz measurement show that the low-frequency spectral index $\alpha_{1.4/4.85}$ of the 10.45~GHz-detected sources is also flatter by about 0.25. The dropout in sources with $\alpha_{4.85/10.45}$ measurements in our low-flux sample is predominantly an effect of the flux sensitivity we reached. From the uncertainties in Table~A1 we estimate a 3$\sigma$ flux density limit of 12 mJy. We find that about 70\% of all sources without $\alpha_{4.85/10.45}$ measurements indeed fall below this flux density limit if one uses their low-frequency spectral index combined with the 4.85~GHz flux density to predict their 10.45~GHz flux density. A stronger spectral steepening towards 10.45~GHz and the effect of radio-source angular extension may account for the remaining $30\%$. For the observations at 4.85 GHz, 64 sources out of the original 381 ($16.8\%$) were extended at least in one direction of cross-scans (larger than the HPBW at 4.85 GHz, $\sim 144''$).
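The two-point spectral indices used throughout follow the convention $S_\nu \propto \nu^{\alpha}$, with the steep/flat/inverted division at $-0.7$ and $-0.4$. A minimal sketch of the computation and the classification (function names and the example fluxes are ours, not part of the reduction pipeline):

```python
import math

def spectral_index(s1_mjy, s2_mjy, nu1_ghz, nu2_ghz):
    """Two-point spectral index alpha, assuming S_nu ~ nu**alpha."""
    return math.log10(s2_mjy / s1_mjy) / math.log10(nu2_ghz / nu1_ghz)

def radio_class(alpha):
    """Steep / flat / inverted division used in this paper."""
    if alpha < -0.7:
        return "steep"
    if alpha <= -0.4:
        return "flat"
    return "inverted"

# e.g. a hypothetical source with 100 mJy at 4.85 GHz and 60 mJy at 10.45 GHz:
alpha = spectral_index(100.0, 60.0, 4.85, 10.45)
print(round(alpha, 2), radio_class(alpha))  # -0.67 flat
```

Sources exactly on the boundaries ($\alpha=-0.7$, $\alpha=-0.4$) are counted as flat, matching the $-0.7\leq\alpha_{[4.85-10.45]}\leq-0.4$ bin used in the figures.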
For the subsequent observations at 10.45 GHz of 256 selected sources, we had 50 sources ($19.5\%$) broader than the HPBW of $66''$ at least in one direction. We excluded the extended sources from further analysis because it was not possible to determine flux densities by Gaussian fitting to the combined cross-scan intensity profiles. In terms of properties in the diagnostic diagrams, the sources with and without $\alpha_{4.85/10.45}$ measurements do not differ significantly. The two groups reach median $\log{([\ion{O}{iii}]/H\beta)}$ and $\log{([\ion{N}{ii}]/H\alpha)}$ values of 0.23$\pm$0.19 and -0.03$\pm$0.11, and of 0.22$\pm$0.22 and -0.09$\pm$0.14, respectively. However, as in the high-flux density sample, the trend that the sources with flat-spectrum components show a higher excitation remains: for the sources with $\alpha_{4.85/10.45}$ measurements, we find median $\log{([\ion{O}{iii}]/H\beta)}$ and $\log{([\ion{N}{ii}]/H\alpha)}$ values of 0.26$\pm$0.19 and -0.03$\pm$0.11, and for the sources without $\alpha_{4.85/10.45}$ measurements, we find $\log{([\ion{O}{iii}]/H\beta)}$ and $\log{([\ion{N}{ii}]/H\alpha)}$ values of 0.19$\pm$0.26 and -0.10$\pm$0.15, respectively. \section{Trends of the spectral index in optical diagnostic diagrams} \label{sec_trends} While in the previous section we looked for trends in the radio spectral index between different activity classes as traced by the optical diagnostic diagram, we now pursue the reverse approach and investigate how different radio spectral indices are reflected in the optical diagnostic diagram. Considering the radio luminosity of the sources at $1.4\,{\rm GHz}$, $L_{\rm 1.4GHz}$\footnote{The integrated flux at $\nu_0 = 1.4\,\mathrm{GHz}$ in Jansky is derived from the FIRST survey. We derive the luminosity distance, $D_L$, from the redshift, using a standard cosmology with $H_0=70\,\mathrm{km}\,\mathrm{s}^{-1}\,\mathrm{Mpc}^{-1}$, $\Omega_m = 0.3$ and $\Omega_\Lambda = 0.7$.
The luminosity is then given by $L_{1.4\,\mathrm{GHz}} = 4\pi D_L^2 \, \nu_0 f_{\nu_0}$.}, expressed in ${\rm erg\,s^{-1}}$ (which better traces the radio-core emission owing to the smaller beam size of the VLA), the general trend in the parent sample is that the radio luminosity increases from the star-forming, through the composites, LINERs, and up to the Seyfert sources (Fig.~\ref{fig_1_4_luminosity}, top row). The distribution of radio luminosities implies that our source selection lacks metal-rich star-forming galaxies with $\log{(L_{\rm 1.4GHz}/{\rm erg\, s^{-1}})}<40.5$ and, as redshift increases, systematically fainter sources in all luminosity bins. Hence, our radio measurements and analysis mostly trace nearby luminous, active galaxies in the composite--LINER--Seyfert regions. This is demonstrated in Fig.~\ref{fig_1_4_luminosity}, where we first show the subsample of the parent sample with the integrated flux density at $1.4\,{\rm GHz}$ greater than $10\,{\rm mJy}$ and less than $1000\,{\rm mJy}$ (middle row). For the sources detected at $4.85\,{\rm GHz}$ as well as $10.45\,{\rm GHz}$ (low$+$high sample, 209 sources; see the bottom row), we see distribution peaks in the Seyfert-LINER part of the optical diagram as for the flux-limited subsample for all luminosity bins, with the apparent loss of radio sources in the composite and star-forming parts in the lowest luminosity bin, $\log{(L_{\rm 1.4GHz})}<40.5$, due to source drop-outs. \begin{figure*}[h!] \centering \includegraphics[width=\textwidth]{BPT_NII_specindex.pdf} \includegraphics[width=\textwidth]{BPT_SII_specindex.pdf} \includegraphics[width=\textwidth]{BPT_OI_specindex.pdf} \caption{Spectral index trends in the optical diagnostic diagrams.
From the top to the bottom panels: [NII]-, [SII]-, and [OI]-based diagrams, respectively, with the progressively increasing spectral index from the left to the right panels: $\alpha_{[4.85-10.45]}<-0.7$, $-0.7\leq\alpha_{[4.85-10.45]}\leq-0.4$, and $\alpha_{[4.85-10.45]}>-0.4$, respectively. Contours indicate Gaussian kernel density estimates.} \label{fig:BPT-index-reverse} \end{figure*} Figure~\ref{fig:BPT-index-reverse} shows the position of the sources in all three classical optical diagnostic diagrams, binned by the radio spectral index. Using the previous definitions, we distinguish between steep ($\alpha_{[4.85-10.45]} < -0.7$), flat ($-0.7 \leq \alpha_{[4.85-10.45]} \leq -0.4$), and inverted radio spectra ($\alpha_{[4.85-10.45]} > -0.4$). The plots show that when going from steep via flat to inverted spectra, the optical line ratios, in particular the ratio $\log([\ion{O}{iii}]/\mathrm{H}\beta)$, decrease. This indicates that the ionizing field of sources with inverted radio spectra is weaker than that of sources with a steep radio spectrum. \begin{figure}[h!] \centering \includegraphics[width=0.5\textwidth]{radioloudness-ionization_data.pdf} \caption{Clustering of the radio sources in the radio loudness--ionization ratio plane. The galaxies are grouped with respect to the spectral index in the following categories: $\alpha_{[4.85-10.45]}<-0.7$, $-0.7\leq\alpha_{[4.85-10.45]}\leq-0.4$, and $\alpha_{[4.85-10.45]}>-0.4$.} \label{fig_loudness_oiii} \end{figure} Since this vertical movement in the optical diagnostic diagram follows the same direction as the usual division line between Seyfert and LINER galaxies \citep{2006MNRAS.372..961K,2007MNRAS.382.1415S}, this trend leads to the previously discussed increase of the radio spectral index towards LINER sources: steeper sources tend to fall into the Seyfert category and more inverted ones into the LINER category.
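The radio luminosities entering Fig.~\ref{fig_1_4_luminosity} follow the footnote definition, $L_{1.4\,\mathrm{GHz}} = 4\pi D_L^2\,\nu_0 f_{\nu_0}$, with the luminosity distance from a flat $\Lambda$CDM cosmology ($H_0=70$, $\Omega_m=0.3$, $\Omega_\Lambda=0.7$). A numerical sketch under those assumptions (function names and the example source are ours):

```python
import math

H0 = 70.0            # km/s/Mpc
OM, OL = 0.3, 0.7    # flat LCDM density parameters
C = 299792.458       # speed of light, km/s
MPC_CM = 3.0857e24   # cm per Mpc

def lum_distance_mpc(z, n=2000):
    """Luminosity distance D_L = (1+z) * comoving distance (midpoint integration)."""
    E = lambda zz: math.sqrt(OM * (1 + zz) ** 3 + OL)
    dz = z / n
    dc = sum(dz / E((i + 0.5) * dz) for i in range(n))  # in units of c/H0
    return (1 + z) * (C / H0) * dc

def l_radio_erg_s(flux_jy, z, nu_hz=1.4e9):
    """L = 4 pi D_L^2 nu0 f_nu0, with 1 Jy = 1e-23 erg/s/cm^2/Hz."""
    dl_cm = lum_distance_mpc(z) * MPC_CM
    return 4 * math.pi * dl_cm ** 2 * nu_hz * flux_jy * 1e-23

# e.g. an illustrative 50 mJy FIRST source at z = 0.1:
print(f"{math.log10(l_radio_erg_s(0.05, 0.1)):.2f}")  # 40.25
```

Such a source lands in the paper's lowest luminosity bin, $\log{(L_{\rm 1.4GHz})}<40.5$.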
Since we select brighter radio emitters with optical counterparts, it is quite possible that all our sources have a contribution from an AGN to some extent. In the further search for general trends, we added another parameter, radio loudness, which can directly trace the energetics of AGN and their hosts, rather than relying on the purely observational division into low- and high-flux sources. Using the flux density at $20\,{\rm cm}$ from FIRST, $F_{1.4}$, we converted the radio flux density into the $AB_{\nu}$-radio magnitude system of \citet{1983ApJ...266..713O} according to \citet{2002AJ....124.2364I}, \begin{equation} m_{1.4}=-2.5\log{\left(\frac{F_{1.4}}{3631\,{\rm Jy}}\right)}\,, \label{eq_AB_system} \end{equation} in which the zero point $3631\,{\rm Jy}$ does not depend on the wavelength. Subsequently, the radio loudness can be calculated as the ratio of the radio flux density to the optical flux density, \begin{equation} R_{\rm g}\equiv \log{\left(\frac{F_{\rm radio}}{F_{\rm optical}} \right)}=0.4(g-m_{1.4})\,, \label{eq_radio_loudness} \end{equation} where we use the optical $g$-band for each source from the SDSS-DR7 catalogue. \begin{figure*}[h!] \centering \includegraphics[width=0.45\textwidth]{GMM_2d.pdf} \includegraphics[width=0.45\textwidth]{GMM_1d_paper.pdf} \caption{Results of the Gaussian mixture model. \textbf{Left panel:} Radio sources in the radio loudness--ionization ratio plane according to the spectral index divisions as found from the Gaussian fitting. \textbf{Right panel:} The spectral-index histogram with three basic groups that are very similar to the manual cut with these limits: $\alpha_{[4.85-10.45]}<-0.7$, $-0.7\leq\alpha_{[4.85-10.45]}\leq-0.4$, and $\alpha_{[4.85-10.45]}>-0.4$.} \label{fig_gaussian_mixture} \end{figure*} Here we note that the optical flux density is related to the host galaxy and not to the AGN, since our sample is radio selected.
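Equations~\eqref{eq_AB_system} and \eqref{eq_radio_loudness} combine into a two-line computation; a minimal sketch (the example flux density and $g$-band magnitude are illustrative values of ours):

```python
import math

def ab_radio_mag(flux_jy):
    """AB radio magnitude: m_1.4 = -2.5 log10(F_1.4 / 3631 Jy)."""
    return -2.5 * math.log10(flux_jy / 3631.0)

def radio_loudness(g_mag, flux_jy):
    """Radio loudness: R_g = log10(F_radio / F_optical) = 0.4 (g - m_1.4)."""
    return 0.4 * (g_mag - ab_radio_mag(flux_jy))

# e.g. a hypothetical 50 mJy FIRST source with a g-band host magnitude of 16.5:
print(round(radio_loudness(16.5, 0.05), 2))  # 1.74
```

The identity $0.4(g-m_{1.4})=\log_{10}(F_{\rm radio}/F_{\rm optical})$ holds because both magnitudes share the wavelength-independent $3631\,{\rm Jy}$ zero point.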
An inspection of the optical spectra also shows that only a handful of sources display the typical power-law continuum shape that is associated with the accretion disk. Hence, the radio loudness derived in this way expresses the ratio between the radio power of the AGN to the optical emission of the host galaxy and can be taken as an upper limit of the intrinsic AGN radio loudness. Figure~\ref{fig_loudness_oiii} shows the location of the three radio classes, with steep, flat, and inverted radio spectra in the $R_g - \log([\ion{O}{iii}]/\mathrm{H}\beta)$ plane, where the radio loudness $R_{\rm g}$ is along the $x$-axis and the ionization ratio $\log([\ion{O}{iii}]/\mathrm{H}\beta)$ is along the $y$-axis. The sources cluster in well-separated regions. To justify these spectral-index categories, we fit a Gaussian mixture model (GMM), which is an unsupervised machine learning algorithm. For this, we consider the quantities radio loudness, $R_g$, ionization ratio, $\log([\ion{O}{iii}]/\mathrm{H}\beta)$, and radio spectral index, $\alpha_{[4.85-10.45]}$, as a three-dimensional space; this means every galaxy is represented by a vector \begin{equation} \vec{x} = \begin{pmatrix} R_g \\ \log([\ion{O}{iii}]/\mathrm{H}\beta) \\ \alpha_{[4.85-10.45]} \end{pmatrix} .\end{equation} The GMM assumes that the data points can be described by a superposition of a finite number of multivariate Gaussian distributions with unknown parameters in this parameter space. The model is the probability density function represented as a weighted sum of the Gaussian component densities. Using the model, we can then give probabilities for data points belonging to one of these classes. For the fit, we assume three components and use the expectation maximization technique implemented in the \textsc{Python} library \textsc{Scikit-Learn} \citep{scikit-learn}.
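While the fit itself is done with \textsc{Scikit-Learn} in three dimensions, the underlying expectation-maximization iteration can be illustrated with a dependency-free, one-dimensional, two-component sketch on synthetic spectral-index-like data (all numbers below are ours, not from the sample):

```python
import math, random

def em_gmm_1d(xs, iters=60):
    """Expectation-maximization for a two-component 1D Gaussian mixture (sketch)."""
    k = 2
    mu = [min(xs), max(xs)]   # deterministic initialisation at the data extremes
    var = [1.0] * k
    w = [0.5] * k
    n = len(xs)
    for _ in range(iters):
        # E-step: responsibility of component j for each point x
        r = []
        for x in xs:
            p = [w[j] / math.sqrt(2 * math.pi * var[j])
                 * math.exp(-(x - mu[j]) ** 2 / (2 * var[j])) for j in range(k)]
            s = sum(p)
            r.append([pj / s for pj in p])
        # M-step: re-estimate weights, means, and variances
        for j in range(k):
            nj = sum(ri[j] for ri in r)
            w[j] = nj / n
            mu[j] = sum(ri[j] * x for ri, x in zip(r, xs)) / nj
            var[j] = sum(ri[j] * (x - mu[j]) ** 2 for ri, x in zip(r, xs)) / nj + 1e-6
    return sorted(zip(mu, var, w))

# Synthetic "steep" and "inverted" spectral-index populations
rng = random.Random(42)
xs = [rng.gauss(-0.9, 0.15) for _ in range(200)] + \
     [rng.gauss(-0.1, 0.15) for _ in range(200)]
fit = em_gmm_1d(xs)
print([round(m, 1) for m, _, _ in fit])  # component means, near -0.9 and -0.1
```

As in the paper's three-dimensional fit, the recovered components can then be used to assign each source a membership probability rather than a hard class.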
In Fig.~\ref{fig_gaussian_mixture}, we show the three classes in the $R_g - \log([\ion{O}{iii}]/\mathrm{H}\beta)$ plane and, to represent the third dimension, we show histograms of the radio spectral index $\alpha_{[4.85-10.45]}$. In accordance with our results based on a manual cut in radio spectral index (Fig.~\ref{fig_loudness_oiii}), we find three distinct classes: \begin{itemize} \item[(1)] associated with a steep radio index, high ionization ratio, and high radio loudness; \item[(2)] associated with a flat radio index, lower ionization ratio, and intermediate radio loudness; \item[(3)] associated with an inverted radio index, low ionization ratio, and low radio loudness. \end{itemize} \begin{figure*}[h!] \centering \includegraphics[width=\textwidth]{OIII-L14GHz.pdf} \includegraphics[width=\textwidth]{Eddratio-L14GHz.pdf} \caption{Distribution of the Effelsberg sources with respect to the radio luminosity, luminosity of [$\ion{O}{iii}$] line, and the Eddington ratio. \textbf{Top row}--Left panel: The distribution of the radio spectral index $\alpha_{[4.85-10.45]}$ with respect to the radio luminosity $L_{\rm 1.4GHz}$ and the luminosity of [$\ion{O}{iii}$] line. Right panel: The localization of the Effelsberg sources (low$+$high flux; orange points) in the plane of $L_{\rm 1.4GHz}$ and $L_{[\ion{O}{iii}]}$ with respect to the parent sample (grey points). \textbf{Bottom row}--Left panel: The distribution of steep, flat, and inverted radio spectral indices with respect to the radio luminosity $L_{\rm 1.4GHz}$ and the Eddington ratio $\eta$. 
Right panel: The distribution of the Effelsberg sources (orange points) in the same plane as in the left figure with respect to the parent sample (grey points).} \label{fig_1_4_luminosity_OIIIlum} \end{figure*} \section{Interpretation of the results} \label{interpretation} \subsection{General trends in radio-optical properties} By measuring flux densities with the Effelsberg radio telescope, calculating spectral indices, and analysing their distributions in the optical diagnostic diagrams, we recover the following basic trends in the radio-optical properties of our selected sources: \begin{itemize} \item[(a)] the radio luminosity increases in the direction of increasing ionization ratio [$\ion{O}{iii}$]/H$\beta$ (see Fig.~\ref{fig_1_4_luminosity}), with the exception of the LINERs, which show lower radio luminosities and low ionization ratios (plus higher stellar masses) compared to the Seyferts with high radio luminosities; \item[(b)] there is a trend of the radio spectral index steepening in the direction of increasing [$\ion{O}{iii}$]/H$\beta$ (see Fig.~\ref{fig:BPT-index-reverse}); \item[(c)] the radio loudness increases in the same direction, as shown by Fig.~\ref{fig_gaussian_mixture}. \end{itemize} Our results are representative for the nearby luminous, active SDSS-FIRST sources, which are predominantly located in the AGN (Seyfert-LINER) region of the optical diagnostic diagram. In comparison with previous studies, the determination of spectral indices allows us to connect the radio luminosity and radio-loudness trends \citep{1989AJ.....98.1195K,2007ApJ...658..815S} with the radio-morphological structures, such as the activity of radio primary components (cores), jet components, and radio lobes, as is known from studies of quasars and blazars \citep{1986A&A...168...17E}.
Considering the bolometric luminosities of AGN sources, for which the luminosity of the emission line [$\ion{O}{iii}$] serves as a proxy, there is a trend of less radio-luminous galaxies being located towards lower $L_{[\ion{O}{iii}]}$ and these sources have progressively flatter to inverted radio spectra (see the top row of Fig.~\ref{fig_1_4_luminosity_OIIIlum}, left panel). On the other hand, more radio-luminous sources have larger $L_{[\ion{O}{iii}]}$ and steeper spectra. To analyse the trends (a), (b), and (c) in relation with the accretion rate $\dot{M}_{\rm acc}$, we calculate the Eddington ratio, defined as $\eta \equiv L_{\rm bol}/L_{\rm Edd}$, where the bolometric luminosity is derived from the [$\ion{O}{iii}$] luminosity using $L_{\rm bol} = 3500\, L_{[\ion{O}{iii}]}$ \citep{2004ApJ...613..109H} and the Eddington luminosity is $L_{\rm Edd}=4\pi GM_{\bullet}m_{\rm p}c/\sigma_{\rm T}=1.3 \times 10^{38} (M_{\bullet}/M_{\odot})\,{\rm erg\,s^{-1}}$. The black-hole masses for SDSS-FIRST sources can be estimated from the black-hole mass--velocity dispersion $M_{\bullet}-\sigma_{\star}$ correlation, using the relation found by \cite{2009ApJ...698..198G}, \begin{equation} \log{(M_{\bullet}/M_{\odot})}=8.12+4.24\log{(\sigma_{\star}/200\,{\rm km\,s^{-1}})}\,. \label{eq_mass_dispersion} \end{equation} In the bottom row of Fig.~\ref{fig_1_4_luminosity_OIIIlum} (left panel), we depict the distribution of the spectral index $\alpha_{[4.85-10.45]}$ with respect to the radio luminosity $L_{\rm 1.4GHz}$ and the Eddington ratio $\eta$. All sources irrespective of their radio spectral index have very similar Eddington ratios, with $2/3$ of them within $\log{\eta}\sim[-3,-2]$. 
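The quantities above combine into a short computational chain for the Eddington-ratio estimate: Eq.~\eqref{eq_mass_dispersion} for $M_{\bullet}$, $L_{\rm bol}=3500\,L_{[\ion{O}{iii}]}$, and $L_{\rm Edd}=1.3\times10^{38}\,(M_{\bullet}/M_{\odot})\,{\rm erg\,s^{-1}}$. A minimal sketch (the example $\sigma_{\star}$ and $L_{[\ion{O}{iii}]}$ are illustrative values of ours):

```python
import math

def bh_mass_msun(sigma_kms):
    """M-sigma relation of Guetekin et al. (2009): log M = 8.12 + 4.24 log(sigma/200)."""
    return 10 ** (8.12 + 4.24 * math.log10(sigma_kms / 200.0))

def eddington_ratio(l_oiii_erg_s, sigma_kms):
    """eta = L_bol / L_Edd, with L_bol = 3500 L_[OIII] (Heckman et al. 2004)."""
    l_bol = 3500.0 * l_oiii_erg_s
    l_edd = 1.3e38 * bh_mass_msun(sigma_kms)
    return l_bol / l_edd

# e.g. sigma_star = 250 km/s and L_[OIII] = 1e40 erg/s:
print(f"{math.log10(eddington_ratio(1e40, 250.0)):.2f}")  # -3.10
```

This illustrative source lands inside the $\log{\eta}\sim[-3,-2]$ interval quoted for most of the sample.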
This similarity in accretion rates is driven by the strong dependency of $\eta$ on the stellar-velocity dispersion ($\eta \propto \sigma_{\star}^{-4}$) and the characteristics of our sample: the brightest radio galaxies with optical counterparts (dominated by the stellar emission) on the sky are -- mostly -- very massive, $M_{\star} \sim 10^{11-12}\, M_{\odot}$, have a large velocity dispersion, $\sigma_{\star} \sim 180-320\,{\rm km\,s^{-1}}$, and thereby have a heavy SMBH with $\log(M_{\bullet}/{M_{\odot}}) \sim 8.5$. \subsection{Relation between radio emission and Eddington ratio} Our results seem to be at odds with previous findings that the radio loudness anti-correlates with the Eddington ratio \citep{2007ApJ...658..815S,2011MNRAS.417..184B}. Since the ionization ratio $\log([\ion{O}{iii}]/{\rm H\beta})$ is proportional to the hardness of the ionization field, which in turn depends on the accretion efficiency expressed by the Eddington ratio, we find that sources with a lower radio loudness correspond to those with lower ionization ratio and their radio spectral indices are inverted, which is indicative of self-absorbed synchrotron emission (see Fig.~\ref{fig_gaussian_mixture}). On the other hand, the radio-louder sources are associated with larger ionization ratio and their spectral indices demonstrate optically thin synchrotron emission. Although our sample spans narrower ranges in $L_{\rm 1.4GHz}$ and $\log{\eta}$ when compared with extended samples like that of \citet{2007ApJ...658..815S}, it displays similar luminosities to their broad line radio galaxies (BLRGs) and radio loud quasars (RLQs) when using only core radio powers \citep[see][]{2011MNRAS.417..184B}, indicating that our sample might represent the narrow-line optical counterpart of those objects. To estimate how our radio-loudness definition and sample selection bias our results, we study integral quantities and compare the sources selected for the Effelsberg observations to the parent sample.
One main source of discrepancy could be the use of the host-galaxy optical luminosity in the radio-loudness calculation (see Eq.~\eqref{eq_radio_loudness})\footnote{We refrain from using $L_{[\ion{O}{iii}]}$ to estimate the optical AGN emission to avoid obtaining an artificial trend by comparing $R \propto 1/L_{[\ion{O}{iii}]}$ with $\eta \propto L_{[\ion{O}{iii}]}$.}. However, galaxy hosts of Effelsberg sources seem to be quite similar: $\gtrsim 90\%$ of them have elliptical morphologies and span only around 0.5 dex in $g$-band luminosities. Therefore, we conclude that the main trend of decreasing radio spectral index with radio loudness is caused by the variations in the core radio luminosities. The basic explanation of the trend of decreasing spectral indices $\alpha_{[4.85-10.45]}$ with the increasing ionization ratio [$\ion{O}{iii}$]/H$\beta$, which corresponds to the LINER--Seyfert transition in the optical diagnostic diagrams, might be the renewal of AGN activity in the past $\sim 10^5$--$10^7$ years \citep{2016A&ARv..24...10T,2017A&ARv..25....2P}. At least one third of the sources that have larger [$\ion{O}{iii}$]/H$\beta$ and steeper radio spectral indices have extended jet structures, that is, their radio emission is dominated by older radio lobes in which the electrons have cooled. Since the timescales for the formation of the radio extended structures and the optical narrow-line region brightening are quite different, the observed trend can be explained by two scenarios: objects in the Seyfert region must have been optically active for a long enough period for the radio lobes to develop, or the radio structures formed in the past and their activity has been re-triggered some decades ago. On the other hand, only $1/8$ of the sources with lower ionization ratios display jet-like structures. They might have started (or re-started) their nuclear activity very recently, which could explain the inverted spectral indices corresponding to compact self-absorbed core emission.
Since they did not have enough time to develop extended radio lobes, their radio luminosities are also smaller. Visual inspection of FIRST images and literature research for each object in the Effelsberg sample allowed us to classify them as jetted or non-jetted sources. We find that 2/3 of the flat- and steep-spectrum radio objects do not show extended structures that could be related to jet emission. Thus, we do not confirm the one-to-one relation between morphologies and spectral indices (compact/extended versus inverted/flat-steep) found by \citet{2011MNRAS.412..318M}. Our results seem to be in accordance with the correlations found for radio-quiet Palomar-Green quasars by \citet{2019MNRAS.482.5513L}. They found an increase in the line ratio \ion{Fe}{ii}/H$\beta$ from inverted, through flat, up to steep radio spectral indices, in the same way as we found for [\ion{O}{iii}]/H$\beta$. They interpret the correlations of the radio spectral index with the Eigenvector 1 parameters using the nuclear-outflow interpretation, which can at least partially be applied to our findings as well. Higher excitation for steep sources, which are also radio louder (see Fig.~\ref{fig_loudness_oiii}), could be indicative of a large-scale nuclear outflow, which is a source of optically thin synchrotron emission. On the other hand, sources with lower [\ion{O}{iii}]/H$\beta$ with flat to inverted radio slopes, which are radio weaker, could lack an outflow and the dominant source of radio emission would be the compact nucleus (coronal emission) that emits optically thick synchrotron emission. \begin{figure*}[h!] \centering \includegraphics[width=0.45\textwidth]{stellar_mass.jpg} \includegraphics[width=0.45\textwidth]{bh_mass.jpg} \caption{Distribution of stellar and black-hole masses for the SDSS-FIRST parent sample. \textbf{Left panel:} The SDSS-FIRST distribution of stellar masses in the optical BPT diagram. The colour bar indicates the logarithm of the stellar masses in units of solar masses.
\textbf{Right panel:} The distribution of the black hole masses inferred from the measurement of the stellar velocity dispersion in the SDSS-FIRST sources.} \label{fig_stellar_bh_masses} \end{figure*} \begin{figure*}[h!] \centering \includegraphics[width=0.9\textwidth]{BPT_stellar_mass.pdf} \includegraphics[width=0.9\textwidth]{BPT_stellar_mass_eff.pdf} \caption{Distribution of the average stellar mass across the parent and the low$+$high flux sample. \textbf{Top panel:} The SDSS-FIRST distribution of stellar masses in the optical BPT diagram for the whole parent sample. From the left to the right figures: galaxies with the average stellar mass in the range $\log(M_{\star}[M_{\odot}])<11.25$, $11.25<\log(M_{\star}[M_{\odot}])<11.5$, $\log(M_{\star}[M_{\odot}])>11.5$, respectively, are represented with density contours. \textbf{Bottom panel:} The distribution of the average stellar mass across the BPT diagram for the same mass bins as in the top panel, but for the combined low$+$high flux sample.} \label{fig_stellar_mass_bins} \end{figure*} \subsection{Flat and inverted spectral index towards LINERs: Implications for their character} The interpretation and the nature of LINERs still remain unclear, and several recent studies have attempted to shed more light on their characteristics and the potential effect of the environment \citep{2013A&A...558A..43S,2017MNRAS.467.3338C,2018arXiv180300946C}. The fact that $\gtrsim 50\%$ of LINERs in both frequency ranges have flat to inverted spectral indices points to the activity of the nucleus and the dominant contribution of the core and the jet radio emission to the overall radio emission of LINER galaxies. In our study of the radio-optical properties of SDSS-FIRST sources, LINERs are characterized by a lower ionization ratio [$\ion{O}{iii}$]/H$\beta$ in comparison with Seyfert AGN sources.
Keeping in mind the colour-stellar mass sequence \citep[][]{2009AIPC.1201...17S}, LINERs have the largest stellar and black hole masses (see Fig.~\ref{fig_stellar_bh_masses} for the distribution of stellar and black hole masses across the parent sample). In addition, they are associated with redder colours and smaller star formation rates than other optical spectral classes (SF, COMP, Seyfert) \citep{2016MNRAS.455L..82L}. The optical low-ionization emission could be explained by the presence of the extended population of hot, old post-asymptotic giant branch (post-AGB) stars, as indicated by the study of the radial brightness distribution by \citet{2013A&A...558A..43S}, which is consistent with a spatially extended ionizing source (stars) rather than a point source (nucleus). However, in the radio domain, the mean spectral indices $\overline{\alpha}_{[1.4-4.85]}$ and $\overline{\alpha}_{[4.85-10.45]}$ indicate an increase in the mean radio spectral index in comparison with Seyferts (see also the trend of the increasing spectral index in Fig.~\ref{fig:BPT-index-reverse}). This trend can be explained by the dominant contribution of the jet emission (the overall flat radio spectrum due to the collective contribution of self-absorbed structures) and the nuclear activity (inverted, optically thick synchrotron source). Therefore, we cannot support the claim of \citet{2013A&A...558A..43S} that LINERs are defined by the lack of AGN activity, at least not for our subsample of radio-emitting LINERs. The AGN ionizing field is likely not dominant in the optical domain, but according to our statistical study, the nuclear and jet activity must be present. This is also in agreement with the studies of \citet{2017MNRAS.467.3338C,2018arXiv180300946C} where they find that LINERs are redder and older than the control sample of galaxies in environments of different density.
They also show that LINERs are likely to populate low-density regions in spite of their elliptical morphology, that is, their occurrence in low-density galaxy groups is two times higher than the occurrence of the control, non-LINER galaxies. The fact that LINERs are more likely to be found in low-density regions points towards nuclear activity, since active galaxies typically do not follow the morphology-density relation \citep{2006A&A...460L..23P,2009MNRAS.399...88C,2010MNRAS.409..936P,2014MNRAS.437.1199C}. This also indicates the relevance of major mergers in the galaxy evolution, since the low-density galaxy groups favour major mergers due to the lower velocity dispersion among their members. Major mergers can restart the nucleus as has been found in several studies \citep{2006A&A...460L..23P,2009MNRAS.399...88C,2007MNRAS.375.1017A}. In addition, \citet{2015ApJ...806..147C} found that major mergers are a trigger for radio-loud AGN and the launching of relativistic jets. Using the luminosity-hardness diagram, which is applied to stellar black hole binaries, \citet{2005A&A...435..521N} argue that LINERs seem to occupy a ``low/hard'' state (geometrically thick and optically thin, hot accretion flows, low Eddington ratio, launching collimated jets), while low-luminosity Seyfert sources are in a ``high'' state (geometrically thin and optically thick, cold accretion discs, high Eddington ratios, incapable of launching collimated jets). In this picture, LINERs would be characterized by radiatively inefficient flows with recently (re)started nuclear and jet activity, which could explain their overall lower [\ion{O}{iii}]/H$\beta$ ratio in the optical diagnostic diagrams and the increasing spectral index at the transition between Seyfert and LINER sources. \section{Summary} \label{conclusions} We studied the radio-optical properties of selected cross-matched SDSS-FIRST sources, with a particular focus on the spectral index trends in the optical diagnostic diagrams.
Combining the high-flux sample ($S_{1.4}\ge 100\,{\rm mJy}$; Vitale et al. 2015a) and the low-flux sample presented in this paper ($10\,{\rm mJy} \leq S_{1.4} \leq 100\,{\rm mJy}$), we cover a total of 417 star-forming, composite, Seyfert, and LINER sources based on the standard spectral classification using the emission-line ratios. For a total of 209 sources (90 from the low-flux sample and 119 from the high-flux sample) we have flux density measurements at 10.45~GHz. First, we searched for potential trends of the radio spectral index between the classical optical spectral classes of galaxies. Second, we looked at how the different ranges of the radio spectral index are positioned in the optical diagnostic diagrams. While the first approach yielded basic statistics, the second approach turned out to be more appropriate for our sample in the context of radio-optical trends. We find a scenario that is largely consistent with models in which the source population shows a dichotomy that reflects a switch between radiatively efficient and radiatively inefficient accretion modes at similarly low accretion rates (compared to QSOs and quasars). The location of radio sources in the narrow emission-line diagnostic (BPT) diagrams shifts with the increasing importance of a radio-loud AGN away from galaxies dominated by radio emission powered by star formation. Hence, at a given stellar mass (for the stellar mass, see Fig.~\ref{fig_stellar_bh_masses}), the radio weakness of the star-formation-powered sources leads to a clear separation from the radio-loud objects. This is seen in the diagnostic diagrams in this paper and has also been put forward by similar investigations (e.g. Fig.~A1 of \citealt{2012MNRAS.421.1569B}).
Comparing the $\alpha_{1.4/4.85}$ values of the high- and low-flux density samples, one finds that while the high-flux density sources are dominated by the steep spectral index components and steepen towards higher frequencies, the low-flux density sample is significantly influenced by the increasing presence of flat-spectrum components. A detailed investigation of the spectral-index distributions of the high and low-flux density samples shows that in both samples the presence of flat-spectrum components implies a higher excitation in the optical diagnostic diagrams. In particular, the fainter sources that contain a significant contribution by a compact flat-spectrum component can be investigated through the present low-flux density sample. If we turn to the sources from both samples that have 10.45~GHz measurements and hence a known $\alpha_{[4.85-10.45]}$ value, we find the following: considering steep ($\alpha_{[4.85-10.45]} < -0.7$), flat ($-0.7 \leq \alpha_{[4.85-10.45]} \leq -0.4$), and inverted radio spectra ($\alpha_{[4.85-10.45]} > -0.4$), we recovered three basic classes with respect to the radio loudness and ionization ratio [$\ion{O}{iii}$]/H$\beta$: \begin{itemize} \item[(1)] sources with a steep radio index, high ionization ratio, and high radio loudness; \item[(2)] sources with a flat radio index, lower ionization ratio, and intermediate radio loudness; \item[(3)] sources with an inverted radio index, low ionization ratio, and low radio loudness. \end{itemize} In the optical diagnostic diagrams, these three classes correspond to the transition from Seyfert to LINER classification in terms of the ionization line ratios. Seyfert sources with higher ionization ratio are dominated by older, optically thin radio emission. Towards the lower ionization ratio, LINERs exhibit a flat to inverted radio spectral index, which is indicative of the compact, self-absorbed core and the jet emission.
In the local Universe, these trends may result from re-triggered nuclear and jet activity. \begin{acknowledgements} We are grateful to the referee, Brent Groves, for very constructive comments that helped to improve the manuscript. We thank Stefanie Komossa (MPIfR), Pavel Kroupa (University of Bonn), Thomas Krichbaum (MPIfR), Madeleine Yttergren (University of Cologne), Bozena Czerny (CFT PAN), Mary Loli Martinez-Aldama (CFT PAN), and Swayamtrupta Panda (CFT PAN) for very helpful discussions and input. This work was done with the financial support of SFB956 -- ``Conditions and Impact of Star Formation: Astrophysics, Instrumentation and Laboratory Research'' at the Universities of Cologne and Bonn and MPIfR, in which M.Z., G.B., A.E., N.F., S.B., J.-A. Z., K. H. are members of sub-group A2 -- ``Conditions for Star Formation in Nearby AGN and QSO Hosts'' and A1 --``Understanding Galaxy Assembly'', with M. V. and L. F. being its former members. Michal Zaja\v{c}ek acknowledges the financial support by National Science Centre, Poland, grant No. 2017/26/A/ST9/00756 (Maestro 9). \end{acknowledgements} \bibliographystyle{aa}
\section{Introduction} Reaction-diffusion systems are universally invoked as a reference modeling tool, owing to their inherent ability to sustain the spontaneous generation of -- spatially extended -- patterned motifs. The theory of pattern formation was laid down by A. Turing~\cite{Turing52} and later developed in a cross-disciplinary perspective, eventually becoming a pillar for explaining self-organization in nature~\cite{Strogatz01Nature,Pecora_etal97,Pismen06}. In the original Turing setting, two chemicals, here termed species, can relocate in space following Fickian diffusion. In general, a form of local feedback is needed: this activates the short-range production of a given species, which should be, at the same time, inhibited at long range. In standard reaction-diffusion systems, this combination is accomplished via autocatalytic reaction loops, with the inhibitors diffusing faster than the activators. The Local Activation and Lateral Inhibition (LALI) paradigm hence provides the reference frame for the onset of self-organized patterns of the Turing type. Both in the continuous and discrete (lattice- or network-based~\cite{OthmerScriven74,NakaoMikhailov10,AsllaniChallengerPavoneSacconiFanelli14}) versions, diffusion is a key ingredient for the Turing paradigm to apply. Assume in fact that a stable fixed-point (or, alternatively, a time-dependent equilibrium, e.g. a limit cycle) exists for the scrutinized reaction scheme in its a-spatial, local version. Then, the spatial counterpart of the inspected model admits a homogeneous equilibrium. This follows trivially by recalling that, by definition, the Laplacian, the operator that implements diffusive transport, returns zero when applied to a uniform density background. 
Tiny erratic disturbances that perturb the uniform solution may therefore activate the diffusion term and, in turn, feed the reaction part, promoting a self-sustained instability that sits at the core of the Turing mechanism. Natural systems display, however, an innate drive towards self-organization, which certainly transcends the realm of validity of Turing's ideas. In many cases of interest, the quantities to be monitored are anchored to the nodes of a virtual graph. The information to be processed on site is carried across the edges of the graph. Nevertheless, the spreading of information springing from nearby nodes is not necessarily bound to obey linear diffusion, as in the spirit of the original Turing formulation. Instead, it can in principle comply with a large plethora of distinct dynamical modalities. In ecology, the number of nodes reflects biodiversity: inter-species interactions are usually epitomized by a quadratic response function, which accounts for competitive predator-prey or symbiotic dependences~\cite{May72,ThebaultFontaine10}. In genetic networks, inputs between neighboring genes are shaped by a sigmoidal profile, a non-linear step function often invoked to mimic threshold activation processes~\cite{BecskeiSerrano00}. At a radically different scale, the dynamics of opinions in sizable social systems are customarily traced back to pairwise exchanges, which can echo physical encounters or be filtered by device-assisted long-ranged interactions~\cite{castellano2009statistical}. Similarly, global pandemic events mirror complex migration patterns, which cannot be explained by resorting to the simplistic archetype of diffusion~\cite{pastor2001epidemic}. A first insight into the study of pattern formation for a system made of immobile species was offered in~\cite{BullaraDecker15}. 
In the latter paper, the colored motifs on the skin of a zebrafish were reproduced by postulating an experimentally justified long-ranged regulatory mechanism, in the absence of cell motility. The differential growth mechanism identified by Bullara and De Decker~\cite{BullaraDecker15} yields a LALI paradigm alternative to that stemming from conventional reaction-diffusion schemes. Motivated by these observations, we here take one leap forward by proposing a generalized theory of pattern formation which is not bound to satisfy the LALI conditions. {Interactions are assumed to be mediated} by local (on site) and non local (distant, possibly long-ranged) pairings among species. More specifically, we will assume that non local interactions, as specified by the off-diagonal elements of a binary or weighted adjacency matrix, can be scaled by the node degree, following a mean-field working ansatz. The elemental units of the system being inspected (species) are permanently linked to the nodes they are bound to, and therefore classified as immobile. {Local interactions will also be referred to as reactions, to establish an ideal bridge with standard reaction-diffusion systems.} \section{The scheme of interaction} Consider a reactive system made of two mutually interacting species and label with $x$ and $y$ their respective concentrations. Assume: \begin{equation} \left\{\begin {split} \dot x =& f(x,y) \\ \dot y =& g(x,y).\\ \end{split}\right. \label{unc_sys} \end{equation} where $f(\cdot,\cdot)$ and $g(\cdot,\cdot)$ are generic non-linear functions. Suppose that the system admits an equilibrium solution, $(x^*,y^*)$, which can be either a fixed-point or a limit cycle. Linearizing equations (\ref{unc_sys}) around ${\boldsymbol w^*}= (x^*,y^*)$ returns a (linear) first order system for the evolution of the imposed perturbation $\delta {\boldsymbol w} = (\delta x, \delta y)^T$, where $T$ stands for the transpose operation. 
In formulae, one gets $\delta \dot{\boldsymbol w} = \boldsymbol{J} (\boldsymbol w^*) \delta{\boldsymbol w}$, where $\boldsymbol{J}(\boldsymbol w^*)=\left( \begin{array}{ c c} \partial_xf^* & \partial_yf^* \\ \partial_xg^* & \partial_yg^* \end{array} \right)$ is the Jacobian matrix of the system evaluated at equilibrium. When the equilibrium is a fixed-point, the stability of ${\boldsymbol w^*}$ is set by the spectrum of $\boldsymbol{J}$: if the largest real part of the eigenvalues of $\boldsymbol{J}$ is negative the imposed perturbation fades away exponentially and the fixed-point is deemed stable. This amounts to requiring $\Tr(\boldsymbol J)<0$ and $\det(\boldsymbol J)>0$, a pre-requisite condition for our analysis to apply. When the equilibrium solution is a limit cycle, the Jacobian exhibits a periodic dependence on time and the stability can be assessed by evaluating the Floquet exponents~\cite{Grimshaw17,ChallengerBurioniFanelli15,LucasFanelliCarletti18}. In the following, we develop our reasoning by assuming a constant Jacobian. Our analysis and conclusions, however, readily extend to a limit cycle setting, provided one replaces $\boldsymbol{J}$ with the time-independent Floquet matrix. Starting from a local formulation of the reactive model, we move forward and account for non local interactions. We begin by replicating the examined system on $N$ different patches, defining lumps of embedding space or specific criteria to effectively group the variables involved. The collection of $N$ isolated spots where the local {interaction} scheme is replicated are the nodes of a network, identified by the index $i=1,...,N$. The edges of the network provide a descriptive representation of the non-local {interaction} scheme: $A_{ij}=1$, if nodes $i$ and $j$ share a link, {i.e. if they are bound to mutually interact} (the formalism extends readily to account for weighted interactions). 
For the sake of simplicity we limit the analysis to undirected networks, $A_{ij}=A_{ji}$. We note that, by definition, we have $A_{ii}=1$. The connectivity of node $i$ is hence $k_i^A=\sum_j A_{ij}$. Rewrite the non-linear function $f$ as $f(x,y)=f_0(x,y)+f_1(x,y)$, where $f_0$ accounts for the interaction terms that are genuinely local, i.e. those not modulated by distant couplings. Conversely, $f_1(x,y)$ will be replaced by a reaction function which encodes the remote interaction. In the spirit of a nearest-neighbors mean-field approximation, we shall substitute $f_1(x,y)$ with its averaged counterpart in the heterogeneous space of the interaction network. Similarly, $g$ will be split as $g(x,y)=g_0(x,y)+g_1(x,y)$. Then we set: \begin{equation} \label{system1} \left\{\begin {split} \dot x_i =& f_0(x_i,y_i) + \frac{1}{k_i^A}\sum_jA_{ij}\tilde{f}_1(x_i,x_j,y_i,y_j)\\ \dot y_i =& g_0(x_i,y_i) + \frac{1}{k_i^A}\sum_jA_{ij}\tilde{g}_1(x_i,x_j,y_i,y_j). \end{split}\right. \end{equation} where $\tilde{f}_1$ (resp. $\tilde{g}_1$) reduces to $f_1$ (resp. $g_1$) when the system is in a homogeneous state. Mathematically, we require $\tilde{f}_1(x_i,x_j,y_i,y_j) \delta_{ij} = f_1(x_i,y_i)$ and $\tilde{g}_1(x_i,x_j,y_i,y_j) \delta_{ij} = g_1(x_i,y_i)$. As an illustrative example, consider a local quadratic interaction of the type $f_1(x_i)=x_i^2$. The latter can be turned into $\tilde{f}_1(x_i,x_j)=x_i x_j$, a choice that matches the above constraints. This would yield $(x_i/k_i^A) \sum_j A_{ij} x_j = x_i \langle x \rangle_i$ in the first of equations (\ref{system1}), where the average $\langle \cdot \rangle_i$ runs over the nodes adjacent to $i$. In practical terms, as anticipated above, we replace the local reaction term $f_1(x_i,y_i)$ with a non-local counterpart that includes interactions at a distance, where the average runs over the $k_i^A$ terms defining the neighborhood of $i$. The remote interaction is assumed to depend only on the species that populate the selected pair of nodes. 
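The mean-field substitution can be checked in a few lines. The following sketch (function and variable names are our own) uses the quadratic example $\tilde{f}_1(x_i,x_j)=x_i x_j$ and verifies that the non-local term reduces to the local one, $f_1(x_i)=x_i^2$, when the network is the identity:

```python
import numpy as np

def coupling(A, x):
    """Mean-field remote term (1/k_i) * sum_j A_ij * x_i * x_j,
    the non-local version of the local quadratic term f_1(x_i) = x_i**2."""
    k = A.sum(axis=1)          # degrees k_i^A (A_ii = 1, so k_i >= 1)
    return x * (A @ x) / k     # equals x_i * <x>_i

N = 5
x = np.array([0.5, 1.0, 1.5, 2.0, 2.5])

# With A = identity, the nodes are decoupled and the term reduces to x_i^2.
assert np.allclose(coupling(np.eye(N), x), x**2)

# With a non-trivial network the term mixes neighboring concentrations.
A = np.ones((N, N))            # fully connected, self-loops included
assert np.allclose(coupling(A, x), x * x.mean())
```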
By adding and subtracting $f_1(x_i,y_i)$ (resp. $g_1(x_i,y_i)$) in the first (resp. second) of equations (\ref{system1}) and making use of the definition of $f(\cdot,\cdot)$ and $g(\cdot,\cdot)$ yields: \begin{equation} \left\{\begin {split} \dot x_i = & f(x_i,y_i) + \sum_j\LL_{ij}^{\rm R}\tilde{f}_1(x_i,x_j,y_i,y_j)\\ \dot y_i = & g(x_i,y_i)+ \sum_j\LL_{ij}^{\rm R}\tilde{g}_1(x_i,x_j,y_i,y_j)\\ \end{split}\right. \label{syst_eq} \end{equation} where $\LL_{ij}^{\rm R} = \frac{A_{ij}}{k_i^A} - \delta_{ij}$. Observe that the above equations are formally identical to system (\ref{unc_sys}) when $\boldsymbol A \equiv \mathbb{1}_N$, where $\mathbb{1}_N$ stands for the $N \times N$ identity matrix. In this case, in fact, the $N$ nodes are formally decoupled and system (\ref{syst_eq}) reduces to (\ref{unc_sys}), for each node of the ensemble. System (\ref{syst_eq}) admits an obvious homogeneous solution, found by setting $(x_i,y_i)=(x^*,y^*)$ $\forall i$. Can one make the homogeneous equilibrium unstable to tiny non-homogeneous perturbations by acting on the network of interactions, as encoded in the operator $\boldsymbol{{\LL}}^{\rm R}$? {Answering this question amounts to developing a general theory of pattern formation for interacting systems, in the absence of species diffusion.} To expand along this line one needs to characterize the operator $\boldsymbol{{\LL}}^{\rm R}$. As shown in the Appendix, it is easy to prove that $\boldsymbol{{\LL}}^{\rm R}$ displays a non-positive spectrum and has the eigenvector $(1,\dots,1)^T$ associated with the zero eigenvalue. This justifies referring to $\boldsymbol{{\LL}}^{\rm R}$ as a {\it reactive Laplacian}. In the literature it is also known as the consensus Laplacian~\cite{Ghosh_etal14,Lambiotte_etal11,OlfatiSaber07,Krause09}. 
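These two spectral properties of $\boldsymbol{\LL}^{\rm R}$ are straightforward to verify numerically; a minimal sketch under the convention $A_{ii}=1$ used here (the random graph and the tolerances are our own choices):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 20
# symmetric random adjacency matrix with self-loops A_ii = 1, as in the text
A = (rng.random((N, N)) < 0.2).astype(float)
A = np.maximum(A, A.T)
np.fill_diagonal(A, 1.0)

k = A.sum(axis=1)                  # degrees k_i^A
L_R = A / k[:, None] - np.eye(N)   # reactive Laplacian L^R_ij = A_ij/k_i - delta_ij

# The spectrum is non-positive, and (1,...,1)^T is a null eigenvector.
evals = np.linalg.eigvals(L_R)
assert np.all(evals.real <= 1e-8)
assert np.allclose(L_R @ np.ones(N), 0.0)
```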
It is important to notice that this operator is different from the diffusive Laplacian ($L^{diff}_{ij} = A_{ij} - k_j^A\delta_{ij}$) and its normalized version ($L^{RW}_{ij} = \frac{A_{ij}}{k_j^A} - \delta_{ij}$), which describes a random walk. {It is not possible to reduce the reactive Laplacian to the Laplacian of diffusion, unless the complex network reduces to a simple regular lattice.} \section{Linear stability analysis} The stability of the homogeneous equilibrium $(x_i,y_i)=(x^*,y^*)$, $\forall i$, can be analytically probed {\cite{pecora1998master}}. Introduce a small inhomogeneous perturbation, $x_i=x^*+u_i$, $y_i=y^*+v_i$, and expand the governing equations up to the leading (linear) order to yield: \begin{equation} \left( \begin{array}{ c } \dot{\boldsymbol u}\\ \dot{\boldsymbol v} \end{array} \right) = \boldsymbol{J} \otimes \mathbb{1} \left( \begin{array}{ c } \boldsymbol u\\ \boldsymbol v \end{array} \right) + \boldsymbol{J}_R \otimes \boldsymbol{{\LL}}^{\rm R} \left( \begin{array}{ c } \boldsymbol u\\ \boldsymbol v \end{array} \right) \label{pert_syst} \end{equation} where $\boldsymbol u=\left(u_1,u_2, ..., u_N \right)$, $\boldsymbol v=\left(v_1,v_2, ..., v_N \right)$, $\boldsymbol{J}_R = \left( \begin{array}{ c c} \partial_2\tilde{f}_1^* & \partial_4\tilde{f}_1^* \\ \partial_2\tilde{g}_1^* & \partial_4\tilde{g}_1^* \end{array} \right)$. The symbols $\partial_2$ and $\partial_4$ indicate that the derivatives are computed with respect to the second and fourth arguments of the function to which they are applied. The symbol $\otimes$ refers to the Kronecker product. More details on the derivation of eq. (\ref{pert_syst}) are given in the Appendix. To solve the above linear system, we introduce the eigenvectors ($\phi_j^{(\alpha)}$) and eigenvalues ($\Lambda^{(\alpha)}$) of the reactive Laplacian $\boldsymbol{{\LL}}^{\rm R}$, i.e. $\sum_j\LL_{ij}^{\rm R}\phi_j^{(\alpha)}=\Lambda^{(\alpha)}\phi_i^{(\alpha)}$ with $\alpha=1,..., N$. 
Expanding the perturbations on the basis of $\phi_j^{(\alpha)}$, $\alpha=1,..., N$, we obtain (see Appendix) the $2 \times 2$ modified Jacobian $\boldsymbol{J}_{\alpha} =\boldsymbol{J} + \Lambda^{(\alpha)}\boldsymbol{J}_R$, which determines the stability of the system. For each choice of $\Lambda^{(\alpha)}$, one needs to compute $\lambda(\alpha)$, the largest real part of the eigenvalues of $\boldsymbol{J}_{\alpha}$. The discrete collection of $\lambda(\alpha)$ vs. $-\Lambda^{(\alpha)}$ defines the so-called dispersion relation. The homogeneous equilibrium is stable if $\lambda(\alpha) < 0$, $\forall \alpha$. Conversely, the perturbation grows if at least one $\lambda(\alpha)$ turns out to be positive. In this latter case patterns can emerge, thus breaking the symmetry of the unperturbed initial condition. The condition of instability can be expressed in a compact form (provided $\boldsymbol{J}^{-1}$ exists), as we prove in the Appendix. Specifically, we find that the instability sets in if one of the following conditions holds: (i) if $\Tr(\boldsymbol{J}_R)<0$ and provided $\Lambda^{(\alpha)} < -\frac{\Tr(\boldsymbol{J})}{\Tr{\boldsymbol{J}_R}}$, for at least one choice of $\alpha$; (ii) if $\det(\boldsymbol{J}_R)<0$ and provided $\Lambda^{(\alpha)} < \Lambda_{-}$, for at least one choice of $\alpha$; (iii)\ if $\det(\boldsymbol{J}_R)>0$, $\Tr(\boldsymbol{J}^{-1}\boldsymbol{J}_R)>0$ and provided $\Lambda_-<\Lambda^{(\alpha)} < \Lambda_+$, for at least one choice of $\alpha$. Here, $\Lambda_{\pm} = \frac{1}{2}\biggl[ -\Tr(\boldsymbol{J}^{-1}\boldsymbol{J}_R)\pm \sqrt{[\Tr(\boldsymbol{J}^{-1}\boldsymbol{J}_R)]^2 - 4\det(\boldsymbol{J}^{-1}\boldsymbol{J}_R)}\biggr]$. The above conclusions hold for a two species model that displays a constant homogeneous equilibrium. {As such, it } generalizes the renowned Turing instability conditions for reaction-diffusion models, to systems {made of immobile species} subject to local and {non local (remote)} interactions. 
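The dispersion relation amounts to an eigenvalue sweep over the reactive-Laplacian spectrum. The following sketch uses toy matrices of our own choosing (not taken from any specific model) to illustrate case (ii), where $\det(\boldsymbol{J}_R)<0$ lets a mode with sufficiently negative $\Lambda^{(\alpha)}$ destabilize an otherwise stable equilibrium:

```python
import numpy as np

def dispersion_relation(J, J_R, Lambda):
    """Largest real part of eig(J + Lambda^(alpha) * J_R) for each
    eigenvalue Lambda^(alpha) of the reactive Laplacian."""
    return np.array([np.linalg.eigvals(J + lam * J_R).real.max()
                     for lam in Lambda])

# toy example: stable local dynamics destabilized by the coupling
J   = np.array([[-1.0,  0.5], [-0.5, -1.0]])   # Tr(J) < 0, det(J) > 0
J_R = np.array([[-2.0,  0.0], [ 0.0,  1.0]])   # det(J_R) < 0 -> condition (ii)
Lam = np.linspace(0.0, -2.0, 41)               # reactive-Laplacian eigenvalues are <= 0

lam_max = dispersion_relation(J, J_R, Lam)
assert lam_max[0] < 0     # Lambda = 0: the homogeneous mode is stable
assert lam_max.max() > 0  # some mode with Lambda^(alpha) < 0 drives the instability
```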
The analysis extends to the setting where the homogeneous state is a synchronous collection of oscillators performing in unison. As we shall also remark in the following, the novel route to pattern formation overcomes the LALI constraint. {Before turning to the applications, we remark that generalized transport schemes have been considered in the past which enable one to relax the LALI request. This includes for instance accounting for cross-diffusion \cite{vanag2009cross,madzvamuse2015cross}. It should be emphasized, however, that within the current scheme, species are immobile, hence permanently bound to their reference nodes. It is in fact the combination of local and non-local interaction rules that materializes in the discrete non-local operator, different from the Laplacian of diffusion, which rules the dynamics of the system in its linear approximation.} \begin{figure}[htb] \centering \begin{adjustbox}{center} \includegraphics[width=1.05\columnwidth]{figure_Volterra_3.pdf} \end{adjustbox} \caption{Panels (a) and (b): node colors code for the asymptotic stationary concentration of the species (predators, outer ring, and prey, inner ring), for the Volterra model for two distinct interaction networks. Panel (c): the associated dispersion relations $\lambda(\alpha)$ vs. $-\Lambda^{(\alpha)}$. Red symbols refer to the one-dimensional lattice with periodic boundary conditions, and blue symbols follow from an ER network. Panel (d): for specific choices of the parameters, different from those adopted in panels (a) and (b), the random inhomogeneous perturbations of the homogeneous equilibrium give rise to predator (dashed line)-prey (continuous line) oscillations, testifying to the rich gallery of generated patterns. } \label{fig:fig1} \end{figure} \section{Applications and discussions} As a first example, we consider the reaction scheme known as the Volterra model~\cite{mckane2005predator}, $f(x,y)= c_1 xy - d x$ and $g(x,y)=ry - sy^2 - c_2xy$. 
The variable $x$ identifies the concentration of predators, while $y$ stands for the prey. All parameters are positive. The Volterra equations admit a fixed-point, $x^* = \frac{c_1r-sd}{c_1c_2}$, $y^*=\frac{d}{c_1}$, which is meaningful and stable provided $c_1r-sd>0$. We now turn to considering $N$ replicas of the model, by making the variables depend on the node index $i$, associated with different ecological niches. Following the above lines of reasoning, we assume that species can sense the remote interaction with other communities populating the neighboring nodes. For instance, the competition of prey for food and resources can be easily extended so as to account for a larger habitat which embraces adjacent patches. At the same time, predators can benefit from a coordinated action to hunt in teams. Assume that the linear terms in $f(x,y)$ and $g(x,y)$ stay local (death term for predators and birth term for prey), i.e. $f_0(x,y)=-dx$, $g_0(x,y)=ry$. We generalize the terms $f_1(x,y)=c_1xy$ and $g_1(x,y)=-sy^2-c_2xy$ to the case of remote interactions as: \begin{equation} \left\{\begin {split} \dot x_i =& -d x_i + \beta \frac{c_1}{k_i} y_i\sum_j A_{ij}x_j + (1-\beta)\frac{c_1}{k_i}x_i\sum_j A_{ij}y_j\\ \dot y_i =& ry_i - \frac{s}{k_i}y_i\sum_j A_{ij}y_j - \frac{c_2}{k_i}y_i\sum_j A_{ij}x_j\\ \end{split}\right. \label{Volterra} \end{equation} where $\beta \in (0,1)$ modulates the relative strength of the non local terms that drive the evolution of the predators. By simple manipulations, we cast the previous equations so as to make explicit the dependence on the reactive Laplacian $\boldsymbol{{\LL}}^{\rm R}$. In formulae, one gets \begin{eqnarray} \nonumber \dot x_i &=& c_1 x_iy_i - d x_i + \beta c_1y_i\sum_j \LL^{\rm R}_{ij}x_j + (1-\beta)c_1x_i\sum_j\LL^{\rm R}_{ij}y_j\\ \nonumber \dot y_i &=& ry_i - sy_i^2 - c_2x_iy_i - sy_i\sum_j\LL^{\rm R}_{ij}y_j - c_2y_i\sum_j\LL^{\rm R}_{ij}x_j. 
\end{eqnarray} The system can then be studied by resorting to the strategy developed above. The interaction parameters can be set so as to make the dispersion relation positive, as displayed in Fig. \ref{fig:fig1}(c). Red symbols refer to the generalized Volterra model organized on a closed lattice made of $N=30$ nodes (panel (a) of Fig.~\ref{fig:fig1}), while the blue symbols are obtained when the sites are connected by an Erd\H{o}s-R\'enyi (ER) network with $\langle k \rangle = 3.3$ (panel (b)). The colors of the nodes stand for the asymptotic stationary state of the system across the networks, showing the formation of distinct patterns for predators (outer ring) and prey (inner ring) in the two cases. While the lattice yields a regular, equally spaced pattern, a disordered distribution of populated patches is instead obtained for the ER network, reflecting the randomness of the connections. We remark that in both cases the extinction in a patch of one of the species is typically associated with the disappearance of the other one in the same node, resulting in a global habitat with populated patches alternating with uninhabited ones. The latter play the role of natural barriers separating colonized niches. The emerging patterns of coexistence differ from those observed in standard reaction-diffusion systems, where predators and prey tend to cluster in distinct sites. The mutually exclusive distribution of species, as obtained in classical reaction-diffusion schemes, bears the imprint of the LALI assumption. The latter does not apply to the patterns reported in Fig.~\ref{fig:fig1}. As a matter of fact, the associated Jacobian matrix displays non-positive diagonal entries. As a side result, we notice that in some cases, Fig. \ref{fig:fig1}(d), the perturbation may trigger the emergence of regular oscillations, typical of a predator-prey cycle. This behavior cannot be reproduced by a deterministic Volterra model in the absence of remote couplings~\cite{mckane2005predator}. 
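The Volterra fixed point and its stability are easy to verify; a sketch with hypothetical parameter values chosen so that $c_1 r - s d > 0$:

```python
import numpy as np

# hypothetical parameter values; any choice with c1*r - s*d > 0 works
c1, c2, d, r, s = 1.0, 1.0, 0.5, 1.0, 0.5

x_star = (c1 * r - s * d) / (c1 * c2)  # predator equilibrium
y_star = d / c1                        # prey equilibrium

# residuals of f and g at the fixed point vanish
f = c1 * x_star * y_star - d * x_star
g = r * y_star - s * y_star**2 - c2 * x_star * y_star
assert abs(f) < 1e-12 and abs(g) < 1e-12

# Jacobian at (x*, y*): Tr < 0 and det > 0 -> locally stable
J = np.array([[c1 * y_star - d,  c1 * x_star],
              [-c2 * y_star,     r - 2 * s * y_star - c2 * x_star]])
assert np.trace(J) < 0 and np.linalg.det(J) > 0
```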
\begin{figure}[htb] \centering \begin{adjustbox}{center} \includegraphics[width=1.05\columnwidth]{figuraSL3.pdf} \end{adjustbox} \caption{In (a) the instability domain is depicted: $c=\sigma_{Re}+\sigma_{Im}\beta_{Im}/\beta_{Re}$ is plotted at varying $\sigma_{Im}$ and $\beta_{Im}$, while $\sigma_{Re}=\beta_{Re}=1$. The instability condition for the limit cycle $w^*(t)$ coincides with the request $c<0$. Panel (b) shows the dispersion relation for two different networks, a lattice with $k=6$ (red empty circles) and a Watts-Strogatz network with the same mean degree (blue full circles). Panels (c) and (d) show the patterns of $|w_i|$, in the space of the nodes and against time, corresponding to the previously mentioned network structures and ensuing dispersion relations.} \label{fig:fig2} \end{figure} As a second example, we consider a set of {interacting} Stuart-Landau oscillators \cite{BordyugovPikovskyRosenblum10,Teramae04,Selivanov_etal12,PanaggioAbrams15,Zakharova_etal14}. The Stuart-Landau equation takes the form $\dot w = \sigma w - \beta w |w|^2$, where $w=w_{Re} + i w_{Im}$ is a complex number, as well as the parameters $\sigma, \beta$. The above equation yields a stable limit cycle which can be cast in the explicit form $w^*(t) = \sqrt{\frac{\sigma_{Re}}{\beta_{Re}}}\exp(i \omega t)$, where $\omega = \sigma_{Im} - \beta_{Im} \sigma_{Re}/\beta_{Re}$. To determine the stability of the time dependent periodic solution $w^*$, one can introduce a perturbation of $w^*$ in the form $w^*\left(1+u \right) \exp( v)$, where $u$ and $v$ are small quantities. Expanding at the linear order of approximation yields a Jacobian matrix $\boldsymbol{J} = \left( \begin{array}{ c c} -2\sigma_{Re} & 0 \\ -2\beta_{Im} \sigma_{Re}/\beta_{Re} & 0 \end{array} \right).$ Interestingly, even if $w^*$ is periodic in time, $\boldsymbol{J}$ is constant, owing to the specificity of the Stuart-Landau equation and to the form of the imposed perturbation. 
This observation allows us to proceed in the analysis without resorting to the Floquet machinery. The requirement of a stable limit cycle $w^*$ translates into $\sigma_{Re}>0$, which consequently implies $\beta_{Re}>0$ (since $\sigma_{Re} / \beta_{Re}>0$ for the solution to exist). Stuart-Landau oscillators are often coupled diffusively, either on a regular lattice or on a complex network, so as to result in the celebrated Ginzburg-Landau equation. For suitable choices of the parameters, the oscillators evolve in unison, giving rise to a fully synchronized homogeneous state. The latter can, however, break apart, upon injection of a tiny disturbance, if the parameters are assigned in the complementary reference domain. This modulational instability, known in the literature as the Benjamin-Feir instability, follows from the subtle interplay between reaction and diffusion terms~\cite{DiPattiFanelliCarletti16,BenjaminFeir67}. Inspired by the analysis carried out above, here we consider a collection of non-diffusive Stuart-Landau oscillators, interacting via their linear reaction term. This choice materializes in a variant of the traditional Ginzburg-Landau equation, giving rise to a novel instability of the Benjamin-Feir type. Consider an ensemble of $i=1,...,N$ oscillators at the nodes of a network, each characterized by the complex amplitude $w_i$. The family of Stuart-Landau oscillators evolves therefore as: $\dot w_i = \frac{\sigma}{k_i} \sum_jA_{ij} w_j - \beta w_i |w_i|^2$ or, equivalently, $\dot w_i = \sigma w_i - \beta w_i |w_i|^2 + \sigma \sum_j \LL_{ij}^{\rm R} w_j$. 
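As a sanity check on the uncoupled dynamics, the limit-cycle amplitude $|w^*| = \sqrt{\sigma_{Re}/\beta_{Re}}$ can be recovered by direct integration of a single Stuart-Landau equation; a sketch with hypothetical parameters (forward Euler, step size of our own choosing):

```python
import numpy as np

# hypothetical parameters with sigma_Re, beta_Re > 0 (stable limit cycle)
sigma = 1.0 + 0.5j
beta  = 1.0 + 0.3j

w = 0.1 + 0.0j                 # small initial amplitude
dt = 1e-3
for _ in range(20000):         # forward-Euler integration of dw/dt
    w += dt * (sigma * w - beta * w * abs(w)**2)

# the orbit settles on the predicted amplitude |w*| = sqrt(sigma_Re / beta_Re)
assert abs(abs(w) - np.sqrt(sigma.real / beta.real)) < 1e-3
```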
A linear stability analysis which moves from the ansatz $w_i=w^*\left(1+u_i \right) \exp( v_i)$ (with $u_i,v_i \ll 1$) and implements the above calculation strategy leads to the modified Jacobian $\boldsymbol{J}_{\alpha} = \boldsymbol{J} + \Lambda^{(\alpha)}\boldsymbol{J}_R$, where $\boldsymbol{J}_R = \left( \begin{array}{ c c} \sigma_{Re} & -\sigma_{Im} \\ \sigma_{Im} & \sigma_{Re} \end{array} \right).$ As $\Tr(\boldsymbol{J}_R) = 2 \sigma_{Re}>0$, the homogeneous solution turns unstable only if $\det(\boldsymbol J_{\alpha})<0$. It is easy to prove that $\det(\boldsymbol J_{\alpha})<0$ given $\Lambda^{(\alpha)} > 2\frac{\sigma_{Re}^2+\sigma_{Re}\sigma_{Im}\beta_{Im}/\beta_{Re}}{\sigma_{Re}^2+\sigma_{Im}^2}$, for at least one choice of $\alpha$. Since the eigenvalues of the reactive Laplacian are by definition non-positive and $\sigma_{Re}>0$, the instability requirement materializes in the compact necessary condition $\sigma_{Re}+\sigma_{Im}\frac{\beta_{Im}}{\beta_{Re}}<0$, which constitutes the revisitation of the Benjamin-Feir instability in the present setting. Fig. \ref{fig:fig2}(a) displays the region of instability in the reference plane ($\sigma_{Im}, \beta_{Im}$) (when setting $\sigma_{Re}$ and $\beta_{Re}$ to unity). The dashed white line marks the transition to the region of instability. Emerging patterns, and their associated dispersion relations, $\lambda(\alpha)$ vs. $-\Lambda^{(\alpha)}$, are depicted for the Stuart-Landau oscillators distributed on, respectively, a lattice with non-local couplings and a Watts-Strogatz graph. In the Appendix, we show that long-range couplings promote the stability of the system. Summing up, we have here introduced a novel mechanism for the emergence of coherent patterns {for systems made of interacting, although immobile, species}. Remote interactions, treated in a mean-field approximation, lead to a reactive Laplacian, whose spectrum ultimately sets the conditions for the instability. 
Taken all together, our theory paves the way for investigating novel mechanisms of pattern formation, by providing a flexible alternative to traditional schemes that implement the LALI constraint. \addcontentsline{toc}{chapter}{Bibliography} \bibliographystyle{apsrev4-1}
\section{Results} We present the results of our evaluation using the two discussed approaches. For each dataset collected during the two data collection sessions, we used the first 80\% of the ECG trace across 49 subjects as the training data and the remaining 20\% of the signal as the test data. Thus, we obtained two training sets and two test sets in total, one pair from each session. We are also interested in examining the stability of ECG as a biometric, in other words, how invariant it remains for each individual over long periods of time. For this reason, each table includes results under three conditions. In the first two, both training and test sets are taken from the same data collection session (first or second). In the third condition, the training set is taken from the first session, while the test set comes from the second session, collected four months later. The results of the first evaluation approach are presented in Table~\ref{tab:first}. The average performance of classifiers for each target user is assessed using equal error rate (EER) as the metric, presented as the average of individual EER scores obtained for each of the 49 users. \begin{table}[htbp] \centering \begin{tabular}{c c c c} \hline \hline Training & Testing & Average EER & Standard Deviation\\ \hline S1 & S1 & 3.22\% & 2.99\% \\ S2 & S2 & \textbf{2.44\%} & 2.40\% \\ S1 & S2 & 9.65\% & 11.35\% \\ \hline \hline \end{tabular} \caption{Results obtained using the first evaluation approach. The first two columns reflect from which session (S1 or S2) the corresponding dataset originates. Lower scores indicate better performance.} \label{tab:first} \end{table} The results of the second evaluation approach are shown in Table~\ref{tab:second}. In this case, we use half total error rate (HTER) as the metric, as we set the decision threshold during the training process, which represents a more realistic scenario. 
We obtain the average HTER score by averaging the individual HTER scores for each evaluated classifier. \begin{table}[htbp] \centering \begin{tabular}{c c c c} \hline \hline Training & Testing & Average HTER & Standard Deviation\\ \hline S1 & S1 & 5.86\% & 10.00\% \\ S2 & S2 & \textbf{4.58\%} & 9.35\% \\ S1 & S2 & 30.02\% & 17.40\% \\ \hline \hline \end{tabular} \caption{Results obtained using the second evaluation approach. The first two columns reflect from which session (S1 or S2) the corresponding dataset originates. Lower scores indicate better performance.} \label{tab:second} \end{table} \section{Discussion and Conclusion} In this work, we examine the usage of ECG as a biometric, focusing on the stability of the ECG signal and the performance of classifiers trained using data collected from a consumer-grade ECG monitor. Comparing results reported in the literature proves to be difficult in practice, as no standardized dataset for ECG-based biometric research exists, and different authors collect their data under different conditions. Nevertheless, we present our results alongside existing studies in Table~\ref{tab:results}. We only list studies that follow a more realistic and usable ``off-the-person'' approach, in which the monitor sensors are not placed directly on the individual. 
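The two error metrics can be made concrete in a few lines; a sketch with hypothetical match scores (the convention that scores above the threshold are accepted, and the function names, are our own):

```python
import numpy as np

def far_frr(genuine, impostor, thr):
    """FAR and FRR at threshold thr (scores above thr are accepted)."""
    genuine, impostor = np.asarray(genuine), np.asarray(impostor)
    far = np.mean(impostor >= thr)  # impostor scores wrongly accepted
    frr = np.mean(genuine < thr)    # genuine scores wrongly rejected
    return far, frr

def hter(genuine, impostor, thr):
    """Half total error rate at a threshold fixed beforehand (e.g. on training data)."""
    far, frr = far_frr(genuine, impostor, thr)
    return (far + frr) / 2

def eer(genuine, impostor):
    """Approximate EER: sweep thresholds, report the point where FAR and FRR cross."""
    thrs = np.unique(np.concatenate([genuine, impostor]))
    rates = [far_frr(genuine, impostor, t) for t in thrs]
    far, frr = min(rates, key=lambda r: abs(r[0] - r[1]))
    return (far + frr) / 2

genuine  = [0.9, 0.8, 0.75, 0.6, 0.3]  # hypothetical match scores
impostor = [0.7, 0.4, 0.35, 0.2, 0.1]
assert abs(eer(genuine, impostor) - 0.2) < 1e-9
assert abs(hter(genuine, impostor, 0.5) - 0.2) < 1e-9
```

Note that the EER is computed by sweeping the threshold after the fact, whereas the HTER freezes the threshold in advance, which is why the second evaluation approach is the more realistic of the two.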
\begin{table}[htbp] \vspace{3mm} \centering \begin{tabular}{l c c} \hline \hline Study & Subjects & Duration & EER \\ \hline \textbf{Present Work} & 49 & Short & \textbf{2.4\%} \\ \textbf{Present Work} & 49 & Long & \textbf{9.7\%} \\ Carreiras et al.~\cite{carreiras} & 63 & Short & \textbf{13.3\%} \\ Coutinho et al.~\cite{string_matching} & 19 & Short & \textbf{0.4\%} \\ Falconi et al.\tablefootnote{Authors do not provide EER, thus HTER is presented instead.}~\cite{mobile} & 10 & Short & \textbf{9.8\%} \\ Silva et al.~\cite{finger_ecg} & 63 & Short & \textbf{1.0\%} \\ Silva et al.~\cite{finger_ecg} & 63 & Long & \textbf{9.1\%} \\ Singh et al.~\cite{singh} & 126 & Short & \textbf{3.4\%} \\ Komeili et al.~\cite{toronto} & 70 & Short & \textbf{11.0\%} \\ \hline \hline \end{tabular} \caption{Results from studies on ECG-based biometric authentication. All studies follow the ``off-the-person'' approach and use a single-lead ECG monitor. `Duration' indicates whether the result is obtained using short- or long-term data.} \label{tab:results} \end{table} The results presented in this work provide a positive perspective on ECG-based biometrics, by showing that individuals can be authenticated using their ECG trace. This project has also confirmed the results of previous authors showing that the performance of ECG biometrics degrades over time. This performance over longer periods of time could be improved by synchronizing the stored biometric with the new signal after each successful authentication. This work also demonstrates the high potential of using consumer-grade ECG monitors for authentication. The introduction of low-cost sensors allows system designers to embed them into existing access control systems. Nevertheless, more research needs to be done on extracting features from ECG signals obtained from consumer-grade monitors, preventing spoofing attacks, and guaranteeing that ECG-based biometric systems are socially accepted by the general public. 
\section{Introduction} Traditional passwords represent the most common mechanism of authenticating users online, despite numerous usability and security problems~\cite{adams1997making, furnell2006replacing, bonneau2012quest}. Passwords create a burden for users, as they have to be memorized and, ideally, should be long and unique. It should not come as a surprise, therefore, that many users opt to use easy-to-guess passwords that are reused across different services~\cite{ives2004domino, das2014tangled}, leading to account takeovers and personal data compromise. Research has shown, for instance, that over 50\% of users have the same passwords for different services~\cite{das2014tangled} and 81\% of data breaches occur due to poor password behavior~\cite{verizon}. The research community has been looking at alternative authentication schemes in order to replace or to complement traditional passwords, including push notifications~\cite{sanin2014systems}, graphical passwords~\cite{wiedenbeck2005passpoints}, trust scores~\cite{jakobsson2012implicit}, and gestures~\cite{sherman2014user}. Of particular interest is biometric authentication, which proves the identity of the user with ``something they are''. Biometrics improve system usability, as users are no longer required to remember any passwords or always carry a physical token. The ease of using a biometric for authentication has led to the rapid adoption of biometrics in private and public sectors, and the global market for biometric technology is expected to reach \$59.31 billion by 2025~\cite{grandview}. While existing research has focused on common modalities, such as fingerprints, face recognition, and iris scans, insufficient work has been done to explore novel biometrics. In this work, we investigate a biometric based on the electrical activity of the human heart in the form of electrocardiogram (ECG) signals. 
Past research has demonstrated that ECG is sufficiently unique to each individual \cite{carreiras} and could be used for user authentication. This work further explores the stability (i.e. invariability) of ECG as a biometric over long periods of time. Moreover, we investigate whether ECG signals recorded using a consumer-grade ECG monitor can be used for user authentication. These monitors are more affordable and less intrusive than their medical-grade counterparts, and present a more realistic scenario of collecting an ECG from an embedded sensor. Finally, we evaluate the performance of the classifiers responsible for user authentication using two approaches, one of which is a standard method found in the existing literature and another one that provides better estimates of the mistakes made by the classifiers. \section{Background and Related Work} This work evaluates ECG as a biometric for user authentication using data collected from a consumer-grade monitor over a period of four months. In this section, we provide a brief overview of biometrics and discuss related work on evaluating ECG for user authentication. \subsection{Biometrics} The term `biometrics' is used to describe measurable and distinctive characteristics that can be used to perform recognition of individuals~\cite{nist}. These characteristics are often divided into two categories: physiological and behavioral~\cite{nist}. Physiological biometrics relate to human physiology; these include fingerprints, facial features, iris patterns or DNA. Behavioral biometrics are based on human behavior, such as keystroke dynamics, voice or gait. In order for a biometric to be applicable for access control, it needs to be \textit{universal} (present and measurable in every individual), \textit{unique} (different in every individual), and \textit{stable} (invariant over the individual's lifetime)~\cite{handbook}. 
Digital representations of the unique features extracted from a biometric sample are known as \textit{biometric templates}. \noindent \textbf{Authentication and Identification.} Biometrics can be used to achieve two important access control goals, user authentication and identification~\cite{handbook}. Biometric \textit{authentication} involves the user presenting an identity claim and a biometric sample. The system then decides whether this claim is valid based on the recorded biometric for this identity. In contrast, user \textit{identification} involves finding the closest match to presented biometrics among the stored templates. In this project, we focus on user authentication, leaving identification as future work. \noindent \textbf{Limitations.} Although biometrics offer greater usability than traditional passwords, there are still concerns over the security and privacy of biometric data~\cite{prabhakar2003biometric}. Once compromised, biometrics cannot be easily revoked, as they depend on persistent physiological or behavioral characteristics of an individual. Furthermore, operators of biometric recognition systems might obtain additional unintended information from a user's biometric data. For instance, fingerprint patterns might be correlated with certain diseases~\cite{woodward1997biometrics}. Finally, some biometric characteristics cannot be easily kept secret, such as an individual's face. Therefore, a user who wishes to remain anonymous might still be identified without their knowledge and consent~\cite{woodward1997biometrics}. \subsection{Electrocardiogram as a Biometric} The heart is a muscle that pumps blood filled with oxygen and nutrients through the blood vessels to the body tissues \cite{heart_anatomy}. In order to pump blood, the heart muscle must contract, which generates an electrical impulse. This impulse can be detected on the surface of the body using electrodes placed on the skin, which is done during an electrocardiogram (ECG) test.
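The operational difference between the two modes can be illustrated with a short sketch. This is a toy example: the distance-based \texttt{match\_score}, the threshold, and the enrolled templates are invented for illustration, whereas our actual system uses trained per-user classifiers.

```python
import numpy as np

def match_score(template, sample):
    # Toy similarity: negative Euclidean distance between feature vectors.
    return -np.linalg.norm(np.asarray(template) - np.asarray(sample))

def authenticate(claimed_id, sample, enrolled, threshold=-1.0):
    # 1:1 verification: compare the sample against the claimed identity only.
    return match_score(enrolled[claimed_id], sample) >= threshold

def identify(sample, enrolled):
    # 1:N identification: return the identity whose stored template matches best.
    return max(enrolled, key=lambda uid: match_score(enrolled[uid], sample))
```

With two enrolled users and a probe close to the first template, \texttt{authenticate} accepts the matching claim, while \texttt{identify} searches all templates and returns that user.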
An ECG trace captures the process of depolarization and repolarization of the heart chambers, which causes them to contract and relax. Several studies examine the uniqueness and stability of ECG. Most of these follow an ``on-the-person'' approach for signal acquisition, such that electrodes are located directly on the individual~\cite{biel, carreiras, cross_correlation, autoencoder, israel, russian}. There are fewer studies that follow an ``off-the-person'' approach, but they illustrate a more realistic use-case scenario for ECG-based recognition systems. Such examples include installing ECG sensors into a smartphone case \cite{mobile}, embedding the sensor into a keyboard wrist rest \cite{finger_ecg}, and installing an ECG monitor into the steering wheel of a car \cite{cardiowheel}. Furthermore, existing studies often use data collected over a single data collection session as seen in~\cite{biel, mobile, israel, kyoso, string_matching, autoencoder, toronto}. Although single-session datasets are easier to create, they cannot be used to draw conclusions about the stability of ECG as a biometric. While several authors used longitudinal ECG data in their studies \cite{russian, cross_correlation}, only one study explicitly provided a side-by-side comparison of results achieved using both single-session and multiple-session data collected over a period of four months \cite{finger_ecg}. It concluded that ECG-based biometrics exhibit promising recognition rates using both short-term and long-term data. In terms of scale, most works that explore ECG for personal identification do not assess the performance of their ECG authentication systems on very large datasets, as was done for other biometric modalities. A notable exception is \cite{carreiras}, which evaluated the performance of a biometric system using a database of ECG recordings collected from 618 subjects and obtained high recognition rates.
Using ECG for authentication can also address some of the common limitations of other biometrics. For instance, ECG cannot be observed without using dedicated sensors and, thus, can be used to make the authentication process inconspicuous. This can be useful to prevent leakage attacks, such as when an adversary obtains user credentials by shoulder surfing their victims. \section{Methodology} In this section, we describe the methods used to collect and preprocess the dataset, train the classifiers to match biometric templates with presented identities, and perform the evaluation of obtained models.\footnote{A more in-depth overview of the methodology is available in the full report at \url{https://groups.inf.ed.ac.uk/tulips/projects/1718/samarin.pdf}} \subsection{Dataset} We collected ECG readings from 55 participants over two sessions with a period of four months in between. Most of the subjects were affiliated with the university, either as students, support staff or faculty members. According to the demographic survey, 30 males and 25 females enrolled in the data collection aged between 18 and 60 (median $=$ 22). Furthermore, none of the participants reported any serious health issues, though several were feeling exhausted or sleep deprived at the moment of the experiment. There were no restrictions on eligibility, as long as the subject was at least 18 years old. We note that due to technical issues that occurred during data collection, only 49 participants had sufficient data points for use in subsequent analysis. We used an AliveCor Kardia Mobile ECG monitor~\cite{alivecor} as the best approximation to a biometric sensor that could be deployed in a real authentication system. During operation, the monitor is connected to a smartphone application, which stores the data as a single-lead ECG recording. In order to record an ECG, the user has to place two fingers from each hand onto each of the two electrodes, as shown in Figure~\ref{fig:experiment}. 
\noindent \textbf{Procedure.} The data collection took place in one of the open workspaces in the School of Informatics at the University of Edinburgh. The procedure involved each participant recording their ECG trace using the monitor for 4 minutes. The recording was performed twice for a total of 8 minutes, with a break in between. The participants were not restricted in their actions and were allowed to talk and to perform movements, as long as that did not interfere with data collection. As part of a survey, subjects were asked to self-report their physical and emotional states, although this did not have any impact on the data collection. Participants who took part in both sessions received a \pounds5 Starbucks gift card at the end of the second session. \noindent \textbf{Ethics.} The experimental methodology used in this project adheres to the ethics regulations of the University of Edinburgh and the setup was reviewed and authorized by the School of Informatics Ethics Panel. All subjects signed a consent form, which confirmed their voluntary participation in the data collection procedure. \begin{figure}[th] \includegraphics[width=4.5cm]{figures/experiment.jpg} \centering \caption{ECG monitor connected to the smartphone application.} \label{fig:experiment} \end{figure} \subsection{Data Preprocessing and Classification} We describe the approach we took to preprocess our dataset, extract biometric templates and train classifiers. \noindent \textbf{Signal Preprocessing.} We preprocessed raw ECG traces before using them for evaluation. We used filters supplied by the monitor, including a Mains filter to remove power line interference and Butterworth band-pass filters to remove baseline wander noise and high-frequency noise from the ECG signal. We then divided the continuous ECG traces into segments representing individual heartbeats. 
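The monitor-supplied filters are proprietary; the following is a minimal \texttt{scipy} sketch of an equivalent filtering chain. The 300~Hz sampling rate, 50~Hz mains frequency, filter order, and 0.5--40~Hz pass band are all assumptions for illustration, not the monitor's exact parameters.

```python
import numpy as np
from scipy.signal import butter, filtfilt, iirnotch

FS = 300.0  # assumed sampling rate in Hz

def preprocess(ecg, fs=FS):
    # Notch ("Mains") filter at 50 Hz to suppress power line interference.
    b_notch, a_notch = iirnotch(w0=50.0, Q=30.0, fs=fs)
    ecg = filtfilt(b_notch, a_notch, ecg)
    # Butterworth band-pass (assumed 0.5-40 Hz cutoffs) removes baseline
    # wander at low frequencies and high-frequency noise.
    b, a = butter(N=4, Wn=[0.5, 40.0], btype="bandpass", fs=fs)
    return filtfilt(b, a, ecg)
```

Zero-phase filtering with \texttt{filtfilt} avoids distorting the relative timing of the ECG waves, which matters when the heartbeat shape itself is the biometric feature.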
First, we accentuated QRS complexes using wavelet transform and located the R peaks present in every heartbeat using a thresholding technique based on the running mean, as shown in Figure~\ref{fig:preprocessing}. We then used the located R peaks to partition the ECG traces into individual heartbeat waveforms. We removed noisy or incorrectly partitioned waveforms by comparing each segment to a median waveform of an individual and dropping 20\% of the most dissimilar segments as defined by the Euclidean distance. We used the remaining heartbeat waveforms as features to train the classifiers for user authentication. We additionally standardized the features using z-scores and performed Principal Component Analysis (PCA) to select the first 25 principal components as the input features. Figure~\ref{fig:waveforms} illustrates the obtained heartbeat waveforms (before standardization) and demonstrates the intersubject variability of the ECG. \begin{figure}[htbp] \includegraphics[width=8.5cm]{figures/different_class.pdf} \centering \caption{ECG variation among 8 individuals.} \label{fig:waveforms} \end{figure} \begin{figure*}[htbp] \includegraphics[width=16cm]{figures/complex_threshold.pdf} \centering \caption{Peak detection using a threshold based on the running mean.} \label{fig:preprocessing} \end{figure*} \noindent \textbf{Template Classification.} In order to match biometric templates with the provided identity, we experimented with several machine learning algorithms, including logistic regression, k-nearest neighbors, and support vector machines (SVM). We chose SVM as our final model and performed 5-fold cross-validation to select the hyperparameters for the model using 80\% of data as the training set, leaving the remaining 20\% as the test set. It is important to note that we trained separate models for each user in our dataset, such that each classifier aims to predict the probability with which a provided biometric template belongs to that specific user. 
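The thresholding and outlier-rejection steps above can be sketched as follows. This is a simplified stand-in for our wavelet-based pipeline: the window length, threshold factor, refractory period, and beat width are illustrative choices rather than the values we used.

```python
import numpy as np

def detect_r_peaks(ecg, fs=300, window_s=0.75, min_rr_s=0.3):
    # Flag samples exceeding a running-mean-based threshold, then keep one
    # candidate per refractory period (threshold factor of 2 is assumed).
    win = int(window_s * fs)
    running_mean = np.convolve(np.abs(ecg), np.ones(win) / win, mode="same")
    candidates = np.where(np.abs(ecg) > 2.0 * running_mean)[0]
    peaks, last = [], -int(min_rr_s * fs)
    for i in candidates:
        if i - last >= int(min_rr_s * fs):
            peaks.append(i)
            last = i
        elif abs(ecg[i]) > abs(ecg[peaks[-1]]):
            peaks[-1] = i  # keep the stronger of two close candidates
            last = i
    return np.array(peaks)

def segment_and_clean(ecg, peaks, half_width=100, drop_frac=0.2):
    # Cut a fixed window around each R peak, then drop the fraction of
    # segments farthest (Euclidean distance) from the median beat.
    beats = np.array([ecg[p - half_width:p + half_width]
                      for p in peaks if half_width <= p < len(ecg) - half_width])
    dist = np.linalg.norm(beats - np.median(beats, axis=0), axis=1)
    keep = np.argsort(dist)[:int(len(beats) * (1 - drop_frac))]
    return beats[keep]
```

The cleaned beat matrix is what would then be standardized and reduced with PCA before training the per-user classifiers.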
\subsection{Evaluation} In general, a biometric system can exhibit two types of errors. A \textit{false accept} happens whenever a system incorrectly accepts an intruder and a \textit{false reject} happens whenever a system incorrectly rejects a genuine user. The decision threshold of the classifier can be further tuned to improve either the overall usability (reduce the number of false rejects) or security (reduce the number of false accepts) of the system~\cite{learning_biometrics}. As is common in biometrics research~\cite{eberz}, we used equal error rate (EER) and half total error rate (HTER) as our performance metrics. EER is the error rate that is achieved when the decision threshold of the classifier is tuned such that the number of false rejects and false accepts is equal, while HTER is the average of the false accept and false reject rates at some predefined decision threshold. In this work, we evaluated the performance of our authentication models using two different approaches. Using the first method, we included the same users in the training and test sets for each evaluated classifier. While this approach is easier and commonly seen in the literature~\cite{eberz}, it underestimates the number of false accepts, as the classifier learns to distinguish the target user (for whom the classifier is trained) from every other user in the dataset. The second approach is to exclude the data of a specific (non-target) user from the training set, but retain it in the test set. Therefore, during the training phase, the classifier does not learn to distinguish the readings of the target user from the readings of the excluded user, minimizing the bias in the evaluation. For each evaluated classifier, we can repeat this procedure excluding a different user every time.
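Both metrics can be computed directly from the genuine and impostor score distributions produced by a classifier. The sketch below shows one common way (the threshold sweep over observed scores is an illustrative choice, not our exact implementation):

```python
import numpy as np

def far_frr(genuine, impostor, threshold):
    # False accept / false reject rates when scores >= threshold are accepted.
    far = np.mean(np.asarray(impostor) >= threshold)
    frr = np.mean(np.asarray(genuine) < threshold)
    return far, frr

def eer(genuine, impostor):
    # Sweep candidate thresholds and return the error rate where
    # FAR and FRR are closest to equal.
    thresholds = np.sort(np.concatenate([genuine, impostor]))
    rates = [far_frr(genuine, impostor, t) for t in thresholds]
    far, frr = min(rates, key=lambda r: abs(r[0] - r[1]))
    return (far + frr) / 2

def hter(genuine, impostor, threshold):
    # Half total error rate: mean of FAR and FRR at a fixed threshold.
    far, frr = far_frr(genuine, impostor, threshold)
    return (far + frr) / 2
```

Under the second evaluation approach, the impostor scores would include the held-out user's samples, which is precisely where the optimistic bias of the first approach appears.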
\section{Introduction} Galaxy clusters emerged from the largest overdensities in the primordial universe. Their evolution is sensitive to both the growth rate of structure and the expansion history of the universe. For this reason, they are a useful probe to test cosmological theories. The observed size, mass, and abundance of galaxy clusters are a valuable tool to constrain the parameters that formulate our cosmological models. In particular, the abundance of galaxy clusters is a sensitive probe of the matter density $\Omega_\text{m}$ and the normalization of the matter power spectrum $\sigma_8$ \citep[e.g.][]{2007gladders}. However, the strong degeneracy between $\sigma_8$ and $\Omega_\text{m}$ prevents the constraint of each parameter independently with the cluster mass function alone. This degeneracy can be alleviated by combining cluster mass functions over a wide range of redshift \citep[e.g.][]{2006albrecht}. A dominant systematic uncertainty in using galaxy clusters as cosmological probes is their mass calibration. Many of the large studies of galaxy clusters estimate mass through scaling relations, such as velocity dispersion or X-ray temperature, that rely on equilibrium or quasi-equilibrium state assumptions. The systematic errors innate to the mass estimate are then inherited by the cosmological constraint. Weak lensing (WL hereafter) provides a mass estimation free of an assumption of the dynamical state of the cluster and has the ability to provide more robust mass estimates. This merit is particularly important for galaxy clusters at high redshift where they tend to be in an early stage of formation and thus subject to a large departure from dynamical equilibrium. To date, very few high-redshift galaxy clusters have been measured with WL. The vast majority of WL surveys have been focused on redshift less than unity. 
In fact, of the large WL surveys, the \textit{Hubble Space Telescope} (\textit{HST} hereafter) studies of \cite{2011jee} and \cite{2018schrabback} are the only ones to include clusters at $z>1$. Beyond a redshift of 1.5, only a single galaxy cluster has been studied with WL, IDCS J1426+3508 \citep{2016mo, 2017jee} at redshift 1.75. The lack of studies at high redshift can primarily be attributed to the difficulty of detecting the lensing signal. The lensing distortions are caused by a massive intervening object between source galaxies and the observer. When a high-$z$ cluster is the lens, more distant galaxies need to be probed to detect the lensing signal. This requires very deep imaging at infrared wavelengths to robustly detect galaxies in the 25--28th magnitude range. Fortunately, some imaging programs with the \textit{HST} are probing these depths of the universe. One of the goals of the See Change program (PI: Perlmutter) is to probe the WL mass function of galaxy clusters at redshift greater than one. The See Change sample includes 11 galaxy clusters in the redshift range $1.10$ to $1.75$, with IDCS J1426+3508 the highest. The second highest redshift cluster in the sample is SpARCS1049+56 (hereafter SpARCS1049 for brevity) and it is the focus of this study. SpARCS1049 was discovered in the \textit{Spitzer} Adaptation of the Red-sequence Cluster Survey (SpARCS) \citep{2009muzzin, 2009wilson}. This survey utilized a two-filter IR system to detect galaxy overdensities by the 4000\AA\ break \citep{2006wilson}. The survey footprint included 11 square degrees of the Lockman Hole, a 59 square degree region that is relatively clear of galactic HI emission, and within this region lies SpARCS1049. The first detailed study of SpARCS1049 was achieved by \cite{2015webb}.
They used the archival \textit{Spitzer} observations and supplemented them with their own observations of the cluster from the James Clerk Maxwell Telescope, \textit{HST}, and Keck. Their Keck-MOSFIRE spectroscopy determined the galaxy overdensity redshift to be centered at $z=1.709$. Based on this redshift, they classified 27 cluster member galaxies as those within $1500$ km s$^{-1}$ and 1.8 Mpc projected distance of the brightest cluster galaxy (BCG). The velocity dispersion ($\sigma=430^{+80}_{-100}$ km s$^{-1}$) of these galaxies provides a mass M$_{vir}$ of $8\pm3\times10^{13}\ \text{M}_\odot$. The authors go into detail about the shortcomings of the velocity dispersion from this sample. Their classification of cluster member galaxies extends well beyond the expected virial radius of the cluster (\mytilde1 Mpc). Furthermore, the redshifts were detected by the H$\alpha$ emission line, which only selects active galaxies. In addition to this mass estimate, they found the richness of the cluster to be $N_{\text{gal}}=30\pm8$ and used the mass-richness scaling relation from \cite{2014andreon} to infer a mass M$_{500\text{kpc}}$ of $3.8\pm1.2\times10^{14}\ \text{M}_\odot$. We present a WL characterization of SpARCS1049 through the \textit{HST} Wide Field Camera 3 (WFC3) IR filters. The mass estimate from WL is an independent test of the previous two masses because it does not rely on the dynamical state of the galaxy cluster. WL using the \textit{HST} IR filters has been achieved once before in \cite{2017jee}. Their WL analysis of SPT-CL J2040-4451 ($z=1.48$) and IDCS J1426+3508 ($z=1.75$) clearly detected the WL signals and quantified the masses of the two young, massive clusters. In \textsection\ref{section:data_reduction} we describe the \textit{HST}-IR observations, data reduction, and PSF modeling. The details of WL and our shape measurement pipeline are outlined in \textsection\ref{sec:wl_method}.
We present our mass map and mass estimation in \textsection\ref{sec:results}. The mass of the cluster and its rarity are discussed in \textsection\ref{sec:discussion} before we conclude in \textsection\ref{sec:conclusion}. In this paper, we use the cosmological parameters from \cite{2016planck}. The notation M$_{200}$ represents a spherical mass within the radius $r_{200}$, inside which the mean density is equal to 200 times the critical density of the universe at the cluster redshift. At $z=1.71$, the plate scale is \mytilde8.70 kpc$/\arcsec$. \section{Observations} \label{section:data_reduction} Observations of SpARCS1049 were obtained with the \textit{HST} in programs 13677 (PI: S.~Perlmutter) and 13747 (PI: T.~Webb) from 2014 February to 2015 May. In both programs the cluster was imaged with WFC3 using the UVIS F814W and the IR F105W/F160W filters. Combining the two programs, the total exposure times are 2846s, 8543s, and 9237s for F814W, F105W, and F160W, respectively. The joining of these two programs provides very deep imaging data, which is critical for resolving faint source galaxies in high-$z$ cluster WL. Both observing runs were centered on the BCG location with camera rotations and small dithers between pointings. This technique is ideal for WL analyses because it minimizes the effect of diffraction spikes in stacked images and improves sampling of the point spread function (PSF). For our WL analysis, we use the F160W coadd to measure shapes because it is the deepest among the three filters and also the emission in this bandpass represents the rest-frame optical emission of source galaxies at $z\sim2$, which has a smoother light profile than bluer light that traces clumpy star formation regions of high-redshift galaxies. The calibrated individual exposures (FLT/FLC images) were retrieved from the Mikulski Archive for Space Telescopes (MAST)\footnote{https://archive.stsci.edu/}.
Prior to retrieval, these exposures were processed by the STScI OPUS pipeline using the \textit{calwf3} software task. The \textit{calwf3} task performs the standard calibration steps of dark subtraction, flat fielding, etc. Note that the calibration methods for the WFC3-UVIS and WFC3-IR detectors differ in some aspects. The WFC3-UVIS channel is a CCD detector and has a degraded ability to transfer charges during readout. Recent versions of the \textit{calwf3} task correct for charge transfer efficiency (CTE) degradation \citep{2016bajaj}. On the other hand, the WFC3-IR detector does not perform readout through charge transfer as CCDs do and thus does not suffer from CTE degradation. However, the detector possesses other systematic effects, which we discuss in \textsection\ref{sec:systematics}. \texttt{Multidrizzle} \citep{2003koekemoer} was used on the calibrated exposures to perform cosmic ray rejection, sky subtraction, and geometric distortion correction. Individual exposures were ``single-drizzled'' to a north-up orientation with the common World Coordinate System (WCS) to prepare them to be stacked into a mosaic image. We then performed alignment of the individual exposures by iterative minimization of the offset of astronomical sources that are common within overlapping regions. This method of alignment was shown to be sufficient for cluster WL applications in \cite{2014jee}. With the astrometric solution obtained, a second \texttt{Multidrizzle} was performed to combine the images into a well-aligned, stacked mosaic. We chose to tune the input parameters of \texttt{Multidrizzle} to optimize the F160W image quality as it is used for our lensing analysis. The full width at half maximum (FWHM) of the PSF in the IR detector is fractionally larger (FWHM\mytilde$0\farcs16$) than the native pixel scale 0\farcs13. This causes undersampling of the PSF.
The DrizzlePac handbook \citep{2012drizzle} suggests that upsampling to a final pixel scale that samples the PSF by about 2.0 to 2.5 pixels is ideal. Following this advice, we chose a final pixel scale of 0\farcs05 pix$^{-1}$ to mitigate the effect of undersampling the PSF. Although this pixel scale is larger than the UVIS native pixel scale of 0\farcs04, the downsampled F814W images are strictly used for color image generation and not in the scientific analysis. We set \texttt{final\_pixfrac} to 0.7 and used a Gaussian kernel to drizzle the images. The color-composite image in Figure \ref{fig:color_image} was created by combining the F160W, F105W, and F814W filter images. The BCG is the deep orange galaxy located in the center of the image with the ``beads-on-a-string'' interacting galaxy stretching from east of the BCG to \mytilde50 kpc southwest. These features are more obvious in the zoomed inset. For more on the galaxies of SpARCS1049 see \cite{2015webb}, \cite{2017webb}, and \cite{2019trudeau}. \begin{figure*}[!ht] \centering \includegraphics[width=\textwidth]{sparcsj1049_color_zoomedinset_gaialaigned.pdf} \caption{\textit{HST} color-composite image of SpARCS1049 from stacking the F160W, F105W, and F814W filter images as RGB, respectively. The deep orange galaxy at the center of the image is the BCG (10$^\text{h}$49$^\text{m}$22$^\text{s}$.6, 56$^\circ$40\arcmin33\arcsec) and is shown in the inset image. The magnificent tidal feature discussed in \cite{2015webb} is seen in the inset image stretching from the center to the southwest.} \label{fig:color_image} \end{figure*} We created a detection image by weight-averaging the F105W and F160W images with weights from \texttt{Multidrizzle}.
Objects were detected with \texttt{Source Extractor}\footnote{https://www.astromatic.net/software/sextractor} \citep{1996bertin} in dual-image mode by selecting sources in the detection image and measuring them in each filter-specific image (F105W or F160W). Objects that subtend at least 5 pixels with signal at least 1.5-$\sigma$ above the background rms were measured. WL studies using the \textit{HST} have shown that the background galaxy density is high (\mytilde100 galaxies arcmin$^{-2}$). An issue that arises with high source galaxy density is blending (overlapping) of galaxy images. \cite{2018mandelbaum} discusses the bias arising from deblending galaxies in detail. The primary concern for deblending in this study occurs when two images overlap from galaxies at large separation in redshift. To mitigate the effect, we deblended objects using \texttt{Source Extractor} with \texttt{DEBLEND\_NTHRESH} = 8 and \texttt{DEBLEND\_MINCONT} = 0.005. However, such rigorous deblending can cause a foreground galaxy to be deblended into multiple objects. These spurious detections, which contain no lensing signal, were removed after source selection (Section \ref{section:source_selection}) through visual inspection. In total, \mytilde6,900 objects were detected in the \mytilde$3\arcmin \times 3\arcmin$ WFC3-IR mosaic image and compiled into an object catalog. \subsection{PSF Model} Ground-based WL analyses rely heavily on the correction of the PSF as it causes a significant dilution of the observed lensing shear. In addition, the PSF tends to have a characteristic direction that mimics shearing. These two PSF effects are also present in space-based \textit{HST} imaging but to a lesser extent because of the lack of atmosphere. This is the first study to use the WFC3-IR/F160W channel for a WL analysis.
We modeled the PSF using a version of our PSF modeling pipeline based on principal component analysis (PCA) and updated for the F160W channel. This pipeline has been described in detail in our previous papers \citep[e.g.][]{2007jee, 2017finner}. Here we will briefly explain the PSF pipeline for the WFC3-IR/F160W channel and refer the reader to our previous work for an in-depth discussion. A major hurdle for modeling the PSF of \textit{HST}, which depends on time and position on the focal plane, is the lack of stars available in a single science frame. Fortunately, the \textit{HST} PSF variation possesses a repeatable pattern \citep{2007jee} that is dependent on the focus (breathing) of the telescope following its 1.5 hour orbit around Earth. This allows the use of dense archival stellar images to model the PSF, which can then be applied to science frames taken at a different epoch. Table \ref{table:psf_frames} contains a list of the dense stellar fields that we tested for our PSF modeling pipeline. In the majority of these fields, the frames are overcrowded with stars and overlapping diffraction spikes significantly hamper our ability to model the PSF. However, the exposures of NGC104 (also known as 47 Tuc) and NGC2808 in programs 11453, 11664, and 11665 contain the best spatial star sampling to characterize the PSF and we relied on these frames for our PSF pipeline. These images were drizzled with the same settings as the single-drizzled science images (Section \ref{section:data_reduction}) and will be referred to as the stellar frames from here on. We ran our PSF modeling pipeline on the stellar frames and derived a position-dependent PSF model for each frame. Switching to the science frames, we selected several stars ($5\sim10$) from each single-drizzled science image and recorded their pixel coordinates and ellipticity. At the coordinates of these stars, we retrieved the modeled PSF for each stellar frame.
This resulted in a catalog of PSFs for each stellar frame at the defined science frame's star locations. To find the best-fit model stellar frame, we minimized the difference between the ellipticity of the modeled PSFs and the science stars. The median reduced $\chi^2$ value is 1.8 for the best-fit models. Furthermore, the residual ellipticities when comparing our model to the measured stellar ellipticities are $de\mytilde0.008$, which is sufficient for cluster lensing. Finally, PSFs for all objects in the F160W mosaic image were built by retrieving the best-fit PSF model at each object location for each science frame and stacking them into a final PSF. \begin{table}[] \caption{Archived F160W \textit{HST}/WFC3-IR images tested for our PSF modeling pipeline. Bold-font programs were selected for PSF modeling.} \label{table:psf_frames} \centering \renewcommand{\arraystretch}{1.5} \begin{tabular}{l c c c} \hline \hline Object & Program ID & Exposures & Obs. year \\ \hline \textbf{47 Tuc} & \textbf{11453} & \textbf{18} & \textbf{2009} \\ \textbf{NGC104} & \textbf{11664} & \textbf{6} & \textbf{2010} \\ \textbf{NGC2808} & \textbf{11665} & \textbf{6} & \textbf{2011} \\ NGC6388 & 11739 & 10 & 2010 \\ NGC6441 & 11739 & 20 & 2010 \\ OmegaCen & 11928 & 27 & 2009 \\ OmegaCen & 12353 & 15 & 2011 \\ OmegaCen & 13691 & 6 & 2015 \\ \hline \hline \end{tabular} \end{table} \subsection{WFC3-IR Detector WL Systematics} \label{sec:systematics} Systematic effects inherent to the IR detector are a cause for concern for WL studies because they may falsely contribute to the WL signal. In the first WL analysis to use the WFC3-IR detector, \cite{2017jee} reported four systematic effects that need to be considered: interpixel capacitance (IPC), persistence, detector non-linearity, and undersampling. Readers are referred to \cite{2017jee} for detailed discussions on these four topics from a WL perspective.
Here, after briefly describing these aforementioned effects, we provide a detailed discussion of the brighter-fatter effect. \textbf{IPC:} The WFC3-IR detector is a $1024\times1024$ HgCdTe array with a plate scale of $0\farcs13$ per pixel. \cite{2006brown} investigated the correlated noise in HgCdTe detectors and found charge sharing between neighboring pixels from capacitive coupling. This IPC is also present in the WFC3-IR detector \citep{2011hilbert}. We follow the same method as \cite{2017jee} and let our PSF model correct for IPC. \textbf{Persistence:} IR detectors are also susceptible to a persistence of signal after a reset. The effect is described in detail in \cite{2008smith}. Their investigation showed that the persistence of charge is greater for pixels that have been exposed near saturation in previous imaging. The STScI provides a tool\footnote{https://archive.stsci.edu/prepds/persist/search.php} to search for persistence in archived observations. Our search shows that persistence levels are low in the observations of SpARCS1049, with a persistence of $\gtrsim0.01$ e$^-$ s$^{-1}$ in at most 0.1\% of the pixels and $\gtrsim0.1$ e$^-$ s$^{-1}$ in 0.03\% of the pixels. \textbf{Undersampling:} The FWHM of the WFC3-IR F160W PSF is approximately the same size as the native plate scale (0\farcs13 pixel$^{-1}$), which causes signals to not be Nyquist sampled. As a first step to alleviate undersampling, the individual exposures were dithered during observations. Combining the dithering technique with upsampling during drizzling allows us to recover some of the undersampled details of the PSF. As done in our previous IR WL analysis \citep{2017jee}, we let our calibration of galaxy shapes take care of the remaining undersampling bias. \textbf{Non-linearity:} The response of the WFC3-IR detector follows a nearly linear relation until close to saturation, where it then becomes nonlinear.
Nonlinearity in the detector was reported at the 5\% level for saturated pixels \citep{2018dressel}. The \textit{calwf3} pipeline corrects the detector nonlinearity for pixels below the saturation level. As a precaution, we selected stars that are well below the saturation level when modeling the PSF. \textbf{Brighter-fatter:} Analyzing the size-magnitude relation of the stellar frames that were used to model the PSF, we found a slope to the stellar locus, with brighter objects tending to be larger. The brighter-fatter effect is well studied in CCDs and is thought to be caused by the electric field from the charges that have accumulated in a pixel \citep{2014antilogus, 2015guyonnet}. For CCDs, \cite{2014antilogus} report that the size of the PSF increases by 2\% over the full dynamic range. However, few studies \citep{2017plazas, 2018plazas} have been carried out on the brighter-fatter effect in IR detectors. The brighter-fatter effect requires attention for WL analyses because it will introduce a multiplicative bias into the measured shear. This is especially important for the faint galaxies that carry the WL distortion, where forward-modeling an overly large PSF may lead to an overestimation of the shear. \cite{2015bmandelbaum} showed that a 1\% inflated PSF size introduces a systematic bias of $m = 0.06$ for a galaxy near the resolution limit. Our analysis of the stellar locus in the NGC104 frames shows that the average size of stars varies by as much as 5\% from the faintest detected objects to the saturation magnitude of the detector. In our PSF modeling, we intentionally avoid the stars near saturation. Thus, 5\% should be taken as an upper limit. Nevertheless, we desire to understand the systematic bias that might be introduced when forward-modeling a PSF with a size up to 5\% larger than the true PSF size.
To do so, we simulated our forward-modeling shape measurement using GalSim\footnote{https://github.com/GalSim-developers/GalSim} \added{\edit1{\citep{2015rowe}}}. Simulated images of 10,000 S\'ersic profile galaxies ($100 \times 100$ equally spaced on a grid) were created with the S\'ersic parameters sampled from the real galaxies of SpARCS1049. A uniform shear typical of galaxy clusters ($\mytilde0.05$) was applied to the images. These simulated galaxy images were then convolved with a circular Gaussian PSF. Multiple passes of our shape-measurement pipeline were performed while forward modeling PSFs with sizes ranging from $-15\%$ to $+15\%$ of the true PSF size. This experiment showed that the multiplicative bias varies by $m = 0.02$ for a 5\% change in PSF size. At this level, the brighter-fatter effect has a low impact on galaxy cluster studies, where shape noise is still the dominant uncertainty. However, in cosmic shear studies the brighter-fatter effect will need to be addressed. \section{Weak-lensing method} \label{sec:wl_method} \subsection{Theory} At the core of weak gravitational lensing studies is the measurement of the minute distortion of galaxies. In the context of SpARCS1049, these distortions are caused by the altered light path that a photon travels while crossing the gravitational potential of the galaxy cluster. The altered light path can be described by its deflection angle: the angle between its original path away from its galaxy and its new path toward our telescope. The deflection angle is the gradient of the deflection potential.
The differential transformation from the photon's emission position to the observed position is described by the Jacobian matrix: \begin{equation} A = \left( \begin{array}{cc} 1- \kappa - \gamma_1 & -\gamma_2 \\ -\gamma_2 & 1 - \kappa + \gamma_1 \\ \end{array} \right) \label{eqn_A} \end{equation} where the convergence $\kappa$ is an isotropic distortion defined as \begin{equation} \label{eq:sigma_c} \kappa = \frac{\Sigma}{\Sigma_c}. \end{equation} \noindent In equation~\ref{eq:sigma_c}, $\Sigma$ is the projected mass density while $\Sigma_c$ is the WL critical surface density: \begin{equation} \Sigma_c = \frac{c^2}{4\pi G D_l \beta} \end{equation} where $c$ is the speed of light, $G$ is the gravitational constant, $D_l$ is the angular diameter distance to the lens, and $\beta=D_{ls}/D_s$ is the lensing efficiency, the ratio of the lens-source to the source angular diameter distance. In equation~\ref{eqn_A}, the shear $\gamma$ is an anisotropic distortion and its two components can be combined to form the complex shear, $\gamma = \gamma_1 + i\gamma_2$. Observationally, the two distortion effects cannot be separated, and the observed effect is the reduced shear $g_i = \gamma_i / (1 - \kappa)$. Because the intrinsic shape (ellipticity) of an individual galaxy is unknown, $g$ cannot be measured from a single galaxy image. Instead, the average complex ellipticity of an ensemble of galaxies is used to estimate $g$, under the assumption that the average intrinsic galaxy ellipticity is zero. We adopt the value $\sigma_\text{int}=0.25$ for the intrinsic ellipticity dispersion, a value recently confirmed with the CANDELS data in \cite{2018schrabback}. This value of the intrinsic ellipticity dispersion is used in inverse-variance weighting when fitting models for mass measurement (Section \ref{sec:results}). \subsection{Shape Measurement} The WL observable, the reduced shear $g$, is ascertained by averaging the shapes of source galaxies.
Our method of shape measurement is to fit a PSF-convolved elliptical Gaussian function to each object in the source catalog (defined in Section \ref{section:source_selection}). Postage stamp images of each object are cut from our F160W mosaic image. The size of each postage stamp image is chosen to be 12 times the semi-major axis of the object as determined by \texttt{Source Extractor}. This size reduces the truncation bias that occurs when the light profile is prematurely truncated. However, a large postage stamp image increases the number of neighboring objects whose signal may contaminate the fit. We mask out the signal of the neighboring objects using the segmentation map output from \texttt{Source Extractor}. The difference between the light profile of the postage stamp image and the PSF-convolved elliptical Gaussian model is minimized with MPFIT \citep{2009markwardt}. We fix the centroid and background levels to the measurements from \texttt{Source Extractor} to reduce the number of free parameters of the fit. From the MPFIT output, we catalog the two components of the complex ellipticity \begin{align} e_1 &= \frac{a-b}{a+b}\cos 2\phi, \\ e_2 &= \frac{a-b}{a+b}\sin 2\phi, \end{align} where $a$ and $b$ are the semi-major and -minor axes of the ellipse, respectively, and $\phi$ is the angle measured counter-clockwise from the positive x-axis. The ellipticity error $\sigma_e$ is also included in the catalog. Measuring a galaxy's shape by fitting the light profile with an analytic function that does not perfectly represent the light profile introduces model bias. Moreover, the nonlinear response of the ellipticity measurement to pixel noise causes noise bias. We correct for these biases by calibrating the ellipticities with a multiplicative factor \added{\edit1{of 1.25}} that is derived through simulations. Our method, entered as sFIT, was shown to be effective in the GREAT3 challenge \citep{2015mandelbaum}.
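The ellipticity definition above is straightforward to evaluate numerically. The sketch below is only an illustration of the two components $e_1$, $e_2$ and of applying a single multiplicative calibration factor (the value 1.25 is quoted in the text; the function and variable names are ours, not the pipeline's):

```python
import numpy as np

def complex_ellipticity(a, b, phi):
    """Two components of the complex ellipticity for an ellipse with
    semi-major axis a, semi-minor axis b, and position angle phi
    (radians, measured counter-clockwise from the positive x-axis)."""
    e = (a - b) / (a + b)
    return e * np.cos(2.0 * phi), e * np.sin(2.0 * phi)

# Toy example: a 2:1 ellipse oriented at 45 degrees.
a, b, phi = 2.0, 1.0, np.pi / 4.0
e1, e2 = complex_ellipticity(a, b, phi)   # e1 ~ 0, e2 ~ 1/3

# Single multiplicative calibration for model and noise bias (text: 1.25).
CAL = 1.25
e1_cal, e2_cal = CAL * e1, CAL * e2
```

In the real pipeline the parameters $a$, $b$, and $\phi$ come from the MPFIT Gaussian fit rather than being set by hand.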
\subsection{Source Selection} \label{section:source_selection} Selecting the source galaxies is an intricate step of a WL analysis. The lensing signal is observable only in the galaxies that are sufficiently behind the lens. Selection of source galaxies by spectroscopic or photometric redshift would be ideal, but obtaining them is expensive and currently not possible with the limited \textit{HST} filter coverage. Instead, we select source galaxies based on their measured shape and photometric properties. Galaxies residing in a cluster tend to be redder than field galaxies. The 4000\AA\ break, caused by the absorption of stellar light by ionized metals in stellar atmospheres, is a common feature in cluster galaxies and often gives rise to a red sequence in a color-magnitude diagram (CMD). For SpARCS1049, the 4000\AA\ break is redshifted to \mytilde10,800\AA. This wavelength is encapsulated in the F105W and F160W filters. Figure \ref{fig:cmd} shows the CMD for SpARCS1049, with black dots representing the full object catalog. Cluster member galaxies selected from the Keck spectroscopic observations within $1.67 < z < 1.75$ and within the \textit{HST} imaging footprint are shown as red circles. These spectroscopic redshifts are derived from the H$\alpha$ emission line and give an active-galaxy selection bias to our cluster member sample. The BCG is shown as a red star and has a large magnitude separation from the other cluster members. The lack of a clear red sequence suggests that SpARCS1049 may be in an early state of formation. \begin{figure}[!ht] \centering \includegraphics[width=0.45\textwidth]{CMD.pdf} \caption{Color-magnitude diagram for SpARCS1049. Cluster member galaxies are marked red. The BCG is marked with a red star. Lensing source galaxies are depicted in blue. The bright limit of the source galaxies was chosen to maximize the lensing S/N and to mitigate contamination by cluster and foreground galaxies.
The faint limit was set by requiring sources to have fitted ellipticity error $< 0.25$. } \label{fig:cmd} \end{figure} A pure source catalog is one that only contains lensed galaxies. Cluster member galaxies and foreground objects in the source catalog will contaminate the sample and dilute the lensing signal. Removal of these false sources is challenging without precise distances to each. Unfortunately, most removal techniques also filter out some true source galaxies. This is a problem because the lensing signal is proportional to the purity of the sample, whereas the noise is proportional to $1/\sqrt{N}$. Furthermore, the uncertainty of the lensing efficiency $\beta$ increases with decreasing number of sources. Methods to maximize purity and source counts in the catalog vary. As a first step in defining a source catalog, we exclude foreground galaxies with an apparent magnitude cut that is fainter than the faintest spectroscopically confirmed cluster member. Our S/N tests show that retaining galaxies of F160W magnitude $>$ 25 provides the highest S/N. Including brighter galaxies decreases the detected WL signal and subsequently the S/N. In WL, sampling the faintest galaxies is desired because the most distant source galaxies are subject to the greatest lensing distortions. However, fitting a model to a low-S/N galaxy is difficult and is subject to noise bias. To decrease noise bias, we exclude galaxies with a measured ellipticity error greater than 0.25. This constraint causes the faint magnitude limit seen in the CMD. In addition, galaxies in the source catalog are constrained to have a semi-minor axis greater than 0.3 pixels and ellipticity less than 0.9 to remove objects that are too small or too elongated to be galaxies. The total galaxy number density in our source catalog is \mytilde105 galaxies arcmin$^{-2}$. To test the source catalog for contamination by cluster galaxies, we analyze the radial variation of the source density.
In Figure \ref{fig:radial_density}, the radial source density is shown with radial bins centered on the BCG. Contamination by cluster galaxies could manifest as an overdense region near the cluster center relative to the cluster outskirts. As seen in the figure, the radial number density of source galaxies is flat to $50\arcsec$. Beyond $50\arcsec$ the number density slightly decreases. This decrease is likely due to the limited frame coverage near the edge of the mosaic image and to the bright foreground galaxy drowning out background galaxies in the northern region of the image. \begin{figure} \centering \includegraphics[width=0.4\textwidth]{radial_density_weighted.pdf} \caption{Black circles are the radial number density of source galaxies centered on the BCG. Contamination by cluster galaxies might manifest as an overdensity near the cluster central region. The flat profile suggests that cluster member contamination is minimal. The dashed vertical line represents the radius at which a circle is no longer complete in the \textit{HST} image. Error bars are Poissonian errors. \added{\edit1{Blue circles are the radial number density weighted by the inverse shape variance $1/(\sigma_{\rm int}^2 + \sigma_e^2)$.}}} \label{fig:radial_density} \end{figure} \subsection{Source Redshift Estimation} As shown in Equation \ref{eq:sigma_c}, the WL signal is proportional to the lensing efficiency, $\beta$. A proper characterization of $\beta$ relies on accurate knowledge of the angular diameter distances to the galaxy cluster and the source galaxies. However, the limited filter coverage for SpARCS1049 prevents direct calculation of distances to the source galaxies. As an alternative, we use the UVUDF photometric redshift catalog \citep{2015rafelski} as a control field, model it to represent our source catalog, and infer a representative distance to the source galaxies.
We constrain the UVUDF catalog with the same magnitude cuts specified in \textsection\ref{section:source_selection}. A comparison of the number density of galaxies in the source catalog and the UVUDF catalog is shown in Figure \ref{fig:redshift}. The number density of galaxies in the two catalogs is consistent in the 25 to 26 magnitude range. Fainter than the 26th magnitude, the number density discrepancy can be attributed to the much deeper imaging of the UVUDF. To make the UVUDF catalog representative of our source catalog, we weight the UVUDF control catalog by the ratio of the UVUDF to SpARCS1049 galaxy number density. The effective redshift and corresponding $\beta$ are calculated from the weighted UVUDF catalog as \begin{equation} \beta = \left< \mathrm{max} \left[ 0 , \frac{D_{ls}}{D_s} \right] \right>, \end{equation} where all foreground galaxies are assigned zero before averaging because they contain no lensing signal. From the weighted UVUDF catalog, we infer an effective redshift of 2.08 and $\beta = 0.107$ for our source catalog. Bias is introduced when representing all source galaxies by a single redshift. We reduce the bias as suggested in \cite{1997seitz} by taking the width of the $\beta$ distribution, $\left<\beta^2\right>=0.03$, into consideration. One may question whether the $\beta$ derived from a field as small as the UVUDF is representative of the SpARCS1049 field. \cite{2014jee} compared the UVUDF to the UDF, GOODS-S, and GOODS-N redshift catalogs and found comparable $\beta$ values for each catalog. They reported that the variation of $\beta$ between catalogs affects mass estimates by at most \mytilde4\%. This small sample variance is attributed to the great depth of the {\it HST} image, which provides access to large distances along the line of sight. Adding this uncertainty to the statistical uncertainty (\mytilde25\%) in quadrature shows that the statistical uncertainty on the mass will be dominant.
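The per-galaxy term $\mathrm{max}[0, D_{ls}/D_s]$ in the equation above can be sketched as follows. The cosmological parameters here ($\Omega_m=0.3$, flat $\Lambda$CDM) are our assumption for illustration, not values quoted in this paper; $H_0$ cancels in the ratio:

```python
import numpy as np
from scipy.integrate import quad

OMEGA_M, OMEGA_L = 0.3, 0.7   # assumed flat LambdaCDM parameters (illustrative)

def _E(z):
    """Dimensionless Hubble parameter H(z)/H0 for flat LambdaCDM."""
    return np.sqrt(OMEGA_M * (1.0 + z) ** 3 + OMEGA_L)

def comoving_distance(z):
    """Line-of-sight comoving distance in units of the Hubble distance c/H0
    (the normalization cancels in the ratio beta)."""
    return quad(lambda zp: 1.0 / _E(zp), 0.0, z)[0]

def beta(z_lens, z_src):
    """Lensing efficiency max[0, D_ls/D_s]; foreground galaxies
    (z_src <= z_lens) carry no lensing signal and are assigned zero."""
    if z_src <= z_lens:
        return 0.0
    chi_l, chi_s = comoving_distance(z_lens), comoving_distance(z_src)
    # Flat universe: D_ls = (chi_s - chi_l)/(1+z_s), D_s = chi_s/(1+z_s).
    return (chi_s - chi_l) / chi_s

b = beta(1.71, 2.08)   # lens redshift and the effective source redshift above
```

With these assumed parameters the single-redshift ratio comes out close to the quoted $\beta = 0.107$; in practice $\beta$ is averaged over the full weighted UVUDF redshift distribution rather than evaluated at one effective redshift.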
\begin{figure} \centering \includegraphics[width=0.45\textwidth]{uvudf_histogram.pdf} \caption{Magnitude distribution of galaxies in the SpARCS1049 WL source catalog. The UVUDF redshift catalog is used as a control field to estimate the redshift of the WL source catalog. In the 25 to 26 magnitude range the completeness of the SpARCS1049 and UVUDF catalogs is consistent. The discrepancy fainter than the 26th magnitude arises from the vastly different exposure times between our cluster imaging and the UVUDF. We compute the error bars assuming Poisson distributions. } \label{fig:redshift} \end{figure} \section{Results} \label{sec:results} \subsection{Mass Reconstruction} A powerful aspect of WL is its ability to measure the projected mass distribution of the lens with minimal assumptions. There are multiple techniques that can be used to convert the observed shear $g$ to the convergence $\kappa$. We rely on the MAXENT method of \cite{2007bjee}, which converges to a solution that maximizes the entropy of a pixelized mass map while providing a reasonable goodness of fit for the galaxy shapes. Figure \ref{fig:convergence} is the convergence map for SpARCS1049. The convergence is smoothed with a $\sigma=10\arcsec$ Gaussian kernel to remove a pixelation artifact. The convergence shows a slight elongation in the east-west direction but in general has a relaxed distribution \added{for the applied smoothing scale}. The mass peak lies \mytilde10\arcsec\ (\mytilde90 kpc) to the southwest of the BCG. This offset, if significant, could be interpreted as an indication that the cluster mass is not centered on the BCG. To test the significance of the offset and the strength of our WL signal, we bootstrap the source catalog 1000 times. From the bootstrapped samples, we find that the cluster is detected at 3.3$\sigma$ significance. The resampled catalogs also reveal that the $1\sigma$ uncertainty of the convergence peak location is \mytilde13\arcsec.
Thus, we conclude that the mass map shows no statistically significant offset from the BCG. \begin{figure*}[!ht] \centering \includegraphics[width=\textwidth]{kmap_sparcsj1049_gaiaaligned.pdf} \caption{Mass reconstruction for SpARCS1049. Contour labels are subject to the mass-sheet degeneracy. The distribution appears relaxed and does not show signs of substructures but does have a slight elongation in the east-west direction. The apparent offset of the mass peak and the BCG is shown to be insignificant through a bootstrap analysis. \added{\edit1{The significance (S/N) of the contours from the bootstrap result ranges from 2.0$\sigma$ for the lowest contour to 2.5$\sigma$ for the highest. The peak significance is 3.3$\sigma$. }}} \label{fig:convergence} \end{figure*} \subsection{Mass Estimation} Accurate estimation of the cluster mass is the primary goal of this work. There are numerous techniques that can be used to estimate the mass of a cluster from the observed galaxy ellipticity distribution. We choose to estimate the mass by fitting model profiles to the azimuthally averaged tangential shear. The reduced tangential shear $g_T$ at radius $r$ measures the surface density contrast between the mean value within $r$ and the value at $r$, divided by $1-\kappa$. It is written in terms of the tangential component of the complex shear $g$ as \begin{equation} g_T = -g_1\cos2\theta - g_2\sin2\theta \end{equation} where $\theta$ is the position angle of the source with respect to the cluster center, measured counter-clockwise. Rotating $\theta$ by 45 degrees gives the cross shear, which should be consistent with zero in the absence of systematic effects for a circularly symmetric projected mass distribution. Figure \ref{fig:nfw_bcg} shows the tangential shear measured in $10\arcsec$ bins centered on the BCG. The tangential shear profile is sensitive to the choice of center, particularly at small radii.
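The tangential/cross decomposition defined above is simple to apply to a shear catalog; the sketch below (function and variable names are ours) rotates the two shear components about a chosen center:

```python
import numpy as np

def tangential_cross(g1, g2, x, y, xc, yc):
    """Tangential and cross components of the reduced shear about (xc, yc).
    The cross component equals the tangential one with theta rotated 45 deg."""
    theta = np.arctan2(y - yc, x - xc)   # position angle, CCW from +x axis
    g_t = -g1 * np.cos(2.0 * theta) - g2 * np.sin(2.0 * theta)
    g_x = g1 * np.sin(2.0 * theta) - g2 * np.cos(2.0 * theta)
    return g_t, g_x

# Toy check: a source on the +x axis with g1 = -0.05 is purely tangential.
g_t, g_x = tangential_cross(np.array([-0.05]), np.array([0.0]),
                            np.array([10.0]), np.array([0.0]), 0.0, 0.0)
# g_t -> 0.05, g_x -> 0
```

In the analysis, the binned profile is obtained by averaging $g_T$ over sources in each $10\arcsec$ annulus around the chosen center.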
We center our shear measurements on the BCG because it is consistent with the lensing peak and is an independent tracer of the cluster center. \added{\edit1{We also tested using the convergence peak as the center of the tangential shear fit and found that the derived mass is consistent with using the BCG as the center.}} The tangential shear profile clearly shows the detection of the lensing signal, and the cross shear is consistent with zero. The outer limit of the tangential shear profile is set by the edge of the mosaic image, and the data points beyond $60\arcsec$ are affected by the bright galaxy in the north. To estimate the mass, we fit 1D density models to the tangential shear as shown in Figure \ref{fig:nfw_bcg}. The first density profile that we fit is the singular isothermal sphere (SIS). The SIS profile returns a fitted velocity dispersion of $\sigma_v=833\pm84$ km s$^{-1}$. Many density profiles have been derived from cosmological simulations that would all be appropriate to fit to the tangential shear. We fit some of the popular NFW-based models \added{\citep{1997navarro}} to our tangential shear so that direct comparison can be made with published galaxy cluster lensing masses. \added{\edit1{These fits are performed by assuming that the tangential shear profile follows a fixed concentration-mass ($c$-$M$) relation and fitting only the mass $M_{200c}$.}} \added{We utilize the \texttt{Colossus} code \citep{2018diemer} when performing the fits.} The masses are summarized in Table \ref{table:masses}. All three models return consistent masses. However, not all $c$-$M$ relations should be considered equal. As explained in detail in \cite{2019diemer}, the $c$-$M$ relation strongly depends on redshift and cosmology. Models that fit the average concentration, such as \cite{2008duffy} and \cite{2014dutton}, are only valid under the assumed cosmology and redshift range of the simulations from which they are extracted.
Furthermore, power-law fits do not capture the upturn at high redshift and high mass that is shown in \cite{2015diemer}. \cite{2013ludlow} attribute the upturn to unrelaxed clusters. As SpARCS1049 is at a redshift of 1.71 and is likely in an early stage of formation, we suggest that the \cite{2019diemer} model is a good choice for a reasonably high-mass cluster. Furthermore, of the three $c$-$M$ models that we fit, the \cite{2019diemer} model provides the best fit, with reduced $\chi^2 = 1.03$. Throughout the discussion, we will use $(3.5\pm1.2)\times10^{14}$ M$_\odot$ for our WL mass estimate. \begin{table}[] \centering \caption{NFW density model fits to the tangential shear.} \label{table:masses} \renewcommand{\arraystretch}{1.5} \begin{tabular}{l c c c} \hline \hline Model & $c_{200}$ & $M_{200c}$ $[\times10^{14} \text{M}_\odot]$ & $\chi_r^2$ \\ \hline \cite{2008duffy} & $2.2\pm0.2$ & $5.9\pm3.5$ & 1.27\\ \cite{2014dutton} & $3.1\pm0.1$ & $4.5\pm2.3$ & 1.16 \\ \cite{2019diemer} & $4.5\pm0.3$ & $3.5\pm1.2$ & 1.03 \\ \hline \hline \end{tabular} \end{table} \begin{figure} \centering \includegraphics[width=0.45\textwidth]{nfw_tanshear_fit_paper.pdf} \caption{Density model fits to the tangential shear profile centered on the BCG. Blue circles are the tangential shear and black crosses are the cross shear. Error bars are the Poisson error. The cross shear has been shifted by $1\arcsec$ for display purposes. Three density profiles are shown: the SIS profile, the $c$-$M$ relation of \cite{2014dutton}, and the $c$-$M$ relation of \cite{2019diemer}.} \label{fig:nfw_bcg} \end{figure} \section{Discussion} \label{sec:discussion} \subsection{Mass Comparison with Previous Studies} A single previous study on the mass of SpARCS1049 exists. \cite{2015webb} estimated the mass of SpARCS1049 through a mass-richness scaling relation and through the velocity dispersion of cluster member galaxies.
They determined the abundance of cluster galaxies from Spitzer 3.6 $\mu$m observations to be $30\pm8$. This returned a mass of $\text{M}_{500\text{kpc}}=3.8\pm1.2\times 10^{14}\ \text{M}_{\odot}$ from application of the mass-richness scaling relation of \cite{2014andreon}. The authors note that the \mytilde30\% uncertainty on this mass does not take into consideration any redshift evolution of the scaling relation. To find the velocity dispersion of the cluster galaxies, \cite{2015webb} obtained Keck spectroscopic measurements. The classification of cluster galaxies by these observations relied on the detection of the H$\alpha$ line. As the authors noted, this biases the sample toward emission-line galaxies. Nevertheless, they classified 27 cluster member galaxies within 1500 km s$^{-1}$ of the mean cluster redshift and within a 1.8 Mpc cluster-centric radius. The authors also mentioned that this included galaxies beyond the virial radius of the cluster. From the classified cluster members, the resulting velocity dispersion is $\sigma=430^{+80}_{-100}$ km s$^{-1}$ and the inferred mass is $\text{M}_\text{virial}=8 \pm 3 \times 10^{13}\ \text{M}_{\odot}$, after applying the velocity dispersion to virial mass relation of \cite{2008evrard}. The \mytilde40\% uncertainty reflects the unreliability of using strictly emission-line galaxies to derive the velocity dispersion. It is peculiar that the mass from the velocity dispersion is much lower than that from the mass-richness relation; it goes against the notion that emission-line galaxies should be infalling and have an inflated velocity dispersion. \begin{figure} \centering \includegraphics[width=0.45\textwidth]{SPARCSJ1049_vdisp.pdf} \caption{Velocity histogram of cluster member galaxies selected within 1500 km s$^{-1}$ of the average velocity. The blue line is the best-fit Gaussian model.
An Anderson-Darling test fails to reject the null hypothesis that the galaxies follow a normal distribution.} \label{fig:vel_disp} \end{figure} Using an updated spectroscopic redshift catalog, we selected member galaxies in the same manner as \cite{2015webb}. Applying the biweight velocity dispersion estimator to the 27 detected members gives $\sigma_v=446\pm80$ km s$^{-1}$ and $\text{M}_\text{virial}=1.0\pm0.4\times10^{14}$ M$_\odot$ with the conversion from \cite{2008evrard}. We attach the same 40\% uncertainty on the mass as \cite{2015webb}. This mass is consistent with the findings of \cite{2015webb}. Figure \ref{fig:vel_disp} shows the velocity histogram of the cluster galaxies. Performing an Anderson-Darling test on the cluster galaxies fails to reject the null hypothesis that they follow a normal distribution. Our WL mass estimate $\text{M}_{200} = 3.5\pm1.2\times10^{14}\ \text{M}_\odot$ provides the first mass estimate free of a dynamical equilibrium assumption. This mass estimate is consistent with the mass-richness estimation. However, there is a discrepancy with the mass from the velocity dispersion. \subsection{Rarity} Massive galaxy clusters at high redshift are expected to be rare according to the hierarchical structure formation model. SpARCS1049 was selected for this study because of its known large mass and should be tested for its rarity. Future work will fully analyze the rarity of the See Change sample of massive galaxy clusters between redshifts 1.10 and 1.75. We determine the rarity of this cluster by integrating the expected number of clusters above a minimum mass and redshift as \begin{equation} N(M,z) = \int^\infty_{z_{\text{min}}}\int^\infty_{M_{\text{min}}} \frac{dV(z)}{dz} \frac{dn}{dM}dM dz \end{equation} where $dV/dz$ is the volume element and $dn/dM$ is the mass function. We set the lower limits of the integrals to $z_{\text{min}}=1.71$ and $M_{\text{min}}=3.5\times10^{14}\ \text{M}_\odot$, the central mass estimate.
The exact upper limits of the integrals are insignificant because the rarity of the cluster (the steepness of the mass function in this regime) causes the integral to converge quickly. Using HMFCalc \citep{2013murray}, we adopt the mass function of \cite{2008tinker} as updated by \cite{2013behroozi}. The estimated abundance of clusters with the mass and redshift of SpARCS1049 is \mytilde12 over the full sky, or \mytilde0.01 clusters within the \mytilde41.9 deg$^2$ footprint of SpARCS. Alternatively, taking the 1-$\sigma$ lower limit of our mass estimate, $M_{\text{min}}=2.3\times10^{14}\ \text{M}_\odot$, gives \mytilde185 clusters in the entire sky, or \mytilde0.2 in the SpARCS field. For comparison, the rarities of two additional See Change clusters, IDCS J1426+3508 (z=1.75) and SPT-CL J2040-4451 (z=1.48), are \mytilde1200 and \mytilde1 clusters in the full sky, respectively, using their WL measured central mass values \citep{2017jee}. Thus, SpARCS1049 is similar in rarity to other See Change clusters. This type of rarity calculation has well-documented limitations \citep[][]{2011hotchkiss, 2012hoyle, 2013harrison}. As pointed out by \cite{2011hotchkiss}, the rarity integral only considers clusters that have mass and redshift greater than or equal to the selected lower limits. The calculation neglects equally rare clusters that exist at higher mass but lower redshift and vice versa, which biases the inferred abundance low and overstates the rarity. Furthermore, the rarity calculation relies on integration of a mass function that is derived from cosmological simulations, which often poorly reproduce the high-mass, high-redshift end of the mass function. \cite{2013murrayb} report that the halo mass function has \mytilde20\% uncertainty at the high-mass end. An additional limitation comes from the Eddington bias \citep{1913eddington,2011mortonson}.
Eddington bias occurs because the mass function of the universe is steeply declining with increasing mass at the mass and redshift of SpARCS1049. Therefore, it is more likely to overestimate a cluster mass than to underestimate a cluster mass for such an extreme object. \section{Conclusions} \label{sec:conclusion} An \textit{HST}-IR WL analysis of the massive galaxy cluster SpARCS1049 is presented. \textit{HST}-IR detector systematics have been quantified with a specific focus on the brighter-fatter effect. Our simulations show that the brighter-fatter effect gives at most a 2\% shape bias in our shear measurements. The systematics discussed will be important for future WL studies with next generation telescopes, such as JWST, Euclid, and WFIRST. The projected mass distribution has been reconstructed from the averaged background galaxy ellipticities. The mass distribution is seemingly relaxed \added{for the applied smoothing scale} with the centroid consistent with the BCG. We have found the mass of the cluster to be $3.5\pm1.2\times10^{14}\ \text{M}_\odot$ for our best-fit NFW model. This mass is consistent with the mass estimated from a mass-richness scaling relation. However, it is inconsistent with the mass from velocity dispersion of spectroscopically confirmed cluster galaxies. Finally, we have tested the mass of the cluster for its rarity. We have found the expected abundance of similarly massive clusters to be $<1$ within the parent survey, thus suggesting that SpARCS1049 is a uniquely massive cluster. \acknowledgments Support for the current {\it HST}~program was provided by NASA through a grant from the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Incorporated, under NASA contract NAS5-26555. This study is supported by the program Yonsei University Future-Leading Research Initiative. M. J. 
Jee acknowledges support for the current research from the National Research Foundation of Korea under the programs 2017R1A2B2004644 and 2017R1A4A1015178. GW acknowledges support from the National Science Foundation through grant AST-1517863, by HST program number GO-15294, and by grant number 80NSSC17K0019 issued through the NASA Astrophysics Data Analysis Program (ADAP).
\section{Introduction} The motion of a general barotropic compressible fluid is described by the following system: \begin{equation} \begin{cases} \begin{aligned} &\partial_{t}\rho+{\rm div}(\rho u)=0,\\ &\partial_{t}(\rho u)+{\rm div}(\rho u\otimes u)-{\rm div}(\mu(\rho)D(u))-\nabla(\lambda(\rho){\rm div} u)+\nabla P(\rho)=\rho f,\\ &(\rho,u)_{/t=0}=(\rho_{0},u_{0}). \end{aligned} \end{cases} \label{0.1} \end{equation} Here $u=u(t,x)\in\mathbb{R}^{N}$ stands for the velocity field and $\rho=\rho(t,x)\in\mathbb{R}^{+}$ is the density. The pressure $P$ is a suitably smooth function of $\rho$. We denote by $\lambda$ and $\mu$ the two viscosity coefficients of the fluid, which are assumed to satisfy $\mu>0$ and $\lambda+2\mu>0$ (in the sequel, to simplify the computations, we take the viscosity coefficients constant). This condition ensures ellipticity of the momentum equation and is satisfied in the physical cases, where $\lambda+\frac{2\mu}{N}>0$. We supplement the problem with an initial condition $(\rho_{0},u_{0})$ and an outer force $f$. Throughout the paper, we assume that the space variable $x$ belongs to $\mathbb{R}^{N}$ or to the periodic box ${\cal T}^{N}_{a}$ with period $a_{i}$ in the $i$-th direction, and we restrict ourselves to the case $N\geq2$.\\ The problem of global-in-time existence for the Navier-Stokes equations was addressed in one dimension for smooth enough data by Kazhikhov and Shelukhin in \cite{5K}, and for discontinuous data, but still with densities away from zero, by Serre in \cite{5S} and Hoff in \cite{5H1}. Those results were generalized to higher dimensions by Matsumura and Nishida in \cite{MN} for smooth data close to equilibrium, and by Hoff in the case of discontinuous data in \cite{5H2,5H3}. Those results do not require the density to be bounded away from vacuum. 
The existence and uniqueness of local classical solutions to (\ref{0.1}) with smooth initial data such that the density $\rho_{0}$ is bounded and bounded away from zero (i.e., $0<\underline{\rho}\leq\rho_{0}\leq M$) was established by Nash in \cite{Nash}. Let us emphasize that no stability condition was required there. On the other hand, for small smooth perturbations of a stable equilibrium with constant positive density, global well-posedness was proved in \cite{MN}. In the one-dimensional case, many works have been devoted to the qualitative behavior of solutions for large time (see for example \cite{5H1,5K}). Refined functional analysis has been used over the last decades, ranging over Sobolev, Besov, Lorentz and Triebel spaces, to describe the regularity and long-time behavior of solutions to the compressible model \cite{5So}, \cite{5V}, \cite{5H4}, \cite{5K1}. Let us recall that (local) existence and uniqueness for (\ref{0.1}) in the case of smooth data with no vacuum was established long ago in the pioneering works by J. Nash \cite{Nash} and A. Matsumura and T. Nishida \cite{MN}. For weak-strong uniqueness results, we refer to the work of P. Germain \cite{PG}.\\ Guided in our approach by the numerous works dedicated to the incompressible Navier-Stokes equations (see e.g. \cite{Meyer}): $$ \begin{cases} \begin{aligned} &\partial_{t}v+v\cdot\nabla v-\mu\Delta v+\nabla\Pi=0,\\ &{\rm div}v=0, \end{aligned} \end{cases} \leqno{(NS)} $$ we aim at solving (\ref{0.1}) in the case where the data $(\rho_{0},u_{0},f)$ have \textit{critical} regularity.\\ By critical, we mean that we want to solve the system in functional spaces whose norm is invariant under the changes of scale which leave (\ref{0.1}) invariant. 
In the case of barotropic fluids, it is easy to see that the transformations: \begin{equation} (\rho(t,x),u(t,x))\longrightarrow (\rho(l^{2}t,lx),lu(l^{2}t,lx)),\;\;\;l\in\mathbb{R}, \label{1} \end{equation} have that property, provided that the pressure term has been changed accordingly.\\ The use of critical functional frameworks has led to several new well-posedness results for compressible fluids (see \cite{DL,DG,DW}). In addition to having a norm invariant under (\ref{1}), an appropriate functional space for solving (\ref{0.1}) must provide a control on the $L^{\infty}$ norm of the density (in order to avoid vacuum and loss of ellipticity). For that reason, we restrict our study to the case where the initial data $(\rho_{0},u_{0})$ and the external force $f$ are such that, for some positive constant $\bar{\rho}$: $$(\rho_{0}-\bar{\rho})\in B^{\frac{N}{p}}_{p,1},\;u_{0}\in B^{\frac{N}{p_{1}}-1}_{p_{1},1}\;\;\mbox{and}\;\;f\in L^{1}_{loc}(\mathbb{R}^{+};B^{\frac{N}{p_{1}}-1}_{p_{1},1})$$ with $(p,p_{1})\in [1,+\infty[$ suitably chosen. In \cite{DW}, however, one had to take $p=p_{1}$: in that article there is a very strong coupling between the pressure and the velocity, since the pressure term is treated as a remainder for the elliptic operator in the momentum equation of (\ref{0.1}). This paper improves the results of R. Danchin in \cite{DL,DW}, in the sense that the initial density belongs to the larger spaces $B^{\frac{N}{p}}_{p,1}$ with $p\in[1,+\infty[$. The main idea of this paper is to introduce a new variable in place of the velocity, in order to \textit{kill} the coupling between the velocity and the density. In the present paper, we address the question of local well-posedness in the critical functional framework under the assumption that the initial density belongs to a critical Besov space with an integrability index different from that of the velocity. 
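Coming back to the scaling (\ref{1}), its invariance can be checked directly on the mass equation: setting $\rho_{l}(t,x)=\rho(l^{2}t,lx)$ and $u_{l}(t,x)=lu(l^{2}t,lx)$, the chain rule gives
$$
\begin{aligned}
\partial_{t}\rho_{l}+{\rm div}(\rho_{l}u_{l})&=l^{2}(\partial_{t}\rho)(l^{2}t,lx)+l^{2}\,{\rm div}(\rho u)(l^{2}t,lx)\\
&=l^{2}\big(\partial_{t}\rho+{\rm div}(\rho u)\big)(l^{2}t,lx),
\end{aligned}
$$
which vanishes whenever $(\rho,u)$ solves (\ref{0.1}). In the momentum equation, the convection and diffusion terms scale like $l^{3}$, whereas $\nabla P(\rho_{l})$ only produces a factor $l$; this is why the pressure has to be rescaled accordingly (for instance $P\rightarrow l^{2}P$) for the full invariance to hold.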
We adapt the spirit of the results of \cite{AP} and \cite{H}, which treat the incompressible Navier-Stokes equations with variable density (with the difference that, in those works, the velocity and the density are naturally decoupled). To simplify the notation, we assume from now on that $\bar{\rho}=1$. Hence, as long as $\rho$ does not vanish, the equations for ($a=\rho^{-1}-1$, $u$) read: \begin{equation} \begin{cases} \begin{aligned} &\partial_{t}a+u\cdot\nabla a=(1+a){\rm div}u,\\ &\partial_{t}u+u\cdot\nabla u-(1+a){\cal A}u+\nabla (g(a))=f, \end{aligned} \end{cases} \label{0.6} \end{equation} In the sequel we write ${\cal A}=\mu\Delta+(\lambda+\mu)\nabla{\rm div}$, and $g$ is a smooth function which may be computed from the pressure function $P$.\\ One can now state our main result. \begin{theorem} Let $P$ be a suitably smooth function of the density and $1\leq p_{1}\leq p\leq 2N$ be such that $\frac{1}{p_{1}}\leq\frac{1}{N}+\frac{1}{p}$ and $2\frac{N}{p}-1>0$. Assume that $u_{0}\in B^{\frac{N}{p_{1}}-1}_{p_{1},1}$, $f\in L^{1}_{loc}(\mathbb{R}^{+},B^{\frac{N}{p_{1}}-1}_{p_{1},1})$ and $a_{0}\in B^{\frac{N}{p}}_{p,1}$ with $1+a_{0}$ bounded away from zero. \\ If $\frac{1}{p}+\frac{1}{p_{1}}>\frac{1}{N}$, there exists a positive time $T$ such that system (\ref{0.1}) has a solution $(a,u)$ with $1+a$ bounded away from zero, $$a\in \widetilde{C}([0,T],B^{\frac{N}{p}}_{p,1}),\;\;u\in \widetilde{C}([0,T];B^{\frac{N}{p_{1}}-1}_{p_{1},1}+B^{\frac{N}{p}+1}_{p,1})\cap\widetilde{L}^{1}_{T}( B^{\frac{N}{p_{1}}+1}_{p_{1},1}+B^{\frac{N}{p}+2}_{p,1}).$$ Moreover this solution is unique if $\frac{2}{N}\leq\frac{1}{p}+\frac{1}{p_{1}}$. \label{theo1} \end{theorem} \begin{remarka} It seems possible to improve Theorem \ref{theo1} by choosing initial data $a_{0}$ in $B^{\frac{N}{p}}_{p,\infty}\cap B^{0}_{\infty,1}$; however, some supplementary conditions on $p_{1}$ appear in this case. 
\end{remarka} The key to Theorem \ref{theo1} is to introduce a new variable $v_{1}$ to control the velocity: in order to avoid the coupling between the density and the velocity, we analyze the pressure term in a new way. More precisely, we write the gradient of the pressure as a Laplacian of the variable $v_{1}$, and we include this term in the linear part of the momentum equation. We then have a control on $v_{1}$, which roughly reads $u-{\cal G}P(\rho)$, where ${\cal G}$ is a pseudodifferential operator of order $-1$. In this way, we cancel the coupling between $v_{1}$ and the density; we next easily verify that we have a Lipschitz control on the gradient of $u$ (which is crucial to estimate the density via the transport equation). \begin{remarka} In the present paper we did not strive for unnecessary generality, which may hide the new ideas of our analysis. Hence we focused on the somewhat academic model of barotropic fluids. In physical contexts, however, a coupling with the energy equation has to be introduced. Besides, the viscosity coefficients may depend on the density. We believe that our analysis may be carried out for these more general models (see \cite{H1}). \end{remarka} In \cite{5H5}, D. Hoff shows a very strong uniqueness theorem for weak solutions when the pressure is of the specific form $P(\rho)=K\rho$ with $K>0$. Similarly, in \cite{5H2}, \cite{5H3}, \cite{5H4}, D. Hoff obtains global weak solutions with regularizing effects on the velocity. In particular, when the pressure is of this form, he does not need any estimate on the gradient of the density. In the following corollary, we will observe that this type of pressure ensures a specific structure and avoids imposing $p<2N$. \begin{corollaire} Assume that $P(\rho)=K\rho$ with $K>0$. Let $1\leq p_{1}\leq p\leq+\infty$ be such that $\frac{1}{p_{1}}\leq\frac{1}{N}+\frac{1}{p}$. 
Assume that $u_{0}\in B^{\frac{N}{p_{1}}-1}_{p_{1},1}$, $f\in L^{1}_{loc}(\mathbb{R}^{+},B^{\frac{N}{p_{1}}-1}_{p_{1},1})$ and $a_{0}\in B^{\frac{N}{p}}_{p,1}$ with $1+a_{0}$ bounded away from zero. If $\frac{1}{p}+\frac{1}{p_{1}}>\frac{1}{N}$, there exists a positive time $T$ such that system (\ref{0.1}) has a solution $(a,u)$ with $1+a$ bounded away from zero, $$a\in \widetilde{C}([0,T],B^{\frac{N}{p}}_{p,1}),\;\;u\in \widetilde{C}([0,T];B^{\frac{N}{p_{1}}-1}_{p_{1},1}+B^{\frac{N}{p}+1}_{p,1})\cap \widetilde{L}^{1}_{T}( B^{\frac{N}{p_{1}}+1}_{p_{1},1}+B^{\frac{N}{p}+1}_{p,1}).$$ If moreover we assume that $\sqrt{\rho_{0}}u_{0}\in L^{2}$, $\rho_{0}-\bar{\rho}\in L^{1}_{2}$, $u_{0}\in L^{\infty}$ and $\lambda=0$, then the solution $(a,u)$ is unique. \label{coro11} \end{corollaire} \begin{remarka} Here $L^{1}_{2}$ denotes the corresponding Orlicz space. \end{remarka} \begin{remarka} To my knowledge, it seems that this is the first time that a strong solution is obtained without any control, in a space of positive regularity, on the gradient of the density. \end{remarka} \begin{remarka} Moreover, we can observe that with this type of pressure we are very close to obtaining existence of strong solutions on a finite time interval for initial data $(a_{0},u_{0})$ in $B^{0}_{\infty,1}\times B^{1}_{N,1}$. This means that this theorem links the result of D. Hoff, where the initial density is assumed to be only $L^{\infty}$ but where the initial velocity is more regular, with the results of R. Danchin in \cite{DW}. \end{remarka} \begin{remarka} In particular, we can show that the solutions of D. Hoff in \cite{5H4} are unique on a finite time interval $[0,T]$. \end{remarka} The study of the linearization of (\ref{0.1}) also leads to the following continuation criterion: \begin{theorem} \label{theo3} Let $1\leq p_{1}\leq p\leq+\infty$ be such that $\frac{N}{p_{1}}-1\leq\frac{N}{p}$ and $\frac{N}{p_{1}}-1+\frac{N}{p}>0$. 
Assume that (\ref{0.1}) has a solution $(a,u)\in C([0,T),B^{\frac{N}{p}}_{p,1}\times(B^{\frac{N}{p_{1}}-1}_{p_{1},1}+B^{\frac{N}{p}+1}_{p,1})^{N})$ with $p_{1}>N$, $\rho_{0}^{\frac{1}{p_{1}}}u_{0}\in L^{p_{1}}$ and: \begin{equation} \lambda\leq\frac{4\mu}{N^{2}(p_{1}-1)}, \label{inegaliteviscosite} \end{equation} on the time interval $[0,T)$, which satisfies the following conditions: \begin{itemize} \item the function $a$ belongs to $L^{\infty}(0,T;B^{\frac{N}{p}}_{p,1})$, \item the function $1+a$ is bounded away from zero. \end{itemize} Then $(a,u)$ may be continued beyond $T$. \end{theorem} \begin{remarka} To my knowledge, this is the first time that a blow-up criterion for strong solutions of the compressible Navier-Stokes system is obtained without imposing a Lipschitz control on $\nabla u$. \end{remarka} Our paper is structured as follows. In Section \ref{section2}, we give a few notations and briefly introduce the basic Fourier analysis techniques needed to prove our results. Sections \ref{section3} and \ref{section4} are devoted to the proof of key estimates for the linearized system (\ref{0.1}). In Section \ref{section5}, we prove Theorem \ref{theo1} and Corollary \ref{coro11}, whereas Section \ref{section6} is devoted to the proof of the continuation criteria of Theorems \ref{theo2} and \ref{theo3}. Two technical commutator estimates and some ellipticity results are postponed to an appendix. \section{Littlewood-Paley theory and Besov spaces} \label{section2} Throughout the paper, $C$ stands for a constant whose exact meaning depends on the context. The notation $A\lesssim B$ means that $A\leq CB$. For all Banach spaces $X$, we denote by $C([0,T],X)$ the set of continuous functions on $[0,T]$ with values in $X$. For $p\in[1,+\infty]$, the notation $L^{p}(0,T,X)$ or $L^{p}_{T}(X)$ stands for the set of measurable functions on $(0,T)$ with values in $X$ such that $t\rightarrow\|f(t)\|_{X}$ belongs to $L^{p}(0,T)$. 
Littlewood-Paley decomposition corresponds to a dyadic decomposition of the space in Fourier variables. Let $\alpha>1$ and $(\varphi,\chi)$ be a couple of smooth functions valued in $[0,1]$, such that $\varphi$ is supported in the annulus $\{\xi\in\mathbb{R}^{N}/\alpha^{-1}\leq|\xi|\leq2\alpha\}$, $\chi$ is supported in the ball $\{\xi\in\mathbb{R}^{N}/|\xi|\leq\alpha\}$, and: $$\forall\xi\in\mathbb{R}^{N},\;\;\;\chi(\xi)+\sum_{l\in\mathbb{N}}\varphi(2^{-l}\xi)=1.$$ Denoting $h={\cal{F}}^{-1}\varphi$, we then define the dyadic blocks by: $$ \begin{aligned} &\Delta_{l}u=0\;\;\;\mbox{if}\;\;l\leq-2,\\ &\Delta_{-1}u=\chi(D)u=\widetilde{h}*u\;\;\;\mbox{with}\;\;\widetilde{h}={\cal F}^{-1}\chi,\\ &\Delta_{l}u=\varphi(2^{-l}D)u=2^{lN}\int_{\mathbb{R}^{N}}h(2^{l}y)u(x-y)dy\;\;\;\mbox{if}\;\;l\geq0,\\ &S_{l}u=\sum_{k\leq l-1}\Delta_{k}u\,. \end{aligned} $$ Formally, one can write: $u=\sum_{k\in\mathbb{Z}}\Delta_{k}u\,.$ This decomposition is called the nonhomogeneous Littlewood-Paley decomposition. \subsection{Nonhomogeneous Besov spaces and first properties} \begin{definition} For $s\in\mathbb{R},\,\,p\in[1,+\infty],\,\,q\in[1,+\infty],\,\,\mbox{and}\,\,u\in{\cal{S}}^{'}(\mathbb{R}^{N})$ we set: $$\|u\|_{B^{s}_{p,q}}=(\sum_{l\in\mathbb{Z}}(2^{ls}\|\Delta_{l}u\|_{L^{p}})^{q})^{\frac{1}{q}}.$$ The Besov space $B^{s}_{p,q}$ is the set of tempered distributions $u$ such that $\|u\|_{B^{s}_{p,q}}<+\infty$. \end{definition} \begin{remarka}The above definition is a natural generalization of the nonhomogeneous Sobolev and H\"older spaces: one can show that $B^{s}_{\infty,\infty}$ is the nonhomogeneous H\"older space $C^{s}$ and that $B^{s}_{2,2}$ is the nonhomogeneous Sobolev space $H^{s}$. 
\end{remarka} \begin{proposition} \label{derivation,interpolation} The following properties hold: \begin{enumerate} \item there exists a universal constant $C$ such that:\\ $C^{-1}\|u\|_{B^{s}_{p,r}}\leq\|\nabla u\|_{B^{s-1}_{p,r}}\leq C\|u\|_{B^{s}_{p,r}}.$ \item If $p_{1}\leq p_{2}$ and $r_{1}\leq r_{2}$ then $B^{s}_{p_{1},r_{1}}\hookrightarrow B^{s-N(1/p_{1}-1/p_{2})}_{p_{2},r_{2}}$. \item $B^{s^{'}}_{p,r_{1}}\hookrightarrow B^{s}_{p,r}$ if $s^{'}> s$ or if $s=s^{'}$ and $r_{1}\leq r$. \end{enumerate} \label{interpolation} \end{proposition} Before going further into the paraproduct for Besov spaces, let us state an important proposition. \begin{proposition} Let $s\in\mathbb{R}$ and $1\leq p,r\leq+\infty$. Let $(u_{q})_{q\geq-1}$ be a sequence of functions such that $$(\sum_{q\geq-1}2^{qsr}\|u_{q}\|_{L^{p}}^{r})^{\frac{1}{r}}<+\infty.$$ If $\mbox{supp}\,\hat{u}_{q}\subset {\cal C}(0,2^{q}R_{1},2^{q}R_{2})$ for some $0<R_{1}<R_{2}$ then $u=\sum_{q\geq-1}u_{q}$ belongs to $B^{s}_{p,r}$ and there exists a universal constant $C$ such that: $$\|u\|_{B^{s}_{p,r}}\leq C^{1+|s|}\big(\sum_{q\geq-1}(2^{qs}\|u_{q}\|_{L^{p}})^{r}\big)^{\frac{1}{r}}.$$ \label{resteimp1} \end{proposition} Let us now recall a few product laws in Besov spaces coming directly from the paradifferential calculus of J.-M. Bony (see \cite{BJM}), rewritten in a generalized form in \cite{AP} by H. Abidi and M. Paicu (in that article the results are stated for homogeneous spaces, but they can easily be generalized to the nonhomogeneous Besov spaces). \begin{proposition} \label{produit1} We have the following product laws: \begin{itemize} \item For all $s\in\mathbb{R}$, $(p,r)\in[1,+\infty]^{2}$ we have: \begin{equation} \|uv\|_{B^{s}_{p,r}}\leq C(\|u\|_{L^{\infty}}\|v\|_{B^{s}_{p,r}}+\|v\|_{L^{\infty}}\|u\|_{B^{s}_{p,r}})\,. 
\label{2.2} \end{equation} \item Let $(p,p_{1},p_{2},r,\lambda_{1},\lambda_{2})\in[1,+\infty]^{6}$ be such that: $\frac{1}{p}\leq\frac{1}{p_{1}}+\frac{1}{p_{2}}$, $p_{1}\leq\lambda_{2}$, $p_{2}\leq\lambda_{1}$, $\frac{1}{p}\leq\frac{1}{p_{1}}+\frac{1}{\lambda_{1}}$ and $\frac{1}{p}\leq\frac{1}{p_{2}}+\frac{1}{\lambda_{2}}$. We then have the following inequalities:\\ if $s_{1}+s_{2}+N\inf(0,1-\frac{1}{p_{1}}-\frac{1}{p_{2}})>0$, $s_{1}+\frac{N}{\lambda_{2}}<\frac{N}{p_{1}}$ and $s_{2}+\frac{N}{\lambda_{1}}<\frac{N}{p_{2}}$ then: \begin{equation} \|uv\|_{B^{s_{1}+s_{2}-N(\frac{1}{p_{1}}+\frac{1}{p_{2}}-\frac{1}{p})}_{p,r}}\lesssim\|u\|_{B^{s_{1}}_{p_{1},r}} \|v\|_{B^{s_{2}}_{p_{2},\infty}}, \label{2.3} \end{equation} when $s_{1}+\frac{N}{\lambda_{2}}=\frac{N}{p_{1}}$ (resp. $s_{2}+\frac{N}{\lambda_{1}}=\frac{N}{p_{2}}$) we replace $\|u\|_{B^{s_{1}}_{p_{1},r}}\|v\|_{B^{s_{2}}_{p_{2},\infty}}$ (resp. $\|v\|_{B^{s_{2}}_{p_{2},\infty}}$) by $\|u\|_{B^{s_{1}}_{p_{1},1}}\|v\|_{B^{s_{2}}_{p_{2},r}}$ (resp. $\|v\|_{B^{s_{2}}_{p_{2},\infty}\cap L^{\infty}}$); if $s_{1}+\frac{N}{\lambda_{2}}=\frac{N}{p_{1}}$ and $s_{2}+\frac{N}{\lambda_{1}}=\frac{N}{p_{2}}$ we take $r=1$. \\ If $s_{1}+s_{2}=0$, $s_{1}\in(\frac{N}{\lambda_{1}}-\frac{N}{p_{2}},\frac{N}{p_{1}}-\frac{N}{\lambda_{2}}]$ and $\frac{1}{p_{1}}+\frac{1}{p_{2}}\leq 1$ then: \begin{equation} \|uv\|_{B^{-N(\frac{1}{p_{1}}+\frac{1}{p_{2}}-\frac{1}{p})}_{p,\infty}}\lesssim\|u\|_{B^{s_{1}}_{p_{1},1}} \|v\|_{B^{s_{2}}_{p_{2},\infty}}. \label{2.4} \end{equation} If $|s|<\frac{N}{p}$ when $p\geq2$, and $-\frac{N}{p^{'}}<s<\frac{N}{p}$ otherwise, we have: \begin{equation} \|uv\|_{B^{s}_{p,r}}\leq C\|u\|_{B^{s}_{p,r}}\|v\|_{B^{\frac{N}{p}}_{p,\infty}\cap L^{\infty}}. 
\label{2.5} \end{equation} \end{itemize} \end{proposition} \begin{remarka} In the sequel, $p$ will be either $p_{1}$ or $p_{2}$, and in this case $\frac{1}{\lambda}=\frac{1}{p_{1}}-\frac{1}{p_{2}}$ if $p_{1}\leq p_{2}$ (resp. $\frac{1}{\lambda}=\frac{1}{p_{2}}-\frac{1}{p_{1}}$ if $p_{2}\leq p_{1}$). \end{remarka} \begin{corollaire} \label{produit2} Let $r\in [1,+\infty]$, $1\leq p\leq p_{1}\leq +\infty$ and $s$ be such that: \begin{itemize} \item $s\in(-\frac{N}{p_{1}},\frac{N}{p_{1}})$ if $\frac{1}{p}+\frac{1}{p_{1}}\leq 1$, \item $s\in(-\frac{N}{p_{1}}+N(\frac{1}{p}+\frac{1}{p_{1}}-1),\frac{N}{p_{1}})$ if $\frac{1}{p}+\frac{1}{p_{1}}> 1$; \end{itemize} then we have, if $u\in B^{s}_{p,r}$ and $v\in B^{\frac{N}{p_{1}}}_{p_{1},\infty}\cap L^{\infty}$: $$\|uv\|_{B^{s}_{p,r}}\leq C\|u\|_{B^{s}_{p,r}}\|v\|_{B^{\frac{N}{p_{1}}}_{p_{1},\infty}\cap L^{\infty}}.$$ \end{corollaire} The study of non-stationary PDEs requires spaces of the type $L^{\rho}(0,T;X)$ for appropriate Banach spaces $X$. In our case, we expect $X$ to be a Besov space, so that it is natural to localize the equation through the Littlewood-Paley decomposition. But, in doing so, we obtain bounds in spaces which are not of the type $L^{\rho}(0,T;X)$ (except if $r=\rho$). We are now going to define the spaces of Chemin-Lerner in which we will work, which are a refinement of the spaces $L_{T}^{\rho}(B^{s}_{p,r})$. \begin{definition} Let $\rho\in[1,+\infty]$, $T\in[0,+\infty]$ and $s_{1}\in\mathbb{R}$. We set: $$\|u\|_{\widetilde{L}^{\rho}_{T}(B^{s_{1}}_{p,r})}= \big(\sum_{l\in\mathbb{Z}}2^{lrs_{1}}\|\Delta_{l}u\|_{L^{\rho}_{T}(L^{p})}^{r}\big)^{\frac{1}{r}}\,.$$ We then define the space $\widetilde{L}^{\rho}_{T}(B^{s_{1}}_{p,r})$ as the set of tempered distributions $u$ over $(0,T)\times\mathbb{R}^{N}$ such that $\|u\|_{\widetilde{L}^{\rho}_{T}(B^{s_{1}}_{p,r})}<+\infty$. 
\end{definition} We set $\widetilde{C}_{T}(B^{s_{1}}_{p,r})=\widetilde{L}^{\infty}_{T}(B^{s_{1}}_{p,r})\cap {\cal C}([0,T],B^{s_{1}}_{p,r})$. Let us emphasize that, according to the Minkowski inequality, we have: $$\|u\|_{\widetilde{L}^{\rho}_{T}(B^{s_{1}}_{p,r})}\leq\|u\|_{L^{\rho}_{T}(B^{s_{1}}_{p,r})}\;\;\mbox{if}\;\;r\geq\rho ,\;\;\;\|u\|_{\widetilde{L}^{\rho}_{T}(B^{s_{1}}_{p,r})}\geq\|u\|_{L^{\rho}_{T}(B^{s_{1}}_{p,r})}\;\;\mbox{if}\;\;r\leq\rho .$$ \begin{remarka} It is easy to generalize Proposition \ref{produit1} to the $\widetilde{L}^{\rho}_{T}(B^{s_{1}}_{p,r})$ spaces. The indices $s_{1}$, $p$, $r$ behave just as in the stationary case, whereas the time exponent $\rho$ behaves according to H\"older's inequality. \end{remarka} Here we recall an interpolation result which clarifies the link between the space $B^{s}_{p,1}$ and the space $B^{s}_{p,\infty}$, see \cite{DFourier}. \begin{proposition} \label{interpolationlog} There exists a constant $C$ such that for all $s\in\mathbb{R}$, $\varepsilon>0$ and $1\leq p<+\infty$, $$\|u\|_{\widetilde{L}_{T}^{\rho}(B^{s}_{p,1})}\leq C\frac{1+\varepsilon}{\varepsilon}\|u\|_{\widetilde{L}_{T}^{\rho}(B^{s}_{p,\infty})} \biggl(1+\log\frac{\|u\|_{\widetilde{L}_{T}^{\rho}(B^{s+\varepsilon}_{p,\infty})}} {\|u\|_{\widetilde{L}_{T}^{\rho}(B^{s}_{p,\infty})}}\biggl).$$ \label{5Yudov} \end{proposition} Now we give some results on the behavior of the Besov spaces under certain pseudodifferential operators (see \cite{DFourier}). \begin{definition} Let $m\in\mathbb{R}$. A smooth function $f:\mathbb{R}^{N}\rightarrow\mathbb{R}$ is said to be an ${\cal S}^{m}$ multiplier if, for every multi-index $\alpha$, there exists a constant $C_{\alpha}$ such that: $$\forall\xi\in\mathbb{R}^{N},\;\;|\partial^{\alpha}f(\xi)|\leq C_{\alpha}(1+|\xi|)^{m-|\alpha|}.$$ \label{smoothf} \end{definition} \begin{proposition} Let $m\in\mathbb{R}$ and $f$ be an ${\cal S}^{m}$ multiplier. 
Then for all $s\in\mathbb{R}$ and $1\leq p,r\leq+\infty$ the operator $f(D)$ is continuous from $B^{s}_{p,r}$ to $B^{s-m}_{p,r}$. \label{singuliere} \end{proposition} \section{Estimates for a parabolic system with variable coefficients} \label{section3} Let us first state estimates for the following constant-coefficient parabolic system: \begin{equation} \begin{cases} \begin{aligned} &\partial_{t}u-\mu\Delta u-(\lambda+\mu)\nabla{\rm div}u=f,\\ &u_{/t=0}=u_{0}. \end{aligned} \end{cases} \label{3} \end{equation} \begin{proposition} Assume that $\mu\geq0$ and that $\lambda+2\mu\geq0$. Then there exists a universal constant $\kappa$ such that for all $s\in\mathbb{R}$ and $T\in\mathbb{R}^{+}$, $$ \begin{aligned} &\|u\|_{\widetilde{L}^{\infty}_{T}(B^{s}_{p_{1},1})}\leq\|u_{0}\|_{B^{s}_{p_{1},1}}+\|f\|_{L^{1}_{T}(B^{s}_{p_{1},1})},\\ &\kappa\nu\|u\|_{L^{1}_{T}(B^{s+2}_{p_{1},1})}\leq\sum_{l\in\mathbb{Z}}2^{ls}(1-e^{-\kappa\nu2^{2l}T})(\|\Delta_{l}u_{0}\|_{L^{p_{1}}} +\|\Delta_{l}f\|_{L^{1}_{T}(L^{p_{1}})}), \end{aligned} $$ with $\nu=\min(\mu,\lambda+2\mu)$. \label{chaleur} \end{proposition} We now consider the following parabolic system, which is obtained by linearizing the momentum equation: \begin{equation} \begin{cases} \begin{aligned} &\partial_{t}u+v\cdot\nabla u+u\cdot\nabla w-b(\mu\Delta u+(\lambda+\mu)\nabla{\rm div}u)=f+g,\\ &u_{/t=0}=u_{0}. \label{5} \end{aligned} \end{cases} \end{equation} Above, $u$ is the unknown function. We assume that $u_{0}\in B^{s}_{p_{1},1}$, $f\in L^{1}(0,T;B^{s}_{p_{1},1})$ and $g\in L^{r}(0,T;B^{s^{'}}_{q_{1},1})$, that $v$ and $w$ are time-dependent vector fields with coefficients in $L^{1}(0,T;B^{\frac{N}{p}+1}_{p,1})$, and that $b$ is bounded from below by a positive constant $\underline{b}$ and belongs to $L^{\infty}(0,T;B^{\frac{N}{p}}_{p,1})$ with $p\in[1,+\infty]$. \begin{proposition} Let $g=0$, $\underline{\nu}=\underline{b}\min(\mu,\lambda+2\mu)$ and $\bar{\nu}=\mu+|\lambda+\mu|$. Assume that $s\in(-\frac{N}{p},\frac{N}{p}]$. 
Let $m\in\mathbb{Z}$ be such that $b_{m}=1+S_{m}a$ satisfies: \begin{equation} \inf_{(t,x)\in[0,T)\times\mathbb{R}^{N}}b_{m}(t,x)\geq\frac{\underline{b}}{2}. \label{6} \end{equation} There exist three constants $c$, $C$ and $\kappa$ (with $c$, $C$ depending only on $N$ and on $s$, and $\kappa$ universal) such that if in addition we have: \begin{equation} \|a-S_{m}a\|_{L^{\infty}(0,T;B^{\frac{N}{p}}_{p,1})}\leq c\frac{\underline{\nu}}{\bar{\nu}} \label{7} \end{equation} then setting: $$V(t)=\int^{t}_{0}\|v\|_{B^{\frac{N}{p}+1}_{p,1}}d\tau,\;\;W(t)=\int^{t}_{0}\|w\|_{B^{\frac{N}{p}+1}_{p,1}}d\tau,\;\;\mbox{and} \;\;Z_{m}(t)=2^{2m}\bar{\nu}^{2}\underline{\nu}^{-1}\int^{t}_{0}\|a\|^{2}_{B^{\frac{N}{p}}_{p,1}}d\tau,$$ we have for all $t\in[0,T]$, $$ \begin{aligned} &\|u\|_{\widetilde{L}^{\infty}_{T}(B^{s}_{p_{1},1})}+\kappa\underline{\nu} \|u\|_{\widetilde{L}^{1}_{T}(B^{s+2}_{p_{1},1})}\leq e^{C(V+W+Z_{m})(t)}(\|u_{0}\|_{B^{s}_{p_{1},1}}\\ &\hspace{7cm}+\int^{t}_{0} e^{-C(V+W+Z_{m})(\tau)}\|f(\tau)\|_{B^{s}_{p_{1},1}}d\tau). \end{aligned} $$ \label{linearise} \end{proposition} \begin{remarka} Let us stress the fact that if $a\in \widetilde{L}^{\infty}_{T}(B^{\frac{N}{p}}_{p,1})$ then assumptions (\ref{6}) and (\ref{7}) are satisfied for $m$ large enough. This will be used in the proof of Theorem \ref{theo1}. Indeed, according to the Bernstein inequality, we have: $$\|a-S_{m}a\|_{L^{\infty}((0,T)\times\mathbb{R}^{N})}\leq\sum_{q\geq m}\|\Delta_{q}a\|_{L^{\infty}((0,T)\times\mathbb{R}^{N})}\lesssim\sum_{q\geq m} 2^{q\frac{N}{p}}\|\Delta_{q}a\|_{L^{\infty}(L^{p})}.$$ Because $a\in\widetilde{L}^{\infty}_{T}(B^{\frac{N}{p}}_{p,1})$, the right-hand side is the remainder of a convergent series, hence tends to zero as $m$ goes to infinity. For a similar reason, (\ref{7}) is satisfied for $m$ large enough. 
\label{remark5} \end{remarka} {\bf Proof:} Let us first rewrite (\ref{5}) as follows: \begin{equation} \partial_{t}u+v\cdot\nabla u-b_{m}(\mu\Delta u+(\lambda+\mu)\nabla{\rm div}u)=f+E_{m}-u\cdot\nabla w, \label{8} \end{equation} with $E_{m}=(\mu\Delta u+(\lambda+\mu)\nabla{\rm div}u)(\mbox{Id}-S_{m})a$. Note that, because $-\frac{N}{p}<s\leq\frac{N}{p}$, the error term $E_{m}$ may be estimated by: \begin{equation} \|E_{m}\|_{B^{s}_{p_{1},1}}\lesssim\|a-S_{m}a\|_{B^{\frac{N}{p}}_{p,1}}\|D^{2}u\|_{B^{s}_{p_{1},1}}, \label{9} \end{equation} and we have: \begin{equation} \|u\cdot\nabla w\|_{B^{s}_{p_{1},1}}\lesssim\|\nabla w\|_{B^{\frac{N}{p}}_{p,1}}\|u\|_{B^{s}_{p_{1},1}}. \label{10} \end{equation} Now, applying $\Delta_{q}$ to equation (\ref{8}) yields: \begin{equation} \begin{aligned} \frac{d}{dt}u_{q}+v\cdot\nabla u_{q}-\mu{\rm div}(b_{m}\nabla u_{q})-(\lambda+\mu)\nabla(b_{m}{\rm div}u_{q})=f_{q}&+E_{m,q}-\Delta_{q}(u\cdot\nabla w)\\ &\hspace{1,5cm}+R_{q}+\widetilde{R}_{q}, \end{aligned} \end{equation} where we denote $u_{q}=\Delta_{q}u$ and: $$ \begin{aligned} &R_{q}=[v^{j},\Delta_{q}]\partial_{j}u,\\ &\widetilde{R}_{q}=\mu\big(\Delta_{q}(b_{m}\Delta u)-{\rm div}(b_{m}\nabla u_{q})\big)+(\lambda+\mu)\big(\Delta_{q}(b_{m}\nabla{\rm div}u)-\nabla(b_{m}{\rm div}u_{q})\big). \end{aligned} $$ Next, multiplying both sides by $|u_{q}|^{p_{1}-2}u_{q}$ and integrating by parts in the second, third and last terms of the left-hand side, we get: $$ \begin{aligned} &\frac{1}{p_{1}}\frac{d}{dt}\|u_{q}\|_{L^{p_{1}}}^{p_{1}}-\frac{1}{p_{1}}\int\big(|u_{q}|^{p_{1}}{\rm div}v+\mu {\rm div}(b_{m}\nabla u_{q})|u_{q}|^{p_{1}-2}u_{q} +\xi\nabla\big(b_{m}{\rm div}u_{q}\big)|u_{q}|^{p_{1}-2}u_{q}\big)dx\\ &\hspace{1,7cm}\leq\|u_{q}\|^{p_{1}-1}_{L^{p_{1}}}(\|f_{q}\|_{L^{p_{1}}}+\|\Delta_{q}E_{m}\|_{L^{p_{1}}}+\|\Delta_{q}(u\cdot\nabla w)\|_{L^{p_{1}}}+\|R_{q}\|_{L^{p_{1}}} +\|\widetilde{R}_{q}\|_{L^{p_{1}}}). 
\end{aligned} $$ Hence, denoting $\xi=\mu+\lambda$ and $\nu=\min(\mu,\lambda+2\mu)$, using (\ref{6}), Lemma A5 of \cite{DL} and Young's inequality, we get: $$ \begin{aligned} &\frac{1}{p_{1}}\frac{d}{dt}\|u_{q}\|_{L^{p_{1}}}^{p_{1}}+\frac{\nu \underline{b}(p_{1}-1)}{p_{1}^{2}}2^{2q}\|u_{q}\|_{L^{p_{1}}}^{p_{1}}\leq \|u_{q}\|^{p_{1}-1}_{L^{p_{1}}}\big(\|f_{q}\|_{L^{p_{1}}}+\|E_{m,q}\|_{L^{p_{1}}} +\|\Delta_{q}(u\cdot\nabla w)\|_{L^{p_{1}}}\\ &\hspace{6,8cm}+\frac{1}{p_{1}}\|u_{q}\|_{L^{p_{1}}}\|{\rm div}u\|_{L^{\infty}}+\|R_{q}\|_{L^{p_{1}}}+\|\widetilde{R}_{q}\|_{L^{p_{1}}}\big), \end{aligned} $$ which leads, after integration in time, to: \begin{equation} \begin{aligned} &\|u_{q}\|_{L^{p_{1}}}+\frac{\nu \underline{b}(p_{1}-1)}{p_{1}}2^{2q}\int^{t}_{0}\|u_{q}\|_{L^{p_{1}}}d\tau\leq \|\Delta_{q}u_{0}\|_{L^{p_{1}}}+\int^{t}_{0}\big( \|f_{q}\|_{L^{p_{1}}}+\|E_{m,q}\|_{L^{p_{1}}}\\ &\hspace{1,9cm}+\|\Delta_{q}(u\cdot\nabla w)\|_{L^{p_{1}}}+\frac{1}{p_{1}}\|u_{q}\|_{L^{p_{1}}}\|{\rm div}u\|_{L^{\infty}}+\|R_{q}\|_{L^{p_{1}}}+ \|\widetilde{R}_{q}\|_{L^{p_{1}}}\big)d\tau, \end{aligned} \label{11} \end{equation} where $\underline{\nu}=\underline{b}\nu$. For the commutators $R_{q}$ and $\widetilde{R}_{q}$, we have the following estimates (see Lemmas \ref{alemme2} and \ref{alemme3} in the appendix): \begin{equation} \|R_{q}\|_{L^{p_{1}}}\lesssim c_{q}2^{-qs}\|v\|_{B^{\frac{N}{p}+1}_{p,1}}\|u\|_{B^{s}_{p_{1},1}},\label{12} \end{equation} \begin{equation} \|\widetilde{R}_{q}\|_{L^{p_{1}}}\lesssim c_{q}\bar{\nu}2^{-qs}\|S_{m}a\|_{B^{\frac{N}{p}+1}_{p,1}}\|Du\|_{B^{s}_{p_{1},1}},\label{13} \end{equation} where $(c_{q})_{q\in\mathbb{Z}}$ is a positive sequence such that $\sum_{q\in\mathbb{Z}}c_{q}=1$, and $\bar{\nu}=\mu+|\lambda+\mu|$. 
Note that, owing to the Bernstein inequality, we have: $$\|S_{m}a\|_{B^{\frac{N}{p}+1}_{p,1}}\lesssim2^{m}\|a\|_{B^{\frac{N}{p}}_{p,1}}.$$ Hence, plugging these latter estimates together with (\ref{9}) and (\ref{10}) into (\ref{11}), then multiplying by $2^{qs}$ and summing over $q\in\mathbb{Z}$, we discover that, for all $t\in[0,T]$: $$ \begin{aligned} &\|u\|_{L^{\infty}_{t}(B^{s}_{p_{1},1})}+\frac{\nu \underline{b}(p_{1}-1)}{p_{1}}\|u\|_{L^{1}_{t}(B^{s+2}_{p_{1},1})}\leq \|u_{0}\|_{B^{s}_{p_{1},1}}+ \|f\|_{L^{1}_{t}(B^{s}_{p_{1},1})}+C\int^{t}_{0}(\|v\|_{B^{\frac{N}{p}+1}_{p,1}}\\ &\hspace{1cm}+\|w\|_{B^{\frac{N}{p}+1}_{p,1}})\|u\|_{B^{s}_{p_{1},1}}d\tau+C\bar{\nu}\int^{t}_{0}(\|a-S_{m}a\|_{B^{\frac{N}{p}}_{p,1}} \|u\|_{B^{s+2}_{p_{1},1}} +2^{m}\|a\|_{B^{\frac{N}{p}}_{p,1}}\|u\|_{B^{s+1}_{p_{1},1}})d\tau, \end{aligned} $$ for a constant $C$ depending only on $N$ and $s$. Let $X(t)=\|u\|_{L^{\infty}_{t}(B^{s}_{p_{1},1})}+\nu \underline{b}\|u\|_{L^{1}_{t}(B^{s+2}_{p_{1},1})}$. Assuming that $m$ has been chosen large enough to satisfy: $$C\bar{\nu}\|a-S_{m}a\|_{L^{\infty}_{T}(B^{\frac{N}{p}}_{p,1})}\leq\underline{\nu},$$ and using that, by interpolation, we have: $$C\bar{\nu}2^{m}\|a\|_{B^{\frac{N}{p}}_{p,1}}\|u\|_{B^{s+1}_{p_{1},1}}\leq\kappa\underline{\nu}\|u\|_{B^{s+2}_{p_{1},1}}+\frac{C^{2}\bar{\nu}^{2}2^{2m}} {4\kappa\underline{\nu}} \|a\|^{2}_{B^{\frac{N}{p}}_{p,1}}\|u\|_{B^{s}_{p_{1},1}},$$ we end up with: $$X(t)\leq\|u_{0}\|_{B^{s}_{p_{1},1}}+ \|f\|_{L^{1}_{t}(B^{s}_{p_{1},1})}+C\int^{t}_{0}(\|v\|_{B^{\frac{N}{p}+1}_{p,1}}+\|w\|_{B^{\frac{N}{p}+1}_{p,1}}+\frac{\bar{\nu}^{2}} {\underline{\nu}}2^{2m} \|a\|^{2}_{B^{\frac{N}{p}}_{p,1}})X\,d\tau.$$ Gr\"onwall's lemma then leads to the desired inequality. {\hfill $\Box$} \begin{remarka} The proof of the continuation criterion (Theorem \ref{theo3}) relies on a better estimate which is available when $u=v=w$ and $s>0$. 
In fact, by arguing as in the proof of the previous proposition and by making use of inequality (\ref{54}) instead of (\ref{52}), one can prove that under conditions (\ref{6}) and (\ref{7}) there exist constants $C$ and $\kappa$ such that: $$ \begin{aligned} &\forall t\in[0,T],\;\|u\|_{L^{\infty}_{t}(B^{s}_{p_{1},1})}+\kappa\underline{\nu}\|u\|_{L^{1}_{t}(B^{s+2}_{p_{1},1})} \leq e^{C(U+Z_{m})(t)} \big(\|u_{0}\|_{B^{s}_{p_{1},1}}+\\ &\hspace{3cm}\int^{t}_{0}e^{-C(U+Z_{m})(\tau)}\|f(\tau)\|_{B^{s}_{p_{1},1}}d\tau\big)\;\;\;\mbox{with}\;\;\;U(t)=\int^{t}_{0}\|\nabla u\|_{L^{\infty}}d\tau. \end{aligned} $$ \label{remark6} \end{remarka} In the following corollary, we generalize proposition \ref{linearise} to the case where $g\ne 0$ and $g\in \widetilde{L}^{r}(B^{s^{'}}_{q_{1},1})$. Moreover, here $u_{0}=u_{1}+u_{2}$ with $u_{1}\in B^{s}_{p_{1},1}$ and $u_{2}\in B^{s^{'}}_{p_{2},1}$. \begin{corollaire} Let $\underline{\nu}=\underline{b}\min(\mu,\lambda+2\mu)$ and $\bar{\nu}=\mu+|\lambda+\mu|$. Assume that $s,s^{'}\in(-\frac{N}{p},\frac{N}{p}]$. Let $m\in\mathbb{Z}$ be such that $b_{m}=1+S_{m}a$ satisfies: \begin{equation} \inf_{(t,x)\in[0,T)\times\mathbb{R}^{N}}b_{m}(t,x)\geq\frac{\underline{b}}{2}.
\label{6} \end{equation} There exist three constants $c$, $C$ and $\kappa$ (with $c$, $C$ depending only on $N$ and on $s$, and $\kappa$ universal) such that if in addition we have: \begin{equation} \|a-S_{m}a\|_{L^{\infty}(0,T;B^{\frac{N}{p}}_{p,1})}\leq c\frac{\underline{\nu}}{\bar{\nu}}, \label{7} \end{equation} then, setting: $$V(t)=\int^{t}_{0}\|v\|_{B^{\frac{N}{p}+1}_{p,1}}d\tau,\;\;\;W(t)=\int^{t}_{0}\|w\|_{B^{\frac{N}{p}+1}_{p,1}}d\tau,\;\;\;\mbox{and} \;\;\;Z_{m}(t)=2^{2m}\bar{\nu}^{2}\underline{\nu}^{-1}\int^{t}_{0}\|a\|^{2}_{B^{\frac{N}{p}}_{p,1}}d\tau,$$ we have for all $t\in[0,T]$, $$ \begin{aligned} &\|u\|_{\widetilde{L}^{\infty}_{T}(B^{s}_{p_{1},1}+B^{s^{'}}_{p_{2},1})}+\kappa\underline{\nu} \|u\|_{\widetilde{L}^{1}_{T}(B^{s+2}_{p_{1},1}+B^{s^{'}+2}_{p_{2},1})}\leq e^{C(V+W+Z_{m})(t)}\big(\|u_{1}\|_{B^{s}_{p_{1},1}}+\\ &\hspace{4cm}\|u_{2}\|_{B^{s^{'}}_{p_{2},1}}+\int^{t}_{0} e^{-C(V+W+Z_{m})(\tau)}(\|f(\tau)\|_{B^{s}_{p_{1},1}}+\|g(\tau)\|_{B^{s^{'}}_{q_{1},1}})d\tau\big). \end{aligned} $$ \label{coroimportant} \end{corollaire} {\bf Proof:} We split the solution $u$ into two parts $u_{1}$ and $u_{2}$ which verify the following equations: $$ \begin{cases} \begin{aligned} &\partial_{t}u_{1}+v\cdot\nabla u_{1}+u_{1}\cdot\nabla w-b(\mu\Delta u_{1}+(\lambda+\mu)\nabla{\rm div}u_{1})=f,\\ &u_{1/t=0}=u_{1}^{0}, \label{5} \end{aligned} \end{cases} $$ and: $$ \begin{cases} \begin{aligned} &\partial_{t}u_{2}+v\cdot\nabla u_{2}+u_{2}\cdot\nabla w-b(\mu\Delta u_{2}+(\lambda+\mu)\nabla{\rm div}u_{2})=g,\\ &u_{2/t=0}=u_{2}^{0}. \end{aligned} \end{cases} $$ We then have $u=u_{1}+u_{2}$ and we conclude by applying proposition \ref{linearise}. {\hfill $\Box$}\\ Proposition \ref{linearise} fails in the limit case $s=-\frac{N}{p}$. The reason is that proposition \ref{produit1} cannot be applied any longer. One can however state the following result, which will be the key to the proof of uniqueness in dimension two.
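For the reader's convenience, the conclusion of the proof may be spelled out as follows: applying proposition \ref{linearise} to each of the two systems above gives, for $i=1,2$ (with the notation $f_{1}=f$, $f_{2}=g$ and, for the purpose of this display only, $(s_{1},s_{2})=(s,s^{'})$ and the corresponding integrability exponents),
$$\|u_{i}\|_{\widetilde{L}^{\infty}_{T}(B^{s_{i}}_{p_{i},1})}+\kappa\underline{\nu}\|u_{i}\|_{\widetilde{L}^{1}_{T}(B^{s_{i}+2}_{p_{i},1})}\leq e^{C(V+W+Z_{m})(t)}\big(\|u_{i}^{0}\|_{B^{s_{i}}_{p_{i},1}}+\int^{t}_{0}e^{-C(V+W+Z_{m})(\tau)}\|f_{i}(\tau)\|_{B^{s_{i}}_{p_{i},1}}d\tau\big),$$
and the estimate of the corollary follows by adding these two inequalities and using the definition of the norm in a sum space, namely $\|u\|_{X+Y}\leq\|u_{1}\|_{X}+\|u_{2}\|_{Y}$ whenever $u=u_{1}+u_{2}$.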
\begin{proposition} \label{linearise1} Under condition (\ref{6}), there exist three constants $c$, $C$ and $\kappa$ (with $c$, $C$ depending only on $N$, and $\kappa$ universal) such that if: \begin{equation} \|a-S_{m}a\|_{\widetilde{L}^{\infty}_{t}(B^{\frac{N}{p}}_{p,1})}\leq c\frac{\underline{\nu}}{\bar{\nu}}, \label{14} \end{equation} then we have: $$\|u\|_{L^{\infty}_{t}(B^{-\frac{N}{p_{1}}}_{p_{1},\infty})}+\kappa\underline{\nu}\|u\|_{\widetilde{L}^{1}_{t}(B^{2-\frac{N}{p_{1}}} _{p_{1},\infty})} \leq 2e^{C(V+W)(t)}(\|u_{0}\|_{B^{-\frac{N}{p_{1}}}_{p_{1},\infty}}+\|f\|_{\widetilde{L}^{1}_{t}(B^{\frac{N}{p_{1}}}_{p_{1},\infty})}),$$ whenever $t\in[0,T]$ satisfies: \begin{equation} \bar{\nu}^{2}t\|a\|^{2}_{\widetilde{L}^{\infty}_{t}(B^{\frac{N}{p}}_{p,1})}\leq c2^{-2m}\underline{\nu}. \label{15} \end{equation} \end{proposition} {\bf Proof:} We just point out the changes that have to be made compared to the proof of proposition \ref{linearise}. The first one is that, instead of (\ref{9}) and (\ref{10}), we have, in accordance with proposition \ref{produit1}: \begin{equation} \|E_{m}\|_{\widetilde{L}^{1}_{t}(B^{-\frac{N}{p_{1}}}_{p_{1},\infty})}\lesssim\|a-S_{m}a\| _{\widetilde{L}^{\infty}_{t}(B^{\frac{N}{p}}_{p,1})}\|D^{2}u\|_{\widetilde{L}^{1}_{t}(B^{-\frac{N}{p_{1}}}_{p_{1},\infty})}, \label{16} \end{equation} \begin{equation} \|u\cdot\nabla w\|_{B^{-\frac{N}{p}}_{p,\infty}}\lesssim\|u\|_{B^{-\frac{N}{p_{1}}}_{p_{1},\infty}}\|\nabla w\|_{B^{\frac{N}{p}}_{p,1}}. \label{17} \end{equation} The second change concerns the estimates of the commutators $R_{q}$ and $\widetilde{R}_{q}$.
According to inequality (\ref{53}) and remark \ref{remarque7}, we now have for all $q\in\mathbb{Z}$: \begin{equation} \|R_{q}\|_{L^{p_{1}}}\lesssim 2^{q\frac{N}{p_{1}}}\|v\|_{B^{\frac{N}{p}+1}_{p,1}}\|u\|_{B^{-\frac{N}{p_{1}}}_{p_{1},\infty}}, \label{18} \end{equation} \begin{equation} \|\widetilde{R}_{q}\|_{L^{p_{1}}}\lesssim\bar{\nu}2^{q\frac{N}{p_{1}}}\|S_{m}a\|_{\widetilde{L}^{\infty}_ {t}(B^{\frac{N}{p}+1}_{p,1})}\|Du\|_{\widetilde{L}^{1}_{t}(B^{-\frac{N}{p_{1}}}_{p_{1},\infty})}. \label{19} \end{equation} Plugging all these estimates in (\ref{11}), then taking the supremum over $q\in\mathbb{Z}$, we get: $$ \begin{aligned} &\|u\|_{L^{\infty}_{t}(B^{-\frac{N}{p_{1}}}_{p_{1},\infty})}+2\underline{\nu}\|u\|_{\widetilde{L}^{1}_{t}(B^{2-\frac{N}{p_{1}}}_{p_{1}, \infty})}\leq \|u_{0}\|_{B^{-\frac{N}{p_{1}}}_{p_{1},\infty}}+C\int^{t}_{0}(\|v\|_{B^{\frac{N}{p}+1}_{p,1}}+\|w\|_{B^{\frac{N}{p}+1}_{p,1}})\|u\|_{B^{-\frac{N}{p_{1}}}_{p_{1},\infty}}d\tau\\ &+C\bar{\nu}\big(\|a-S_{m}a\|_{\widetilde{L}^{\infty}_{t}(B^{\frac{N}{p}}_{p,1})}\|u\|_{\widetilde{L}^{1}_{t}(B^{2-\frac{N}{p_{1}}}_{p_{1} ,\infty})} +2^{m}\|a\|_{L^{\infty}_{t}(B^{\frac{N}{p}}_{p,1})}\|u\|_{\widetilde{L}^{1}_{t}(B^{1-\frac{N}{p_{1}}}_{p_{1},\infty})}\big)+ \|f\|_{\widetilde{L}^{1}_{t}(B^{-\frac{N}{p_{1}}}_{p_{1},\infty})}. \end{aligned} $$ Using that: $$\|u\|_{\widetilde{L}^{1}_{t}(B^{1-\frac{N}{p_{1}}}_{p_{1},\infty})}\leq\sqrt{t}\,\|u\|^{\frac{1}{2}}_{\widetilde{L}^{1}_{t}(B^{2- \frac{N}{p_{1}}}_{p_{1},\infty})} \|u\|^{\frac{1}{2}}_{L^{\infty}_{t}(B^{-\frac{N}{p_{1}}}_{p_{1},\infty})},$$ and taking advantage of assumptions (\ref{14}) and (\ref{15}), it is now easy to complete the proof. {\hfill $\Box$} \section{The mass conservation equation} \label{section4} Let us first recall standard estimates in Besov spaces for the following linear transport equation: $$ \begin{cases} \begin{aligned} &\partial_{t}a+u\cdot\nabla a=g,\\ &a_{/t=0}=a_{0}.
\end{aligned} \end{cases} \leqno{({\cal H})} $$ \begin{proposition} Let $1\leq p_{1}\leq p\leq+\infty$, $r\in[1,+\infty]$ and $s\in\mathbb{R}$ be such that: $$-N\min(\frac{1}{p_{1}},\frac{1}{p^{'}})<s<1+\frac{N}{p_{1}}.$$ There exists a constant $C$ depending only on $N$, $p$, $p_{1}$, $r$ and $s$ such that for all solutions $a\in \widetilde{L}^{\infty}([0,T],B^{s}_{p,r})$ of $({\cal H})$ with initial data $a_{0}$ in $B^{s}_{p,r}$ and $g\in L^{1}([0,T], B^{s}_{p,r})$, we have for a.e. $t\in[0,T]$: \begin{equation} \|a\|_{\widetilde{L}^{\infty}_{t}(B^{s}_{p,r})}\leq e^{CU(t)}\big(\|a_{0}\|_{B^{s}_{p,r}}+\int^{t}_{0}e^{-CU(\tau)} \|g(\tau)\|_{B^{s}_{p,r}}d\tau\big), \label{20} \end{equation} with: $U(t)=\int^{t}_{0}\|\nabla u(\tau)\|_{B^{\frac{N}{p_{1}}}_{p_{1},\infty}\cap L^{\infty}}d\tau$. \label{transport1} \end{proposition} For the proof of proposition \ref{transport1}, see \cite{BCD}. We now focus on the mass equation associated to (\ref{0.6}): \begin{equation} \begin{cases} \begin{aligned} &\partial_{t}a+v\cdot\nabla a=(1+a){\rm div}v,\\ &a_{/t=0}=a_{0}. \end{aligned} \end{cases} \label{525} \end{equation} Here we generalize a proof of R. Danchin in \cite{DW}. \begin{proposition} Let $r\in[1,+\infty]$, $1\leq p_{1}\leq p\leq+\infty$ and $s\in(-\min(\frac{N}{p_{1}},\frac{N}{p^{'}}),\frac{N}{p}]$. Assume that $a_{0}\in B^{s}_{p,r}\cap L^{\infty}$, $v\in L^{1}(0,T;B^{\frac{N}{p_{1}}+1}_{p_{1},1})$ and that $a\in\widetilde{L}^{\infty}_{T}(B^{s}_{p,r})\cap L^{\infty}_{T}$ satisfies (\ref{525}). Let $V(t)=\int^{t}_{0}\|\nabla v(\tau)\|_{B^{\frac{N}{p_{1}}}_{p_{1},1}}d\tau$.
There exists a constant $C$ depending only on $N$ such that for all $t\in[0,T]$ and $m\in\mathbb{Z}$, we have: \begin{equation} \|a\|_{\widetilde{L}^{\infty}_{t}(B^{s}_{p,r}\cap L^{\infty})}\leq e^{2CV(t)}\|a_{0}\|_{B^{s}_{p,r}\cap L^{\infty}}+e^{2CV(t)}-1, \label{22} \end{equation} \begin{equation} \|a-S_{m}a\|_{B^{s}_{p,r}}\leq\|a_{0}-S_{m}a_{0}\|_{B^{s}_{p,r}}+ \frac{1}{2}(1+\|a_{0}\|_{B^{s}_{p,r}\cap L^{\infty}})(e^{2CV(t)}-1)+C\|a\|_{L^{\infty}}V(t), \label{23} \end{equation} \begin{equation} \begin{aligned} &\big(\sum_{l\leq m}2^{lrs}\|\Delta_{l}(a-a_{0})\|^{r}_{L^{\infty}_{t}(L^{p})}\big)^{\frac{1}{r}}\leq(1+\|a_{0}\|_{B^{s}_{p,r}})(e^{CV(t)}-1) \\ &\hspace{8cm}+C2^{m}\|a_{0}\|_{B^{s}_{p,r}}\int^{t}_{0}\|v\|_{B^{\frac{N}{p_{1}}}_{p_{1},1}}d\tau. \label{24} \end{aligned} \end{equation} \label{transport2} \end{proposition} {\bf Proof:}\; Applying $\Delta_{l}$ to (\ref{525}) yields: $$\partial_{t}\Delta_{l}a+v\cdot\nabla\Delta_{l}a=R_{l}+\Delta_{l}((1+a){\rm div}v)\;\;\;\mbox{with}\;\;R_{l}=[v\cdot\nabla,\Delta_{l}]a.$$ Multiplying by $\Delta_{l}a|\Delta_{l}a|^{p-2}$ and performing a time integration, we easily get: $$\|\Delta_{l}a(t)\|_{L^{p}}\lesssim\|\Delta_{l}a_{0}\|_{L^{p}}+\int^{t}_{0}\big(\|R_{l}\|_{L^{p}}+\|{\rm div}v\|_{L^{\infty}}\|\Delta_{l}a\|_{L^{p}} +\|\Delta_{l}((1+a){\rm div}v)\|_{L^{p}}\big)d\tau.$$ According to proposition \ref{produit1} and interpolation, there exist a constant $C$ and a positive sequence $(c_{l})_{l\in\mathbb{Z}}$ in $l^{r}$ with norm $1$ such that: $$\|\Delta_{l}((1+a){\rm div}v)\|_{L^{p}}\leq Cc_{l}2^{-ls}(1+\|a\|_{B^{s}_{p,r}\cap L^{\infty}})\|{\rm div}v\|_{B^{\frac{N}{p_{1}}}_{p_{1},1}}.$$ Next the term $\|R_{l}\|_{L^{p}}$ may be bounded according to lemma \ref{alemme2} in the appendix.
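For completeness, the $L^{p}$ estimate obtained above hides a standard integration by parts, which can be detailed as follows: multiplying the localized equation by $\Delta_{l}a|\Delta_{l}a|^{p-2}$, integrating over $\mathbb{R}^{N}$ and using that
$$\int_{\mathbb{R}^{N}} v\cdot\nabla\Delta_{l}a\,|\Delta_{l}a|^{p-2}\Delta_{l}a\,dx=\frac{1}{p}\int_{\mathbb{R}^{N}} v\cdot\nabla|\Delta_{l}a|^{p}\,dx=-\frac{1}{p}\int_{\mathbb{R}^{N}}{\rm div}v\,|\Delta_{l}a|^{p}\,dx,$$
one gets
$$\frac{1}{p}\frac{d}{dt}\|\Delta_{l}a\|_{L^{p}}^{p}\leq\frac{1}{p}\|{\rm div}v\|_{L^{\infty}}\|\Delta_{l}a\|_{L^{p}}^{p}+\|\Delta_{l}a\|_{L^{p}}^{p-1}\big(\|R_{l}\|_{L^{p}}+\|\Delta_{l}((1+a){\rm div}v)\|_{L^{p}}\big),$$
and the time-pointwise bound follows after division by $\|\Delta_{l}a\|_{L^{p}}^{p-1}$ and time integration.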
We end up with: \begin{equation} \forall t\in [0,T],\;\forall l\in\mathbb{Z},\;\;2^{ls}\|\Delta_{l}a(t)\|_{L^{p}}\leq 2^{ls}\|\Delta_{l}a_{0}\|_{L^{p}}+ C\int^{t}_{0}c_{l}(1+\|a\|_{B^{s}_{p,r}\cap L^{\infty}})V^{'}d\tau, \label{25} \end{equation} hence, summing up on $\mathbb{Z}$ in $l^{r}$, $$\forall t\in [0,T],\;\;\|a(t)\|_{B^{s}_{p,r}}\leq\|a_{0}\|_{B^{s}_{p,r}}+\int^{t}_{0}CV^{'}\|a(\tau)\|_{B^{s}_{p,r}}d\tau +\int^{t}_{0}C(1+\|a\|_{L^{\infty}_{T}})V^{'}d\tau. $$ Next we have: $$\|a\|_{L^{\infty}_{t}}\leq\|a_{0}\|_{L^{\infty}}+\int^{t}_{0}(1+\|a(\tau)\|_{L^{\infty}})V^{'}(\tau)d\tau.$$ Summing the two previous inequalities, applying Gr\"onwall's lemma and using proposition \ref{resteimp1} then yields inequality (\ref{22}). Let us now prove inequality (\ref{23}). Starting from (\ref{25}) and summing up over $l\geq m$ in $l^{r}$, we get: $$ \begin{aligned} &(\sum_{l\geq m}2^{lsr}\|\Delta_{l}a\|^{r}_{L^{\infty}_{t}(L^{p})})^{\frac{1}{r}}\leq (\sum_{l\geq m}2^{lsr}\|\Delta_{l}a_{0}\|^{r}_{L^{p}})^{\frac{1}{r}}+ C\int^{t}_{0}V^{'}(e^{2CV}\|a_{0}\|_{B^{s}_{p,r}\cap L^{\infty}}+e^{2CV}-1)d\tau\\ &\hspace{10cm}+\int^{t}_{0}C(1+\|a\|_{L^{\infty}})V^{'}d\tau. \end{aligned} $$ Straightforward calculations then lead to (\ref{23}). In order to prove (\ref{24}), we use the fact that $\widetilde{a}=a-a_{0}$ satisfies: $$\partial_{t}\widetilde{a}+v\cdot\nabla\widetilde{a}=(1+\widetilde{a}){\rm div}v+a_{0}{\rm div}v-v\cdot\nabla a_{0},\;\;\widetilde{a}_{/t=0}=0.$$ Therefore, arguing as in the proof of (\ref{25}), we get for all $t\in[0,T]$ and $l\in\mathbb{Z}$, $$ \begin{aligned} &2^{l\frac{N}{p}}\|\Delta_{l}\widetilde{a}\|_{L^{p}}\leq \int^{t}_{0}2^{l\frac{N}{p}}\big(\|\Delta_{l}(a_{0}{\rm div}v)\|_{L^{p}}+\|\Delta_{l}(v\cdot\nabla a_{0})\|_{L^{p}}\big)d\tau\\ &\hspace{8cm}+C\int^{t}_{0}c_{l}(1+\|a\|_{B^{\frac{N}{p}}_{p,1}})V^{'}d\tau.
\end{aligned} $$ Since $B^{\frac{N}{p}}_{p,1}$ is an algebra and the product maps $B^{\frac{N}{p}}_{p,1}\times B^{\frac{N}{p}-1}_{p,1}$ into $B^{\frac{N}{p}-1}_{p,1}$, we discover that: $$ \begin{aligned} &2^{l\frac{N}{p}}\|\Delta_{l}\widetilde{a}\|_{L^{\infty}(L^{p})}\leq C\big(\int^{t}_{0}2^{l}c_{l}\|a_{0}\|_{B^{\frac{N}{p}}_{p,1}}\|v\|_{B^{\frac{N}{p}}_{p,1}}d\tau+ \int^{t}_{0}c_{l}(1+\|a_{0}\|_{B^{\frac{N}{p}}_{p,1}}+\|a\|_{B^{\frac{N}{p}}_{p,1}})V^{'}d\tau\big), \end{aligned} $$ hence, summing up on $l\leq m$, $$\begin{aligned} &\sum_{l\leq m}2^{l\frac{N}{p}}\|\Delta_{l}\widetilde{a}\|_{L^{\infty}(L^{p})}\leq C\big(\int^{t}_{0}2^{m}\|a_{0}\|_{B^{\frac{N}{p}}_{p,1}}\|v\|_{B^{\frac{N}{p}}_{p,1}}d\tau+ \int^{t}_{0}(1+\|a_{0}\|_{B^{\frac{N}{p}}_{p,1}}+\|a\|_{B^{\frac{N}{p}}_{p,1}})V^{'}d\tau\big). \end{aligned} $$ Plugging (\ref{22}) in the right-hand side yields (\ref{24}). \section{The proof of theorem \ref{theo1}} \label{section5} \subsection{Strategy of the proof} To improve the results of R. Danchin in \cite{DL}, \cite{DW}, it is crucial to kill the coupling between the velocity and the pressure which appears in those works. To this end, we integrate the pressure term into the study of the linearized momentum equation. To do so, we express the gradient of the pressure as a Laplacian term: for a constant state $\bar{\rho}>0$, we set: $${\rm div}v=P(\rho)-P(\bar{\rho}).$$ Let ${\cal E}$ be the fundamental solution of the Laplace operator. We will set in the sequel: $v=\nabla{\cal E}*\big(P(\rho)-P(\bar{\rho})\big)=\nabla\big({\cal E}*[P(\rho)-P(\bar{\rho})]\big)$ (here $*$ denotes the convolution operator). We next verify that: $$ \begin{aligned} \nabla{\rm div}v=\nabla\Delta \big({\cal E}*[P(\rho)-P(\bar{\rho})]\big)=\Delta\nabla\big({\cal E}*[P(\rho)-P(\bar{\rho})]\big)=\Delta v=\nabla P(\rho). \end{aligned} $$ In this way we can now rewrite the momentum equation of (\ref{0.6}).
We obtain the following equation, where we have set $\nu=2\mu+\lambda$: $$\partial_{t}u+u\cdot \nabla u-\frac{\mu}{\rho}\Delta\big(u-\frac{1}{\nu}v\big)-\frac{\lambda+\mu}{\rho}\nabla{\rm div}\big(u-\frac{1}{\nu}v\big)=f.$$ We now want to compute $\partial_{t}v$; using the mass equation, we get: $$\partial_{t}v=\nabla{\cal E}*\partial_{t}P(\rho)=-\nabla {\cal E}*\big(P^{'}(\rho){\rm div}(\rho u)\big).$$ We have finally: $$\Delta(\partial_{t}F)=-P^{'}(\rho){\rm div}(\rho u),$$ where $F={\cal E}*[P(\rho)-P(\bar{\rho})]$, so that $v=\nabla F$. \begin{notation} To simplify the notation, we will write in the sequel $$\nabla {\cal E}*\big(P^{'}(\rho){\rm div}(\rho u)\big)=\nabla(\Delta)^{-1}\big(P^{'}(\rho){\rm div}(\rho u)\big).$$ \end{notation} Finally, we can now rewrite the system (\ref{0.6}) as follows: \begin{equation} \begin{cases} \begin{aligned} &\partial_{t}a+(v_{1}+\frac{1}{\nu}v)\cdot\nabla a=(1+a){\rm div}(v_{1}+\frac{1}{\nu}v),\\ &\partial_{t}v_{1}-(1+a){\cal A}v_{1}=f-u\cdot\nabla u+\frac{1}{\nu}\nabla(\Delta)^{-1}\big(P^{'}(\rho){\rm div}(\rho u)\big),\\ &a_{/ t=0}=a_{0},\;(v_{1})_{/ t=0}=(v_{1})_{0}, \end{aligned} \end{cases} \label{0.7} \end{equation} where $v_{1}=u-\frac{1}{\nu}v$. In the sequel we will study this system by extracting some uniform bounds in Besov spaces on $(a,v_{1})$, as in the works \cite{AP}, \cite{H}. The advantage of the system (\ref{0.7}) is that we have \textit{killed} the coupling between $v_{1}$ and the pressure term. Indeed, in the works of R. Danchin \cite{DL}, \cite{DW}, the pressure was treated as a remainder term in the momentum equation, which implied a strong relationship between the density and the velocity. In particular it was impossible to distinguish the integrability indices of the Besov spaces.
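For the reader's convenience, the chain-rule computation behind the formula for $\partial_{t}v$ above is the following: since $\rho$ solves the mass equation $\partial_{t}\rho+{\rm div}(\rho u)=0$, we have
$$\partial_{t}P(\rho)=P^{'}(\rho)\partial_{t}\rho=-P^{'}(\rho){\rm div}(\rho u),\;\;\;\mbox{hence}\;\;\; \partial_{t}v=\nabla{\cal E}*\partial_{t}P(\rho)=-\nabla(\Delta)^{-1}\big(P^{'}(\rho){\rm div}(\rho u)\big),$$
which is exactly (up to the factor $\frac{1}{\nu}$) the source term appearing in the equation for $v_{1}$ in (\ref{0.7}).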
\subsection{Proof of the existence} \subsubsection*{Construction of approximate solutions} We use a standard scheme: \begin{enumerate} \item We smooth out the data and get a sequence of smooth solutions $(a^{n},u^{n})_{n\in\mathbb{N}}$ to (\ref{0.6}) on a bounded interval $[0,T^{n}]$ which may depend on $n$. We set $v_{1}^{n}=u^{n}-\frac{1}{\nu}v^{n}$ where ${\rm div}v^{n}=P(\rho^{n})-P(\bar{\rho})$. \item We exhibit a positive lower bound $T$ for $T^{n}$, and prove uniform estimates on $(a^{n},u^{n})$ in the space $$E_{T}=\widetilde{C}_{T}(B^{\frac{N}{p}}_{p,1})\times\big(\widetilde{C}_{T}(B^{\frac{N}{p_{1}}-1}_{p_{1},1}+B^{\frac{N}{p}+1}_{p,1})\cap\widetilde{L}^{1}_{T}( B^{\frac{N}{p_{1}}+1}_{p_{1},1}+B^{\frac{N}{p}+2}_{p,1})\big).$$ More precisely, to get these bounds we will need to study the behavior of $(a^{n},v_{1}^{n})$. \item We use compactness to prove that the sequence $(a^{n},u^{n})$ converges, up to extraction, to a solution of (\ref{0.7}). \end{enumerate} Throughout the proof, we denote $\underline{\nu}=\underline{b}\min(\mu,\lambda+2\mu)$ and $\bar{\nu}=\mu+|\mu+\lambda|$, and we assume (with no loss of generality) that $f$ belongs to $\widetilde{L}^{1}_{T}(B^{\frac{N}{p_{1}}}_{p_{1},1})$. \subsubsection*{First step} We smooth out the data as follows: $$a_{0}^{n}=S_{n}a_{0},\;\;u_{0}^{n}=S_{n}u_{0}\;\;\;\mbox{and}\;\;\;f^{n}=S_{n}f.$$ Note that we have: $$\forall l\in\mathbb{Z},\;\;\|\Delta_{l}a^{n}_{0}\|_{L^{p}}\leq\|\Delta_{l}a_{0}\|_{L^{p}}\;\;\;\mbox{and}\;\;\;\|a^{n}_{0}\| _{B^{\frac{N}{p}}_{p,\infty}}\leq \|a_{0}\|_{B^{\frac{N}{p}}_{p,\infty}},$$ and similar properties for $u_{0}^{n}$ and $f^{n}$, a fact which will be used repeatedly during the next steps. Now, according to \cite{DW}, one can solve (\ref{0.6}) with the smooth data $(a_{0}^{n},u_{0}^{n},f^{n})$.
We get a solution $(a^{n},u^{n})$ on a nontrivial time interval $[0,T_{n}]$ such that: \begin{equation} \begin{aligned} &a^{n}\in\widetilde{C}([0,T_{n}),B^{\frac{N}{2}}_{2,1})\;\;\mbox{and}\;\;u^{n}\in\widetilde{C}([0,T_{n}),B^{\frac{N}{2}-1}_{2,1})\cap \widetilde{L}^{1}_{T_{n}} (B^{\frac{N}{2}+1}_{2,1}). \end{aligned} \label{a26} \end{equation} \subsubsection*{Uniform bounds} Let $T_{n}$ be the lifespan of $(a^{n},u^{n})$, that is the supremum of all $T>0$ such that (\ref{0.6}) with initial data $(a_{0}^{n},u_{0}^{n})$ has a solution which satisfies (\ref{a26}). Let $T$ be in $(0,T_{n})$. We aim at getting uniform estimates in $E_{T}$ for $T$ small enough. For that, we need to introduce the solution $u^{n}_{L}$ to the linear system: $$\partial_{t}u_{L}^{n}-{\cal A}u_{L}^{n}=f^{n},\;\;u^{n}_{L}(0)=u^{n}_{0}-\frac{1}{\nu}\widetilde{v}^{n}_{0}.$$ Now, we set $\widetilde{u}^{n}=u^{n}-u^{n}_{L}$ and the vector field $\widetilde{v}^{n}_{1}=\widetilde{u}^{n}-\frac{1}{\nu}\widetilde{v}^{n}$ with ${\rm div}\widetilde{v}^{n}=P(\rho^{n})$. We can check that $\widetilde{v}^{n}_{1}$ satisfies the parabolic system: \begin{equation} \begin{cases} \begin{aligned} &\partial_{t}\widetilde{v}_{1}^{n}+(u_{L}^{n}+\frac{1}{\nu}\widetilde{v}^{n})\cdot\nabla \widetilde{v}_{1}^{n}+\widetilde{v}_{1}^{n}\cdot\nabla u^{n}-(1+a^{n}){\cal A}\widetilde{v}_{1}^{n}=a^{n} {\cal A}u_{L}^{n}-\frac{1}{\nu}(u_{L}^{n}\cdot\nabla \widetilde{v}^{n}\\ &\hspace{3cm}+\frac{1}{\nu}\widetilde{v}^{n}\cdot\nabla \widetilde{v}^{n})-u_{L}^{n}\cdot\nabla u_{L}^{n}+\frac{1}{\nu}\nabla(\Delta)^{-1}(P^{'}(\rho^{n}){\rm div}(\rho^{n}u^{n})),\\ &(\widetilde{v}_{1}^{n})_{/t=0}=0, \end{aligned} \end{cases} \label{systemessen} \end{equation} which has been studied in proposition \ref{linearise}.
Define $m\in\mathbb{Z}$ by: \begin{equation} m=\inf\{ k\in\mathbb{Z}\;/\;2\bar{\nu}\sum_{l\geq k}2^{l\frac{N}{p}}\|\Delta_{l}a_{0}\|_{L^{p}}\leq c\underline{\nu}\}, \label{def} \end{equation} where $c$ is a small enough positive constant (depending only on $N$) to be fixed hereafter. In the sequel we will need $a-S_{m}a$ to be small in order to apply proposition \ref{linearise}, so $m$ has to be chosen large enough (this will be made precise below). Let: $$\bar{b}=1+\sup_{x\in\mathbb{R}^{N}}a_{0}(x),\;A_{0}=1+2\|a_{0}\|_{B^{\frac{N}{p}}_{p,1}},\;U_{0}=\|u_{0}\|_{B^{\frac{N}{p_{1}}-1}_{p_{1},1}}+ \|a_{0}\|_{B^{\frac{N}{p}+1}_{p,1}}+ \|f\|_{L^{1}_{T}(B^{\frac{N}{p_{1}}-1}_{p_{1},1})},$$ and $\widetilde{U}_{0}=2CU_{0}+4C\bar{\nu}A_{0}$ (where $C^{'}$ is an embedding constant and $C$ stands for a large enough constant depending only on $N$ which will be determined when applying propositions \ref{produit1}, \ref{linearise} and \ref{transport1} in the following computations). We assume that the following inequalities are fulfilled for some $\eta>0$: $$ \begin{aligned} &({\cal H}_{1})&\|a^{n}-S_{m}a^{n}\|_{\widetilde{L}^{\infty}_{T}(B^{\frac{N}{p}}_{p,1})}\leq c\underline{\nu}\bar{\nu}^{-1},\\ &({\cal H}_{2})&C\bar{\nu}^{2}T\|a^{n}\|^{2}_{\widetilde{L}^{\infty}_{T}(B^{\frac{N}{p}}_{p,1})}\leq 2^{-2m}\underline{\nu},\\ &({\cal H}_{3})&\frac{1}{2}\underline{b}\leq 1+a^{n}(t,x)\leq 2\bar{b}\;\;\mbox{for all}\;\;(t,x)\in[0,T]\times\mathbb{R}^{N},\\ &({\cal H}_{4})&\|a^{n}\|_{\widetilde{L}^{\infty}_{T}(B^{\frac{N}{p}}_{p,1})}\leq A_{0}, \end{aligned} $$ $$ \begin{aligned} &({\cal H}_{5})&\|u^{n}_{L}\|_{L^{1}_{T}(B^{\frac{N}{p_{1}}+1}_{p_{1},1}+B^{\frac{N}{p}+3}_{p,1})}\leq \eta,\\ &({\cal H}_{6})&\|\widetilde{v}_{1}^{n}\|_{\widetilde{L}^{\infty}_{T}(B^{\frac{N}{p_{1}}-1}_{p_{1},1}+B^{\frac{N}{p}+1}_{p,1})}+\underline{\nu} \|\widetilde{v}_{1}^{n}\|_{L^{1}_{T}(B^{\frac{N}{p_{1}}+1}_{p_{1},1}+B^{\frac{N}{p}+2}_{p,1})}\leq \widetilde{U}_{0}\eta,\\ &({\cal
H}_{7})&\|\widetilde{v}^{n}\|_{\widetilde{L}^{\infty}_{T}(B^{\frac{N}{p}+1}_{p,1})}\leq C^{'}A_{0},\\ &({\cal H}_{8})&\|\nabla u^{n}\|_{\widetilde{L}^{1}_{T}(B^{\frac{N}{p_{1}}}_{p_{1},1})+\widetilde{L}^{\infty}_{T}(B^{\frac{N}{p}}_{p,1})}\leq (\underline{\nu}^{-1}\widetilde{U}_{0}+1)\eta. \end{aligned} $$ Remark that, since: $$1+S_{m}a^{n}=1+a^{n}+(S_{m}a^{n}-a^{n}),$$ assumptions $({\cal H}_{1})$ and $({\cal H}_{3})$ combined with the embedding $B^{\frac{N}{p}}_{p,1}\hookrightarrow L^{\infty}$ ensure that: \begin{equation} \inf_{(t,x)\in[0,T]\times\mathbb{R}^{N}}(1+S_{m}a^{n})(t,x)\geq\frac{1}{4}\underline{b}, \label{inemin} \end{equation} provided $c$ has been chosen small enough (note that $\frac{\underline{\nu}}{\bar{\nu}}\leq\bar{b}$).\\ We are going to prove that, under suitable assumptions on $T$ and $\eta$ (to be specified below), if conditions $({\cal H}_{1})$ to $({\cal H}_{8})$ are satisfied, then they are actually satisfied with strict inequalities. Since all those conditions depend continuously on the time variable and are strictly satisfied initially, a basic bootstrap argument ensures that $({\cal H}_{1})$ to $({\cal H}_{8})$ are indeed satisfied on $[0,T]$. First we shall assume that $\eta$ and $T$ satisfy: \begin{equation} C(1+\underline{\nu}^{-1}\widetilde{U}_{0})\eta+\frac{C^{'}}{\nu}A_{0} <\log 2, \label{1conduti} \end{equation} so that, denoting $\widetilde{V}_{1}^{n}(t)=\int^{t}_{0}\|\nabla \widetilde{v}_{1}^{n}\|_{B^{\frac{N}{p_{1}}}_{p_{1},1}+B^{\frac{N}{p}+1}_{p,1}}d\tau$, $\widetilde{V}^{n}(t)=\frac{1}{\nu}\int^{t}_{0}\|\nabla \widetilde{v}^{n}\|_{B^{\frac{N}{p}}_{p,1}}d\tau$ and $U^{n}_{L}(t)=\int^{t}_{0}\|\nabla u^{n}_{L}\|_{B^{\frac{N}{p_{1}}+1}_{p_{1},1}+B^{\frac{N}{p}+3}_{p,1}}d\tau$, we have, according to $({\cal H}_{5})$ and $({\cal H}_{6})$: \begin{equation} e^{C(U^{n}_{L}+\widetilde{V}_{1}^{n}+\widetilde{V}^{n})(T)}<2\;\;\mbox{and}\;\;e^{C(U^{n}_{L}+\widetilde{V}_{1}^{n}+\widetilde{V}^{n})(T)}-1 \leq1.
\label{1ineimpca} \end{equation} In order to bound $a^{n}$ in $\widetilde{L}^{\infty}_{T}(B^{\frac{N}{p}}_{p,1})$, we apply inequality (\ref{22}) and get: \begin{equation} \|a^{n}\|_{\widetilde{L}^{\infty}_{T}(B^{\frac{N}{p}}_{p,1})}<1+2\|a_{0}\|_{B^{\frac{N}{p}}_{p,1}}=A_{0}. \label{inetranspr} \end{equation} Hence $({\cal H}_{4})$ is satisfied with a strict inequality. That $({\cal H}_{7})$ holds with a strict inequality follows from proposition \ref{singuliere} and $({\cal H}_{4})$. Next, applying propositions \ref{chaleur} and \ref{singuliere} yields: \begin{equation} \|u^{n}_{L}\|_{\widetilde{L}^{\infty}_{T}(B^{\frac{N}{p_{1}}-1}_{p_{1},1}+B^{\frac{N}{p}+1}_{p,1})}\leq U_{0}, \label{34} \end{equation} \begin{equation} \begin{aligned} &\kappa\nu\|u^{n}_{L}\|_{L^{1}_{T}(B^{\frac{N}{p_{1}}+1}_{p_{1},1}+B^{\frac{N}{p}+3}_{p,1})}\leq\sum_{l\in\mathbb{Z}}2^{l(\frac{N}{p_{1}}-1)}(1-e^{-\kappa\nu2^{2l}T})(\|\Delta_{l}u_{0}\|_{L^{p_{1}}}+\\ &\hspace{3cm}\|\Delta_{l}f\|_{L^{1}(\mathbb{R}^{+},L^{p_{1}})})+\sum_{l\in\mathbb{Z}}2^{l(\frac{N}{p}+1)}(1-e^{-\kappa\nu2^{2l}T}) \|\Delta_{l}a_{0}\|_{L^{p}}. \end{aligned} \label{35} \end{equation} Hence taking $T$ such that: \begin{equation} \begin{aligned} &\sum_{l\in\mathbb{Z}}2^{l(\frac{N}{p_{1}}-1)}(1-e^{-\kappa\nu2^{2l}T})(\|\Delta_{l}u_{0}\|_{L^{p_{1}}} +\|\Delta_{l}f\|_{L^{1}(\mathbb{R}^{+},L^{p_{1}})})\\ &\hspace{2cm}+\sum_{l\in\mathbb{Z}}2^{l(\frac{N}{p}+1)}(1-e^{-\kappa\nu2^{2l}T}) \|\Delta_{l}a_{0}\|_{L^{p}}<\kappa\eta\nu, \end{aligned} \label{36} \end{equation} ensures that $({\cal H}_{5})$ is strictly verified.
Since $({\cal H}_{1})$, $({\cal H}_{2})$, $({\cal H}_{5})$, $({\cal H}_{6})$, $({\cal H}_{7})$ and (\ref{inemin}) are satisfied, proposition \ref{linearise} may be applied; we obtain: $$ \begin{aligned} &\|\widetilde{v}_{1}^{n}\|_{\widetilde{L}^{\infty}_{T}(B^{\frac{N}{p_{1}}-1}_{p_{1},1}+B^{\frac{N}{p}+1}_{p,1})}+\underline{\nu} \|\widetilde{v}_{1}^{n}\|_{L^{1}_{T}(B^{\frac{N}{p_{1}}+1}_{p_{1},1}+B^{\frac{N}{p}+2}_{p,1})}\\ &\hspace{1cm}\leq Ce^{C(2U^{n}_{L}+2\widetilde{V}^{n}+\widetilde{V}_{1}^{n})(T)}\int^{T}_{0}\big(\|a^{n}{\cal A}u^{n}_{L}\|_{B^{\frac{N}{p_{1}}-1}_{p_{1},1}+B^{\frac{N}{p}}_{p,1}} +\|u^{n}_{L}\cdot\nabla u^{n}_{L}\|_{B^{\frac{N}{p_{1}}-1}_{p_{1},1}+B^{\frac{N}{p}}_{p,1}}\\ &\hspace{1cm}+\|u_{L}^{n}\cdot\nabla \widetilde{v}^{n}\|_{B^{\frac{N}{p_{1}}-1}_{p_{1},1}+B^{\frac{N}{p}}_{p,1}}+\|\widetilde{v}^{n}\cdot\nabla \widetilde{v}^{n}\|_{B^{\frac{N}{p}}_{p,1}} +\|\nabla(\Delta)^{-1}(P^{'}(\rho^{n}){\rm div}(\rho^{n}u^{n}))\|_{B^{\frac{N}{p}}_{p,1}}\big) dt. \end{aligned} $$ As $\frac{N}{p}+\frac{N}{p_{1}}-1\geq 0$ and $2\frac{N}{p}-1>0$, taking advantage of propositions \ref{produit1}, \ref{interpolation} and \ref{singuliere}, we get: $$ \begin{aligned} &\|\nabla(\Delta)^{-1}(P^{'}(\rho^{n}){\rm div}(\rho^{n}u^{n}))\|_{\widetilde{L}^{1}_{T}(B^{\frac{N}{p}}_{p,1})} \leq C_{P}(1+\|a^{n}\|_{\widetilde{L}^{\infty}_{T}(B^{\frac{N}{p}}_{p,1})})(\sqrt{T} \|\widetilde{v}_{1}^{n}\|_{\widetilde{L}^{2}_{T}(B^{\frac{N}{p_{1}}}_{p_{1},1}+B^{\frac{N}{p}+1}_{p,1})}\\ &\hspace{6.8cm}+\sqrt{T}\|u_{L}^{n}\|_{\widetilde{L}^{2}_{T}(B^{\frac{N}{p_{1}}}_{p_{1},1}+B^{\frac{N}{p}+1}_{p,1})}+T \|a^{n}\|_{\widetilde{L}^{\infty}_{T}(B^{\frac{N}{p}}_{p,1})}),\\ &\|\widetilde{v}^{n}\cdot\nabla \widetilde{v}^{n}\|_{\widetilde{L}^{1}_{T}(B^{\frac{N}{p}}_{p,1})}\leq C_{1} T\|a^{n}\|^{2}_{\widetilde{L}_{T}^{\infty}(B^{\frac{N}{p}}_{p,1})}.
\end{aligned} $$ We proceed similarly for the other terms and we end up with: \begin{equation} \begin{aligned} &\|\widetilde{v}_{1}^{n}\|_{\widetilde{L}^{\infty}_{T}(B^{\frac{N}{p_{1}}-1}_{p_{1},1}+B^{\frac{N}{p}+1}_{p,1})}+\underline{\nu} \|\widetilde{v}_{1}^{n}\|_{L^{1}_{T}(B^{\frac{N}{p_{1}}+1}_{p_{1},1}+B^{\frac{N}{p}+2}_{p,1})}\leq Ce^{C(2U^{n}_{L}+2\widetilde{V}^{n}+\widetilde{V}_{1}^{n})(T)}\\ &\times\biggl(C\|u^{n}_{L}\|_{L^{1}_{T}(B^{\frac{N}{p_{1}}+1}_{p_{1},1}+B^{\frac{N}{p}+3}_{p,1})}(\bar{\nu}\|a^{n}\|_{L^{\infty}_{T} (B^{\frac{N}{p}}_{p,1})} +\|u^{n}_{L}\|_{L^{\infty}_{T}(B^{\frac{N}{p_{1}}-1}_{p_{1},1}+B^{\frac{N}{p}+1}_{p,1})})+\\ &C_{1} T\|a^{n}\|^{2}_{\widetilde{L}_{T}^{\infty}(B^{\frac{N}{p}}_{p,1})}+C_{P}(1+\|a^{n}\|_{\widetilde{L}^{\infty}(B^{\frac{N}{p}}_{p,1})})(\sqrt{T} \|\widetilde{v}_{1}^{n}\|_{\widetilde{L}^{2}_{T}(B^{\frac{N}{p_{1}}}_{p_{1},1}+B^{\frac{N}{p}+1}_{p,1})}\\ &+\sqrt{T}\|u_{L}^{n}\|_{\widetilde{L}^{2}_{T}(B^{\frac{N}{p_{1}}}_{p_{1},1}+B^{\frac{N}{p}+1}_{p,1})}+T \|a^{n}\|_{\widetilde{L}^{\infty}_{T}(B^{\frac{N}{p}}_{p,1})})+T\|u^{n}_{L}\|_{L^{\infty}_{T}(B^{\frac{N}{p_{1}}-1}_{p_{1},1}+B^{\frac{N}{p}+1}_{p,1})}\times\\ &\hspace{10cm}\|a^{n}\|_{\widetilde{L}^{\infty}_{T}(B^{\frac{N}{p}}_{p,1})}\biggr),\\ \end{aligned} \label{37} \end{equation} with $C=C(N)$, $C_{1}=C_{1}(N)$ and $C_{P}=C_{P}(N,P,\underline{b},\bar{b})$. Now, using assumptions $({\cal H}_{4})$, $({\cal H}_{5})$ and $({\cal H}_{6})$, and inserting (\ref{1ineimpca}) in (\ref{37}) gives: $$\|\widetilde{v}_{1}^{n}\|_{\widetilde{L}^{\infty}_{T}(B^{\frac{N}{p_{1}}-1}_{p_{1},1})}+ \|\widetilde{v}_{1}^{n}\|_{L^{1}_{T}(B^{\frac{N}{p_{1}}+1}_{p_{1},1})}\leq2C(\bar{\nu}A_{0}+U_{0})\eta+C_{1}TA_{0}(1+A_{0})+\sqrt{T}A_{0}U_{0},$$ hence $({\cal H}_{6})$ is satisfied with a strict inequality provided $T$ verifies: \begin{equation} 2C(\bar{\nu}A_{0}+U_{0})\eta+C_{1}TA_{0}(1+A_{0})+\sqrt{T}A_{0}U_{0}<C\bar{\nu}\eta.
\label{38} \end{equation} That $({\cal H}_{8})$ holds with a strict inequality follows from $({\cal H}_{5})$, $({\cal H}_{6})$ and $({\cal H}_{7})$. We now have to check whether $({\cal H}_{1})$ is satisfied with a strict inequality. For that we apply proposition \ref{transport2}, which yields for all $m\in\mathbb{Z}$, \begin{equation} \sum_{l\geq m}2^{l\frac{N}{p}}\|\Delta_{l}a^{n}\|_{L^{\infty}_{T}(L^{p})}\leq\sum_{l\geq m}2^{l\frac{N}{p}}\|\Delta_{l}a_{0}\|_{L^{p}}+ (1+\|a_{0}\|_{B^{\frac{N}{p}}_{p,1}})\big( e^{C(U^{n}_{L}+\widetilde{V}_{1}^{n}+\widetilde{V}^{n})(T)}-1\big). \label{39} \end{equation} Using (\ref{1conduti}) and $({\cal H}_{5})$, $({\cal H}_{6})$, we thus get: $$\|a^{n}-S_{m}a^{n}\|_{L^{\infty}_{T}(B^{\frac{N}{p}}_{p,1})}\leq\sum_{l\geq m}2^{l\frac{N}{p}}\|\Delta_{l}a_{0}\|_{L^{p}}+\frac{C}{\log2} (1+\|a_{0}\|_{B^{\frac{N}{p}}_{p,1}})(1+\underline{\nu}^{-1}\widetilde{U}_{0})\eta.$$ Hence $({\cal H}_{1})$ is strictly satisfied provided that $\eta$ further satisfies: \begin{equation} \frac{C}{\log2}(1+\|a_{0}\|_{B^{\frac{N}{p}}_{p,1}})(1+\underline{\nu}^{-1}\widetilde{U}_{0})\eta<\frac{c\underline{\nu}}{2\bar{\nu}}. \label{40} \end{equation} In order to check whether $({\cal H}_{3})$ is satisfied, we use the fact that: $$a^{n}-a_{0}=S_{m}(a^{n}-a_{0})+(Id-S_{m})(a^{n}-a_{0})+\sum_{l>n}\Delta_{l}a_{0},$$ whence, using $B^{\frac{N}{p}}_{p,1}\hookrightarrow L^{\infty}$ and assuming (with no loss of generality) that $n\geq m$, $$ \begin{aligned} &\|a^{n}-a_{0}\|_{L^{\infty}((0,T)\times\mathbb{R}^{N})}\leq C\big(\|S_{m}(a^{n}-a_{0})\|_{L^{\infty}_{T}(B^{\frac{N}{p}}_{p,1})}+ \|a^{n}-S_{m}a^{n}\|_{L^{\infty}_{T}(B^{\frac{N}{p}}_{p,1})}\\ &\hspace{9cm}+2\sum_{l\geq m}2^{l\frac{N}{p}}\|\Delta_{l}a_{0}\|_{L^{p}}\big).
\end{aligned} $$ Changing the constant $c$ in the definition of $m$ and in (\ref{40}) if necessary, one can, in view of the previous computations, assume that: $$C\big(\|a^{n}-S_{m}a^{n}\|_{L^{\infty}_{T}(B^{\frac{N}{p}}_{p,1})}+2\sum_{l\geq m}2^{l\frac{N}{p}}\|\Delta_{l}a_{0}\|_{L^{p}}\big)\leq\frac{\underline{b}}{4}.$$ As for the term $\|S_{m}(a^{n}-a_{0})\|_{L^{\infty}_{T}(B^{\frac{N}{p}}_{p,1})}$, it may be bounded according to proposition \ref{transport2}: $$ \begin{aligned} &\|S_{m}(a^{n}-a_{0})\|_{L^{\infty}_{T}(B^{\frac{N}{p}}_{p,1})}\leq(1+\|a_{0}\|_{B^{\frac{N}{p}}_{p,1}})(e^{C(\widetilde{V}_{1}^{n}+\widetilde{V}^{n}+U^{n}_{L})(T)} -1)+C2^{2m}\sqrt{T}\|a_{0}\|_{B^{\frac{N}{p}}_{p,1}}\\ &\hspace{10cm}\times\|u^{n}\|_{L^{2}_{T}(B^{\frac{N}{p_{1}}}_{p_{1},1}+B^{\frac{N}{p}}_{p,1})}. \end{aligned} $$ Note that, under assumptions $({\cal H}_{5})$, $({\cal H}_{6})$, (\ref{1conduti}) and (\ref{40}) (and changing $c$ if necessary), the first term in the right-hand side may be bounded by $\frac{\underline{b}}{8}$.
Hence, using interpolation, (\ref{34}) and the assumptions (\ref{1conduti}) and (\ref{40}), we end up with: $$\|S_{m}(a^{n}-a_{0})\|_{L^{\infty}_{T}(B^{\frac{N}{p}}_{p,1})}\leq\frac{\underline{b}}{8}+C2^{m}\sqrt{T}\|a_{0}\|_{B^{\frac{N}{p}}_{p,1}} \sqrt{\eta(U_{0}+\widetilde{U}_{0}\eta)(1+\underline{\nu}^{-1}\widetilde{U}_{0})}.$$ Assuming in addition that $T$ satisfies: \begin{equation} C2^{m}\sqrt{T}\|a_{0}\|_{B^{\frac{N}{p}}_{p,1}} \sqrt{\eta(U_{0}+\widetilde{U}_{0}\eta)(1+\underline{\nu}^{-1}\widetilde{U}_{0})}<\frac{\underline{b}}{8}, \label{42} \end{equation} and using the assumption $\underline{b}\leq1+a_{0}\leq\bar{b}$ yields $({\cal H}_{3})$ with a strict inequality.\\ One can now conclude that if $T<T^{n}$ has been chosen so that conditions (\ref{36}), (\ref{38}) and (\ref{42}) are satisfied (with $\eta$ verifying (\ref{1conduti}) and (\ref{40}), and $m$ defined in (\ref{def})) and $n\geq m$, then $(a^{n},u^{n})$ satisfies $({\cal H}_{1})$ to $({\cal H}_{8})$, and thus is bounded independently of $n$ on $[0,T]$.\\ We still have to show that $T^{n}$ may be bounded from below by the supremum $\bar{T}$ of all times $T$ such that (\ref{36}), (\ref{38}) and (\ref{42}) are satisfied. This is actually a consequence of the uniform bounds we have just obtained, and of remark \ref{remark6} and proposition \ref{transport1}. Indeed, by combining all this information, one can prove that if $T^{n}<\bar{T}$ then $(a^{n},u^{n})$ is actually in: $$\widetilde{L}^{\infty}_{T^{n}}(B^{\frac{N}{2}}_{2,1}\cap B^{\frac{N}{p}}_{p,1})\times\biggl(\widetilde{L}^{\infty}_{T^{n}}\big(B^{\frac{N}{2}}_{2,1}\cap (B^{\frac{N}{p_{1}}-1}_{p_{1},1}+B^{\frac{N}{p}+1}_{p,1})\big)\cap L^{1}_{T^{n}}\big(B^{\frac{N}{2}+1}_{2,1}\cap (B^{\frac{N}{p_{1}}+1}_{p_{1},1}+B^{\frac{N}{p}+2}_{p,1})\big)\biggr)^{N},$$ hence may be continued beyond $T^{n}$ (see the remark on the lifespan following the statement in \cite{DL}). We thus have $T^{n}\geq\bar{T}$.
\subsubsection*{Compactness arguments} We now have to prove that $(a^{n},u^{n})_{n\in\mathbb{N}}$ tends (up to a subsequence) to some function $(a,u)$ which belongs to $E_{T}$. Here we recall that: $$E_{T}=\widetilde{C}([0,T],B^{\frac{N}{p}}_{p,1})\times\big(\widetilde{L}^{\infty}(B^{\frac{N}{p_{1}}-1}_{p_{1},1}+B^{\frac{N}{p}+1}_{p,1})\cap \widetilde{L}^{1}( B^{\frac{N}{p_{1}}+1}_{p_{1},1}+B^{\frac{N}{p}+2}_{p,1})\big).$$ The proof is based on Ascoli's theorem and compact embeddings for Besov spaces. As similar arguments have been employed in \cite{DL} or \cite{DW}, we only outline the proof. \begin{itemize} \item Convergence of $(a^{n})_{n\in\mathbb{N}}$:\\ We use the fact that $\widetilde{a}^{n}=a^{n}-a^{n}_{0}$ satisfies: $$\partial_{t}\widetilde{a}^{n}=-u^{n}\cdot\nabla a^{n}-(1+a^{n}){\rm div}u^{n}.$$ Since $(u^{n})_{n\in\mathbb{N}}$ is uniformly bounded in $\widetilde{L}^{1}_{T}(B^{\frac{N}{p_{1}}+1}_{p_{1},1}+B^{\frac{N}{p}+1}_{p,1})\cap L^{\infty}_{T}(B^{\frac{N}{p_{1}}-1}_{p_{1},1}+B^{\frac{N}{p}+1}_{p,1})$, it is, by interpolation and the fact that $p_{1}\leq p$, also bounded in $L^{r}_{T}(B^{\frac{N}{p}-1+\frac{2}{r}}_{p,1})$ for any $r\in[1,+\infty]$. By using the standard product laws in Besov spaces, we thus easily gather that $(\partial_{t}\widetilde{a}^{n})$ is uniformly bounded in $\widetilde{L}^{2}_{T}(B^{\frac{N}{p}-1}_{p,1})$. Hence $(\widetilde{a}^{n})_{n\in\mathbb{N}}$ is bounded in $\widetilde{L}^{\infty}_{T}(B^{\frac{N}{p}-1}_{p,1}\cap B^{\frac{N}{p}}_{p,1})$ and equicontinuous on $[0,T]$ with values in $B^{\frac{N}{p}-1}_{p,1}$. Since the embedding $B^{\frac{N}{p}}_{p,1}\hookrightarrow B^{\frac{N}{p}-1}_{p,1}$ is locally compact, and $(a_{0}^{n})_{n\in\mathbb{N}}$ tends to $a_{0}$ in $B^{\frac{N}{p}}_{p,1}$, we conclude that $(a^{n})_{n\in\mathbb{N}}$ tends (up to extraction) to some distribution $a$.
Given that $(a^{n})_{n\in\mathbb{N}}$ is bounded in $\widetilde{L}^{\infty}_{T}(B^{\frac{N}{p}}_{p,1})$, we actually have $a\in\widetilde{L}^{\infty}_{T}(B^{\frac{N}{p}}_{p,1})$. \item Convergence of $(u^{n}_{L})_{n\in\mathbb{N}}$:\\ From the definition of $u^{n}_{L}$ and proposition \ref{chaleur}, it is clear that $(u^{n}_{L})_{n\in\mathbb{N}}$ tends to the solution $u_{L}$ of: $$\partial_{t}u_{L}-{\cal A}u_{L}=f,\;\;u_{L}(0)=u_{0}-\frac{1}{\nu},$$ in $\widetilde{L}^{\infty}_{T}(B^{\frac{N}{p_{1}}-1}_{p_{1},1}+B^{\frac{N}{p}+1}_{p,1})\cap \widetilde{L}^{1}_{T}(B^{\frac{N}{p_{1}}+1}_{p_{1},1}+B^{\frac{N}{p}+3}_{p,1})$. \item Convergence of $(\widetilde{v}_{1}^{n})_{n\in\mathbb{N}}$:\\ We use the fact that: $$ \begin{aligned} &\partial_{t}\widetilde{v}_{1}^{n}=-(u_{L}^{n}+\frac{1}{\nu}\widetilde{v}^{n})\cdot\nabla \widetilde{v}_{1}^{n}-\widetilde{v}_{1}^{n}\cdot\nabla u^{n}-\frac{1}{\nu}(u_{L}^{n}\cdot\nabla \widetilde{v}^{n}-\frac{1}{\nu}\widetilde{v}^{n}\cdot\nabla \widetilde{v}^{n})+(1+a^{n}){\cal A}\widetilde{v}_{1}^{n}\\ &\hspace{4,5cm}+a^{n} {\cal A}u_{L}^{n}-u_{L}^{n}\cdot\nabla u_{L}^{n}+\frac{1}{\nu}\nabla(\Delta)^{-1}(P^{'}(\rho^{n}){\rm div}(\rho^{n}u^{n})). \end{aligned} $$ As $(a^{n})_{n\in\mathbb{N}}$ is uniformly bounded in $L^{\infty}_{T}(B^{\frac{N}{p}}_{p,1})$ and $(u^{n})_{n\in\mathbb{N}}$ is uniformly bounded in $L^{\infty}_{T}(B^{\frac{N}{p_{1}}-1}_{p_{1},1}+B^{\frac{N}{p}+1}_{p,1})\cap L^{1}(B^{\frac{N}{p_{1}}+1}_{p_{1},1}+B^{\frac{N}{p}+1}_{p,1})$, it is easy to see that the right-hand side is uniformly bounded in $\widetilde{L}^{\frac{4}{3}}_{T}(B^{\frac{N}{p_{1}}-\frac{3}{2}}_{p_{1},1})+\widetilde{L}^{\infty}(B^{\frac{N}{p}-1}_{p,1})$. Hence $(\widetilde{v}_{1}^{n})_{n\in\mathbb{N}}$ is bounded in $\widetilde{L}^{\infty}_{T}(B^{\frac{N}{p_{1}}-1}_{p_{1},1}+B^{\frac{N}{p}+1}_{p,1})$ and equicontinuous on $[0,T]$ with values in $B^{\frac{N}{p_{1}}-1}_{p_{1},1}+B^{\frac{N}{p_{1}}-\frac{3}{2}}_{p_{1},1}$.
This enables us to conclude that $(\widetilde{v}_{1}^{n})_{n\in\mathbb{N}}$ converges (up to extraction) to some function $\widetilde{v}_{1}\in \widetilde{L}^{\infty}_{T}(B^{\frac{N}{p_{1}}-1}_{p_{1},1}+B^{\frac{N}{p}+1}_{p,1})\cap L^{1}_{T}(B^{\frac{N}{p_{1}}+1}_{p_{1},1}+B^{\frac{N}{p}+2}_{p,1})$. \end{itemize} By interpolating with the bounds provided by the previous step, one obtains stronger convergence results, so that one can pass to the limit in the mass equation and in the momentum equation. Finally, setting $u=\widetilde{v}_{1}+\widetilde{v}+u_{L}$, we conclude that $(a,u)$ satisfies (\ref{0.6}).\\ In order to prove continuity in time for $a$, it suffices to make use of proposition \ref{transport1}. Indeed, $a_{0}$ is in $B^{\frac{N}{p}}_{p,1}$, and having $a\in \widetilde{L}^{\infty}_{T}(B^{\frac{N}{p}}_{p,1})$ and $u\in \widetilde{L}^{1}_{T}(B^{\frac{N}{p_{1}}+1}_{p_{1},1}+B^{\frac{N}{p}+1}_{p,1})$ ensures that $\partial_{t}a+u\cdot\nabla a$ belongs to $\widetilde{L}^{1}_{T}(B^{\frac{N}{p}}_{p,1})$. Similarly, continuity for $u$ may be proved by using that $(\widetilde{v}_{1})_{0}\in B^{\frac{N}{p_{1}}-1}_{p_{1},1}$ and that $(\partial_{t}v_{1}-\mu\Delta v_{1})\in \widetilde{L}^{1}_{T}(B^{\frac{N}{p_{1}}-1}_{p_{1},1}+B^{\frac{N}{p}}_{p,1})$. We conclude by using the fact that $u=v_{1}+\frac{1}{\nu}v$. \subsection{The proof of the uniqueness} \subsubsection*{Uniqueness when $1\leq p_{1}<2N$, $\frac{2}{N}<\frac{1}{p}+\frac{1}{p_{1}}$ and $N\geq 3$} In this section, we focus on the cases $1\leq p_{1}<2N$, $\frac{2}{N}<\frac{1}{p}+\frac{1}{p_{1}}$, $N\geq 3$ and postpone the analysis of the other cases (which turn out to be critical) to the next section. Throughout the proof, we assume that we are given two solutions $(a^{1},u^{1})$ and $(a^{2},u^{2})$ of (\ref{0.6}). In the sequel we will show that $a^{1}=a^{2}$ and $v_{1}^{1}=v_{1}^{2}$, where $u^{i}=v_{1}^{i}+\widetilde{v}^{i}$. This will imply that $u^{1}=u^{2}$.
We know that $(a^{1},v_{1}^{1})$ and $(a^{2},v_{1}^{2})$ belong to: $$ \widetilde{C}([0,T]; B^{\frac{N}{p}}_{p,1})\times\big(\widetilde{C}([0,T];B^{\frac{N}{p_{1}}-1}_{p_{1},1}+B^{\frac{N}{p}+1}_{p,1})\cap \widetilde{L}^{1}(0,T;B^{\frac{N}{p_{1}}+1}_{p_{1},1}+B^{\frac{N}{p}+2}_{p,1})\big)^{N}.$$ Let $\delta a=a^{2}-a^{1}$, $\delta v=\widetilde{v}^{2}-\widetilde{v}^{1}$ and $\delta v_{1}=v_{1}^{2}-v_{1}^{1}$. The system for $(\delta a,\delta v_{1})$ reads: \begin{equation} \begin{cases} \begin{aligned} &\partial_{t}\delta a+u^{2}\cdot\nabla\delta a=\delta a{\rm div} u^{2}+(\delta v_{1}+\frac{1}{\nu}\delta v)\cdot\nabla a^{1}+(1+a^{1}){\rm div}(\delta v_{1}+\frac{1}{\nu}\delta v),\\ &\partial_{t}\delta v_{1}+u^{2}\cdot\nabla\delta v_{1}+\delta v_{1}\cdot\nabla u^{1}-(1+a^{1}){\cal A}\delta v_{1}=\delta a{\cal A}v_{1}^{2}-\frac{1}{\nu}(u^{2}\cdot\nabla\delta\widetilde{v}\\ &-\delta \widetilde{v}\cdot\nabla u^{1})+\nabla (\Delta)^{-1}\biggl((P^{'}(\rho^{2})-P^{'}(\rho^{1})){\rm div}(\rho^{2}u^{2})+P^{'}(\rho^{1}){\rm div}(\rho^{1}\delta u)\\ &\hspace{7,5cm}+ P^{'}(\rho^{1}){\rm div}((\rho^{2}-\rho^{1})u^{2})\biggr). \end{aligned} \end{cases} \label{systemeuni} \end{equation} The function $\delta a$ may be estimated by taking advantage of proposition \ref{transport1} with $s=\frac{N}{p}-1$.
Denoting $U^{i}(t)=\|\nabla u^{i}\|_{\widetilde{L}^{1}(B^{\frac{N}{p_{1}}+1}_{p_{1},1}+B^{\frac{N}{p}+1}_{p,1})}$ for $i=1,2$, we get for all $t\in[0,T]$, $$ \begin{aligned} &\|\delta a(t)\|_{B^{\frac{N}{p}-1}_{p,1}}\leq C e^{C U^{2}(t)}\int^{t}_{0}e^{-CU^{2}(\tau)}\|\delta a{\rm div} u^{2}+(\delta v_{1}+\frac{1}{\nu}\delta v)\cdot\nabla a^{1}\\ &\hspace{7cm}+(1+a^{1}){\rm div}(\delta v_{1}+\frac{1}{\nu}\delta v)\|_{B^{\frac{N}{p}-1}_{p,1}}d\tau. \end{aligned} $$ Next, using propositions \ref{produit1} and \ref{singuliere}, we obtain: $$ \begin{aligned} &\|\delta a(t)\|_{B^{\frac{N}{p}-1}_{p,1}}\leq C e^{C U^{2}(t)}\int^{t}_{0}e^{-CU^{2}(\tau)}\|\delta a\|_{B^{\frac{N}{p}-1}_{p,1}}\big(\|u^{2}\|_{B^{\frac{N}{p_{1}}+1}_{p_{1},1}+B^{\frac{N}{p}+1}_{p,1}}+(1+2\|a_{1}\|_{B^{\frac{N}{p}}_{p,1}})\big)\\ &\hspace{7,7cm}+(1+2\|a_{1}\|_{B^{\frac{N}{p}}_{p,1}})\|\delta v_{1}\|_{B^{\frac{N}{p_{1}}}_{p_{1},1}+B^{\frac{N}{p}+1}_{p,1}}d\tau. \end{aligned} $$ Hence applying Gr\"onwall's lemma, we get: \begin{equation} \|\delta a(t)\|_{B^{\frac{N}{p}-1}_{p,1}}\leq C e^{C U^{2}(t)}\int^{t}_{0}e^{-CU^{2}(\tau)}(1+\|a^{1}\|_{B^{\frac{N}{p}}_{p,1}}) \|\delta v_{1}\|_{B^{\frac{N}{p_{1}}}_{p_{1},1}+B^{\frac{N}{p}+1}_{p,1}}d\tau. \label{ineunia} \end{equation} For bounding $\delta v_{1}$, we aim at applying proposition \ref{linearise} to the second equation of (\ref{systemeuni}). So let us fix an integer $m$ such that: \begin{equation} 1+\inf_{(t,x)\in[0,T]\times\mathbb{R}^{N}}S_{m}a^{1}\geq\frac{\underline{b}}{2}\;\;\mbox{and}\;\;\|a^{1}- S_{m}a^{1}\|_{L^{\infty}_{T}(B^{\frac{N}{p}}_{p,1})}\leq c\frac{\underline{\nu}}{\bar{\nu}}. \label{ineunicondi} \end{equation} Note that since $a^{1}$ satisfies a transport equation with right-hand side in $\widetilde{L}^{1}_{T}(B^{\frac{N}{p}-1}_{p,1})$, proposition \ref{transport1} guarantees that $a^{1}$ is in $\widetilde{C}_{T}(B^{\frac{N}{p}}_{p,1})$. Hence such an integer does exist (see remark \ref{remark5}).
Now applying corollary \ref{linearise1} with $s=\frac{N}{p_{1}}-2$ and $s^{'}=\frac{N}{p}-1$ ensures that for all time $t\in[0,T]$, we have: $$ \begin{aligned} &\|\delta v_{1}\|_{L^{1}_{t}(B^{\frac{N}{p_{1}}}_{p_{1},1}+B^{\frac{N}{p}+1}_{p,1})}\leq C e^{C U(t)}\int^{t}_{0}e^{-CU(\tau)}\big(\|\delta a{\cal A}v_{1}^{2} -\frac{1}{\nu}(\delta v\cdot\nabla v_{1}^{1}+v_{1}^{1}\cdot\nabla\delta v)\\ &\hspace{7cm}-\frac{1}{\nu^{2}}(v^{1}\cdot\nabla\delta v+\delta v\cdot\nabla v^{2})\|_{B^{\frac{N}{p_{1}}-2}_{p_{1},1}+B^{\frac{N}{p}-1}_{p,1}}\big)d\tau, \end{aligned} $$ with $U(t)=U^{1}(t)+U^{2}(t)+2^{2m}\underline{\nu}^{-1}\bar{\nu}^{2}\int^{t}_{0}\|a^{1}\|^{2}_{B^{\frac{N}{p}}_{p,1}}d\tau$.\\ Hence, applying proposition \ref{produit1} we get: \begin{equation} \begin{aligned} &\|\delta v_{1}\|_{\widetilde{L}^{1}_{t}(B^{\frac{N}{p_{1}}}_{p_{1},1}+B^{\frac{N}{p}+1}_{p,1})}\leq C e^{C U(t)}\int^{t}_{0}e^{-CU(\tau)}\big(1+\|a^{1}\|_{B^{\frac{N}{p}}_{p,1}} +\|a^{2}\|_{B^{\frac{N}{p}}_{p,1}}\\ &\hspace{7cm}+\|v_{1}^{2}\|_{B^{\frac{N}{p_{1}}+1}_{p_{1},1}+B^{\frac{N}{p}+2}_{p,1}} \big)\|\delta a\|_{B^{\frac{N}{p}-1}_{p,1}}d\tau. \end{aligned} \label{ineunide} \end{equation} Finally, plugging (\ref{ineunia}) in (\ref{ineunide}), we get for all $t\in[0,T]$, $$ \begin{aligned} &\|\delta v_{1}\|_{\widetilde{L}^{1}_{t}(B^{\frac{N}{p_{1}}}_{p_{1},1}+B^{\frac{N}{p}+1}_{p,1})}\leq C e^{C U(t)}\int^{t}_{0}\big(1+\|a^{1}\|_{B^{\frac{N}{p}}_{p,1}} +\|a^{2}\|_{B^{\frac{N}{p}}_{p,1}}+\|v_{1}^{2}\|_{B^{\frac{N}{p_{1}}+1}_{p_{1},1}+B^{\frac{N}{p}+2}_{p,1}} \big)\\ &\hspace{9,5cm}\times\|\delta v_{1}\|_{B^{\frac{N}{p_{1}}}_{p_{1},1}+B^{\frac{N}{p}+1}_{p,1}}d\tau. \end{aligned} $$ Since $a^{1}$ and $a^{2}$ are in $L^{\infty}(B^{\frac{N}{p}}_{p,1})$ and $v_{1}^{2}$ belongs to $L^{1}_{T}(B^{\frac{N}{p_{1}}+1}_{p_{1},1}+B^{\frac{N}{p}+2}_{p,1})$, applying Gr\"onwall's lemma yields $\delta v_{1}=0$ on $[0,T]$.
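For the reader's convenience, we recall the integral form of Gr\"onwall's lemma that has been used repeatedly in the above estimates (stated here in generic notation): if $f$ and $g$ are nonnegative measurable functions on $[0,T]$, $g$ is integrable and $$f(t)\leq a+\int^{t}_{0}g(\tau)f(\tau)\,d\tau\quad\mbox{for all}\;\;t\in[0,T],$$ then $$f(t)\leq a\,e^{\int^{t}_{0}g(\tau)\,d\tau}\quad\mbox{for all}\;\;t\in[0,T].$$ In particular, taking $a=0$ forces $f\equiv0$, which is precisely how $\delta v_{1}=0$ is obtained.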
\subsubsection*{Uniqueness when $\frac{2}{N}=\frac{1}{p_{1}}+\frac{1}{p}$, $p_{1}=2N$ or $N=2$} The above proof fails in dimension two. One of the reasons is that the product of functions does not map $B^{\frac{N}{p}}_{p,1}\times B^{\frac{N}{p_{1}}-2}_{p_{1},1}$ into $B^{\frac{N}{p_{1}}-2}_{p_{1},1}$ but only into the larger space $B^{\frac{N}{p_{1}}-2}_{p_{1},\infty}$. This leads us to bound $\delta a$ in $L_{T}^{\infty}(B^{\frac{N}{p}-1}_{p,\infty})$ and $\delta v_{1}$ in $L_{T}^{\infty}(B^{\frac{N}{p_{1}}-2}_{p_{1},\infty}+B^{\frac{N}{p}}_{p,\infty})\cap L^{1}_{T}(B^{\frac{N}{p_{1}}}_{p_{1},\infty}+B^{\frac{N}{p}+1}_{p,\infty})$ (or rather, in the tilde versions of those spaces, see below). Yet, we run into trouble because, $B^{\frac{N}{p_{1}}}_{p_{1},\infty}$ not being embedded in $L^{\infty}$, the term $\delta v_{1}\cdot\nabla a^{1}$ in the right-hand side of the first equation of (\ref{systemeuni}) cannot be estimated properly. As noticed in \cite{DU}, this second difficulty may be overcome by making use of logarithmic interpolation and Osgood's lemma (a substitute for the Gr\"onwall inequality). Let us now tackle the proof. Fix an integer $m$ such that: \begin{equation} 1+\inf_{(t,x)\in[0,T]\times\mathbb{R}^{N}}S_{m}a^{1}\geq\frac{\underline{b}}{2}\;\;\mbox{and}\;\;\|a^{1}- S_{m}a^{1}\|_{\widetilde{L}^{\infty}_{T}(B^{\frac{N}{p}}_{p,1})}\leq c\frac{\underline{\nu}}{\bar{\nu}}, \label{47} \end{equation} and define $T_{1}$ as the supremum of all positive times $t$ such that: \begin{equation} t\leq T\;\;\mbox{and}\;\;t\bar{\nu}^{2}\|a^{1}\|_{\widetilde{L}^{\infty}_{T}(B^{\frac{N}{p}}_{p,1})}\leq c2^{-2m}\underline{\nu}. \label{48} \end{equation} Remark that proposition \ref{transport1} ensures that $a^{1}$ belongs to $\widetilde{C}_{T}(B^{\frac{N}{p}}_{p,1})$, so that the above two assumptions are satisfied if $m$ has been chosen large enough.
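Before proceeding, let us recall Osgood's lemma in the form in which it will be applied below (a standard statement, see e.g. \cite{BCD}): let $\rho$ be a nonnegative bounded measurable function on $[0,T]$, $\gamma$ a nonnegative integrable function and $\mu$ a continuous nondecreasing function with $\mu(0)=0$. If $$\rho(t)\leq\int^{t}_{0}\gamma(\tau)\,\mu(\rho(\tau))\,d\tau\quad\mbox{and}\quad \int^{1}_{0}\frac{dr}{\mu(r)}=+\infty,$$ then $\rho\equiv0$ on $[0,T]$. It will be used with $\mu(r)=r\log(e+C_{T}r^{-1})$.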
For bounding $\delta a$ in $L^{\infty}_{T}(B^{\frac{N}{p}-1}_{p,\infty})$, we apply proposition \ref{transport1} with $r=+\infty$ and $s=0$. We get (with the notation of the previous section): $$ \begin{aligned} &\forall t\in[0,T],\;\;\|\delta a(t)\|_{B^{\frac{N}{p}-1}_{p,\infty}}\leq Ce^{CU^{2}(t)}\int^{t}_{0} e^{-CU^{2}(\tau)}\|\delta a{\rm div} u^{2}+(\delta v_{1}+\frac{1}{\nu}\delta v)\cdot\nabla a^{1}\\ &\hspace{7,5cm}+(1+a^{1}){\rm div}(\delta v_{1}+\frac{1}{\nu}\delta v)\|_{B^{\frac{N}{p}-1}_{p,\infty}}d\tau, \end{aligned} $$ hence, using that the product of two functions maps $B^{\frac{N}{p}-1}_{p,\infty}\times B^{\frac{N}{p_{1}}}_{p_{1},1}$ into $B^{\frac{N}{p}-1}_{p,\infty}$, and applying Gr\"onwall's lemma, \begin{equation} \|\delta a(t)\|_{B^{\frac{N}{p}-1}_{p,\infty}}\leq Ce^{CU^{2}(t)}\int^{t}_{0} e^{-CU^{2}(\tau)}(1+\|a^{1}\|_{B^{\frac{N}{p}}_{p,1}})\|\delta v_{1}\|_{B^{\frac{N}{p_{1}}}_{p_{1},1}+B^{\frac{N}{p}+1}_{p,1}}d\tau. \label{49} \end{equation} Next, using proposition \ref{linearise1} combined with proposition \ref{produit1} and corollary \ref{produit2} in order to bound the nonlinear terms, we get for all $t\in[0,T_{1}]$: \begin{equation} \begin{aligned} &\|\delta v_{1}\|_{\widetilde{L}^{1}_{t}(B^{\frac{N}{p_{1}}}_{p_{1},\infty}+B^{\frac{N}{p}+1}_{p,\infty})}\leq Ce^{C(U^{1}+U^{2})(t)}\int^{t}_{0}(1+\|a^{1}\|_{B^{\frac{N}{p}}_{p,1}}+\|a^{2}\|_{B^{\frac{N}{p}}_{p,1}}\\ &\hspace{6,5cm}+\|v_{1}^{2}\|_{B^{\frac{N}{p_{1}}+1}_{p_{1},1}+B^{\frac{N}{p}+2}_{p,1}})\|\delta a\|_{B^{\frac{N}{p}-1}_{p,\infty}}d\tau.
\end{aligned} \label{50} \end{equation} In order to control the term $\|\delta v_{1}\|_{B^{\frac{N}{p_{1}}}_{p_{1},1}+B^{\frac{N}{p}+1}_{p,1}}$ which appears in the right-hand side of (\ref{49}), we make use of the following logarithmic interpolation inequality, whose proof may be found in \cite{DU}, page 120: \begin{equation} \begin{aligned} &\|\delta v_{1}\|_{L^{1}_{t}(B^{\frac{N}{p_{1}}}_{p_{1},1}+B^{\frac{N}{p}+1}_{p,1})}\lesssim\\ &\|\delta v_{1}\|_{\widetilde{L}^{1}_{t}(B^{\frac{N}{p_{1}}}_{p_{1},\infty})}\log\big( e+\frac{\|\delta v_{1}\|_{\widetilde{L}^{1}_{t}(B^{\frac{N}{p_{1}}-1}_{p_{1},\infty})}+\|\delta v_{1}\|_{\widetilde{L}^{1}_{t}(B^{\frac{N}{p_{1}}+1}_{p_{1},\infty})}}{\|\delta v_{1}\|_{\widetilde{L}^{1}_{t}(B^{\frac{N}{p_{1}}}_{p_{1},\infty})}}\big)\\ &\hspace{2cm}+\|\delta v_{1}\|_{\widetilde{L}^{1}_{t}(B^{\frac{N}{p}+1}_{p,\infty})}\log\big( e+\frac{\|\delta v_{1}\|_{\widetilde{L}^{1}_{t}(B^{\frac{N}{p}}_{p,\infty})}+\|\delta v_{1}\|_{\widetilde{L}^{1}_{t}(B^{\frac{N}{p}+2}_{p,\infty})}}{\|\delta v_{1}\|_{\widetilde{L}^{1}_{t}(B^{\frac{N}{p}}_{p,\infty})}}\big). \end{aligned} \label{51} \end{equation} Because $v_{1}^{1}$ and $v_{1}^{2}$ belong to $\widetilde{L}^{\infty}_{T}(B^{\frac{N}{p_{1}}-1}_{p_{1},1}+B^{\frac{N}{p}+1}_{p,1})\cap L^{1}_{T}(B^{\frac{N}{p_{1}}+1}_{p_{1},1}+B^{\frac{N}{p}+2}_{p,1})$, the numerator in the right-hand side may be bounded by some constant $C_{T}$ depending only on $T$ and on the norms of $v_{1}^{1}$ and $v_{1}^{2}$.
Therefore, inserting (\ref{49}) in (\ref{50}) and taking advantage of (\ref{51}), we end up for all $t\in[0,T_{1}]$ with: $$ \begin{aligned} &\|\delta v_{1}\|_{\widetilde{L}^{1}_{T}(B^{\frac{N}{p_{1}}}_{p_{1},1}+B^{\frac{N}{p}+1}_{p,1})}\leq C(1+\|a^{1}\|_{\widetilde{L}^ {\infty}_{T}(B^{\frac{N}{p}}_{p,1})})\\ &\hspace{0,5cm}\times\int^{t}_{0}(1+\|a^{1}\|_ {B^{\frac{N}{p}}_{p,1}}+\|a^{2}\|_{B^{\frac{N}{p}}_{p,1}}+\|v_{1}^{2}\|_{B^{\frac{N}{p_{1}}+1}_{p_{1},1}+B^{\frac{N}{p}+2}_{p,1}})\|\delta v_{1}\|_ {\widetilde{L}^{1}_{\tau}(B^{\frac{N}{p_{1}}}_{p_{1},\infty})}\\ &\hspace{6cm}\times\log\big(e+C_{T}\|\delta v_{1}\|^{-1}_ {\widetilde{L}^{1}_{\tau}(B^{\frac{N}{p_{1}}}_{p_{1},\infty}+B^{\frac{N}{p}+1}_{p,\infty})}\big)d\tau. \end{aligned} $$ Since the function $t\rightarrow\|a^{1}(t)\|_{B^{\frac{N}{p}}_{p,1}}+\|a^{2}(t)\|_{B^{\frac{N}{p}}_{p,1}}+\|v_{1}^{2}(t)\|_{B^{\frac{N}{p_{1}}+1}_{p_{1},1}+ B^{\frac{N}{p}+2}_{p,1}}$ is integrable on $[0,T]$, and: $$\int^{1}_{0}\frac{dr}{r\log(e+C_{T}r^{-1})}=+\infty,$$ Osgood's lemma yields $\|\delta v_{1}\|_{\widetilde{L}^{1}_{T_{1}}(B^{\frac{N}{p_{1}}}_{p_{1},1}+B^{\frac{N}{p}+1}_{p,1})}=0$. Note that the definition of $m$ depends only on $T$ and that (\ref{47}) is satisfied on $[0,T]$. Hence, the above arguments may be repeated on $[T_{1},2T_{1}]$, $[2T_{1},3T_{1}]$, etc., until the whole interval $[0,T]$ is exhausted. This yields uniqueness on $[0,T]$ for $a$ and $v_{1}$, which implies uniqueness for $u$. \subsection{Proof of corollary \ref{coro11}} The proof follows the same lines as that of theorem \ref{theo1}, except for the remainder term $\nabla(\Delta)^{-1}(P^{'}(\rho){\rm div}(\rho u))$ in the momentum equation of system (\ref{0.7}). Indeed, in our case this term simplifies to the form $\rho u$. We can then control this term in $\widetilde{L}^{2}(B^{\frac{N}{p}}_{p,1})$ without imposing additional conditions on $p$ of the type $2\frac{N}{p}-1>0$.\\ Now the difficulty is to prove uniqueness.
For that we use the main theorem of D. Hoff in \cite{5H5}, which is a weak-strong uniqueness result. In this article, D. Hoff considers two solutions $(\rho_{1},u_{1})$ and $(\rho_{2},u_{2})$ with the same initial data $(\rho_{0},u_{0})$ and shows that, under some regularity hypotheses on $(\rho_{1},u_{1})$ and $(\rho_{2},u_{2})$, one has $\rho_{1}=\rho_{2}$ and $u_{1}=u_{2}$. We now check that our solution satisfies the conditions required in \cite{5H5}; more precisely, we have to show that our solution $(\rho,u)$ verifies all the hypotheses imposed on $(\rho_{1},u_{1})$ and $(\rho_{2},u_{2})$. The verification is easy but tedious; only one hypothesis requires care, and it is in fact the main reason why D. Hoff does not obtain global strong solutions in dimension $N=3$ for the solutions built in \cite{5H4}. We need to check that $u\in L^{\infty}_{loc}((0,T],L^{\infty})$ and $\nabla u\in L^{1}((0,T),L^{\infty})$. In our case we have $\nabla u=\nabla v_{1}+\frac{1}{\nu}\nabla v$, where we recall that ${\rm div}v=P(\rho)-P(\bar{\rho})$. We know that by interpolation $\nabla v_{1}\in L^{1}_{T}(B^{\frac{N}{p_{1}}}_{p_{1},1}+B^{\frac{N}{p}+1}_{p,1})\hookrightarrow L^{1}_{T}(L^{\infty})$ and by proposition \ref{singuliere} $\nabla v\in L^{\infty}_{T}(B^{\frac{N}{p}}_{p,1})$. We then obtain $\nabla u\in L^{1}_{T}(L^{\infty})$. It remains to show that $u\in L^{\infty}_{T}(L^{\infty})$. To this end it suffices to apply classical energy inequalities: we multiply the momentum equation by $u|u|^{p_{1}-2}$ and get $$ \begin{aligned} &\frac{1}{p_{1}}\int_{\mathbb{R}^{N}}\rho|u|^{p_{1}}(t,x)dx+\mu\int^{t}_{0}|u|^{p_{1}-2}|\nabla u|^{2}(t,x)dtdx+\frac{p_{1}-2}{4}\mu\int^{t}_{0} |u|^{p_{1}-4}|\nabla|u|^{2}|^{2}(t,x)dxdt\\ &\hspace{1,7cm}+\int^{t}_{0}\int_{\mathbb{R}^{N}} \big(P(\rho)-P(\bar{\rho})\big)\big({\rm div}u|u|^{p_{1}-2}+(p_{1}-2) \sum_{i,k}u_{i}u_{k}\partial_{i}u_{k}|u|^{p_{1}-4}\big)(t,x)dtdx\\ &\hspace{11cm}\leq\int_{\mathbb{R}^{N}}\rho_{0}|u_{0}|^{p_{1}}dx.
\end{aligned} $$ By Young's inequalities and the fact that $P(\rho)-P(\bar{\rho})$ belongs to $L^{\infty}(L^{2}\cap L^{\infty})$, we obtain that for all $p_{1}\in [1,+\infty[$, $\rho^{\frac{1}{p_{1}}}u\in L^{\infty}(L^{p_{1}})$ and: $$\|\rho^{\frac{1}{p_{1}}}u\|_{L^{\infty}(L^{p_{1}})}\leq C_{0},$$ where $C_{0}$ depends only on the initial data. As $\frac{1}{\rho}\in L^{\infty}$, we conclude that $u$ is uniformly bounded in all spaces $L^{\infty}(L^{p_{1}})$ with $p_{1}\in [1,+\infty[$. We then conclude that $u\in L^{\infty}_{T}(L^{\infty})$. \section{Continuation criteria} \label{section6} \subsection*{Proof of theorem \ref{theo3}} We now prove theorem \ref{theo3}. We have assumed here that $\rho_{0}^{\frac{1}{p_{1}}}u_{0}\in L^{p_{1}}$ with $p_{1}>N$. We want to show that under our hypotheses, in particular $a\in L^{\infty}_{T}(L^{\infty})$ with $1+a$ bounded away from zero on $[0,T]$, we have $\rho^{\frac{1}{p_{1}}}u\in L^{\infty}_{T}(L^{p_{1}})$. In this case, as $1+a$ is bounded away from zero, we get $u\in L^{\infty}_{T}(L^{p_{1}})$, and by embedding $u\in L^{\infty}_{T}(B^{\frac{N}{p_{1}}-1}_{p_{1},1})$, as $\frac{N}{p_{1}}-1<0$. We can then conclude from the fact that $(a(T,\cdot),u(T,\cdot))\in B^{\frac{N}{p}}_{p,1}\times B^{\frac{N}{p_{1}}-1}_{p_{1},1}$ that the solution can be extended. Finally, it only remains to show that $\rho^{\frac{1}{p_{1}}}u\in L^{\infty}_{T}(L^{p_{1}})$; to this end we simply apply a classical energy inequality.
We multiply the momentum equation by $u|u|^{p_{1}-2}$ and get after integration by parts: $$ \begin{aligned} &\frac{1}{p_{1}}\int_{\mathbb{R}^{N}}\rho|u|^{p_{1}}(t,x)dx+\mu\int^{t}_{0}|u|^{p_{1}-2}|\nabla u|^{2}(t,x)dtdx+\frac{p_{1}-2}{4}\mu\int^{t}_{0} |u|^{p_{1}-4}|\nabla|u|^{2}|^{2}(t,x)dxdt\\ &+\lambda\int^{t}_{0}\int_{\mathbb{R}^{N}}({\rm div}u)^{2}|u|^{p_{1}-2}(t,x)dtdx+\lambda\frac{p_{1}-2}{2}\int^{t}_{0}\int_{\mathbb{R}^{N}} {\rm div}u\sum_{i}u_{i}\partial_{i}|u|^{2}|u|^{p_{1}-4}(t,x)dtdx-\\ &\hspace{2cm}\int^{t}_{0}\int_{\mathbb{R}^{N}} \big(P(\rho)-P(\bar{\rho})\big)\big({\rm div}u|u|^{p_{1}-2}+(p_{1}-2) \sum_{i,k}u_{i}u_{k}\partial_{i}u_{k}|u|^{p_{1}-4}\big)(t,x)dtdx\\ &\hspace{11cm}\leq\int_{\mathbb{R}^{N}}\rho_{0}|u_{0}|^{p_{1}}dx. \end{aligned} $$ By Young's inequalities, inequality (\ref{inegaliteviscosite}) and the fact that $P(\rho)-P(\bar{\rho})$ belongs to $L^{\infty}(L^{1}\cap L^{\infty})$, we conclude the proof. \section{Appendix} This section is devoted to the proof of the commutator estimates which have been used in sections 2 and 3. They are based on paradifferential calculus, a tool introduced by J.-M. Bony in \cite{BJM}. The basic idea of paradifferential calculus is that any product of two distributions $u$ and $v$ can be formally decomposed into: $$uv=T_{u}v+T_{v}u+R(u,v)=T_{u}v+T^{'}_{v}u,$$ where the paraproduct operator is defined by $T_{u}v=\sum_{q}S_{q-1}u\Delta_{q}v$, the remainder operator $R$ by $R(u,v)=\sum_{q}\Delta_{q}u(\Delta_{q-1}v+\Delta_{q}v+\Delta_{q+1}v)$, and $T^{'}_{v}u=T_{v}u+R(u,v)$. Inequalities (\ref{12}) and (\ref{18}) are consequences of the following lemma: \begin{lemme} \label{alemme2} Let $1\leq p_{1}\leq p\leq+\infty$ and $\sigma\in(-\min(\frac{N}{p},\frac{N}{p_{1}^{'}}),\frac{N}{p}+1]$.
There exists a sequence $c_{q}\in l^{1}(\mathbb{Z})$ such that $\|c_{q}\|_{l^{1}}=1$ and a constant $C$ depending only on $N$ and $\sigma$ such that: \begin{equation} \forall q\in\mathbb{Z},\;\;\|[v\cdot\nabla,\Delta_{q}]a\|_{L^{p_{1}}}\leq C c_{q}2^{-q\sigma}\|\nabla v\|_{B^{\frac{N}{p}}_{p,1}} \|a\|_{B^{\sigma}_{p_{1},1}}. \label{52} \end{equation} In the limit case $\sigma=-\min(\frac{N}{p},\frac{N}{p_{1}^{'}})$, we have: \begin{equation} \forall q\in\mathbb{Z},\;\;\|[v\cdot\nabla,\Delta_{q}]a\|_{L^{p_{1}}}\leq C c_{q}2^{q\frac{N}{p}}\|\nabla v\|_{B^{\frac{N}{p}}_{p,1}} \|a\|_{B^{-\frac{N}{p_{1}}}_{p,\infty}}. \label{53} \end{equation} Finally, for all $\sigma>0$ and $\frac{1}{p_{2}}=\frac{1}{p_{1}}-\frac{1}{p}$, there exists a constant $C$ depending only on $N$ and $\sigma$ and a sequence $c_{q}\in l^{1}(\mathbb{Z})$ with norm $1$ such that: \begin{equation} \forall q\in\mathbb{Z},\;\;\|[v\cdot\nabla,\Delta_{q}]v\|_{L^{p}}\leq C c_{q}2^{-q\sigma}(\|\nabla v\|_{L^{\infty}}\|v\|_{B^{\sigma}_{p_{1},1}}+\|\nabla v\|_{L^{p_{2}}}\|\nabla v\|_{B^{\sigma-1}_{p,1}}). \label{54} \end{equation} \end{lemme} {\bf Proof:} These results are proved in \cite{BCD}, chapter 2. {\hfill $\Box$}\\ Inequality (\ref{13}) is a consequence of the following lemma: \begin{lemme} \label{alemme3} Let $1\leq p_{1}\leq p\leq+\infty$, $\alpha\in(1-\frac{N}{p},1]$, $k\in\{1,\cdots,N\}$ and $R_{q}=\Delta_{q}(a\partial_{k}w)-\partial_{k}(a\Delta_{q}w)$. There exists a constant $C=C(\alpha,N,\sigma)$ such that: \begin{equation} \sum_{q}2^{q\sigma}\|R_{q}\|_{L^{p_{1}}}\leq C\|a\|_{B^{\frac{N}{p}+\alpha}_{p,1}}\|w\|_{B^{\sigma+1-\alpha}_{p_{1},1}} \label{57} \end{equation} whenever $-\frac{N}{p}<\sigma\leq\alpha+\frac{N}{p}$.\\ In the limit case $\sigma=-\frac{N}{p}$, we have for some constant $C=C(\alpha,N)$: \begin{equation} \sup_{q}2^{-q\frac{N}{p}}\|R_{q}\|_{L^{p_{1}}}\leq C\|a\|_{B^{\frac{N}{p}+\alpha}_{p,1}}\|w\|_{B^{-\frac{N}{p_{1}}+1-\alpha}_{p_{1},\infty}}.
\label{58} \end{equation} \end{lemme} {\bf Proof:} The proof is almost the same as the one of lemma A3 in \cite{DL}. It is based on Bony's decomposition, which enables us to split $R_{q}$ into: $$R_{q}=\underbrace{\partial_{k}[\Delta_{q},T_{a}]w}_{R_{q}^{1}}-\underbrace{\Delta_{q}T_{\partial_{k}a}w}_{R_{q}^{2}}+\underbrace{\Delta_{q}T_{\partial_{k}w}a}_{R_{q}^{3}} +\underbrace{\Delta_{q}R(\partial_{k}w,a)}_{R_{q}^{4}}-\underbrace{\partial_{k}T^{'}_{\Delta_{q}w}a}_{R_{q}^{5}}.$$ Using the fact that: $$R^{1}_{q}=\sum^{q+4}_{q^{'}=q-4}\partial_{k}[\Delta_{q},S_{q^{'}-1}a]\Delta_{q^{'}}w,$$ and the mean value theorem, we readily get under the hypothesis that $\alpha\leq1$, \begin{equation} \sum_{q}2^{q\sigma}\|R^{1}_{q}\|_{L^{p_{1}}}\lesssim\|\nabla a\|_{B^{\alpha-1}_{\infty,1}}\|w\|_{B^{\sigma+1-\alpha}_{p_{1},1}}. \label{59} \end{equation} Standard continuity results for the paraproduct ensure that $R^{2}_{q}$ satisfies (\ref{59}) and that: \begin{equation} \sum_{q}2^{q\sigma}\|R^{3}_{q}\|_{L^{p_{1}}}\lesssim\|\nabla w\|_{B^{\sigma-\alpha-\frac{N}{p_{1}}}_{\infty,1}}\|a\|_{B^{\frac{N}{p}+\alpha}_{p,1}}, \label{60} \end{equation} provided $\sigma-\alpha-\frac{N}{p}\leq0.$ Next, standard continuity results for the remainder ensure that under the hypothesis $\sigma>-\frac{N}{p}$, we have: \begin{equation} \sum_{q}2^{q\sigma}\|R^{4}_{q}\|_{L^{p_{1}}}\lesssim\|\nabla w\|_{B^{\sigma-\alpha}_{p_{1},1}}\|a\|_{B^{\frac{N}{p}+\alpha}_{p,1}}.
\label{61} \end{equation} For bounding $R^{5}_{q}$ we use the decomposition: $R^{5}_{q}=\sum_{q^{'}\geq q-3}\partial_{k}(S_{q^{'}+2}\Delta_{q}w\Delta_{q^{'}}a),$ which leads (after a suitable use of Bernstein and H\"older inequalities) to: $$2^{q\sigma}\|R^{5}_{q}\|_{L^{p_{1}}}\lesssim\sum_{q^{'}\geq q-3}2^{(q-q^{'})(\alpha+\frac{N}{p_{1}}-1)}2^{q(\sigma+1-\alpha)} \|\Delta_{q}w\|_{L^{p_{1}}}2^{q^{'}(\frac{N}{p}+\alpha)} \|\Delta_{q^{'}}a\|_{L^{p}}.$$ Hence, since $\alpha+\frac{N}{p}-1>0$, we have: $$\sum_{q}2^{q\sigma}\|R^{5}_{q}\|_{L^{p_{1}}}\lesssim\|w\|_{B^{\sigma+1-\alpha}_{p_{1},1}}\|a\|_{B^{\frac{N}{p}+\alpha}_{p,1}}.$$ Combining this latter inequality with (\ref{59}), (\ref{60}) and (\ref{61}), and using the embedding $B^{r}_{p,1}\hookrightarrow B^{r-\frac{N}{p}}_{\infty,1}$ with $r=\frac{N}{p}+\alpha-1$, completes the proof of (\ref{57}).\\ The proof of (\ref{58}) is almost the same: for bounding $R^{1}_{q}$, $R^{2}_{q}$, $R^{3}_{q}$ and $R^{5}_{q}$, it is just a matter of changing $\sum_{q}$ into $\sup_{q}$. \null{\hfill $\Box$} \begin{remarka} For proving proposition \ref{linearise1}, we shall actually use the following non-stationary version of inequality (\ref{58}): $$\sup_{q}2^{-q\frac{N}{p}}\|R_{q}\|_{L^{1}_{T}(L^{p_{1}})}\leq C\|a\|_{\widetilde{L}^{\infty}_{T}(B^{\frac{N}{p}+\alpha}_{p,1})} \|w\|_{\widetilde{L}^{1}_{T}(B^{-\frac{N}{p_{1}}+1-\alpha}_{p_{1},\infty})},$$ which may be easily proved by following the computations of the previous proof, dealing with the time dependence by means of H\"older's inequality. \label{remarque7} \end{remarka}
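For the reader's convenience, we also recall the Bernstein inequalities used in the above proofs (see \cite{BCD}): let $0<r<R$, $1\leq a\leq b\leq+\infty$ and $k\in\mathbb{N}$. If ${\rm supp}\,\widehat{u}\subset B(0,\lambda R)$ for some $\lambda>0$, then $$\sup_{|\beta|=k}\|\partial^{\beta}u\|_{L^{b}}\leq C^{k+1}\lambda^{k+N(\frac{1}{a}-\frac{1}{b})}\|u\|_{L^{a}},$$ whereas if ${\rm supp}\,\widehat{u}\subset\{\xi\in\mathbb{R}^{N}:\ \lambda r\leq|\xi|\leq\lambda R\}$, then $$C^{-k-1}\lambda^{k}\|u\|_{L^{a}}\leq\sup_{|\beta|=k}\|\partial^{\beta}u\|_{L^{a}}\leq C^{k+1}\lambda^{k}\|u\|_{L^{a}},$$ where the constant $C$ depends only on $N$, $r$ and $R$.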
\section{Introduction} For almost 50 years since Casimir's discovery [1], theory existed independently of the rare experiments [2,3]. A lot of theoretical work has been done during this period. However, the relationship to reality of some theoretical models, such as an ideal metal spherical shell, an ideal metal rectangular box, a dielectric ball etc., remains unclear up to now. In the last ten years scientific investigations in the field of the Casimir effect have experienced an interaction between experiment and theory. This has revealed that the application of some basic theories to real experimental situations is highly nontrivial and even leads to communication difficulties between theorists and experimentalists. In this paper we summarize some experience of the interaction between ``high theory'' and real-world experimental details in the last ten years. Different points of view are considered on such problems as the agreement between experiment and theory and the applicability of some ideal models and approximate methods in real experimental situations. It is shown that in some cases confusion arises from an inadequate use of terminology. In Sec.~2 we discuss an old problem of the thermal Casimir force in ideal metal rectangular boxes and suggest a new solution which satisfies general physical criteria. It is shown that the case of an isolated box is independent of that of a box with a partition (piston). The results for the Casimir force obtained for each of these configurations are in mutual agreement. Section 3 briefly reviews the proximity force approximation, including its justification from the first principles of quantum field theory and experimental applications. In this respect a new representation for the Casimir energy in terms of functional determinants and scattering matrices is considered. The application of this representation to real material bodies is still problematic. In Sec.~4 we consider the problem of the reliability of experiments.
It is underlined that the experimental error is an independent characteristic of the experimental precision which should not be confused with the measure of agreement between experiment and theory. Section 5 is devoted to the comparison between experiment and theory in the measurements of the Casimir force. In this respect different approaches to the theoretical description of the Casimir force between real metals are compared with the most precise indirect measurement of the Casimir pressure between two parallel plates by means of a micromechanical torsional oscillator [4,5]. Special attention is paid to uncertainties which might be introduced in the computations due to deviations of the tabulated optical data from the data particular to the metallic films actually used. In Sec.~6 we consider a recent theoretical approach to the thermal Casimir force taking into account the screening effects and diffusion currents. We apply this approach to the case of real metals and analyze its consistency with the principles of thermodynamics and experimental data. Specifically, we show that for metals with perfect crystal lattices the inclusion of screening effects results in a violation of the Nernst heat theorem. The data of the experiment [4,5] exclude this approach at a 99.9\% confidence level. Section 7 contains our conclusions and discussion. \section{Thermal Casimir force in ideal metal rectangular boxes} Ideal metal rectangular boxes were first considered by Lukosz [6], Mamayev and Trunov [7,8] and Ambj{\o}rn and Wolfram [9]. This configuration attracted much attention because it was found that the electromagnetic Casimir force in rectangular boxes can be both attractive and repulsive depending on the ratio of sides $a_x$, $a_y$ and $a_z$ along the $x$, $y$ and $z$ axes.
The nonrenormalized Casimir energy of the box is equal to (for simplicity we consider the massless scalar field with Dirichlet boundary conditions) \begin{equation} E_0(a_x,a_y,a_z)=\frac{\hbar}{2}\sum_{n,l,p=1}^{\infty}\omega_{nlp}, \label{eq1} \end{equation} \noindent where \begin{equation} \omega_{nlp}=\pi c\left[\Bigl(\frac{n}{a_x}\Bigr)^2+ \Bigl(\frac{l}{a_y}\Bigr)^2+\Bigl(\frac{p}{a_z}\Bigr)^2\right]^{1/2}. \label{eq2} \end{equation} \noindent The regularization of (\ref{eq1}) can be performed, e.g., using the Epstein zeta function or the cut-off method [10]. The latter permits one to find the geometric structure of the infinities contained in (\ref{eq1}). To do so, one replaces $E_0(a_x,a_y,a_z)$ from (\ref{eq1}) with $E_0^{(\delta)}(a_x,a_y,a_z)$ by introducing the cut-off function \begin{equation} f(\delta\omega_{nlp})={\rm e}^{-\delta\omega_{nlp}} \label{eq3} \end{equation} \noindent under the sign of summation in (\ref{eq1}). After the repeated application of the Abel-Plana formula [10] to $E_0^{(\delta)}(a_x,a_y,a_z)$ one finds that there are three different types of divergent quantities in the limit $\delta\to 0$, $I_1$, $I_2$ and $I_3$ of order $\delta^{-4}$, $\delta^{-3}$ and $\delta^{-2}$, respectively. Then, the finite, renormalized, Casimir energy can be defined as \begin{equation} E_0^{\rm ren}(a_x,a_y,a_z)=\lim_{\delta\to 0}\left[ E_0^{(\delta)}(a_x,a_y,a_z)-I_1-I_2-I_3\right]. \label{eq4} \end{equation} \noindent Here, $I_k$ ($k=1,\,2,\,3$) are the counter terms having the following geometrical structure: \begin{equation} I_1=\frac{12\pi^2\hbar a_xa_ya_z}{c^3\delta^4},\qquad I_2=-\frac{\pi^2\hbar (a_xa_y+a_xa_z+a_ya_z)}{c^2\delta^3},\qquad I_3=\frac{\pi\hbar (a_x+a_y+a_z)}{8c\delta^2}.
\label{eq5} \end{equation} \noindent A similar situation takes place for the electromagnetic field, where the renormalized Casimir energy, $E_{0,\rm em}^{\rm ren}$, also takes the form of (\ref{eq4}) (with $E_0$ replaced by $E_0^{\rm em}$) and \begin{equation} I_1^{\rm em}=2I_1,\qquad I_2^{\rm em}=0,\qquad I_3^{\rm em}=-2I_3. \label{eq6} \end{equation} \noindent It is seen that in both cases the counter terms are proportional to the volume of the box $V=a_xa_ya_z$, to the area of the box surface and to the sum of the sides. In the last few years the configuration of a rectangular box with a so-called {\it movable} partition (piston) has attracted much attention [11--14]. This means that the piston can have any fixed position parallel to the two opposite faces of the box (the configuration where the piston is not fixed and may slide between the opposite faces is in fact a nonequilibrium case). Let the piston be parallel to the plane $xy$ and have an equation $z=a_{z1}<a_z$. In this case our box is divided into the two boxes $a_x\times a_y\times a_{z1}$ and $a_x\times a_y\times (a_z-a_{z1})$. Calculating the sum of the regularized Casimir energies \begin{equation} E_0^{(\delta)}(a_x,a_y,a_{z1})+E_0^{(\delta)}(a_x,a_y,a_z-a_{z1}), \label{eq7} \end{equation} \noindent one finds that the contribution from the singular terms of the form of (\ref{eq5}) does not depend on the position of the piston $a_{z1}$. This leads to a finite force acting on the piston \begin{equation} F(a_x,a_y,a_z,a_{z1})=-\frac{\partial}{\partial a_{z1}}\left[ E_0^{(\delta)}(a_x,a_y,a_{z1})+E_0^{(\delta)}(a_x,a_y,a_z-a_{z1})\right]. \label{eq8} \end{equation} \noindent This force is well defined and does not require the renormalization procedure (\ref{eq4}). In both scalar and electromagnetic cases the force acting on the piston attracts it to the nearest face of the box. On this ground the existence of the Casimir repulsion in cubes in the electromagnetic case was considered doubtful [12].
Specifically, it was claimed [12,13] that the definition of the pressure acting on a cube face requires consideration of elastic deformations of bodies otherwise treated as perfectly rigid. The attraction (or repulsion for a piston with Neumann boundary conditions [15]) of a piston to the nearest face of the box does not, however, negate the Casimir repulsion for boxes without a piston that have some appropriate ratio of $a_x$, $a_y$ and $a_z$. The point is that the case of empty space outside the box and that of another section of the larger box outside the piston are physically quite different. In the first case the vacuum energy outside the box does not depend on $a_x$, $a_y$ and $a_z$ and there is no force acting on the box from the outside. In the second case, by contrast, there is an extra section of the larger box outside the piston which gives rise to an additional force acting on it. In fact, one need not invoke elastic deformations to define a force and a pressure in static configurations. This is simply done using the principle of virtual work, which defines real forces through virtual displacements [16,17]. In addition, from a thermodynamic point of view any equilibrium system can be characterized by the free energy (energy if the temperature is equal to zero) and the respective pressure [18] \begin{equation} P=-\left.\frac{\partial {\cal F}}{\partial V}\right|_{T={\rm const}}. \label{eq9} \end{equation} \noindent {}From this point of view it would be illogical to admit consideration of the force acting on a piston, but exclude from consideration forces acting on the faces of a box where this piston serves as a partition. In this respect it seems important to provide a finite definition of the Casimir free energy in ideal metal rectangular boxes satisfying general physical requirements. The first calculations on this subject [9] resulted in a divergent free energy after removing the regularization. More recent results appear to be either infinite [19] or ambiguous [20].
Paper [21] reconsidered the derivation of the Casimir free energy in rectangular boxes using zeta functional regularization. However, the formalism used there does not include all the necessary subtractions. The following definition of the Casimir free energy in rectangular boxes suggests itself [12,13,21] \begin{equation} {\cal F}_0=E_0^{\rm ren}+\Delta_T{\cal F}_0, \qquad \Delta_T{\cal F}_0=k_BT\sum_{n,l,p=1}^{\infty}\ln\left(1- {\rm e}^{-\frac{\hbar\omega_{nlp}}{k_BT}}\right). \label{eq10} \end{equation} \noindent This expression is finite. However, it cannot be considered physically satisfactory. The problem is that at high temperature the thermal correction (\ref{eq10}) behaves as [22] \begin{equation} \Delta_T{\cal F}_0=\alpha_1\frac{(k_BT)^4}{(\hbar c)^3}+ \alpha_2\frac{(k_BT)^3}{(\hbar c)^2}+ \alpha_3\frac{(k_BT)^2}{\hbar c}+\alpha_4 k_BT+\ldots\, , \label{eq11} \end{equation} \noindent where $\alpha_1=-V\pi^2/90$, $\alpha_{2,3}=\alpha_{2,3}(a_x,a_y,a_z)$ can be expressed in terms of the heat kernel coefficients and $\alpha_4={\rm const}$. Note that the powers of $k_BT$ and $\hbar c$ in each term are fixed dimensionally, since $\alpha_2$ and $\alpha_3$ have the dimensions of area and length, respectively. Then at high temperature $\Delta_T{\cal F}_0$ contains terms of quantum origin which increase with temperature. In the general case, these terms lead to respective forces acting on the box faces which increase with the size of the box. Such paradoxical properties are physically unacceptable. Because of this, it was suggested [23] to define the physical Casimir free energy of the box as \begin{equation} {\cal F}=E_0^{\rm ren}+\Delta_T{\cal F}_0- \alpha_1\frac{(k_BT)^4}{(\hbar c)^3}- \alpha_2\frac{(k_BT)^3}{(\hbar c)^2}- \alpha_3\frac{(k_BT)^2}{\hbar c}. \label{eq12} \end{equation} \noindent With this definition, the respective Casimir forces acting on the box faces go to zero when all the box sides $a_x,\,a_y,\,a_z$ go to infinity, in agreement with physical intuition. The physical meaning of all three subtractions made on the right-hand side of (\ref{eq12}) can be clearly understood.
The first term is actually the contribution of the blackbody radiation in the volume of the box. This is seen from the fact that the free energy density of the blackbody radiation in empty space is given by \begin{equation} f_{bb}=k_BT\int\frac{d^3k}{(2\pi)^3}\ln\left(1- {\rm e}^{-\frac{\hbar c|k|}{k_BT}}\right)= -\frac{\pi^2(k_BT)^4}{90(\hbar c)^3}, \label{eq13} \end{equation} \noindent where for the electromagnetic case $f_{bb}^{\rm em}=2f_{bb}$. For the scalar Casimir effect in a rectangular box with sides $a_x\times a_y\times a_z$ the asymptotic behavior of $\Delta_T{\cal F}_0$ at high $T$ was investigated in [23] with the result \begin{equation} \alpha_2=\frac{\zeta(3)}{4\pi}(a_xa_y+a_xa_z+a_ya_z), \qquad \alpha_3=-\frac{\pi}{24}(a_x+a_y+a_z). \label{eq14} \end{equation} \noindent In the electromagnetic case the following values of these coefficients were obtained: \begin{equation} \alpha_2^{\rm em}=0, \qquad \alpha_3^{\rm em}=\frac{\pi}{12}(a_x+a_y+a_z). \label{eq15} \end{equation} \noindent This demonstrates that the geometric structures of all three terms subtracted in (\ref{eq12}) are precisely the same as the terms subtracted in (\ref{eq4}) to make the Casimir energy finite at zero temperature. Because of this, the subtraction procedure in (\ref{eq12}) can be interpreted as the additional (finite) renormalization of the same geometric parameters as were renormalized at zero temperature to make the Casimir energy of the box finite. The simplest application of the final expression for the physical Casimir free energy (\ref{eq12}) is the case of two plane parallel plates. It is easily seen that in this configuration $\alpha_2=\alpha_3=0$ and one is left with only a subtraction of the free energy of the blackbody radiation in the volume between the plates $V=aS$, where $S$ is the infinite plate area. 
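The coefficient in (\ref{eq13}) is easy to verify numerically. In units with $\hbar=c=k_BT=1$ one has $f_{bb}=(2\pi^2)^{-1}\int_0^\infty k^2\ln(1-{\rm e}^{-k})\,dk$, and expanding the logarithm in powers of ${\rm e}^{-k}$ reduces the integral to $-2\zeta(4)$, so that $f_{bb}=-\zeta(4)/\pi^2=-\pi^2/90$. A minimal Python sketch of this check (illustrative only, not part of the original computations):

```python
import math

# Check of Eq. (13): in units hbar = c = k_B T = 1 the free energy density is
# f_bb = (1/(2 pi^2)) * integral_0^inf k^2 ln(1 - e^{-k}) dk.
# Expanding ln(1 - e^{-k}) = -sum_{n>=1} e^{-n k}/n and using
# integral_0^inf k^2 e^{-n k} dk = 2/n^3 reduces the integral to -2 zeta(4).
zeta4 = sum(1.0 / n**4 for n in range(1, 100001))   # truncated zeta(4)
coeff = -zeta4 / math.pi**2                          # = (1/(2 pi^2)) * (-2 zeta(4))

expected = -math.pi**2 / 90.0   # coefficient quoted in Eq. (13), scalar field
```

For the electromagnetic field the coefficient is doubled, in accordance with $f_{bb}^{\rm em}=2f_{bb}$.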
This leads to the well known result [10,24] for the electromagnetic Casimir free energy per unit area of the plates \begin{equation} {\cal F}(a,T)=-\frac{\pi^2}{720a^3} \left\{1+\frac{45}{\pi^3}\sum_{l=1}^{\infty}\left[ \frac{\coth(\pi lt)}{t^3l^3} +\frac{\pi}{t^2l^2{\rm sinh}^2(\pi tl)}\right]- \frac{1}{t^4} \right\}, \label{eq16} \end{equation} \noindent where $t\equiv T_{\rm eff}/T$, and the effective temperature is defined as $k_BT_{\rm eff}=\hbar c/(2a)$. In particular, at $T\ll T_{\rm eff}$ one obtains \begin{equation} {\cal F}(a,T)=-\frac{\pi^2}{720a^3} \left[1+\frac{45\zeta(3)}{\pi^3} \left(\frac{T}{T_{\rm eff}}\right)^3- \left(\frac{T}{T_{\rm eff}}\right)^4 \right], \label{eq17} \end{equation} \noindent where the last contribution on the right-hand side originates from the subtraction of the blackbody radiation. We emphasize that only this term contributes to the thermal correction to the electromagnetic Casimir pressure at low temperatures (short separations) \begin{equation} P(a,T)= -\frac{\pi^2}{240a^4} \left[1+\frac{1}{3}\, \left(\frac{T}{T_{\rm eff}}\right)^4 \right]. \label{eq18} \end{equation} Equation (\ref{eq12}) solves the long-standing problem on the calculation of the physical Casimir free energies and pressures in rectangular boxes of any size. A few examples for both the scalar and electromagnetic Casimir effect are considered in [23]. Here we present the computational results for the electromagnetic free energy in a cube and for the respective Casimir force \begin{equation} F_x(a,T)=a^2P(a,T)=-\frac{1}{3}\, \frac{\partial{\cal F}(a,T)}{\partial a} \label{eq19} \end{equation} \noindent acting on the opposite cube faces. 
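The low-temperature expansion (\ref{eq17}) can be checked against the exact expression (\ref{eq16}) numerically. The Python sketch below (illustrative only) evaluates the dimensionless factor in the curly brackets of (\ref{eq16}) for $t=T_{\rm eff}/T\gg 1$ and compares it with the factor in the square brackets of (\ref{eq17}):

```python
import math

def exact_factor(t, lmax=200):
    """Dimensionless factor in curly brackets of Eq. (16), with t = T_eff/T."""
    s = 0.0
    for l in range(1, lmax + 1):
        x = math.pi * l * t
        if x > 50.0:
            # coth(x) -> 1 and the 1/sinh^2 term is exponentially small here,
            # which also avoids overflow of cosh/sinh for large arguments
            s += 1.0 / (t**3 * l**3)
        else:
            s += math.cosh(x) / math.sinh(x) / (t**3 * l**3)
            s += math.pi / (t**2 * l**2 * math.sinh(x)**2)
    return 1.0 + 45.0 / math.pi**3 * s - 1.0 / t**4

def low_T_factor(t):
    """The corresponding factor in the low-temperature expansion, Eq. (17)."""
    zeta3 = 1.2020569031595943
    return 1.0 + 45.0 * zeta3 / (math.pi**3 * t**3) - 1.0 / t**4
```

At $t=10$ (i.e., $T=0.1\,T_{\rm eff}$) the two factors agree to better than one part in $10^6$, as expected from the exponential smallness of the neglected terms.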
\begin{figure}[b] \vspace*{-15.cm} \begin{center} \hspace*{-2.5cm} \includegraphics{figVM-1.ps} \end{center} \vspace*{-11.cm} \caption{The electromagnetic Casimir free energy for a cube as a function of (a) size $a$ at $T=300\,$K (solid line; the dashed line shows the energy at $T=0$) and (b) temperature at $a=2\,\mu$m.} \end{figure} In Fig.~1(a) we plot the electromagnetic Casimir free energy in a cube as a function of $a$ at $T=300\,$K (solid line). In the same figure the Casimir energy at $T=0$ is shown by the dashed line. As is seen in this figure, the electromagnetic Casimir free energy decreases with increasing cube size $a$. At large $a$, ${\mathcal F}$ approaches a constant. In Fig.~1(b) the electromagnetic Casimir free energy is shown as a function of temperature for a cube with $a=2\,\mu$m. The free energy decreases with increasing $T$. At high temperatures ${\mathcal F}$ reaches the classical limit. The respective thermal electromagnetic Casimir force at $T=300\,$K, as a function of $a$, is shown in Fig.~2(a) by the solid line. It is positive (i.e., repulsive) for cubes of any size. Thus, thermal effects for cubes in the electromagnetic case increase the strength of the Casimir repulsion. The dashed line in Fig.~2(a) shows the electromagnetic Casimir force at $T=0$ as a function of $a$. This force is given by \begin{equation} F_{x}(a)=\frac{0.09166\,\hbar c}{3a^2}, \label{eq20} \end{equation} \noindent i.e., it is always repulsive. Fig.~2(b) demonstrates the electromagnetic Casimir force in a cube of size $a=2\,\mu$m as a function of temperature. It is seen that the force increases with increasing temperature.
\begin{figure}[t] \vspace*{-15.cm} \begin{center} \hspace*{-2.5cm} \includegraphics{figVM-2.ps} \end{center} \vspace*{-11.cm} \caption{The electromagnetic Casimir force between the opposite faces of a cube as a function of (a) size $a$ at $T=300\,$K (solid line; the dashed line shows the force at $T=0$) and (b) temperature at $a=2\,\mu$m.} \end{figure} Note that the results presented differ from those found in [21] where the terms of order $(k_BT)^4$ and of lower orders in the Casimir free energy were obtained in the high-temperature regime. This is explained by the fact that the authors of [21] did not make subtractions of the contributions from the blackbody radiation and of the terms proportional to the box surface area and to the sum of its sides. The thermal correction to the Casimir energy and force acting on a piston were investigated in [13] for the scalar field with Dirichlet or Neumann boundary conditions using the definition (\ref{eq10}). The electromagnetic Casimir free energy and force acting on a piston were found in the case of ideal metal rectangular boxes and cavities with a general cross section [13]. In the limit of low temperatures the thermal correction to the Casimir force on a piston was shown to be exponentially small. In the case of intermediate temperatures $a_x\ll\hbar c/(k_BT)\ll a_y,a_z$ the authors of [13] obtained terms of order $(k_BT)^4$ and of order $(k_BT)^2$ in the electromagnetic Casimir free energy. In the scalar Casimir free energy, a term of order $(k_BT)^3$ was also obtained. This results in a contribution to the force acting on a piston which increases with temperature, depends on $\hbar$ and $c$, and does not depend on the position of the piston. The scalar and electromagnetic thermal Casimir forces acting on a piston were also considered on the basis of equation (\ref{eq10}) in [25].
The same results for the thermal correction to the Casimir force acting on a piston are obtained if the free energy is defined in accordance with equation (\ref{eq12}). This is because the contribution of blackbody radiation to the energy of the entire box is equal to \begin{equation} -a_xa_ya_{z1}\,f_{bb}-a_xa_y(a_z-a_{z1})f_{bb}= -a_xa_ya_z\,f_{bb}, \label{eq21} \end{equation} \noindent i.e., it does not depend on the position of the piston. This is also true for the terms of order $(k_BT)^3$ and $(k_BT)^2$ which are proportional to the surface area of each section of the box and to the sum of its sides. The above results were obtained for rectangular boxes with Dirichlet boundary conditions (scalar case) and for ideal metal boxes (electromagnetic case). Just as at zero temperature, the consideration of the thermal Casimir effect in rectangular boxes has to incorporate real material properties of the boundary surfaces. So far this problem has not been conclusively solved. \section{Functional determinants and the justification of the proximity force approximation} The proximity force approximation [26] provides an important bridge between experiment and theory. Experimentally it is hard to use the configuration of two parallel plates. Because of this, most experiments use the configuration of a sphere above a plate for which, even in the ideal metal case, the exact results for the electromagnetic Casimir force are not available. According to the proximity force approximation (PFA), the interaction energy between two curved surfaces $\Sigma_1$ and $\Sigma_2$ can be approximately calculated by replacing the small curved surface elements with respective plane plates.
If the interaction energy between the opposite plane parallel elements is denoted by $E(z)$ (where $z$ is the separation distance), the interaction energy and force are approximately represented as \begin{equation} U(a)=\int_{\Sigma_1}E(z)d\sigma, \qquad F(a)=-\frac{\partial U(a)}{\partial a}. \label{eq22} \end{equation} \noindent For the configuration of an ideal metal sphere of radius $R$ at a separation $a$ above an ideal metal plane (\ref{eq22}) results in \begin{equation} F_{\rm PFA}^{s}(a)=2\pi RE(a)=-\frac{\pi^3\hbar cR}{360a^3}. \label{eq23} \end{equation} \noindent For an ideal metal cylinder above an ideal metal plate the PFA leads to \begin{equation} F_{\rm PFA}^{c}(a)=\frac{15\pi}{16} \sqrt{\frac{2R}{a}}E(a)= -\frac{\pi^3}{384\sqrt{2}}\sqrt{\frac{R}{a}}\frac{\hbar c}{a^3}. \label{eq24} \end{equation} \noindent Equations (\ref{eq22})--(\ref{eq24}) are approximate. They are applicable only at short separations between the surfaces. Thus, (\ref{eq23}) and (\ref{eq24}) work well only at $a\ll R$. In many papers the PFA (\ref{eq22}) is applied in a region where it is not applicable, for example at $a=R/2$. The resulting large deviations of the PFA result from the exact one are then regarded as a ``violation of the PFA''. Such formulations are in fact misleading. The PFA gives only the main contribution to the force under some conditions. In particular, it would be meaningless to calculate the integral in (\ref{eq22}) up to higher orders in the related small parameter with the aim of obtaining a more exact result. What is really meaningful is the search for an exact analytical representation for the Casimir force in configurations where only the PFA result is so far available. In the last few years a finite representation for the Casimir energy of two separated bodies $A$ and $B$ in terms of functional determinants has been obtained.
In this representation the Casimir energy can be written in the form [27,28] \begin{equation} E(a)=\frac{1}{2\pi}\int_{0}^{\infty}\!\!\!d\xi\, {\rm Tr}\ln\bigl(1-{\cal T}^A{\cal G}_{\xi,AB}^{(0)} {\cal T}^B{\cal G}_{\xi,BA}^{(0)}\bigr) = \frac{1}{2\pi}\int_{0}^{\infty}\!\!\!d\xi\, \ln{\rm det}\bigl(1-{\cal T}^A{\cal G}_{\xi,AB}^{(0)} {\cal T}^B{\cal G}_{\xi,BA}^{(0)}\bigr). \label{eq25} \end{equation} \noindent Here, ${\cal G}_{\xi,AB}^{(0)}$ is the operator for the free-space Green function with the matrix elements $\langle\mbox{\boldmath$r$}|{\cal G}_{\xi,AB}^{(0)}| \mbox{\boldmath$r$}^{\prime}\rangle$ where $\mbox{\boldmath$r$}$ belongs to the body $A$ and $\mbox{\boldmath$r$}^{\prime}$ to $B$. ${\cal T}^A\,({\cal T}^B)$ is the operator of the $T$-matrix for the body $A$ ($B$), respectively. The latter is widely used in light scattering theory, where it is the basic object for expressing the properties of the scatterers [29]. Using such a representation, in [28] the analytic results for the electromagnetic Casimir energy for an ideal metal cylinder above an ideal metal plane were obtained. In the end, the result is expressed through the determinant of an infinite matrix with elements given in terms of the Bessel functions. The analytic asymptotic behavior of the exact Casimir energy at short separations was found in [30]. It results in the following expression for the Casimir force at $a\ll R$: \begin{equation} F^{c}(a,0)=F_{\rm PFA}^{c}(a) \,\left[1-\frac{1}{5}\left(\frac{20}{\pi^2}- \frac{7}{12}\right)\frac{a}{R}\right]. \label{eq26} \end{equation} \noindent The PFA result (\ref{eq24}) in this case coincides with the first term on the right-hand side of (\ref{eq26}). Equation (\ref{eq26}) is very important. It demonstrates that the relative error of the electromagnetic Casimir force between a cylinder and a plate calculated using the PFA is equal to $0.2886\,a/R$.
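As a quick numerical cross-check of (\ref{eq26}) and (\ref{eq23}), the short Python sketch below (illustrative only; $\hbar c$ is set to unity, and $R=100\,\mu$m, $a=100\,$nm are taken as typical experimental parameters) reproduces the coefficient $0.2886$ and verifies the sphere-plate PFA prefactor against the parallel-plate energy per unit area $E(z)=-\pi^2\hbar c/(720z^3)$:

```python
import math

# Coefficient of the first correction beyond the PFA, Eq. (26)
c1 = (1.0 / 5.0) * (20.0 / math.pi**2 - 7.0 / 12.0)   # ~0.2886

# Relative PFA error for typical experimental parameters
R, a = 100e-6, 100e-9          # cylinder radius and separation, m
rel_err = c1 * a / R           # ~3e-4, i.e. about 0.03%

# Consistency of Eq. (23) with the parallel-plate energy per unit area
# E(z) = -pi^2 hbar c / (720 z^3), with hbar*c = 1 here:
E = lambda z: -math.pi**2 / (720.0 * z**3)
pfa_sphere = 2.0 * math.pi * 1.0 * E(1.0)   # R = a = 1
exact_form = -math.pi**3 / 360.0            # right-hand side of Eq. (23)
```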
Thus, for typical experimental parameters of $R=100\,\mu$m and $a=100\,$nm this error is approximately equal to only 0.03\%. For a sphere above a plate made of ideal metals the exact analytic solution in the electromagnetic case has not yet been obtained. The scalar Casimir energy for a sphere above a plate was found in [30,31]. The scalar Casimir energies for both a sphere and a cylinder above a plate have also been computed numerically using worldline algorithms [32,33], but it was noted that the Casimir energies for the Dirichlet scalar field should not be taken as an estimate for those in the electromagnetic case. For an ideal metal sphere above an ideal metal plane a correction of order $a/R$ beyond the PFA was computed numerically in [34] for $a/R\geq 0.075$ and in [35] for $a/R\geq 0.15$. In both cases the extrapolation of the obtained results to smaller $a/R$ leads to a coefficient of the term linear in $a/R$ approximately equal to 1.4. In addition, the validity of the PFA for a sphere above a plate has been estimated experimentally [36] and the error introduced by the use of this approximation was shown to be less than $a/R$. This is in disagreement with the extrapolations made in [34,35]. To resolve this contradiction, it is desirable to find the analytical form of the first correction beyond the PFA for a sphere above a plane, as in (\ref{eq26}) for the cylinder-plane configuration. In fact the representation (\ref{eq25}) provides a far-reaching generalization of the Lifshitz formula. {}From a conceptual point of view it can be applied not only to ideal metals, but to real materials as well. The problem, however, is to find the matrix elements of the $T$-matrix operator which would take proper account of both the geometric shape and the material properties of the test bodies used in the experimental situation.
\section{The experimental error and reliability of experiments} The concept of the experimental error is often confused with the theoretical error and with the measure of agreement between experiment and theory. However, when we deal with an {\it independent measurement}, the experimental error has nothing to do with any theory of the measured quantity. The independent measurement of the Casimir force or its gradient does not use any theory of the Casimir effect. Thus, the experiments [4,5,37--42] are independent in this respect. In other experiments (in [43], for instance) the measurement data are fitted to some theoretical expression for the Casimir force. Such measurements are not independent and we do not consider them below. Some papers arrive at theoretical conclusions which are inconsistent with the measurement data. This is sometimes accompanied by the statement that the measurements might not be as precise as indicated by the authors. It is our opinion that such statements made without an indication of any specific cause are inappropriate. Both random, $\Delta^{\!\rm rand}F^{\rm expt}(a)$, and systematic, $\Delta^{\!\rm syst}F^{\rm expt}(a)$, experimental errors in the Casimir force measurements are found using rigorous statistical procedures. They can be combined to find the total experimental error \begin{equation} \Delta^{\!\rm tot}F^{\rm expt}(a)=q_{\beta}(r)\left[ \Delta^{\!\rm rand}F^{\rm expt}(a)+ \Delta^{\!\rm syst}F^{\rm expt}(a)\right]. \label{eq27} \end{equation} \noindent Here, $q_{\beta}(r)$ determined at $\beta=0.95$ (i.e., at a 95\% confidence level) varies between 0.71 and 0.81 depending on the value of the quantity $r=\Delta^{\!\rm syst}F^{\rm expt}(a)/s_{\bar{F}}(a)$, where $s_{\bar{F}}(a)$ is the variance of the mean measured quantity [44]. In fact there is no arbitrariness in the determination of the total experimental error which is the ultimate characteristic of the precision of the measurements.
The most valuable experiments are marked by a negligible role of the random error. For such experiments \begin{equation} \Delta^{\!\rm tot}F^{\rm expt}(a)\approx \Delta^{\!\rm syst}F^{\rm expt}(a). \label{eq28} \end{equation} \noindent To date there is only one indirect measurement of the Casimir pressure between Au-coated plates by means of a micromechanical torsional oscillator satisfying this condition [4,5]. The total experimental error in this measurement at the shortest separations is as small as 0.2\% of the measured Casimir pressure. We stress once again that this error is unrelated to much larger errors inherent to theoretical computations on the basis of the Lifshitz theory or to the measure of agreement between experiment and theory. This is just the resulting error with which the experimental data are taken. \begin{figure}[t] \vspace*{-11.5cm} \begin{center} \hspace*{-2.5cm} \includegraphics{figVM-3.ps} \end{center} \vspace*{-14.5cm} \caption{The total absolute experimental error of the Casimir pressure measurements [4,5] (the solid line), the random error (the long-dashed line), and the systematic error (the short-dashed line) are shown as functions of separation.} \end{figure} As an example, the total absolute experimental error in the experiment on measuring the Casimir pressure by means of a micromechanical torsional oscillator [4,5] is shown in Fig.~3 as a function of separation (the solid line). The long-dashed and short-dashed lines show the random and systematic errors, respectively. As a result, the relative total experimental error $\delta^{\rm tot}P^{\rm expt}(a)= \Delta^{\!\rm tot}P^{\rm expt}(a)/|P^{\rm expt}(a)|$ varies from 0.19\% at $a=162\,$nm to 0.9\% at $a=400\,$nm, and to 9.0\% at $a=746\,$nm. Sometimes the experimental precision can be questioned if there are some doubts about the calibration procedures used. For example, the electrostatic calibration is of prime importance in the independent measurements of the Casimir force.
Specifically, it is usually carefully verified that the residual potential between the grounded test bodies does not depend on the separation at which the measurements of the electric force are performed. Recently it was claimed that the residual potential $V_0$ from the electrostatic calibration in the sphere-plate configuration is separation dependent [45]. The authors used an Au-coated sphere of 30.9\,mm radius above an Au-coated plate. On the basis of these measurements a reanalysis of the independence of $V_0$ of separation in the earlier measurements of the Casimir force by means of an atomic force microscope and a micromachined oscillator was proposed. The results [45] are, however, not directly relevant to the earlier measurements. The point is that the radius of the sphere used in [45] is a factor of 300 larger than in the earlier precision measurements of the Casimir force. It is well known that for large test bodies (i.e., large interaction areas) there are large variations of electric forces due to deviations of the mechanically polished and ground lens surface from perfect spherical shape [46]. \section{Comparison between experiment and theory} Experiment is the supreme arbiter in physics. Because of this, the comparison between experiment and theory is a painful point for those theories that are found to be experimentally inconsistent. It happens that in such cases both the experimental data and the methods of comparison are questioned. The Casimir force is a strongly nonlinear function of the separation distance. As a consequence, such global characteristics of the agreement between experiment and theory as the root-mean-square deviation were found to be inadequate [47]. In the last few years two local methods for comparing experiment with theory in the Casimir force measurements were elaborated and successfully applied.
Within the first method [4,48,49], the experimental data are represented as crosses with arms determined by the total experimental errors in the measurement of separation and a related quantity (the force, the pressure or the frequency shift) determined at some chosen confidence level. In the same figure, one should plot the theoretical band whose width is equal to the total theoretical error determined at the same confidence as the experimental errors. The overlap (or its absence) of the experimental crosses and the theoretical band can be used to make a conclusion on the consistency or inconsistency between experiment and theory. \begin{figure}[t] \vspace*{-10.5cm} \begin{center} \hspace*{-2.5cm} \includegraphics{figVM-4.ps} \end{center} \vspace*{-16.8cm} \caption{The crosses show the measured mean Casimir pressures together with the absolute errors in the separation and pressure as a function of the separation. (a) The theoretical Casimir pressures computed using the generalized plasma-like model and the optical data extrapolated by the Drude model are shown by the light-gray and dark-gray bands, respectively. (b) The theoretical Casimir pressures computed using different sets of optical data available in the literature versus separation are shown as the dark-gray band.} \end{figure} In Fig.~4 the first method of comparison between experiment and theory is illustrated on the measurement data by Decca et al. [4,5] discussed in Sec.~3. The light-gray band in Fig.~4(a) shows the theoretical results computed using the Lifshitz theory combined with the generalized plasma-like dielectric permittivity [50,51] \begin{equation} \varepsilon_{gp}({\rm i}\xi)=\varepsilon({\rm i}\xi)+ \frac{\omega_p^2}{\xi^2}, \qquad \varepsilon({\rm i}\xi)=1+\sum_{j=1}^{K} \frac{f_j}{\omega_j^2+\xi^2+\gamma_j\xi}. 
\label{eq29} \end{equation} \noindent Here, $\omega_p$ is the plasma frequency, $\omega_j\neq 0$ are the frequencies of the oscillators describing core electrons, $f_j$ are the oscillator strengths and $\gamma_j$ are the relaxation parameters. The dark-gray band in Fig.~4(a) is computed by the same Lifshitz theory using the tabulated optical data for Au [52] extrapolated to low frequencies by means of the Drude model [53--55] \begin{equation} \varepsilon_{D}({\rm i}\xi)=1+ \frac{\omega_p^2}{\xi(\xi+\gamma)}= 1+\frac{4\pi\sigma({\rm i}\xi)}{\xi}. \label{eq30} \end{equation} \noindent Here $\sigma({\rm i}\xi)$ is the conductivity. It is connected with the dc conductivity by the equation \begin{equation} \sigma({\rm i}\xi)=\frac{\sigma(0)}{1+\frac{\xi}{\gamma}}. \label{eq30a} \end{equation} \noindent Note that the plasma frequency and the dc conductivity are expressed as [56] \begin{equation} \omega_p^2=\frac{4\pi e^2n}{m}, \qquad \sigma(0)=\mu\,|e|\,n, \label{eq30b} \end{equation} \noindent where $e$ and $m$ are the charge and the mass of an electron, $n$ is the charge carrier density and $\mu$ is their mobility. As is seen in Fig.~4(a), the experimental data shown as crosses (the experimental errors are determined at a 95\% confidence level) are consistent with the theoretical approach using the generalized plasma-like permittivity. The Drude model approach is excluded at a 95\% confidence level. In Fig.~4(b) the same experimental data are reproduced and compared with the Drude model approach using all sets of optical data available in the literature [57]. As is seen in Fig.~4(b), the use of optical data alternative to [52] deepens the disagreement between the experimental data and the Drude model approach. In Fig.~4(a,b) the comparison between experiment and theory is performed within the separation region from 500 to 600\,nm. However, exactly the same conclusions follow over the entire measurement range in this experiment from 160 to 750\,nm.
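The qualitative difference between (\ref{eq29}) and (\ref{eq30}) at low frequencies can be made explicit numerically. In the sketch below (Python, illustrative only; the Au parameters $\omega_p=9.0\,$eV and $\gamma=0.035\,$eV are commonly quoted literature values, and the core-electron contribution is replaced by unity for simplicity) the generalized plasma-like term diverges as $1/\xi^2$ at $\xi\to 0$, while the Drude term diverges only as $1/\xi$:

```python
import math

# Assumed Drude parameters for Au (illustrative literature values), in eV
wp, gamma = 9.0, 0.035

def eps_drude(xi):
    """Eq. (30) with the core-electron permittivity set to 1."""
    return 1.0 + wp**2 / (xi * (xi + gamma))

def eps_gp(xi):
    """Generalized plasma-like permittivity, Eq. (29), core electrons neglected."""
    return 1.0 + wp**2 / xi**2

# At xi << gamma the ratio of the two singular terms is (xi + gamma)/xi >> 1
xi = 1e-3   # eV, deep in the quasistatic region
ratio = (eps_gp(xi) - 1.0) / (eps_drude(xi) - 1.0)
```

For $\xi=10^{-3}\,$eV the ratio equals $(\xi+\gamma)/\xi=36$, illustrating how strongly the two extrapolations differ in the region that dominates the thermal contribution.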
In the second method for the comparison of experiment and theory in the Casimir force measurements [40,48,58], the differences between the theoretical and mean experimental quantity, for instance, $P^{\rm theor}(a)-\bar{P}^{\rm expt}(a)$, are plotted as dots. In the same figure the borders of the confidence intervals $[-\Xi_{P}(a),\Xi_{P}(a)]$ for this difference at a chosen confidence level (usually 95\%) are plotted as functions of separation. If no less than 95\% of the dots representing the above differences belong to the confidence interval, the theoretical approach is consistent with the data. Alternatively, if almost all the dots are outside the confidence interval, the theoretical approach is excluded by the data at a 95\% confidence level. \begin{figure}[b] \vspace*{-4.cm} \begin{center} \hspace*{-2.cm} \includegraphics{figVM-5.ps} \end{center} \vspace*{-21.cm} \caption{The differences of the theoretical and the mean experimental Casimir pressures between the two Au plates versus separation are shown as dots. The theoretical results are calculated using the Lifshitz theory at room temperature using (a) the generalized plasma-like model and (b) the Drude model approach. The solid lines indicate the boundaries of the 95\% confidence intervals. The dashed line indicates the boundary of the 99.9\% confidence interval. } \end{figure} In Fig.~5 we illustrate the second method for the comparison of experiment with theory using the experimental data of the same measurements [4,5]. In Fig.~5(a) the theoretical approach using the generalized plasma-like permittivity (\ref{eq29}) is compared with the data. It is seen that all the dots are inside the confidence interval. Thus, this approach is consistent with the data. In Fig.~5(b) the same data are compared with the theoretical approach using the tabulated optical data extrapolated by the Drude model. The solid and dashed lines represent the borders of the 95\% and 99.9\% confidence intervals, respectively.
As is seen in Fig.~5(b), the Drude model approach is experimentally excluded at a 95\% confidence level within the entire measurement range from 160 to 750\,nm. Within a narrower measurement range from 210 to 620\,nm the Drude model approach is excluded at a 99.9\% confidence level. If the theoretical approach is experimentally consistent [see Fig.~5(a)], the quantity $\Xi_{P}/|\bar{P}^{\rm expt}|$, determined at a 95\% confidence level, can be used as the quantitative measure of agreement between experiment and theory. Thus, at $a=162\,$nm this measure is equal to 1.9\%. It decreases to 1.4\% at $a=300\,$nm and then gradually increases up to 9.7\% at $a=745\,$nm. It is evident that at the shortest separation the agreement between experiment and theory is almost an order of magnitude worse than the total experimental error equal to only 0.19\%. This is explained by large theoretical errors which dominate in the determination of $\Xi_{P}$ at the shortest separations. The above explanations aim to make absolutely clear that the calculation of errors and the comparison between experiment and theory are not arbitrary, but rigorously determined procedures. Recently, the measurement data of the experiment [4,5] were independently reanalyzed in [59] with the conclusion: ``The data rule out the Drude approach$\ldots\,$, while they are consistent with the plasma-model approach$\ldots$'' \section{Attempt to account for screening effects} Recently, the above discussed problems of the Drude model approach in application to real metals, and related problems arising for dielectric and semiconductor materials [60--64], motivated an attempt to modify the reflection coefficients in the Lifshitz formula by including the screening effects and diffusion currents [65,66].
The modified reflection coefficients for the transverse magnetic and transverse electric modes were obtained through the use of the Boltzmann transport equation, which takes into account not only the standard drift current $\mbox{\boldmath$j$}$, but also the diffusion current $eD\nabla{n}$, where $D$ is the diffusion coefficient and $\nabla{n}$ is the gradient of the charge carrier density [66]. The transverse magnetic coefficient takes the form \begin{equation} \tilde{r}_{\rm TM}({\rm i}\xi,k_{\bot})= \frac{\tilde\varepsilon({\rm i}\xi)q-k-\frac{k_{\bot}^2}{\eta({\rm i}\xi)}\, \frac{\tilde\varepsilon({\rm i}\xi)- \varepsilon({\rm i}\xi)}{\varepsilon({\rm i}\xi)}}{\tilde\varepsilon ({\rm i}\xi)q +k+\frac{k_{\bot}^2}{\eta({\rm i}\xi)}\, \frac{\tilde\varepsilon({\rm i}\xi)- \varepsilon({\rm i}\xi)}{\varepsilon({\rm i}\xi)}}, \label{eq31} \end{equation} \noindent where $k_{\bot}$ is the projection of the wave vector onto the plane of the plates, $\omega={\rm i}\xi$ is the imaginary frequency, and the following notations are introduced \begin{eqnarray} && q^2=k_{\bot}^2+\frac{\xi^2}{c^2}, \qquad k^2=k_{\bot}^2+\tilde\varepsilon({\rm i}\xi)\frac{\xi^2}{c^2}, \qquad \tilde\varepsilon({\rm i}\xi)=\varepsilon({\rm i}\xi)+ \frac{\omega_p^2}{\xi(\xi+\gamma)}, \nonumber \\ && \eta({\rm i}\xi)=\left[k_{\bot}^2+\kappa^2 \frac{\varepsilon(0)}{\varepsilon({\rm i}\xi)}\, \frac{\tilde\varepsilon({\rm i}\xi)}{\tilde\varepsilon({\rm i}\xi)- \varepsilon({\rm i}\xi)}\right]^{1/2}. \label{eq32} \end{eqnarray} \noindent In this equation, $1/\kappa$ is the screening length and the dielectric permittivity of core electrons $\varepsilon({\rm i}\xi)$ is defined in (\ref{eq29}). The transverse electric coefficient is given by the standard expression \begin{equation} \tilde{r}_{\rm TE}({\rm i}\xi,k_{\bot})= \frac{q-k}{q+k}, \label{eq33} \end{equation} \noindent as is used in the Drude model approach. The paper [66] claims that the above approach is applicable to intrinsic semiconductors only. 
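As a minimal numerical sketch of the modified coefficient (\ref{eq31})--(\ref{eq32}), the function below evaluates $\tilde{r}_{\rm TM}$ for illustrative dimensionless inputs with $c=1$; all parameter values are assumptions chosen for the sketch. It also checks that for a very short screening length ($\kappa\to\infty$) the correction term vanishes and the standard TM coefficient is recovered:

```python
import math

def r_tm_modified(xi, k_perp, eps, eps_tilde, eps0, kappa):
    """Modified TM reflection coefficient of Eqs. (31)-(32), units with c = 1.

    eps       : core-electron permittivity  epsilon(i xi)
    eps_tilde : full permittivity           epsilon_tilde(i xi)
    eps0      : static permittivity         epsilon(0)
    kappa     : inverse screening length
    """
    q = math.sqrt(k_perp**2 + xi**2)
    k = math.sqrt(k_perp**2 + eps_tilde * xi**2)
    eta = math.sqrt(k_perp**2
                    + kappa**2 * (eps0 / eps) * eps_tilde / (eps_tilde - eps))
    corr = (k_perp**2 / eta) * (eps_tilde - eps) / eps
    return (eps_tilde * q - k - corr) / (eps_tilde * q + k + corr)

def r_tm_standard(xi, k_perp, eps_tilde):
    """Standard TM coefficient computed with the permittivity eps_tilde."""
    q = math.sqrt(k_perp**2 + xi**2)
    k = math.sqrt(k_perp**2 + eps_tilde * xi**2)
    return (eps_tilde * q - k) / (eps_tilde * q + k)

# For kappa -> infinity (vanishing screening length) the correction term
# disappears and the standard coefficient is recovered:
print(r_tm_modified(1.0, 1.0, 2.0, 10.0, 5.0, 1e8))
print(r_tm_standard(1.0, 1.0, 10.0))
```

The two printed values agree to high accuracy, illustrating that the modification matters only on the scale of the screening length.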
It uses a specific Debye-H\"{u}ckel expression for the screening length \begin{equation} \frac{1}{\kappa}=\frac{1}{\kappa_{\rm DH}}=R_{\rm DH}= \sqrt{\frac{\varepsilon(0) k_BT}{4\pi e^2n}}. \label{eq34} \end{equation} \noindent This expression is applicable to particles obeying Maxwell-Boltzmann statistics. It is obtained from the general representation for the screening length [67] \begin{equation} \frac{1}{\kappa}=R= \sqrt{\frac{\varepsilon(0) D}{4\pi \sigma(0)}} \label{eq35} \end{equation} \noindent if one uses the expression (\ref{eq30b}) for the dc conductivity and Einstein's relation [56,67] \begin{equation} \frac{D}{\mu}=\frac{k_BT}{|e|} \label{eq36} \end{equation} \noindent valid in the case of Maxwell-Boltzmann statistics. In the limiting case $\xi\to 0$ the reflection coefficient (\ref{eq31}) coincides with that obtained in [65]. However, the application region of the reflection coefficients (\ref{eq31}), (\ref{eq32}) with the Debye-H\"{u}ckel screening length (\ref{eq34}) cannot be restricted to intrinsic semiconductors only. These coefficients should be applicable to all materials where the density of charge carriers is not too large, so that the carriers are described by Maxwell-Boltzmann statistics. This means that in the framework of the proposed approach it is legitimate to apply (\ref{eq31})--(\ref{eq34}) to doped semiconductors with dopant concentration below the critical value, to solids with ionic conductivity, etc. Here, we consider the application of this approach to metallic plates. Metals and semiconductors of metallic type are characterized by a rather high concentration of charge carriers, which obey the quantum Fermi-Dirac statistics. The general transport equation, however, is equally applicable to classical and quantum systems. The only difference one should take into account is the type of statistics. 
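Equation (\ref{eq34}) is straightforward to evaluate. The sketch below computes $R_{\rm DH}$ in Gaussian (CGS) units for an illustrative semiconductor-like case, $\varepsilon(0)=11.7$, $n=10^{16}\,{\rm cm}^{-3}$, $T=300\,$K; these numbers are assumptions for the sketch, not values from the text:

```python
import math

# Gaussian (CGS) constants
k_B = 1.380649e-16        # erg/K
e = 4.80320425e-10        # statcoulomb

def debye_hueckel_length(eps0, T, n):
    """R_DH = sqrt(eps(0) k_B T / (4 pi e^2 n)); n in cm^-3, result in cm."""
    return math.sqrt(eps0 * k_B * T / (4.0 * math.pi * e**2 * n))

R = debye_hueckel_length(11.7, 300.0, 1.0e16)
print(R * 1.0e7, "nm")   # on the order of tens of nanometers
```

For these inputs the screening length comes out near 40\,nm, a familiar order of magnitude for moderately doped semiconductors at room temperature.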
Substituting Einstein's relation, valid in the case of Fermi-Dirac statistics [56,67], \begin{equation} \frac{D}{\mu}=\frac{2E_F}{3|e|}, \label{eq37} \end{equation} \noindent where $E_F=\hbar\omega_p$ is the Fermi energy, into (\ref{eq35}), one arrives at the following expression for the Thomas-Fermi screening length [67] \begin{equation} \frac{1}{\kappa}=\frac{1}{\kappa_{\rm TF}}=R_{\rm TF}= \sqrt{\frac{\varepsilon(0) E_F}{6\pi e^2n}}. \label{eq38} \end{equation} \noindent With this definition of the parameter $\kappa$, it is legitimate to apply equations (\ref{eq31})--(\ref{eq33}) to metals. Now we consider two thick metallic plates separated by a distance $a$ at temperature $T$ in thermal equilibrium. Under these conditions the Casimir free energy per unit area of the plates is given by the Lifshitz formula [68]. Let us assume that the reflection coefficients (\ref{eq31})--(\ref{eq33}), (\ref{eq38}) can be substituted into this formula. Then in terms of the dimensionless variables $y=2aq$, $\zeta=\xi/\omega_c\equiv 2a\xi/c$ one obtains \begin{equation} \tilde{\cal F}(a,T)=\frac{k_BT}{8\pi a^2} \sum_{l=0}^{\infty}{\vphantom{\sum}}^{\prime} \int_{\zeta_l}^{\infty}\!\!\!y\,dy\left\{\ln\left[1- \tilde{r}_{\rm TM}^2({\rm i}\zeta_l,y)\,{\rm e}^{-y}\right]+ \ln\left[1- \tilde{r}_{\rm TE}^2({\rm i}\zeta_l,y)\,{\rm e}^{-y}\right]\right\}, \label{eq39} \end{equation} \noindent where $\zeta_l=4\pi ak_BTl/(\hbar c)$ are the dimensionless Matsubara frequencies and a prime on the summation sign indicates that the term with $l=0$ is taken with the weight 1/2. 
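For orientation, the dimensionless Matsubara frequencies $\zeta_l$ entering (\ref{eq39}) are easy to evaluate directly; the separation $a=200\,$nm used below is an illustrative choice, not a value from the text:

```python
import math

# zeta_l = 4 pi a k_B T l / (hbar c), SI constants:
k_B = 1.380649e-23        # J/K
hbar = 1.054571817e-34    # J s
c = 2.99792458e8          # m/s

def zeta(l, a, T):
    """Dimensionless Matsubara frequency entering the Lifshitz sum (39)."""
    return 4.0 * math.pi * a * k_B * T * l / (hbar * c)

a, T = 200e-9, 300.0      # illustrative separation and room temperature
print([round(zeta(l, a, T), 3) for l in range(4)])
```

At room temperature and $a=200\,$nm the first nonzero frequency is $\zeta_1\approx 0.33$, so several Matsubara terms contribute appreciably before the exponential cutoff of the integrand sets in.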
In terms of the dimensionless variables the reflection coefficient (\ref{eq31}) takes the form \begin{equation} \tilde{r}_{\rm TM}({\rm i}\zeta,y)=\frac{\tilde\varepsilon y -\bigl[y^2+(\tilde\varepsilon-1)\zeta^2\bigr]^{1/2}- \frac{(y^2-\zeta^2)(\tilde\varepsilon- \varepsilon)}{\tilde\eta\, \varepsilon}}{\tilde\varepsilon y +\bigl[y^2+(\tilde\varepsilon-1)\zeta^2\bigr]^{1/2}+ \frac{(y^2-\zeta^2)(\tilde\varepsilon- \varepsilon)}{\tilde\eta\, \varepsilon}}, \label{eq40} \end{equation} \noindent where \begin{equation} \tilde\eta=2a\eta=\left[y^2-\zeta^2+\kappa_a^2\frac{\varepsilon(0) \tilde\varepsilon}{\varepsilon (\tilde\varepsilon-\varepsilon)} \right]^{1/2}, \qquad \kappa_a\equiv 2a\kappa_{\rm TF}. \label{eq41} \end{equation} \noindent Note that all dielectric permittivities here are functions of ${\rm i}\omega_c\zeta$. Below we do not use the explicit expression for the reflection coefficient $\tilde{r}_{\rm TE}({\rm i}\zeta,y)$ because it coincides with the standard one, as defined in the Drude model approach, and considered in detail in [69]. Let us determine the behavior of the Casimir free energy (\ref{eq39}) at low temperature. For all metals the screening length (\ref{eq38}) is very small. As a result, at any reasonable separation distance between the plates, the dimensionless parameter $\kappa_a$ defined in (\ref{eq41}) is very large and the inverse quantity can be used as a small parameter \begin{equation} 2a\kappa_{\rm TF}=\kappa_a\gg 1, \qquad \beta_a\equiv\frac{1}{\kappa_a}\ll 1. 
\label{eq43} \end{equation} \noindent Expanding the reflection coefficient (\ref{eq40}) up to the first power of the parameter $\beta_a$ one obtains \begin{eqnarray} && \tilde{r}_{\rm TM}({\rm i}\zeta,y)={r}_{\rm TM}({\rm i}\zeta,y)- 2\beta_a\,Z+O(\beta_a^2), \label{eq44} \\ && Z\equiv\sqrt{\frac{\tilde\varepsilon (\tilde\varepsilon-\varepsilon)^3}{\varepsilon(0)\varepsilon}} \,\frac{y(y^2-\zeta^2)}{[\tilde\varepsilon y+\sqrt{y^2+ (\tilde\varepsilon -1)\zeta^2}]^2}, \nonumber \end{eqnarray} \noindent where ${r}_{\rm TM}({\rm i}\zeta,y)$ is the standard TM reflection coefficient calculated with the dielectric permittivity $\tilde\varepsilon({\rm i}\omega_c\zeta)$ [it is given by (\ref{eq40}) with the third term in both numerator and denominator omitted]. {}From (\ref{eq44}) one arrives at \begin{equation} \ln\left[1-\tilde{r}_{\rm TM}^2({\rm i}\zeta,y)\,{\rm e}^{-y}\right]= \ln\left[1-{r}_{\rm TM}^2({\rm i}\zeta,y)\,{\rm e}^{-y}\right]+ 4\beta_a\frac{{r}_{\rm TM}({\rm i}\zeta,y)\,Z}{{\rm e}^{y}- {r}_{\rm TM}^2({\rm i}\zeta,y)}+O(\beta_a^2). \label{eq45} \end{equation} Now we substitute (\ref{eq45}) and the respective known expression for the TE contribution [69] into (\ref{eq39}). Calculating the sum with the help of the Abel-Plana formula, we obtain in perfect analogy to [69] \begin{equation} \tilde{\cal F}(a,T)={\cal F}_{gp}(a,T)-\frac{k_BT}{16\pi a^2} \int_{0}^{\infty}\!\!\!y\,dy\,\ln\left[1-r_{{\rm TE},gp}^2(0,y)\, {\rm e}^{-y}\right]+{\cal F}^{(\gamma)}(a,T)+ \beta_a{\cal F}^{(\beta)}(a,T), \label{eq46} \end{equation} \noindent where ${\cal F}^{(\gamma)}(a,T)$ is determined by equation (17) in [69]. It goes to zero together with its derivative with respect to temperature when $T\to 0$. The quantity ${\cal F}^{(\beta)}(a,T)$ originates from the second contribution on the right-hand side of (\ref{eq44}). It is easily seen that ${\cal F}^{(\beta)}(a,T)=E^{(\beta)}(a)+O(T^3/T_{\rm eff}^3)$ at low $T$. 
The Casimir free energy ${\cal F}_{gp}(a,T)$ is defined by substituting the dielectric permittivity (\ref{eq29}) of the generalized plasma-like model into the Lifshitz formula. It was found in [50,51] and the respective thermal correction was shown to be of order $(T/T_{\rm eff})^3$ when $T\to 0$. The TE reflection coefficient at zero frequency entering (\ref{eq46}) is given by \begin{equation} r_{{\rm TE},gp}(0,y)=\frac{cy-\sqrt{4a^2\omega_p^2+c^2y^2}}{cy +\sqrt{4a^2\omega_p^2+c^2y^2}}. \label{eq47} \end{equation} As a result, calculating the Casimir entropy \begin{equation} \tilde{S}(a,T)=-\frac{\partial\tilde{\cal F}(a,T)}{\partial T} \label{eq48} \end{equation} \noindent with the use of (\ref{eq46}) and considering the limiting case of zero temperature, one arrives at \begin{equation} \tilde{S}(a,0)=\frac{k_B}{16\pi a^2}\int_{0}^{\infty}\!\!\!y\,dy\,\ln \left[1-\left(\frac{cy-\sqrt{4a^2\omega_p^2+c^2y^2}}{cy +\sqrt{4a^2\omega_p^2+c^2y^2}}\right)^2\,{\rm e}^{-y}\right]<0 \label{eq49} \end{equation} \noindent in violation of the Nernst heat theorem. This result is obtained for metals with perfect crystal lattices. In the presence of impurities the Casimir entropy abruptly jumps to zero at $T<10^{-3}\,$K [70]. Thus, the modified reflection coefficients taking the screening effects into account lead to a violation of the Nernst heat theorem for metals with perfect crystal lattices in the same way as the standard Drude model approach. Because of this, the theoretical approach using such reflection coefficients is thermodynamically inconsistent. Now we briefly compare the theoretical predictions, following from the use of reflection coefficients $\tilde{r}_{\rm TM}$ and $\tilde{r}_{\rm TE}$, with the measurement data of the most precise experiment by means of micromachined torsional oscillator [4,5]. This experiment was already discussed in Sec.~5. 
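The sign of the entropy (\ref{eq49}) can be confirmed numerically. The sketch below evaluates the integral for an illustrative value of the single dimensionless parameter $w=2a\omega_p/c$ (an assumption for the sketch); since $0<r_{{\rm TE},gp}^2(0,y)\,{\rm e}^{-y}<1$, the logarithm is negative everywhere, and so is the integral:

```python
import numpy as np

# Single dimensionless parameter of Eq. (49): w = 2 a omega_p / c.
w = 10.0                                  # illustrative value
y = np.linspace(1e-6, 60.0, 200001)

# TE reflection coefficient at zero frequency, Eq. (47), in these variables.
r = (y - np.sqrt(w**2 + y**2)) / (y + np.sqrt(w**2 + y**2))

# Integrand of Eq. (49): y * ln[1 - r^2 e^{-y}], negative for all y > 0.
integrand = y * np.log(1.0 - r**2 * np.exp(-y))

# Trapezoidal quadrature on the uniform grid.
integral = float(np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(y)))
print(integral)
```

The result is strictly negative for any $w>0$, which is the numerical counterpart of the statement $\tilde{S}(a,0)<0$ and hence of the violation of the Nernst heat theorem.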
In Fig.~6(a) the experimental data for the Casimir pressure between two Au plates are shown as crosses with the absolute errors determined at a 95\% confidence level. \begin{figure}[b] \vspace*{-4.cm} \begin{center} \hspace*{-2.cm} \includegraphics{figVM-6.ps} \end{center} \vspace*{-21.cm} \caption{(a) The crosses show the measured mean Casimir pressures together with the absolute errors as a function of the separation. The theoretical Casimir pressures computed using the generalized plasma-like model and the approach including the screening effects are shown as solid and dashed lines, respectively. (b) Differences of the theoretical Casimir pressures computed with inclusion of the screening effects and the mean experimental Casimir pressures versus separation are shown as dots. The 95\% and 99.9\% confidence intervals are shown as the solid and dashed lines, respectively.} \end{figure} The solid line presents the computational results for $P(a,T)=-\partial{\cal F}(a,T)/\partial a$ using the Lifshitz formula and the generalized plasma-like dielectric permittivity (\ref{eq29}). The parameters of oscillators for Au were determined in [5] with high precision. The dashed line was computed using the Lifshitz formula for $P^{\rm mod}(a,T)=-\partial\tilde{\cal F}(a,T)/\partial a$ with the reflection coefficients $\tilde{r}_{\rm TM,TE}$ taking the screening effects into account. As is seen in Fig.~6(a), the theoretical approach taking into account the Thomas-Fermi screening length is experimentally excluded at a 95\% confidence level over the separation region from 500 to 600\,nm. The same conclusion follows within the entire measurement range from 160 to 750\,nm. Fig.~6(a) illustrates the first method for the comparison between experiment and theory in Casimir force measurements discussed in Sec.~5. In Fig.~6(b) the second method for the comparison of experiment and theory is illustrated. 
Here, the differences between the theoretical Casimir pressures computed with inclusion of the screening effects and the mean experimental pressures are shown as dots. The solid line indicates the borders of 95\% confidence intervals. Dots are outside the confidence interval $[-\Xi_{P}(a),\Xi_{P}(a)]$ over the entire measurement range from 160 to 750\,nm. In the same figure, the dashed line shows the borders of 99.9\% confidence intervals. As is seen in Fig.~6(b), dots are outside of this confidence interval within the separation region from 160 to 640\,nm. Thus, within this region of separations the theoretical approach taking the screening effects into account [66] is experimentally excluded at a 99.9\% confidence level. The physical reasons why the inclusion of the screening effects into the Lifshitz theory is thermodynamically and experimentally inconsistent can be understood as follows. The Lifshitz theory is formulated for systems in thermal equilibrium. As was indicated in [71], the drift current of conduction electrons leads to heating of the crystal lattice. In this case, if the constant temperature is preserved, there must be a unidirectional flux of heat from the Casimir plates to the heat reservoir. The existence of such an interaction between a system and a heat reservoir is strictly prohibited in a state of thermal equilibrium [72] and is in contradiction with its definition [73]. According to this definition, in thermal equilibrium all irreversible processes connected with the dissipation of energy are terminated. Specifically, in thermal equilibrium any nonzero gradients of charge carrier density and any diffusion are impossible. Thus, the inclusion of the screening effects and diffusion currents into the Lifshitz theory is in violation of its applicability conditions. 
\section{Conclusions and discussion} In the above, we have discussed several problems at the interface between the field-theoretical description of the Casimir effect and experiments on measuring the Casimir force. The consideration of the Casimir energies and forces in ideal metal rectangular boxes leads to the conclusion that, even when using ideal models, it is important to take into account some general physical requirements. Thus, it is not productive to use a free energy which leads to Casimir forces of quantum nature that increase with increasing size of the box. It also seems thermodynamically inconsistent to claim that the Casimir force acting on a piston is a well defined quantity, whereas the forces acting on all other faces of the box are excluded from consideration. The reason is that if the free energy is defined correctly (see Sec.~2), there is a uniquely defined pressure on all faces of the box equal to the negative derivative of the free energy with respect to the box volume calculated at constant temperature. An important tool for the comparison of experiment with theory is the proximity force approximation. In Sec.~3 we have discussed some inexact formulations which can be found in theoretical publications on this subject. We have also discussed recent achievements in the quantum-field-theoretical approach to the calculation of Casimir energies in terms of functional determinants and scattering matrices. This scientific direction has already yielded the first analytical results beyond the PFA. It holds great promise for many experimentally relevant applications of the theory. In Secs.~4 and 5 we tried to add clarity to the widely discussed problems of the precision of experiments on the Casimir effect and the agreement between experiment and theory. 
It was stressed that the precision of some independent measurements can be much higher than that of the respective theoretical computations, which use values of parameters that may not be known precisely enough. In such cases the agreement of experiment with theory may also be not as good as the precision of the measurements. Finally, in Sec.~6 we have analyzed a recent theoretical approach to the thermal Casimir force taking into account the screening effects and diffusion currents. Using quantum Fermi-Dirac statistics and the respective Thomas-Fermi screening length, we have applied this approach to calculate the Casimir free energy between two metallic plates. It was shown that the obtained free energy results in a violation of the Nernst heat theorem for metals with perfect crystal lattices. Thus, the approach under consideration is inconsistent with thermodynamics. The calculational results for the Casimir pressure in the configuration of two Au plates were compared with the results of the most precise experiment performed using a micromachined oscillator. It was shown that the theoretical predictions following from the inclusion of the screening effects are rejected by the experimental data at a 99.9\% confidence level. The reason for the failure of this approach is the inclusion of irreversible diffusion processes violating thermal equilibrium, which is the basic applicability condition of the Lifshitz theory. Phenomenologically, the Lifshitz theory combined with the generalized plasma-like dielectric permittivity provides a description of dispersion forces between metallic test bodies which is in agreement with thermodynamics and consistent with all available experimental information. At present there is no other theoretical approach to the thermal Casimir force between metals which would satisfy the requirements of thermodynamics and be simultaneously consistent with all measurement data. 
\ack{This work was supported by Deutsche Forschungsgemeinschaft, Grant No 436 RUS 113/789/0--4. The author is grateful to the Center of Theoretical Studies and Institute of Theoretical Physics, Leipzig University where this work was performed for kind hospitality.} \medskip
\section{Introduction} In the standard model (SM), the purely leptonic $B$ meson decays \ensuremath{\Bp \to \ellp \nul}\xspace~\cite{charge} proceed at lowest order through the annihilation diagram shown in Fig.~\ref{fig:diagram}. The SM branching fraction can be calculated as~\cite{Silverman:1988gc} \begin{equation} {\ensuremath{\cal B}\xspace}(B^+ \rightarrow \ell^+ \nu_{\ell}) = \frac{G_{F}^{2} m_{B} m_{\ell}^{2}} {8\pi} \biggl( 1- \frac{m_{\ell}^{2}}{m_{\ensuremath{B}\xspace}^{2}} \biggr)^{2} f_{B}^{2} |V_{ub}|^{2} \tau_{\ensuremath{B}\xspace}, \end{equation} where $G_F$ is the Fermi coupling constant, $m_{\ell}$ and $m_B$ are respectively the lepton and $B$ meson masses, and $\tau_B$ is the $B^+$ lifetime. The decay rate is sensitive to the CKM matrix element $|V_{ub}|$~\cite{Cabibbo:1963yz} and the $B$ decay constant $f_{\ensuremath{B}\xspace}$ that describes the overlap of the quark wave functions within the meson. \begin{figure}[!htb] \begin{center} \includegraphics[width=4.3 cm]{decay.eps} \end{center} \caption{\label{fig:diagram} Lowest order SM Feynman diagram for the purely leptonic decay $B^+ \ensuremath{\rightarrow}\xspace l^{+} \nu_{l}$.} \end{figure} The SM estimate of the branching fraction for \ensuremath{\Bp \to \taup \nut}\xspace is $(1.59 \pm 0.40)\times 10^{-4}$ assuming $\tau_B$ = 1.638$ \pm $0.011 ps~\cite{PDG}, $|V_{ub}|$ = (4.39$ \pm $0.33)$\times 10^{-3}$ determined from inclusive charmless semileptonic $B$ decays~\cite{Barberio:2006bi} and $f_B$ = 216$ \pm $22 MeV from a lattice QCD calculation~\cite{Gray:2005ad}. 
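The SM estimate quoted above follows directly from Eq.~(1). A short numerical check (natural units, with $\hbar$ converting the lifetime to ${\rm GeV}^{-1}$; input values as listed in the text, with the lepton masses taken at their PDG values) reproduces the $1.59\times 10^{-4}$ figure:

```python
import math

# Inputs of Eq. (1) in GeV-based natural units.
G_F   = 1.16637e-5        # GeV^-2
hbar  = 6.58212e-25       # GeV s (converts the lifetime to GeV^-1)
m_B   = 5.279             # GeV
m_tau = 1.77686           # GeV
f_B   = 0.216             # GeV
V_ub  = 4.39e-3
tau_B = 1.638e-12         # s

def br_leptonic(m_l):
    """SM branching fraction of B+ -> l+ nu from Eq. (1)."""
    return (G_F**2 * m_B * m_l**2 / (8.0 * math.pi)
            * (1.0 - m_l**2 / m_B**2)**2
            * f_B**2 * V_ub**2 * tau_B / hbar)

print(br_leptonic(m_tau))   # close to the quoted 1.59e-4
```

The central value comes out at about $1.6\times 10^{-4}$, in agreement with the estimate in the text.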
To a very good approximation, helicity is conserved in \ensuremath{\Bp \to \mup \num}\xspace and \ensuremath{\Bp \to \ep \nue}\xspace decays, which are therefore suppressed by factors $m_{\mu,e}^2/m_{\tau}^2$ with respect to \ensuremath{\Bp \to \taup \nut}\xspace, leading to expected branching fractions of ${\ensuremath{\cal B}\xspace}(\ensuremath{\Bp \to \mup \num}\xspace) = (5.6 \pm 0.4) \times 10^{-7}$ and ${\ensuremath{\cal B}\xspace}(\ensuremath{\Bp \to \ep \nue}\xspace) = (1.3 \pm 0.4) \times 10^{-11}$. However, reconstruction of \ensuremath{\Bp \to \taup \nut}\xspace decays is experimentally more challenging than \ensuremath{\Bp \to \mup \num}\xspace or \ensuremath{\Bp \to \ep \nue}\xspace due to the large missing momentum from multiple neutrinos in the final state. Purely leptonic $B$ decays are sensitive to physics beyond the SM, where additional heavy virtual particles contribute to the annihilation processes. Charged Higgs boson effects may greatly enhance or suppress the branching fraction in some two-Higgs-doublet models~\cite{Hou:1992sy}. Similarly, there may be enhancements through mediation by leptoquarks in the Pati-Salam model of quark-lepton unification~\cite{Valencia:1994cj}. Direct tests of Yukawa interactions in and beyond the SM are possible in the study of these decays, as annihilation processes proceed through the longitudinal component of the intermediate vector boson. In particular, in a SUSY scenario at large $\tan \beta$, non-standard effects in helicity-suppressed charged current interactions are potentially observable, being strongly $\tan \beta$-dependent and leading to~\cite{Hou:1992sy}: \begin{eqnarray} \frac{{\ensuremath{\cal B}\xspace}(B^{+} \rightarrow l^{+} \nu_{l})_{\rm{exp}}}{{\ensuremath{\cal B}\xspace}(B^{+} \rightarrow l^{+} \nu_{l})_{\rm{SM}} } \approx ( 1- \tan^2 \beta \frac{m_B^2}{M_H^2} )^2 . 
\end{eqnarray} Evidence for the first purely leptonic \ensuremath{B}\xspace decays has recently been presented by both the \mbox{\sl B\hspace{-0.4em} {\small\sl A}\hspace{-0.37em} \sl B\hspace{-0.4em} {\small\sl A\hspace{-0.02em}R}}\ and Belle collaborations. The latest HFAG world average of the \mbox{\sl B\hspace{-0.4em} {\small\sl A}\hspace{-0.37em} \sl B\hspace{-0.4em} {\small\sl A\hspace{-0.02em}R}}\ \cite{Aubert:2007xj} and Belle~\cite{Adachi:2008ch} results is ${\ensuremath{\cal B}\xspace}(\ensuremath{\Bp \to \taup \nut}\xspace) = (1.51 \pm 0.33)\times 10^{-4}$~\cite{HFAG}. The current best published upper limits on \ensuremath{\Bp \to \mup \num}\xspace and \ensuremath{\Bp \to \ep \nue}\xspace are ${\ensuremath{\cal B}\xspace}(\ensuremath{\Bp \to \mup \num}\xspace) < 1.7 \times 10^{-6}$ and ${\ensuremath{\cal B}\xspace}(\ensuremath{\Bp \to \ep \nue}\xspace) < 9.8 \times 10^{-7}$ at 90$\%$ confidence level, obtained by Belle using a data sample of 235 \ensuremath{\mbox{\,fb}^{-1}}\xspace~\cite{Satoyama:2006xn}. The analysis described herein is based on the entire dataset collected with the \mbox{\sl B\hspace{-0.4em} {\small\sl A}\hspace{-0.37em} \sl B\hspace{-0.4em} {\small\sl A\hspace{-0.02em}R}}\ detector~\cite{babar} at the PEP-II storage ring at the \Y4S resonance (``on-resonance''), which consists of 468 million \BB pairs, corresponding to an integrated luminosity of 426 \ensuremath{\mbox{\,fb}^{-1}}\xspace. In order to study background from continuum events such as $e^+ e^- \ensuremath{\rightarrow}\xspace q \bar{q}$ ($q=u,d,s,c$) and $e^+ e^- \ensuremath{\rightarrow}\xspace \tau^+ \tau^-$, an additional sample of about 41 \ensuremath{\mbox{\,fb}^{-1}}\xspace was collected at a center-of-mass (c.m.) energy about 40 \ensuremath{\mathrm{\,Me\kern -0.1em V}}\xspace below the \Y4S resonance (``off-resonance''). 
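Both the helicity suppression and the charged-Higgs modification factor discussed above are one-line computations. In the sketch below the common factors of Eq.~(1) cancel in the branching-fraction ratio, leaving only the lepton-mass dependence (a suppression of order $10^{-3}$, consistent with the quoted expectations); the $\tan\beta$ and $M_H$ values are purely hypothetical illustrations:

```python
import math

m_B, m_mu, m_tau = 5.279, 0.105658, 1.77686   # GeV

def phase(m_l):
    """Lepton-mass factor of Eq. (1); everything else cancels in ratios."""
    return m_l**2 * (1.0 - m_l**2 / m_B**2)**2

# Helicity-suppression ratio B(B -> mu nu) / B(B -> tau nu):
suppression = phase(m_mu) / phase(m_tau)
print(suppression)          # a few times 1e-3

# Charged-Higgs factor (1 - tan^2(beta) m_B^2 / M_H^2)^2 for hypothetical
# parameters tan(beta) = 30 and M_H = 300 GeV:
tanb, M_H = 30.0, 300.0
higgs_factor = (1.0 - tanb**2 * m_B**2 / M_H**2)**2
print(higgs_factor)
```

For these hypothetical SUSY parameters the branching fraction would be suppressed roughly by half, illustrating the strong $\tan\beta$ dependence of the effect.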
In the \mbox{\sl B\hspace{-0.4em} {\small\sl A}\hspace{-0.37em} \sl B\hspace{-0.4em} {\small\sl A\hspace{-0.02em}R}}\ detector, charged particle trajectories are measured with a 5-layer double-sided silicon vertex tracker and a 40-layer drift chamber, which are contained in the 1.5 T magnetic field of a superconducting solenoid. A detector of internally reflected Cherenkov radiation provides identification of charged kaons and pions. The energies and trajectories of neutral particles are measured by an electromagnetic calorimeter consisting of 6580 CsI(Tl) crystals. The flux return of the solenoid is instrumented with resistive plate chambers and, more recently, limited streamer tubes~\cite{Benelli:2006pa}, in order to provide muon identification. A {\tt GEANT}4-based~\cite{geant4} Monte Carlo (MC) simulation of generic $B\bar{B}$, $q\bar{q},\,q=u,d,s,c$, and $\tau^+\tau^-$ events as well as $B^+\ensuremath{\rightarrow}\xspace\mu^+\nu_\mu$ and $B^+\ensuremath{\rightarrow}\xspace e^+\nu_e$ signal events is used to model the detector response and test the analysis technique. The \ensuremath{\Bp \to \ellp \nul}\xspace decay produces a mono-energetic charged lepton in the \ensuremath{B}\xspace rest frame with a momentum $p^{*} \approx m_B/2$. The \ensuremath{B}\xspace mesons produced in \Y4S decays have a c.m. momentum of about 320 \ensuremath{{\mathrm{\,Me\kern -0.1em V\!/}c}}\xspace, so we initially select lepton candidates with c.m. momentum 2.4 $< p_{\rm{c.m.}} <$ 3.2 \ensuremath{{\mathrm{\,Ge\kern -0.1em V\!/}c}}\xspace, to take into account the smearing due to the motion of the $B$. A tight particle identification requirement is applied to the candidate lepton in order to discard fake muons or electrons. Since the neutrino produced in the signal decay is not detected, all charged tracks besides the signal lepton and all neutral energy deposits in the calorimeter are combined to reconstruct the companion (tag) \ensuremath{B}\xspace. 
We include all neutral calorimeter clusters with cluster energy greater than 30 \ensuremath{\mathrm{\,Me\kern -0.1em V}}\xspace. Particle identification is applied to the charged tracks to identify electrons, muons, pions, kaons and protons, in order to assign the most likely mass hypothesis to each \ensuremath{\B_{\rm{tag}}}\xspace daughter and thus improve the reconstruction of the \ensuremath{\B_{\rm{tag}}}\xspace. Events which have additional lepton candidates are discarded. These typically arise from semileptonic \ensuremath{\B_{\rm{tag}}}\xspace or charm decays and indicate the presence of additional neutrinos, for which the inclusive \ensuremath{\B_{\rm{tag}}}\xspace reconstruction is not expected to work well. The signal lepton's momentum in the signal $B$ rest frame $p^\ast$ is refined using the \ensuremath{\B_{\rm{tag}}}\xspace momentum direction. We assume that the signal \ensuremath{B}\xspace has a c.m. momentum of 320 \ensuremath{{\mathrm{\,Me\kern -0.1em V\!/}c}}\xspace and choose its direction as opposite that of the reconstructed \ensuremath{\B_{\rm{tag}}}\xspace to boost the lepton candidate into the signal \ensuremath{B}\xspace rest frame. Signal events are selected using the kinematic variable \mbox{$\Delta E$}\xspace$= E_{B}-E_{\rm beam}$, where $E_B$ is the energy of the \ensuremath{\B_{\rm{tag}}}\xspace and $E_{\rm beam}$ is the beam energy, both in the c.m. frame. For signal events in which all decay products of the \ensuremath{\B_{\rm{tag}}}\xspace are reconstructed, we expect the $\Delta E$ distribution to peak near zero. However, we are often unable to reconstruct all \ensuremath{\B_{\rm{tag}}}\xspace decay products, which biases the $\Delta E$ distribution toward negative values. 
For continuum backgrounds, $\Delta E$ is shifted toward relatively large positive values, since too much energy is attributed to the nominal \ensuremath{\B_{\rm{tag}}}\xspace decay, while there is a negative bias in $\tau^+\tau^-$ events due to the unreconstructed neutrinos. We require the tag $B$ to satisfy $-$2.25 $< \Delta E <$ 0 GeV for \ensuremath{\Bp \to \mup \num}\xspace decays. For \ensuremath{\Bp \to \ep \nue}\xspace decays, we require a linear combination of $\Delta E$ and the tag \ensuremath{B}\xspace transverse momentum $p_{T}$ to satisfy $(p_{T} + 0.529 \cdot \Delta E)<$0.2 and $(p_{T} - 0.529 \cdot \Delta E)<$1.5. This selection rejects background events arising from the two-photon process $e^+e^-\ensuremath{\rightarrow}\xspace e^+e^-\gamma^\ast\gamma^\ast,\; \gamma^\ast\gamma^\ast\ensuremath{\rightarrow}\xspace hadrons$, with one of the final-state electrons scattered at a large angle and detected. The coefficient of the $\Delta E$ term is extracted from the data. Backgrounds may arise from any process producing charged tracks in the momentum range of the signal, particularly if the charged tracks are leptons. The two most significant backgrounds are \ensuremath{B}\xspace semileptonic decays involving $b\rightarrow u l \nu_{l}$ transitions, in which the momentum of the leptons at the endpoint of the spectrum approaches that of the signal, and continuum and $\tau^+ \tau^-$ events in which a charged pion is mistakenly identified as a muon or an electron. \begin{figure}[t!] 
\begin{center} \includegraphics[width=4.3cm]{sig_mes_2d_Tight_5.2_PAPER_LINE_BIG.eps} \hspace{-0.3cm} \includegraphics[width=4.3cm]{sig_2d_Tight_mes5.2_Babar_PAPER_LINE_BIG.eps}\\ \hspace{-0.3cm} \includegraphics[width=4.3cm]{bkg_mes_2d_Tight_5.2_PAPER_LINE_BIG.eps} \hspace{-0.3cm} \includegraphics[width=4.3cm]{onpeak_2d_mes5.17-5.2_pdf_dataset_Babar2_PAPER_LINE_BIG.eps}\\ \caption{Distributions of signal (a,b) and background (c,d) $m_{ES}$ (left) and $p_{\rm{FIT}}$ (right) for \ensuremath{\Bp \to \mup \num}\xspace from MC simulation (a,b and c) and from $m_{ES}$ sideband 5.17 $<m_{ES}<$ 5.2 GeV/c$^2$ (d).} \label{fig:parmu} \end{center} \end{figure} \begin{figure}[thb!] \begin{center} \includegraphics[width=4.3cm]{sig_mes_2d_eLH_5.22_PAPER_LINE_BIG.eps} \hspace{-0.3cm} \includegraphics[width=4.3cm]{ss_2d_mes5.22_bkg_bifur_Babar_PAPER_LINE_BIG.eps}\\ \hspace{-0.3cm} \includegraphics[width=4.3cm]{allbkg_mes_2d_eLH_5.22_PAPER_LINE_BIG.eps} \hspace{-0.3cm} \includegraphics[width=4.3cm]{allMC_2d_mes5.22_bifur_Babar2_PAPER_LINE_BIG.eps}\\ \caption{Distributions of signal (a,b) and background (c,d) $m_{ES}$ (left) and $p_{\rm{FIT}}$ (right) for \ensuremath{\Bp \to \ep \nue}\xspace from MC simulation.} \label{fig:parel} \end{center} \end{figure} Continuum events tend to produce a jet-like event topology, while \BB events tend to be more isotropically distributed in the c.m. frame, and are suppressed using event shape parameters. Five different spatial and kinematical variables, considered separately for \ensuremath{\Bp \to \mup \num}\xspace and \ensuremath{\Bp \to \ep \nue}\xspace, are combined in Fisher discriminants~\cite{fisher}. 
The most effective discriminating parameters are the ratio of the second ($L_2$) and the zeroth ($L_0$) monomials $ L_n = {\large \Sigma}_i |{\vec p}_i| \cos(\alpha)^n$, where the sum runs over all \ensuremath{\B_{\rm{tag}}}\xspace daughters having momenta ${\vec p}_i$ and $\alpha$ is the angle with respect to the lepton candidate momentum, both in the c.m. frame, and the sphericity $S = \frac{3}{2}{\rm min} \frac{\Sigma_j (p_{jT})^2}{\Sigma_j (p_j)^2 }$, where the $T$ subscript denotes the momentum component transverse to the sphericity axis, which is the axis that minimizes $S$. $S$, in fact, tends to be closer to 1 for spherical events and to 0 for jet-like events. In order to take into account the changes in detector performance throughout the years, in particular in muon identification, the data sample is divided into six different data taking periods, and the Fisher discriminants and selection criteria are optimized separately for each period with the algorithm described in~\cite{narsky}. The two-body kinematics of the signal decay is exploited by combining the signal lepton momentum in the \ensuremath{B}\xspace rest frame $p^*$ and $p_{\rm{c.m.}}$ in a second Fisher discriminant ($p_{\rm{FIT}}$), which discriminates against the remaining semileptonic $b \bar{b}$ and continuum background events which populate the end of the lepton spectrum in both frames. The $p^*$ and $p_{\rm{c.m.}}$ coefficients in the linear combination are determined separately for \ensuremath{\Bp \to \mup \num}\xspace and \ensuremath{\Bp \to \ep \nue}\xspace with the algorithm of~\cite{narsky}. We employ an extended maximum likelihood (ML) fit to extract signal and background yields using simultaneously the distributions of the Fisher output $p_{\rm{FIT}}$ and the energy-substituted mass \mbox{$m_{\rm ES}$}\xspace, defined as $\sqrt{E_{\rm beam}^{2}-|\vec{p}_B|^{\;2}}$, where $\vec{p}_B$ is the momentum of the reconstructed \ensuremath{\B_{\rm{tag}}}\xspace candidate in the c.m. frame. 
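The event-shape quantities described above can be sketched numerically. In the fragment below the momenta are hypothetical c.m. three-vectors, and the sphericity is computed via the standard momentum-tensor eigenvalue construction, which is equivalent to the min-over-axes definition used in the text:

```python
import numpy as np

# Hypothetical c.m. three-momenta of the B_tag daughters (GeV/c):
p = np.array([[0.3, 0.1, 0.5],
              [-0.4, 0.2, 0.1],
              [0.1, -0.3, 0.6]])
lepton_dir = np.array([0.0, 0.0, 1.0])   # signal-lepton direction (unit)

def L(n):
    """Momentum-weighted monomial L_n relative to the lepton direction."""
    mags = np.linalg.norm(p, axis=1)
    cos_a = p @ lepton_dir / mags
    return float(np.sum(mags * cos_a**n))

ratio = L(2) / L(0)   # closer to 1 for jet-like, smaller for spherical events

# Sphericity from the eigenvalues of the normalized momentum tensor:
# S = 3/2 * (sum of the two smallest eigenvalues), in [0, 1].
S_tensor = p.T @ p / np.sum(p**2)
lam = np.sort(np.linalg.eigvalsh(S_tensor))
sphericity = float(1.5 * (lam[0] + lam[1]))
print(ratio, sphericity)
```

Since $\cos^2\alpha\le 1$, the ratio $L_2/L_0$ always lies between 0 and 1, and the eigenvalue construction guarantees $0\le S\le 1$.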
Signal $m_{ES}$ and $p_{\rm{FIT}}$ probability density functions (PDFs) are fixed in the final fit and are parameterized from simulated events, respectively, with a Crystal Ball function~\cite{CB} and the sum of two Gaussians (double Gaussian) for both \ensuremath{\Bp \to \mup \num}\xspace and \ensuremath{\Bp \to \ep \nue}\xspace. The background $m_{ES}$ distribution is described by an ARGUS function whose slope is determined in the fit to the yields~\cite{argus}. To parameterize the background $p_{\rm{FIT}}$ distributions, we studied the possibility of using the $m_{ES}$ sideband of on-resonance data. We found the \ensuremath{\Bp \to \mup \num}\xspace sideband suited for this purpose, while the \ensuremath{\Bp \to \ep \nue}\xspace sideband is not sufficiently populated. We use the region 5.17 $<m_{ES}<$ 5.2 GeV/c$^2$ to parameterize the \ensuremath{\Bp \to \mup \num}\xspace background $p_{\rm{FIT}}$ distribution, and simulated events for the background \ensuremath{\Bp \to \ep \nue}\xspace $p_{\rm{FIT}}$ distribution. Separately for \ensuremath{\Bp \to \mup \num}\xspace and \ensuremath{\Bp \to \ep \nue}\xspace, the sum of two Gaussians with different sigmas on the right and the left of the mean (bifurcated Gaussians) is used to parameterize the background $p_{\rm{FIT}}$ distribution and the relative fraction of the two bifurcated Gaussians is determined from the fit to the data. Figures~\ref{fig:parmu} and \ref{fig:parel} show background and signal $m_{ES}$ and $p_{\rm{FIT}}$ distributions for \ensuremath{\Bp \to \mup \num}\xspace and \ensuremath{\Bp \to \ep \nue}\xspace, respectively, with the PDFs described above superimposed. In the on-resonance data the ML fit returns 1 $\pm$ 15 signal \ensuremath{\Bp \to \mup \num}\xspace candidate events and 18 $\pm$ 14 signal \ensuremath{\Bp \to \ep \nue}\xspace candidate events. 
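For reference, the Crystal Ball shape~\cite{CB} used for the signal $m_{ES}$ PDF is a Gaussian core matched continuously to a power-law tail. A minimal, unnormalised sketch (the parameter values below are illustrative only, not the fitted ones):

```python
import math

def crystal_ball(x, mean, sigma, alpha, n):
    """Unnormalised Crystal Ball shape: Gaussian core for t > -alpha,
    power-law tail below, with coefficients chosen so the function is
    continuous at the matching point t = -alpha."""
    t = (x - mean) / sigma
    if t > -alpha:
        return math.exp(-0.5 * t * t)
    a = (n / alpha) ** n * math.exp(-0.5 * alpha * alpha)
    b = n / alpha - alpha
    return a * (b - t) ** (-n)

# illustrative m_ES-like parameters (GeV/c^2): peak near the B mass
peak = crystal_ball(5.279, 5.279, 0.003, 1.5, 4.0)
```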
Distributions of the fit data events with the final fit superimposed, as well as the signal and background PDFs, are shown in Figure~\ref{fig:unblind} for \ensuremath{\Bp \to \mup \num}\xspace and \ensuremath{\Bp \to \ep \nue}\xspace, respectively, projected on $m_{ES}$ and $p_{\rm{FIT}}$. \begin{figure}[t!] \begin{center} \includegraphics[width=4.3cm]{mes_2dfit_data_finalres_pfitpdf2_mes5.2_fbfloat_PAPER_LINE_BIG.eps} \hspace{-0.3cm} \includegraphics[width=4.3cm]{pfit_2dfit_data_finalres_pfitpdf2_mes5.2_fbfloat_PAPER_LINE_BIG.eps}\\ \hspace{-0.3cm} \includegraphics[width=4.3cm]{mes_2dfit_eLH_finalres_datafromMC2_noBBpeakmes5.22_MLFIT_PAPER_LINE_BIG.eps} \hspace{-0.3cm} \includegraphics[width=4.3cm]{pfit_2dfit_eLH_finalres_datafromMC2_noBBpeakmes5.22_MLFIT_PAPER_LINE_BIG.eps} \hspace{-0.3cm} \caption{Final fit to the data projected on $m_{ES}$ (left) and $p_{\rm{FIT}}$ (right) distributions for \ensuremath{\Bp \to \mup \num}\xspace events (a,b) and \ensuremath{\Bp \to \ep \nue}\xspace events (c,d): the solid blue line is the total PDF, the dashed red line is the background PDF, and the dashed-dotted black line is the signal PDF.} \label{fig:unblind} \end{center} \end{figure} We next evaluate systematic uncertainties on the number of \ensuremath{B^\pm}\xspace in the sample, the signal efficiency, and the signal yield. The number of \ensuremath{B^\pm}\xspace mesons in the on-resonance data sample is estimated to be 468 $\times$ 10$^6$ with an uncertainty of 1.1\%~\cite{Aubert:2002hc}, assuming equal $B^+$ and $B^0$ production at the $\Upsilon(4S)$~\cite{Aubert:2005bq}. The uncertainty in the signal efficiency includes the lepton candidate selection (particle identification, tracking efficiency and event selection Fisher requirement) as well as the reconstruction efficiency of the tag \ensuremath{B}\xspace.
The systematic uncertainty on the particle identification efficiency is evaluated using $e^+ e^- \ensuremath{\rightarrow}\xspace \mu^+ \mu^- \gamma$, $e^+ e^- \ensuremath{\rightarrow}\xspace e^+ e^- \mu^+ \mu^- $ and Bhabha event control samples derived from the data, which are weighted to reproduce the kinematic distribution of the lepton signal candidate. Comparing the cumulative signal efficiency obtained with and without these weights, a total discrepancy of 1.9$\%$ for \ensuremath{\Bp \to \mup \num}\xspace and 2.3$\%$ for \ensuremath{\Bp \to \ep \nue}\xspace is found, and this value is taken as the particle identification systematic uncertainty. Tracking efficiency is studied employing $\tau$ decays, which must produce an odd number of final state charged tracks because of charge conservation. Thus, one can determine an absolute efficiency because the number of events with a missing track can be measured. The uncertainty associated with the tracking efficiency and the data/MC discrepancy evaluated with this method are added in quadrature for a total tracking efficiency uncertainty of 0.4$\%$ per track. In order to evaluate the systematic uncertainty associated with the requirements on the Fisher discriminants, we compare data and MC Fisher distributions in the sidebands $\Delta E > 0$ for the \ensuremath{\Bp \to \mup \num}\xspace sample and $(p_{T} + 0.529 \cdot \Delta E) > 0.2$ for the \ensuremath{\Bp \to \ep \nue}\xspace sample. We fit the data/MC ratio with a linear function, with results consistent with a ratio of unity over the whole Fisher range. We take the error on the intercept as the systematic uncertainty on the Fisher discriminants, namely 1.4$\%$ for \ensuremath{\Bp \to \mup \num}\xspace and 5.3$\%$ for \ensuremath{\Bp \to \ep \nue}\xspace.
The tag \ensuremath{B}\xspace reconstruction has been studied with a control sample of $B^+\rightarrow D^{(*)0}\pi^+$ events, where the $D$ is reconstructed into $\bar{D}^{0} \rightarrow K^+ \pi^-$ and $D^{0} \rightarrow K^- \pi^+ $, and the $D^*$ into $D^{*0} \rightarrow D^0 \gamma$ or $D^{*0} \rightarrow D^0 \pi^0$. These two-body decays are topologically very similar to our signal, as the charged pion can be treated as the signal lepton and the $D^{(*)0}$ decay products ignored to simulate the missing neutrino. The tag \ensuremath{B}\xspace reconstructed in the control sample thus simulates the tag \ensuremath{B}\xspace reconstruction in the nominal data sample. We compare the efficiencies for our tag \ensuremath{B}\xspace selection cuts in the $B^+\rightarrow D^{(*)0}\pi^+$ data and MC to quantify any data/MC disagreements that may affect the signal efficiency. We find a data/MC discrepancy on the $B^+\rightarrow D^{(*)0}\pi^+$ control sample of 3.0$\%$ for \ensuremath{\Bp \to \mup \num}\xspace decays and 0.4$\%$ for \ensuremath{\Bp \to \ep \nue}\xspace decays, and assign these as the signal efficiency uncertainty arising from the tag \ensuremath{B}\xspace selection. A summary of the systematic uncertainties in the signal efficiency is given in Table~\ref{tab:systematics_eff}. The final \ensuremath{\Bp \to \mup \num}\xspace signal efficiency is (6.1 $\pm$ 0.2)\% and the \ensuremath{\Bp \to \ep \nue}\xspace signal efficiency is (4.7 $\pm$ 0.3)\%, where the errors are the sum in quadrature of statistical and systematic uncertainties. \begin{table}[!t] \caption{ Contributions to the systematic uncertainty on the signal efficiency.
The total is the sum in quadrature of the table entries.} \begin{center} \begin{tabular}{ccc} \hline \hline Source & \ensuremath{\Bp \to \mup \num}\xspace & \ensuremath{\Bp \to \ep \nue}\xspace \\ \hline \hline Particle identification & 1.9\% & 2.3 \% \\ Tracking efficiency & 0.4\% & 0.4 \% \\ Tag \ensuremath{B}\xspace reconstruction & 3.0\% & 0.4 \% \\ Fisher selection & 1.4\% & 5.3 \% \\ \hline Total & 3.8\% & 5.8 \% \\ \hline \end{tabular} \end{center} \label{tab:systematics_eff} \end{table} The systematic uncertainty in the yields comes from the $p_{\rm{FIT}}$ and $m_{ES}$ PDF parameters, which are kept fixed in the final fit, and, in the \ensuremath{\Bp \to \ep \nue}\xspace case, from the use of MC simulation to extract the PDF shapes. The fit parameters extracted from MC are affected by an uncertainty due to MC statistics. In order to evaluate the systematic uncertainty associated with the parameterization, the final fit has been repeated 500 times for each background and signal PDF parameter which is kept fixed in the final fit. We randomly generate the PDF parameters assuming Gaussian errors and taking into account all the correlations between them. We perform a Gaussian fit to the distribution of the number of signal events for each parameter, take the fitted sigma as the systematic uncertainty, and sum in quadrature. The total systematic uncertainty on the signal yield from all signal and background PDF parameters is 8 events for \ensuremath{\Bp \to \mup \num}\xspace and 10 events for \ensuremath{\Bp \to \ep \nue}\xspace. For the \ensuremath{\Bp \to \ep \nue}\xspace sample, an additional systematic uncertainty coming from possible discrepancies in the shape of the $p_{\rm{FIT}}$ background distribution in data and simulated events must be accounted for. The data/MC ratio of the $p_{\rm{FIT}}$ distribution in the $m_{ES}$ sideband 5.16 $<m_{ES}<$ 5.22 GeV/c$^2$ is fit with a linear function.
The background $p_{\rm{FIT}}$ distribution shape is varied according to the fitted linear function and its associated statistical uncertainties; the total systematic contribution from this procedure is 4 events. To evaluate the branching fraction we use the following expression: \begin{equation} {\ensuremath{\cal B}\xspace}(B\rightarrow l^+\nu)_{UL} = \frac{N_{sig}}{N_{B^{\pm}}\cdot\varepsilon}, \label{eq:ul} \end{equation} where $N_{sig}$ represents the observed signal yield, $N_{B^{\pm}}$ the number of $B^+ B^-$ in the sample (where equal production of $B^+ B^-$ and $B^0 \bar{B}^0$ is assumed) and $\varepsilon$ is the signal efficiency. As we did not find evidence for signal events, we employ a Bayesian approach to set upper limits on the branching fractions. Flat priors are assumed for positive values of the branching fractions, and Gaussian likelihoods are adopted for the observed signal yield, related to {\ensuremath{\cal B}\xspace}\ by Eq.~(\ref{eq:ul}). The Gaussian widths are fixed to the sum in quadrature of the statistical and systematic yield errors. The effect of systematic uncertainties associated with the efficiencies, modeled by Gaussian PDFs, is taken into account as well. We extract the following 90 $\%$ confidence level upper limits on the branching fractions: \begin{eqnarray} {\ensuremath{\cal B}\xspace}(B^+\rightarrow\mu^+\nu_{\mu}) &<& 1.0 \times 10^{-6}\\ {\ensuremath{\cal B}\xspace}(B^+\rightarrow e^+\nu_{e}) &<& 1.9 \times 10^{-6}. \end{eqnarray} The 95\% upper limits are ${\ensuremath{\cal B}\xspace}(B^+\rightarrow\mu^+\nu_{\mu}) < 1.3 \times 10^{-6}$ and ${\ensuremath{\cal B}\xspace}(B^+\rightarrow e^+\nu_{e}) < 2.2 \times 10^{-6}$. This result improves the previous best published limit for the \ensuremath{\Bp \to \mup \num}\xspace branching fraction by nearly a factor of two, to a value twice the SM prediction. The \ensuremath{\Bp \to \ep \nue}\xspace result is consistent with previous measurements.
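The Bayesian limit described above can be sketched numerically: a flat prior for ${\ensuremath{\cal B}\xspace}\geq 0$ times a Gaussian likelihood in the yield is integrated until 90\% of the posterior mass is enclosed. The inputs below are rough stand-ins for the $\mu^+\nu_\mu$ channel (a yield error of 17 events, i.e. 15 statistical and 8 systematic in quadrature), and the sketch ignores the efficiency systematic, so it is illustrative rather than a reproduction of the published limit.

```python
import math

def bayesian_upper_limit(n_obs, sigma, n_b, eff, cl=0.90, grid=100000, b_max=5e-6):
    """Upper limit on a branching fraction with a flat prior for B >= 0 and
    a Gaussian likelihood for the signal yield N_sig = N_B * eff * B."""
    db = b_max / grid
    post = []
    for i in range(grid):
        b = (i + 0.5) * db
        mu = n_b * eff * b  # expected signal yield at this branching fraction
        post.append(math.exp(-0.5 * ((n_obs - mu) / sigma) ** 2))
    total = sum(post)
    acc = 0.0
    for i, p in enumerate(post):
        acc += p
        if acc >= cl * total:
            return (i + 1) * db
    return b_max

ul = bayesian_upper_limit(1.0, 17.0, 468e6, 0.061)
```

With these toy inputs the 90\% quantile lands near $1\times10^{-6}$, the same order as the quoted limit.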
It should be noted that the results in~\cite{Satoyama:2006xn} are obtained using a different statistical approach to interpret the observed number of signal events. The results show no deviation from the SM expectations. \input acknow_PRL.tex
\section{Introduction} In order for autonomous vehicles to travel safely at higher speeds or operate in wide-open spaces where there is a dearth of distinct features, a new level of robust sensing is required. \gls{fmcw} radar satisfies these requirements, thriving in all environmental conditions (rain, snow, dust, fog, or direct sunlight), providing a \SI{360}{\degree} view of the scene, and detecting targets at ranges of up to hundreds of metres with centimetre-scale precision. Indeed, there is a burgeoning interest in exploiting \gls{fmcw} radar to enable robust mobile autonomy, including ego-motion estimation~\cite{cen2018precise,cen2019radar,2019ICRA_aldera,2019ITSC_aldera,Barnes2019MaskingByMoving,UnderTheRadarArXiv}, localisation~\cite{KidnappedRadarArXiv,tang2020rsl}, and scene understanding~\cite{weston2019probably}. \cref{fig:pipeline} shows an overview of the pipeline proposed in this paper, which extends our recent work in extremely robust radar-only place recognition~\cite{KidnappedRadarArXiv}, in which a metric space for embedding polar radar scans was learned, facilitating topological localisation using \gls{nn} matching. We show that this learned metric space can be leveraged within a sequence-based topological localisation framework to bolster matching performance by mitigating both visual similarities caused by the planarity of the sensor and failures due to sudden obstruction in dynamic environments. Due to the complete horizontal \gls{fov} of the radar scan formation process, we show how the off-the-shelf sequence-based trajectory matching system can be manipulated to allow us to detect place matches when the vehicle is travelling down a previously visited stretch of road in the opposite direction. This paper proceeds by reviewing related literature in~\cref{sec:rel_work}. \cref{sec:method} describes our approach for a more canny use of a metric space in which polar radar scans are embedded.
We describe in~\cref{sec:experimental} details for implementation, evaluation, and our dataset. \cref{sec:results} discusses results from such an evaluation. \cref{sec:concl,sec:fut} summarise the findings and suggest further avenues for investigation. \begin{figure} \centering \includegraphics[width=\columnwidth]{figs/pipeline.pdf} \caption{An overview of our pipeline. The offline stages include \emph{enforcing} a metric space by training a \gls{fcnn} which takes polar radar scans as input, and \emph{encoding} a trajectory of scans (the map) by forward passes through this network (c.f.~\cref{sec:method:kradar}). The online stages involve \emph{inference} to represent the place the robot currently finds itself within in terms of the learned knowledge and \emph{querying} the space (c.f.~\cref{sec:method:seqslam}) which -- in contrast to our prior work -- involves a search for coherent sequences of matches rather than a globally closest frame in the embedding space.} \label{fig:pipeline} \vspace{-.5cm} \end{figure} \section{Related Work} \label{sec:rel_work} Recent work has shown the promise of \gls{fmcw} radar for robust place recognition~\cite{KidnappedRadarArXiv,gskim2020mulran} and metric localisation~\cite{UnderTheRadarArXiv}. None of these methods account for temporal effects in the radar measurement stream. SeqSLAM~\cite{milford2012seqslam} and its variants have been extremely successful at tackling large-scale, robust place recognition with video imagery in the last decade. Progress along these lines has included automatic scaling for viewpoint invariance~\cite{pepperell2015automatic}, probabilistic adjustments to the search technique~\cite{hansen2014visual}, and dealing with challenging visual appearance change using \glspl{gan}~\cite{latif2018addressing}. 
The work presented in this paper is most closely influenced by the use of feature embeddings learned by training \glspl{cnn}~\cite{dongdong2018cnn}, omnidirectional cameras~\cite{cheng2019panoramic}, and \gls{lidar}~\cite{yin2018synchronous} within the SeqSLAM framework. \section{Methodology} \label{sec:method} Broadly, our method can be summarised as leveraging very recent results in \gls{dl} techniques which provide good metric embeddings for the global location of radar scans within a robust sequence-based trajectory matching system. We begin the discussion with a brief overview of the baseline SeqSLAM algorithm, follow with a light description of the learned metric space, and conclude with an application which unifies these systems -- the main contribution of this paper. \subsection{Overview of SeqSLAM} \label{sec:method:seqslam} Our implementation of the proposed system is based on an open-source, publicly available port of the original algorithm\footnote{\rurl{https://github.com/tmadl/pySeqSLAM}}. Incoming images are preprocessed by downsampling (to thumbnail resolution) and patch normalisation. A difference matrix is constructed storing the Euclidean distance between all image pairs. This difference matrix is then contrast enhanced. Examples of these matrices can be seen in~\cref{fig:diff_m}. For more detail, a good summary is available in~\cite{sunderhauf2013we}. When looking for a match to a query image, SeqSLAM sweeps through the contrast-enhanced difference matrix to find the best matching sequence of adjacent frames. In the experiments (c.f.~\cref{sec:experimental,sec:results}) we refer to this baseline search as~{\sc SeqSLAM}. \begin{figure} \centering \includegraphics[width=0.8\columnwidth]{figs/method.png} \caption{The off-the-shelf sequence matching SeqSLAM system is manipulated in this paper to facilitate backwards \gls{lcd}.
This is achieved by mirroring the set, $v_{min} < v < v_{max}$ (blue), of trajectories considered -- also considering $-v_{max} < v < -v_{min}$ (red). Importantly, this is not a useful adjustment under a na\"{i}ve application of SeqSLAM to radar images and is only beneficial if a rotationally invariant representation is used to construct the difference matrices.} \label{fig:seqslam_bw_search} \end{figure} \subsection{Overview of Kidnapped Radar} \label{sec:method:kradar} To learn filters and cluster centres which help distinguish polar radar images for place recognition we use NetVLAD~\cite{arandjelovic2016netvlad} with VGG-16~\cite{simonyan2014very} as a front-end feature extractor -- both popularly applied to the place recognition problem. Importantly, we make alterations such that the network is invariant to the orientation of input radar scans, including circular padding~\cite{wang2018omnidirectional}, anti-aliasing blurring~\cite{zhang2019making}, and azimuth-wise max-pooling. To enforce the metric space, we perform online triplet mining and apply the triplet loss described in~\cite{schroff2015facenet}. Loop closure labels are taken from a ground truth dataset (c.f.~\cref{sec:experimental}). The interested reader is referred to~\cite{KidnappedRadarArXiv} for more detail. In the experiments (c.f.~\cref{sec:experimental,sec:results}) we refer to representations obtained in this manner as~{\sc kRadar}. \subsection{Sequence-based Radar Place Recognition} \label{sec:method:rseqslam} We replace the image preprocessing step with inference on the network described in~\cref{sec:method:kradar}, resulting in radar scan descriptors of size \num{4096}. The difference matrix is obtained by calculating the Euclidean distance between every pair of embeddings taken from places along the reference and live trajectories in a window of length $W$. This distance matrix is then locally contrast enhanced in sections of length $R$.
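The steps above -- embedding distances, local contrast enhancement, and a trajectory sweep whose velocity set is mirrored to catch reverse traversals -- can be sketched as follows. This is a simplified toy (our own variable names and window conventions), not the tuned implementation:

```python
import math

def difference_matrix(map_desc, live_desc):
    """Pairwise Euclidean distances between map and live scan embeddings."""
    return [[math.dist(m, q) for q in live_desc] for m in map_desc]

def contrast_enhance(D, R):
    """Normalise each entry by the mean/std of a column window of ~R rows,
    in the spirit of SeqSLAM's local contrast enhancement."""
    rows, cols = len(D), len(D[0])
    E = [[0.0] * cols for _ in range(rows)]
    for j in range(cols):
        for i in range(rows):
            lo, hi = max(0, i - R // 2), min(rows, i + R // 2 + 1)
            win = [D[k][j] for k in range(lo, hi)]
            mu = sum(win) / len(win)
            sd = math.sqrt(sum((w - mu) ** 2 for w in win) / len(win))
            E[i][j] = (D[i][j] - mu) / (sd + 1e-9)
    return E

def best_sequence_score(E, query_col, ds, vels=(0.8, 1.0, 1.25)):
    """Sweep straight-line trajectories of length ds ending at query_col;
    mirroring the velocity set also scores reverse (backward) traversals."""
    rows = len(E)
    best = math.inf
    for v in list(vels) + [-v for v in vels]:
        for r0 in range(rows):
            total, valid = 0.0, True
            for t in range(ds):
                c = query_col - ds + 1 + t
                r = round(r0 + v * t)
                if c < 0 or not 0 <= r < rows:
                    valid = False
                    break
                total += E[r][c]
            if valid:
                best = min(best, total / ds)
    return best

# toy embeddings: the live trajectory retraces the map, so the diagonal matches
emb = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0), (3.0, 0.0)]
D = difference_matrix(emb, emb)
E = contrast_enhance(D, 3)
score = best_sequence_score(E, 3, 2)
```

The lowest (most negative) enhanced-difference sequence wins; a threshold on this score then decides whether a match is declared.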
When searching for a match to a query image, we perform a sweep through this contrast-enhanced difference matrix to find the best matching sequence of frames based on the sum of sequence differences. In order to be able to detect matches in reverse, this procedure is repeated with a time-reversed live trajectory -- this method would not be applicable to narrow \gls{fov} cameras but is appropriate here as the radar has a \SI{360}{\degree} \gls{fov}. A simple visualisation of the process is shown in~\cref{fig:seqslam_bw_search}. In this case, the forward search (blue lines) is mirrored to perform a backwards search (red lines), which results in the selection of the best match (solid black line). In the experiments (c.f.~\cref{sec:experimental,sec:results}) we refer to this modified search as~{\sc LAY}~(``Look Around You''). This procedure is performed for each template, after which a threshold is applied to select the best matches. \Cref{sec:results} discusses the application of the threshold and reports the results in comparison to the original SeqSLAM approach; in particular \cref{fig:diff_and_scores} shows visual examples of the discussed methodology. \section{Experimental Setup} \label{sec:experimental} This section details our experimental design in obtaining the results to follow in~\cref{sec:results}. \subsection{Vehicle and radar specifications} Data was collected using the \textit{Oxford RobotCar} platform~\cite{RobotCarDatasetIJRR}. The vehicle, as described in the \textit{Oxford Radar RobotCar Dataset}~\cite{RadarRobotCarDatasetArXiv}, is fitted with a CTS350-X Navtech \gls{fmcw} scanning radar. \subsection{Ground truth database} The ground truth database is curated offline to capture the sets of nodes that are at a maximum distance (\SI{15}{\metre}) from a query frame, creating a graph-structured database that yields triplets of nodes for training the representation discussed in~\cref{sec:method:kradar}.
To this end, we adjust the accompanying ground truth odometry described in~\cite{RadarRobotCarDatasetArXiv} in order to build a database of ground truth locations. We manually selected a moment during which the vehicle was stationary at a common point and trimmed each ground trace accordingly. We also aligned the ground traces by introducing a small rotational offset to account for differing attitudes. \begin{figure} \centering \includegraphics[width=0.7\columnwidth]{figs/gt.png} \caption{Visualisation of a ground truth $SE(2)$ matrix showing the Euclidean distance between the global positions that pairs of radar scans were captured at. Each trajectory pair is associated with such a matrix. Values in these matrices are scaled from distant (white) to nearby (black). In this region of the dataset, the vehicle revisits the same stretch of the route in the opposite direction -- visible as the contours perpendicular to the main diagonal. \label{fig:gt}} \end{figure} \subsection{Trajectory reservation} \label{sec:experimental:demarc} Each approximately \SI{9}{\kilo\metre} trajectory in the Oxford city centre was divided into three distinct portions: \textit{train}, \textit{valid}, and \textit{test}. The network is trained with ground truth topological localisations between two reserved trajectories in the \emph{train} split. The \textit{test} split, upon which the results presented in~\cref{sec:results} are based, was specifically selected to feature vehicle traversals over portions of the route in the opposite direction; data from this split are not seen by the network during training. The results focus on a \gls{tr} scenario, in which all remaining trajectories in the dataset are localised against a map built from the first trajectory that we did not use for learning, totalling \num{27} trajectory pairs (and \SI{26}{\kilo\metre} of driving) with the same map but a different localisation run.
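Curating the graph-structured ground truth database described above amounts to collecting, for every scan, the indices of all scans captured within a \SI{15}{\metre} radius. A minimal sketch with toy 2D positions (the real database uses the adjusted ground traces):

```python
import math

def ground_truth_matches(positions, radius=15.0):
    """For each scan's ground-truth (x, y) position, collect the indices of
    all other scans captured within `radius` metres: the positive set used
    for triplet mining and for scoring localisation."""
    matches = []
    for i, (xi, yi) in enumerate(positions):
        near = {j for j, (xj, yj) in enumerate(positions)
                if j != i and math.hypot(xi - xj, yi - yj) <= radius}
        matches.append(near)
    return matches

# toy trajectory: the third scan is too far from the others to match
pts = [(0.0, 0.0), (10.0, 0.0), (40.0, 0.0)]
gt = ground_truth_matches(pts)
```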
\subsection{Measuring performance} \label{sec:experimental:metrics} In the ground truth $SE(2)$ database, all locations within a \SI{15}{\metre} radius of a ground truth location are considered true positives whereas those outside are considered true negatives, a more strictly imposed boundary than in~\cite{KidnappedRadarArXiv}. Evaluation of \acrfull{pr} is different for the sequence- and \gls{nn}-based approaches. For the \gls{nn}-based approach of~\cite{KidnappedRadarArXiv} we perform a ball search of the discretised metric space out to a varying embedding distance threshold. For the sequence-based approach advocated in this paper, we vary the minimum match score. As useful summaries of \gls{pr} performance, we analyse \gls{auc} as well as some F-scores, including $F_{1}$, $F_{2}$, and $F_{\beta}$ with $\beta = 0.5$~\cite{pino1999modern}. \begin{figure*} \centering \begin{subfigure}{0.24\textwidth} \includegraphics[width=\textwidth]{figs/diff_m_noenh_norot.png} \caption{} \label{fig:diff_m_noenh_norot} \end{subfigure} \begin{subfigure}{0.24\textwidth} \includegraphics[width=\textwidth]{figs/diff_m_enh_norot.png} \caption{} \label{fig:diff_m_enh_norot} \end{subfigure} \begin{subfigure}{0.24\textwidth} \includegraphics[width=\textwidth]{figs/diff_m_noenh.png} \caption{} \label{fig:diff_m_noenh} \end{subfigure} \begin{subfigure}{0.24\textwidth} \includegraphics[width=\textwidth]{figs/diff_m_enh.png} \caption{} \label{fig:diff_m_enh} \end{subfigure} \begin{subfigure}{0.24\textwidth} \includegraphics[width=\textwidth]{figs/scores_fwd_norot.png} \caption{} \label{fig:scores_fwd_vanilla} \end{subfigure} \begin{subfigure}{0.24\textwidth} \includegraphics[width=\textwidth]{figs/scores_bwd_norot.png} \caption{} \label{fig:scores_bwd_vanilla} \end{subfigure} \begin{subfigure}{0.24\textwidth} \includegraphics[width=\textwidth]{figs/scores_fwd.png} \caption{} \label{fig:scores_fwd_ours} \end{subfigure} \begin{subfigure}{0.24\textwidth} 
\includegraphics[width=\textwidth]{figs/scores_bwd.png} \caption{} \label{fig:scores_bwd_ours} \end{subfigure} \caption{Difference matrices upon which the SeqSLAM variants perform trajectory searches (top row) and relative match-score-matrices (bottom row). These are constructed by matching the representations of radar scans in two trajectories (rows-versus-columns for each matrix) -- {\sc vgg-16/netvlad}~on the left side (a, b, e and f) and~{\sc kRadar}~on the right side (c, d, g and h). (a) and (c) are the difference matrices before enhancement -- directly used by the \gls{nn} search in~\cite{KidnappedRadarArXiv} -- and (b) and (d) are the respective enhanced forms -- on which SeqSLAM performs its searches. (e) and (f) are constructed using embeddings inferred by {\sc vgg-16/netvlad}~in the enhanced form (b), while (g) and (h) use embeddings inferred by {\sc kRadar}~in the enhanced form (d). (e) and (g) are computed by using the forward-style match score method employed by standard SeqSLAM; in contrast, (f) and (h) employ the proposed backward-style match score method. No match-score matrix is defined for the first window of columns, as SeqSLAM must fill a buffer of frames before any matching is possible.} \label{fig:diff_and_scores} \end{figure*} \begin{figure*} \centering \begin{subfigure}{0.35\textwidth} \includegraphics[width=\textwidth]{figs/score_fwd_thr.png} \caption{} \label{fig:score_fwd_thr} \end{subfigure} \begin{subfigure}{0.35\textwidth} \includegraphics[width=\textwidth]{figs/score_bwd_thr.png} \caption{} \label{fig:score_bwd_thr} \end{subfigure} \caption{Binarised match score matrices for the \subref{fig:score_fwd_thr} baseline and \subref{fig:score_bwd_thr} mirrored SeqSLAM variants. The threshold applied for binarisation is higher for~\subref{fig:score_fwd_thr} ({\sc kRadar},{\sc SeqSLAM}) than for ({\sc kRadar},{\sc LAY}).
This is in order to qualitatively show that even when increasing numbers of potential matches are allowed in {\sc SeqSLAM}~(high recall), the true backwards loop closures are not featured. For {\sc LAY}~(right), they are. From these it is evident that the tailored changes to the fundamental SeqSLAM search strategy are better suited to discover loop closures as the vehicle revisits the same route section with opposing orientation -- a common scenario in structured, urban driving. \label{fig:score_thr}} \end{figure*} \begin{figure} \centering \includegraphics[width=\columnwidth]{figs/pr_curves.pdf} \caption{\gls{pr} curves showing the benefit of, firstly, using learned metric embeddings as opposed to radar scans directly and, secondly, the tailored changes to the baseline SeqSLAM search algorithm when performing sequence-based radar place recognition.} \label{fig:pr_curves} \end{figure} \subsection{Hyperparameter tuning} \label{sec:experimental:tune} To produce a fair comparison of the different configurations we can utilise to solve the topological localisation problem, we performed hyperparameter tuning on the various algorithms; we selected two random trials and excluded them from the final evaluation. The window width for the trajectory evaluation $W$ and the enhancement window $R$ have been chosen through a grid search procedure. The final values are the ones which produced precision-recall curves with the highest value of precision at \SI{80}{\%} recall. \section{Results} \label{sec:results} This section presents instrumentation of the metrics discussed in~\cref{sec:experimental:metrics}. The hyperparameter optimisation (c.f.~\cref{sec:experimental:tune}) results in the parametrisation of the systems for comparison as enumerated in~\cref{tab:hyperpar}.
\begin{table}[] \renewcommand{\arraystretch}{2} \centering \begin{tabular}{cc|cc} \textbf{Representation} & \textbf{Search} & $R$ & $W$ \\ \hline {\sc vgg-16/netvlad} & {\sc SeqSLAM} & 34 & 50 \\ {\sc kRadar} & {\sc SeqSLAM} & 37 & 60 \\ \hline {\sc vgg-16/netvlad} & {\sc LAY} & 31 & 60 \\ {\sc kRadar} & {\sc LAY} & 37 & 60 \\ \end{tabular} \caption{Summary of the hyperparameter values resulting from the grid-search optimisation.\label{tab:hyperpar}} \end{table} \cref{fig:pr_curves} shows a family of \gls{pr} curves for the various methods with which it is possible to perform SeqSLAM on radar data. Here, only a single trajectory pair is considered (one as the map trajectory, the other as the live trajectory). From~\cref{fig:pr_curves} it is evident that: \begin{enumerate} \item\label{obs:embed_v_scans} Performance when using the learned metric embeddings is superior to either polar or cartesian radar scans, \item\label{obs:seq_vs_nn} Sequence-based matching of trajectories outperforms \gls{nn}-based searches, \item\label{obs:vanilla_vs_kradar} Performance when using the baseline architecture is outstripped by the rotationally-invariant modifications, and \item\label{obs:fwd_vs_bwd} Performance when using the modified search algorithm is boosted. \end{enumerate} Observation~\ref{obs:embed_v_scans} can be attributed to the fact that the learned representation is designed to encode only knowledge concerning place, whereas the imagery is subject to sensor artefacts. Observation~\ref{obs:seq_vs_nn} can be attributed to perceptual aliasing along straight, canyon-like sections of an urban trajectory being mitigated. Observation~\ref{obs:vanilla_vs_kradar} can be attributed to the rotationally-invariant architecture itself. Observation~\ref{obs:fwd_vs_bwd} can be attributed to the ability of the adjusted search to detect loop closures in reverse.
\cref{tab:metrics} provides further evidence for these findings by aggregating \gls{pr}-related statistics over the entirety of the dataset discussed in~\cref{sec:experimental:demarc} -- the map trajectory is kept constant and the live trajectory varies over forays spanning a month of urban driving. While it is clear that we outperform \gls{nn} techniques presented in~\cite{KidnappedRadarArXiv}, the F-scores in~\cref{tab:metrics} present a mixed result when comparing the standard SeqSLAM search and the modified search discussed in~\cref{sec:method:rseqslam}. However, consider~\cref{fig:score_thr}. Here, the structure of the backwards loop closures is discovered more readily by the backwards search. It is important to remember when inspecting the results shown in~\cref{tab:metrics,fig:pr_curves} that the data in this part of the route (c.f.~\cref{sec:experimental:demarc}) is \emph{unseen} by the network during training, and particularly challenging. This is a necessary analysis of the generalisation of learned place recognition methods but is not a requirement when deploying the learned knowledge in \gls{tr} modes of autonomy. The takeaway message is that we have improved the recall at good precision levels by about \SI{30}{\percent} by applying sequence-based place recognition techniques to our learned metric space. 
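For completeness, the F-scores summarised in \cref{tab:metrics} follow the standard definition of $F_{\beta}$, which weights recall $\beta$ times as heavily as precision ($\beta=1$ recovers the harmonic mean, $\beta=0.5$ favours precision):

```python
def f_beta(precision, recall, beta):
    """F_beta score: (1 + b^2) * P * R / (b^2 * P + R)."""
    if precision == 0.0 and recall == 0.0:
        return 0.0
    b2 = beta * beta
    return (1.0 + b2) * precision * recall / (b2 * precision + recall)

# equal precision and recall give that common value for any beta
example = f_beta(0.5, 0.5, 2.0)
```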
\begin{table*}[] \renewcommand{\arraystretch}{2} \centering \begin{tabular}{cc|cccccc} \textbf{Representation} & \textbf{Search} & \gls{auc} & max $F_{1}$ & max $F_{2}$ & max $F_{0.5}$ & $R_{P = 60\%}$ & $R_{P = 80\%}$\\ \hline {\sc vgg-16/netvlad} & \gls{nn} & 0.26 & 0.34 & 0.37 & 0.29 & 0.03 & 0.00\\ {\sc kRadar} & \gls{nn} & 0.37 & 0.41 & 0.40 & 0.41 & 0.16 & 0.06\\ \hline {\sc vgg-16/netvlad} & {\sc SeqSLAM} & 0.47 & 0.49 & 0.37 & 0.60 & 0.40 & 0.31\\ {\sc kRadar} & {\sc SeqSLAM} & 0.52 & \textbf{0.53} & 0.41 & \textbf{0.64} & 0.45 & \textbf{0.37}\\ \hline {\sc vgg-16/netvlad} & {\sc LAY} & 0.46 & 0.48 & 0.36 & 0.60 & 0.39 & 0.32\\ {\sc kRadar} & {\sc LAY} & \textbf{0.53} & \textbf{0.53} & \textbf{0.42} & 0.62 & \textbf{0.46} & 0.36\\ \end{tabular} \caption{Summary statistics for various radar-only SeqSLAM techniques (including representation of the radar frame and style of search) as aggregated over a month of urban driving. All quantities are expressed as a mean value. As discussed in~\cref{sec:experimental:metrics}, the requirement imposed on matches (as true/false positives/negatives) is more strict than that presented in~\cite{KidnappedRadarArXiv}, with consequently worse performance than previously published for the {\sc vgg-16/netvlad}, {\sc kRadar}, and {\sc NN}~systems. The key message of this paper is that sequence-based exploitation of these learned metric embeddings (middle and bottom rows) is beneficial in comparison to \gls{nn} matching in a discretised search space (top two rows).} \label{tab:metrics} \end{table*} \section{Conclusion} \label{sec:concl} We have presented an application of recent advances in learning representations for imagery obtained by radar scan formation to sequence-based place recognition. The proposed system is based on a manipulation of off-the-shelf SeqSLAM with prudent adjustments made taking into account the complete sweep made by scanning radar sensors.
We have further proven the utility of our rotationally invariant architecture -- a crucial enabling factor of our SeqSLAM variant. In particular, we achieve a boost of \SI{30}{\percent} in recall at high levels of precision over our previously published \acrlong{nn} approach. \section{Future Work} \label{sec:fut} In the future we plan to retrain and test the system on the all-weather platform described in~\cite{kyberd2019}, a significant factor in the development of which was to explore applications of \gls{fmcw} radar to mobile autonomy in challenging, unstructured environments. We also plan to integrate the system presented in this paper with our mapping and localisation pipeline, which is built atop the scan-matching algorithm of~\cite{2018ICRA_cen,2019ICRA_cen}. \section*{Acknowledgment} This project is supported by the Assuring Autonomy International Programme, a partnership between Lloyd’s Register Foundation and the University of York, as well as UK EPSRC programme grant EP/M019918/1. We would also like to thank our partners at Navtech Radar. \bibliographystyle{IEEEtran}
2003.04597
\section{Introduction} Let $(M,g)$ be a smooth, compact, Riemannian manifold of dimension $n$ and consider normalized Laplace eigenfunctions: solutions to $$ (-\Delta_g-\lambda^2)\phi_\lambda=0,\qquad \|\phi_\lambda\|_{L^2(M)}=1. $$ This article studies the growth of $L^p$ norms of the eigenfunctions, $\phi_\lambda$, as $\lambda\to \infty$. Since the work of Sogge~\cite{So88}, it has been known that there is a change of behavior in the growth of $L^p$ norms for eigenfunctions at the \emph{critical exponent} $p_c:=\tfrac{2(n+1)}{n-1}$. In particular, \begin{equation} \label{e:stdBounds} \|\phi_\lambda\|_{L^p(M)}\leq C\lambda^{\delta(p)},\qquad \delta(p):=\begin{cases} \frac{n-1}{2}-\frac{n}{p}& p_c\leq p\\ \frac{n-1}{4}-\frac{n-1}{2p}& 2\leq p\leq p_c. \end{cases} \end{equation} For $p\geq p_c$,~\eqref{e:stdBounds} is saturated by the zonal harmonics on {the round sphere} $S^n$. On the other hand, for $p\leq p_c$, these bounds are saturated by the highest weight spherical harmonics on $S^n$, also known as Gaussian beams. In a very strong sense, {the authors showed in~\cite[page 4]{CG19a} that any eigenfunction saturating~\eqref{e:stdBounds} for $p>p_c$ behaves like a zonal harmonic}, while Blair--Sogge~\cite{BlSo15,BlSo17} showed that for $p<p_c$ such eigenfunctions behave like Gaussian beams. {In the regime $p\leq p_c$, Blair--Sogge have recently made substantial progress on improved $L^p$ estimates on manifolds with non-positive curvature~\cite{BlSo19,blair2018concerning,BlSo15b}.} This article concerns the behavior of $L^p$ norms for high $p$; that is, for $p>p_c$. {While there has been a great deal of work on $L^p$ norms of eigenfunctions~\cite{KTZ,HeRi,Ta19,Ta18a,SoggeTothZelditch,SoggeZelditch,SoZe16, TZ02,ToZe03}, this article departs from the now standard approaches.
We both adapt the geodesic beam methods developed by the authors in~\cite{GT,GDefect,GJEDP,CGT, CG17,GT18a,CG19dyn,CG19a} and develop a new second microlocal calculus used to understand the number of points at which $|u_\lambda|$ can be large. By doing this} we give general dynamical conditions guaranteeing quantitative improvements over~\eqref{e:stdBounds} for $p>p_c$. In order to work in compact subsets of phase space, we semiclassically rescale our problem. Let $h=\lambda^{-1}$ and, abusing notation slightly, write $\phi_\lambda=\phi_h$ so that $$ (-h^2\Delta_g-1)\phi_h=0,\qquad \|\phi_h\|_{L^2(M)}=1. $$ We also work with the semiclassical Sobolev spaces {$H^s_{\text{scl}}(M)$, $s\in \mathbb{R}$, defined by the norm} $$ \|u\|_{\Hs{s}}^2:=\langle (-h^2\Delta_g+1)^su,u\rangle_{{_{\!L^2(M)}}}. $$ We start by stating a consequence of our main theorem. Let ${\Xi}$ denote the collection of maximal unit speed geodesics for $(M,g)$. {For $m$ a positive integer, $r>0$, $t\in {\mathbb R}$, and $x \in M$} define $$ {\Xi}_x^{m,r,t}:=\big\{\gamma\in \Xi: \gamma(0)=x,\,\exists\text{ at least }m\text{ conjugate points to } x \text{ in }\gamma(t-r,t+r)\big\}, $$ where we count conjugate points with multiplicity. Next, for a set $V \subset M$ write $$ \mc{C}_{_{\!V}}^{m,r,t}:=\bigcup_{x\in V}\{\gamma(t): \gamma\in \Xi_x^{m,r,t}\} $$ Note that if $r_t \to 0^+$ as $|t|\to \infty$, then saying {$y \in \mc{C}_x^{n-1,r_t,t}$} for $t$ large indicates that $y$ behaves like a point that is maximally conjugate to $x$. This is the case for every point $x$ on the sphere when $y$ is either equal to $x$ or its antipodal point. The following result applies under the assumption that points are not maximally conjugate and obtains quantitative improvements. 
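As a quick numerical sanity check (ours, not part of the original argument), one can verify that the two branches of $\delta(p)$ in~\eqref{e:stdBounds} agree at the critical exponent $p_c=\tfrac{2(n+1)}{n-1}$, where both equal $\tfrac{n-1}{2(n+1)}$:

```python
from fractions import Fraction as F

def delta_high(n, p):
    # delta(p) = (n-1)/2 - n/p, the branch valid for p >= p_c
    return F(n - 1, 2) - F(n) / p

def delta_low(n, p):
    # delta(p) = (n-1)/4 - (n-1)/(2p), the branch valid for 2 <= p <= p_c
    return F(n - 1, 4) - F(n - 1, 2) / p

for n in range(2, 10):
    pc = F(2 * (n + 1), n - 1)  # critical exponent p_c
    # both branches meet at p_c with common value (n-1)/(2(n+1))
    assert delta_high(n, pc) == delta_low(n, pc) == F(n - 1, 2 * (n + 1))
```

Exact rational arithmetic (via `fractions`) avoids any floating-point ambiguity in checking the matching of exponents.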
\begin{theorem} \label{t:noConj} Let $p>p_c$, $U \subset M$, and assume there exist $t_0>0$ and $a>0$ so $$ \inf_{x_1,x_2\in U}d\big(x_1, \mc{C}_{x_2}^{n-1,r_t,t}\big)\geq r_t,\qquad\text{ for } t\geq t_0, $$ with $r_t=\frac{1}{a}e^{-at}.$ Then, there exist $C>0$ and $h_0>0$ so that for $0<h<h_0$ and $u \in {\mc{D}'}(M)$ $$ \|u\|_{L^p(U)}\leq Ch^{-\delta(p)}\left(\frac{\|u\|_{{_{\!L^2(M)}}}}{\sqrt{\log h^{-1}}}\;+\; \frac{\sqrt{\log h^{-1}}}{h}\big\|(-h^2\Delta_g-1)u\big\|_{\Hs{\frac{n-3}{2}-\frac{n}{p}}}\right). $$ \end{theorem} \noindent The assumption in Theorem~\ref{t:noConj} {rules} out maximal conjugacy {of any two points} $x,y\in U$ uniformly up to time $\infty$, and we expect it to hold on a generic manifold $M$ with $U=M$. Since Theorem~\ref{t:noConj} includes the case of manifolds without conjugate points, it generalizes the work of~\cite{HaTa15}, where it was shown that logarithmic improvements {in $L^p$ norms for} $p>p_c$ are possible on manifolds with non-positive curvature. One family of examples where the assumptions of Theorem~\ref{t:noConj} hold is that of product manifolds~\cite[Lemma 1.1]{CG19a} i.e. $(M_1\times M_2, g_1 \oplus g_2)$ where $(M_i,g_i)$ are non-trivial compact Riemannian manifolds. Note that this family of examples includes manifolds with large numbers of conjugate points e.g. $S^{2}\times {M}$. {The proof of Theorem~\ref{t:noConj} gives a great deal of information about eigenfunctions which may saturate $L^p$ bounds $(p>p_c)$. Our next theorem describes the structure of such eigenfunctions. This theorem shows that an eigenfunction can saturate the \emph{logarithmically improved} $L^p$ norm near at most \emph{boundedly many} points. Moreover, modulo an error small in $L^p$, near each of these points the eigenfunction can be decomposed as a sum of quasimodes which are similar to the highest weight spherical harmonics scaled by $h^{\frac{n-1}{4}}/\sqrt{\log h^{-1}}$ whose number is nearly proportional to $h^{\frac{1-n}{2}}$. 
In the theorem below the quasimodes are denoted by $v_j$ and, while similar to highest weight spherical harmonics (a.k.a.\ Gaussian beams), they are not as tightly localized to a geodesic segment and do not have Gaussian profiles. We refer to these quasimodes as geodesic beams (see Remark~\ref{r:geodesicBeams}). \begin{theorem}\label{t:JeffsFavorite} Let $p>p_c$. There exist $c,C>0$ such that the following holds. Suppose the assumptions of Theorem~\ref{t:noConj} hold. Let $0<\delta_1<\delta_2<\frac{1}{2}$, $h^{\delta_2}\leq R(h)\leq h^{\delta_1}$, and $\{x_\alpha\}_{\alpha \in \mc{I}(h)} \subset M$ be a maximal $R(h)$-separated set. Let $u \in \mc{D}'(M)$ with $\|(-h^2\Delta_g-1)u\|_{H_h^{\frac{n-3}{2}}}=o\big(\frac{h}{\log h^{-1}}\|u\|_{L^2}\big)$, and for $\varepsilon>0$ $$ \mc{S}\sub{U}(h, \varepsilon,u):=\Big\{\alpha \in \mc{I}(h): \|u\|_{L^\infty (B(x_\alpha,R(h)))} \geq \frac{\varepsilon h^{\frac{1-n}{2}}}{\sqrt{\log h^{-1}}}\|u\|_{{_{\!L^2(M)}}}, \;\; B(x_\alpha,R(h))\cap U\neq \emptyset\Big\}. $$ Then, for all $\varepsilon>0$ there are $N_\varepsilon>0$ and $h_0>0$ such that $|\mc{S}\sub{U}(h,\varepsilon,u)|\leq N_\varepsilon$ for all $0<h\leq h_0$. Moreover, there is a collection of geodesic tubes $\{\mathcal{T}_j\}_{j \in \mc{L}(\varepsilon,u)}$ of radius $R(h)$ (see Definition~\ref{d: cover}), with indices satisfying $\mc{L}(\varepsilon,u)=\cup_{i=1}^{C}\J_i$ and $\mathcal{T}_k\cap \mathcal{T}_\ell=\emptyset$ for $k,\ell\in \J_i$ with $k\neq \ell$, such that $$ u=u_e+\frac{1}{\sqrt{\log h^{-1}}}\sum_{j \in \mc{L}(\varepsilon,u)}v_{j}, $$ where $v_j$ is microsupported in $\mathcal{T}_j$, $|\mc{L}(\varepsilon,u)|\leq C \varepsilon^{-2}R(h)^{1-n}$, and for all $p\leq q\leq \infty$, \begin{gather*} \|u_e\|_{L^q}\leq {\varepsilon h^{-\delta(q)}}({\log h^{-1}})^{-\frac{1}{2}}\|u\|_{L^2},\\ \|v_{j}\|_{L^2}\leq C\varepsilon^{-1}R(h)^{\frac{n-1}{2}}\|u\|_{L^2},\qquad \quad\|Pv_j\|_{L^2}\leq C\varepsilon^{-1}R(h)^{\frac{n-1}{2}}h\|u\|_{L^2}.
\end{gather*} Finally, with $ \mc{L}(\varepsilon,u,\alpha):=\big\{j\in \mc{L}(\varepsilon,u):\, \pi(\mathcal{T}_j)\cap B(x_\alpha, 3R(h))\neq \emptyset\big\}, $ for every $\alpha\in \mc{S}\sub{U}(h,\varepsilon,u)$, $$ c \varepsilon^2 R(h)^{1-n}\leq |\mc{L}(\varepsilon,u,\alpha)|\leq CR(h)^{1-n},\qquad \quad\sum_{j\in \mc{L}(\varepsilon,u,\alpha)}\|v_j\|^2_{L^2} \geq c^2\varepsilon^2.$$ \end{theorem} The decomposition of $u$ into geodesic beams $v_j$ is illustrated in Figure \ref{f:structure}. One covers $S^*M$ with a collection of tubes $\{\mc{T}_j\}$ of radius $R(h)$ that run along a geodesic. Each geodesic beam $v_j$ corresponds to microlocalizing $u$ to the tube $\mc{T}_j$. Let $u$ be a quasimode with $\|Pu\|=o(h/\log h^{-1})\|u\|$. Note that, by interpolation, Theorem~\ref{t:noConj} implies that for each $p>p_c$ there is $N>0$ such that if $\|u\|_{L^p}\geq \varepsilon h^{-\delta(p)}/\sqrt{\log h^{-1}}\|u\|_{L^2}$, then $\|u\|_{L^\infty}\geq \varepsilon^{N}h^{\frac{1-n}{2}}\|u\|_{L^2}$. In particular, for $u$ to saturate the logarithmically improved $L^p$ bound, it follows that $\mc{S}\sub{M}(h,\varepsilon^N,u)$ is non-empty. Theorem~\ref{t:JeffsFavorite} then gives that $\mc{S}\sub{M}(h,\varepsilon^N,u)$ has a uniformly bounded number of points, and at these points the quasimode $u$ needs to consist of at least $c \varepsilon^{2N} R(h)^{1-n}$ geodesic beams whose combined $L^2$ mass is at least $c \varepsilon^{N}$. Since $\dim(S^*_{x_\alpha}M)=n-1,$ this implies that there is a positive measure set of directions through $x_\alpha$ among which $u$ is spreading its mass nearly uniformly. The proofs of Theorems~\ref{t:noConj} and~\ref{t:JeffsFavorite} hinge on a much more general theorem which does not require global geometric assumptions on $(M,g)$ and, in particular, Theorem \ref{t:JeffsFavorite} holds without modification under the assumptions of Theorem~\ref{t:main bound} below.
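The interpolation step just described can be made concrete: with $\theta=p_c/p$ one has $\tfrac1p=\tfrac{\theta}{p_c}$, so H\"older interpolation gives $\|u\|_{L^p}\leq\|u\|_{L^{p_c}}^{\theta}\|u\|_{L^\infty}^{1-\theta}$, and the exponents in~\eqref{e:stdBounds} are affine along this scale. A short script (our own illustration of the bookkeeping, not from the paper) checks the identity $\delta(p)=\theta\,\delta(p_c)+(1-\theta)\tfrac{n-1}{2}$ for $p\geq p_c$:

```python
from fractions import Fraction as F

def delta(n, p):
    # delta(p) = (n-1)/2 - n/p, the supercritical branch (p >= p_c)
    return F(n - 1, 2) - F(n) / p

for n in range(2, 8):
    pc = F(2 * (n + 1), n - 1)          # critical exponent p_c
    for p in [pc + k for k in range(1, 6)]:
        theta = pc / p                   # Hoelder weight: 1/p = theta/p_c
        lhs = delta(n, p)
        # (n-1)/2 is the L^infty exponent; delta is affine in 1/p
        rhs = theta * delta(n, pc) + (1 - theta) * F(n - 1, 2)
        assert lhs == rhs
```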
{(We actually prove Theorem \ref{t:JeffsFavorite} under the more general assumptions; see Section \ref{s:JeffsFavorite}.)} As far as the authors are aware, Theorem~\ref{t:noConj} is the first result giving quantitative estimates for the $L^p$ growth of eigenfunctions that \emph{only} requires dynamical assumptions. We emphasize that, in contrast with previous improvements on Sogge's $L^p$ estimates, the assumptions in Theorem~\ref{t:main bound} below are purely dynamical and, moreover, are local in the sense that they depend only on the geodesics passing through a shrinking neighborhood of a given set in $M$. Furthermore, the techniques do not require long-time wave parametrices. } \begin{figure} \centering \includegraphics[width=16cm]{3bumps.pdf} \caption{\label{f:structure} The figure illustrates a function $u$ that saturates the $L^\infty$ bound at three points $x_{\alpha_1},x_{\alpha_2},x_{\alpha_3}$ viewed as a superposition of geodesic beams $v_j$. Each ridge corresponds to a beam $v_j$ and is microsupported on a tube $\mc{T}_j$ of radius $R(h)$. } \end{figure} Theorem~\ref{t:main bound} {below} controls $\|u\|_{L^p(U)}$ using an assumption on the maximal volume of long geodesics joining any two given points in $U$. For our proof, it is necessary to control the number of points in $U$ where the $L^\infty$ norm of $u$ can be large. This is a very delicate and technical part of the argument, as the points in question may be approaching one another at rates $\sim h^\delta$ as $h\to 0^+$, with $0<\delta<\frac{1}{2}$. We {overcome this problem} by developing a second microlocal calculus in Section~\ref{S:co-isotrop} which, after a delicate microlocal argument, yields an uncertainty-type principle controlling the amount of $L^2$ mass shared along short geodesics connecting two nearby points. We expect that additional development of these counting techniques will have many other applications, e.g.\ to estimates on $L^p$ norms with $p\leq p_c$.
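Maximal $R(h)$-separated sets, which underlie the ball covers in Theorem~\ref{t:JeffsFavorite} and in Section~\ref{s:tubes} below, can be produced greedily. The sketch below (our own illustration, run on a sampled 1D interval rather than a manifold) checks the two defining properties: separation of the chosen points, and the covering property that maximality forces.

```python
def maximal_separated(points, sep):
    """Greedily extract a maximal sep-separated subset of `points`."""
    chosen = []
    for p in points:
        if all(abs(p - q) >= sep for q in chosen):
            chosen.append(p)
    return chosen

# sample the interval [0, 1] finely and extract a 0.1-separated set
grid = [k / 1000 for k in range(1001)]
net = maximal_separated(grid, 0.1)

# separation: distinct chosen points are at least sep apart
assert all(abs(a - b) >= 0.1 for i, a in enumerate(net) for b in net[:i])
# maximality implies covering: every sample lies within sep of the net
assert all(min(abs(p - q) for q in net) < 0.1 for p in grid)
```

Both assertions hold by construction: a point is rejected only when it is already within `sep` of an earlier chosen point.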
To state our theorem, we need to introduce a few geometric objects. First, consider {the Hamiltonian function $p \in C^\infty(T^*M{\setminus\{0\}})$,} \[p(x,\xi)=|\xi|_g-1,\] and let $\varphi_t:T^*M\setminus 0 \to T^*M\setminus 0$ denote the Hamiltonian flow for $p$ {at time $t$}. We also define the \emph{maximal expansion rate } and the \emph{Ehrenfest time} at frequency $h^{-1}$ respectively: \begin{equation} \label{e:LambdaMax} \Lambda_{\max}:=\limsup_{|t|\to \infty}\frac{1}{|t|}{\log} \sup_{S^*M}\|d\varphi_t(x,\xi)\|, \qquad T_e(h):=\frac{\log h^{-1}}{2\Lambda_{\max}}, \end{equation} where $\|\cdot\|$ denotes the norm in any metric on $T(T^*M)$. Note that $\Lambda_{\max}\in[0,\infty)$, and if $\Lambda_{\max}=0$ we may replace it by an arbitrarily small positive constant. We next describe a cover of $S^*\!M$ by geodesic tubes. For each $\rho_0\in S^*\!M$, {the co-sphere bundle to $M$}, let $H_{_{\!\rho_0}}\subset M$ be a hypersurface so that $\rho_0\in S\!N^*\!H_{_{\!\rho_0}}$, {the unit conormal bundle to $H_{_{\!\rho_0}}$}. Then, let \[\mc{H}_{_{\!\rho_0}}\;\subset\; T_{H_{_{\!\rho_0}}}^*M=\{(x, \xi)\in T^*M:\; x \in H_{_{\!\rho_0}} \}\] be a hypersurface containing $S\!N^*\!H_{_{\!\rho_0}}$. {Next,} for $q\in \mc{H}_{_{\!\rho_0}}$, {$\tau>0$}, we define {the tube through $q$ of radius $R(h)>0$ and `length' $\tau+R(h)$ as} \begin{equation}\label{e:tubes} \Lambda_{q}^\tau(R(h)):=\bigcup_{|t|\leq \tau +R(h)} \varphi_t(B_{_{\mc{H}_{_{\!\rho_0}}}}\!({q},R(h))), B_{_{\mc{H}_{_{\!\rho_0}}}}\!({q}, R(h)):=\{\rho \in \mc{H}_{_{\!\rho_0}}:\; d(\rho,{q})\leq R(h)\}, \end{equation} {and $d$ is {distance induced by} the Sasaki metric on $T^*\!M$ (See e.g.~\cite[Chapter 9]{BlairSasaki} for a description of the Sasaki metric). {Note that the tube runs along the geodesic through $q\in H_{\rho_0}$. 
Similarly, for $A \subset S^*M$, we define $\Lambda_{A}^\tau(R(h))$ in the same way, replacing $q$ with $A$ in \eqref{e:tubes}.} } \begin{definition} \label{d: cover} Let $A\subset S^*\!M$, $r>0$, and $\{\rho_j(r)\}_{j=1}^{N_r} \subset A$ {for some $N_r>0$}. We say the collection of tubes {$\{\Lambda_{\rho_j}^\tau(r)\}_{j=1}^{N_r}$} is a \emph{$(\tau, r)$-cover} of a set $A\subset S^*\!M$ provided $$\Lambda_A^\tau(\tfrac{1}{2}r) \subset\bigcup_{j=1}^{N_r}\mathcal{T}_j,\qquad \mathcal{T}_j:=\Lambda_{\rho_j}^{\tau}(r).$$ \end{definition} \noindent{Given a $(\tau,r)$ cover $\{\mathcal{T}_j\}_{j\in \J}$ for $S^*\!M$, for each $x\in M$ we define} $$\J_{\!x}:=\{j\in \J:\; \pi(\mathcal{T}_j)\cap B(x,r)\neq \emptyset\}.$$ {We are now ready to state Theorem \ref{t:main bound}}, where we give {\emph{explicit dynamical conditions}} guaranteeing quantitative improvements in $L^p$ norms. \begin{theorem}\label{t:main bound} There exists $\tau\sub{M}>0$ such that for all $p>p_c$ {and {$\varepsilon_0>0$}} the following holds. Let $U \subset M$, \, ${0}<{\delta_1}<{\delta_2}<\frac{1}{2}$ {and let} $h^{{\delta_2}}\leq R(h)\leq h^{{\delta_1}}$ {for all $h>0$}. Let $1\leq T(h)\leq (1-2{\delta_2})T_e(h)$ and let $t_0>0$ be $h$-independent. Let $\{\mathcal{T}_j\}_{j\in \J}$ be a $(\tau, R(h))$ cover for $S^*\!M$ {for some $0<\tau<\tau\sub{M}$}. Suppose that for any pair of points $x_1,x_2\in {U}$, the tubes over $x_1$ can be partitioned into {a disjoint union} $\J_{\! 
x_1}=\mc{B}\sub{x_1,x_2}\sqcup\mc{G}\sub{x_1,x_2}$ where $$ \bigcup_{j\in \mc{G}\sub{x_1,x_2}}\varphi_t(\mathcal{T}_j)\cap S^*_{B(x_2,R(h))}M=\emptyset,\qquad {t\in [t_0,T(h)].}$$ Then, there are $h_0>0$ and $C>0$ so that for all $u\in \mc{D}'(M)$, and $0<h<h_0$, \begin{multline} \label{e:LpestFinal} \|u\|_{L^p({U})}\leq Ch^{-\delta(p)}\Bigg(\frac{\sqrt{t_0}}{\sqrt{T(h)}}+\Big[\sup_{x_1,x_2\in U}|\mc{B}\sub{x_1,x_2}|R(h)^{n-1}\Big]^{{\frac{1}{6+\varepsilon_0}(1-\frac{p_c}{p})}}\Bigg)\\{\times}\Bigg(\|u\|_{L^2}+\frac{T(h)}{h}\|(-h^2\Delta_g-1)u\|_{H_h^{\frac{n-3}{2}-\frac{n}{p}}}\Bigg). \end{multline} \end{theorem} In order to interpret~\eqref{e:LpestFinal}, note that we think of the tubes $\mc{G}\sub{x_1,x_2}$ and $\mc{B}\sub{x_1,x_2}$ as respectively good (or non-looping) and bad (or looping) tubes. Then, observe that $|\mc{B}\sub{x_1,x_2}|R(h)^{n-1}\sim \vol\big(\bigcup_{j\in \mc{B}\sub{x_1,x_2}} \mathcal{T}_j\cap S^*_{x_1}M\big)$, and $\bigcup_{j\in \mc{B}\sub{x_1,x_2}}\mathcal{T}_j$ is the set of points over $x_1$ which may loop through $x_2$ in time $T(h)$. Therefore, if the volume of points in $S^*_{x_1}M$ looping through $x_2$ is bounded by { $T(h)^{{-(3+\varepsilon_0)(1-\frac{p_c}{p})^{-1}}}$},~\eqref{e:LpestFinal} provides {$T(h)^{-\frac{1}{2}}$} improvements over the standard $L^p$ bounds. We expect these {non-looping type} assumptions {to be} valid on generic manifolds. {Theorem~\ref{t:main bound} can be used to obtain improved $L^p$ resolvent bounds~\cite[Theorem 2.21]{Cu20} and, as shown there, these bounds are stable under certain rough perturbations. These estimates in turn can be used to construct complex geometric optics solutions and solve certain inverse problems~\cite{DSFKeCa13}.} As in~\cite[Theorem 5 and Section 5]{CG19a}, the assumptions of Theorem~\ref{t:main bound} can be verified in certain integrable situations with $T(h)\gg \log h^{-1}$, thus producing $o((\log h^{-1})^{-\frac{1}{2}})$ improvements.
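The exponent arithmetic behind the $T(h)^{-\frac12}$ claim can be verified directly (our own check): if $\sup_{x_1,x_2}|\mc{B}\sub{x_1,x_2}|R(h)^{n-1}\leq T(h)^{-(3+\varepsilon_0)(1-\frac{p_c}{p})^{-1}}$, then the bracketed factor in~\eqref{e:LpestFinal} equals $T(h)^{-\frac{3+\varepsilon_0}{6+\varepsilon_0}}$, and $\frac{3+\varepsilon_0}{6+\varepsilon_0}\geq\frac12$ for every $\varepsilon_0\geq0$.

```python
from fractions import Fraction as F

def bracket_exponent(eps0, gamma):
    # looping-volume bound T**(-(3+eps0)/gamma), raised to the power
    # gamma/(6+eps0) from the bracket, with gamma = 1 - p_c/p in (0,1)
    vol_exp = -(3 + eps0) / gamma
    return vol_exp * gamma / (6 + eps0)

for eps0 in [F(0), F(1, 2), F(1), F(5)]:
    for gamma in [F(1, 10), F(1, 2), F(9, 10)]:
        e = bracket_exponent(eps0, gamma)
        assert e == -(3 + eps0) / (6 + eps0)  # the gap gamma cancels exactly
        assert e <= -F(1, 2)                  # at least a T^{-1/2} gain
```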
Moreover, in~\cite{CG19dyn}, we used these types of good and bad tubes to understand averages and $L^\infty${-norms} under various assumptions on $M$, including that it has Anosov geodesic flow or non-positive curvature. Since our results do not require parametrices for the wave-group, we expect that the arguments leading to Theorem~\ref{t:main bound} will provide \emph{polynomial} improvements over Sogge's estimates on manifolds where Egorov type theorems hold for longer than logarithmic times. \begin{remark} The proofs below adapt directly to the case of quasimodes for real principal type semiclassical pseudodifferential operators of Laplace type. That is, to operators with principal symbol $p$ {satisfying both} $\partial_{\xi}p\neq 0$ on $\{p=0\}$ and {that} $\{p=0\}\cap T^*_xM$ has positive definite second fundamental form. {This is the case, for example, for Schr\"odinger operators away from the forbidden region.} However, for concreteness and simplicity of exposition, we have chosen to consider only the Laplace operator. \end{remark} \subsection{Discussion of the proof {of Theorem~\ref{t:main bound}}} {Our method for proving Theorem~\ref{t:main bound} differs from the standard approaches for treating $L^p$ norms in two major ways. It} hinges on adapting the geodesic beam techniques constructed by the authors~\cite{CG19a}, and on the development of a new second-microlocal calculus. We start in Section~\ref{s:tubes} by covering $S^*M$ with tubes of radius $R(h)$. Then, in Section~\ref{s:estNear}, we decompose the function $u$, {whose $L^p$ norm we wish to study}, into geodesic beams i.e. into pieces microlocalized along each of these tubes. We then sort these {beams} into collections which carry $\sim 2^{-k}\|u\|_{L^2}$ mass and study the collections for each $k$ separately. In order to understand the $L^p$ norm of $u$, we next decompose the manifold into balls of radius $R(h)$. 
By constructing a good cover of $M$, we are able to think of the $L^p$ norm of a function on $M$ as the $L^p$ norm of a function on a disjoint union of balls of radius $R(h)$. In each ball, $B$, we are able to apply the methods from~\cite{CG19a} to understand the $L^\infty$ norm of $u$ on $B$ in terms of the number of tubes with mass $\sim 2^{-k}\|u\|_{L^2}$ passing over that ball. To bound the $L^p$ norm with $p<\infty$, it then remains to understand {the number of balls on which} the function $u$ can have a certain $L^\infty$ norm. In Section~\ref{S:k_1, k_2} we first observe that when $u$ has relatively low $L^\infty$ norm on a ball, this ball {can be} neglected by interpolation with Sogge's $L^{p_c}$ estimate. It thus remains to understand {\emph{the number of balls} $B$ on which} the $L^\infty$ norm of $u$ can be large (i.e., close to extremal). In fact, we will show that the number of balls such that $\|u\|_{L^\infty(B(x_\alpha,R(h)))}\geq C h^{\frac{1-n}{2}}/\sqrt{\log h^{-1}}$ is bounded \emph{uniformly} in $h$. That is, there is some number $N$ such that there are at most $N$ such balls for any value of $h>0$. This is the content of Theorem \ref{t:JeffsFavorite} and is proved in Section~\ref{s:JeffsFavorite}. It is in this step that a crucial new ingredient enters. The new method allows us to control the size of the set on which an eigenfunction (or quasimode) can have high $L^\infty$ norm. The method relies on understanding how much $L^2$ mass can be effectively shared along {short geodesics joining} two nearby points in such a way as to produce large $L^\infty$ norm at both points. That is, {if $x_\alpha$ and $x_\beta$ are nearby points on $M$, and} if $|u(x_\alpha)|$ and $|u(x_\beta)|$ are near extremal, how much total $L^2$ mass must the tubes over $x_\alpha$ and $x_\beta$ carry?
In order to understand this sharing phenomenon, we develop a new second microlocal calculus associated to a Lagrangian foliation $L$ over a co-isotropic {submanifold $\Gamma \subset T^*M$}. This calculus allows for simultaneous localization along a leaf of $L$ and along $\Gamma$. The calculus, which is developed in Section~\ref{s:anisotropic}, can be seen as an interpolation between those in~\cite{DyZa} and~\cite{SjZw:99}. It is then the incompatibility between the calculi coming from two nearby points which allows us to control this sharing {of mass}. This incompatibility is demonstrated in Section~\ref{s:uncertainMe} {in the form of an uncertainty principle type of estimate}. Once the number of balls with high $L^\infty$ norm is understood, it remains to employ the non-looping techniques from~\cite{CG19a}, where the $L^2$ mass on a collection of tubes is estimated using its non-looping time (see Section~\ref{s:loopMe}). \subsection{Outline of the paper} In Section~\ref{s:tubes}, we construct the covers of $S^*M$ by tubes and $T^*M$ by balls which are necessary in the rest of the article. Section~\ref{s:mainThm} contains the proof of Theorems~\ref{t:JeffsFavorite} and~\ref{t:main bound}. This proof uses the anisotropic calculus developed in Section~\ref{s:anisotropic} and the almost orthogonality results from Section~\ref{s:uncertainMe}. Section~\ref{s:dynamical} contains the necessary dynamical arguments to prove Theorem~\ref{t:noConj} using Theorem~\ref{t:main bound}. \noindent {\sc Acknowledgements.} The authors are grateful to the National Science Foundation for support under grants DMS-1900519 (Y.C.) and DMS-1502661, DMS-1900434 (J.G.). {Y.C. is grateful to the Alfred P. Sloan Foundation. } \section{Tubes Lemmata} \label{s:tubes} {The next few lemmas are aimed at constructing $(\tau,r)$-good covers and partitions of various subsets of $T^*\!M$ (see also~\cite[Section 3.2]{CG19a}).
\begin{definition}[good covers and partitions] \label{d:good cover} Let $A\subset T^*\!M$, $r>0$, and $\{\rho_j(r)\}_{j=1}^{N_r} \subset A$ {be a collection of points, for some $N_r>0$. Let $\mathfrak{D}$ be a positive integer}. We say that the collection of tubes $\{\Lambda_{\rho_j}^\tau(r)\}_{j=1}^{{N_r}}$ is a \emph{$( \mathfrak{D},\tau, r)$-good cover} of $A\subset T^*\!M$ provided it is a $(\tau,r)$-cover of $A$ and there exists a partition $\{\mathcal{J}_\ell\}_{\ell=1}^{\mathfrak{D}}$ of $\{1, \dots, N_r\}$ so that for every $\ell\in \{1, \dots, \mathfrak{D}\}$ \[ \Lambda_{\rho_j}^\tau (3r)\cap \Lambda_{\rho_i}^\tau(3r)=\emptyset,\qquad i,j\in \mathcal{J}_\ell, \qquad i\neq j. \] In addition, {for $0\leq \delta\leq \frac{1}{2}$ and} $R(h)\geq {8}h^\delta$, we say that a collection $\{\chi_j\}_{j=1}^{N_h}\subset S_\delta(T^*\!M;[0,1])$ is a \emph{$\delta$-good partition for $A$ associated {to a} $(\mathfrak{D},\tau, R(h))$-good cover} if $\{\chi_j\}_{j=1}^{N_h}$ is bounded in $S_\delta$ and $$ \text{(1)} \text{\ensuremath{\supp}} \chi_j \subset {\Lambda_{\rho_j}^\tau(R(h))},\qquad\qquad \text{(2) $\sum_{j=1}^{N_h}\chi_j\geq 1 \;\text{on}\; \Lambda_{A}^{{\tau/2}}(\tfrac{1}{2}R(h)).$} $$ \end{definition}} \begin{remark} {We show below that for any compact Riemannian manifold $M$, there are $\mathfrak{D}_{_{\!M}},R_0,\tau_0>0$, depending only on $(M,g)$, such that for $0<\tau<\tau_0$, $0<r<R_0$, there exists a $(\mathfrak{D}_{_{\!M}},\tau,r)$ good cover for $S^*\!M$.} \end{remark} We start by constructing a useful cover of any Riemannian manifold with bounded curvature. \begin{lemma} \label{l:cover1} Let $\tilde M$ be a compact Riemannian manifold. There exist $\mathfrak{D}_n>0$, depending only on $n$, and $R_0>0$ {depending only on $n$ and a lower bound for the sectional curvature of $\tilde M$}, so that the following holds. 
For $0<r<R_0$, there exist a finite collection of points $\{x_\alpha\}_{\alpha \in \I}\subset \tilde M$, $\I=\{1,\dots, N_r\}$, and a partition $\{\mathcal{I}_i\}_{i=1}^{\mathfrak{D}_n}$ of $\I$ so that \begin{equation*} \begin{gathered} \tilde M\subset \bigcup_{\alpha \in \I}B(x_{\alpha},r), \qquad\qquad B(x_{\alpha_1},3r)\cap B(x_{\alpha_2},3{r})=\emptyset, \qquad \alpha_1,\alpha_2\in \mathcal I_i,\quad \alpha_1 \neq \alpha_2,\\ \text{$\{x_\alpha\}_{\alpha \in \I}$ is an $\frac{r}{2}$ maximal separated set in $\tilde M$.} \end{gathered} \end{equation*} \end{lemma} \begin{proof} Let $\{x_\alpha\}_{\alpha \in \I}$ be a maximal $\frac{r}{2}$ separated set in $\tilde M$. Fix $\alpha_0 \in \I $ and suppose that $B(x_{\alpha_0},3{r})\cap B(x_\alpha,3{r})\neq \emptyset$ for all {$\alpha\in \mathcal K_{{{\alpha_0}}} \subset \I$}. Then for all ${\alpha \in \mathcal K_{{{\alpha_0}}}}$, $B(x_\alpha,\tfrac{{r}}{2})\subset B(x_{\alpha_0},8{{r}}).$ In particular, $$ \sum_{{\alpha\in \mathcal K_{{{\alpha_0}}}}}\vol(B(x_\alpha,\tfrac{{{r}}}{2}))\leq \vol(B(x_{\alpha_0},8{{r}})). $$ Now, there {exist} ${R_0}>0$ depending on {$n$ and} a lower bound on the {sectional} curvature of $\tilde M$, {and $\mathfrak{D}_n>0$ depending only on $n$, so that {for all $0<{r}<{R_0}$}}, \begin{equation}\label{e:asterix} \vol(B(x_{\alpha_0},8{{r}}))\leq \vol(B(x_\alpha,14{{r}}))\leq \mathfrak{D}_{n}\vol(B(x_\alpha,\tfrac{{{r}}}{2})). \end{equation} Hence, it follows from \eqref{e:asterix} that $$ \sum_{{\alpha\in \mathcal K_{{{\alpha_0}}}}}\vol(B(x_\alpha,\tfrac{{r}}{2}))\leq \vol(B(\rho_{\alpha_0},8{r}))\leq \frac{{\mathfrak{D}_n}}{{| \mathcal K_{{{\alpha_0}}}|}}\sum_{{\alpha\in \mathcal K_{{{\alpha_0}}}}}\vol(B(x_\alpha,\tfrac{{{r}}}{2})). $$ In particular, $|\mathcal K_{{{\alpha_0}}}|\leq {\mathfrak{D}_n}$. At this point we have proved that each of the balls $B(x_\alpha,3r)$ intersects at most ${\mathfrak{D}_n}-1$ other balls. 
We now construct the sets $\mathcal I_1,\dots, \mathcal I_{{\mathfrak{D}_n}}$ using a greedy algorithm. We will say that {the index $\alpha_1$ \emph{intersects} the index $\alpha_2$} if $$ B(x_{\alpha_1},3r)\cap B(x_{\alpha_2},3r)\neq \emptyset. $$ We place the index $1\in \mathcal I_1$. Then suppose we have placed the indices $\{1,\dots, \alpha\}$ in $\mathcal I_1,\dots, \mathcal I_{{\mathfrak{D}_n}}$ so that each of the $\mathcal I_i$'s consists of pairwise non-intersecting indices. Then, since $\alpha+1$ intersects at most ${\mathfrak{D}_n}-1$ indices, it is disjoint from $\mathcal I_i$ for some $i$. We add the index $\alpha+1$ to $\mathcal I_i$. By induction we obtain the partition $\mathcal I_1,\dots ,\mathcal I_{{\mathfrak{D}_n}}$. Now, suppose that there exists $x \in \tilde M$ so that {$x \notin \bigcup_{\alpha \in \I} B(x_{\alpha},r)$}. Then, $ \min_{\alpha \in \I} d(x,x_{\alpha})\geq r, $ contradicting the maximality of the $\frac{r}{2}$-separated set $\{x_{\alpha}\}_{\alpha\in\I}$. \end{proof} In order to construct our microlocal partition, we first fix a smooth hypersurface $H\subset M$, and choose Fermi normal coordinates $x=(x_1,x')$ in a neighborhood of $H=\{x_1=0\}$. We write $(\xi_1, \xi') \in T_x^*M$ for the dual coordinates. Let \begin{equation} \label{e:hyp} \Sigma_{_{\!H}}:=\Big\{(x, \xi)\in S^*\sub{H}M \big |\;\, |\xi_1|\geq \tfrac{1}{2} \Big\} \end{equation} We then consider \begin{equation} \label{e:hyp2} \mathcal{H}\sub{\Sigma_{_{\!H}}}:=\{(x, \xi)\in {T^*\sub{H}M} \mid\;\;\; |\xi_1|\geq \tfrac{1}{2},\;\; \tfrac{1}{2}<|\xi|_{g(x)}<\tfrac{3}{2} \}. \end{equation} Then, $\mc{H}\sub{\Sigma_{_{\!H}}}$ is transverse to the geodesic flow and there is $0<\tau_{_{\!\text{inj}H}}<1$ so that the map \begin{equation} \label{e:psi} \Psi:[-\tau_{_{\!\text{inj}H}},\tau_{_{\!\text{inj}H}}]\times \mc{H}\sub{\Sigma_{_{\!H}}} \to T^*\!M, \qquad \qquad \Psi(t,\rho):=\varphi_t(\rho), \end{equation} is injective.
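The greedy argument in the proof of Lemma~\ref{l:cover1} is, in effect, a graph colouring: once each ball intersects at most $\mathfrak{D}_n-1$ others, $\mathfrak{D}_n$ classes always suffice. A small sketch (our own illustration, using intervals on the line in place of the balls $B(x_\alpha,3r)$; the centres and $\mathfrak{D}$ below are made-up test data):

```python
def greedy_partition(n_items, intersects, D):
    """Assign each index a class in {0,...,D-1} so that intersecting
    indices never share a class; succeeds whenever every index
    intersects at most D-1 others."""
    classes = [None] * n_items
    for a in range(n_items):
        used = {classes[b] for b in range(a) if intersects(a, b)}
        classes[a] = next(c for c in range(D) if c not in used)
    return classes

# centres of a separated set on the line; B(x,3r) with r = 0.2 meet
# exactly when the centres are closer than 6r
centers = [0.0, 0.6, 1.3, 1.7, 2.5, 3.4]
r = 0.2
overlap = lambda a, b: abs(centers[a] - centers[b]) < 6 * r

classes = greedy_partition(len(centers), overlap, D=4)
for a in range(len(centers)):
    for b in range(a):
        if overlap(a, b):
            assert classes[a] != classes[b]  # same class => disjoint balls
```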
{Our next lemma shows that there is $\mathfrak{D}_n>0$ depending only on $n$ such that one can construct a $(\mathfrak{D}_n,\tau, r)$-good cover of $\Sigma_{_{\!H}}$.} \begin{lemma} \label{l:cover} There exist $\mathfrak{D}_{n}>0$ depending only on $n$, ${R_0=R_0(n,H)}>0$, such that for $0<{r_1}<{R_0}$, $0<r_0\leq \frac{{r_1}}{2} $, there exist {points} $\{\rho_j\}_{j=1}^{N_{r_1}}\subset \Sigma_{_{\!H}}$ {and a partition $\{\mathcal J_{i}\}_{i=1}^{\mathfrak{D}_{n}}$ of $\{1,\dots, N_{r_1}\}$ so that for all $0<\tau<\frac{\tau_{_{\!\text{inj}H}}}{2}$ \begin{center} \begin{itemize*} \item $\Lambda^\tau_{_{\!\hyp}}(r_0)\subset \bigcup_{j=1}^{N_{r_1}}\Lambda_{_{\rho_j}}^\tau({r_1}),\qquad\qquad$ \item $\Lambda_{_{\rho_j}}^\tau(3{r_1})\cap \Lambda_{_{\rho_\ell}}^\tau(3{r_1})=\emptyset, \qquad j,\ell\in \mathcal J_i,\quad j\neq \ell.$ \end{itemize*} \end{center} } \end{lemma} \begin{proof} We first apply Lemma~\ref{l:cover1} to ${\tilde M=\Sigma_{_{\!H}}}$ to obtain $R_0>0$ depending only on $n$ {and the sectional curvature of $H$ {and that of $M$ near $H$},} so that for $0<r_1<R_0$, there exist $\{\rho_j\}_{j=1}^{N_{r_1}}\subset \Sigma_{_{\!H}}$ and a partition $\{\mathcal{J}_i\}_{i=1}^{\mathfrak{D}_n}$ of $\{1,\dots,N_{r_1}\}$ such that \begin{equation*} \begin{gathered} \Sigma_{_{\!H}}\subset \bigcup_{j=1}^{{N_{r_1}}}B(\rho_j,{r_1}),\qquad\qquad B({\rho_j},{3r_1})\cap B(\rho_\ell,3{r_1})=\emptyset, \qquad j,\ell\in \mathcal J_i,\quad j\neq \ell,\\ \text{$\{\rho_j\}_{j=1}^{{N_{r_1}}}$ is {an} ${\frac{r_1}{2}}$ maximal {separated set} in $\Sigma_{_{\!H}}$.} \end{gathered} \end{equation*} Now, {suppose that} $j,\ell\in \mc{J}_i$ and $$ \Lambda_{_{\rho_\ell}}^\tau(3{r_1})\cap \Lambda_{_{\rho_{{j}}}}^\tau(3{r_1})\neq \emptyset. $$ Then, there exist $q_\ell\in B(\rho_\ell,3{r_1})\cap\mc{H}_{\Sigma_{{H}}}$, $q_{{j}}\in B(\rho_{{j}},3{r_1})\cap\mc{H}_{\Sigma_{{H}}}$ and {$ t_\ell,{t_j}\in[-\tau, \tau]$} so that $ \varphi_{_{\!t_\ell-t_{{j}}}}(q_\ell)=q_{{j}}. 
$ {Here, $\mc{H}_{\Sigma}$ is the hypersurface defined in \eqref{e:hyp2}}. In particular, {for $\tau<\tau_{_{\!\text{inj}H}}/2$}, this implies that $q_\ell=q_{{j}}$, $t_\ell=t_{{j}}$ and hence $B(\rho_\ell,3{r_1})\cap B(\rho_{{j}},3{r_1})\neq \emptyset$ a contradiction. Now, suppose $r_0\leq {r_1}$ and that there exists $\rho \in \Lambda^\tau_{_{\!\hyp}}(r_0)$ so that {$\rho \notin \bigcup_{j=1, \dots, N_{r_1}} \Lambda_{_{\rho_j}}^\tau(r_1)$}. Then, there are $|t|<\tau+r_0$ and $q\in \mc{H}_{\Sigma_{{H}}}$ so that $$ \rho=\varphi_t(q),\qquad d(q,\Sigma_{_{\!H}})<r_0,\qquad \min_{{j=1, \dots, N_{r_1}}}d(q, \rho_j)\geq {r_1}. $$ In particular, {there exists $\tilde{\rho}\in \Sigma_{_{\!H}}$ with $d(q, \tilde \rho)<r_0$ such that for all $j=1, \dots, N_{r_1}$}, $$ d(\tilde{\rho},\rho_j)\geq d(q,\rho_j)-d(q,\tilde{\rho})>{r_1}-r_0. $$ This contradicts the maximality of $\{\rho_j\}_{j=1}^{N_{r_1}}$ if $r_0\leq {r_1}/2$. \end{proof} {We proceed to build a $\delta$-good partition of unity associated to the cover we constructed in Lemma \ref{l:cover}. {The key feature in this partition {is} that it is invariant under the {geodesic} flow. Indeed, the partition is built so that its quantization commutes with the operator ${P=-h^2\Delta-I}$ in a neighborhood of $\Sigma_{_{\!H}}$.}} \begin{proposition}\label{l:nicePartition} There exist $\tau_1=\tau_1(\tau_{_{\!\text{inj}H}})>0$ and $\varepsilon_1=\varepsilon_1(\tau_1)>0$, and given $0<\delta<\tfrac{1}{2}$,{ $0<\varepsilon\leq \varepsilon_1$}, there exists $h_1>0$, so that for any $0<\tau\leq \tau_1$, and $R(h)\geq 2h^\delta$, the following holds. 
There exists $C_1>0$ so that for all $0<h\leq h_1$ and every $(\tau,R(h))$-cover of $\Sigma_{_{\!H}}$ there exists a partition of unity $\chi_j\in S_\delta\cap C^\infty_c(T^*\!M ;[-C_1h^{1-2\delta},1+C_1h^{1-2\delta}])$ on $\Lambda^\tau_{_{\!\hyp}}({\frac{1}{2}R(h)})$ for which $$ \begin{gathered} \text{\ensuremath{\supp}} \chi_j\subset \Lambda_{\rho_j}^{\tau+{\varepsilon}}(R(h)),\qquad \operatorname{MS_h}([P,Op_h(\chi_j)])\cap \Lambda^\tau_{_{\!\hyp}}({\varepsilon})=\emptyset,\\ {\sum_{j}{\chi_j}\equiv 1 \text{\;on \;} {\Lambda^\tau\sub{\Sigma_{H}}(\tfrac{1}{2}R(h))},} \end{gathered} $$ and $\{\chi_j\}_j$ is bounded in $S_\delta$, and $[-h^2\Delta_g,Op_h(\chi_j)]$ is bounded in $\Psi_\delta$. \end{proposition} {\begin{proof} The proof is identical to that of~\cite[Proposition 3.4]{CG19a}.{ Although the claim that $\sum_{j}{\chi_j}\equiv 1$ on $ {\Lambda^\tau\sub{\Sigma_{H}}(\tfrac{1}{2}R(h))}$ does not appear in its statement, it is established in its proof.} \end{proof}} \section{Proof of Theorem \ref{t:main bound}} \label{s:mainThm} For each $q \in S^*\!M$, choose {a hypersurface} $H_q \subset M$ with $q \in S\!N^*\!H_q$ and $\tau_{_{\inj\! H_{q}}}>\frac{\inj(M)}{2}$, where {$\tau_{_{\inj\! H_{q}}}$ is defined in \eqref{e:psi} and} $\inj(M)$ is the injectivity radius of $M$. {We next use Lemma~\ref{l:cover} to generate a cover of $\Sigma\sub{H_q}$. Lemma~\ref{l:cover} yields the existence of $\mathfrak{D}_n>0$ depending only on $n$ and ${R_0=R_0(n,H_q)}>0$, such that the following holds. Since by assumption $R(h)\leq h^{\delta_{{1}}}$, there is $h_0>0$ such that $ h^{\delta_2}\leq R(h) \leq R_0$ for all $0<h<h_0$.} Also, {set $r_1:=R(h)$ and $r_0:=\tfrac{1}{2}R(h)$}. Then, by Lemma~\ref{l:cover} there exist $N\sub{R(h)}\!=N\sub{R(h)}(q,R(h))>0$, $\{\rho_j\}_{j\in \J_{\! q}}\subset \Sigma_{_{\!H_q}}$ with ${\J_{\! q}}= \{1,\dots, N\sub{R(h)}\}$, and a partition $\{\mathcal J_{q,i}\}_{i=1}^{{\mathfrak{D}_n}}$ of $\J_{\!
q}$, so that for all $0<\tau<\frac{\tau_{_{\inj\! H_{q}}}}{2}$ \begin{align} &\bullet\; \Lambda^\tau_{_{\!\hypq}}({\tfrac{1}{2}R(h)})\subset \bigcup_{j\in \J_{\! q}}\Lambda_{_{\rho_j}}^\tau({R(h)}), \label{e:cover}\\ &\bullet\; \bigcup_{i=1}^{{\mathfrak{D}_n}}{\mathcal J_{q,i}}=\J_{\! q}, \label{e:partition}\\ &\bullet\; \Lambda_{_{\rho_{j_1}}}^\tau(3{R(h)})\cap \Lambda_{_{\rho_{j_2}}}^\tau(3{R(h)})=\emptyset, \qquad {j_1},{j_2}\in \mathcal J_{q,i},\quad {j_1}\neq {j_2}. \label{e:disjoint partition} \end{align} By \eqref{e:cover} there is an $h$-independent open neighborhood of $q$, $V_q \subset S^*\!M$, covered by tubes as in Lemma~\ref{l:cover}. Since $S^*\!M$ is compact, we may choose $\{q_\ell\}_{\ell=1}^L$ with $L$ independent of $h$, so that $S^*\!M\subset \cup_{\ell=1}^L V_{q_\ell}$. In particular, if $0<\tau \leq \min_{1\leq\ell\leq L} \tau_{_{\!H_{q_\ell}}}$, and for each $\ell\in \{1, \dots, L\}$ we let $$ \mathcal{T}_{q_\ell,j}= \Lambda_{_{\rho_j}}^\tau({R(h)}),$$ then there is $\mathfrak{D}_{_{\!M}}>0$ such that $\bigcup_{\ell=1}^L\{{\mathcal{T}_{q_\ell,j}}\}_{j\in \J_{q_\ell}}$ is a $(\mathfrak{D}_{_M},\tau,R(h))$-good cover for $S^*\!M$. Let $\{{\psi_{q_\ell}}\}_{\ell=1}^L \subset C_c^\infty(T^*\!M)$ satisfy $$ \begin{gathered} \text{\ensuremath{\supp}}\psi_{q_\ell}\subset \{(x,\xi)\in T^*\!M\setminus\!\{0\}\mid \; \big(x, \tfrac{\xi}{|\xi|_g}\big)\in V_{q_\ell}\} \qquad \forall \ell=1, \dots, L,\\ \sum_{\ell=1}^L\psi_{q_\ell}\equiv 1 \text{ in an $h$-independent neighborhood of }S^*\!M. \end{gathered} $$ {We split the analysis of $u$ into two parts: near and away from the characteristic variety $\{p=0\}=S^*M$. In what follows we use $C$ to denote a positive constant that may change from line to line.} \subsection{It suffices to study $u$ near the characteristic variety} \label{s:estAway} In this section we reduce the study of $\|u\|_{L^p(U)}$ to an $h$-dependent neighborhood of the characteristic variety $\{p=0\}=S^*M$.
We will use repeatedly the following result. \begin{lemma}\label{L:lp bound} For all $\varepsilon>0$ and all $p\geq 2$, there exists $C>0$ such that \begin{equation} \label{l:basicLp} \|u\|_{L^p}\leq C h^{n(\frac{1}{p}-\frac{1}{2})}\|u\|_{H_h^{n(\frac{1}{2}-\frac{1}{p})+\varepsilon}}. \end{equation} \end{lemma} \begin{proof} By~\cite[Lemma 6.1]{GDefect} (or more precisely its proof), for any $\varepsilon>0$, there exists $C_\varepsilon\geq 1$ so that $ \|\operatorname{Id}\|_{H_h^{\frac{n}{2}+\varepsilon}\to L^\infty}\leq C_\varepsilon h^{-\frac{n}{2}}. $ By complex interpolation of $\operatorname{Id}:L^2\to L^2$ and $\operatorname{Id}:H_h^{\frac{n}{2}+\varepsilon}\to L^\infty$ with $\theta=\frac{2}{p}$ we obtain $ \|\operatorname{Id}\|_{H_h^{(\frac{n}{2}+\varepsilon)(1-\theta)}\to L^p} \leq C_\varepsilon^{1-\theta}h^{-\frac{n}{2}(1-\theta)}, $ and this yields \eqref{l:basicLp}. \end{proof} Observe that $$ u=\sum_{\ell=1}^L Op_h(\psi_{q_\ell})u + \Big(1-\sum_{\ell=1}^L Op_h(\psi_{q_\ell})\Big)u. $$ Note that since $1-\sum_{\ell=1}^L\psi_{q_\ell}{=} 0$ in an $h$-independent neighborhood of $S^*M=\{p=0\}$, by the standard elliptic parametrix construction (e.g.~\cite[Appendix E]{ZwScat}) there is $E\in \Psi^{-2}(M)$ with \begin{equation}\label{e:S0 parametrix} 1-\sum_{\ell=1}^L Op_h(\psi_{q_\ell})=E{P}+O(h^\infty)_{\Psi^{-\infty}}. \end{equation} Next, combining \eqref{e:S0 parametrix} with Lemma \ref{L:lp bound}, and using that $h^{n(\frac{1}{p}-\frac{1}{2})}=h^{-\delta(p)+\frac{1}{2}} h^{-1}$, we have \begin{align}\label{e:hugo} \Big\|\Big(1-\sum_{\ell=1}^L Op_h(\psi_{q_\ell})\Big)u\Big\|_{L^p} &\leq C h^{n(\frac{1}{p}-\frac{1}{2})}\|E{P}u\|_{H_h^{n(\frac{1}{2}-\frac{1}{p})+\e}}+O(h^\infty)\|u\|_{L^2} \notag\\ &\leq Ch^{-\delta(p)+\frac{1}{2}} h^{-1}\|{P}u\|_{H_h^{n(\frac{1}{2}-\frac{1}{p})+\e-2}}+O(h^\infty)\|u\|_{L^2}. \end{align} It remains to understand the terms $Op_h(\psi_{q_\ell})u$. 
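For the reader's convenience, we record the exponent arithmetic behind the last two estimates; this is a direct check, using that $\delta(p)=\frac{n-1}{2}-\frac{n}{p}$ for $p\geq p_c$. In the proof of Lemma \ref{L:lp bound}, interpolation with $\theta=\frac{2}{p}$ produces the Sobolev index
$$
\Big(\tfrac{n}{2}+\varepsilon\Big)(1-\theta)=n\Big(\tfrac{1}{2}-\tfrac{1}{p}\Big)+\varepsilon\Big(1-\tfrac{2}{p}\Big)\leq n\Big(\tfrac{1}{2}-\tfrac{1}{p}\Big)+\varepsilon,
\qquad
h^{-\frac{n}{2}(1-\theta)}=h^{n(\frac{1}{p}-\frac{1}{2})}.
$$
Moreover,
$$
n\Big(\tfrac{1}{p}-\tfrac{1}{2}\Big)=-\Big(\tfrac{n-1}{2}-\tfrac{n}{p}\Big)-\tfrac{1}{2}=-\delta(p)+\tfrac{1}{2}-1,
$$
which gives the identity $h^{n(\frac{1}{p}-\frac{1}{2})}=h^{-\delta(p)+\frac{1}{2}}h^{-1}$ used in \eqref{e:hugo}.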
Since there are finitely many such terms, \begin{equation}\label{e:paco} \Big\|\sum_{\ell=1}^L Op_h(\psi_{q_\ell})u\Big\|_{L^p}\leq \sum_{\ell=1}^L \|Op_h(\psi_{q_\ell})u\|_{L^p}, \end{equation} and we may consider each term $\|Op_h(\psi_{q_\ell})u\|_{L^p}$ individually. {By Proposition \ref{l:nicePartition}, for each $\ell=1,\dots, L$ there exist $\tau_1(q_\ell)>0$, $\varepsilon_1(q_\ell)>0$, and a family of cut-offs $\{\tilde{\chi}\sub{\mathcal{T}_{q_\ell,j}}\}_{j\in \J_{q_\ell}}$, with $\tilde\chi\sub{\mathcal{T}_{q_\ell,j}}$ supported in $\Lambda_{\rho_j}^{\tau+{\varepsilon_1(q_\ell)}}(R(h))$ and such that for all $0<\tau<\tau_1(q_\ell)$ \begin{equation}\label{E:ellipticity condition} \sum_{j\in \J_{\! q_\ell}}\tilde{\chi}\sub{\mathcal{T}_{q_\ell,j}}{\equiv} 1\qquad \text{on} \quad {\Lambda^\tau\sub{\Sigma_{H_{q_\ell}}}(\tfrac{1}{2}R(h))}. \end{equation}} {Let $\tau_0(q_\ell)$ be given by~\cite[Theorem 8]{CG19a}. Then, set $$ \tau_{_{\!M}}:=\min_{1\leq \ell \leq L} \Big\{\tfrac{\inj(M)}{4},\, \tau_0(q_\ell),\,\tau_1(q_\ell),\, \tfrac{1}{2}\tau_{_{\inj\! H_{q_\ell}}}\Big\}. $$ From now on we work with tubes $\mathcal{T}_{q_\ell,j}=\Lambda_{_{\rho_j}}^\tau({R(h)})$ for some $0<\tau<\tau_{_{\!M}}$. Next, we localize $u$ near and away from $\Lambda^\tau\sub{\Sigma_{H_{q_\ell}}}(h^\delta)$:} $$ Op_h(\psi_{q_{\ell}})u=\sum_{{j\in \J_{\! q_\ell}}}Op_h(\tilde \chi\sub{\mathcal{T}_{q_\ell,j}})Op_h(\psi_{q_{\ell}})u+\Big(1-\sum_{j\in \J_{\! q_\ell}}Op_h(\tilde \chi\sub{\mathcal{T}_{q_\ell,j}})\Big)Op_h(\psi_{q_\ell})u. $$ \begin{remark} \label{r:geodesicBeams} We refer to functions of the form $Op_h(\tilde{\chi}\sub{\mathcal{T}_{q_\ell,j}})u$ as \emph{geodesic beams}. One can check, using Proposition~\ref{l:nicePartition}, that if $u$ solves $Pu=O(h)$, then the geodesic beams also solve $Pu=O(h)$ and are localized to an $R(h)$-neighborhood of a length$\sim$1 segment of a geodesic.
\end{remark} In particular, by \eqref{E:ellipticity condition}, {$\tfrac{1}{2}R(h)\geq {\frac{1}{2}h^{\delta_2}}$}, and \cite[Lemma 3.6]{CG19a}, there is $E\in h^{-{\delta_2}}\Psi_{{\delta_2}}^{\operatorname{comp}}$ so that \begin{equation}\label{e:Sdelta parametrix} \Big(1-\sum_{j\in \J_{\! q_\ell}}Op_h(\tilde{\chi}\sub{\mathcal{T}_{q_\ell,j}})\Big)Op_h(\psi_{q_{\ell}})=E{P}+O_{\Psi^{-\infty}}(h^\infty). \end{equation} Since $h^{n(\frac{1}{p}-\frac{1}{2})-\delta_{{2}}}=h^{-\delta(p)+\frac{1}{2}-\delta_{{2}}}h^{-1}$, combining \eqref{e:Sdelta parametrix} with Lemma \ref{L:lp bound} yields \begin{align}\label{e:luis} \Big\|\Big(1-\sum_{j\in \J_{\! q_\ell}} Op_h(\tilde{\chi}\sub{\mathcal{T}_{q_\ell,j}})\Big)Op_h(\psi_{q_{\ell}})u\Big\|_{L^p} \leq Ch^{-\delta(p)-\frac{1}{2}-\delta_{{2}}}\|{P}u\|_{H_h^{n(\frac{1}{2}-\frac{1}{p})+\e-2}}+O(h^\infty)\|u\|_{L^2}. \end{align} {Combining \eqref{e:hugo}, \eqref{e:paco} and \eqref{e:luis} we have proved that for $U\subset M$ \begin{align}\label{e:rico} \|u\|_{L^p(U)} &\leq \sum_{\ell=1}^L\Big\| \sum_{j\in \J_{\! q_\ell}}Op_h(\tilde{\chi}\sub{\mathcal{T}_{q_\ell,j}})Op_h(\psi_{q_\ell})u\Big\|_{L^p(U)} \notag\\ &\qquad \qquad \qquad+Ch^{-\delta(p)+\frac{1}{2}-\delta_{{2}}} h^{-1}\|{P}u\|_{H_h^{n(\frac{1}{2}-\frac{1}{p})+\e-2}}+O(h^\infty)\|u\|_{L^2}. \end{align}} \subsection{Filtering tubes by $L^2$-mass} \label{s:estNear} By \eqref{e:rico} it only remains to control terms of the form $\|\sum_{j\in \J_{\!q_\ell}}Op_h(\tilde{\chi}\sub{\mathcal{T}_{q_\ell,j}})Op_h(\psi_{q_\ell})u\|_{L^p}$, where $u$ is localized to $V_{q_\ell}$ within the characteristic variety $S^*M$ and, more importantly, to the tubes $\mathcal{T}_{q_\ell,j}$. We fix $\ell$ and, abusing notation slightly, write \begin{equation} \label{e:individualized} \begin{gathered} \psi:=\psi_{q_{\ell}},\qquad \J=\J_{\! 
q_\ell}, \qquad {\mathcal{T}_j=\mathcal{T}_{q_\ell,j}},\qquad \tilde\chi\sub{\mathcal{T}_j}:=\tilde\chi\sub{\mathcal{T}_{q_\ell,j}},\\ v:=\sum_{j\in \J}Op_h(\tilde{\chi}\sub{\mathcal{T}_j})Op_h(\psi)u. \end{gathered} \end{equation} Let $T=T(h)\geq 1$. For each $j\in \J$ let \begin{equation}\label{e:chiNoTilde} \chi\sub{\mathcal{T}_j}\in C_c^\infty(T^*M;[0,1]){\cap S_\delta} \end{equation} be a smooth cut-off function with $\text{\ensuremath{\supp}} \chi\sub{\mathcal{T}_j}\subset \mathcal{T}_j$, $\chi\sub{\mathcal{T}_j}\equiv 1$ on $\text{\ensuremath{\supp}} \tilde{\chi}\sub{\mathcal{T}_j}$, {and such that $\{\chi\sub{\mathcal{T}_j}\}_j$ is bounded in $S_\delta$}. We shall work with the modified norm $$ \|u\|\sub{P,T}:=\|u\|_{L^2}+\tfrac{T}{h}\|Pu\|_{L^2}. $$ Note that this norm is the natural norm for obtaining $T^{-\frac{1}{2}}$ improved estimates in $L^p$ bounds since the fact that $u$ is an $o(T^{-1}h)$ quasimode implies, roughly, that $u$ is an accurate solution to $(hD_t+P)u=0$ for times $t\leq T$. For each integer $k\geq -1$ we consider the set \begin{equation}\label{E:A_k} \A_k=\Big\{j\in \J: \;\; \;\frac{1}{2^{k+1}}\|u\|\sub{P,T}\; \leq \|Op_h(\chi\sub{\mathcal{T}_j})u\|_{L^2}+h^{-1}\|Op_h(\chi\sub{\mathcal{T}_j})Pu\|_{L^2}\leq\; \frac{1}{2^{k}}\|u\|\sub{P,T}\Big\}. \end{equation} {It follows that $\A_k$ consists of those tubes $\mathcal{T}_j$ on which $u$ has $L^2$ mass comparable to $2^{-k}\|u\|\sub{P,T}$.} Observe that since $|\chi\sub{\mathcal{T}_j}|\leq 1$, {for $h$ small enough depending on finitely many seminorms of $\chi\sub{\mathcal{T}_j}$,} $ \|Op_h(\chi\sub{\mathcal{T}_j})\|_{L^2\to L^2}\leq 2. $ In particular, this, together with $T\geq 1$, implies that \begin{equation}\label{E:set J} \J=\bigcup_{k\geq -1}\A_k. \end{equation} \begin{lemma}\label{L:|A_k| bound} There exists $C_n>0$ so that for all $k \geq -1$ \begin{equation}\label{e:|A_k| bound} |\A_k| \leq C_n 2^{2k}.
\end{equation} \end{lemma} \begin{proof} According to \eqref{e:partition}, the collection $\{\mathcal{T}_j\}_{j \in \J}$ can be partitioned into $\mathfrak{D}_n$ sets of disjoint tubes. {Thus,} we have $\sum_{j\in \J} |\chi\sub{\mathcal{T}_j}|^2\leq \mathfrak{D}_n$ and there is $C_n>0$ depending only on $n$ such that $$ \Big\|\sum_{{j\in \J}} Op_h(\chi\sub{\mathcal{T}_j})^*Op_h(\chi\sub{\mathcal{T}_j})\Big\|_{L^2\to L^2}\leq C_n. $$ In particular, { $$ \sum_{{j\in \J}} \|Op_h(\chi\sub{\mathcal{T}_j})u\|^2_{L^2}\leq C_n\|u\|_{L^2}^2 \qquad \text{and} \qquad \sum_{{j\in \J}} \|Op_h(\chi\sub{\mathcal{T}_j})Pu\|^2_{L^2}\leq C_n\|Pu\|_{L^2}^2 . $$ } Therefore, \begin{align*} |\A_k|2^{-2k-2}\|u\|\sub{P,T}^2 &\leq 2\Big(\sum_{j\in \A_k} \|Op_h(\chi\sub{\mathcal{T}_j})u\|^2_{L^2} +h^{-2}\|Op_h(\chi\sub{\mathcal{T}_j})Pu\|_{L^2}^2\Big) \leq C_n\|u\|\sub{P,T}^2. \end{align*} \vspace{-1cm} \end{proof} Next, let \begin{equation}\label{e:w_k} w_k:=\sum_{j\in \A_k}Op_h(\tilde{\chi}\sub{\mathcal{T}_j}){Op_h(\psi)}u. \end{equation} Then, {by \eqref{e:individualized} and \eqref{E:set J} we have} \begin{equation}\label{e:v} v=\sum_{k={-1}}^\infty w_k. \end{equation} The goal is therefore to control $\|w_k\|_{L^p(U)}$ for each $k$ since the triangle inequality yields $$ \|v\|_{L^p(U)}\leq \sum_{k=-1}^\infty \|w_k\|_{L^p(U)}. 
$$ \subsection{Filtering tubes by $L^\infty$ weight on shrinking balls} By Lemma~\ref{l:cover1}, there are points $\{x_{\alpha}\}_{\alpha\in \I}\subset M$ and a partition $\{\I_i\}_{i=1}^{\mathfrak{D}_n}$ of $\I$ so that \begin{itemize} \item $M\subset \bigcup_{\alpha \in \I}B(x_{\alpha},R(h)),$ \medskip \item $B(x_{\alpha_1},3R(h))\cap B(x_{\alpha_2},3R(h))=\emptyset, \qquad \alpha_1,\alpha_2\in \mathcal I_i,\quad \alpha_1\neq \alpha_2.$ \end{itemize} Then, for $m \in \mathbb Z$ define \begin{equation}\label{E: I_km} \I_{k,m}:=\Big\{\alpha \in \I\sub{U}: \;\;\; 2^{m-1}\leq h^{\frac{n-1}{2}}R(h)^{\frac{1-n}{2}}2^k\frac{ \|w_k\|_{L^\infty(B(x_\alpha,R(h)))}}{\|u\|\sub{P,T}}\leq 2^m\Big\}, \end{equation} where ${\I\sub{U}:=\{\alpha \in \I:\; B(x_{\alpha},R(h))\cap U\neq \emptyset\}.}$ For each integer $k\geq -1$ and $\alpha \in \I$ consider the sets \[ {\A_{k}(\alpha)}:=\{j\in \A_k:\; {\pi\sub{M}}(\mathcal{T}_j)\cap B(x_{\alpha},{2}R(h))\neq \emptyset\}, \] {where $\pi\sub{M}:T^*M \to M$ is the standard projection.} \noindent The indices in $\A_k$ are those that correspond to tubes with mass comparable to $\tfrac{1}{2^{k}}\|u\|\sub{P,T}$, while indices in $\A_{k}(\alpha)$ correspond to tubes of mass comparable to $\tfrac{1}{2^{k}}\|u\|\sub{P,T}$ that run over the ball $B(x_{\alpha}, {2}R(h))$. In particular, Lemma \ref{L:|A_k| bound} and~\cite[Lemma 3.7]{CG19a} yield the existence of $C_n, c_{_{\!M}}>0$ such that \begin{equation} \label{e:bound1} c_{_{\!M}}2^m\leq |\A_{k}(\alpha)|\leq C_n 2^{2k},\qquad \alpha \in \I_{k,m}. \end{equation} {Indeed, for $\alpha \in \I_{k,m}$, \begin{equation}\label{e:lb infty} 2^{m-1}h^{\frac{1-n}{2}}R(h)^{\frac{n-1}{2}}2^{-k}\|u\|\sub{P,T}\leq \|w_k\|_{L^\infty(B(x_\alpha,R(h)))}.
\end{equation} In addition, \eqref{E:A_k} and \cite[Lemma 3.7]{CG19a} imply that there exist $c\sub{M}>0$, $\tau\sub{M}>0$, and $C_n>0$, depending on $M$ and $n$ respectively, such that for all $N>0$ there exists $C\sub{N}>0$ with \begin{align*} &\|w_k\|_{L^\infty(B(x_\alpha,R(h)))}\\ &\leq \frac{C_n R(h)^{\frac{n-1}{2}}}{\tau\sub{M}^{1/2}h^{\frac{n-1}{2}}}\sum_{j\in \A_k(\alpha)}\Big(\|Op_h(\tilde{\chi}\sub{\mathcal{T}_j})Op_h(\psi) u\|_{L^2}+h^{-1}\|Op_h(\tilde{\chi}\sub{\mathcal{T}_j})P\,Op_h(\psi) u\|_{L^2}\Big)+C\sub{N}h^N\|u\|\sub{P,T}\\ &\leq c_{_{\!M}}^{{-1}}h^{-\frac{n-1}{2}}R(h)^{\frac{n-1}{2}}2^{-k}\|u\|\sub{P,T}|\A_k(\alpha)|+C\sub{N}h^N\|u\|\sub{P,T}, \end{align*} which, combined with \eqref{e:lb infty}, proves the lower bound in~\eqref{e:bound1}.} To simplify notation, let \begin{equation}\label{E: A_km} \A_{k,m}:=\bigcup_{\alpha\in \I_{k,m}}\A_{k}(\alpha). \end{equation} {\noindent Note that for each $\alpha\in \I_{k,m}$ there is $\tilde{x}_\alpha\in B(x_\alpha,R(h))$ such that \begin{equation}\label{E:x_alphas} |w_k(\tilde{x}_\alpha)|\geq 2^{m-1}h^{\frac{1-n}{2}}R(h)^{\frac{n-1}{2}}2^{-k}\|u\|\sub{P,T}. \end{equation} {We finish this section with a result that controls the size of $\I_{k,m}$ in terms of that of $\A_{k,m}$.} Let \begin{equation}\label{E:rho} \tfrac{1}{2}(\delta_2+1)< \rho < 1, \end{equation} {$0<\varepsilon<\delta$, $\tilde{\chi}\in C_c^\infty((-1,1))$, and define the operator $\chi\sub{h,\tilde x_\alpha}$ by \begin{equation*} \chi\sub{h,\tilde x_\alpha}u(x):=\tilde{\chi}(\tfrac{1}{\varepsilon}h^{-\rho}d(x,\tilde x_\alpha))\;[Op_h(\tilde\chi(\tfrac{1}{\varepsilon}(|\xi|_g-1)))u](x).
\end{equation*} In Lemma \ref{l:chi_h,y} we prove that $\chi\sub{h,\tilde x_\alpha} \in \Psi_{\Gamma_{\tilde x_\alpha},{L_{\tilde x_\alpha}} ,\rho}^{-\infty}$, where \[\Omega_{\tilde x_\alpha}=\{\xi \in T_{\tilde x_\alpha}^*M: \; |1-|\xi|_{g(\tilde x_\alpha)}|<\delta\}, \qquad \Gamma_{\tilde x_\alpha}= \bigcup_{|t|<\tfrac{1}{2}\inj(M)} \varphi_t(\Omega_{\tilde x_\alpha}), \] and $\Psi_{\Gamma_{\tilde{x}_\alpha}, L_{\tilde{x}_\alpha} ,\rho}^{-\infty}$ is a class of smoothing pseudodifferential operators that allows for localization to $h^\rho$ neighborhoods of $\Gamma_{\tilde{x}_\alpha}$ and is compatible with localization to $h^\rho$ neighborhoods of the foliation $L_{\tilde{x}_\alpha}$ of $\Gamma_{\tilde{x}_\alpha}$ generated by $\Omega_{\tilde{x}_\alpha}$.} {In Theorem~\ref{l:nice2ndCut} for $\varepsilon>0$ we explain how to build a cut-off operator $ X_{\tilde x_\alpha}\in \Psi_{\Gamma_{\tilde{x}_\alpha}, L_{\tilde{x}_\alpha} ,\rho}^{-\infty} $ such that \begin{equation}\label{e:cutoffconds} \begin{cases} \chi\sub{h,\tilde x_\alpha} X_{\tilde x_\alpha}=\chi\sub{h,\tilde x_\alpha} + O(h^\infty)_{\Psi^{-\infty}},\\ \operatorname{WF_h}'([P, X_{\tilde x_\alpha}]) \cap \{(x,\xi): x \in B(\tilde x_\alpha, \tfrac{1}{2}{\conj M}), \;\xi \in \Omega_x\} = \emptyset, \end{cases} \end{equation} where $\conj M$ denotes the injectivity radius of $M$. }} \begin{lemma}\label{l: |I_km|} Let $\frac{1}{2}(\delta_2+1)< \rho \leq 1$. There exists $C>0$ so that for every $k \geq -1$ and $m \in \mathbb Z$ the following holds. If $$ |\A_{k,m}|\leq C \,2^{2m}R(h)^{n-1}\Big(h^{\rho-\frac{1}{2}}{R(h)^{-\frac{1}{2}}}\Big)^{-\frac{2n(n-1)}{3n+1}}, $$ then \begin{equation} \label{e:counting} |\I_{k,m}|\leq C|\A_{k,m}|2^{-2m}R(h)^{1-n}. 
\end{equation} \end{lemma} \begin{proof} We claim that by \eqref{e:w_k}, for $\alpha \in \I_{k,m}$, \begin{equation}\label{E:w_km} \chi\sub{h,\tilde x_\alpha} w_k=\chi\sub{h,\tilde x_\alpha} w_{k,m}+O(h^\infty\|u\|\sub{L^2}), \qquad \quad w_{k,m}:= \sum_{j\in \A_{k,m}}Op_h(\tilde{\chi}\sub{\mathcal{T}_j}){Op_h(\psi)}u. \end{equation} Indeed, it suffices to show that $\chi\sub{h,\tilde x_\alpha} Op_h(\tilde{\chi}\sub{\mathcal{T}_j}){Op_h(\psi)}u=O(h^\infty\|u\|_{L^2})$ for $\alpha \in \I_{k,m}$ and $j \notin \A_{k,m}$. Note that for such indices $\pi\sub{M}(\mathcal{T}_j)\cap B(x_\alpha, {2}R(h))=\emptyset$ while $$\text{\ensuremath{\supp}} {\tilde{\chi}(\tfrac{1}{\varepsilon}h^{-\rho}d(x,\tilde x_\alpha))} \subset B(\tilde x_\alpha, C \varepsilon h^\rho){\subset B(x_\alpha,\tfrac{3}{2}R(h))}$$ for some $C>0$ {and all $h$ small enough}. Our next goal is to produce a lower bound for $|\A_{k,m}|$ in terms of $|\I_{k,m}|$ by using the lower bound \eqref{E:x_alphas} on $\|\chi\sub{h,\tilde x_\alpha} w_{k,m}\|_{L^\infty}$ for indices $\alpha \in \I_{k,m}$. {By~\eqref{e:cutoffconds},} we have $$ \chi\sub{h,\tilde x_\alpha} w_{k,m}=\chi\sub{h,\tilde x_\alpha}X_{\tilde x_\alpha} w_{k,m}+O(h^\infty)_{L^\infty}, $$ for $\alpha \in \I_{k,m}$. In particular, by \eqref{E:x_alphas} and \eqref{E:w_km}, \begin{equation} \label{e:IhaveAsquid} 2^{m-1}h^{\frac{1-n}{2}}R(h)^{\frac{n-1}{2}}2^{-k}\|u\|\sub{P,T}\leq \|\chi\sub{h,\tilde x_\alpha} w_{k}\|_{L^\infty} \leq \| X_{\tilde x_\alpha} w_{k,m}\|_{L^\infty}{+O(h^\infty)\|u\|\sub{P,T}}.
\end{equation} Therefore, applying the standard $L^\infty$ bound for quasimodes of the Laplacian (see e.g.~\cite[Theorem 7.12]{EZB}) and using {that by \eqref{e:cutoffconds}} we have that $X_{\tilde x_\alpha}$ nearly commutes with $P$ on {$B(\tilde x_\alpha, \tfrac{1}{2}{\conj M})$,} \begin{equation}\label{E:bound for one alpha} \begin{aligned} 2^{m-1}R(h)^{\frac{n-1}{2}}2^{-k}\|u\|\sub{P,T}&\leq C(\|X_{\tilde x_\alpha}w_{k,m}\|_{L^2}+h^{-1}\| PX_{\tilde x_\alpha} w_{k,m}\|_{L^2(B)})+O(h^\infty\|u\|\sub{P,T}).\\ &\leq C(\|X_{\tilde x_\alpha}w_{k,m}\|_{L^2}+h^{-1}\|X_{\tilde x_\alpha} P w_{k,m}\|_{L^2})+O(h^\infty\|u\|\sub{P,T}). \end{aligned} \end{equation} Note that we have canceled the factor $h^{\frac{1-n}{2}}$ which appears both in~\eqref{e:IhaveAsquid} and the standard $L^\infty$ bounds for quasimodes. Using that $h^{2\rho-1}R(h)^{-1}=o(1)$, Proposition \ref{P:orthogonality} proves that for all $\tilde \I \subset \I_{k,m}$ {and $v\in L^2(M)$} $$\sum_{\alpha\in\tilde I}\| X_{\tilde x_\alpha} v\|^2_{L^2}\leq C\Big(1+a_h|\tilde I|^{\frac{3n+1}{2n}}\Big)\|v\|_{L^2}^2,$$ where $a_h=(h^{\rho-\frac{1}{2}}R(h)^{-\frac{1}{2}})^{n-1}$. As a consequence, \eqref{E:bound for one alpha} gives \begin{align*} |\tilde \I|R(h)^{n-1}2^{-2k}2^{2(m-1)}\|u\|^2\sub{P,T} & \leq C \Big(\sum_{\alpha \in \tilde I}\|X_{\tilde x_\alpha} w_{k,m}\|_{L^2}^2+h^{-2}\sum_{\alpha \in \tilde I}\|X_{\tilde x_\alpha} P w_{k,m}\|_{L^2}^2\Big)\\ &\leq C\Big(1+a_h|\tilde \I|^{\frac{3n+1}{2n}}\Big)(\| w_{k,m}\|_{L^2}^2+h^{-2}\|P w_{k,m}\|_{L^2}^2)\\ &\leq C\Big(1+a_h|\tilde \I|^{\frac{3n+1}{2n}}\Big)2^{-2k}|\A_{k,m}|\|u\|^2\sub{P,T}. \end{align*} The last inequality follows from the definition of $w_{k,m}$ together with the definition \eqref{E:A_k} of $\A_k$. In particular, we have proved that there is $C>0$ such that for all $\tilde \I \subset \I_{k,m}$ \begin{equation}\label{E:max bound} |\tilde \I|R(h)^{n-1}2^{2m}\leq C\max \Big(1\;,\; a_h|\tilde \I|^{\frac{3n+1}{2n}}\Big)|\A_{k,m}|. 
\end{equation} Suppose that $a_h|\I_{k,m}|^{\frac{3n+1}{2n}} \geq 1$. Then, there exists $\tilde \I \subset \I_{k,m}$ such that $a_h|\tilde \I|^{\frac{3n+1}{2n}}= 1$. In particular, $|\tilde \I|R(h)^{n-1}2^{2m}\leq C|\A_{k,m}|.$ This implies that if $|\A_{k,m}|\leq \tfrac{1}{C} a_h^{-\frac{2n}{3n+1}} R(h)^{n-1}2^{2m},$ then $a_h|\I_{k,m}|^{\frac{3n+1}{2n}} \leq 1$ and so by \eqref{E:max bound} \[|\I_{k,m}|R(h)^{n-1}2^{2m} \leq C |\A_{k,m}|.\] \vspace{-.9cm} \end{proof} Note that for $w_{k,m}$ defined as in \eqref{E:w_km}, \begin{equation}\label{E:using U_km} \|w_{k}\|_{L^p(U)}^p \leq {\mathfrak{D}_n}\sum_{m=-\infty}^{\infty}\|w_{k}\|_{L^p(U_{k,m})}^p = {\mathfrak{D}_n}\sum_{m=-\infty}^{\infty}\|w_{k,m}\|_{L^p(U_{k,m})}^p+ O(h^\infty \|u\|\sub{P,T}), \end{equation} where \begin{equation}\label{E:U_km} U_{k,m}:=\bigcup_{\alpha\in \I_{k,m}}B(x_\alpha,R(h)). \end{equation} Finally, we split the study of $\|w_{k}\|_{L^p(U)}$ into two regimes: tubes with low or high $L^\infty$ mass. {Fix $N>0$ large, to be determined later.} {(Indeed, we will see that it suffices to take $N{> \frac{1}{2}}(1-\frac{p_c}{p})^{-1}$.)} Then, {we claim that} for each $k \geq -1$, \begin{equation}\label{e:splitMe} \begin{aligned} \|w_{k}\|_{L^p(U)}^p &\leq {\mathfrak{D}_n}\sum_{m=-\infty}^{m_{1,k}}\|w_{k,m}\|_{L^p(U_{k,m})}^p+{\mathfrak{D}_n}\sum_{m=m_{1,k}+1}^{m_{2,k}}\|w_{k,m}\|^p_{L^p(U_{k,m})} + O(h^\infty \|u\|\sub{P,T}), \end{aligned} \end{equation} where ${m_{1,k}}$ and ${m_{2,k}}$ are defined by $$ 2^{m_{1,k}}=\min\Bigg(\frac{2^kR(h)^{\frac{1-n}{2}}}{{T}^N}\;,\;c_n 2^{2k}\;,\; c_0R(h)^{1-n}\Bigg),\quad 2^{m_{2,k}}=\min\Big(c_n 2^{2k}\;,\; c_0R(h)^{1-n}\Big), $$ where $c_0, c_n$ are described in what follows. Indeed, note that the bound \eqref{e:bound1} yields that $2^m$ is bounded by $|\A_k(\alpha)|$ for all $\alpha \in \I_{k,m}$ and the latter is controlled by $c_0 R(h)^{n-1}$ for some $c_0>0$, depending only on $(M,g)$. 
Also, note that by \eqref{e:bound1} the $w_{k,m}$ are only defined for $m$ satisfying $2^m \leq c_n 2^{2k}$. These observations justify that the second sum in \eqref{e:splitMe} runs only up to $m_{2,k}$. \subsection{Control of the low $L^\infty$ mass term, $m\leq m_{1,k}$}\label{S:k_1, k_2} We first estimate the small $m$ term in \eqref{e:splitMe}. The estimates here essentially amount to interpolation between $L^{p_c}$ and $L^\infty$. From the definition \eqref{E: I_km} of $\I_{km}$, together with $\frac{1-n}{2}(p-p_c){-1}=-p\delta(p)$ and $\|w_{k,m}\|_{L^{p_c}(U_{k,m})} \leq {h^{-\frac{1}{p_c}}}\|u\|\sub{P,T}$, \begin{align*} \sum_{m=-\infty}^{m_{1,k}}\|w_{k,m}\|_{L^p(U_{k,m})}^p &\leq C\sum_{m=-\infty}^{m_{1,k}} \|w_{k,m}\|_{L^\infty(U_{k,m})}^{p-p_c}\|w_{k,m}\|^{p_c}_{L^{p_c}(U_{k,m})}\notag\\ & \leq C h^{-p\delta(p)}R(h)^{\frac{n-1}{2}(p-p_c)}2^{-k(p-p_c)}\sum_{m=-\infty}^{m_{1,k}}2^{m(p-p_c)}\|u\|^p\sub{P,T}\notag\\ & \leq Ch^{-p\delta(p)}R(h)^{\frac{n-1}{2}(p-p_c)}2^{(m_{1,k}-k)(p-p_c)}\|u\|^p\sub{P,T}. \end{align*} It follows that \begin{align}\label{E:pluto} \sum_{k\geq -1} \Big(\sum_{m=-\infty}^{m_{1,k}}\|{w_{k,m}}\|_{L^p(U_{k,m})}^p\Big)^{\frac{1}{p}} &\leq Ch^{-\delta(p)}R(h)^{\frac{n-1}{2}(1-\frac{p_c}{p})}\|u\|\sub{P,T} \sum_{k\geq -1} 2^{(m_{1,k}-k)(1-\frac{p_c}{p})}. \end{align} Finally, define $k_1, k_2$ such that \begin{equation}\label{E:defn of k} 2^{k_1}=\frac{R(h)^{\frac{1-n}{2}}}{{c_n}T^N},\qquad 2^{k_2}={c_0}R(h)^{\frac{1-n}{2}}T^N. \end{equation} If $k \leq k_1$, then $2^{m_{1,k}}=c_n2^{2k}$, so there exists {$C_{n,p}>0$} such that \[ \sum_{k=-1}^{k_1}2^{(m_{1,k}-k)(1-\frac{p_c}{p})} \leq {C_{n,p}}\frac{R(h)^{\frac{1-n}{2}(1-\frac{p_c}{p})}}{T^{N(1-\frac{p_c}{p})}}. \] If $ k_1\leq k \leq k_2$, then $2^{m_{1,k}}=\frac{2^kR(h)^{\frac{1-n}{2}}}{T^N}$. 
Therefore, since ${|k_2-k_1|} \leq c N\log T$ for some $c>0$, there exists $C>0$ such that \[ \sum_{k=k_1}^{k_2}2^{(m_{1,k}-k)(1-\frac{p_c}{p})} {\leq CN\log T\frac{R(h)^{\frac{1-n}{2}(1-\frac{p_c}{p})}}{T^{N(1-\frac{p_c}{p})}}}. \] Last, if $k \geq k_2$, then $2^{m_{1,k}}=c_0R(h)^{1-n}$, so there exists $C_p>0$ such that \[ \sum_{k=k_2}^{\infty}2^{(m_{1,k}-k)(1-\frac{p_c}{p})} \leq C_p\frac{R(h)^{\frac{1-n}{2}(1-\frac{p_c}{p})}}{T^{N(1-\frac{p_c}{p})}}. \] Putting these three bounds together with \eqref{E:pluto}, we obtain \begin{equation} \label{e:squid1} \sum_{k\geq -1} \Big(\sum_{m=-\infty}^{m_{1,k}}\|{w_{k,m}}\|_{L^p(U_{k,m})}^p\Big)^{\frac{1}{p}}\leq Ch^{-\delta(p)}\frac{{N\log T}}{T^{N(1-\frac{p_c}{p})}}\|u\|\sub{P,T}. \end{equation} \subsection{Control of the high $L^\infty$ mass term, $m\geq m_{1,k}$}\label{s:highLinf} In this section we estimate the large $m$ term in \eqref{e:splitMe}. To do this we split $$ \A_{k,m}=\mc{G}_{k,m}\sqcup \mc{B}_{k,m}, $$ where the set of `good' tubes $\bigcup_{j\in \mc{G}_{k,m}}\mathcal{T}_j$ is $[t_0,T]$ non-self looping and the number of `bad' tubes $|\mc{B}_{k,m}|$ is small. To do this, let \begin{equation} \label{e:indB} {\mc{B}\sub{U}(\alpha,\beta)}:=\Bigg\{j\in \bigcup_k\A_k(\alpha)\,:\; \bigcup_{t=t_0}^T\varphi_t(\mathcal{T}_j)\cap S^*_{B(x_\beta,{2}R(h))}M\neq \emptyset\Bigg\}. \end{equation} Then, we define $$ \mc{B}_{k,m}:=\bigcup_{\alpha,\beta \in \mc{I}_{k,m}}\mc{B}\sub{U}(\alpha,\beta)\cap \mc{A}_{k}(\alpha). $$ Let $\mc{G}_{k,m}:=\mc{A}_{k,m}\setminus\mc{B}_{k,m}$. 
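Note that the size of $\mc{B}_{k,m}$ is controlled by an elementary union bound: each pair $(\alpha,\beta)\in \I_{k,m}\times \I_{k,m}$ contributes at most $\sup_{\alpha',\beta'\in \I}|\mc{B}\sub{U}(\alpha',\beta')|$ indices, so that
$$
|\mc{B}_{k,m}|\leq \sum_{\alpha,\beta\in \I_{k,m}}|\mc{B}\sub{U}(\alpha,\beta)\cap \mc{A}_{k}(\alpha)|\leq |\I_{k,m}|^2\sup_{\alpha',\beta'\in \I}|\mc{B}\sub{U}(\alpha',\beta')|.
$$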
Then, by construction, $\bigcup_{j\in \mc{G}_{k,m}}\mathcal{T}_j$ is $[t_0,T]$ non-self looping and we have \begin{equation}\label{E:bound on B_km} |\mc{B}_{k,m}|\leq c|\I_{k,m}|^2|\mc{B}\sub{U}| \end{equation} for some $c>0$, where \begin{equation} \label{e:globB} |\mc{B}\sub{U}|:=\sup\{ |\mc{B}\sub{U}(\alpha,\beta)|:\; \alpha, \beta \in \I\}. \end{equation} That is, $|\mc{B}\sub{U}|$ is the maximum number of loops of length in $[t_0,T]$ joining any two points {in $U$}. Then, define \begin{gather}\label{e:theGoodTheBadAndTheUgly} w_{k,m}^{\mc{G}}:=\sum_{j\in \mc{G}_{k,m}}Op_h(\tilde{\chi}\sub{\mathcal{T}_j})Op_h(\psi)u,\qquad w_{k,m}^{\mc{B}}:=\sum_{j\in \mc{B}_{k,m}}Op_h(\tilde{\chi}\sub{\mathcal{T}_j})Op_h(\psi)u. \end{gather} Next, consider \begin{align}\label{e:lps} \Big(\sum_{m=m_{1,k}}^{m_{2,k}}\|w_{k,m}\|_{L^p(U_{k,m})}^p\Big)^{\frac{1}{p}}&\leq \Big( \sum_{m=m_{1,k}}^{m_{2,k}}\|w_{k,m}^{\mc{G}}\|_{L^p(U_{k,m})}^p\Big)^{\frac{1}{p}}+\Big(\sum_{m=m_{1,k}}^{m_{2,k}}\|w_{k,m}^{\mc{B}}\|_{L^p(U_{k,m})}^p\Big)^{\frac{1}{p}}. \end{align} \subsubsection{Bound on the looping piece.} We start by estimating the `bad' piece $$\sum_{k\geq -1}\Big(\sum_{m=m_{1,k}}^{m_{2,k}}\|w_{k,m}^{\mc{B}}\|_{L^p(U_{k,m})}^p\Big)^{\frac{1}{p}}.$$ Observe that if $ 2^{m_{1,k}}=\min(c_0R(h)^{1-n}, c_n2^{2k}), $ then $m_{1,k}=m_{2,k}$ and we need not consider this part of the sum. Therefore, the high $L^\infty$ mass term has \begin{equation} \label{e:defM1} 2^{m_{1,k}}=\frac{2^kR(h)^{\frac{1-n}{2}}}{T^N} \end{equation} and $k_1\leq k\leq k_2$. Hence, for $m_{1,k}<m\leq m_{2,k}$, Lemma \ref{L:|A_k| bound} gives that there is ${C_n}>0$ with $$ |\A_{k,m}| \leq {C_n} 2^{2k} \leq {C_n} R(h)^{n-1}2^{2m} T^{2N}.
$$ Furthermore, since $R(h)\geq h^{\delta_2}$ with $\delta_2 <\tfrac{1}{2}$, \eqref{E:rho} yields that there is {$\varepsilon=\varepsilon(n, N)>0$} such that $h^{\rho-\frac{1}{2}}R(h)^{-\frac{1}{2}}<h^\varepsilon$, and hence, {since $T=O(\log h^{-1})$}, \[ |\A_{k,m}|{=o\Big( R(h)^{n-1}2^{2m} \left(h^{\rho-\frac{1}{2}}R(h)^{-\frac{1}{2}}\right)^{-\frac{2n(n-1)}{3n+1}}\Big)}. \] In particular, a consequence of Lemma \ref{l: |I_km|} is the existence of {$h_0>0$} and $C>0$ such that \begin{align} \label{e:boundCount} |\I_{k,m}|&\leq C R(h)^{1-n}2^{-2m}|\A_{k,m}|\\ &\leq CR(h)^{1-n}2^{2k-2m},\label{e:boundCount2} \end{align} for all $0<h \leq h_0$, where we have again used Lemma \ref{L:|A_k| bound} to bound $|\A_{k,m}|$. Next, note that for each $\alpha\in \I_{k,m}$ there are at most $c|\I_{k,m}||\mc{B}\sub{U}|$ tubes in $\mc{B}_{k,m}$ running over the ball $B(x_{\alpha},2R(h))$. {Therefore,} we may apply~\cite[Lemma 3.7]{CG19a} to obtain $C>0$ such that \begin{equation} \label{e:LinfBad} \|w_{k,m}^\mc{B}\|_{L^\infty({U_{k,m}})}\leq C h^{\frac{1-n}{2}}R(h)^{\frac{n-1}{2}}|\I_{k,m}||\mc{B}\sub{U}|2^{-k}\|u\|\sub{P,T}. \end{equation} Using \eqref{e:LinfBad} and interpolating between $L^\infty$ and $L^{p_c}$ we obtain \begin{equation}\label{E: Bad piece L^p norm} \|w_{k,m}^{\mc{B}}\|_{L^p(U_{k,m})}^p \leq C h^{-p\delta(p)}\left(R(h)^{\frac{n-1}{2}}|\I_{k,m}||\mc{B}\sub{U}|2^{-k}\|u\|\sub{P,T}\right)^{p-p_c}\|w_{k,m}^{\mc{B}}\|_{L^2(U_{k,m})}^{p_c}.
\end{equation} In addition, since combining \eqref{E:A_k} with \eqref{E:bound on B_km} yields \[ \|w_{k,m}^{\mc{B}}\|_{L^2(U_{k,m})} \leq C |\mc{B}_{k,m}|^{\frac{1}{2}}2^{-k}\|u\|\sub{P,T} \leq C 2^{-k}|\mc{I}_{k,m}||\mc{B}\sub{U}|^{\frac{1}{2}}\|u\|\sub{P,T}, \] the bounds in \eqref{E: Bad piece L^p norm} and \eqref{e:boundCount2}, together with the definition of $m_{1,k}$~\eqref{e:defM1} yield \begin{align*} \sum_{m=m_{1,k}}^{m_{2,k}}\|w_{k,m}^{\mc{B}}\|_{L^p(U_{k,m})}^p &\leq C h^{-p\delta(p)}R(h)^{\frac{n-1}{2}(p-p_c)} \sum_{m=m_{1,k}}^{m_{2,k}}|\I_{k,m}|^{p}|\mc{B}\sub{U}|^{p-\frac{p_c}{2}}2^{-kp}\|u\|^{p}\sub{P,T}\\ &\leq C h^{-p\delta(p)}R(h)^{\frac{n-1}{2}(-p-p_c)} 2^{kp}|\mc{B}\sub{U}|^{p-\frac{p_c}{2}}\|u\|^{p}\sub{P,T}\sum_{m=m_{1,k}}^{m_{2,k}}2^{-2mp}\\ &\leq C h^{-p\delta(p)}R(h)^{\frac{n-1}{2}(p-p_c)}|\mc{B}\sub{U}|^{p-\frac{p_c}{2}}T^{2Np}\,2^{-kp}\|u\|^{p}\sub{P,T}. \end{align*} Then, with $k_1, k_2$ defined as in \eqref{E:defn of k}, we have that \begin{align*} \sum_{k=k_1}^{k_2}\Big(\sum_{m=m_{1,k}}^{m_{2,k}}\|w_{k,m}^{\mc{B}}\|_{L^p(U_{k,m})}^p\Big)^{\frac{1}{p}}&\leq C h^{-\delta(p)}R(h)^{\frac{n-1}{2}(1-\frac{p_c}{p})}|\mc{B}\sub{U}|^{1-\frac{p_c}{2p}}T^{2N}\|u\|\sub{P,T}\sum_{k=k_1}^{k_2}2^{-k}\notag\\ &\qquad \leq C h^{-\delta(p)}(R(h)^{n-1}|\mc{B}\sub{U}|)^{1-\frac{p_c}{2p}}T^{3N}\|u\|\sub{P,T}. \end{align*} Finally, since we only need to consider $k_1\leq k\leq k_2$, \begin{equation} \sum_{k\geq -1}\Big(\sum_{m=m_{1,k}}^{m_{2,k}}\|w_{k,m}^{\mc{B}}\|_{L^p(U_{k,m})}^p\Big)^{\frac{1}{p}} \leq C h^{-\delta(p)}(R(h)^{n-1}|\mc{B}\sub{U}|)^{1-\frac{p_c}{2p}}T^{3N}\|u\|\sub{P,T}.\label{e:squid2} \end{equation} \subsubsection{Bound on the non self-looping piece.}\label{s:loopMe}In this section we aim to control the `good' piece \begin{equation}\label{E:non-looping part} \sum_{k\geq -1}\Big(\sum_{m=m_{1,k}}^{m_{2,k}}\|w_{k,m}^{\mc{G}}\|_{L^p(U_{k,m})}^p\Big)^{\frac{1}{p}}. 
\end{equation} So far all $L^p$ bounds appearing have been $\ll h^{-\delta(p)}/\sqrt{T}$. The reason for this is that the bounds were obtained by interpolation with an $L^\infty$ estimate which is substantially stronger than $h^{\frac{1-n}{2}}/\sqrt{T}$. We now estimate the number of non-self looping tubes $\mathcal{T}_j$ with $j \in \A_k$. That is, tubes on which the $L^2$ mass of $u$ is comparable to $2^{-k}\|u\|\sub{P,T}$. \begin{lemma} \label{l:propLowerBound} {Let $k \in \mathbb Z$, $k\geq -1$,} and $t_0>1$. Suppose that $\mc{G}\subset \A_k$ is such that $$ \bigcup_{j\in\mc{G}}\mathcal{T}_j\;\;\text{ is }[t_0,T]\text{ non-self looping}. $$ Then, there exists a constant {$C_n>0$}, {depending only on $n$,} such that $ |\mathcal{G}|\leq \frac{C_n{t_0}}{T}\, 2^{2k}. $ \end{lemma} \begin{proof} Using that ${\mathcal G}\subset \A_k$, we have \begin{equation} \label{e:Gbd} |{\mathcal G}| \frac{\|u\|\sub{P,T}^2}{2^{2(k+1)}}\leq 2\sum_{j\in {\mathcal G}} \Big(\|Op_h(\chi\sub{\mathcal{T}_j})u\|^2+h^{-2}\|Op_h(\chi\sub{\mathcal{T}_j})Pu\|_{L^2}^2\Big). \end{equation} Since $\{\mathcal{T}_j\}_{j \in \mc{G}}$ is $(\mathfrak{D}_n,\tau,R(h))$-good, there is a partition $\{\mc{G}_i\}_{i=1}^{ \mathfrak{D}_n}$ of $\mc{G}$ such that for each $i=1, \dots, \mathfrak{D}_n$, $$ \mathcal{T}_{j_1}\cap \mathcal{T}_{j_2}=\emptyset,\qquad j_1,j_2 \in \mc{G}_i, \;\;\;j_1\neq j_2. $$ By~\cite[Lemma 4.1]{CG19a} with $t_\ell=t_0$ and $T_\ell=T$ for all $\ell$, \begin{equation} \label{e:propEst} \sum_{j\in {\mathcal G}} \|Op_h(\chi\sub{\mathcal{T}_j})u\|_{L^2}^2 \leq \sum_{i=1}^{\mathfrak{D}_n}\sum_{j\in \mc{G}_i}\|Op_h(\chi\sub{\mathcal{T}_j})u\|_{L^2}^2 \leq \frac{4\mathfrak{D}_n {t_0}}{T}\|u\|\sub{P,T}^2. \end{equation} On the other hand, since $\big\|\sum_{j\in \mc{G}_i}Op_h(\chi\sub{\mathcal{T}_j})^*Op_h(\chi\sub{\mathcal{T}_j})\big\|_{L^2\to L^2} \leq 2$ for each $i$, \begin{equation} \label{e:Pbd} \sum_{j\in {\mathcal G}} \|Op_h(\chi\sub{\mathcal{T}_j})Pu\|_{L^2}^2\leq {2\mathfrak{D}_n}\|Pu\|_{L^2}^2.
\end{equation} Combining~\eqref{e:Gbd},~\eqref{e:propEst}, and~\eqref{e:Pbd} yields \begin{equation*} |{\mathcal G}| \frac{\|u\|\sub{P,T}^2}{2^{2(k+1)}}\leq \frac{8\mathfrak{D}_n{t_0}}{T}\|u\|\sub{P,T}^2 +\frac{4 \mathfrak{D}_n}{h^{2}} \|Pu\|_{L^2}^2 \leq \frac{8\mathfrak{D}_n{t_0}+{\tfrac{4\mathfrak{D}_n}{T}}}{T}\|u\|\sub{P,T}^2. \end{equation*} \vspace{-1cm} \end{proof} We may now proceed to estimate the $L^p$-norm of the non-looping piece \eqref{E:non-looping part}. The first step is to notice that we only need to sum up to $m \leq m_{3,k}$, where $m_{3,k}$ is defined by $$ 2^{m_{3,k}}:=\min\Bigg( \frac{{C_n}t_02^{2k}}{{c_{_{\!M}}} T}\;,\; c_0R(h)^{1-n}\Bigg), $$ and $c_{_{\!M}}>0$ is as defined in \eqref{e:bound1} {and $C_n>0$ is the constant in Lemma \ref{l:propLowerBound}}. To see this, first observe that, using \eqref{E: I_km},~\eqref{e:boundCount} and~\eqref{e:LinfBad}, for each $\alpha \in \I_{k,m}$ \begin{equation} \label{e:LinfGood1} \begin{aligned} \|w_{k,m}^{\mc{G}}\|\sub{L^\infty (B((x_\alpha, R(h))))}&\leq \|w_{k,m}\|\sub{L^\infty (B((x_\alpha, R(h))))}+\|w_{k,m}^{\mc{B}}\|\sub{L^\infty (B((x_\alpha, R(h))))}\\ &\leq C(2^{m}+|\I_{k,m}||\mc{B}\sub{U}|)2^{-k}h^{\frac{1-n}{2}}R(h)^{\frac{n-1}{2}}\|u\|\sub{P,T}\\ &\leq C(1+R(h)^{1-n}2^{-3m}|\A_{k,m}||\mc{B}\sub{U}|)2^{m-k}h^{\frac{1-n}{2}}R(h)^{\frac{n-1}{2}}\|u\|\sub{P,T}. \end{aligned} \end{equation} Furthermore, since $|\mc{G}_{k,m}|\geq |\A_{k,m}|-|\I_{k,m}|^2|\mc{B}\sub{U}|$ and $\mc{G}_{k,m}$ is $[t_0,T]$ non-self looping, Lemma~\ref{l:propLowerBound} yields {the existence of $C_n>0$ such that} $$ |\A_{k,m}|-|\I_{k,m}|^2|\mc{B}\sub{U}| \leq C_n\frac{t_0}{T}2^{2k} .$$ Next, since $m_{1,k} \leq m \leq m_{2,k}$, we may apply Lemma \ref{l: |I_km|} to bound $|\I_{k,m}|$ as in \eqref{e:boundCount} to obtain that for some $C>0$ \begin{equation} \label{e:nonLoopCount} |\A_{k,m}|(1-CR(h)^{2(1-n)}2^{-4m}|\A_{k,m}||\mc{B}\sub{U}|)\leq C_n\frac{t_0}{T}2^{2k}. 
\end{equation} {In addition}, provided \begin{equation}\label{e: bound on Bu} |\mc{B}\sub{U}|R(h)^{n-1}\ll T^{-6N}, \end{equation} we have that, for $m\geq m_{1,k}$ and $k_1\leq k\leq k_2$, \begin{align}\label{E:noname bound} R(h)^{2(1-n)}2^{-4m}|\A_{k,m}||\mc{B}\sub{U}|&\leq R(h)^{2(1-n)}2^{-4m+2k}|\mc{B}\sub{U}|\leq 2^{-2k}T^{4N}|\mc{B}\sub{U}|\notag\\ &\leq {R(h)}^{n-1}T^{6N}|\mc{B}\sub{U}|\ll 1,\end{align} where we used that, by~\eqref{e:bound1}, $|\A_{k,m}|$ is controlled by $2^{2k}$ to get the first inequality, that $m\geq m_{1,k}$ to get the second, and that $k\geq k_1$ to get the third. Combining~\eqref{e:nonLoopCount} and the bound in \eqref{E:noname bound} we obtain $ |\A_{k,m}|\leq {C_n}\frac{t_02^{2k}}{T}, $ and so, by \eqref{e:bound1}, $ 2^m\leq {C_n}\frac{t_02^{2k}}{{c_{_{\!M}}} T}. $ {As claimed, this shows that to deal with \eqref{E:non-looping part} we only need to sum up to $m \leq m_{3,k}$. The next step is to use interpolation to control the first sum in \eqref{E:non-looping part} by \begin{equation}\label{E:interpolation good piece} {\sum_{m=m_{1,k}}^{m_{2,k}}\|w_{k,m}^\mc{G}\|_{L^p(U_{k,m})}^p=}\sum_{m=m_{1,k}}^{m_{3,k}}\|w_{k,m}^\mc{G}\|_{L^p(U_{k,m})}^p \leq \!\!\!\sum_{m=m_{1,k}}^{m_{3,k}} \|w_{k,m}^\mc{G}\|_{L^\infty(U_{k,m})}^{p-p_c}\|w_{k,m}^\mc{G}\|^{p_c}_{L^{p_c}(U_{k,m})}. \end{equation} We claim that \eqref{e:LinfGood1} yields \begin{equation}\label{e:Linfty good piece} \|w_{k,m}^{\mc{G}}\|\sub{L^\infty (B(x_\alpha, R(h)))}\leq C2^{m-k}h^{\frac{1-n}{2}}R(h)^{\frac{n-1}{2}}\|u\|\sub{P,T}. \end{equation} Indeed, using the bound \eqref{e: bound on Bu} on $|\mc{B}\sub{U}|$, that $|\A_{k,m}|$ is controlled by $2^{2k}$, that $m\geq m_{1,k}$ as in~\eqref{e:defM1}, and that $k_1\leq k\leq k_2$, we have \begin{align*} R(h)^{1-n}2^{-3m}|\A_{k,m}||\mc{B}\sub{U}| \ll R(h)^{2(1-n)}2^{-3m+2k}T^{-6N} \leq T^{-2N}.
\end{align*} Using \eqref{e:Linfty good piece}, the standard bound on $\|w_{k,m}^\mc{G}\|^{p_c}_{L^{p_c}(U_{k,m})}$, and $\|w_{k,m}^{\mathcal G}\|_{L^2}^2 \leq C \frac{t_0}{T}\|u\|\sub{P,T}^2$, we obtain \begin{equation} \label{e:OhMarcus} \|w_{k,m}^{\mc{G}}\|_{L^p(U_{k,m})}^p\leq Ch^{-p\delta(p)}(R(h)^{\frac{n-1}{2}}2^{m-k})^{p-p_c}\frac{t_0^{\frac{p_c}{2}}}{T^{\frac{p_c}{2}}}\|u\|\sub{P,T}^{p} +O(h^\infty\|u\|\sub{P,T}^p). \end{equation} Using this, we estimate \eqref{E:interpolation good piece} by \begin{align}\label{e:before358} &\sum_{m=m_{1,k}}^{{m_{2,k}}}\|w_{k,m}^\mc{G}\|_{L^p(U_{k,m})}^p \leq C h^{-p\delta(p)}(R(h)^{\frac{n-1}{2}}2^{(m_{3,k}-k)})^{p-p_c}\|u\|\sub{P,T}^{p} \frac{t_0^{\frac{p_c}{2}}}{T^{\frac{p_c}{2}}} +O(h^\infty\|u\|\sub{P,T}^p). \end{align} Then, summing in $k$, and again using that only $k_1\leq k\leq k_2$ contribute, \begin{align} {\sum_{k=-1}^{\infty}}\Big(\sum_{m=m_{1,k}}^{{m_{2,k}}}\|w_{k,m}^\mc{G}\|_{L^p(U_{k,m})}^p\Big)^{\frac{1}{p}}&\leq C h^{-\delta(p)}\|u\|\sub{P,T} \frac{t_0^{\frac{p_c}{2p}}}{T^{\frac{p_c}{2p}}} \sum_{k=k_1}^{k_2} \big(R(h)^{\frac{n-1}{2}}2^{(m_{3,k}-k)}\big)^{1-\frac{p_c}{p}} +O(h^\infty\|u\|\sub{P,T})\notag\\ &\qquad\leq C h^{-\delta(p)}\frac{t_0^{\frac{1}{2}}}{T^{\frac{1}{2}}}\|u\|\sub{P,T}{+O(h^\infty\|u\|\sub{P,T}).}\label{e:squid3} \end{align} } Note that the sum over $k$ in~\eqref{e:squid3} is controlled by the value of $k$ for which $ \frac{C_nt_0 2^{2k}}{c_{_{\!M}}T}=c_0R(h)^{1-n}, $ since the sum is geometrically increasing before such $k$ and geometrically decreasing afterward.
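To make the last remark concrete, here is a sketch of the computation behind it (the constants are illustrative). At the crossover value $k^*$ defined by $\frac{C_nt_0 2^{2k^*}}{c_{_{\!M}}T}=c_0R(h)^{1-n}$, the definition of $m_{3,k}$ gives
\[
R(h)^{\frac{n-1}{2}}2^{m_{3,k^*}-k^*}=c_0R(h)^{\frac{1-n}{2}}2^{-k^*}=\Big(\frac{c_0C_n t_0}{c_{_{\!M}} T}\Big)^{\frac{1}{2}}\leq C\,\frac{t_0^{\frac{1}{2}}}{T^{\frac{1}{2}}},
\]
so the sum over $k$ is bounded by $C(t_0/T)^{\frac{1}{2}(1-\frac{p_c}{p})}$. Multiplying by the prefactor $(t_0/T)^{\frac{p_c}{2p}}$ and using $\frac{p_c}{2p}+\frac{1}{2}\big(1-\frac{p_c}{p}\big)=\frac{1}{2}$ recovers the factor $t_0^{\frac{1}{2}}/T^{\frac{1}{2}}$ in the last line of~\eqref{e:squid3}.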
\subsection{Wrapping up the proof of Theorem~\ref{t:main bound} } Combining~\eqref{e:squid1},~\eqref{e:squid2},~\eqref{e:squid3} with \eqref{e:lps} and \eqref{e:splitMe}, and taking $N{> \frac{1}{2}}(1-\frac{p_c}{p})^{-1},$ provided $ R(h)^{n-1}|\mc{B}\sub{U}|\leq C T^{-6N}, $ {for some $C>0$,} we obtain $$ \|v\|_{L^p(U)}\leq \sum_{k=-1}^\infty \|w_k\|_{L^p(U)}\leq C h^{-\delta(p)}\Bigg(\frac{t_0^{\frac{1}{2}}}{{T}^{\frac{1}{2}}}+{(R(h)^{n-1}|\mc{B}\sub{U}|)^{1-\frac{p_c}{2p}}T^{3N}}\Bigg)\|u\|\sub{P,T} $$ as required in \eqref{e: bound on Bu}. {Since this estimate holds only when $|\mc{B}\sub{U}|R(h)^{n-1}\leq {C}T^{-6N}$, we replace $T$ by $T_0:=\min\{{\tfrac{1}{C}}(R(h)^{n-1}|\mc{B}\sub{U}|)^{-\frac{1}{6N}}\,,\, T\}$ so that \begin{equation} \label{e:almostThere} \begin{aligned} \|v\|_{L^p(U)}&\leq C h^{-\delta(p)}\Bigg(\frac{t_0^{\frac{1}{2}}}{{T_0}^{\frac{1}{2}}}+{(R(h)^{n-1}|\mc{B}\sub{U}|)^{1-\frac{p_c}{2p}}T_0^{3N}}\Bigg)\|u\|\sub{P,T}\\ &\leq Ch^{-\delta(p)}\Bigg(\frac{t_0^{\frac{1}{2}}}{{T}^{\frac{1}{2}}}+{t_0^{\frac{1}{2}}}(R(h)^{n-1}|\mc{B}\sub{U}|)^{\frac{1}{12N}}+{(R(h)^{n-1}|\mc{B}\sub{U}|)^{\frac{1}{2}(1-\frac{p_c}{p})}}\Bigg)\|u\|\sub{P,T}\\ &\leq Ch^{-\delta(p)}\Bigg(\frac{t_0^{\frac{1}{2}}}{{T}^{\frac{1}{2}}}+{(R(h)^{n-1}|\mc{B}\sub{U}|)^{\frac{1}{12N}}}\Bigg)\|u\|\sub{P,T}, \end{aligned} \end{equation} {where the constant $C$ is adjusted from line to line.} Next, combining~\eqref{e:almostThere} with~\eqref{e:rico} and the definition of $v$ in~\eqref{e:individualized}, we obtain \begin{multline*} \|u\|_{L^p(U)}\leq Ch^{-\delta(p)}\Bigg(\frac{t_0^{\frac{1}{2}}}{{T}^{\frac{1}{2}}}+{(R(h)^{n-1}|\mc{B}\sub{U}|)^{\frac{1}{12N}}}\Bigg)\|u\|\sub{P,T} +Ch^{-\delta(p)+\frac{1}{2}-\delta_2}h^{-1}\|Pu\|_{H_h^{n(\frac{1}{2}-\frac{1}{p})+\e-2}}.
\end{multline*} Putting $\varepsilon=\frac{1}{2}$ and setting $N=\frac{1}{2}(1+\frac{\varepsilon_0}{6})(1-\frac{p_c}{p})^{-1}$, the estimate~\eqref{e:LpestFinal} will follow once we relate $|\mc{B}\sub{U}|$ for a given $(\tau, R(h))$ cover to $|\mc{B}\sub{U}|$ for the $(\mathfrak{D},\tau,R(h))$ cover used in our proof. Finally, to finish the proof of Theorem~\ref{t:main bound}, we need to show that for any $(\tau,R(h))$ cover $\{{\mathcal{T}}_j\}_j$ of $S^*\!M$, up to a constant depending only on $M$, {$|\tilde{\mc{B}}\sub{U}|$ can be bounded by $|\mc{B}\sub{U}|$, where $\tilde{\mc{B}}\sub{U}$ is defined as in~\eqref{e:globB} using a $(\tilde{\mathfrak{D}},\tau,R(h))$ good cover $\{\tilde{\mathcal{T}}_k\}_k$ of $S^*\!M$.} \begin{lemma} There exists $C_{_{\!M}}>0$ depending only on $M$ so that if $\{\mathcal{T}_j\}_{j\in \J}$ and $\{\tilde{\mathcal{T}}_k\}_{k\in \mc{K}}$ are respectively a $(\tau,R(h))$ cover of $S^*\!M$ and a $(\tilde{\mathfrak{D}},\tau,R(h))$ good cover of $S^*\!M$, and {$|\mc{B}\sub{U}|$, $|\tilde{\mc{B}}\sub{U}|$} are defined as in~\eqref{e:globB} for the covers $\{\mathcal{T}_j\}_{j\in \J}$ and $\{\tilde{\mathcal{T}}_k\}_{k\in\mc{K}}$, respectively, then $$ |\tilde{\mc{B}}\sub{U}|\leq C_{_{\!M}}\tilde{\mathfrak{D}}|\mc{{B}}\sub{U}|. $$ \end{lemma} \begin{proof} Fix $\alpha,\beta$ such that $x_\alpha,x_\beta \in U$. Suppose that $k\in \tilde{\mc{B}}\sub{U}(\alpha,\beta)$, where $\mc{B}\sub{U}(\alpha,\beta)$ is as in~\eqref{e:indB}. Then, there is $j\in {\mc{B}\sub{U}(\alpha,\beta)}$ such that $\tilde{\mathcal{T}}_k\cap \mathcal{T}_j\neq \emptyset.$ Now, fix $j\in \J$ and let $$\mc{C}_j:=\{ k\in \mc{K}:\;\mathcal{T}_j\cap \tilde{\mathcal{T}}_k\neq \emptyset\}.$$ We claim that there is $c\sub{M}>0$ such that for each $k\in \mc{C}_j$ \begin{equation} \label{e:inside} \tilde{\mathcal{T}}_k\subset \Lambda_{\rho_j}^{c\sub{M}\tau}(c\sub{M}R(h)).
\end{equation} Assuming~\eqref{e:inside} for now, {there exists $C\sub{M}>0$ such that} $$ |\mc{C}_j|\leq \tilde{\mathfrak{D}}\frac{\vol(\Lambda_{\rho_j}^{c\sub{M}\tau}(c\sub{M}R(h))}{\inf_{k\in \mc{K}}\vol(\tilde{\mathcal{T}}_k)}\leq \tilde{\mathfrak{D}}C_{_{\!M}}. $$ Thus, for each $j\in \mc{B}\sub{U}({\alpha ,\beta})$, there are at most $C_{_{\!M}}\mathfrak{\tilde{D}}$ elements in $\tilde{\mc{B}}\sub{U}({\alpha,\beta})$ and hence $|\mc{B}\sub{U}(\alpha,\beta)|\geq |\tilde{\mc{B}}\sub{U}(\alpha,\beta)|/(C_{_{\!M}}\tilde{\mathfrak{D}})$ as claimed. We now prove~\eqref{e:inside}. Let $q\in \tilde{\mathcal{T}}_k$. Then, there are $\rho'_k,\rho'_j,q'\in S^*\!M$ and $t_k,t_j,s\in[\tau-R(h),\tau+R(h)]$ such that $$ \begin{gathered} d(\rho_k,\rho'_k)<R(h),\qquad \quad d(\rho_j,\rho'_j)<R(h),\qquad \quad d(\rho_k,q')<R(h), \\ \varphi_{t_k}(\rho'_k)=\varphi_{t_j}(\rho'_j),\qquad \qquad \varphi_{s}(q')=q. \end{gathered} $$ In particular, $d(q',\rho'_k)<2R(h)$, so there is $c\sub{M}>0$ such that $ d(\varphi_{t_k}(\rho'_k),\varphi_{t_k-s}(q))<c\sub{M}R(h). $ Applying $\varphi_{-t_j}$, and adjusting $c\sub{M}$ in a way depending only on $M$, $ d(\rho'_j,\varphi_{t_k-t_j-s}(q))<c\sub{M}R(h). $ In particular, adjusting $c\sub{M}$ again, $ d(\rho_j,\varphi_{t_k-t_j-s}(q))<c\sub{M}R(h) $ and the claim follows. \end{proof} \subsection{Proof of Theorem \ref{t:JeffsFavorite}} \label{s:JeffsFavorite} As explained in the introduction, {Theorem \ref{t:JeffsFavorite} actually holds under the more general assumptions of Theorem \ref{t:main bound}}. Let $p>p_c$ and assume that there is $\delta>0$ such that $$ T=T(h)\to \infty,\qquad\qquad |\mc{B}\sub{U}|R(h)^{n-1}T^{\frac{3p}{p-p_c}+\delta}=o(1). $$ {In the general setup we work with $$ \mc{S}\sub{U}(h, \varepsilon,u):=\Big\{\alpha \in \mc{I}(h): \|u\|_{L^\infty (B(x_\alpha,R(h)))} \geq \frac{\varepsilon h^{\frac{1-n}{2}}{\sqrt{t_0}}}{\sqrt{T(h)}}\|u\|_{{_{\!L^2(M)}}}, \;\; B(x_\alpha,R(h))\cap U\neq \emptyset\Big\}. 
$$} We proceed to prove Theorem \ref{t:JeffsFavorite} in this setup, using the decompositions introduced in the previous sections. Throughout this proof we assume that \begin{equation}\label{e:quas} \|Pu\|_{H_h^{\frac{n-3}{2}}}=o\Big(\frac{h}{T}\|u\|_{L^2}\Big). \end{equation} \subsubsection{Proof of the bound on $|\mc{S}\sub{U}(h,\varepsilon,u)|$} {We claim that there is $c>0$ such that for $\alpha\in \mc{S}\sub{U}(h,\varepsilon,u)$ \begin{equation}\label{e:LB1} \frac{c\varepsilon\sqrt{t_0}}{\sqrt{T}}h^{-\delta(p)}\|u\|_{\sub{P,T}}\leq\|u\|_{L^{p}(B(x_\alpha,2R(h)))}. \end{equation}} {To see \eqref{e:LB1}, first let $\chi_0,\chi_1 \in C_c^\infty(-2,2)$ with $\chi_0\equiv 1$ on $[-3/2,3/2]$ and $\chi_1\equiv 1$ on $\text{\ensuremath{\supp}} \chi_0$, and note that by Lemma~\ref{L:lp bound}, the elliptic parametrix construction for $P$, and~\eqref{e:quas} \begin{equation} \label{e:highFreq} \|(1-\chi_0(-h^2\Delta_g))u\|_{L^p} \leq Ch^{-\delta(p)-\frac{1}{2}}\|Pu\|_{H_h^{\frac{n-3}{2}}}=o\Big(\frac{h^{-\delta(p)+\frac{1}{2}}}{T}\Big){\|u\|_{L^2}}. \end{equation}} Therefore, for $\alpha\in \mc{S}\sub{U}(h,\varepsilon,u)$ we have \begin{equation}\label{e:C_h} \|\chi_0(-h^2\Delta_g) u\|_{L^\infty(B(x_\alpha,R(h)))}\geq {\frac{\varepsilon h^{\frac{1-n}{2}}\sqrt{t_0}}{2\sqrt{T}}\|u\|_{{_{\!L^2(M)}}}} \end{equation} for $h$ small enough. Next, set $\chi_{\alpha,h}(x):=\chi_0(R(h)^{-1}d(x,x_\alpha))$ and note $$ \chi_1(-h^2\Delta_g) \chi_{\alpha,h}\chi_0(-h^2\Delta_g)u=\chi_{\alpha,h}\chi_0(-h^2\Delta_g)u+O(h^\infty\|u\|_{L^2})_{C^\infty}.
$$ Then, by \eqref{e:C_h} and \cite[Theorem 7.15]{EZB}, \begin{align}\label{e:LB2} {\frac{\varepsilon h^{\frac{1-n}{2}}\sqrt{t_0}}{2\sqrt{T}}\|u\|_{{_{\!L^2(M)}}}}\leq \|\chi_0(-h^2\Delta_g)u&\|_{L^\infty(B(x_\alpha,R(h)))} \leq \|\chi_{\alpha,h}\chi_0(-h^2\Delta_g)u\|_{{L^\infty(B(x_\alpha,R(h)))}} \notag\\ &\leq Ch^{-\frac{n}{p}}\Big({\|\chi_0(-h^2\Delta_g)u\|}_{L^{p}(B(x_\alpha,2R(h)))}+O(h^\infty)\|u\|_{L^2}\Big). \end{align} Combining \eqref{e:LB2} and \eqref{e:highFreq} yields the claim in \eqref{e:LB1}. It then follows that, if $\{\alpha_i\}_{i=1}^N\subset \mc{S}\sub{U}(h,\varepsilon,u)$ with $B(x_{\alpha_i},2R(h))\cap B(x_{\alpha_j},2R(h))=\emptyset$ for $i\neq j$, then {using Theorem~\ref{t:main bound},} $$ N^{\frac{1}{p}}\frac{c\varepsilon\sqrt{t_0}}{\sqrt{T}}h^{-\delta(p)}\|u\|_{\sub{P,T}}\leq \|u\|_{L^{p}}\leq C h^{-\delta(p)}\frac{\sqrt{t_0}}{\sqrt{T}}\|u\|_{\sub{P,T}}. $$ Then, $ N^{\frac{1}{p}}\leq C\varepsilon^{-1}. $ Since at most $\mathfrak{D}_n$ balls $B(x_\alpha,2R(h))$ intersect, $ |\mc{S}\sub{U}(h,\varepsilon,u)|\leq C\mathfrak{D}_n\varepsilon^{-p}. $ \subsubsection{Preliminaries for the decomposition of $u$} Let $q\in \mathbb{R}$ be such that $p\leq q\leq \infty$. Below, all implicit constants are uniform for $p\leq q\leq \infty$. As above, it suffices to prove the statement for $v$ as in~\eqref{e:individualized} instead of $u$. Then, we decompose $v=\sum_{k=-1}^\infty w_k$ as in~\eqref{e:v}. For $V\subset U$, by {the same analysis that led to}~\eqref{e:splitMe}, $$ \|w_k\|^q_{L^q(V)}\leq \mathfrak{D}_n\sum_{m=-\infty}^{m_{2,k}}\|w_{k,m}\|^q_{L^q(V\cap U_{k,m})} +O(h^\infty)\|u\|\sub{P,T}^q, $$ where $w_{k,m}$ is as in \eqref{E:w_km}.
Then, by~\eqref{e:squid1}, with $N=\frac{q}{2(q-p_c)}+\frac{\delta}{6}$ \begin{equation} \label{e:marcus0} \sum_{k\geq -1}\Big(\sum_{m=-\infty}^{m_{1,k}}\|w_{k,m}\|_{L^q( U_{k,m})}^{{q}}\Big)^{{\frac{1}{q}}}\leq Ch^{-\delta(q)}\frac{\log T}{T^{\frac{1}{2}+ {\frac{\delta(q-p_c)}{6q}}}}\|u\|_{\sub{P,T}}, \end{equation} for $h$ small enough. Then, splitting $w_{k,m}=w_{k,m}^{\mc{B}}+w_{k,m}^{\mc{G}}$, as in~\eqref{e:theGoodTheBadAndTheUgly}, we have by~\eqref{e:squid2} \begin{equation} \label{e:marcus1} \sum_{k\geq -1}\Big(\sum_{m=m_{1,k}}^{m_{2,k}}\|w_{k,m}^\mc{B}\|_{L^q( U_{k,m})}^{{q}}\Big)^{{\frac{1}{q}}}\leq Ch^{-\delta(q)}(R(h)^{n-1}|\mc{B}\sub{U}|)^{1-\frac{p_c}{2q}}T^{\frac{3q}{2(q-p_c)}+\frac{\delta}{2}}\|u\|\sub{P,T}. \end{equation} Define ${k_1^\varepsilon}$ and ${k_2^\varepsilon}$ by \begin{equation}\label{e:Kep} 2^{2k_1^\varepsilon}=\frac{C^{-2}\mathfrak{D}_n^{-2}\varepsilon^2 R(h)^{1-n} c\sub{M} T}{4C_n t_0},\qquad\quad 2^{2k_2^\varepsilon}=\frac{C^2\mathfrak{D}_n^2\varepsilon^{-2} R(h)^{1-n} c\sub{M} T}{4C_n t_0}, \end{equation} where $C$ is as in~\eqref{e:squid3}. Then, define $ \mc{K}(\varepsilon):=\{k: k_1^\varepsilon\leq k\leq k_2^\varepsilon\} $ and note that, {since $2^{(k_2^\varepsilon-k_1^\varepsilon)}=C^2\mathfrak{D}_n^2\varepsilon^{-2}$,} $|\mc{K}(\varepsilon)|\leq \log_2(4C^2\mathfrak{D}_n^2\varepsilon^{-2})=:K_\varepsilon$. Using~\eqref{e:OhMarcus} and summing over $k\notin \mc{K}(\varepsilon)$, it follows that \begin{equation} \label{e:marcus2} \sum_{k\notin \mc{K}(\varepsilon)}\Big(\sum_{m=m_{1,k}}^{m_{3,k}} \|w_{k,m}^\mc{G}\|_{L^q(U_{k,m})}^q\Big)^{\frac{1}{q}}\leq \frac{\varepsilon}{4\mathfrak{D}_n} \frac{h^{-\delta(q)}\sqrt{t_0}}{\sqrt{T}}\|u\|\sub{P,T}.
\end{equation} Next, for $k\in \mc{K}(\varepsilon)$ let $$ \mc{M}(k,\varepsilon):=\{ m\,:\,m^\varepsilon_{3,k}\leq m\leq m_{3,k}\}, \qquad\qquad {m^\varepsilon_{3,k}:=m_{3,k}-\tfrac{p}{p-p_c}\log_2 (\varepsilon^{-1}2C\mathfrak{D}_n),} $$ and note $|\mc{M}(k,\varepsilon)|\leq \frac{p}{p-p_c}\log_2 (\varepsilon^{-1}2C\mathfrak{D}_n)=:M_\varepsilon$. Using~\eqref{e:OhMarcus} and summing over $k\in \mc{K}(\varepsilon)$, $m\notin \mc{M}(k,\varepsilon)$, it follows that \begin{align}\label{e:marcus3} \sum_{k\in \mc{K}(\varepsilon)}\!\!\Big(\sum_{{m\notin \mc{M}(k,\varepsilon)}}\!\!\|w_{k,m}^{\mc{G}}\|_{L^q(U_{k,m})}^{{q}}\Big)^{{\frac{1}{q}}} \!\! &\leq Ch^{-\delta(q)}\frac{t_0^{\frac{p_c}{2q}}}{T^{\frac{p_c}{2q}}}\!\!\sum_{k\in \mc{K}(\varepsilon)}\!(R(h)^{\frac{n-1}{2}}2^{m^\varepsilon_{3,k}-k})^{1-\frac{p_c}{q}}\|u\|\sub{P,T}+O(h^\infty\|u\|\sub{P,T}) \notag\\ &\leq \frac{\varepsilon}{4\mathfrak{D}_n} \frac{h^{-\delta(q)}t_0^{\frac{1}{2}}}{T^{\frac{1}{2}}}\|u\|\sub{P,T}. \end{align} Let \begin{equation} \label{e:marcus4} \mc{N}_{k,m}(\varepsilon):=\Big\{\alpha \in \mc{I}_{k,m}\,:\, \|w_{k,m}^{\mc{G}}\|_{L^\infty(B(x_\alpha,R(h)))}\geq \frac{\varepsilon}{4\mathfrak{D}_n M_\varepsilon K_\varepsilon}\frac{h^{\frac{1-n}{2}}\sqrt{t_0}}{\sqrt{T}}\|u\|\sub{P,T}\Big\}. \end{equation} We claim \begin{equation}\label{e:cliamNe0} \mc{S}\sub{U}(h,\varepsilon,u) \subset \bigcup_{k\in \mc{K}(\varepsilon)}\bigcup_{m\in \mc{M}(k,\varepsilon)}\mc{N}_{k,m}(\varepsilon). \end{equation} To prove the claim \eqref{e:cliamNe0}, suppose $\alpha\notin \bigcup_{k\in \mc{K}(\varepsilon)}\bigcup_{m\in \mc{M}(k,\varepsilon)}\mc{N}_{k,m}(\varepsilon)$. Then, using~\eqref{e:marcus0} {with $q=\infty$ and $N=\tfrac{1}{2}+\frac{\delta}{6}$}, \begin{align}\label{e:rain} &\frac{1}{\mathfrak{D}_n}\|v\|_{L^\infty(B(x_\alpha,R(h)))} \leq \frac{Ch^{\frac{1-n}{2}} \log T}{T^{\frac{1}{2}+\frac{\delta}{6}}}\|u\|_{\sub{P,T}} + \sum_{k\geq -1}\sum_{m=m_{1,k}}^{m_{2,k}}\|w_{k,m}\|_{L^\infty(U_{k,m})}.
\end{align} Next, we decompose the second term in the RHS of \eqref{e:rain} as \begin{align}\label{e:cloud} \sum_{k\geq -1}\sum_{m=m_{1,k}}^{m_{2,k}}\|w_{k,m}^{\mc{B}}\|_{L^\infty(U_{k,m})}+\sum_{k\notin \mc{K}(\varepsilon)}\sum_{m=m_{1,k}}^{m_{3,k}}\|w_{k,m}^{\mc{G}}\|_{L^\infty(U_{k,m})}+\sum_{k\in \mc{K}(\varepsilon)}\sum_{m=m_{1,k}}^{m_{2,k}}\|w_{k,m}^{\mc{G}}\|_{L^\infty(U_{k,m})}. \end{align} Note that in the term with the sum over $k\notin \mc{K}(\varepsilon)$ we only sum in $m\leq m_{3,k}$ for the same reason as in \eqref{E:interpolation good piece}. We bound the three terms in \eqref{e:cloud} using~\eqref{e:marcus1},~\eqref{e:marcus2},~\eqref{e:marcus3}, and~\eqref{e:marcus4}, {with $q=\infty$ and $N=\tfrac{1}{2}+\frac{\delta}{6}$}. Combining this with \eqref{e:rain} yields \begin{align*} &\frac{1}{\mathfrak{D}_n}\|v\|_{L^\infty(B(x_\alpha,R(h)))} \leq Ch^{\frac{1-n}{2}}\|u\|_{\sub{P,T}}\Big( \frac{\log T}{T^{\frac{1}{2}+\frac{\delta}{6}}}+R(h)^{n-1}|\mc{B}\sub{U}|T^{\frac{3}{2}+\frac{\delta}{2}}+\frac{3\varepsilon}{4\mathfrak{D}_n}\frac{\sqrt{t_0}}{\sqrt{T}}+O(h^\infty)\Big). \end{align*} Thus, if $\alpha\notin \bigcup_{k\in \mc{K}(\varepsilon)}\bigcup_{m\in \mc{M}(k,\varepsilon)}\mc{N}_{k,m}(\varepsilon)$, then $ \|v\|_{L^\infty(B(x_\alpha,R(h)))}\leq \varepsilon h^{\frac{1-n}{2}}\frac{\sqrt{t_0}}{\sqrt{T}}\|u\|\sub{P,T} $ for $h$ small enough. In particular, $\alpha\notin \mc{S}\sub{U}(h,\varepsilon,u)$. This proves the claim \eqref{e:cliamNe0}. \subsubsection{Decomposition of $u$} We next decompose $u$ as described in the theorem. First, put $$ u_{e,1}:=\sum_{k\geq -1}\sum_{m=-\infty}^{m_{1,k}}w_{k,m}+\sum_{k\geq -1}\sum_{m=m_{1,k}}^{m_{2,k}}w_{k,m}^{\mc{B}}+\sum_{k\notin \mc{K}(\varepsilon)}\sum_{m=m_{1,k}}^{m_{3,k}}w_{k,m}^{\mc{G}}+\sum_{k\in \mc{K}(\varepsilon)}\sum_{m\notin \mc{M}(k,\varepsilon)}w_{k,m}^{\mc{G}}, $$ $$ u_{big}:=\sum_{k\in \mc{K}(\varepsilon)}\sum_{m\in \mc{M}(k,\varepsilon)}w_{k,m}^{\mc{G}}, $$ and $u_{e,2}:=u-u_{big}-u_{e,1}$.
Note that $$ \|u_{e,1}\|_{L^q}\leq \frac{3\varepsilon}{4}h^{-\delta(q)}\frac{\sqrt{t_0}}{\sqrt{T}}\|u\|\sub{P,T},\qquad\qquad \|u_{e,2}\|_{L^q}\leq Ch^{-\delta(q)+\frac{1}{2}-\delta_{{2}}} h^{-1}\|{P}u\|_{H_h^{\frac{n-3}{2}}}, $$ where we use~\eqref{e:marcus1},~\eqref{e:marcus2},~\eqref{e:marcus3},~\eqref{e:rain}, and~\eqref{e:cloud} to obtain the first estimate, and~\eqref{e:rico} to obtain the second. These two estimates prove the claim on $\|u_\varepsilon\|_{L^q}$ after combining them with \eqref{e:quas}. Next, observe that $$ u_{big}= \sum_{j\in \mc{L}(\varepsilon)}u_j,\qquad \qquad u_j:=Op_h(\tilde{\chi}\sub{\mathcal{T}_j})Op_h(\psi)u,\qquad \qquad \mc{L}(\varepsilon):=\bigcup_{k\in \mc{K}(\varepsilon)}\bigcup_{m\in \mc{M}(k,\varepsilon)}\mc{G}_{k,m}. $$ We claim that the statement of the theorem holds with $v_j=\sqrt{T}u_j$. Note that $v_j$ are manifestly microsupported inside $\mathcal{T}_j$. Let $\alpha \in \mc{S}\sub{U}(h,\varepsilon,u)$. Then, by definition, \begin{equation}\label{e:uBigLB} \|u_{big}\|_{L^\infty(B(x_\alpha,R(h)))}\geq \frac{\varepsilon}{4}h^{\frac{1-n}{2}}\frac{\sqrt{t_0}}{\sqrt{T}}\|u\|\sub{P,T}. \end{equation} Note that for all $j\in \mc{L}(\varepsilon)$, the estimate \begin{equation} \label{e:upperA_k} \|Op_h(\tilde{\chi}\sub{\mathcal{T}_j})Op_h(\psi)u\|+h^{-1}\|Op_h(\tilde{\chi}\sub{\mathcal{T}_j})Op_h(\psi)Pu\|_{L^2}\leq {2^{-k_1^\varepsilon+1}}\|u\|\sub{P,T} \end{equation} follows from the definition~\eqref{E:A_k} of $\A_k$ and the fact that $\chi\sub{\mathcal{T}_j}\equiv 1$ on $\text{\ensuremath{\supp}} \tilde{\chi}\sub{\mathcal{T}_j}$. To see that $u_j$ is a quasimode, we use the definition of $\A_k$ again, together with Proposition~\ref{l:nicePartition}, and obtain \begin{equation} \label{e:upperA_kP} \|Pu_j\|_{L^2} \leq \|[-h^2\Delta_g,Op_h(\tilde{\chi}\sub{\mathcal{T}_j})]u_j\|_{L^2}+\|Op_h(\tilde{\chi}\sub{\mathcal{T}_j})Pu\|_{L^2}\leq C{2^{-k_1^\varepsilon}}h\|u\|\sub{P,T}.
\end{equation} The definition of $k_1^\varepsilon$, together with~\eqref{e:upperA_k} and~\eqref{e:upperA_kP}, gives the required bounds on $v_j$ and $Pv_j$. Next, define $$ \mc{L}(\varepsilon, u,\alpha):=\{j\in \mc{L}(\varepsilon)\,:\,\pi\sub{M}(\mathcal{T}_j)\cap B(x_\alpha,3R(h))\neq\emptyset\}, $$ and note that by~\cite[Lemma 3.7]{CG19a} \begin{align} &\|u_{big}\|_{L^\infty(B(x_\alpha,R(h)))}\notag\\ &\leq Ch^{\frac{1-n}{2}}R(h)^{\frac{n-1}{2}}\sum_{j\in \mc{L}(\varepsilon, u, \alpha)}\Big(\|Op_h(\tilde{\chi}\sub{\mathcal{T}_j})Op_h(\psi)u\|+h^{-1}\|Op_h(\tilde{\chi}\sub{\mathcal{T}_j})Op_h(\psi)Pu\|_{L^2}\Big)+O(h^\infty)\|u\|_{L^2}\notag\\ &\leq Ch^{\frac{1-n}{2}}R(h)^{\frac{n-1}{2}}2^{-k_1^\varepsilon}|\mc{L}(\varepsilon,u,\alpha)|\|u\|\sub{P,T} +O(h^\infty)\|u\|\sub{P,T}.\label{e:usefulLower} \end{align} Therefore, {combining \eqref{e:uBigLB} with \eqref{e:usefulLower} yields} $$ \varepsilon\frac{\sqrt{t_0}}{\sqrt{T}}\leq CR(h)^{\frac{n-1}{2}}2^{-k_1^\varepsilon}|\mc{L}(\varepsilon,u,\alpha)| +O(h^\infty). $$ Moreover, $\bigcup_{j\in \mc{L}(\varepsilon,u)}\mathcal{T}_j$ is $[t_0,T]$ non-self looping and so by Lemma~\ref{l:propLowerBound} $ |\mc{L}(\varepsilon,u)|\leq \frac{C_nt_0}{T}2^{2k_2^\varepsilon}. $ Using the definition \eqref{e:Kep} of $k^\varepsilon_1,k^\varepsilon_2$ we have for $h$ small enough, \begin{equation*} c\varepsilon^2 R(h)^{1-n}=\varepsilon \frac{\sqrt{t_0}}{\sqrt{T}}R(h)^{\frac{1-n}{2}}2^{k_1^\varepsilon}\leq |\mc{L}(\varepsilon,u,\alpha)|\leq |\mc{L}(\varepsilon,u)|\leq \frac{C_n t_0}{T}2^{2k_2^\varepsilon}\leq C\varepsilon^{-2}R(h)^{1-n}, \end{equation*} which yields the upper bound on $|\mc{L}(\varepsilon,u)|$ and the lower bound on $|\mc{L}(\varepsilon,u,\alpha)|$. Note that the upper bound on $|\mc{L}(\varepsilon,u,\alpha)|$ follows from the fact that the total number of tubes over $B(x_\alpha,3R(h))$ is bounded by $CR(h)^{1-n}$.
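A heuristic volume count behind this last fact (the constants $c$, $C'$ below are illustrative): the tubes come from a $(\mathfrak{D}_n,\tau,R(h))$-good cover and hence split into $\mathfrak{D}_n$ pairwise disjoint subfamilies. Every tube whose projection meets $B(x_\alpha,3R(h))$ intersects the set $S^*_{B(x_\alpha,3R(h))}M$, which has volume comparable to $R(h)^n$, in a set of volume at least $cR(h)^{2n-1}$. Therefore,
\[
|\{j:\; \pi\sub{M}(\mathcal{T}_j)\cap B(x_\alpha,3R(h))\neq\emptyset\}|\;\leq\; \mathfrak{D}_n\,\frac{CR(h)^{n}}{cR(h)^{2n-1}}\;=\;C'R(h)^{1-n}.
\]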
Next, we note that the fact that at most $\mathfrak{D}_n$ tubes $\mathcal{T}_j$ overlap implies $$ \sum_{j\in \mc{L}(\varepsilon, u,\alpha)}\|Op_h(\tilde{\chi}\sub{\mathcal{T}_j})Op_h(\psi)Pu\|_{L^2}^2\leq C\|Pu\|^2_{L^2}+O(h^\infty\|u\|_{L^2}^2). $$ Therefore, using the first inequality in~\eqref{e:usefulLower} again, applying Cauchy--Schwarz, and using that there is $C>0$ such that $|\mc{L}(\varepsilon,u,\alpha)|\leq CR(h)^{1-n}$, we have $$ \begin{aligned}\frac{\varepsilon}{4}\frac{\sqrt{t_0}}{\sqrt{T}}\|u\|\sub{P,T}&\leq CR(h)^{\frac{n-1}{2}}|\mc{L}(\varepsilon,u,\alpha)|^{\frac{1}{2}}\Big(\sum_{j\in \mc{L}(\varepsilon, u,\alpha)}\|u_j\|_{L^2}^2\Big)^{^{\frac{1}{2}}}+ Ch^{-1}\|Pu\|_{L^2}+O(h^\infty)\|u\|_{L^2}\\ &\leq C\Big(\sum_{j\in \mc{L}(\varepsilon, u,\alpha)}\|u_j\|_{L^2}^2\Big)^{^{\frac{1}{2}}}+o(T^{-1}\|u\|_{L^2}). \end{aligned} $$ In particular, for $h$ small enough, $ c\varepsilon\frac{\sqrt{t_0}}{\sqrt{T}}\|u\|\sub{P,T}\leq \Big(\sum_{j\in\mc{L}(\varepsilon, u,\alpha)}\|u_j\|^2\Big)^{\frac{1}{2}}. $ This completes the proof. \qed \section{Proof of Theorem 1} \label{s:dynamical} In order to finish the proof of Theorem~\ref{t:noConj}, we need to verify that the hypotheses of Theorem~\ref{t:main bound} hold with $T(h)=b\log h^{-1}$ for some $b>0$, and such that for all $x_1,x_2 \in U$ there is some splitting $\J_{x_1}=\mc{G}\sub{x_1,x_2}\cup \mc{B}\sub{x_1,x_2}$ of the set of tubes over $x_1 \in M$ with a set of `bad' tubes $\mc{B}\sub{x_1,x_2}$ satisfying $$(|\mc{B}\sub{x_1,x_2}|R(h)^{n-1})^{\frac{1}{6+\varepsilon_0}(1-\frac{p_c}{p})}\leq T(h)^{-\frac{1}{2}}$$ for some ${\varepsilon_0>0}$.
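As a quick consistency check, with $R(h)=r_1=h^{\varepsilon}$ and $T(h)=b\log h^{-1}$, and assuming the bound $r_1^{n-1}|\mc{B}\sub{x_1,x_2}|\leq h^{\varepsilon/3}$ established at the end of this section, the displayed condition indeed holds for $h$ small:
\[
\big(|\mc{B}\sub{x_1,x_2}|R(h)^{n-1}\big)^{\frac{1}{6+\varepsilon_0}(1-\frac{p_c}{p})}\leq h^{\frac{\varepsilon}{3(6+\varepsilon_0)}(1-\frac{p_c}{p})}\leq \big(b\log h^{-1}\big)^{-\frac{1}{2}}=T(h)^{-\frac{1}{2}},
\]
since any fixed positive power of $h$ is eventually smaller than $(\log h^{-1})^{-1/2}$.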
Fix $x_1,x_2\in U$ and let {$F_1,F_2:T^*M \to \mathbb{R}^{n+1}$ be smooth functions so that for $i=1,2$,} \begin{equation} \label{e:defFunction} \begin{gathered} S^*_{x_{i}}M=F_i^{-1}(0),\qquad \tfrac{1}{2}d(q,S^*_{x_{i}}M)\leq |F_i(q)|\leq 2d(q,S^*_{x_i}M),\qquad \max_{|\alpha|\leq 2}(|\partial^\alpha F_i(q)|)\leq 2 \\ dF_i(q)\text{ has a {right} inverse }{R}_{_{\!F_i}}(q)\text{ with }\|{R}_{_{\!F_i}}(q)\|\leq 2. \end{gathered} \end{equation} Define also $\psi_i:\mathbb{R}\times T^*\!M \to \mathbb{R}^{n+1}$ by $ \psi_i(t,\rho)=F_i\circ \varphi_t(\rho). $ To {find $\mc{B}\sub{x_1,x_2}$}, we apply the arguments from~\cite[Sections 2, 4]{CG19dyn}. In particular, fix $a>0$ and let $r_{t}:=a^{-1}e^{-a |t|}$. Suppose that $ d(x_2\,,\,\mc{C}_{x_1}^{n-1,r_{t_0},t_0})>r_{t_0}. $ Then, for $\rho_0\in S^*_{x_1}M$ with $ d(S^*_{x_2}M,\varphi_{t_0}(\rho_0))<r_{t_0}, $ we have, by~\cite[Lemma 4.1]{CG19dyn}, that there exists ${\bf w}\in T_{\rho_0}S^*_{x_1}M$ so that $$ d(\psi_{2})_{(t_0,\rho_0)}: \mathbb{R}\partial_t\times \mathbb{R} {\bf w} \to T_{\psi_2(t_0,\rho_0)}\mathbb{R}^{n+1} $$ has a left inverse $L_{(t_0,\rho_0)}$ with $$ \|L_{(t_0,\rho_0)}\|\leq C\sub{M}\max( a e^{C\sub{M}(a+\Lambda)|t_0|},1). $$ Next, let $\{\Lambda_{\rho_j}^\tau(r_1)\}_{j}$ be a $(\mathfrak{D}\sub{M},\tau,r_1)$-good cover for $S^*\!M$. We apply~\cite[Proposition 2.2]{CG19dyn} to construct $\mc{B}\sub{x_1,x_2}$ and $\mc{G}\sub{x_1,x_2}$. \begin{remark}We must point out that we are applying the proof of that proposition rather than the proposition as stated. {The only difference here is that the loops we are interested in go from a point $x_1$ to a point $x_2$ where $x_1$ and $x_2$ are not necessarily equal. This does not affect the proof.} \end{remark} {We use \cite[Proposition 2.2]{CG19dyn} to see that} there exist $\alpha_1=\alpha_1(M)>0$, $ \alpha_2=\alpha_2(M,a)$, and ${{\bf{C}_0}={\bf{C}}_0(M,a)}$ so that the following holds.
Let $r_0,r_1,r_2 >0$ satisfy \begin{equation} \label{e:assumeR} r_0 < r_1, \qquad r_1< \alpha_1\, r_2, \qquad r_2 \leq \min\{{R_0},{1}, \alpha_2\, e^{-\gamma T}\}, \qquad r_0 < {\tfrac{1}{3}}\, e^{-\Lambda T} r_2, \end{equation} where $\gamma= 5\Lambda+2a$ and $\Lambda>\Lambda_{\max}$, with $\Lambda_{\max}$ as in~\eqref{e:LambdaMax}. {Then,} for all balls $B\subset S^*_{x_1}M$ of radius ${R_0}>0$, there is a family {of points} $\{\rho_j\}_{j\in \mc{B}\sub{B}}\subset S^*_{x_1}M$ such that $$ |\mc{B}\sub{B}|\leq {\bf{C}}_0{\mathfrak{D}_n} \;r_2 \;\frac{R_0^{n-1}}{r_1^{n-1}}\; T\,e^{4({2}\Lambda+{a})T}, $$ and for $j\in \mc{G}\sub{B}:= \{j\in \J_{x_1}:\; {B(\rho_j,2r_1)}\cap B\neq \emptyset \}\setminus \mc{B}\sub{B}$ $$ \bigcup_{t\in[t_0,T]}\varphi_t\Big(\Lambda_{\rho_j}^\tau(r_1)\Big)\cap \Lambda_{S^*_{x_2}M}^\tau(r_1)=\emptyset. $$ {We proceed to apply \cite[Proposition 2.2]{CG19dyn}.} There is ${c\sub{M}r^{1-n}}\geq N\sub{\!{r}}>0$ so that for all $x_1\in M$, $S^*_{x_1}M$ can be covered by $N\sub{\!{r}}$ balls of radius $r$. Let {$0<R_0<1$ and} $\{B_{i}\}_{i=1}^{N\sub{\!{R_0}}}$ be such a cover. Fix ${0}<\varepsilon{<\varepsilon_1}<{\frac{1}{{4}}}$ and set \[r_0:=h^{{\varepsilon_1}}, \qquad r_1:=h^\varepsilon, \qquad r_2:=\tfrac{2}{\alpha_1}h^\varepsilon.\] Let {$T(h)=b \log h^{-1}$ with $0<b{<\frac{1}{4\Lambda_{\max}}<\frac{1-2\varepsilon_1}{2\Lambda_{\max}}}$ to be chosen later}.
Then, the assumptions in~\eqref{e:assumeR} hold provided $$ h^{\varepsilon}< \min \Big\{\tfrac{\alpha_1\alpha_2}{2} e^{- \gamma T}, {\tfrac{\alpha_1 {R_0}}{2}}\Big\},\qquad {h^{\varepsilon_1-\varepsilon}<{\tfrac{2}{3\alpha_1}} e^{-\Lambda T}.} $$ In particular, if we set $\alpha_3:= \tfrac{\alpha_1\alpha_2}{2}$, $\alpha_4:=\frac{2}{3\alpha_1}$, the assumptions in~\eqref{e:assumeR} hold provided {$h<\big(\frac{\alpha_1 R(h)}{2}\big)^{\frac{1}{\varepsilon}}$} and \begin{equation} \label{e:t0Temp} T(h)< {\min\Big\{}\frac{ \varepsilon}{ \gamma} \log h^{-1} + \frac{\log \alpha_3}{ \gamma}{\;,\; \frac{\varepsilon_1-\varepsilon}{\Lambda}\log h^{-1}+\frac{\log(\alpha_4)}{\Lambda}\Big\}}. \end{equation} Fix {$b>0$} and ${h_0>0}$ so that {$ b < \frac{{\min(\varepsilon,\varepsilon_1-\varepsilon)}}{12(2\Lambda+a)} $} {and \eqref{e:t0Temp} is satisfied for all $h<h_0$.} {Note that this implies that $b=b(M,a)$ and ${h_0=h_0(M,a)}$.} Let $\mc{B}\sub{x_1,x_2}:=\bigcup_{i=1}^{N\sub{{R_0}}}\mc{B}\sub{B_i}$. Then, for $j\in \mc{G}\sub{x_1,x_2}:= \J_{x_1}\setminus \mc{B}\sub{x_1,x_2}$, $$ \bigcup_{t\in[t_0,T]}\varphi_t\Big(\Lambda_{\rho_j}^\tau(r_1)\Big)\cap \Lambda_{S^*_{x_2}M}^\tau(r_1)=\emptyset. $$ Moreover, shrinking $h_0$ in a way depending only on $(M,a,\varepsilon)$, we have for $0<h<h_0$, $$ r_1^{n-1}|\mc{B}\sub{x_1,x_2}|\leq C\sub{M}{\bf{C}}_0{\mathfrak{D}_n}r_2 Te^{4(2\Lambda+a)T}\leq h^\frac{\varepsilon}{3}. $$ Therefore, putting $R(h)=r_1=h^\varepsilon$ and $T=T(h)=b\log h^{-1}$ in Theorem~\ref{t:main bound} proves Theorem~\ref{t:noConj}. \section{Anisotropic pseudodifferential calculus} \label{s:anisotropic} In this section, we develop the second microlocal calculi necessary to understand `effective sharing' of $L^2$ mass between two nearby points. That is, to answer the question: how much $L^2$ mass is necessary to produce high $L^\infty$ growth at two nearby points?
To that end, we develop a calculus associated to the co-isotropic $$ \Gamma_x:=\bigcup_{{|t|<\tfrac{1}{2}\inj(M)}} \varphi_t(\Omega_x),\qquad \Omega_x:=\{\xi\in T^*_xM:\; |1-|\xi|_g|<\delta\}, $$ which allows for localization to the Lagrangian leaves $\varphi_t(\Omega_x)$. In Section~\ref{S:uncertainty} we will see, using a type of uncertainty principle, that the calculi associated to two distinct points, {$x_\alpha,\, x_\beta \in M$}, are incompatible in the sense that, despite the fact that $\Gamma_{x_\alpha}$ and $\Gamma_{x_\beta}$ intersect in a dimension 2 submanifold, for {operators ${X_{x_\alpha}}$ and $X_{x_\beta}$ localizing to $\Gamma_{x_\alpha}$ and $\Gamma_{x_\beta}$ respectively}, $$ \|X_{x_\alpha}X_{x_\beta}\|_{L^2\to L^2}\ll \|X_{x_\alpha}\|_{L^2\to L^2}\|X_{x_\beta}\|_{L^2\to L^2}. $$ Let $\Gamma\subset T^*M$ be a co-isotropic submanifold and {$L=\{L_{{q}}\}_{q\in \Gamma}$ be a family of Lagrangian subspaces $L_{{q}}\subset T_{{q}}\Gamma$} that is integrable in the sense that if $V,W$ are smooth vector fields {on $T^*M$} such that $V_{q}, W_{q}\in L_{q}$ for all $q\in \Gamma$, then $[V,W]_{q}\in L_{q}$ for all ${q} \in \Gamma$. The aim of this section is to introduce a calculus of pseudodifferential operators associated to $(L,\Gamma)$ that allows for localization to $h^\rho$ neighborhoods of $\Gamma$ with $0\leq \rho<1$ and is compatible with localization to $h^\rho$ neighborhoods of the foliation of $\Gamma$ generated by $L$. This calculus is close in spirit to those developed in~\cite{SjZw:99} and~\cite{DyZa}. To see the relationships between these calculi, note that the calculus in~\cite{DyZa} allows for localization to any leaf of a Lagrangian foliation defined over an open subset of $T^*M$ and that in~\cite{SjZw:99} allows for localization to a single hypersurface.
The calculus developed in this paper is designed to allow localization along leaves of a Lagrangian foliation defined only over a co-isotropic submanifold of $T^*M$. In the case that the co-isotropic is a whole open set, this calculus is the same as the one developed in~\cite{DyZa}. Similarly, in the case that the co-isotropic is a hypersurface and no Lagrangian foliation is prescribed, the calculus becomes that developed in~\cite{SjZw:99}. \begin{definition} Let $\Gamma$ be a co-isotropic submanifold and $L$ a Lagrangian foliation on $\Gamma$. {Fix $0\leq \rho<1$ and let $k$ be a positive integer}. We say that $a\in S^{{k}}_{\Gamma,L,\rho}$ if $a\in C^\infty(T^*M)$, $a$ is supported in an $h$-independent compact set, and \begin{equation} \label{e:anSymbEst} V_1\dots V_{\ell_1}W_1\dots W_{\ell_2}a=O(h^{-\rho\ell_2}\langle h^{-\rho}d(\Gamma,\cdot)\rangle^{{k}-\ell_2}) \end{equation} {where $W_1,\dots W_{\ell_2}$ are any vector fields on $T^*M$, $V_1,\dots V_{\ell_1}$ are vector fields on $T^*M$ with $(V_1)_q,\dots (V_{\ell_1})_q\in L_q$ for $q\in \Gamma$}, {and $q \mapsto d(\Gamma, q)$ is the distance from $q$ to $\Gamma$ induced by the Sasaki metric on $T^*M$}. \end{definition} We also define symbol classes associated only to the co-isotropic. \begin{definition} Let $\Gamma$ be a co-isotropic submanifold. We say that $a\in S^k_{\Gamma,\rho}$ if $a\in C^\infty(T^*M)$, $a$ is supported in an $h$-independent compact set, and $$ V_1\dots V_{\ell_1}W_1\dots W_{\ell_2}a=O(h^{-\rho\ell_2}\langle h^{-\rho}d(\Gamma,\cdot)\rangle^{k-\ell_2}) $$ where $V_1,\dots V_{\ell_1}$ are tangent vector fields to $\Gamma$, and $W_1,\dots W_{\ell_2}$ are any vector fields. \end{definition} \subsection{Model case} The goal of this section is to define the quantization of symbols in $S^{{k}}_{\Gamma_0,L_0,\rho}$, where $\Gamma_0, L_0$ are a model pair of co-isotropic and Lagrangian foliation defined below. 
The model co-isotropic submanifold of dimension $2n-r$ is {$$ \Gamma_0:=\{(x',x'',\xi', \xi'') \in {\mathbb R}^r\times {\mathbb R}^{n-r} \times {\mathbb R}^r \times {\mathbb R}^{n-r}:\; x'=0\} $$ with Lagrangian foliation $$ L_0:= \{L_{0,q}\}_{q \in \Gamma_0}, \qquad L_{0,q}=\text{span}\{\partial_{\xi_i},\,i=1,\dots n\}\subset T_q\Gamma_0. $$ } Note that in this model case the distance from a point $(x,\xi)$ to $\Gamma_0$ is controlled by $|x'|$. Therefore, $a\in S^{k}_{\Gamma_0,L_0,\rho}$ if and only if $a$ is supported in an $h$-independent compact set and for all $(\alpha, \beta) \in \mathbb N^n \times \mathbb N^n$ there exists $C_{\alpha, \beta}>0$ such that {$$ |\partial_{x}^{\alpha} \partial_{\xi}^\beta a|\leq C_{\alpha,\beta} h^{-\rho|\alpha|}\langle h^{-\rho}|x'|\rangle^{k-|\alpha|}. $$} In the model case, it will be convenient to define $\tilde{a}\in C^\infty(\mathbb{R}^n_x\times \mathbb{R}^n_\xi\times \mathbb{R}^r_\lambda)$ such that $$ a(x,\xi)=\tilde{a}(x,\xi,h^{-\rho}x'), $$ and for all $(\alpha', \alpha'', \beta, \gamma ) \in \mathbb N^r \times \mathbb N^{n-r} \times \mathbb N^n\times \mathbb N^r$ there exists $C_{\alpha, \beta, \gamma}>0$ such that \begin{equation}\label{E: symbol model L} |\partial_{x'}^{\alpha'}\partial_{x''}^{\alpha''} \partial_\xi^{\beta} \partial_\lambda^{\gamma} \tilde{a}(x,\xi,\lambda)|\leq C_{\alpha, \beta, \gamma} h^{-\rho|\alpha''|}\langle \lambda\rangle^{k-|\gamma|-|\alpha''|}. \end{equation} Similarly, if $a\in S^k_{\Gamma_0,\rho}$, then for all $(\alpha', \alpha'', \beta, \gamma ) \in \mathbb N^r \times \mathbb N^{n-r} \times \mathbb N^n\times \mathbb N^r$ there exists $C_{\alpha, \beta, \gamma}>0$ such that \begin{equation}\label{E: symbol model no L} |\partial_{x'}^{\alpha'}\partial_{x''}^{\alpha''} \partial_\xi^{\beta} \partial_\lambda^{\gamma} \tilde{a}(x,\xi,\lambda)|\leq C_{\alpha, \beta, \gamma} \langle \lambda\rangle^{k-|\gamma|}. \end{equation} \begin{definition} The symbols associated with this submanifold are as follows. 
We say $a\in \widetilde{S^{k}_{\Gamma_0,L_0,\rho}}$ if $a\in C^\infty(\mathbb{R}_x^n\times \mathbb{R}_\xi^{n}\times \mathbb{R}_\lambda^{r})$ {satisfies \eqref{E: symbol model L}} and $a$ is supported in an $h$-independent set in $(x,\xi)$. {If we have the improved estimates} \eqref{E: symbol model no L} then we say that $a\in \widetilde{S^k_{\Gamma_0,\rho}}$. \end{definition} \begin{remark} {While there is no $\rho$ in the definition of $\widetilde{S^k_{\Gamma_0,\rho}}$, we keep it in the notation for consistency.} \end{remark} {Let $a\in {\widetilde{S^{k}_{\Gamma_0,L_0,\rho}}}$.} We then define $$ [{\widetilde{Op}}_{h}(a)] u(x):=\frac{1}{(2\pi h)^n}\int e^{\frac{i}{h}\langle x-y,\xi\rangle}a(x,\xi,h^{-\rho}x'){u(y)}{dy}d\xi. $$ Since $a\in {\widetilde{S^{k}_{\Gamma_0,L_0,\rho}}}$ is compactly supported in $x$, {there exists $C>0$ such that} on the support of the integrand $\lambda \leq {C} h^{-\rho}$ and hence $ h\leq {C} h^{1-\rho}\langle \lambda\rangle^{-1}. $ This will be important when computing certain asymptotic expansions. \begin{lemma}\label{E: boundedness} Let $k\in \mathbb{R}$ and $a\in \widetilde{S^k_{\Gamma_0,L_0,\rho}}$. Then, $$ \|{\widetilde{Op}}_h(a)\|_{L^2\to L^2}\leq C \sup_{\mathbb{R}^{2n}}|a(x,\xi,{h^{-\rho}x'})|+O(h^{{-\rho}\max(k,0)+\frac{1-\rho}{2}}) $$ \end{lemma} \begin{proof} Define $T_\delta:L^2(\mathbb{R}^n)\to L^2(\mathbb{R}^n)$ by \begin{equation} \label{e:rescaling} T_\delta u(x):=h^{\frac{n}{2}\delta}u(h^{\delta}x). \end{equation} Then $T_\delta$ is unitary and, for $a\in\widetilde{S^k_{\Gamma_0,L_0,\rho}}$, $$ {\widetilde{Op}}_h(a)u=T_{\frac{1+\rho}{2}}^{-1}Op_1(a_h)T_{\frac{1+\rho}{2}}u, \qquad a_h{(x,\xi)}:=a(h^{\frac{1+\rho}{2}}x,h^{\frac{1-\rho}{2}}\xi,h^{\frac{1-\rho}{2}}x'). $$ Then, {for all $\alpha, \beta \in \mathbb N^n$ there exists $C_{\alpha, \beta}$ such that} $ |\partial_x^\alpha \partial_\xi^\beta a_h|\leq {C_{\alpha, \beta}}h^{\frac{1-\rho}{2}(|\alpha|+|\beta|)}\langle h^{\frac{1-\rho}{2}}x'\rangle^{k-|\alpha|}. 
$ Now, {since $a_h\in S_{\frac{1-\rho}{2}}$,} by~\cite[Theorem 4.23]{EZB} {there is a universal constant $M>0$ with} $$ \|Op_1(a_h)\|_{L^2\to L^2}\leq C\sum_{|\alpha|\leq Mn}\sup_{{\mathbb{R}^{2n}}}|\partial^\alpha a_h|\leq C\sup|a|+C_ah^{{-}\max({\rho} k,0)+\frac{1-\rho}{2}}. $$ \vspace{-.8cm} \end{proof} \begin{lemma} \label{l:compose} Suppose that $a\in \widetilde{S^{k_1}_{\Gamma_0,L_0,\rho}}$, $b\in \widetilde{S^{k_2}_{\Gamma_0,L_0,\rho}}$. Then, $ {\widetilde{Op}}_h(a){\widetilde{Op}}_h(b)={\widetilde{Op}}_h(c)+O(h^\infty)_{L^2\to L^2} $ where $c\in \widetilde{S^{k_1+k_2}_{\Gamma_0,L_0,\rho}}$ satisfies \begin{equation} \label{e:composed} c=ab+O(h^{1-\rho})_{\widetilde{S^{k_1+k_2-1}_{\Gamma_0,L_0,\rho}}}. \end{equation} In particular, \begin{equation} \label{e:fullAsymptotic} c\sim \sum_{j}\sum_{|\alpha|=j}\frac{i^j}{j!}\big((hD_{x''})^{\alpha''}(hD_{x'}+h^{1-\rho}D_\lambda)^{{\alpha'}}b\big) D^\alpha_{\xi}a. \end{equation} If instead, $a\in \widetilde{S^{k_1}_{\Gamma_0,\rho}}$ and $b\in \widetilde{S^{k_2}_{\Gamma_0,\rho}}$, then the remainder in~\eqref{e:composed} lies in $h^{1-\rho}\widetilde{S^{k_1+k_2-1}_{\Gamma_0,\rho}}$. \end{lemma} \begin{proof} With $T_\delta$ as in~\eqref{e:rescaling}, we have $ {\widetilde{Op}}_h(a){\widetilde{Op}}_h(b)=T^{-1}_{\rho/2}Op_h(a_h)Op_h(b_h)T_{\rho/2} $ where $$ a_h=a(h^{\frac{\rho}{2}}x,h^{-\frac{\rho}{2}}\xi,h^{-\frac{\rho}{2}}x'),\qquad b_h=b(h^{\frac{\rho}{2}}x,h^{-\frac{\rho}{2}}\xi,h^{-\frac{\rho}{2}}x'). $$ Now, {for all $\alpha, \beta \in \mathbb N^n$ there exists $C_{\alpha, \beta}$ such that} \begin{gather*} |\partial_x^\alpha \partial_\xi^\beta a_h|\leq {C_{\alpha, \beta}}h^{-\frac{\rho}{2}(|\alpha|+|\beta|)}\langle h^{-\frac{\rho}{2}}x'\rangle^{k_1-|\alpha|},\quad |\partial_x^\alpha \partial_\xi^\beta b_h|\leq {C_{\alpha, \beta}}h^{-\frac{\rho}{2}(|\alpha|+|\beta|)}\langle h^{-\frac{\rho}{2}}x'\rangle^{k_2-|\alpha|}. 
\end{gather*} In particular, using that $a$ and $b$ are compactly supported, $a_h\in h^{-\max(\rho k_1,0)}S_{\rho/2}$ and $b_h\in h^{-\max(\rho k_2,0)}S_{\rho/2}$ and hence~\cite[Theorems 4.14,4.17]{EZB} apply. In particular, {if we let $M>0$ and $\tilde k:= \max(k_1,0)+\max(k_2,0)$, we obtain} $ Op_h(a_h)Op_h(b_h)=Op_h(c_h) $ where, for any $N>0$, \begin{align*} &c_h(x,\xi)= \sum_{j=0}^{N-1}\sum_{|\alpha|=j}\frac{h^ji^j}{j!}(D_{\xi}^\alpha a_h(x,\xi))(D_x^\alpha b_h(x,\xi)) +O(h^{-\rho{\tilde k}+N(1-\rho)})_{S_{\rho/2}}\\ &=\sum_{j=0}^{N-1}\sum_{|\alpha|=j}\sum_{\alpha'+\alpha''=\alpha}\!\! \!\!\frac{h^{(1-\rho){j}}i^j}{j!}(D^\alpha_\xi a)_h[(h^{\rho}D_{x''})^{\alpha''}(h^{\rho}D_{x'}+D_\lambda)^{\alpha'}b]_h+O(h^{-\rho {\tilde k}+N(1-\rho)})_{S_{\rho/2}}. \end{align*} Choosing $ N=\max\Big(k_1+k_2,\frac{\rho\tilde k+M}{1-\rho}\Big), $ the remainder is $O(h^{M})_{S_{\rho/2}}$. Moreover, since $a$ and $b$ were compactly supported, we may assume, introducing an $h^\infty$ error, that the remainder is supported in $\{(x, \xi): \, |(x,\xi)|\leq Ch^{-\frac{\rho}{2}}\}.$ Putting $$c=\sum_{j=0}^{N-1}\sum_{|\alpha|=j}\sum_{\alpha'+\alpha''=\alpha} \frac{i^j}{j!}(D^\alpha_\xi a)[(hD_{x''})^{\alpha''}(hD_{x'}+h^{1-\rho}D_\lambda)^{\alpha'}b], $$ we thus have $T_{\rho/2}^{-1}Op_h(c_h)T_{\rho/2}={\widetilde{Op}}_h(c)+O(h^M)_{\mc{D}'\to C^\infty}$ as claimed. \end{proof} \begin{lemma} \label{l:commutator} Suppose that $a\in \widetilde{S^{m_1}_{\Gamma_0,L_0,\rho}}$, $b\in \widetilde{S^{m_2}_{\Gamma_0,L_0,\rho}}$. Then, $$ [{\widetilde{Op}}_h(a),{\widetilde{Op}}_h(b)]=-ih^{1-\rho}{\widetilde{Op}}_h(c)+O(h^\infty)_{L^2\to L^2} $$ where $c\in \widetilde{S^{m_1+m_2-2}_{\Gamma_0,L_0,\rho}}$ satisfies \begin{equation*} \label{e:commutator1} c=h^\rho\sum_{i=1}^n(\partial_{\xi_i}a\partial_{x_i}b-\partial_{\xi_i}b\partial_{x_i}a)+\sum_{i=1}^r(\partial_{\xi_i}a\partial_{\lambda_i}b-\partial_{\lambda_i}a\partial_{\xi_i}b)+O(h^{1-\rho})_{\widetilde{S^{m_1+m_2-2}_{\Gamma_0,L_0,\rho}}}. 
\end{equation*} If instead, $a\in \widetilde{S^{m_1}_{\Gamma_0,\rho}}$ and $b\in \widetilde{S^{m_2}_{\Gamma_0,\rho}}$, then the remainder lies in $h^{1-\rho}\widetilde{S^{m_1+m_2-2}_{\Gamma_0,\rho}}$. Moreover, if $a\in S^{\operatorname{comp}}(\mathbb{R}^{2n})$ is independent of $\lambda$ and ${\partial_{\xi'}}a=e(x,\xi)x'$ {where $e(x,\xi)$ is an $r\times r$ matrix for each $(x,\xi)$,} then $$ [{\widetilde{Op}}_h(a),{\widetilde{Op}}_h(b)]=-ih{\widetilde{Op}}_h(c)+O(h^\infty)_{{\Psi^{-\infty}}} $$ with $ c=H_ab+\sum_{i=1}^r(e\lambda)_i\partial_{\lambda_i}b+O(h^{1-\rho})_{\widetilde{S^{m_2-1}_{\Gamma_0,L_0,\rho}}}. $ Similarly, the same conclusion holds if $b\in\widetilde{ S^{m_2}_{\Gamma_0,\rho}}$ with the error term in $c$ being $O(h^{1-\rho})_{\widetilde{S^{m_2-1}_{\Gamma_0,\rho}}}$. \end{lemma} \begin{proof} In each case, we need only apply the formula~\eqref{e:fullAsymptotic}. \end{proof} \subsection{Reduction to normal form} {In order to define the quantization of symbols in $S_{\Gamma,L, \rho}$ for general $(\Gamma, L)$, we first explain how to reduce the problem to the model case $(\Gamma_0, L_0)$.} \begin{lemma} Let $L$ be a Lagrangian foliation over a co-isotropic submanifold $\Gamma {\subset {\mathbb R}^{2n}}$ of dimension $2n-r$. Then for each $(x_0,\xi_0)\in \Gamma$ there is a neighborhood $U_0$ of $(x_0,\xi_0)$ and a symplectomorphism $\kappa:U_0\to V_0\subset T^*\mathbb{R}^n$ such that $$ \kappa(\Gamma\cap U_0)=\Gamma_0\cap V_0 \qquad \text{and}\qquad {(\kappa_*)_q L_q=L_{0,\kappa(q)} \;\;\text{for}\;\; q\in \Gamma \cap U_0}.$$ \end{lemma} \begin{proof} We first put $\Gamma$ in normal form. That is, we build symplectic coordinates {$(y, \eta)$} such that \begin{equation} \label{e:coisoForm} \Gamma=\{{(y, \eta):\,}y_1=\dots =y_r=0\}. \end{equation} First, assume $r=1$ and let $f_1\in C^\infty(T^*M)$ define $\Gamma$. By Darboux's theorem there are symplectic coordinates such that $y_1=f_1$ and the proof of \eqref{e:coisoForm} is complete for $r=1$. 
Next, assume that we can put any co-isotropic submanifold of co-dimension $r-1$ in normal form. Let $f_1,\dots, f_r\in C^\infty(T^*M)$ define $\Gamma$. Then,{ since $\Gamma$ is co-isotropic, for $X\in T\Gamma$ and $i=1, \dots, r$} $$ \sigma(X ,H_{f_i})=df_i(X) =0. $$ In addition, since $\Gamma$ is co-isotropic, $(T\Gamma)^\perp\subset T\Gamma$ and so $H_{f_i}\in T\Gamma$ for all $i=1, \dots, r$. In particular, $$ \{f_i,f_j\}= H_{f_i}f_j=df_j(H_{f_i})=0, $$ on $\Gamma$. Now, using Darboux's theorem, choose symplectic coordinates ${(y, \eta)=}(y_1,y',\eta_1,\eta')$ so that $y_1=f_1$ and $(x_0,\xi_0)\mapsto (0,0)$. Then, $\partial_{\eta_1} f_j=\{f_j,y_1\}=0$ on $\Gamma$, for $j=2,\dots, r$. Next, observe that $ \Gamma=\{{(y, \eta):\,}y_1=f_2=\dots ={f_{r}}=0\}, $ and $dy_1,\{df_j\}_{j=2}^{{r}}$ are independent. Thus, {since $\partial_{\eta_1} f_j=0$ on $\Gamma$, $$ \Gamma=\{{(y, \eta):\;}y_1=0, \;\; f_j(0,y',0,\eta')=0,\,\;\;j=2,\dots, {r}\}. $$ } Now, $\{y_1=\eta_1=0\}\cap \Gamma$ is a co-isotropic submanifold of co-dimension $r-1$ in $T^*\{y_1=0\}$. Hence, by induction, there are symplectic coordinates {$(y_2, \dots, y_n, \eta_2, \dots, \eta_n)$} on $T^*\{y_1=0\}$ such that $$ \Gamma\cap \{y_1=\eta_1=0\}=\{y_1=\eta_1=0, \;\; y_2=\dots =y_r=0\}. $$ In particular, $$ {\{ (y', \eta'): \; f_j(0,y',0,\eta')=0,\,\;\;j=2,\dots, {r}\}}=\{y_2=\dots = y_r=0\}. $$ Thus, extending {$(y_2, \dots, y_n, \eta_2, \dots, \eta_n)$} to be independent of $(y_1,\eta_1)$ puts $\Gamma$ in the form~\eqref{e:coisoForm}. Next, we adjust the coordinates to be adapted to $L$ along $\Gamma$. First, define $\tilde{y}_i:=y_i$ for $i=1,\dots r$. 
{Then, since $L\subset T\Gamma$, for every $i=1, \dots, r$ we have that $d\tilde{y_i}(X)|_{\Gamma}$ is well defined for $X\in L$ and $d\tilde{y_i}(X)|_{\Gamma}=0$.} Next, since $L$ is integrable, the Frobenius theorem~\cite[Theorem 19.21]{LeeBook13} shows that there are coordinates {$(\tilde y_{r+1}, \dots, \tilde y_n, \tilde \xi_1, \dots, \tilde \xi_n)$ on $\Gamma$, defined in a neighborhood of $(0,0)$,} such that $L$ is the annihilator of $d\tilde{y}$. {Since we know that for every $X\in L$ $$ \sigma(X,H_{\tilde{y}_i})={d\tilde y_i}(X)=0, $$ and $L$ is Lagrangian, we conclude that $H_{\tilde{y}_i}\in L$.} In particular, since $L$ is the annihilator of $d\tilde{y}$, $$\{\tilde{y}_i,\tilde{y}_j\}=H_{\tilde{y}_i}\tilde{y}_j=d\tilde{y}_j(H_{\tilde{y}_i})=0.$$ Now, extend {$(\tilde y_{r+1}, \dots, \tilde y_n, \tilde \xi_1, \dots, \tilde \xi_n)$ outside $\Gamma$ to be independent of $(\tilde y_{1}, \dots, \tilde y_r)$}. Then, $ \{\tilde{y}_i,\tilde{y}_j\}=0 $ in a neighborhood of $(x_0,\xi_0)$ and hence, by Darboux's theorem, there are functions $\{\tilde{\eta}_j\}_{j=1}^n$, such that $\{\tilde{y}_i,\tilde{\eta}_j\}=\delta_{ij}$ and $\{\tilde{\eta}_i,\tilde{\eta}_j\}=0.$ In particular, in the $(\tilde{y},\tilde{\eta})$ coordinates, $ \Gamma=\{(\tilde y, \tilde \eta):\;\; \tilde y_1=\dots =\tilde y_r=0\}, $ and $d\tilde{y}(L)|_{\Gamma}=0$. Hence, $L=\operatorname{span} \{\partial_{\tilde{\eta}_i}\}$ as claimed. \end{proof} In order to create a well-defined global calculus of pseudodifferential operators associated to $(\Gamma,L)$, we will need to show invariance under conjugation by FIOs preserving the pair $(L_0,\Gamma_0)$. \begin{proposition} \label{p:invariance} Suppose that $U_0,V_0$ are neighborhoods of $(0,0)$ in $T^*\mathbb{R}^n$ and $\kappa:U_0\to V_0$ is a symplectomorphism such that \begin{equation} \label{e:specialSymplectic} \kappa(0,0)=(0,0),\qquad \kappa(\Gamma_0\cap U_0)=\Gamma_0\cap V_0,\qquad \kappa_*|_{\Gamma_0} L_0=L_0|_{\Gamma_0}. 
\end{equation} Next, let $T$ be a semiclassically elliptic FIO microlocally defined in a neighborhood of $$\big((0,0),(0,0)\big){\in T^*\mathbb{R}^n \times T^*\mathbb{R}^n}$$ quantizing $\kappa$. Then, for $a\in \widetilde{S^k_{\Gamma_0,L_0,\rho}}$, there {are $b\in \widetilde{S^k_{\Gamma_0,L_0,\rho}}$ and $c\in \widetilde{S^{k-1}_{\Gamma_0,L_0,\rho}}$} such that $$ T^{-1}{\widetilde{Op}}_h(a)T={\widetilde{Op}}_h(b),\qquad b=a\circ K_{\kappa}+ h^{1-\rho}c $$ where {$K_{\kappa}:T^*{\mathbb R}^n \times {\mathbb R}^r \to T^*{\mathbb R}^n \times {\mathbb R}^r$ is defined by} {\begin{equation*} K_{\kappa}(y,\eta,\mu)=\Big(\kappa(y,\eta), \pi_{x'}(\kappa(y, \eta)) \frac{|\mu|}{|y'|}\Big), \end{equation*} and $\pi_{x'}: T^*{\mathbb R}^n \to {\mathbb R}^r$ is the projection onto the first $r$-spatial coordinates.} In addition, if $a\in \widetilde{S^k_{\Gamma_0,\rho}}$, then $c\in \widetilde{S^{k-1}_{\Gamma_0,\rho}}$ and $b\in \widetilde{S^k_{\Gamma_0,\rho}}$. \end{proposition} To prove Proposition~\ref{p:invariance}, we follow~\cite{SjZw:99}. First, observe that the {proposition} holds with $\kappa=\operatorname{Id}$ since then {$T$} is a standard pseudodifferential operator. In addition, {the proposition also} holds {whenever for a given $j \in\{1, \dots, n\}$ we work with} $$ \kappa(y,\eta):=(y_1,\dots,y_{j-1},-y_j,y_{j+1},\dots,y_n\;,\;\eta_1,\dots,{\eta_{j-1}, -\eta_j},\eta_{j+1},\dots, \eta_n). $$ {Indeed, this follows from the fact that in this case an FIO quantizing $\kappa$ is} $$Tu(x)=u(x_1,\dots x_{j-1},-x_j,x_{j+1},\dots, x_n)$$ {and so the conclusion of the proposition follows from a direct computation together with the identity case.} Thus, we may assume that \begin{equation} \label{e:positivity} \kappa(y,\eta)=(x,\xi)\qquad\text{ implies }\qquad x_iy_i\geq 0,\qquad i=1,\dots n. \end{equation} \begin{lemma}\label{L: deformation of kappa} Let $\kappa$ be a symplectomorphism satisfying~\eqref{e:specialSymplectic} and~\eqref{e:positivity}. 
Then, there is a piecewise smooth family of symplectomorphisms $[0,1]\ni t\mapsto\kappa_t$ such that $\kappa_t$ satisfies~\eqref{e:specialSymplectic}, \eqref{e:positivity}, $\kappa_0=\operatorname{Id}$, and $\kappa_1=\kappa$. \end{lemma} \begin{proof} {In what follows we assume that $\kappa(y, \eta)= (x,\xi)$ and reorder the coordinates so that $(y',y'',\eta',\eta'')\in T^*\mathbb{R}^n$ is represented as $(y',\eta',y'',\eta'')\in {\mathbb R}^{2r}\times {\mathbb R}^{2(n-r)}$. Define $\xi'$ and $\kappa''{=(x''(y'',\eta),\xi''(y'',\eta))}$ by} $$ \kappa|_{\Gamma_0}:(0, {\eta'}, y'',{\eta''})\mapsto (0,\xi'(y'',\eta),\kappa''(y'',\eta)). $$ {Now, since $(\kappa_*)|_{\Gamma_0}L_0=L_0$, we have for $i=1,\dots n$, \begin{equation}\label{e:Tmapping} \kappa_* \partial_{\eta_i}=\frac{\partial x_j}{\partial \eta_i}\partial_{x_j}+\frac{\partial \xi_j}{\partial\eta_i}\partial_{\xi_j}{\in} L_0 \end{equation} and hence, \begin{equation} \label{e:xindep} \partial_{\eta}x|_{\Gamma_0}\equiv 0. \end{equation}} {Next, since $\kappa$ preserves $\Gamma_0$, $\{\kappa^*x_i\}_{i=1}^r$ defines $\Gamma_0$ and $ \operatorname{span}\{d\kappa^*x_i|_{\Gamma_0}\}_{i=1}^r=\operatorname{span}\{dy_i|_{\Gamma_0}\}_{i=1}^r, $ $$ \operatorname{span}\{H_{\kappa^*x_i}|_{\Gamma_0}\}_{i=1}^r=\operatorname{span}\{H_{y_i}|_{\Gamma_0}\}_{i=1}^r. $$ By Jacobi's theorem, $\kappa_*H_{\kappa^*x^i}=H_{x_i}$. Therefore, $$(\kappa|_{\Gamma_0})_* \Big(\operatorname{span} \{H_{y_i}\}_{i=1}^r\big|_{\Gamma_0}\Big)=\operatorname{span} \{H_{x_i}\}_{i=1}^r\big|_{\Gamma_0}, $$ and we conclude from~\eqref{e:Tmapping} that $\xi''|_{\Gamma_0}$ is independent of $\eta'$, and hence that $\kappa''$ is independent of $\eta'$. In particular, $\kappa''$ is a symplectomorphism on $T^*\mathbb{R}^{n-r}$.} This also implies that for each fixed $(y'',\eta'')$, the map $\eta'\mapsto \xi'(y'', {\eta', \eta''})$ is a diffeomorphism. 
Writing $$\kappa''(y'',\eta'')=(x''(y'',\eta''),\xi''(y'',\eta'')),$$ we have {by~\eqref{e:xindep} that} $\partial_{\eta''}x''=0$, and hence $x''=x''(y'')$. Now, since $\kappa''$ is symplectic, $$ (\partial_{\eta''}\xi'' d\eta''+\partial_{y''}\xi''dy'')\wedge \partial_{y''}x''dy''=d\eta''\wedge dy'', $$ {and so we conclude that} \begin{equation}\label{E:1 and 2} (\partial_{y''}x'')^t\partial_{\eta''}\xi''=\operatorname{Id}, \qquad \qquad (\partial_{y''}x'')^t\partial_{y''}\xi'' \;\;\text{ is diagonal.} \end{equation} The first equality in \eqref{E:1 and 2} {gives that $\partial_{\eta''}\xi''$ is a function of $y''$ only, and hence there exists a function $F=F(y'')$ such that} $\xi''(y'', \eta'')=[({\partial x''}(y''))^t]^{-1}(\eta''-F(y'')).$ Therefore, calculating on $\eta''=F(y'')$, the second {statement in \eqref{E:1 and 2}} implies that $ -\partial_{y''}F(y'')dy''\wedge dy''=0. $ In particular, $ d(F(y'')\cdot dy'')=0. $ It follows from the Poincar\'e lemma that, shrinking the neighborhood of {$(0,0)$} to be simply connected if necessary, $F(y'')\cdot dy''=d\psi(y'')$ for {some function $\psi=\psi(y'')$}. Hence, \begin{equation} \label{e:GammaSymp} \kappa''(y'',\eta'')=\Big(x''(y'')\;,\; [(dx''(y''))^t]^{-1}(\eta''-{\partial \psi}(y''))\Big). \end{equation} Now, every symplectomorphism of the form~\eqref{e:GammaSymp} preserves $L_0$. Hence, we can deform $\kappa''$ to the identity by putting $\psi_t=t\psi$ and deforming $x''$ to the identity. Since the assumption in~\eqref{e:positivity} implies $\partial_{y''}x''>0$, this can be done simply by taking $ x''_t=(1-t)\operatorname{Id}+t x''. $ Putting $\kappa''_t=(x''_t,\xi''_t)$, we obtain a family with $\kappa''_0=\operatorname{Id}$ and $\kappa''_1=\kappa''$. Now, composing $\kappa$ with $$ (y',\eta';y'',\eta'')\mapsto (y',\eta';(\kappa''_t)^{-1}(y'',\eta'')) $$ we reduce to the case that $\kappa''=\operatorname{Id}$. 
In particular, we need only consider the case in which \begin{equation} \label{e:reducedKappa} \kappa(y',\eta',y'',\eta'') =\Big(f(y,\eta)y'\;,\;\xi'(y'',\eta)+h_0(y,\eta)y' \;,\; (y'',\eta'')+h_1(y,\eta)y'\Big). \end{equation} where $f(y,\eta)\in \mathbb{GL}_r$, {$h_0(y, \eta)$ is an $r\times r$ matrix, and $h_1(y, \eta)$ is an $ 2(n-r)\times r$ matrix}. Next, we claim that {the projection map from $\operatorname{graph}(\kappa)$ to ${\mathbb R}^{2n}$ defined as } $ (x,\xi;y,\eta)\mapsto (x,\eta) $ is a local diffeomorphism. To see this, note that for $|y'|$ small the map $(x'',\eta'')\mapsto (y'',\xi'')$ is a diffeomorphism, that for each fixed $(y'',\eta'')$ the map $\eta'\mapsto \xi'$ is a diffeomorphism, and that $\det \partial_{y'}x'|_{\Gamma_0}\neq 0$. Thus, $\kappa$ has a generating function $\phi$: $ \kappa: ({\partial_\eta\phi}(x,\eta),\eta)\mapsto (x, {\partial_x \phi}(x,\eta)), $ such that $$ \det {\partial_{x \eta}^2\phi}(0,0)\neq 0,\qquad \partial_{\eta'}\phi(0,x'',\eta)=0.$$ Now, {writing $\kappa=(\kappa', \kappa'')$, we have} $\kappa''=\operatorname{Id}$ at $x'=0$. Therefore, $$ \partial_{\eta''}\phi(0,x'',\eta)=x'',\qquad \partial_{x''}\phi(0,x'',\eta)=\eta'' $$ and we have $\phi(0,x'',\eta)=\langle x'',\eta''\rangle +C$ {for some $C\in \mathbb{R}$}. We may choose $C=0$ to obtain \begin{equation} \label{e:generate} \phi(x,\eta)=\langle x'',\eta''\rangle +g(x,\eta)x', \end{equation} for some $g:{\mathbb R}^{2n} \to\mathbb{M}_{1\times r} $. Finally, since $\kappa(0,0)=(0,0)$ and $\partial_{x \eta}^2\phi$ is non-degenerate, we have $\partial_{x'}\phi(0,0)=g(0,0)=0$ and $\partial_{\eta'}g$ is non-degenerate. In fact \eqref{e:positivity} implies that as a quadratic form \begin{equation} \label{e:posGen} \partial_{\eta'}g>0. 
\end{equation} Observe next that every $\phi$ such that~\eqref{e:generate} holds, for some $g$ satisfying~\eqref{e:posGen} with $g(0,0)=0$, generates a canonical transformation satisfying~\eqref{e:reducedKappa} and~\eqref{e:positivity}. In particular, the symplectomorphism satisfies~\eqref{e:specialSymplectic}. Thus, we can deform from the identity by putting $g_t=(1-t)\eta'+tg$. \end{proof} Finally, we proceed with the proof of Proposition~\ref{p:invariance}. \begin{proof}[Proof of Proposition~\ref{p:invariance}] Let $\kappa_t$ be {as in Lemma \ref{L: deformation of kappa}. That is, a piecewise smooth deformation from $\kappa_0=\operatorname{Id}$ to $\kappa_1=\kappa$} such that $\kappa_t$ preserves $\Gamma_0$ and $(\kappa_t)_*|_{\Gamma_0}$ preserves $L_0$. Let $T_t$ be a piecewise smooth family of elliptic FIOs defined microlocally near $(0,0)$, quantizing $\kappa_t$, and satisfying \begin{equation}\label{E:FIO T} hD_tT_t+T_tQ_t=0,\qquad T_0=\operatorname{Id}. \end{equation} Here, $Q_t$ is a smooth family of pseudodifferential operators with symbols $q_t$ satisfying $ \partial_t\kappa_t=(\kappa_t)_*H_{q_t}. $ (Such an FIO exists, for example, by~\cite[Chapter 10]{EZB}, and $q_t$ exists by~\cite[Theorems 11.3, 11.4]{EZB}.) Next, define $$A_t:=T_t^{-1}{\widetilde{Op}}_h(a)T_t.$$ Note that $ T^{-1}{\widetilde{Op}}(a)T=T^{-1}T_1T_1^{-1}{\widetilde{Op}}(a)T_1T_1^{-1}T +O(h^\infty)_{\Psi^{-\infty}}. $ Hence, since the proposition follows by direct calculation when $\kappa=\operatorname{Id}$, we may assume that $T=T_1$. {In that case, our goal is to find a symbol $b$ such that $A_1={\widetilde{Op}}_h(b)$}. First, observe that \eqref{E:FIO T} implies that {$hD_tT_t^{-1}-Q_tT_t^{-1}=0$} and so \begin{equation}\label{E: A_t} hD_t A_t=[Q_t,A_t],\qquad A_0={\widetilde{Op}}_h(a). 
\end{equation} We will construct $b_t\in \widetilde{S^k_{\Gamma_0,L_0,\rho}}$ such that $B_t:={\widetilde{Op}}_h(b_t)$ satisfies \begin{equation}\label{E: claim b_t} hD_t B_t=[Q_t,B_t]+O(h^\infty)_{{\Psi^{-\infty}}},\qquad B_0={\widetilde{Op}}_h(a). \end{equation} This would yield that $B_t-A_t=O(h^\infty)_{L^2\to L^2}$ and the argument would then be finished by setting $b=b_1$. Indeed, that $B_t-A_t=O(h^\infty)_{L^2\to L^2}$ would follow from the fact that {by \eqref{E: claim b_t}} $$ {hD_t(T_t B_tT_t^{-1})}=O(h^\infty)_{{\Psi^{-\infty}}}, $$ and hence, since $T_0=\operatorname{Id}$ and $B_0={\widetilde{Op}}_h(a)$, we have $ {T_tB_tT_t^{-1}}-{\widetilde{Op}}_h(a)=O(h^\infty)_{\Psi^{-\infty}}. $ Combining this with the fact that both $T_t$ and $T_t^{-1}$ are bounded on $H^k_h$ completes the proof. To find $b_t$ as in \eqref{E: claim b_t}, note that since $\kappa_t$ preserves $\Gamma_0$ and $L_0$, $\partial_t \kappa_t=H_{q_t}$, and $H_{q_t}$ is tangent to $L_0$ on $\Gamma_0$. Therefore, $\partial_{\eta'}q_t=0$ on $y'=0$ {and so there exists $r_t(y, \eta)$ such that $\partial_{\eta'}q_t(y, \eta)=r_t(y, \eta)y'$}. Hence, by Lemma~\ref{l:commutator} for any $b\in \widetilde{S^k_{\Gamma_0,L_0,\rho}}$ $$ [Q_t,{\widetilde{Op}}_h(b)]= -ih{{\widetilde{Op}}_h(f) \!+O(h^\infty)_{{\Psi^{-\infty}}}, \;\;\;f\!=\! H_{q_t}b+\sum_{j=1}^r(r_t\lambda)_j(\partial_\lambda b)_j +\!O(h^{1-\rho})_{\widetilde{S^{k-2}_{\Gamma_0,L_0,\rho}}}}\!. $$ Then, letting $b_t^0:=a\circ K_{\kappa_t} {\in \widetilde{S^k_{\Gamma_0,L_0,\rho}}}$ and $B_t^0={\widetilde{Op}}_h(b_t^0)$ yields $$ hD_t{B_t^0}={-ih{\widetilde{Op}}_h\big(H_{q_t}b_t^0+ (r_t \mu)\cdot\partial_\mu b_t^0 \big)}=[Q_t,B_t^0]+h^{2-\rho}{\widetilde{Op}}_h(e^0_t) $$ where $e^0_t\in \widetilde{S^{k-2}_{\Gamma_0,L_0,\rho}}$. 
{This follows from the fact that if we set $\mu(y)=y' h^{-\rho}$, then $$\partial_t (b_t^0(y, \eta, \mu(y)))=H_{q_t}b_t^0(y, \eta, \mu(y))+ \partial_\mu b_t^0(y, \eta, \mu(y))H_{q_t}(\mu(y))$$ and $H_{q_t}\mu(y)= r_t(y, \eta)\mu(y)$. } Iterating this procedure and solving away successive errors finishes the proof of Proposition~\ref{p:invariance}. If $a\in \widetilde{S^{k}_{\Gamma_0,\rho}}$, then we need only use that $\partial_{\xi'}q_t=r_tx'$ and we obtain the remaining results. \end{proof} Our next lemma follows~\cite[Lemma 4.1]{SjZw:99} and gives a characterization of our second microlocal calculus in terms of the action of an operator. In what follows, given {operators $A$ and $B$, we define the operator $\operatorname{ad}_A$ by $\operatorname{ad}_A B =[A,B].$} \begin{lemma}[Beals' criterion] \label{l:beal} Let $A_h:\mc{S}(\mathbb{R}^n)\to \mc{S}'(\mathbb{R}^n)$ {and $k\in \mathbb Z$}. Then, $A_h={\widetilde{Op}}_h(a)$ for some $a\in \widetilde{S^{k}_{\Gamma_0,L_0,\rho}}$ if and only if {for any $\alpha, \beta \in \mathbb N^n$ there exists $C>0$ with} \begin{equation*} \|\operatorname{ad}_{h^{-\rho}x}^\alpha \operatorname{ad}_{hD_{x}}^{\beta}A_hu \|_{|\beta|-\min(k,0)} \leq Ch^{(1-\rho)(|\alpha|+|\beta|)}\|u\|_{\max(k,0)} \end{equation*} where $ \|u\|_{r}:=\|u\|_{L^2}+\|h^{-\rho r}|x'|^ru\|_{L^2}, $ {for $r\geq 0$}. Similarly, $A_h={\widetilde{Op}}_h(a)$ for some $a\in \widetilde{S^k_{\Gamma_0,\rho}}$ if and only if \begin{equation*} \|\!\operatorname{ad}_{h^{-\rho}x'}^{\alpha'} \operatorname{ad}_{x''}^{\alpha''} \operatorname{ad}_{hD_{x'}}^{\beta'} \operatorname{ad}_{hD_{x''}}^{\beta''}\!\!A_hu \|_{|\beta'|-\min(k,0)} \!\leq \!Ch^{(1-\rho)(|\alpha'|+|\beta'|)+|\alpha''|+|\beta''|}\|u\|_{\max(k,0)}. \end{equation*} \end{lemma} \begin{proof} The fact that $A_h={\widetilde{Op}}_h(a)$ for some $a\in \widetilde{S^k_{\Gamma_0,L_0,\rho}}$ implies the estimates above follows directly from the model calculus. 
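Concretely, this direction rests on the exact commutator identities, which follow by a direct computation from the definition of ${\widetilde{Op}}_h$ (integration by parts in $\xi$, respectively differentiation under the integral): $$ \operatorname{ad}_{x_j}{\widetilde{Op}}_h(a)={\widetilde{Op}}_h\big(ih\,\partial_{\xi_j}a\big),\qquad \operatorname{ad}_{hD_{x_j}}{\widetilde{Op}}_h(a)={\widetilde{Op}}_h\big(-ih\,\partial_{x_j}\tilde{a}-ih^{1-\rho}\,\partial_{\lambda_j}\tilde{a}\big), $$ with the $\partial_{\lambda_j}$ term absent when $j>r$. Thus each $\operatorname{ad}_{h^{-\rho}x_j}$ produces a factor $h^{1-\rho}$ without changing the order in $\langle\lambda\rangle$, while each $\operatorname{ad}_{hD_{x_j}}$ produces a factor $h^{1-\rho}$ at the cost of one power of $\langle\lambda\rangle$, which is accounted for by the weight $\|\cdot\|_{|\beta|-\min(k,0)}$.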
Let $U_h$ be the unitary (on $L^2$) operator, $ U_hu(x)=h^{\frac{n}{2}}u(hx), $ and note that $$ \|U_h^{-1} u\|_{r}=\|u\|_{L^2}+\|h^{(1-\rho)r}|x'|^r u\|_{L^2}. $$ Then, consider $ \tilde{A}_h:=U_h A_h U_h^{-1}. $ For fixed $h$, we can use Beals' criterion (see e.g.~\cite[Theorem 8.3]{EZB}) to see that there is $a_h$ such that $ \tilde{A}_h=a_h(x,D). $ Define $a$ such that $a(hx,\xi;h)=a_h(x,\xi)$ and hence, $ A_h=Op_h(a). $ Note that for $\phi,\psi\in \mc{S}(\mathbb{R}^n)$, \begin{equation} \label{e:clownfish} \langle \tilde{A}_h\psi, \phi\rangle =\frac{1}{(2\pi)^n} \iint e^{i\langle x,\xi\rangle }a_h(x,\xi)\hat{\psi}(\xi)\overline{\phi(x)}dxd\xi, \end{equation} where $\hat{\psi}(\xi)=(\mc{F}\psi)(\xi)=\int e^{-i\langle y,\xi\rangle }\psi(y)dy$. Next, define $$B_h:=U_h\,{\operatorname{ad}_{h^{-\rho}x}^\alpha (\operatorname{ad} _{hD_x}^\beta (A_h))U_h^{-1}}.$$ {Since $D_x U_h= U_h hD_x$ and $U_h^{-1}D_x=hD_x U_h^{-1}$, we have} \begin{align*} B_h=\operatorname{ad}_{h^{1-\rho}x}^\alpha \operatorname{ad} _{D_x}^\beta \tilde{A}_h =(-i)^{|\alpha|+|\beta|}h^{(1-\rho) |\alpha|}b_h(x,D), \end{align*} where $ b_h(x,\xi)=(-\partial_{\xi})^\alpha \partial_x^\beta a_h(x,\xi). $ {Our goal is then to understand the behavior of $b_h(x,\xi)$ in terms of $h$ and $\langle h^{1-\rho} x' \rangle$.} Let $\tau_{x_0}$ and $\hat{\tau}_{\xi_0}$ be the physical and frequency shift operators $$ \tau_{x_0}u(x)=u(x-x_0),\qquad \hat{\tau}_{\xi_0}u(x)=e^{i\langle x,\xi_0\rangle }u(x), $$ with $\mc{F}\hat{\tau}_{\xi_0}=\tau_{\xi_0}\mc{F}$ and $ \mc{F}\tau_{x_0}=\hat{\tau}_{-x_0}\mc{F}.$ In addition, write $\|u\|_{(-r)}:=\|\langle h^{1-\rho}x'\rangle ^{-r}u\|_{L^2}$ for the dual norm to $\|u\|_{(r)}:=\|U_h^{-1}u \|_{r}$. Assume that $k\geq 0$. 
Then, the definition of $B_h$ combined with the assumptions yields \begin{equation} \label{e:aardvark} |\langle B\tau_{x_0}\hat{\tau}_{\xi_0}\psi,\tau_{y_0}\hat\tau_{\eta_0}\phi\rangle| \leq h^{(1-\rho)(|\alpha|+|\beta|)} \|\tau_{x_0}\hat\tau_{\xi_0}\psi\|_{(k)}\|\tau_{y_0}\hat{\tau}_{\eta_0}\phi\|_{(-|\beta|)}. \end{equation} In addition, note that for fixed $\psi,\phi\in \mc{S}$, $$ \|\tau_{x_0}\hat{\tau}_{\xi_0}\psi\|_{(k)}\sim \langle h^{1-\rho}(x_0)'\rangle^k,\qquad \|\tau_{y_0}\hat{\tau}_{\eta_0}\phi\|_{(-|\beta|)}\sim \langle h^{1-\rho}(y_0)'\rangle^{-|\beta|}. $$ Therefore, \eqref{e:aardvark} leads to \begin{equation} \label{e:aardvarkpro} |\langle B\tau_{x_0}\hat{\tau}_{\xi_0}\psi,\tau_{y_0}\hat\tau_{\eta_0}\phi\rangle|\leq C h^{(1-\rho)(|\alpha|+|\beta|)}\langle h^{1-\rho}(x_0)'\rangle^{k}\langle h^{1-\rho}(y_0)'\rangle^{-|\beta|}. \end{equation} {On the other hand,} we have by~\eqref{e:clownfish} that \begin{align} |\langle B\tau_{x_0}\hat{\tau}_{\xi_0}\psi,\tau_{y_0}\hat\tau_{\eta_0}\phi\rangle| &=\frac{h^{(1-\rho)|\alpha|}}{(2\pi)^n}\Big| \iint e^{i\langle x,\xi\rangle} b_h(x,\xi)\hat{\psi}(\xi-\xi_0)e^{-i\langle x_0,\xi-\xi_0\rangle -i\langle \eta_0,x-y_0\rangle }\bar{\phi}(x-y_0)dxd\xi\Big| \notag\\ &=h^{(1-\rho)|\alpha|}|\mc{F}((\tau_{y_0,\xi_0}\chi) b_h)(\eta_0-\xi_0,x_0-y_0)|, \label{e:aardvarkproLHS} \end{align} where $\chi(x,\xi)=e^{i\langle x,\xi\rangle }\hat{\psi}(\xi)\bar{\phi}(x).$ {Combining \eqref{e:aardvarkproLHS} with \eqref{e:aardvarkpro}} we then have $$ |\mc{F}((\tau_{y_0,\xi_0}\chi) \partial_\xi^\alpha\partial_x^\beta a_h)(\eta_0-\xi_0,x_0-y_0)|\leq C h^{(1-\rho)|\beta|}\langle h^{1-\rho}(x_0)'\rangle^{k}\langle h^{1-\rho}(y_0)'\rangle^{-|\beta|}. $$ Next, note that $\chi$ can be replaced by any fixed function in $C_c^\infty$ by taking $\psi,\phi$ with $\hat{\psi}(\xi)\phi(x)\neq 0$ on $\text{\ensuremath{\supp}} \chi$. 
Putting $\zeta=\eta_0-\xi_0$ and $z=x_0-y_0$, we obtain {that for every $\tilde \alpha, \tilde \beta \in \mathbb N^n$} $$ |\mc{F}(\partial_\xi^{\tilde{\alpha}}\partial_x^{\tilde{\beta}}(\tau_{y_0,\xi_0}\chi)\partial_\xi^\alpha \partial_x^\beta a_h)(\zeta,z)|\leq C h^{(1-\rho)|\beta|}\langle h^{1-\rho}(x_0)'\rangle^{k}\langle h^{1-\rho}(x_0-z)'\rangle^{-|\beta|}. $$ Hence, $$ |z^{\tilde\alpha}\zeta^{\tilde{\beta}}\mc{F}((\tau_{y_0,\xi_0}\chi)\partial_{\xi}^\alpha\partial_x^\beta a_h)(\zeta,z)|\leq C h^{(1-\rho)|\beta|}\langle h^{1-\rho}(x_0)'\rangle^{k}\langle h^{1-\rho}(x_0-z)'\rangle^{-|\beta|}. $$ {In particular, for every $N>0$} $$ |\mc{F}((\tau_{y_0,\xi_0}\chi)\partial_{\xi}^\alpha\partial_x^\beta a_h)(\zeta,z)|\leq C h^{(1-\rho)|\beta|}\langle h^{1-\rho}(x_0)'\rangle^{k-|\beta|}\langle \zeta\rangle^{-N}\langle z\rangle^{-N}, $$ and, as a consequence, we obtain $$ \partial_\xi^\alpha\partial_x^\beta a_h(x,\xi)=\partial_\xi^\alpha\partial_x^\beta(a(hx,\xi))=O(h^{(1-\rho)|\beta|}\langle h^{1-\rho}x'\rangle ^{k-|\beta|}). $$ This gives the first claim of the lemma for $k\geq 0$. For $k\leq 0$, we consider $\langle h^{-\rho}x'\rangle^{-k}A$ and use the composition formulae. A nearly identical argument yields the second claim. \end{proof} \subsection{Definition of the second microlocal class} With Proposition~\ref{p:invariance} in place, { we are now in a position to define the class of operators with symbols in $S^{k}_{\Gamma,L,\rho}$. } \begin{definition} Let $\Gamma\subset U\subset T^*M$ be a co-isotropic submanifold, {$U$ an open set}, and $L$ a Lagrangian foliation on $\Gamma$. A \emph{chart for $(\Gamma, L)$} is a symplectomorphism $$ \kappa:U_0\to V,\qquad U_0\subset U,\qquad V\subset T^*\mathbb{R}^n, $$ such that $\kappa(U_0\cap \Gamma)\subset V\cap \Gamma_0$ and ${\kappa_{*, q} L_q=(L_0)_{\kappa(q)}}$ for $q\in \Gamma\cap U_0$. \end{definition} We now define the pseudodifferential operators associated to $(\Gamma,L)$.
\begin{definition} Let $M$ be a smooth, compact manifold and $U\subset {T^*M}$ open, $\Gamma\subset U$ a co-isotropic submanifold, $L$ a Lagrangian foliation on $\Gamma$ and $\rho\in [0,1)$. We say that $ A:\mc{D}'(M)\to C_c^\infty(M) $ is a \emph{semiclassical pseudodifferential operator with symbol class $S^{k}_{\Gamma,L,\rho}(U)$} (and write $A\in \Psi^k_{\Gamma,L,\rho}(U)$) {if there are charts $\{\kappa_\ell\}_{\ell=1}^N$ for $(\Gamma, L)$ and symbols $\{a_\ell\}_{\ell=1}^N \subset \widetilde{S_{\Gamma,L,\rho}^k}(U)$ such that $A$ can be written in the form} \begin{equation} \label{e:standardRep} A=\sum_{\ell=1}^N T_\ell' \,{\widetilde{Op}}_h(a_\ell)\,T_\ell+O(h^\infty)_{\mc{D'}\to C^\infty} \end{equation} {where $T_\ell$ and $T_\ell'$ are FIOs quantizing $\kappa_\ell$ and $\kappa_\ell^{-1}$ for $\ell=1, \dots, N$.} We say that $A$ is a \emph{semiclassical pseudodifferential operator with symbol class $S^{k}_{\Gamma,\rho}(U)$}, {and write $A\in \Psi^k_{\Gamma,\rho}(U)$}, if {there are symbols $\{a_\ell\}_{\ell=1}^N \subset \widetilde{S_{\Gamma,\rho}^k}(U)$ such that $A$ can be written in the form \eqref{e:standardRep}}. \end{definition} \begin{lemma} \label{l:chartCompos} Suppose that $\kappa:U\to T^*\mathbb{R}^n$ is a chart for $(\Gamma, L)$, $T$ quantizes $\kappa$, and $T'$ quantizes $\kappa^{-1}$. {If $A\in \Psi^k_{\Gamma,L,\rho}(U)$, then} there is $a\in \widetilde{S^k_{\Gamma,L,\rho}}(U)$, with $\text{\ensuremath{\supp}} a(\cdot,\cdot,\lambda)\subset \kappa(U)$, such that $ TAT'={\widetilde{Op}}_h(a)+O(h^\infty)_{\mc{D}'\to C^\infty}. $ Moreover, if $A$ is given by~\eqref{e:standardRep}, then $$ a\circ K_\kappa =\sigma(T'T)\sum_{\ell=1}^N\sigma(T_\ell'T_\ell)\,(a_\ell\circ K_{\kappa_\ell})+O(h^{1-\rho})_{\widetilde{S^{k-1}_{\Gamma,L,\rho}}}. $$ \end{lemma} \begin{proof} Note that we can write $ T A T'=\sum_{\ell=1}^N T T_\ell' {\widetilde{Op}}_h(a_\ell)T_\ell{T'} +O(h^\infty)_{\mc{D'}\to C^\infty}. 
$ Next, note that $T T_\ell'$ quantizes $\kappa\circ \kappa_\ell^{-1}$ and that $T_\ell T'$ quantizes $\kappa_\ell \circ \kappa^{-1}$. Letting $F_\ell$ be a microlocally unitary FIO quantizing $\kappa_\ell \circ\kappa^{-1}$, ${F_\ell}$ satisfies the hypotheses of {Proposition~\ref{p:invariance}} and we can write $$ T_\ell T'= C\sub{\!L}{F_\ell},\qquad {T}T_\ell' = {F_\ell}^{-1}C\sub{\!R} $$ with $C\sub{\!L},C\sub{\!R}\in \Psi(M)$ satisfying $ \sigma(C\sub{\!R}C\sub{\!L})=\big(\sigma(T_\ell' T_\ell)\circ \kappa_\ell^{-1}\big) \big(\sigma(T'T)\circ \kappa_\ell^{-1}\big). $ Therefore, $$ \begin{gathered}T T_\ell' {\widetilde{Op}}_h(a_\ell)T_\ell T'= {F_\ell}^{-1}C\sub{\!R} {\widetilde{Op}}_h(a_\ell)C\sub{\!L} {F_\ell}= {\widetilde{Op}}_h({b_\ell})+O(h^\infty)_{\mc{D'}\to C^\infty},\\ {b_\ell}=\big(\sigma(C\sub{\!R}C\sub{\!L})\circ \kappa_\ell \circ\kappa^{-1}\big) \big(a_\ell \circ K_{\kappa_\ell \circ\kappa^{-1}}\big)+O(h^{1-\rho})_{\widetilde{S^{k-1}_{\Gamma,L,\rho}}}. \end{gathered} $$ The lemma follows. \end{proof} \begin{lemma}{Let $\Gamma\subset U\subset T^*M$ be a co-isotropic submanifold, {$U$ an open set}, and $L$ a Lagrangian foliation on $\Gamma$.} There is a principal symbol map $${\sigma\sub{\Gamma, L}}:\Psi^k_{\Gamma,L,\rho}{(U)}\to S^k_{\Gamma,L,\rho}{(U)}/h^{1-\rho}S^{k-1}_{\Gamma,L,\rho}{(U)}$$ such that for $A\in \Psi^{k_1}_{\Gamma,L,\rho}{(U)}$, $B\in \Psi^{k_2}_{\Gamma,L,\rho}{(U)}$, \begin{equation} \label{e:symbols} {\sigma\sub{\Gamma, L}}(AB)={\sigma\sub{\Gamma, L}}(A){\sigma\sub{\Gamma, L}}(B),\quad {\sigma\sub{\Gamma, L}}([A,B])=-ih\{{\sigma\sub{\Gamma, L}}(A),{\sigma\sub{\Gamma, L}}(B)\}. \end{equation} Furthermore, {the sequence} $$ 0\;\to\; h^{1-\rho}\Psi^{k-1}_{\Gamma,L,\rho}{(U)}\;\longrightarrow\; \Psi^{k}_{\Gamma,L,\rho}{(U)}\;\overset{{\sigma\sub{\Gamma, L}}}{\longrightarrow}\; S^{k}_{\Gamma,L,\rho}{(U)}/h^{1-\rho}S^{k-1}_{\Gamma,L,\rho}{(U)}\;\to\; 0 $$ is exact.
{The same holds with $\sigma\sub{\Gamma}$, $\Psi_{\Gamma,\rho}$ and $S^k_{\Gamma, \rho}$.} \end{lemma} \begin{proof} For $A$ as in~\eqref{e:standardRep}, we define $$ {\sigma\sub{\Gamma, L}}(A)=\sum_{\ell=1}^N\sigma(T_\ell T_\ell')(\tilde{a}_\ell\circ\kappa_\ell) $$ where $\tilde{a}_\ell(x,\xi):=a_\ell(x,\xi,h^{-\rho}x')$. The fact that $\sigma\sub{\Gamma, L}$ is well defined then follows from Lemma~\ref{l:chartCompos}, and the formulae~\eqref{e:symbols} follow from Lemma~\ref{l:compose}. To see that the sequence is exact, we only need to check that if $A\in \Psi^k_{\Gamma, L,\rho}$ and $\sigma\sub{\Gamma, L}(A)=0$, then $A\in h^{1-\rho}\Psi^{k-1}_{\Gamma,L,\rho}.$ To do this, we may assume that ${\operatorname{WF_h}'}(A)\subset U_0$ for some open $U_0\subset U$ for which there is a chart $(\kappa, U_0)$ for $(\Gamma,L)$. Let $T$ {be a microlocally unitary FIO quantizing $\kappa$} and note that, since $\sigma\sub{\Gamma,L}(A)=0$, any representative of $\sigma\sub{\Gamma,L}(A)$ lies in $h^{1-\rho}S_{\Gamma,L,\rho}^{k-1}$. Then, by the first part of Lemma~\ref{l:chartCompos} we know $ TAT^{-1}= {\widetilde{Op}}_h(a)+O(h^\infty) $ for some $a\in \widetilde{S^{k}_{\Gamma,L,\rho}}$. Then, by the second part of Lemma~\ref{l:chartCompos}, since $\sigma\sub{\Gamma, L}(A)\in h^{1-\rho}S_{\Gamma,L,\rho}^{k-1}$, {$a\in h^{1-\rho}{\widetilde{S^{k-1}_{\Gamma,L,\rho}}}$} and in particular, $A\in h^{1-\rho}\Psi^{k-1}_{\Gamma,L,\rho}.$ \end{proof} Note that if $A\in \Psi^{\operatorname{comp}}(M)$, then $A\in \Psi^0_{\Gamma,L,\rho}$ and $ \sigma(A)=\sigma\sub{{\Gamma}}(A). $ Furthermore, if $A\in \Psi^k_{\Gamma,\rho}$, then $A\in \Psi^k_{\Gamma,L,\rho}$ and $ \sigma\sub{{\Gamma}}(A)=\sigma\sub{{\Gamma,L}}(A). $ \begin{lemma}Let $\Gamma\subset U\subset T^*M$ be a co-isotropic submanifold, {$U$ an open set}, and $L$ a Lagrangian foliation on $\Gamma$.
There is a non-canonical quantization procedure $$ Op_h^{\Gamma,L}:S^k_{\Gamma,L,\rho}{(U)}\to \Psi^k_{\Gamma,L,\rho}{(U)}$$ such that for all $A\in \Psi^{k}_{\Gamma,L,\rho}{(U)}$ there is $a\in S^{k}_{\Gamma, L,\rho}{(U)}$ such that $ Op_h^{\Gamma, L}(a)=A+O(h^\infty)_{\mc{D}'\to C^\infty}, $ and ${\sigma\sub{\Gamma,L}}\circ Op_h^{\Gamma,L}:S^k_{\Gamma,L,\rho}{(U)}\to S^k_{\Gamma,L,\rho}{(U)}/h^{1-\rho}S^{k-1}_{\Gamma,L,\rho}{(U)}$ is the natural projection map. \end{lemma} \begin{proof} {Let $\{(\kappa_\ell, U_\ell)\}_{\ell=1}^N$ be charts for $(\Gamma, L)$ such that $\{U_\ell\}_{\ell=1}^N$ is a locally finite cover for $U$, $T_\ell$ and $T_\ell'$ quantize respectively $\kappa_\ell$ and $\kappa_\ell^{-1}$, and $\sigma(T_\ell'T_\ell)\in C_c^\infty(U_\ell)$ is a partition of unity on $U$.} {Let $a\in S^{k}_{\Gamma, L,\rho}{(U)}$.} Then, define $a_\ell \in \widetilde{S^k_{\Gamma_0,L_0,\rho}}$ such that $ a_\ell(x,\xi,h^{-\rho}x'):= (\chi_\ell a)\circ \kappa_\ell^{-1}{(x,\xi)} $ where $\chi_\ell\equiv 1$ on $\text{\ensuremath{\supp}} \sigma(T_\ell'T_\ell)$. We then define the quantization map $$ Op_h^{\Gamma,L}(a):=\sum_{\ell=1}^NT_\ell' {\widetilde{Op}}_h(a_\ell)T_\ell. $$ The fact that ${\sigma\sub{\Gamma,L}}\circ Op_h^{\Gamma,L}$ is the natural projection follows immediately. Now, fix $A\in \Psi^{k}_{\Gamma, L,\rho}(U)$. Put $a_0={\sigma\sub{\Gamma,L}}(A)$. Then, $ A={Op_h^{\Gamma,L}}(a_0)+h^{1-\rho}A_1 $ where $A_1\in \Psi^{k-1}_{\Gamma,L,\rho}.$ We define $a_j={\sigma\sub{\Gamma,L}}(A_j)$ inductively for $j\geq 1$ by $$ h^{(j+1)(1-\rho)}A_{j+1}=A-\sum_{i=0}^{j} h^{i(1-\rho)}Op_h^{\Gamma,L}(a_i). $$ Then, letting $a\sim \sum_j h^{j(1-\rho)}a_j$, we have $ A=Op_h^{\Gamma,L}(a)+O(h^\infty)_{\mc{D}'\to C^\infty} $ as claimed. \end{proof} \begin{remark} {Note that $E:=\sum_{\ell=1}^NT_\ell T_\ell'$ is an elliptic pseudodifferential operator with symbol $1$. Therefore, there is $E'\in \Psi^0$ with $\sigma(E')=1$ such that $E' EE'=\operatorname{Id}$.
Replacing $T_\ell$ by $E'T_\ell$ and $T_\ell'$ by $T_\ell'E'$, we may (and will) ask for $\sum_{\ell=1}^N T_\ell T_\ell'=\operatorname{Id}$, and so ${Op_h^{\Gamma,L}}(1)=\operatorname{Id}$.} \end{remark} \begin{lemma} {Let $\Gamma\subset U\subset T^*M$ be a co-isotropic submanifold.} Let $A\in \Psi^k_{\Gamma,\rho}(U)$ and let $P\in \Psi^m(U)$ have symbol $p$ such that $H_p(q)\in T_q\Gamma$ for every $q\in \Gamma$. Then, $$ \frac{i}{h}[P,A]=Op_h^{\Gamma}(H_p a)+O(h^{1-\rho})_{\Psi^{k-1}_{\Gamma,\rho}}, $$ where $a(x,\xi;h)= \sigma\sub\Gamma(A)(x,\xi, h^{-\rho}x')$. \end{lemma} \begin{proof} Suppose first that {$\operatorname{WF_h}'(A) \subset U_\ell$ for some $U_\ell \subset U$ open and that $\kappa:U_\ell\to T^*\mathbb{R}^n$ is a chart for $(\Gamma, L)$; the general case follows by covering $U$ with such charts and using a partition of unity}. Therefore, there exist $ a\in \widetilde{S^k_{\Gamma,\rho}}$ {and a Fourier integral operator $T$ that is microlocally elliptic on {$U_\ell$} and quantizes $\kappa$}, such that $ A=T^{-1} {\widetilde{Op}}_h(a)T +O(h^\infty)_{\mc{D'}\to C^\infty}. $ Then, on $\operatorname{WF_h}'(A)$, $$ T[P,A]T^{-1}=[T PT^{-1},{\widetilde{Op}}_h( a)]+O(h^\infty)_{\mc{D}'\to C^\infty}. $$ Now, $ T PT^{-1}=Op_h(p\circ \kappa^{-1})+O(h)_{{\Psi^{m-1}}}. $ Hence, {a direct computation using Lemma~\ref{l:commutator} gives} $$ [TPT^{-1}, {\widetilde{Op}}_h( a)]=-ih{\widetilde{Op}}_h(c)+O(h^{2-\rho})_{\widetilde{\Psi^{k-2}_{\Gamma_0,\rho}}}, $$ with $c(x,\xi,h^{-\rho}x')=H_{p\circ \kappa^{-1}}{( a(x,\xi,h^{-\rho}x'))} {\in S^{k-1}\sub{\Gamma, \rho}(U_\ell)}.$ In particular, $$ [P,A]=-ihT^{-1}{\widetilde{Op}}_h(c)T+O(h^{2-\rho})_{\Psi^{k-2}_{\Gamma,\rho}}. $$ Therefore, $ [P,A]\in h\Psi^{k-1}_{\Gamma,\rho} $ with symbol $\sigma\sub{\Gamma}(ih^{-1}[P,A])=H_p {( a(x,\xi,h^{-\rho}x'))}$.
\end{proof} \section{An Uncertainty principle for co-isotropic localizers} \label{s:uncertainMe} The {first goal of this section is to build a family of cut-off operators $X_y$ with $y \in M$ that act as the identity on the shrinking ball $B(y, h^\rho)$ and commute with $P$ in a fixed-size neighborhood of $y$. This is the content of Section \ref{S:co-isotrop}. The second goal is to control $\|X_{y_1} X_{y_2}\|_{L^2 \to L^2}$ in terms of the distance $d(y_1, y_2)$, as this distance shrinks to $0$. We do this in Section \ref{S:uncertainty}. Finally, in Section~\ref{s:almostOrthog}, we study the consequences of these estimates for the almost orthogonality of $X_{y_i}$.} {In order to localize to the ball $B(y, h^\rho)$ in a way compatible with microlocalization we need to make sense of $$\chi_y(x)=\tilde{\chi}\big(\tfrac{1}{\varepsilon}h^{-\rho}d(x,y)\big)\qquad \tilde{\chi}\in C_c^\infty((-1,1)),$$ as an operator in some anisotropic pseudodifferential calculus. As a function, $\chi_y$ is in the symbol class $S^{-\infty}_{\Gamma_y, L_y,\rho}$, where $\Gamma_y, L_y$ are the co-isotropic submanifold and Lagrangian foliation defined as follows: } Fix $\delta>0$, to be chosen small later, and {for each $y\in M$ let} \begin{equation} \label{e:coiso} \Gamma_y:=\bigcup_{|t|<\frac{1}{2}\inj(M)} \varphi_t(\Omega_y), \qquad \Omega_y:=\big\{{ \xi\in T^*_yM:\; \;\big|1-|\xi|_g\big|<\delta}\big\}. \end{equation} In this section, we construct localizers to $\Gamma_{y}$ adapted to the Laplacian and study the incompatibility between localization to $\Gamma_{y_1}$ and $\Gamma_{y_2}$ as a function of the distance between $y_1, y_2 \in M$. {Let $y \in M$. In what follows we work with the Lagrangian foliation $L_y$ of $\Gamma_y$ given by $$ L_y=\{L_{y, \tilde q}\}_{\tilde q \in \Gamma_y}, \qquad L_{y, \tilde q}= (\varphi_t)_*(T_qT^*_yM), $$ where $\tilde q= \varphi_t(q)$ for some $|t|<\frac{1}{2}\inj(M)$ and $q \in \Omega_y$.
} {\begin{remark} In fact, it will be enough for us to show that $\chi_y(x) \tilde{\chi}( \delta^{-1}(|hD|_g-1))\in \Psi_{\Gamma_y,L_y,\rho}$ since we will be working near the characteristic variety for the Laplacian. \end{remark} } \subsection{Co-isotropic cutoffs adapted to the Laplacian}\label{S:co-isotrop} \begin{lemma}\label{l:chi_h,y} Let $y\in M$, $0<\varepsilon<\delta$, $0\leq \rho <1$, $\tilde{\chi}\in C_c^\infty((-1,1))$, and define the operator $\chi\sub{h,y}$ by \begin{equation}\label{e:chi_y} \chi\sub{h,y}u(x):=\tilde{\chi}(\tfrac{1}{\varepsilon}h^{-\rho}d(x,y))\;[Op_h(\tilde\chi(\tfrac{1}{\varepsilon}(|\xi|_g-1)))u](x). \end{equation} Then, $\chi\sub{h,y} \in \Psi_{\Gamma_{y},{L_{y}} ,\rho}^{-\infty}$. \end{lemma} \begin{proof} We will use Lemma~\ref{l:beal} to prove the claim. First, observe that we may work in a single chart for $(\Gamma_y,L_y)$ by using a partition of unity. Therefore, suppose that $B\in \Psi^0$ and $\kappa: U_0\to T^*\mathbb{R}^n$ is a chart for $(\Gamma_y,L_y)$, $V_0\Subset U_0$, and $T$ is an FIO quantizing $\kappa$ that is microlocally unitary on $V_0$. Furthermore, since $\kappa_*L_y=L_0$, we may assume that $\kappa(U_0\cap T^*_yM)\subset T^*_{0}\mathbb{R}^n$. Denote the microlocal inverse of $T$ by $T'$. Then, observe that for $A$ and $B$ with wavefront set in $V_0$ $$ {\operatorname{ad}_A (TB T')=T \operatorname{ad}_{T'AT}(B)T'} +O(h^\infty)_{\mc{D'}\to C^\infty}. $$ By a partition of unity, we will work as though $\chi\sub{h,y}$ were microsupported in $U_0$. We then {consider for all $N>0$, and $\alpha, \beta \in \mathbb N^n$,} \begin{multline} h^{-2N\rho}|x'|^{2N} \operatorname{ad}_{h^{-\rho}x}^\alpha \operatorname{ad}_{hD_{x}}^\beta {(T\chi\sub{h,y}T')} \notag\\ =h^{-2\rho N}T (T'|x'|^2T)^N \operatorname{ad}_{h^{-\rho}T' xT}^\alpha { (\operatorname{ad}_{T'hD_xT}^\beta {(\chi\sub{h,y})} )}T' +O(h^\infty)_{\mc{D}'\to C^\infty}. 
\label{E:ads} \end{multline} In order to prove the requisite estimates, we will actually view $\chi\sub{h,y}$ first as an element of the model microlocal class. In particular, {we work with $x\in M$ written in geodesic normal coordinates centered at $y$, }so that $$ \chi\sub{h,y}u(x)={\tilde{\chi}(\tfrac{1}{\varepsilon}h^{-\rho}{|x|})\;[Op_h(\tilde\chi(\tfrac{1}{\varepsilon}(|\xi|_g-1)))u](x)}. $$ Then, $\chi\sub{h,y} {={\widetilde{Op}}_h\big(\tilde \chi(\tfrac{|\lambda|}{\varepsilon})\big) \, Op_h(\tilde\chi(\tfrac{1}{\varepsilon}(|\xi|_g-1)))}$ is an element of $\widetilde{\Psi^{-\infty}_{\Gamma_0,L_0,\rho}}$ {with $r=n$}, and so we can apply Lemma~\ref{l:commutator} to compute $\operatorname{ad}_A(\chi\sub{h,y})$ for $A\in \Psi^{-\infty}(M)$. In particular, \begin{equation} \operatorname{ad}_{{T'hD_xT}}(\chi\sub{h,y})={\widetilde{Op}}_h(c)+O(h^\infty) \label{E:Ads1} \end{equation} where $c\in h^{1-\rho}\widetilde{S^{-\infty}_{\Gamma_0,L_0,\rho}}$ is supported on {$\{(x, \xi, \lambda): \, |x|\leq \varepsilon h^\rho, \, |\lambda|\leq \varepsilon\}$}. Now, suppose $c\in\widetilde{S^{-\infty}_{\Gamma_0,L_0,\rho}}$ is supported on {$\{(x, \xi, \lambda): \, |x|\leq \varepsilon h^\rho, \, |\lambda|\leq \varepsilon\}$} and $B\in \Psi^{-\infty}$ with $\sigma(B)(0,\xi)=0$. Then, again using Lemma~\ref{l:commutator}, \begin{equation} \operatorname{ad}_B ({\widetilde{Op}}_h(c)) ={\widetilde{Op}}_h(c')+O(h^\infty)\label{E:Ads2} \end{equation} where $c'\in h\widetilde{S^{-\infty}_{\Gamma_0,L_0,\rho}}$ is supported on {$\{(x, \xi, \lambda): \, |x|\leq \varepsilon h^\rho, \, |\lambda|\leq \varepsilon\}$}. Now, note that since $\kappa (T^*_yM)\subset T^*_0\mathbb{R}^n$, then for all $i=1,\dots n$, $B=T'x_iT$ has symbol $\sigma(B)=[b(x,\xi)x]_i$ for some $b \in C^\infty(T^*M; {\mathbb{M}_{n\times n}})$.
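To illustrate the gain in \eqref{E:Ads1} in the simplest possible model (this computation is included only as an illustration), replace $\chi\sub{h,y}$ by the multiplication operator $f(x)=\tilde{\chi}(\tfrac{1}{\varepsilon}h^{-\rho}|x|)$ (with $\tilde{\chi}$ chosen so that this radial function is smooth) and commute with $hD_{x_j}$:
$$
\operatorname{ad}_{hD_{x_j}}f=[hD_{x_j},f]=\tfrac{h}{i}\,\partial_{x_j}f=\tfrac{h^{1-\rho}}{i\varepsilon}\,\tfrac{x_j}{|x|}\,\tilde{\chi}'\big(\tfrac{1}{\varepsilon}h^{-\rho}|x|\big),
$$
a multiplication operator that gains a factor of $h^{1-\rho}$ and remains supported in $\{|x|\leq \varepsilon h^{\rho}\}$. Each further commutation with $hD_x$ produces the same gain, while commutations with $h^{-\rho}x$ act analogously on the frequency localizer $Op_h(\tilde\chi(\tfrac{1}{\varepsilon}(|\xi|_g-1)))$; this is the mechanism behind \eqref{E:Ads1} and \eqref{E:Ads2}.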
Therefore, \eqref{E:Ads1} and \eqref{E:Ads2} yield $$ \operatorname{ad}_{h^{-\rho}T' xT}^\alpha (\operatorname{ad}_{T'hD_xT}^\beta (\chi\sub{h,y})) =h^{(1-\rho)(|\alpha|+|\beta|)}{\widetilde{Op}}_h(c')+O(h^\infty), $$ where $c'\in \widetilde{S^{-\infty}_{\Gamma_0,L_0,\rho}}$ is supported on {$\{(x, \xi, \lambda): \, |x|\leq \varepsilon h^\rho, \, |\lambda|\leq \varepsilon\}$}. Finally, using again that $T'{x_i}T$ has symbol {$[b(x,\xi)x]_i$}, we have that \eqref{E:Ads2} gives $$ \|h^{-2N\rho}|x'|^{2N}\operatorname{ad}_{h^{-\rho}T' xT}^\alpha (\operatorname{ad}_{T'hD_xT}^\beta (\chi\sub{h,y})) \|_{L^2\to L^2}\leq C h^{(1-\rho)(|\alpha|+|\beta|)}. $$ \end{proof} We next construct a pseudodifferential cutoff $X_y\in \Psi^{-\infty}_{\Gamma_y,\rho}$, which is microlocally the identity near $S^*_yM$ and which essentially commutes with $P$ near $y$. In particular, we will have $$\chi\sub{h,y}X_y=\chi\sub{h,y}+O(h^\infty)_{\Psi^{-\infty}}.$$ When studying the values of a quasimode $u$ at points that are $h^\rho$-close to $y$, this will allow us to effectively work with $X_yu$ instead. \begin{theorem} \label{l:nice2ndCut} Let $y\in M$, $0<\varepsilon<\delta$, $0\leq \rho <1$. {Then, there exists $ X_{y}\in \Psi_{\Gamma_y,\rho}^{-\infty} \subset \Psi_{\Gamma_{y},{L_{y}} ,\rho}^{-\infty} $ with \begin{enumerate} \item if $\chi\sub{h,y}$ is defined as in \eqref{e:chi_y}, then \begin{equation} \label{e:2MicrolocalIdentity} \chi\sub{h,y}X_y=\chi\sub{h,y} + O(h^\infty)_{\Psi^{-\infty}}.
\end{equation} \item ${\operatorname{WF_h}'}([P, X_{y}]) \cap \{(x,\xi): x \in B(y, \tfrac{1}{2}\conj(M)), \;\xi \in \Omega_x\} = \emptyset.$ \end{enumerate} } \end{theorem} \begin{proof} {First, we note that we will actually prove that $X_{y}\in \Psi_{\Gamma_y,\rho}^{-\infty}$, and so the result will follow since $\Psi_{\Gamma_{y} ,\rho}^{-\infty} \subset \Psi_{\Gamma_{y},{L_{y}} ,\rho}^{-\infty}$.} Let $\mc{H}\subset T^*M$ be transverse to {the Hamiltonian vector field} $H_p$ and such that $\Omega_y\subset \mc{H}$. Next, let $\varkappa \in C_c^\infty((-2,2))$ with $\varkappa \equiv 1$ on $[-1,1]$ and define ${\varkappa_0\in C_c^\infty(\mc{H})}$ by $$ \varkappa_0={\varkappa}(h^{-\rho}d(x,y)) \varkappa(\tfrac{2}{\delta}(1-|\xi|_g)), $$ {where $\delta$ is as in the definition of $\Omega_y$.} Let $\psi\in C_c^\infty(T^*M)$ with $$\psi \equiv 1\text{ on }B(y,\tfrac{1}{2}\conj(M))\cap \{|\xi|_g<2\},\qquad \text{\ensuremath{\supp}} \psi\subset B(y,\tfrac{3}{4}\conj(M)).$$ Then, let $\chi_0$ be defined locally by $ H_p\chi_0\equiv 0$ and $\chi_0|_{\mc{H}}=\varkappa_0$, so that $\chi_0\in { S^{-\infty}_{\Gamma_y,\rho}}$. More precisely, $ \chi_0(\varphi_t(q))=\psi(\varphi_t(q))\varkappa_0(q) $ for $|t|<\conj(M)$ and $q\in \mc{H}$. Next, observe that there is {$e_0\in S^{-\infty}_{\Gamma_y,\rho}$} such that $$ -\tfrac{i}{h}[P,Op_h^{\Gamma_y}(\chi_0)]=h^{1-\rho}Op_h^{\Gamma_y}(e_0),\qquad \text{\ensuremath{\supp}} e_0 \cap B(y,\tfrac{1}{2}\conj(M)) \subset \bigcup_{|t|< \tfrac{3}{4}\conj(M)} \varphi_t(\mc{H}\cap \text{\ensuremath{\supp}} \partial \varkappa_0). $$ Suppose that {there exist $\chi_{k-1}, e_{k-1}\in{ S^{-\infty}_{\Gamma_y,\rho}}$ such that} $$ -\tfrac{i}{h}[P,Op_h^{\Gamma_y}(\chi_{k-1})]=h^{k(1-\rho)}Op_h^{\Gamma_y}(e_{k-1}),\qquad \text{\ensuremath{\supp}} e_{k-1} \cap B(y,\tfrac{1}{2}\conj(M)) \subset \hspace{-.4cm} \bigcup_{|t|< \tfrac{3}{4}\conj(M)}\hspace{-.5cm} \varphi_t(\mc{H}\cap \text{\ensuremath{\supp}} \partial \varkappa_0).
$$ Then, define $\tilde{\chi}_k\in{ S^{-\infty}_{\Gamma_y,\rho}}$ by solving locally $ H_p\tilde{\chi}_k=e_{k-1}$ and $\tilde{\chi}_{k}|_{\mc{H}}=0.$ Note that then { $$ \text{\ensuremath{\supp}} \tilde{\chi}_k \cap B(y,\tfrac{1}{2}\conj(M)) \subset \bigcup_{|t|< \tfrac{3}{4}\conj(M)} \varphi_t(\mc{H}\cap \text{\ensuremath{\supp}} \partial \varkappa_0) $$ and} $$ h^{-k(1-\rho)}\sigma\Big(\tfrac{i}{h}\Big[P,Op_h^{\Gamma_y}\big(\chi_{k-1}+h^{k(1-\rho)}\tilde{\chi}_k\big)\Big]\Big)=H_p\tilde{\chi}_k-e_{k-1}=0. $$ In particular, with $\chi_k:=\chi_{k-1}+h^{k(1-\rho)}\tilde{\chi}_k$, we obtain $ -\tfrac{i}{h}[P,Op_h^{\Gamma_y}(\chi_{k})]=h^{(k+1)(1-\rho)}Op_h^{\Gamma_y}(e_{k}) $ with $e_k\in S^{-\infty}_{\Gamma_y,\rho}$ and { $$ \text{\ensuremath{\supp}} e_{k} \cap B(y,\tfrac{1}{2}\conj(M)) \subset \bigcup_{|t|< \tfrac{3}{4}\conj(M)} \varphi_t(\mc{H}\cap \text{\ensuremath{\supp}} \partial \varkappa_0). $$} Setting $$ {X_y=Op_h^{\Gamma_y}(\chi_\infty)}, \qquad \chi_\infty\sim \chi_0 +\sum_k \big(\chi_{k+1}-\chi_k\big), $$ we have that $X_y$ satisfies the second claim and, moreover, $\chi_\infty\equiv 1$ on $$ {\bigcup_{|t|\leq \frac{1}{4}\conj(M)}\varphi_t\Big(\mc{H}\cap \{d(x,y)<h^\rho\}\cap \big\{\big||\xi|_g-1\big|<\tfrac{\delta}{2}\big\}\Big).} $$ {To see the first claim, observe that for $\varepsilon>0$ small enough, $$ B(y,\varepsilon h^\rho)\cap \big\{\big||\xi|_g-1\big|<\delta\}\subset {\bigcup_{|t|\leq \frac{1}{4}\conj(M)}\varphi_t\Big(\mc{H}\cap \{d(x,y)< h^\rho\}\cap \big\{\big||\xi|_g-1\big|<\tfrac{\delta}{2}\big\}\Big).} $$ and hence $ \chi\sub{h,y}X_y=\chi\sub{h,y}Op_{h}^{\Gamma,L}(1)+O(h^\infty)_{\Psi^{-\infty}}=\chi\sub{h,y}+O(h^\infty)_{\Psi^{-\infty}}.
$} \end{proof} \subsection{An uncertainty principle for co-isotropic localizers}\label{S:uncertainty} Let $\Gamma(t)\subset T^*\mathbb{R}^n$, $t\in {(-\varepsilon_0,\varepsilon_0)}$ be a smooth family of co-isotropic submanifolds of dimension $n+1$ with $$ {\Gamma(0)}=\{(0,x_n,\xi',\xi_n):\;{ x_n}\in \mathbb{R},\,\xi'\in \mathbb{R}^{n-1}, \,\xi_n \in {\mathbb R}\}. $$ Assume that $\Gamma(t)$ is defined by $\{q_i(t)\}_{i=1}^{n-1}$ with $q_i(0)=x_i$. Moreover, assume that there are $c,C>0$ such that for $i=1,\dots,n-1$, \begin{equation} \label{e:uncertainAssume1} |\{q_i(t),x_i\}|\geq c|t|\qquad\text{ on }\;\Gamma(0)\cap \Gamma(t),\qquad |t|>0, \end{equation} and for all $i,j=1,\dots, n-1$, {and all $t\in {(-\varepsilon_0,\varepsilon_0)}$,} \begin{equation} \label{e:uncertainAssume2} \{q_i(t),q_j(t)\}=0,\qquad \{q_i(t),\xi_n\}=0,\qquad |\{q_i(t),x_j\}|\leq Ct^2,\; i\neq j. \end{equation} The main goal of this section is to prove the following proposition. \begin{proposition} \label{p:uncertainty} Let $0<\rho<1$ {and $\{\Gamma(t):\; t\in (-\varepsilon_0,\varepsilon_0)\}$ be as above.} Suppose that $X(t)\in \Psi_{\Gamma(t),\rho}^{-\infty}$ {for all $t\in (-\varepsilon_0,\varepsilon_0)$}, and that there is $\varepsilon>0$ such that $h^{\rho-\varepsilon}\leq {|t|<\varepsilon_0}$. Then, $$ \|X(0)X(t)\|_{L^2\to L^2}\leq Ch^{\frac{n-1}{2}(2\rho-1)}t^{\frac{1-n}{2}}. $$ \end{proposition} \begin{proof} We begin by finding a convenient chart for $\Gamma(t)$. By Darboux's theorem, there is a smooth family of symplectomorphisms $\kappa_t: V_1 \to V_2$ such that for $j=1,\dots,n-1$, {\begin{equation}\label{E: kappa} \kappa_t^*(q_j(t))=y_j,\qquad \kappa_t^*\xi_n=\eta_n, \end{equation} where $V_1, V_2$ are simply connected neighborhoods of $0$}. {Note that $\kappa_t(\Gamma(0))=\Gamma(t)$ with this setup, so $\kappa_t^{-1}$ is a chart for $\Gamma(t)$}.
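Before carrying out the estimates, it may help to keep in mind a model family satisfying \eqref{e:uncertainAssume1} and \eqref{e:uncertainAssume2}; this example is included only as an illustration. Take
$$
q_i(t):=x_i\cos t-\xi_i\sin t,\qquad i=1,\dots,n-1,
$$
i.e., rotate $\Gamma(0)$ in each $(x_i,\xi_i)$ plane. Then $|\{q_i(t),x_i\}|=|\sin t|\geq c|t|$ for $|t|<\varepsilon_0$, while $\{q_i(t),q_j(t)\}=\{q_i(t),\xi_n\}=0$ and $\{q_i(t),x_j\}=0$ for $i\neq j$. In this case one may take $\kappa_t$ in \eqref{E: kappa} to be the rotation by angle $-t$ in each $(x_i,\xi_i)$ plane, $i=1,\dots,n-1$, and the identity in the $(x_n,\xi_n)$ variables.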
By \cite[Theorem 11.4]{EZB}, the symplectomorphism $\kappa_t$ can be extended to a family of symplectomorphisms on $T^*\mathbb{R}^n$ that is the identity outside a compact set, and such that there is a smooth family of symbols ${p_t}\in C^\infty(T^*\mathbb{R}^{n})$ satisfying $ \partial_t\kappa_t=(\kappa_t)_*H_{p_t}. $ Now, let $U(t):L^2\to L^2$ solve $$ (hD_t+Op_h({p_t}))U(t)=0,\qquad U(0)=\operatorname{Id}. $$ Then, {$U(t)$} is microlocally unitary from $V_1$ to $V_2$ and quantizes $\kappa_t$. Moreover, $$ U(t)=\frac{1}{(2\pi h)^{n}}\int_{{\mathbb R}^n} e^{\frac{i}{h}(\phi(t,x,\eta)-\langle y,\eta\rangle)}b(t,x,\eta;h)d\eta $$ {where $b(t, \cdot)\in S^{\operatorname{comp}}(T^*{\mathbb R}^n)$ and the phase function $\phi(t,\cdot)\!\in C^\infty\!(T^*{\mathbb R}^n;{\mathbb R})$ satisfies} $$ \partial_t\phi+{p_t}(x,\partial_{x}\phi)=0,\qquad \phi(0,x,\eta)=\langle x,\eta\rangle, $$ for all $t\in (-\varepsilon_0,\varepsilon_0)$. Since $U(t)$ is microlocally unitary, it is enough to estimate the operator $${A(t)}:={X(0)}X(t)U(t).$$ First, note that since $X(t)\in \Psi_{\Gamma(t),\rho}^{-\infty}$, {and $U(t)$ quantizes $\kappa_t$, there exist} $a(t)\in \widetilde{S^{-\infty}_{\Gamma_0,\rho}}$ {with $t\in {(-\varepsilon_0,\varepsilon_0)}$} such that {$X(t)=U(t){\widetilde{Op}}_h(a(t))[U(t)]^* + O(h^\infty)_{L^2 \to L^2}$} and so $$ A(t)={\widetilde{Op}}_h(a(0)) U(t){\widetilde{Op}}_h(a(t))+{O(h^\infty)_{L^2\to L^2}}. $$ Fix $N>n-1$ and let $\chi=\chi(\lambda)\in \widetilde{S^{-N}_{\Gamma_0,\rho}}$ be such that $|\chi(\lambda)|\geq c\langle \lambda\rangle^{-N}$. 
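The elliptic construction used next can be sketched as follows (the notation $e\sub{L}^{0}$ is ours, and we record only the first step). Since $a(t)$ decays rapidly in $\lambda$ while $|\chi(\lambda)|\geq c\langle \lambda\rangle^{-N}$, the quotient $e\sub{L}^{0}(t):=a(t)/\chi$ again lies in $\widetilde{S^{-\infty}_{\Gamma_0,\rho}}$, and the composition formulae give
$$
{\widetilde{Op}}_h(e\sub{L}^{0}(t))\,{\widetilde{Op}}_h(\chi)={\widetilde{Op}}_h(a(t))+h^{1-\rho}\,{\widetilde{Op}}_h(r_1(t)),\qquad r_1(t)\in \widetilde{S^{-\infty}_{\Gamma_0,\rho}}.
$$
Dividing $r_1(t)$ by $\chi$ and iterating, an asymptotic sum $e\sub{L}(t)\sim\sum_j h^{j(1-\rho)}e\sub{L}^{j}(t)$ removes the error up to $O(h^\infty)$; the right parametrix $e\sub{R}(t)$ is obtained in the same way.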
Now, since $a(t)\in \widetilde{S^{-\infty}_{\Gamma_0,\rho}}$, by the elliptic parametrix construction there are $e\sub{L}(t),e\sub{R}(t)\in \widetilde{S^{-\infty}_{\Gamma_0,\rho}}$ such that \begin{gather*} {\widetilde{Op}}_h(e\sub{L}(t)){\widetilde{Op}}_h(\chi)={\widetilde{Op}}_h(a(t))+O(h^\infty)_{L^2\to L^2},\quad {\widetilde{Op}}_h(\chi){\widetilde{Op}}_h(e\sub{R}(t))={\widetilde{Op}}_h(a(t))+O(h^\infty)_{L^2\to L^2}, \end{gather*} {for all $t\in {(-\varepsilon_0,\varepsilon_0)}$}. Note that we are implicitly using the fact that $a(t)$ is compactly supported in $(x,\xi)$ to handle the fact that $\chi$ is not compactly supported in $(x,\xi)$. Thus, $$ A(t)={\widetilde{Op}}_h(e\sub{L}(0)){\widetilde{Op}}_h(\chi)U(t){\widetilde{Op}}_h(\chi){\widetilde{Op}}_h(e\sub{R}(t))+O(h^\infty)_{L^2\to L^2}. $$ Since ${\widetilde{Op}}_h(e\sub{L}(t))$ and ${\widetilde{Op}}_h(e\sub{R}(t))$ are $L^2$ bounded uniformly in $t\in (-\varepsilon_0,\varepsilon_0)$, it suffices to estimate $$ {\tilde{A}(t)}:={\widetilde{Op}}_h(\chi)U(t){\widetilde{Op}}_h(\chi). $$ In fact, we estimate $B(t):=\tilde{A}(t)(\tilde{A}(t))^*$ by considering its kernel: \begin{align*} {B(t;x,y)}&=\int U(t)(x,w)U(t)^*(w,y)\chi(h^{-\rho}x')\chi(h^{-\rho}y')\chi(h^{-\rho}w')^2 dw\\ &\hspace{-0.3cm}=\!\!\frac{1}{(2\pi h)^{2n}}\!\!\int e^{\frac{i}{h}\Phi(t,x,w,y,\eta,\xi)}b(t,x,\eta)\bar{b}(t,y,\xi)\chi(h^{-\rho}x')\chi(h^{-\rho}y'){\chi(h^{-\rho}w')^2}dwd\eta d\xi, \end{align*} with $ \Phi(t,x,w,y,\eta,\xi)=\phi({t},x,\eta)-\phi({t},y,\xi)+\langle w,\xi-\eta\rangle. $ First, performing stationary phase in $(w_n,\eta_n)$ yields { \begin{align*} B(t;x,y)&=\frac{1}{(2\pi h)^{2n-1}}\int F(t,x,w', \xi_n) \overline{F(t,y,w', \xi_n)} dw'd\xi_n, \end{align*} $$ F(t,x,w', \xi_n):=\int e^{\frac{i}{h} (\phi({t}, x, \eta', \xi_n)-\langle w', \eta'\rangle)} b_{{1}}(t,x,\eta', \xi_n)\chi(h^{-\rho}x')\chi(h^{-\rho}w')^2 d\eta' $$ for some $b_1\in S^{\operatorname{comp}}(T^*\mathbb{R}^n)$.
} Next, note that {since $\phi(0,x,\eta)=\langle x,\eta\rangle$,} $$ \phi({t},x,\eta)-\langle x,\eta\rangle =t\tilde{\phi}(t,x,\eta) $$ with {$\tilde \phi$ such that for every multi-index $\alpha$ there exists $C_\alpha>0$ with} $ |\partial^\alpha_{t,x,\eta} \tilde{\phi}|\leq C_\alpha. $ Next, we claim that {there exists $C>0$ such that \begin{equation}\label{E:claim on derivative} \|(\partial^2_{\eta'} \tilde{\phi}(t, x, \eta))^{-1}\|\leq C \qquad \;\;\text{if}\quad (x,\eta)\in \Gamma(0),\; \,\partial_{\eta'}\phi(t, x, \eta)=0. \end{equation}} We postpone the proof of \eqref{E:claim on derivative} and proceed to finish the proof of the proposition. To continue the proof, note that modulo an $O(h^{N\varepsilon})$ error, we may assume that the integrand of $B(t;x,y)$ is supported in $\{(x,y,w'): |x'|\leq h^{\rho-\varepsilon},\,|y'|\leq h^{\rho-\varepsilon},\,|w'|\leq h^{\rho-\varepsilon}\},$ {and $h^{\rho-\varepsilon}\leq |t|$.} Therefore, the bound in \eqref{E:claim on derivative} continues to hold on the support of the integrand. By~\eqref{E:claim on derivative} and \begin{equation} \label{e:derVanish} \partial^2_{\eta'}{\big(\phi(t,x,\eta)-t\tilde{\phi}(t,x,\eta) \big)}=0, \end{equation} {the map $\eta' \mapsto \phi(t,x,\eta', \xi_n)-\langle w',\eta'\rangle$ has a unique critical point $\eta'_c(t,x,w',\xi_n)$ in an $O(1)$ neighborhood.} In particular, {$\eta'_c$ is} the unique solution to $ \partial_{\eta'}\phi(t,x,\eta'_c,\xi_n)-w'=0.
$ Next, again using~\eqref{e:derVanish}, {by applying the method of stationary phase in $\eta'$ to $F$, with small parameter $h/t$}, we obtain \[ B(t; x,y)=\frac{1}{(2\pi h)^{n}t^{n-1}}\int e^{\frac{i}{h}\Phi_1(t,x,w',y,\xi_n)}{B_1(t;x,y,w', \eta_c', \xi)}dw' d\xi_n, \] \begin{gather*} \Phi_1(t,x,w',y,\xi_n):=\Psi(t,x,w',\xi_n)-\Psi(t,y,w',\xi_n),\\ \Psi(t,x,w',\xi_n):=\phi(t,x,\eta'_c(t,x,w',\xi_n),\xi_n)-\langle w',\eta'_c(t,x,w',\xi_n)\rangle,\\ B_1(t;x,y,w', \eta', \xi):=b_{2}(t,x,\eta', \xi_n)\bar{b}(t,y,\xi',\xi_n)\chi(h^{-\rho}x')\chi(h^{-\rho}y'){\chi(h^{-\rho}w')^2}. \end{gather*} for some $b_2\in S^{\operatorname{comp}}(T^*\mathbb{R}^n)$. Next, observe that $ \partial_{x_n}\partial_{\xi_n}\Psi(t,x,w',\xi_n)= 1+O(t), $ and therefore, {there exist $c>0$ and a function $g=g(x',y,w',\xi_n)$} so that $ |\partial_{\xi_n}\Phi_1|\geq c |x_n-g|. $ In particular, integration by parts in $\xi_n$ shows that for any $N>0$ there is $C_N>0$ such that $$ |B(t; x,y)|\leq C_N{h^{-n}}t^{1-n}h^{\rho(n-1)}\chi(h^{-\rho}y')\chi(h^{-\rho}x')\frac{ h^{2N}+h^N|x_n-g|}{(h^2+|x_n-g|^2)^N}. $$ Applying Schur's lemma together with the fact that {there exists $C>0$ such that} {for all $t$} $$ \sup_{x}\int |{B(t;x,y)}|dy+\sup_y\int |{B(t;x,y)}|dx\leq Ch^{(2\rho-1)(n-1)}t^{1-n}, $$ yields {that} $ \|{B(t)}\|_{L^2\to L^2}\leq Ch^{(2\rho-1)(n-1)}t^{1-n}, $ for all $t\in (-\varepsilon_0,\varepsilon_0)$, and hence $ \|X(0)X(t)\|_{L^2\to L^2}\leq Ch^{\frac{n-1}{2}(2\rho-1)}t^{\frac{1-n}{2}}, $ as claimed.\smallskip \noindent{\emph{Proof of the bound in \eqref{E:claim on derivative}.}} {Let $\phi_t(x,\eta):=\phi(t,x, \eta)$ and $\varphi_t(x,y,\eta):=\phi_t(x,\eta)-\langle y, \eta\rangle$.
Then, $C_{\varphi_t}=\{(x,y, \eta):\; \partial_\eta \phi_t(x,\eta)=y\}$ and so $$\Lambda_{\varphi_t}=\{(x,\partial_x \phi_t(x,\eta), \partial_\eta \phi_t(x,\eta), -\eta) \}\subset T^*{\mathbb R}^n \times T^*{\mathbb R}^n.$$ In particular, since $\Lambda_{\varphi_t}$ is the twisted graph of $\kappa_t$, we have that $\kappa_t$ is characterized by $$ \kappa_t(\partial_\eta \phi_t(x,\eta), \eta)= (x,\partial_x \phi_t(x,\eta)).$$ Furthermore, since $\kappa_t(\Gamma(0))=\Gamma(t)$, we have $$\Gamma(t)=\{(x, \xi): \; \kappa_t(y, \eta)=(x,\xi), \; y=\partial_\eta \phi_t(x,\eta), \; \xi=\partial_x \phi_t(x,\eta),\; (y, \eta) \in \Gamma(0)\}.$$ Then, using $\kappa_t^* \xi_n =\eta_n$, $$\Gamma(t)=\{(x, \xi): \; \xi'=\partial_{x'} \phi_t(x,\eta), \; \partial_{\eta'} \phi_t(x,\eta)=0, \; \xi_n=\eta_n,\; \eta\in {\mathbb R}^n\}.$$ Next, let $\tilde p:=(\tilde x, \tilde \eta)\in \Gamma(0)$ be such that $\partial_{\eta'}\phi_t(\tilde x, \tilde \eta)=0$. Without loss of generality, in what follows we assume that $\tilde x_n=0$. Letting $\Gamma_0(t):=\Gamma(t)\big|_{\{ x_n=0\}}$ we have that $$\Gamma_0(t)=\{(x, \xi): \; \xi'=\partial_{x'} \phi_t(x,\eta), \; \partial_{\eta'} \phi_t(x,\eta)=0, \;x_n=0,\; \xi_n=\eta_n,\; \eta\in {\mathbb R}^n\}.$$ In particular, letting $\tilde{\xi}:=(\partial_{x'}\phi_t(\tilde{p}),\tilde{\eta}_n)$, and $\tilde{p}_0:=(\tilde{x},\tilde{\xi})$, we have $\tilde p_0\in \Gamma_0(t) \cap \Gamma_0(0),$ and \begin{align*} T_{\tilde p_0}\Gamma_0(t) =\{(\delta_x, \delta_\xi): \; &\delta_{\xi'}=\partial_x\partial_{x'} \phi_t(\tilde p) \delta_x+\partial_\eta\partial_{x'} \phi_t(\tilde p) \delta_\eta , \\ \;& \partial_x\partial_{\eta'} \phi_t(\tilde p)\delta_x+\partial_\eta\partial_{\eta'} \phi_t(\tilde p)\delta_\eta =0, \;\delta_{x_n}=0,\; \delta_{\xi_n}=\delta_{\eta_n},\; \delta_{\eta}\in {\mathbb R}^n\}. \end{align*} Next, we note that $\partial_{x_n} \in T_{\tilde p_0}\Gamma(t)$ and $H_{q_i(t)} \in T_{\tilde p_0}\Gamma(t)$ for all $i=1, \dots, n-1$. 
Therefore, since $\partial_{x_n}q_i(t)=0$, we also know that $H_{q_i(t)}':=(\partial_{\xi'}q_i(t), 0, -\partial_{x'}q_i(t),0) \in T_{\tilde p_0}\Gamma_0(t)$ for all $i=1, \dots, n-1$. We claim that there exists $C>0$ such that for all $v=(\delta_{x'},0, \delta_{\xi'}, 0) \in \text{span}\{H_{q_i(t)}'\}_{i=1}^{n-1} \subset T_{\tilde p_0}\Gamma(t)$ we have \begin{equation}\label{e: delta relation} \|\delta_{x'}\| \geq C t \| \delta_{\xi'}\|. \end{equation} Suppose that the claim in \eqref{e: delta relation} holds. Then, note that for each such $v$, since $\delta_{x_n}=0$ and $\delta_{\xi_n}=0$, we have that there is $\delta_{\eta'} \in {\mathbb R}^{n-1}$ such that $$ \delta_{\xi'}=\partial_{x'}^2 \phi_t(\tilde p) \delta_{x'}+\partial_{\eta'x'}^2 \phi_t(\tilde p) \delta_{\eta'}, \qquad \; \partial_{x'\eta'}^2 \phi_t(\tilde p)\delta_{x'}+\partial_{\eta'}^2 \phi_t(\tilde p)\delta_{\eta'} =0. $$ Using that $\partial_{x'\eta'}^2 \phi_t(\tilde p)=\operatorname{Id} +O(t)$ and $\partial_{x'}^2 \phi_t(\tilde p)=O(t)$, we conclude that $$ \partial_{\eta'}^2 \phi_t(\tilde p)[\partial_{\eta'x'}^2 \phi_t(\tilde p)]^{-1}\delta_{\xi'}= \left(\partial_{\eta'}^2 \phi_t(\tilde p)[\partial_{\eta'x'}^2 \phi_t(\tilde p)]^{-1}\partial_{x'}^2 \phi_t(\tilde p) -\partial_{x'\eta'}^2 \phi_t(\tilde p) \right)\delta_{x'}, $$ and so \begin{equation}\label{e: startfish} \partial_{\eta'}^2 \phi_t(\tilde p)(\operatorname{Id} +O(t))\delta_{\xi'}={(-\operatorname{Id} +O(t))}\delta_{x'}. \end{equation} Let $H_{q_i(t)}'=(\delta_{x'}^{(i)}, 0, \delta_{\xi'}^{(i)},0)$. Since $\tilde p_0 \in \Gamma(t) \cap \Gamma(0)$, assumptions \eqref{e:uncertainAssume1} and \eqref{e:uncertainAssume2} yield that the vectors $\{\delta_{x'}^{(i)}\}_{i=1}^{n-1}$ are linearly independent. 
Indeed, setting $e_i:=(\delta_{ij})_{j=1}^{n-1} \in {\mathbb R}^{n-1}$, \begin{equation}\label{e: vectors are li} \delta_{x'}^{(i)}=\partial_{\xi_i}q_i(t) e_i +O(t^2), \qquad |\partial_{\xi_i}q_i(t)|\geq Ct, \end{equation} for $t$ small enough. Furthermore, \eqref{e: startfish} then yields that $\{\delta_{\xi'}^{(i)}\}_{i=1}^{n-1}$ are linearly independent. Then, combining \eqref{e: startfish} with \eqref{e: delta relation} yields \eqref{E:claim on derivative} as claimed. To finish it only remains to prove \eqref{e: delta relation}. Let $v=(\delta_{x'},0, \delta_{\xi'}, 0) \in \text{span}\{H_{q_i(t)}'\}_{i=1}^{n-1}$. Then, there is $a \in {\mathbb R}^{n-1}$ such that $\delta_{x'}=\sum_{i=1}^{n-1} a_i \delta_{x'}^{(i)}$ and $\delta_{\xi'}=\sum_{i=1}^{n-1} a_i \delta_{\xi'}^{(i)}$. Next, note that by \eqref{e: vectors are li} we have $\|\delta_{x'}\| \geq \|a\| (Ct +O(t^2))$. Since $\|\delta_{\xi'}\| \leq C_0 \|a\|$ for some $C_0>0$, the claim in \eqref{e: delta relation} follows. } \end{proof} \begin{figure} \label{f:twisting} \begin{tikzpicture} \begin{scope}[scale=1] \draw[->,dashed] (-15:-1.5)--(-15:2)node[right]{$x'$}; \draw[->,dashed] (0,-1.5)--(0,2)node[above]{$\xi'$}; \draw[red,thick] (-15:-1)--(-15:1) node[below]{$\red{\Gamma_{x_i}}$}; \draw[red] (-15:1)--++(45:1)--++(-15:-2)--++(45:-1); \draw[thick](45:-1)--(45:2)node [right]{$\gamma_{\red{x_i},\blue{x_j}}$}; \draw[blue, thick] (15:-1)node[left]{$\blue{\Gamma_{x_j}}$}--(15:1) ; \draw[blue](15:1)--++(45:1)--++(15:-2)--++(45:-1); \begin{scope}[rotate=-15] \draw[thick] (.5,0) arc (0:30:.5); \draw[->](15:2)node[right]{$\sim cd(\red{x_i},\blue{x_j})$}--(15:.5); \end{scope} \end{scope} \end{tikzpicture} \caption{A pictorial representation of the co-isotropics involved in Corollary~\ref{c:corUncertain} where $\gamma_{x_i,x_j}$ is the geodesic from $x_i$ to $x_j$. Localization to both $\Gamma_{x_i}$ and $\Gamma_{x_j}$ implies localization in the non-symplectically orthogonal directions, $x'$ and $\xi'$. 
The uncertainty principle then rules this behavior out.} \end{figure} For each $x \in M$ let $\Gamma_x$ be as in~\eqref{e:coiso}. (See Figure~\ref{f:twisting} for a schematic representation of these two co-isotropic submanifolds.) Then we have the following result. \begin{corollary} \label{c:corUncertain} Let $0<\rho<1$, {$0<\varepsilon<\rho$}, and $\gamma:(-\varepsilon_0,\varepsilon_0)\to M$ be a unit speed geodesic. {Then, for $X(t)\in \Psi^{-\infty}_{\Gamma_{\gamma(t)},\rho}$ and $h$ such that $h^{\rho-\varepsilon}\leq {|t|<\varepsilon_0}$}, $$ \|X(0)X(t)\|_{L^2\to L^2}\leq Ch^{\frac{n-1}{2}(2\rho-1)}t^{\frac{1-n}{2}}. $$ \end{corollary} \begin{proof} To prove this, we study the geometry of the flow-out co-isotropics $\Gamma_{\gamma(t)}$. {Namely, we prove that $\Gamma_{\gamma(t)}$ is defined by some functions $\{q_i(t)\}_{i=1}^n$ with $q_i(0)=x_i$ and satisfying \eqref{e:uncertainAssume1} and \eqref{e:uncertainAssume2}. We then apply Proposition \ref{p:uncertainty} to $\Gamma(t)=\kappa^{-1}(\Gamma_{\gamma(t)})$, for a suitable symplectomorphism $\kappa$.} Fix coordinates {$(x',x_n) \in {\mathbb R}^{n-1}\times {\mathbb R}$} on $M$ so that $\gamma(t)=( 0,t)$, and for each $t \in (-\varepsilon_0,\varepsilon_0)$ let $\mc{H}_t$ be the submanifold transverse to the Hamiltonian vector field $H_p$ defined by $$ \mc{H}_{t}:=\{ (x',t, \xi', \xi_n):\; 2\xi_n> |\xi|_g, \; {|x'| \leq \delta_0}\}, $$ {where $\delta_0>0$ is chosen such that } $ {\Gamma_{\gamma(t)}}\cap \mc{H}_t= \{(0,t,\xi', \xi_n): \, 2\xi_n>|\xi|_g,\; \big||\xi|_g-1\big|<\delta\}. $ In particular, {as a subset of $\big\{\big||\xi|_g-1\big|<\delta\big\}$, } $\Gamma_{\gamma(t)}\cap \mc{H}_t$ is defined by the coordinate functions $\{x_i\}_{i=1}^{n-1}$. {For each $t \in (-\varepsilon_0,\varepsilon_0)$} let $\tilde{q}_i(t):\mc{H}_t\to \mathbb{R}$ be given by $\tilde{q}_i(t)=x_i$ for $i=1,\dots, n-1$. Then, define $\{q_i(t)\}_{i=1}^{n-1}$ {on $T^*M$} by $$ H_pq_i(t)=0,\qquad q_i(t)|\sub{\mc{H}_t}=\tilde{q}_i(t). 
$$ Note that {for all $t$} $ H_p(H_{q_i(t)}q_j(t))=0 $ and $$ \{q_i(t),q_j(t)\}\mid\sub{\mc{H}_t}=\partial_{\xi_n}q_i(t)\partial_{x_n}q_j(t)-\partial_{\xi_n}q_j(t)\partial_{x_n}q_i(t)+\tilde{H}_{q_i(t)}q_j(t),$$ where $\tilde{H}$ is the Hamiltonian vector field in $T^*\{x_n=t\}$. In particular, since $\partial_{\xi_n}\tilde{q}_i(t)=0$ and $\tilde{H}_{q_i(t)}$ is tangent to $\mc{H}_t$, we have $ \{q_i(t),q_j(t)\}\mid\sub{\mc{H}_t}=0. $ Hence, $\{q_i(t),q_j(t)\}\equiv 0$, $\{q_i(t),p\}=0$, {$q_i(0)=x_i$}, and $\{q_i(t)\}_{i=1}^{n-1}$ define $\Gamma_{\gamma(t)}.$ Next, observe that {there exists $s \in {\mathbb R}$ such that for each $i=1, \dots, n-1$,} $ q_i(0)(x, \xi)=x_i(\varphi_{s}(x,\xi))$ with $\varphi_{s}(x,\xi)\in\mc{H}_0. $ Since $\partial_{\xi_n}p\neq 0$ on $\mc{H}_0$, {for $E$ near $0$ there exist $a_{\sub{\!E}}$ and $e_{\sub{\!E}}$ such that} $$ p(x, \xi)-E=e_{\sub{\!E}}(x,\xi)(\xi_n-a_{\sub{\!E}}(x,\xi')) $$ with $e_{\sub{\!E}}>c$ {for some constant $c>0$.} In particular, $\varphi_s=\exp(sH_p)$ {is a reparametrization of $\tilde \varphi_s:=\exp(s(H_{\xi_n-a_{\sub{\!E}}}))$ on $\{p=E\}$} and we have that for $(x,\xi)\in \{p=E\}$, {and all $i=1, \dots, n-1$}, {$$ q_i(0)(x,\xi)=x_i(\tilde \varphi_{-x_n}(x,\xi))={x_i}+x_n\partial_{\xi_i}a_{\sub{\!E}}(x,\xi')+O(x_n^2)_{C^\infty}. 
$$} In particular, on $\mc{H}_t\cap \{p=E\}$, since $H_{q_j(t)}$ is tangent to $\{p=E\}$, we have \begin{align*} \{q_j(t),q_i(0)\}|\sub{\mc{H}_t\cap \{p=E\}}&=\partial_{\xi_n}q_j(t)\partial_{x_n}q_i(0)-\partial_{x_n}q_j(t)\partial_{\xi_n}q_i(0) +\tilde{H}_{q_j(t)}q_i(0) =O(t^2)+\partial_{\xi_j}{(t\partial_{\xi_i}a_{\sub{\!E}})} \end{align*} Now, since $\partial_\xi^2p|\sub{T\{p=E\}}>0$, and for all $i, j=1, \dots, n$ \begin{align*} &\partial_{\xi_i\xi_j}p=\partial_{\xi_j}\partial_{\xi_i}e_{\sub{\!E}}(\xi_n-a_{\sub{\!E}})+\partial_{\xi_i}e_{\sub{\!E}}(\delta_{nj}-\partial_{\xi_j}a_{\sub{\!E}})+\partial_{\xi_j}e_{\sub{\!E}}(\delta_{ni}-\partial_{\xi_i}a_{\sub{\!E}})-e_{\sub{\!E}}\partial_{\xi_j}\partial_{\xi_i}a_{\sub{\!E}}, \end{align*} then, as quadratic forms, $ \partial_{\xi}^2p|\sub{T\{p=E\}}=-e_{\sub{\!E}}{\partial_\xi^2}a_{\sub{\!E}}|\sub{T\{p=E\}}. $ Hence, $\partial_{\xi'}^2a_{\sub{\!E}}<0$, and {there is $c>0$ with} $$ c\delta_{ij}t+O(t^2) \leq \Big|\{q_i(t),q_j(0)\}|_{\mc{H}_t\cap \{p=E\}}\Big|\leq C\delta_{ij}t+O(t^2). $$ Then, $ c\delta_{ij}t+O(t^2) \leq \Big|\{q_i(t),q_j(0)\}|_{ \{p=E\}} \Big|\leq C\delta_{ij}t+O(t^2) $ by invariance under $H_p$. Since $E$ small is arbitrary, this holds on $\Gamma_{\gamma(0)}\cap \Gamma_{\gamma(t)}$. Now, by Darboux's theorem, there is a symplectomorphism $\kappa$ such that for all $i=1, \dots, n-1$ $ \kappa^*q_i(0)=x_i$ and $\kappa^*p=\xi_n. $ In particular, $ \kappa^{-1}(\Gamma_{\gamma(0)})\subset {\Gamma(0)=}\{(0,x_n,\xi', \xi_n): \;x_n\in \mathbb{R},\,\xi\in \mathbb{R}^{n-1} \times \mathbb{R}\} $ and, abusing notation slightly by relabeling $q_i(t)=\kappa^*q_i(t)$, we have that~\eqref{e:uncertainAssume1} and~\eqref{e:uncertainAssume2} hold.{ In particular, Proposition ~\ref{p:uncertainty} applies to $\Gamma(t)=\kappa^{-1}(\Gamma_{\gamma(t)})$. } Now, let $U$ be a microlocally unitary quantization of $\kappa$, and $X(t)\in \Psi^{-\infty}_{\Gamma_{\gamma(t)},\rho}$. 
Then, $U^{-1}X(t)U\in \Psi^{-\infty}_{\Gamma(t),\rho}$ and hence the corollary is proved. \end{proof} \subsection{Almost orthogonality for coisotropic cutoffs} \label{s:almostOrthog} {In this section, we finally prove an estimate which shows that co-isotropic cutoffs associated with $\Gamma_{x_i}$ for many $x_i$ are almost orthogonal. This, together with the fact that these cutoffs respect pointwise values near $x_i$, is what allows us to control the number of points at which a quasimode may be large.} \begin{proposition}\label{P:orthogonality} Let $\{B(x_i,R)\}_{i=1}^{N(h)}$ be a $(\mathfrak{D},R)$-good cover for $M$, and $X_i\in \Psi^{-\infty}_{\Gamma_{x_i},\rho}$, $i=1,\dots, N(h)$, with uniform symbol estimates. Then, there are $C>0$ and $h_0>0$ such that for all $0<h<h_0$, $\mc{J}\subset \{1,\dots ,N(h)\}$, and $u\in L^2(M)$, we have \begin{equation} \label{e:orthogonality} {\sum_{j\in \mc{J}}\|X_j u\|^2_{L^2}\leq C \Big(1+(h^{2\rho-1}R^{-1})^{\frac{n-1}{2}}|\mc{J}|^{\frac{3n+1}{2n}}\big(1+(h^{2\rho-1}R^{-1})^{\frac{n-1}{4}}\big) \Big)\|u\|_{L^2}^2.} \end{equation} \end{proposition} \begin{proof} To prove this bound we will decompose the sum in \eqref{e:orthogonality} as \begin{equation}\label{e:Xgoal} \sum_{i\in \mc{J}} \|X_i u\|_{L^2}^2=\Big\|\sum_{i\in \mc{J}} X_iu \Big\|_{L^2}^2- \Big\langle \sum_{\substack{i,j\in \mc{J}\\i\neq j}}{X_j^*}X_iu,u \Big\rangle. \end{equation} First, we note that by Corollary~\ref{c:corUncertain}, {there exists $C>0$ such that for $i \neq j$} $$ \|{X_j^*}X_i\|\leq Ch^{(n-1)(\rho-\frac{1}{2})}d({x_i},x_j)^{\frac{1-n}{2}}. $$ Therefore, by the Cotlar-Stein lemma, \begin{align*} \Big\|\sum_{j\in \mc{J}} X_j\Big\| &{\leq \sup_{j\in \mc{J}} \Big(\| X_j\| + \sum_{{i \in \mc{J} \backslash\{j\}}}\|X_j^*X_i\|^{\tfrac{1}{2}}\Big)}\leq 2+C h^{\frac{n-1}{2}(\rho-\frac{1}{2})}\, \sup_{j\in \mc{J}}\sum_{{i \in \mc{J} \backslash\{j\}}}d(x_i,x_j)^{\frac{1-n}{4}}. 
\end{align*} To estimate the sum, observe that {there exists $C>0$ such that for any $j \in \mc{J}$ and any positive integer $k$} $ \tfrac{1}{C}2^{kn}\leq \#\{i:\; 2^{k}R\leq d(x_i,x_j)\leq 2^{k+1}R\}\leq C2^{(k+1)n}. $ {In particular, there is $C>0$ such that for any $j \in \mc{J}$ \begin{equation}\label{E:adding wo j} \sum_{i \in \mc{J} \backslash\{j\}} d(x_i,x_j)^{\frac{1-n}{4}}\leq C\sum_{k=0}^{\frac{1}{n}\log_2|\mc{J}|}2^{kn}(2^kR)^{\frac{1-n}{4}}\leq C|\mc{J}|^{\frac{3n+1}{4n}}R^{\frac{1-n}{4}}. \end{equation}} Therefore, we shall bound the first term in \eqref{e:Xgoal} using \begin{equation}\label{e:X_j} \Big\|\sum_{j\in \mc{J}}X_j \Big\|\leq C+Ch^{\frac{n-1}{2}(\rho-\frac{1}{2})}R^{\frac{1-n}{4}}|\mc{J}|^{\frac{3n+1}{4n}}. \end{equation} We next proceed to control the second term in \eqref{e:Xgoal}. Let $\tilde{X}_j\in \Psi^{-\infty}_{\Gamma_{x_j},\rho}$ such that $\tilde{X}_jX_j=X_j+O(h^\infty)_{L^2\to L^2}$. By the Cotlar-Stein Lemma, \begin{equation}\label{e:sec term} \Big\|\sum_{\substack{i,j\in \mc{J}\\i\neq j}} X_j^*X_i \Big\| \leq \sup_{\substack{k,\ell\in \mc{J}\\k\neq \ell}} \sum_{\substack{i,j\in \mc{J}\\i\neq j}} \|X_k^*\tilde{X}_\ell X_\ell X_j^*\tilde{X}_j^*X_i\|^{\frac{1}{2}}+O(h^\infty|\mc{J}|^2). \end{equation} By Corollary \ref{c:corUncertain} there exists $C>0$ such that for $k \neq \ell$, $i \neq j$, $$ \|{X_k^* \tilde X_\ell X_\ell X_j^*\tilde X_j^*X_i}\|\leq C h^{(n-1)(2\rho-1)}\min\{1, h^{\frac{(n-1)}{2}(2\rho-1)}d(x_j,x_\ell)^{-\frac{n-1}{2}}\}(d(x_k,x_\ell)d(x_j,x_i))^{\frac{1-n}{2}}. 
$$ Using that $\sup_{\substack{k,\ell\in \mc{J}\\k \neq \ell}}d(x_k,x_\ell)^{\frac{1-n}{4}}\leq R^{\frac{1-n}{4}}$, adding first in $i \in \mc{J}\backslash \{j\}$ in \eqref{e:sec term}, and combining with the bound in \eqref{E:adding wo j}, yields \begin{align}\label{e:X_jX_k} \Big\|\sum_{\substack{i,j\in \mc{J}\\i\neq j}} X_j^*X_i\Big\| &\leq C h^{\frac{n-1}{2}(2\rho-1)} (1+h^{\frac{n-1}{4}(2\rho-1)}|\mc{J}|^{\frac{3n+1}{4n}}R^{\frac{1-n}{4}}) |\mc{J}|^{\frac{3n+1}{4n}}R^{\frac{1-n}{2}}. \end{align} In particular, combining \eqref{e:X_j} and \eqref{e:X_jX_k} into \eqref{e:Xgoal} we obtain \begin{align*} \sum_{i\in \mc{J}} \|X_i u\|^2&\leq C \Big(1+h^{(n-1)(\rho-\frac{1}{2})}R^{\frac{1-n}{2}}|\mc{J}|^{\frac{3n+1}{2n}}+h^{\frac{3(n-1)}{4}(2\rho-1)}R^{\frac{3(1-n)}{4}}{|\mc{J}|^{\frac{3n+1}{2n}}}\Big)\|u\|_{L^2}^2 \notag\\ &\leq C (1+h^{(n-1)(\rho-\frac{1}{2})}R^{\frac{1-n}{2}}(1+(h^{2\rho-1}R^{-1})^{\frac{n-1}{4}}){|\mc{J}|^{\frac{3n+1}{2n}}})\|u\|_{L^2}^2. \end{align*} \vspace{-.5cm} \end{proof}
\section*{Abstract} {\bf We give a detailed description of the nested algebraic Bethe ansatz. We consider integrable models with a $\mathfrak{gl}_3$-invariant $R$-matrix as the basic example, however, we also describe possible generalizations. We give recursions and explicit formulas for the Bethe vectors. We also give a representation for the Bethe vectors in the form of a trace formula. } \vspace{10pt} \noindent\rule{\textwidth}{1pt} \tableofcontents\thispagestyle{fancy} \noindent\rule{\textwidth}{1pt} \vspace{10pt} \section{Introduction} Algebraic Bethe ansatz (ABA) is a part of the Quantum Inverse Scattering Method (QISM), that emerged in the late 70's in the works of the Leningrad School \cite{FadST79,FadT79}. Almost simultaneously with this method, a nested algebraic Bethe ansatz (NABA) was developed in \cite{Kul81,KulRes82,KulRes83,Res86}. The NABA is a method that allows us to find the spectrum of quantum integrable models describing systems with several types of excitations. It is an algebraic interpretation of the approach proposed in the works \cite{Yang67,Sath68,Sath75}. These notes are based on a series of lectures given by the author at the Les Houches summer school 2018 {\it Integrability in Atomic and Condensed Matter Physics}. We introduce the reader to the basic principles of the NABA. The presentation follows the classical scheme described in the papers \cite{Kul81,KulRes82,KulRes83,Res86}. However, we give much more details and illustrate the general principles with concrete calculations. To simplify the discussion, we will mainly confine ourselves to the case of models, which are described by a $\mathfrak{gl}_3$-invariant $R$-matrix. A number of statements, however, are formulated for a fairly general case. Besides, we give a number of comments on how one can generalize the results obtained for the $\mathfrak{gl}_3$ case to the models with symmetries of higher ranks. 
We also describe a method developed in the works \cite{TarV94,TarV13,BelRag08}. In contrast to the classical NABA scheme, which constructs Bethe vectors recursively, this approach gives explicit formulas for these vectors directly. We present a proof of the equivalence of this approach to the NABA. In these notes, we focus on the mathematical aspects of the NABA and do not consider the physical applications of models solvable by this method. Note, however, that these physical applications are very wide, since the NABA models provide a more realistic description of strongly interacting systems. The reason is that in the NABA we deal with several creation operators. This allows us to consider systems where several degrees of freedom of fundamental particles interact, for example, the spin and the charge of the electrons. Therefore, the NABA solvable models have found wide application primarily in the physics of strongly correlated electronic systems (Yang--Gaudin model \cite{McG65,McG66,Yang67,Gau67}, t-J model and Hubbard model \cite{Schl87,LieW68,LieW03,EssFGKK05}). We can also consider systems consisting of several types of particles, such as systems with impurities (Kondo model) \cite{And80,AndFL80,Wie80,Wie81}. For a more detailed description of the application of the NABA to Fermi gases and ultracold atom systems, we refer the reader to the review \cite{GuaBL13}. It is also worth mentioning that the Hamiltonians of integrable systems with a large number of degrees of freedom arise in supersymmetric gauge theories \cite{MinZ03}. To conclude this short introduction, we would like to mention that the NABA is a generalization of the ABA and uses basically the same concepts. Some techniques are also borrowed from the ABA. Therefore, to follow these notes, the reader should be familiar with the basic principles and techniques of the ABA. In addition to the original works mentioned above, they can be found in \cite{BogIK93L,FadLH96,Sla07,Sla18}. 
\subsection{Reminder of the algebraic Bethe ansatz\label{SS-RABA}} A key equation of the ABA is an $RTT$-relation \cite{FadST79,FadT79,BogIK93L,FadLH96,Sla07,Sla18} \be{00-RTT} R(u,v)\bigl(T(u)\otimes I \bigr)\bigl(I\otimes T(v)\bigr)=\bigl(I\otimes T(v) \bigr) \bigl(T(u)\otimes I\bigr)R(u,v). \end{equation} Here $T(u)$ is a monodromy matrix \be{T22} T(u)=\begin{pmatrix} A(u)& B(u)\\ C(u)& D(u) \end{pmatrix}, \end{equation} whose matrix elements act in some Hilbert space $\mathcal{H}$. The monodromy matrix also acts in the space $\mathbb{C}^2$ which is called an auxiliary space. The $R$-matrix $R(u,v)$ acts in $\mathbb{C}^2\otimes \mathbb{C}^2$. Another commonly used form of equation \eqref{00-RTT} is \be{00-RTT1} R_{12}(u,v)T_{1}(u)T_{2}(v)=T_{2}(v) T_{1}(u)R_{12}(u,v). \end{equation} Here the subscripts show in which of the two auxiliary spaces $\mathbb{C}^2$ the $T$-matrices act nontrivially. The $R$-matrix $R_{12}(u,v)$ acts in both spaces $\mathbb{C}^2$. The $RTT$-relation immediately yields \be{00-trace-22} [\mathop{\rm tr}\nolimits T(u),\mathop{\rm tr}\nolimits T(v)]=0, \qquad \mathop{\rm tr}\nolimits T(u)=A(u)+D(u). \end{equation} Thus, the trace of the monodromy matrix in the auxiliary space (transfer matrix) is a generating function of commuting operators \be{00-expan-T} \mathop{\rm tr}\nolimits T(u)=\sum_k (u-u_0)^k I_k,\qquad [I_k,I_n]=0. \end{equation} Choosing one of $I_k$ as a Hamiltonian of a quantum model we automatically obtain many (generically, infinitely many) integrals of motion. Thus, we have a chance to build an integrable model. It is assumed within the framework of the ABA that the Hilbert space $\mathcal{H}$ of the model contains a vacuum vector $|0\rangle$ with the following properties: \be{00-vac22} A(u)|0\rangle=a(u)|0\rangle, \qquad D(u)|0\rangle=d(u)|0\rangle,\qquad C(u)|0\rangle=0. \end{equation} Here $a(u)$ and $d(u)$ are some functions that depend on the particular model. 
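For completeness, we recall the standard one-line argument behind \eqref{00-trace-22}. Since the $R$-matrix is invertible for generic $u$, $v$, the $RTT$-relation \eqref{00-RTT1} gives $T_{1}(u)T_{2}(v)=R_{12}^{-1}(u,v)T_{2}(v)T_{1}(u)R_{12}(u,v)$. Taking the trace over both auxiliary spaces and using its cyclicity, we find \[ \mathop{\rm tr}\nolimits T(u)\mathop{\rm tr}\nolimits T(v)=\mathop{\rm tr}\nolimits_{12}\Bigl(R_{12}^{-1}(u,v)T_{2}(v)T_{1}(u)R_{12}(u,v)\Bigr)=\mathop{\rm tr}\nolimits_{12}\Bigl(T_{2}(v)T_{1}(u)\Bigr)=\mathop{\rm tr}\nolimits T(v)\mathop{\rm tr}\nolimits T(u), \] where $\mathop{\rm tr}\nolimits_{12}$ denotes the trace over both auxiliary spaces. This is exactly \eqref{00-trace-22}.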
Common eigenstates of the Hamiltonian and other integrals of motion are eigenstates of the transfer matrix for arbitrary complex $z$: \be{00-Eig-state} \mathop{\rm tr}\nolimits T(z)|\Psi\rangle=\Lambda(z)|\Psi\rangle, \end{equation} where $\Lambda(z)$ is the transfer matrix eigenvalue. Within the framework of the ABA they can be obtained by the successive action of the operators $B(u)$ on the vacuum vector: \be{00-Eig-stateB} |\Psi\rangle=B(u_1)\dots B(u_n)|0\rangle, \end{equation} provided parameters $u_1,\dots,u_n$ satisfy a system of Bethe equations (see below). In this case, we call the vector \eqref{00-Eig-stateB} an {\it on-shell Bethe vector}. Otherwise, if parameters $u_1,\dots,u_n$ are arbitrary complex numbers, we call the vector \eqref{00-Eig-stateB} an {\it off-shell Bethe vector} or simply the {\it Bethe vector}. \subsection{Possible generalization\label{SS-PG}} A question arises: can we generalize this construction to the case of the $N\times N$ monodromy matrix whose auxiliary space would be $\mathbb{C}^N$? Namely, we still want to have the $RTT$-relation \eqref{00-RTT}. Then, the transfer matrix \be{00-trace-NN} \mathop{\rm tr}\nolimits T(u)=\sum_{i=1}^NT_{ii}(u) \end{equation} satisfies the commutation relation \eqref{00-trace-22}. Thus, we can obtain a Hamiltonian and other integrals of motion via \eqref{00-expan-T}. In order to construct the Hamiltonian eigenstates we assume that the Hilbert space of the model has a vacuum vector $|0\rangle$ with the properties analogous to \eqref{00-vac22}: \be{00-vac} \begin{aligned} &T_{ii}(u)|0\rangle=\lambda_i(u)|0\rangle, \qquad i=1,\dots,N,\\ &T_{ij}(u)|0\rangle=0, \qquad i>j. \end{aligned} \end{equation} Here $\lambda_i(u)$ are some functions dependent on the particular model. \subsection{Examples of \texorpdfstring{$R$}{}-matrices\label{SS-ERM}} The first problem is to find an $R$-matrix acting in $\mathbb{C}^N\otimes \mathbb{C}^N$. 
The $R$-matrix should satisfy the Yang--Baxter equation \be{00-Yangian} R_{12}(u_1,u_2)R_{13}(u_1,u_3)R_{23}(u_2,u_3)= R_{23}(u_2,u_3)R_{13}(u_1,u_3)R_{12}(u_1,u_2), \end{equation} in order to provide compatibility of the $RTT$-relation. The first example of the non-trivial $R$-matrix has exactly the same form as in the case of the $\mathbb{C}^2$ auxiliary space: \be{00-RYang} R(u,v)=\mathbb{I}+g(u,v) P, \qquad g(u,v)=\frac{c}{u-v}. \end{equation} Here $\mathbb{I}$ is the identity operator in $\mathbb{C}^N\otimes \mathbb{C}^N$, $P$ is the permutation operator in the same space, and $c$ is a constant. The permutation operator has the form \be{00-P} P=\sum_{i,j=1}^N E_{ij}\otimes E_{ji}, \end{equation} where $(E_{ij})_{lk}=\delta_{il}\delta_{jk}$, $i,j,l,k=1,...,N$ are $N\times N$ matrices with unit at the intersection of $i$th row and $j$th column and zeros elsewhere (the standard basis matrices). Another solution to the Yang--Baxter equation is given by the $q$-deformation of the $R$-matrix \eqref{00-RYang} \cite{KulRes82,Jim83,Dri86}: \begin{equation}\label{00-RUqglN} \begin{split} R^{(q)}(u,v)= f_q(u,v)&\ \sum_{1\leq i\leq N}E_{ii}\otimes E_{ii}\ +\ \sum_{1\leq i<j\leq N}(E_{ii}\otimes E_{jj}+E_{jj}\otimes E_{ii}) \\ + &\sum_{1\leq i<j\leq N} \big(u g_q(u,v) E_{ij}\otimes E_{ji}+ vg_q(u,v)E_{ji}\otimes E_{ij}\big)\,, \end{split} \end{equation} where \begin{equation}\label{00-fqq} f_q(u,v)=\frac{qu-q^{-1}v}{u-v},\qquad g_q(u,v)=\frac{q-q^{-1}}{u-v}. \end{equation} Pay attention that this $R$-matrix is not a complete analog of the well known $4\times 4$ trigonometric $R$-matrix acting in $\mathbb{C}^2\otimes \mathbb{C}^2$. 
Indeed, the latter has the following form\footnote{% For those who are used to writing this matrix in terms of trigonometric (hyperbolic) functions, it is enough to substitute $u=e^{2x}$, $v=e^{2y}$, and $q=e^{\eta}$ in \eqref{00-fqq}.}: \be{00-trigR} R^{\text{trig}}(u,v)= \begin{pmatrix} f_q(u,v)&0&0&0\\ 0&1&\sqrt{uv}g_q(u,v)&0\\0&\sqrt{uv}g_q(u,v)&1&0\\0&0&0&f_q(u,v) \end{pmatrix}. \end{equation} One would expect that an analog of $R^{\text{trig}}(u,v)$ in the case $\mathbb{C}^N\otimes \mathbb{C}^N$ is \begin{equation}\label{00-RUqglN-wr} \begin{split} R^{\text{trig}}(u,v)= f_q(u,v)&\ \sum_{1\leq i\leq N}E_{ii}\otimes E_{ii}\ +\ \sum_{1\leq i<j\leq N}(E_{ii}\otimes E_{jj}+E_{jj}\otimes E_{ii}) \\ + &\sum_{1\leq i<j\leq N} \sqrt{uv} \;g_q(u,v) \big(E_{ij}\otimes E_{ji}+E_{ji}\otimes E_{ij}\big). \end{split} \end{equation} However, the $R$-matrix \eqref{00-RUqglN-wr} satisfies the Yang--Baxter equation for $N=2$ only. The point is that the $R$-matrix \eqref{00-RUqglN} can be transformed as follows: \be{tR-0} \tilde R^{(q)}_{12}(u,v) =K_1\left(\tfrac uv\right) R^{(q)}_{12}(u,v)K_1\left(\tfrac vu\right), \end{equation} where \be{00-Kx} K\left(\tfrac uv\right)=\sum_{j=1}^N \left(\tfrac uv\right)^{(N+1)/4-j/2}E_{jj}. \end{equation} Then the new matrix $\tilde R^{(q)}(u,v)$ is also a solution of the Yang--Baxter equation (see appendix~\ref{A-simtr}). It is easy to check that for $N=2$ the matrix $\tilde R^{(q)}(u,v)$ \eqref{tR-0} coincides with $R^{\text{trig}}(u,v)$ \eqref{00-trigR}, but this is not true for $N>2$. \medskip There exist, of course, other $R$-matrices acting in $\mathbb{C}^N\otimes \mathbb{C}^N$ and satisfying the Yang--Baxter equation, for example, the Belavin elliptic $R$-matrix \cite{Bel80,OdeF93,FelP94,Ode02}. However, we will restrict ourselves to the simplest $R$-matrix \eqref{00-RYang}. Furthermore, the main part of these lectures will be devoted to the case $N=3$. 
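The statements above about which of these matrices solve the Yang--Baxter equation are easy to check numerically. The following Python sketch (the helper names are ours, not from the text) builds the matrices \eqref{00-RYang}, \eqref{00-RUqglN}, and \eqref{00-RUqglN-wr} for $N=3$ and tests \eqref{00-Yangian} at generic spectral parameters:

```python
import numpy as np

N, c, q = 3, 1.0, 1.3                 # rank, constant of (00-RYang), deformation

# Permutation operator (00-P): P = sum_{ij} E_ij (x) E_ji on C^N (x) C^N.
P = np.zeros((N * N, N * N))
for i in range(N):
    for j in range(N):
        P[i * N + j, j * N + i] = 1.0

def R_rat(u, v):
    """Rational gl_N-invariant R-matrix (00-RYang)."""
    return np.eye(N * N) + (c / (u - v)) * P

def R_q(u, v, sym=False):
    """q-deformed R-matrix (00-RUqglN); sym=True replaces the entries
    u*g_q, v*g_q by sqrt(uv)*g_q, giving the matrix (00-RUqglN-wr)."""
    f = (q * u - v / q) / (u - v)
    g = (q - 1 / q) / (u - v)
    cu, cv = (np.sqrt(u * v) * g,) * 2 if sym else (u * g, v * g)
    M = np.zeros((N * N, N * N))
    for i in range(N):
        M[i * N + i, i * N + i] = f              # E_ii (x) E_ii
        for j in range(i + 1, N):
            M[i * N + j, i * N + j] = 1.0        # E_ii (x) E_jj
            M[j * N + i, j * N + i] = 1.0        # E_jj (x) E_ii
            M[i * N + j, j * N + i] = cu         # E_ij (x) E_ji
            M[j * N + i, i * N + j] = cv         # E_ji (x) E_ij
    return M

def ybe_holds(R, u1=0.7, u2=0.4, u3=2.3):
    """Test (00-Yangian): R12 R13 R23 = R23 R13 R12 on C^N (x) C^N (x) C^N."""
    I, P23 = np.eye(N), np.kron(np.eye(N), P)
    R12 = np.kron(R(u1, u2), I)
    R23 = np.kron(I, R(u2, u3))
    R13 = P23 @ np.kron(R(u1, u3), I) @ P23      # swap slots 2,3 to act on 1,3
    return np.allclose(R12 @ R13 @ R23, R23 @ R13 @ R12)

assert ybe_holds(R_rat)                                   # (00-RYang) is a solution
assert ybe_holds(R_q)                                     # (00-RUqglN) is a solution
assert not ybe_holds(lambda u, v: R_q(u, v, sym=True))    # (00-RUqglN-wr) fails, N=3
```

The last check illustrates the claim that \eqref{00-RUqglN-wr} fails the Yang--Baxter equation as soon as $N>2$.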
We will see that even in this simplest case one should solve several non-trivial problems. The $R$-matrix \eqref{00-RYang} is called $\mathfrak{gl}_N$-invariant due to the property \be{00-gln-inv} [R_{12}(u,v),G_1G_2]=0, \end{equation} for any $G\in\mathfrak{gl}_N$. \subsection{Examples of monodromy matrices \label{SS-EMM}} The first example of the monodromy matrix is completely analogous to the $\mathfrak{gl}_2$ case \be{00-MonmXXX} T(u)=R_{0L}(u,\xi_L)\dots R_{01}(u,\xi_1). \end{equation} This is the monodromy matrix of the $SU(N)$-invariant inhomogeneous $XXX$ Heisenberg chain. The parameters $\xi_i$ are inhomogeneities. Each $R$-matrix $R_{0i}(u,\xi_i)$ acts in the tensor product $V_0\otimes V_i$, where every $V_i$ is $\mathbb{C}^N$. The auxiliary space of the monodromy matrix is $V_0\sim \mathbb{C}^N$. The quantum space is \be{00-spaceXXX} \mathcal{H} =V_1\otimes \dots \otimes V_L=\underbrace{\mathbb{C}^N\otimes \dots \otimes \mathbb{C}^N}_{L\quad\text{times}}. \end{equation} This quantum space has a vacuum vector of the form \be{00-vacXXX} |0\rangle =\underbrace{\left(\begin{smallmatrix} 1\\0\\ \raisebox{5pt}{\vdots}\\0 \end{smallmatrix}\right)\otimes \dots \otimes \left(\begin{smallmatrix} 1\\0\\ \raisebox{5pt}{\vdots}\\0 \end{smallmatrix}\right)}_{L\quad\text{times}}. \end{equation} Another example describes a system of bosons. For simplicity we give explicit formulas for the $\mathfrak{gl}_3$ case \cite{KulRes82} (generalization to $\mathfrak{gl}_N$ is quite obvious). An $L$-operator of this system has the following form: \be{L-kulresh} L^{(a)}(u)=u\mathbf{1}-c\mathcal{L}, \end{equation} where $\mathbf{1}$ is the identity operator and \be{p-kulresh} \mathcal{L}=\begin{pmatrix}a^\dagger_1a_1& a^\dagger_1a_2& ia^\dagger_1\sqrt{m+\rho}\\ a^\dagger_2a_1& a^\dagger_2a_2& ia^\dagger_2\sqrt{m+\rho}\\ i\sqrt{m+\rho}\;a_1& i\sqrt{m+\rho}\;a_2& -m-\rho \end{pmatrix}. \end{equation} Here $m$ is a complex number and $\rho=a^\dagger_1a_1+a^\dagger_2a_2$. 
The operators $a_k$ and $a^\dagger_k$ ($k=1,2$) act in a Fock space with the Fock vacuum $|0\rangle$: $a_k|0\rangle=0$. They have standard commutation relations of the Heisenberg algebra $[a_i,a^\dagger_k]=\delta_{ik}$. In order to construct the monodromy matrix we replace the original $a_k$ and $a^\dagger_k$ operators with $a_k(n)$ and $a_k^\dagger(n)$ so that $[a_i(n),a^\dagger_k(m)]=\delta_{ik}\delta_{nm}$. Then \be{TdoseGas} T(u)=L_M(u)\dots L_1(u), \quad \text{where} \quad L_i(u)= L^{(a)}(u)\Bigr|_{\substack{a_k=a_k(i)\\a_k^\dagger=a_k^\dagger(i)}}. \end{equation} This monodromy matrix describes a chain, at each site of which there may be a particle of one of two sorts. The vacuum vector coincides with the Fock vacuum. This system admits a continuum limit, in which it turns into the two-component Bose gas with $\delta$-function interaction \cite{Yang67,Sath68,Sath75,Sla15BG}. \subsection{Remark about \texorpdfstring{$RTT$}{}-algebra\label{SS-RRTTA}} The $RTT$-algebra \eqref{00-RTT} with the $R$-matrix \eqref{00-RYang} is closely related to the concept of Yangian $Y(\mathfrak{gl}_N)$ (see \cite{MolNO96,Mol07} and references therein). Sometimes in the literature it is called the Yangian. In fact, the $RTT$-algebra with the $\mathfrak{gl}_N$-invariant $R$-matrix is somewhat larger. In the case of the Yangian, we must impose an additional condition on the asymptotic behavior of the monodromy matrix elements $T_{ij}(u)$ at $u\to\infty$ \be{00-asyT} T_{ij}(u)=\delta_{ij}+\sum_{k=0}^\infty \left(\frac cu\right)^{k+1} T_{ij}[k],\qquad u\to\infty, \end{equation} where $T_{ij}[k]$ are generators of the Yangian $Y(\mathfrak{gl}_N)$. Note that the examples of the monodromy matrices considered in section~\ref{SS-EMM} satisfy this condition after appropriate normalization. 
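As an illustration of the constructions of section~\ref{SS-EMM}, one can verify the $RTT$-relation \eqref{00-RTT1} numerically for the monodromy matrix \eqref{00-MonmXXX}. The Python sketch below (our notation: the four tensor slots are the two auxiliary spaces followed by $L=2$ quantum sites, all copies of $\mathbb{C}^3$) checks it at generic parameters:

```python
import numpy as np

# Numerical check (not part of the original notes) that the XXX monodromy
# matrix (00-MonmXXX), built from the rational R-matrix (00-RYang), satisfies
# the RTT-relation (00-RTT1).  N = 3, chain length L = 2.

N, c, L = 3, 1.0, 2
xi = [0.3, -1.1]                      # inhomogeneities xi_1, xi_2

P = np.zeros((N * N, N * N))          # permutation operator on C^N (x) C^N
for i in range(N):
    for j in range(N):
        P[i * N + j, j * N + i] = 1.0

def R(u, v):
    return np.eye(N * N) + (c / (u - v)) * P

def embed(M, s, t, nslots):
    """Embed a two-slot operator M so that it acts on slots s < t of nslots."""
    A = np.eye(N ** nslots).reshape([N] * (2 * nslots))
    A = np.tensordot(M.reshape(N, N, N, N), A, axes=([2, 3], [s, t]))
    A = np.moveaxis(A, [0, 1], [s, t])
    return A.reshape(N ** nslots, N ** nslots)

def T(u, aux):
    """Monodromy matrix T(u) = R_{0L}...R_{01} with auxiliary slot `aux`."""
    out = np.eye(N ** 4)
    for site in range(L):             # left-multiply: R_{0L} ... R_{01}
        out = embed(R(u, xi[site]), aux, 2 + site, 4) @ out
    return out

u, v = 0.8, 2.5                       # generic spectral parameters
Rab = embed(R(u, v), 0, 1, 4)         # R-matrix on the two auxiliary slots
assert np.allclose(Rab @ T(u, 0) @ T(v, 1), T(v, 1) @ T(u, 0) @ Rab)
```

The same check with $L=1$ reduces precisely to the Yang--Baxter equation \eqref{00-Yangian} with $(u_1,u_2,u_3)=(u,v,\xi_1)$.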
However, if we pass from the monodromy matrix $T(u)$ satisfying condition \eqref{00-asyT} to a matrix $KT(u)$, where $K$ is a diagonal $c$-number matrix, then the new matrix $KT(u)$ will also satisfy the $RTT$-relation. The properties of the vacuum vector will also be preserved. However, the $KT(u)$ matrix no longer has expansion \eqref{00-asyT}, since it does not begin with the identity operator. This type of transformation (twist transformation) can be done with any of the $L$-operators entering the definition of the monodromy matrix. As a result, in some cases the monodromy matrix satisfying the $RTT$-relation may have an essential singularity at infinity (see e.g. continuum models of one-dimensional Bose and Fermi gases \cite{Sla15BG,BogIK93L}). Below we denote by $\mathcal{R}_N$ the $RTT$-algebra with the $\mathfrak{gl}_N$-invariant $R$-matrix \eqref{00-RYang}, where $N$ indicates the size of the monodromy matrix. Starting from this point, we consider only such algebras, unless otherwise specified. \subsection{Automorphism} Let us define a linear mapping $\varphi:\; T\to\tilde T$ such that \be{00-varphi} \begin{aligned} &\tilde T_{ij}(u)=\varphi\bigl(T_{ij}(u)\bigr)= T_{\tilde j,\tilde i}(-u), \qquad \text{where}\qquad \tilde i = N+1-i,\\ &\varphi\bigl(T_{ij}(u)T_{kl}(v)\bigr)=\varphi\bigl(T_{ij}(u)\bigr) \varphi\bigl(T_{kl}(v)\bigr). \end{aligned} \end{equation} \begin{prop}\label{00-Propauto}\cite{Mol07} The mapping \eqref{00-varphi} is an automorphism of the algebra $\mathcal{R}_N$. \end{prop} To prove this proposition we need an auxiliary lemma. \begin{lemma}\label{00-Lem-RTT} The $RTT$-relation \eqref{00-RTT1} with the $R$-matrix \eqref{00-RYang} implies \be{00-CRcomp} \begin{aligned} {}[T_{ij}(u),T_{kl}(v)]&=g(u,v)\bigl(T_{kj}(v)T_{il}(u) -T_{kj}(u)T_{il}(v)\bigr),\\ {}&=g(u,v)\bigl(T_{il}(u)T_{kj}(v) -T_{il}(v)T_{kj}(u)\bigr). 
\end{aligned} \end{equation} \end{lemma} \textsl{Proof.} Observe that the second equation \eqref{00-CRcomp} can be obtained from the first one via simultaneous replacements $i\leftrightarrow k$, $j\leftrightarrow l$, and $u\leftrightarrow v$. Thus, it is enough to prove the first equation \eqref{00-CRcomp}. Let us write down the $RTT$-relation \eqref{00-RTT1} in the form \be{A-RTT1} [T_{1}(u),T_{2}(v)]=g(u,v)\bigl(T_{2}(v) T_{1}(u)P_{12}-P_{12}T_{1}(u)T_{2}(v)\bigr). \end{equation} Here we used representation \eqref{00-RYang} for the $R$-matrix. The monodromy matrices $T_1(u)$ and $T_2(v)$ can be written as \be{A-TT} T_1(u)=\sum_{i,j=1}^N T_{ij}(u)E_1^{ij},\qquad T_2(v)=\sum_{k,l=1}^N T_{kl}(v)E_2^{kl}. \end{equation} Here we used superscripts to denote different standard basis matrices, since the subscripts are already occupied for the designation of auxiliary spaces. The matrix elements $T_{ij}$ (or $T_{kl}$) act in the Hilbert space $\mathcal{H}$ only, while the standard basis matrices act in the auxiliary spaces $\mathbb{C}^N$. Recall also that \be{A-R} P_{12}=\sum_{a,b=1}^N E_1^{ab}E_2^{ba}. \end{equation} Now we simply substitute \eqref{A-TT} and \eqref{A-R} into \eqref{A-RTT1}. Then we obtain in the l.h.s. \be{lhs-RTT} [T_{1}(u),T_{2}(v)]=\sum_{i,j,k,l=1}^N [T_{ij}(u),T_{kl}(v)]E_1^{ij}E_2^{kl}. \end{equation} In the r.h.s., we have \begin{multline}\label{rhs-RTT1} g(u,v)\bigl(T_{2}(v) T_{1}(u)P_{12}-P_{12}T_{1}(u)T_{2}(v)\bigr)\\ =g(u,v)\sum_{i,j,k,l,a,b=1}^N\Bigl(T_{kl}(v) T_{ij}(u)E_1^{ij}E_2^{kl}E_1^{ab}E_2^{ba}-T_{ij}(u)T_{kl}(v)E_1^{ab}E_2^{ba}E_1^{ij}E_2^{kl}\Bigr). 
\end{multline} Multiplying the standard basis matrices via $E^{\lambda\mu}_\ell E^{\rho\sigma}_\ell=\delta_{\mu\rho}E^{\lambda\sigma}_\ell$ (for $\ell=1,2$) we obtain \begin{multline}\label{rhs-RTT2} g(u,v)\bigl(T_{2}(v) T_{1}(u)P_{12}-P_{12}T_{1}(u)T_{2}(v)\bigr)\\ =g(u,v)\sum_{i,j,k,l,a,b=1}^N\Bigl(T_{kl}(v) T_{ij}(u)E_1^{ib}E_2^{ka}\delta_{ja}\delta_{lb}-T_{ij}(u)T_{kl}(v)E_1^{aj}E_2^{bl}\delta_{bi}\delta_{ka}\Bigr)\\ =g(u,v)\sum_{i,j,k,l=1}^N\Bigl(T_{kl}(v) T_{ij}(u)E_1^{il}E_2^{kj}-T_{ij}(u)T_{kl}(v)E_1^{kj}E_2^{il}\Bigr). \end{multline} Replacing the subscripts $l\leftrightarrow j$ in the first term and the subscripts $k\leftrightarrow i$ in the second term we arrive at \begin{multline}\label{rhs-RTT3} g(u,v)\bigl(T_{2}(v) T_{1}(u)P_{12}-P_{12}T_{1}(u)T_{2}(v)\bigr)\\ =g(u,v)\sum_{i,j,k,l=1}^N\Bigl(T_{kj}(v) T_{il}(u)-T_{kj}(u)T_{il}(v)\Bigr)E_1^{ij}E_2^{kl}. \end{multline} Comparing the coefficients of $E_1^{ij}E_2^{kl}$ in \eqref{lhs-RTT} and \eqref{rhs-RTT3} we immediately obtain \eqref{00-CRcomp}.\qed \medskip \textsl{Proof of proposition~\ref{00-Propauto}.} Consider commutation relations of the operators $\tilde T_{ij}$. We have \begin{multline}\label{00-CRtildeT} [\tilde T_{ij}(u),\tilde T_{kl}(v)]= [T_{\tilde j,\tilde i}(-u),T_{\tilde l,\tilde k}(-v)] =g(-u,-v)\bigl(T_{\tilde l,\tilde i}(-v)T_{\tilde j,\tilde k}(-u) -T_{\tilde l,\tilde i}(-u)T_{\tilde j,\tilde k}(-v)\bigr)\\[5pt] =g(v,u)\bigl(\tilde T_{il}(v)\tilde T_{kj}(u) -\tilde T_{il}(u)\tilde T_{kj}(v)\bigr) =g(u,v)\bigl(\tilde T_{il}(u)\tilde T_{kj}(v)-\tilde T_{il}(v)\tilde T_{kj}(u)\bigr). \end{multline} Thus, the matrix elements $\tilde T_{ij}(u)$ satisfy the same commutation relations as $T_{ij}(u)$.\qed \subsection{Coloring\label{SS-C}} In physical models, the vectors of the space $\mathcal{H}$ describe states with different types of particles (excitations). We now introduce a notion of coloring, in which particles of different types also appear. 
To distinguish them from physical particles, we will call them quasiparticles, and their different types are colors. The space $\mathcal{H}$ is generated by the states of the form \be{00-state} |\Psi\rangle = \prod_{p=1}^n T_{i_p,j_p}(u_p)|0 \rangle, \end{equation} where $i_p< j_p$ for $p=1,\dots,n$. This means that $T_{i_p,j_p}(u_p)$ are creation operators. We say that an operator $T_{ij}$ with $i<j$ creates quasiparticles with the colors $i,\dots, j-1$, one quasiparticle of each color. In particular, the operator $T_{i,i+1}$ creates one quasiparticle of the color $i$, the operator $T_{1N}$ creates $N-1$ quasiparticles of $N-1$ different colors. Thus, in $\mathfrak{gl}_N$ based models quasiparticles may have $N-1$ colors. Let $\{a_1,\dots,a_{N-1}\}$ be a set of non-negative integers. We say that a state has coloring $\{a_k\}\equiv\{a_1,\dots,a_{N-1}\}$, if it contains $a_k$ quasiparticles of the color $k$. In other words, we introduce a mapping $\mathop{\rm Col}(|\Psi\rangle)$ that maps $|\Psi\rangle$ to its coloring $\{a_k\}$: \be{00-colstate} \mathop{\rm Col}(|\Psi\rangle)=\{a_k\},\qquad\text{where}\qquad a_k=\sum_{p=1}^n \bigl(\theta(j_p-k)-\theta(i_p-k)\bigr). \end{equation} Here $\theta(k)$ is a step function of integer argument such that $\theta(k)=1$ for $k>0$ and $\theta(k)=0$ otherwise. Let us give several examples. Consider $\mathfrak{gl}_4$ based models. Then we deal with three colors. We have \be{00-Colex} \begin{aligned} &\mathop{\rm Col}(|0\rangle)=\{0,0,0\},\qquad\text{(by definition),}\\ &\mathop{\rm Col}(T_{23}(u)|0\rangle)=\{0,1,0\},\\ &\mathop{\rm Col}(T_{13}(u_1)T_{14}(u_2)|0\rangle)=\{2,2,1\},\\ &\mathop{\rm Col}(T_{12}(u_1)T_{23}(v_1)T_{12}(u_2)T_{24}(v_2)T_{14}(w)|0\rangle)=\{3,3,2\}. \end{aligned} \end{equation} Observe that the coloring does not depend on the arguments of the operators $T_{ij}(u)$ and on the order of these operators. 
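The coloring map \eqref{00-colstate} is purely combinatorial, so it is easy to implement and check; the following minimal sketch (the function names are ours) reproduces the $\mathfrak{gl}_4$ examples \eqref{00-Colex}:

```python
def theta(k):
    # step function of an integer argument: theta(k) = 1 for k > 0, else 0
    return 1 if k > 0 else 0

def col(ops, N):
    """Coloring {a_1,...,a_{N-1}} of the state prod_p T_{i_p,j_p}(u_p)|0>.

    `ops` is a list of index pairs (i_p, j_p); the arguments u_p and the
    order of the operators are irrelevant for the coloring.
    """
    return [sum(theta(j - k) - theta(i - k) for (i, j) in ops)
            for k in range(1, N)]

# the gl_4 examples:
assert col([], 4) == [0, 0, 0]                                   # vacuum
assert col([(2, 3)], 4) == [0, 1, 0]                             # T_23|0>
assert col([(1, 3), (1, 4)], 4) == [2, 2, 1]
assert col([(1, 2), (2, 3), (1, 2), (2, 4), (1, 4)], 4) == [3, 3, 2]
# an annihilation operator T_ij with i > j contributes negatively:
assert col([(4, 1)], 4) == [-1, -1, -1]
```

The same function also covers states containing neutral and annihilation operators, for which some entries $a_k$ may become negative.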
Assuming that the null-vector\footnote{Do not confuse the null-vector $0$ with the vacuum vector $|0\rangle$.} has arbitrary coloring we extend the mapping \eqref{00-colstate} to some linear combinations of the states \eqref{00-state} as well as to the states containing neutral operators (i.e. $T_{ii}(u)$) and annihilation operators (i.e. $T_{ij}(u)$ with $i>j$). Namely, if $\mathop{\rm Col}(|\Psi_1\rangle)=\mathop{\rm Col}(|\Psi_2\rangle)$, then linear combinations of these states have the same coloring: \be{00-collinc} \mathop{\rm Col}(\alpha|\Psi_1\rangle+\beta|\Psi_2\rangle)=\mathop{\rm Col}(|\Psi_1\rangle),\qquad\text{if}\qquad \mathop{\rm Col}(|\Psi_1\rangle)=\mathop{\rm Col}(|\Psi_2\rangle), \end{equation} where $\alpha$ and $\beta$ are complex numbers. Then the states with the same coloring generate a subspace $\mathcal{H}_{\{a_k\}}$ of the space $\mathcal{H}$. The latter then can be presented as a direct sum of the subspaces with fixed coloring: \be{dirsum} \mathcal{H}=\oplus \mathcal{H}_{\{a_k\}}. \end{equation} Let us consider the states of the form \eqref{00-state}, but now suppose that among $T_{i_p,j_p}(u_p)$ there can be neutral operators and annihilation operators. The coloring of these states is defined by the same formula \eqref{00-colstate}. Then the integers $a_k$ may take negative values. \begin{prop}\label{P-negcal} Let $\mathop{\rm Col}(|\Psi\rangle)=\{a_k\}$, where at least one $a_j<0$. Then $|\Psi\rangle=0$. \end{prop} \textsl{Proof.} Suppose that $|\Psi\rangle\ne0$. Then we can normal order all the operators $T_{ij}$, that is, we can move all neutral and annihilation operators to the extreme right position using commutation relations \eqref{00-CRcomp}. Observe that the coloring mapping is compatible with these commutation relations. Thus, at any step of the normal ordering we deal with the state of the initial coloring. After the normal ordering is completed, the state $|\Psi\rangle$ depends on creation operators only. 
Then due to \eqref{00-colstate} $a_k\ge 0$ for all $k=1,\dots,N-1$. We arrive at a contradiction; hence, $|\Psi\rangle=0$.\qed \medskip Proposition~\ref{P-negcal} allows us in some cases to quickly calculate the action of annihilation operators on states without using the commutation relations \eqref{00-CRcomp}. For example, we can immediately say that \be{00-ex1} T_{41}(z)T_{13}(u_1)T_{13}(u_2)T_{12}(v_1)T_{13}(u_3)T_{12}(v_2)|0\rangle=0, \end{equation} because \be{00-ex11} \mathop{\rm Col}\bigl(T_{41}(z)T_{13}(u_1)T_{13}(u_2)T_{12}(v_1)T_{13}(u_3)T_{12}(v_2)|0\rangle\bigr)=\{4,2,-1,\dots\}. \end{equation} The reader can check that the use of the commutation relations \eqref{00-CRcomp} gives the same result, however, it takes much more time and effort. Generically, if $j-1 <k <i$ and the annihilation operator $T_{ij}$ acts on a state in which there are no quasiparticles of the color $k$, then this action vanishes, as in \eqref{00-ex1}. \begin{prop}\label{00-no-color} Let a state $|\Psi\rangle$ contain no quasiparticles of the color $1$. Then \be{00-invcol1} T_{11}(z)|\Psi\rangle=\lambda_{1}(z)|\Psi\rangle. \end{equation} \end{prop} \textsl{Proof.} Obviously, it is enough to consider monomials \eqref{00-state} consisting of creation operators only. Otherwise, we can always get rid of the neutral and annihilation operators by normal ordering them. Since this monomial does not contain quasiparticles of the color $1$, we conclude that $i_p>1$ and $j_p>1$. Then, due to commutation relations \eqref{00-CRcomp} we have \be{00-actTp} T_{11}(z)T_{i_p,j_p}(u_p)=T_{i_p,j_p}(u_p)T_{11}(z)+ g(z,u_p)\bigl(T_{1,j_p}(z)T_{i_p,1}(u_p)-T_{1,j_p}(u_p)T_{i_p,1}(z)\bigr). \end{equation} There are two terms here. In the first term, the operators $T_{11}(z)$ and $T_{i_p,j_p}(u_p)$ are simply rearranged. In the second term, we have the annihilation operator $T_{i_p,1}$ on the right. The action of the latter on a state without quasiparticles of the first color gives zero.
Thus, the operator $T_{11}(z)$ goes through all the creation operators $T_{i_p,j_p}(u_p)$ to the extreme right position, where it acts on the vacuum vector and gives $\lambda_{1}(z)$. \qed \medskip Applying the automorphism \eqref{00-varphi} to \eqref{00-invcol1} we find that in the $\mathcal{R}_N$ algebra \be{00-invcolN} T_{NN}(z)|\Psi\rangle=\lambda_{N}(z)|\Psi\rangle, \end{equation} provided the state $|\Psi\rangle$ does not contain quasiparticles of the last color $N-1$. This property can also be checked via a direct calculation similar to \eqref{00-actTp}. However, an analogous property is not valid for the quasiparticles of the intermediate colors $2,\dots,N-2$. For example, if $|\Psi\rangle=T_{12}(u_1)T_{34}(u_2)|0\rangle$, then this state does not contain quasiparticles of the color $2$. At the same time, \begin{multline}\label{00-actT2} T_{22}(z)|\Psi\rangle=T_{22}(z)\;T_{12}(u_1)T_{34}(u_2)|0\rangle=T_{12}(u_1)T_{22}(z)T_{34}(u_2)|0\rangle\\ +g(z,u_1)\bigl(T_{12}(u_1)T_{22}(z)-T_{12}(z)T_{22}(u_1)\bigr)T_{34}(u_2)|0\rangle\\ =\lambda_{2}(z)\bigl(1+g(z,u_1)\bigr)|\Psi\rangle+g(u_1,z)\lambda_2(u_1)T_{12}(z)T_{34}(u_2)|0\rangle. \end{multline} In conclusion, we note that the coloring mapping can also be introduced for models with the $q$-deformed $R$-matrix \eqref{00-RUqglN}. \section{Bethe vectors\label{S-BV}} We have already introduced in section~\ref{SS-RABA} the notions of a Bethe vector and an on-shell Bethe vector in $\mathfrak{gl}_2$ based models. Recall that on-shell Bethe vectors are eigenvectors of the transfer matrix. Only these vectors are needed to solve the spectral problem. However, in computing the correlation functions, we also have to deal with off-shell Bethe vectors. Indeed, a typical problem arising in calculating correlation functions is to compute a matrix element of an operator $\hat{\mathcal{O}}$ of the following form: \be{typform} \mathcal{O}_{\Psi'\Psi}=\langle\Psi'|\hat{\mathcal{O}}|\Psi\rangle.
\end{equation} Here $|\Psi\rangle$ is an on-shell Bethe vector, and $\langle\Psi'|$ is an on-shell Bethe vector in the dual space (dual on-shell Bethe vector). If the operator $\hat{\mathcal{O}}$ does not commute with the transfer matrix, then $\hat{\mathcal{O}}|\Psi\rangle=|\Phi\rangle$, where $|\Phi\rangle$ is no longer an on-shell Bethe vector. In many cases, it is possible to express this vector as a linear combination of off-shell Bethe vectors. In particular, if an explicit solution of the quantum inverse problem is known for the model under consideration, then we can express local operators in terms of the elements of the monodromy matrix \cite{KitMT99,GohK00,MaiT00}. Let us give an example. The solution of the quantum inverse problem in the models with the monodromy matrix \eqref{00-MonmXXX} has the form \be{SolISP} E^{ij}_k=\left(\prod_{\ell=1}^{k-1}\mathop{\rm tr}\nolimits T(\xi_\ell)\right)T_{ji}(\xi_k)\left(\prod_{\ell=1}^{k}\mathop{\rm tr}\nolimits T(\xi_\ell)\right)^{-1}. \end{equation} Here $E^{ij}_k$ is the standard basis matrix acting in the local space $V_k$. Then the matrix element \eqref{typform} of this local operator reduces to \be{typformEij} \langle\Psi'|E^{ij}_k|\Psi\rangle= \frac{\prod_{\ell=1}^{k-1}\Lambda'(\xi_\ell)}{\prod_{\ell=1}^{k}\Lambda(\xi_\ell)}\langle\Psi'|T_{ji}(\xi_k)|\Psi\rangle. \end{equation} Here $\Lambda'(\xi_\ell)$ and $\Lambda(\xi_\ell)$ are, respectively, the transfer matrix eigenvalues on the on-shell vectors $\langle\Psi'|$ and $|\Psi\rangle$. Thus, we have to calculate the action of the matrix element $T_{ji}(\xi_k)$ on the vector $|\Psi\rangle$ and then calculate the resulting scalar product. In $\mathfrak{gl}_2$ based models, such an action obviously gives a linear combination of off-shell Bethe vectors.
For instance, if \be{psi-BBB} |\Psi\rangle=\prod_{\ell=1}^nB(u_\ell)|0\rangle, \end{equation} then \begin{equation}\label{actgl2} T_{11}(\xi_k)|\Psi\rangle= a(\xi_k)\prod_{i=1}^nf(u_i,\xi_k)|\Psi\rangle +\sum_{j=1}^n a(u_j)g(\xi_k,u_j)\left(\prod_{\substack{i=1\\ i\ne j}}^nf(u_i,u_j)\right)|\Phi_j\rangle, \end{equation} where $f(u,v)=1+g(u,v)$, and \be{phi-bbb} |\Phi_j\rangle=B(\xi_k)\prod_{\substack{\ell=1\\ \ell\ne j}}^nB(u_\ell)|0\rangle, \end{equation} (see \cite{FadST79,BogIK93L,FadLH96,Sla18}). The set of variables $u_1,\dots,u_n$ satisfies the Bethe equations, because the original vector $|\Psi\rangle$ is on-shell. However, the new sets $\{\xi_k,u_1,\dots,u_n\}\setminus u_j$, $j=1,\dots,n$, are no longer solutions to these equations\footnote{Recall that the inhomogeneities $\xi_k$ are arbitrary complex numbers.}. Therefore, we have a linear combination of off-shell Bethe vectors $|\Phi_j\rangle$ in \eqref{actgl2}. Similarly, in models with higher rank symmetries, the actions of the monodromy matrix elements on Bethe vectors generate linear combinations of off-shell vectors \cite{BelPRS13a,HutLPRS17, HutLPRS20a}. As a result, matrix elements of local operators are reduced to scalar products of Bethe vectors, in which one of the vectors is on-shell, while the other one, generally speaking, is off-shell. Compact determinant formulas for such scalar products are known in the case of the $\mathcal{R}_2$ algebra and its $q$-deformation \cite{Sla89,KitMT99,Sla18}. Partially similar results were recently obtained in the case of the $\mathcal{R}_3$ algebra \cite{PozOK12,BelPRS12b,PakRS14bb,PakRS15bb,Sla15sp}. These determinant representations allow us to study correlation functions analytically and numerically \cite{KitMT00,KitMST02,GohKS04,KitMST05,KitKMST12,CauM05,CauHM05,CauCS07}. Thus, despite the fact that the off-shell Bethe vectors themselves have no physical meaning, they play a very important role in calculating the correlation functions.
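The action of $T_{11}$ on a product of $B$-operators can be tested numerically on a small example. Below we take the $L=2$ inhomogeneous $XXX$ chain, $T(u)=R_{02}(u,\xi_2)R_{01}(u,\xi_1)$, for which $a(u)=f(u,\xi_1)f(u,\xi_2)$ and $d(u)=1$; since the derivation uses only the commutation relations \eqref{00-CRcomp}, the argument of $T_{11}$ may be a generic point $z$. The constant $c$, the chain length, and all numerical values are illustrative assumptions:

```python
import numpy as np

c = 0.7                                    # illustrative constant
g = lambda u, v: c / (u - v)
f = lambda u, v: 1 + g(u, v)

I2, I4 = np.eye(2), np.eye(4)
# permutation matrix on C^2 x C^2: P|i,j> = |j,i>
P = np.array([[1, 0, 0, 0],
              [0, 0, 1, 0],
              [0, 1, 0, 0],
              [0, 0, 0, 1]], dtype=float)
R = lambda u, v: I4 + g(u, v) * P          # gl_2-invariant R-matrix

xi1, xi2 = 0.5, -0.4                       # inhomogeneities (arbitrary)
S12 = np.kron(I2, P)                       # swap of the two quantum spaces

def T(u):
    """Monodromy matrix T(u) = R_{02}(u, xi2) R_{01}(u, xi1) of the L = 2 chain."""
    R01 = np.kron(R(u, xi1), I2)           # spaces ordered: 0 (auxiliary), 1, 2
    R02 = S12 @ np.kron(R(u, xi2), I2) @ S12
    return (R02 @ R01).reshape(2, 4, 2, 4)  # T(u)[a, :, b, :] = T_{a+1, b+1}(u)

A = lambda u: T(u)[0, :, 0, :]             # T_11(u) as operator on the quantum space
B = lambda u: T(u)[0, :, 1, :]             # T_12(u)
a = lambda u: f(u, xi1) * f(u, xi2)        # vacuum eigenvalue of T_11

vac = np.array([1.0, 0.0, 0.0, 0.0])       # |0>

# action of T_11(z) on B(u1) B(u2)|0>: wanted term plus two unwanted terms
z, u1, u2 = 0.31, -1.2, 2.3
lhs = A(z) @ B(u1) @ B(u2) @ vac
rhs = (a(z) * f(u1, z) * f(u2, z) * B(u1) @ B(u2) @ vac
       + a(u1) * g(z, u1) * f(u2, u1) * B(z) @ B(u2) @ vac
       + a(u2) * g(z, u2) * f(u1, u2) * B(z) @ B(u1) @ vac)
assert np.allclose(lhs, rhs)
```

The unwanted terms carry the coefficients $a(u_j)g(z,u_j)\prod_{i\ne j}f(u_i,u_j)$, in agreement with the exchange relation $A(z)B(u)=f(u,z)B(u)A(z)+g(z,u)B(z)A(u)$ that follows from \eqref{00-CRcomp}.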
It is for this reason that we pay so much attention to these vectors below. \subsection{Bethe vectors in \texorpdfstring{$\mathfrak{gl}_2$}{} based models\label{SS-BVgl2}} In the $\mathfrak{gl}_2$ based models the on-shell Bethe vectors have the form \eqref{00-Eig-stateB}, provided the parameters $\bar u=\{u_1,\dots,u_n\}$ satisfy a system of Bethe equations \cite{FadST79,FadT79,BogIK93L,FadLH96,Sla07,Sla18} \be{BE-or} \frac{a(u_j)}{d(u_j)}=\prod_{\substack{k=1\\k\ne j}}^n\frac{f(u_j,u_k)}{f(u_k,u_j)}, \qquad j=1,\dots,n, \end{equation} and we recall that \be{01-f} f(u,v)=1+g(u,v)=\frac{u-v+c}{u-v}. \end{equation} Complex variables $\bar u=\{u_1,\dots,u_n\}$ are called {\it Bethe parameters}. Generic Bethe vectors (off-shell Bethe vectors) also have the form \eqref{00-Eig-stateB}, however, the Bethe parameters are arbitrary complex numbers. Equation \eqref{00-Eig-stateB} can be taken as the definition of off-shell Bethe vectors. However, this is not the only possible definition. The point is that the main property of the off-shell Bethe vectors is that they become on-shell, if the system of Bethe equations is fulfilled. Then the original definition can be modified in various ways. Namely, we can add to the vector $B(u_1)\dots B(u_n)|0\rangle$ any other vector that vanishes on the system of Bethe equations. For example, let \be{00-Eig-stateBmod} |\Psi\rangle=B(u_1)\dots B(u_n)|0\rangle+\left(\prod_{j=1}^na(u_j)-\prod_{j=1}^nd(u_j)\right)|\Phi\rangle, \end{equation} where $|\Phi\rangle$ is an arbitrary vector. It is easy to see that if the set $\bar u$ enjoys the Bethe equations \eqref{BE-or}, then \be{BE-cons} \prod_{j=1}^na(u_j)=\prod_{j=1}^nd(u_j). \end{equation} Hence, the coefficient of the vector $|\Phi\rangle$ vanishes, and the vector \eqref{00-Eig-stateBmod} becomes on-shell. 
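To see \eqref{BE-cons}, it suffices to multiply the Bethe equations \eqref{BE-or} over all $j=1,\dots,n$:
\[
\prod_{j=1}^n\frac{a(u_j)}{d(u_j)}=\prod_{j=1}^n\prod_{\substack{k=1\\k\ne j}}^n\frac{f(u_j,u_k)}{f(u_k,u_j)}=1,
\]
since every factor $f(u_j,u_k)$ with $j\ne k$ occurs in the double product once in the numerator and once in the denominator.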
Thus, the combination \eqref{00-Eig-stateBmod} can also be called the Bethe vector, because it turns into the on-shell Bethe vector as soon as the Bethe parameters satisfy Bethe equations. It is clear that we can invent plenty of combinations of this type. Therefore, strictly speaking, the definition of the off-shell Bethe vector is ambiguous. Among possible definitions, equation \eqref{00-Eig-stateB} looks the simplest. However, there exist other presentations for the Bethe vectors, which also have a rather simple form. For instance, let \be{new-T} \tilde T(u)=KT(u)K^{-1}=\begin{pmatrix} \tilde A(u) &\tilde B(u)\\ \tilde C(u)&\tilde D(u) \end{pmatrix}, \end{equation} where $K$ is a $2\times 2$ $c$-number invertible matrix such that $K_{11}\ne 0$. It is clear that the new operator $\tilde B(u)$ is a linear combination of the original $A$, $B$, $C$, and $D$. Nevertheless, a state \be{00-tBmod} |\widetilde{\Psi}\rangle=\tilde B(u_1)\dots \tilde B(u_n)|0\rangle \end{equation} is the on-shell Bethe vector provided the system \eqref{BE-or} is fulfilled \cite{GroLMS17,BelS18}. We suggest that the reader check this statement. Anyway, presentation \eqref{00-tBmod} looks as simple as the original formula \eqref{00-Eig-stateB}. Thus, we have great freedom in the definition of the Bethe vectors. Nevertheless, following the tradition we define them by equation \eqref{00-Eig-stateB}, in particular, for reasons of simplicity. \subsection{Bethe vectors in \texorpdfstring{$\mathfrak{gl}_3$}{} based models\label{SS-BVgl3}} The problem of Bethe vectors in the $\mathfrak{gl}_3$ (and higher rank) based models is much more sophisticated than in the case considered above. The ambiguity of their definition still exists, however, now the form of the on-shell Bethe vectors is much more complex than \eqref{00-Eig-stateB}. Therefore, we cannot even appeal to simplicity when choosing an appropriate definition.
The matter is that there is only one creation operator in the $\mathfrak{gl}_2$ case ($T_{12}$), while there are three creation operators in the $\mathfrak{gl}_3$ case ($T_{12}$, $T_{13}$, $T_{23}$). Consider a simple example that will allow us to feel the difference between the Bethe vectors in the $\mathfrak{gl}_2$ and $\mathfrak{gl}_3$ based models. For this we will try to construct simple on-shell Bethe vectors in these two cases. Consider the commutation relations \eqref{00-CRcomp}. We see that generically we have different operators in the l.h.s. ($T_{ij}$ and $T_{kl}$) and in the r.h.s. ($T_{il}$ and $T_{kj}$). However, if the operators in the l.h.s. belong to the same row (column), then we obtain the same operators in the r.h.s. In particular, we have \be{01-ABDB} \begin{aligned} T_{11}(u)T_{12}(v)&=f(v,u)T_{12}(v)T_{11}(u)+g(u,v)T_{12}(u)T_{11}(v),\\ T_{22}(u)T_{12}(v)&=f(u,v)T_{12}(v)T_{22}(u)+g(v,u)T_{12}(u)T_{22}(v). \end{aligned} \end{equation} Let us try to find an on-shell Bethe vector in a model described by the $\mathcal{R}_2$ algebra. This means that we are looking for the eigenvectors of the transfer matrix $T_{11}(z)+T_{22}(z)$. Let us test a vector $T_{12}(u)|0\rangle$. Then due to \eqref{01-ABDB} we obtain \begin{multline}\label{01-actGL2} \Bigl(T_{11}(z)+T_{22}(z)\Bigr)T_{12}(u)|0\rangle= T_{12}(u)\Bigl(f(u,z)T_{11}(z)+f(z,u)T_{22}(z)\Bigr)|0\rangle\\ +g(z,u)T_{12}(z)\Bigl(T_{11}(u)-T_{22}(u)\Bigr)|0\rangle. \end{multline} We see that we still deal with the vectors of the type $T_{12}(\cdot)|0\rangle$. Indeed, using $T_{ii}(u)|0\rangle=\lambda_i(u)|0\rangle$ (where $\lambda_1(u)=a(u)$ and $\lambda_2(u)=d(u)$ \eqref{00-vac22}) we obtain \begin{multline}\label{01-actGL2-1} \Bigl(T_{11}(z)+T_{22}(z)\Bigr)T_{12}(u)|0\rangle= \Bigl(f(u,z)a(z)+f(z,u)d(z)\Bigr)T_{12}(u)|0\rangle\\ +g(z,u)\Bigl(a(u)-d(u)\Bigr)T_{12}(z)|0\rangle. 
\end{multline} Thus, the result of the action of the transfer matrix $T_{11}(z)+T_{22}(z)$ gives two vectors: $T_{12}(u)|0\rangle$ and $T_{12}(z)|0\rangle$. The first one is the same as in the l.h.s., while the second is different. Traditionally this second term is called the {\it unwanted term}. We will call it an {\it unwanted term of the first type}. It is still given as the action of $T_{12}$ on the vacuum (as in the l.h.s.), but the operator $T_{12}$ has a new argument. This unwanted term can be killed if we choose an appropriate $u=u_0$, namely, such that $a(u_0)=d(u_0)$. Observe that this condition coincides with the Bethe equations \eqref{BE-or} at $n=1$. Then the vector $T_{12}(u_0)|0\rangle$ becomes an eigenvector of the transfer matrix. It is easy to see that generically, if we test a vector of the form $T_{12}(u_1)\dots T_{12}(u_n)|0\rangle$, then the action of the transfer matrix produces unwanted terms of the first type only: we still obtain the products of the operators $T_{12}$ applied to $|0\rangle$, but some of these operators may have new arguments. Now let us consider the $\mathcal{R}_3$ algebra. Let us test the vector $T_{13}(u)|0\rangle$. We should act with the transfer matrix on this vector \be{01-actGL3} \Bigl(T_{11}(z)+T_{22}(z)+T_{33}(z)\Bigr)T_{13}(u)|0\rangle. \end{equation} We immediately see a principal difference from the case considered above. Namely, the operators $T_{22}$ and $T_{13}$ do not belong to the same row or column.
Due to the commutation relations we have \begin{multline}\label{01-actGL3-1} T_{22}(z)T_{13}(u)|0\rangle=\Bigl(T_{13}(u)T_{22}(z)+g(z,u)\bigl(T_{12}(u)T_{23}(z)-T_{12}(z)T_{23}(u)\bigr)\Bigr)|0\rangle\\ =\lambda_{2}(z)T_{13}(u)|0\rangle+g(z,u)\bigl(T_{12}(u)T_{23}(z)-T_{12}(z)T_{23}(u)\bigr)|0\rangle, \end{multline} and the action of the transfer matrix is \begin{multline}\label{01-actTMGL3} \mathop{\rm tr}\nolimits T(z)T_{13}(u)|0\rangle= \Bigl(f(u,z)\lambda_{1}(z)+\lambda_{2}(z)+f(z,u)\lambda_{3}(z)\Bigr)T_{13}(u)|0\rangle\\ +g(z,u)\Bigl(\lambda_{1}(u)-\lambda_{3}(u)\Bigr)T_{13}(z)|0\rangle\\ +g(z,u)\bigl(T_{12}(u)T_{23}(z)-T_{12}(z)T_{23}(u)\bigr)|0\rangle. \end{multline} We obtain vectors of a new type, $T_{12}(u)T_{23}(z)|0\rangle$ and $T_{12}(z)T_{23}(u)|0\rangle$, which we call {\it unwanted terms of the second type}. These unwanted terms generically cannot be killed by an appropriate choice of the original argument $u$. \textsl{Remark 1.} Strictly speaking, the term in the third line of \eqref{01-actTMGL3} vanishes at $u\to\infty$. In some models (for example, the $SU(3)$-invariant $XXX$ chain) the Bethe equations may have roots at infinity, and then the corresponding contribution in \eqref{01-actTMGL3} vanishes. \textsl{Remark 2.} In some models the operator $T_{23}(u)$ actually plays the role of an annihilation operator: $T_{23}(u)|0\rangle=0$ for all $u$. A typical example of such a monodromy matrix is the matrix \eqref{00-MonmXXX}. We will consider this example in more detail in section~\ref{SS-PCM}. Then, for models of this type, the unwanted terms of the second type in \eqref{01-actTMGL3} automatically vanish. The vector $T_{13}(u)|0\rangle$ thus becomes an on-shell Bethe vector for $u=u_0$ such that $\lambda_{1}(u_0)=\lambda_{3}(u_0)$. We emphasize, however, that this is true only for the special case of models with the $\mathcal{R}_3$ algebra.
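Remark 2 can be illustrated numerically. For the $L=2$ inhomogeneous chain \eqref{00-MonmXXX} one has $\lambda_1(u)=f(u,\xi_1)f(u,\xi_2)$, $\lambda_2=\lambda_3=1$, and the condition $\lambda_1(u_0)=\lambda_3(u_0)$ has the explicit root $u_0=(\xi_1+\xi_2-c)/2$. The following sketch (all numerical values are illustrative) checks that $T_{13}(u_0)|0\rangle$ is then an eigenvector of the transfer matrix, with the eigenvalue read off from the first line of \eqref{01-actTMGL3}:

```python
import numpy as np

c, xi1, xi2 = 0.7, 0.5, -0.4               # illustrative values
g = lambda u, v: c / (u - v)
f = lambda u, v: 1 + g(u, v)

I3, I9 = np.eye(3), np.eye(9)
P = np.zeros((9, 9))                       # permutation on C^3 x C^3
for i in range(3):
    for j in range(3):
        P[3 * i + j, 3 * j + i] = 1
R = lambda u, v: I9 + g(u, v) * P          # gl_3-invariant R-matrix

S12 = np.kron(I3, P)                       # swap of the two quantum spaces

def T(u):
    """T(u) = R_{02}(u, xi2) R_{01}(u, xi1): the L = 2 inhomogeneous chain."""
    R01 = np.kron(R(u, xi1), I3)
    R02 = S12 @ np.kron(R(u, xi2), I3) @ S12
    return (R02 @ R01).reshape(3, 9, 3, 9)  # T(u)[i, :, j, :] = T_{i+1, j+1}(u)

vac = np.zeros(9); vac[0] = 1.0            # |0>
lam1 = lambda u: f(u, xi1) * f(u, xi2)     # lambda_1(u); lambda_2 = lambda_3 = 1

u0 = (xi1 + xi2 - c) / 2                   # root of lambda_1(u0) = lambda_3(u0) = 1
assert np.isclose(lam1(u0), 1.0)

z = 0.8
Tz, Tu = T(z), T(u0)
trT = Tz[0, :, 0, :] + Tz[1, :, 1, :] + Tz[2, :, 2, :]
psi = Tu[0, :, 2, :] @ vac                 # T_13(u0)|0>
Lam = f(u0, z) * lam1(z) + 1 + f(z, u0)    # expected transfer matrix eigenvalue
assert np.allclose(trT @ psi, Lam * psi)
```

Here both unwanted terms drop out: the first-type term because $\lambda_1(u_0)=\lambda_3(u_0)$, and the second-type terms because $T_{23}(u)|0\rangle=0$ for this monodromy matrix.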
Generically $T_{23}(u)|0\rangle\ne0$, therefore, we obtain the unwanted terms of the second type in \eqref{01-actTMGL3}. Summarizing the above considerations we conclude that we deal with unwanted terms of two types: \begin{itemize} \item first type: the operators are the same as in the l.h.s., but some of them acquire new arguments; \item second type: the operators in the r.h.s. are different from the ones in the l.h.s. In this case, they can either keep their original arguments or acquire new arguments. \end{itemize} In the case of the $\mathcal{R}_2$ algebra we obtain the first type of unwanted terms only. For the $\mathcal{R}_3$ algebra (and higher) we necessarily obtain both types of unwanted terms. Therefore the structure of the Bethe vectors in the $\mathfrak{gl}_3$ case generically cannot be as simple as in the $\mathfrak{gl}_2$ case. The above example shows that not every combination of the creation operators applied to the vacuum has a chance to become an eigenvector of the transfer matrix even for some specific values of the Bethe parameters. In particular, in the general case, the vector $T_{13}(u)|0\rangle$ cannot be an on-shell Bethe vector for any values of $u$. We will see below that, in order to obtain a Bethe vector, one should take the following combination of the terms $T_{13}(u)|0\rangle$ and $T_{12}(u)T_{23}(v)|0\rangle$: \be{01-comb} g(v,u)\lambda_2(v)T_{13}(u)|0\rangle+T_{12}(u)T_{23}(v)|0\rangle. \end{equation} If the parameters $u$ and $v$ enjoy the system of equations \be{sysBE1} \frac{\lambda_1(u)}{\lambda_2(u)}=\frac{\lambda_3(v)}{\lambda_2(v)}=f(v,u), \end{equation} then the state \eqref{01-comb} is an on-shell Bethe vector. Thus, the state \eqref{01-comb} can be called an off-shell Bethe vector if $u$ and $v$ are generic complex numbers. We see, however, that even in this simple example, the form of the off-shell Bethe vector is highly non-trivial.
This is a very special polynomial in the creation operators acting on the vacuum vector. Of course, the modification of the vector \eqref{01-comb} in the same spirit as in \eqref{00-Eig-stateBmod} remains possible, and hence, representation \eqref{01-comb} for this off-shell Bethe vector is not unique. It is not clear, however, which among these representations is the simplest. There are several ways to construct on-shell Bethe vectors in the models with the $\mathfrak{gl}_N$-invariant $R$-matrix. In addition to the NABA, it is also worth mentioning the approach associated with the so-called {\it trace formula} \cite{TarV94,TarV13,BelRag08}, as well as the method based on the use of a special current algebra to describe the $RTT$-relation \cite{KhoP,KhoPT,EnrKP,OPS}. It is remarkable that all three methods listed above eventually give the same expression, not only for on-shell, but also for off-shell Bethe vectors. And this is despite the initial ambiguity in defining the off-shell Bethe vectors. Therefore, we will adopt this formula as the definition of the Bethe vector. Details will be described later. \textsl{Remark.} Recently a new presentation for on-shell Bethe vectors in terms of Sklyanin’s $B$-operators \cite{Skl93} was conjectured in \cite{GroLMS17} for the models with $\mathfrak{gl}_N$-invariant $R$-matrix. The proof of this presentation was given in \cite{RyaV19} in the framework of the quantum separation of variables developed in \cite{MaiN18}. Within this approach, the on-shell Bethe vectors have the form \eqref{00-Eig-stateB} provided the Bethe parameters satisfy the Bethe equations. However, it was shown in \cite{LiaS18} that if the Bethe parameters remain arbitrary complex numbers, then at least in the case of the $\mathcal{R}_3$ algebra, the corresponding vectors coincide with a special case of off-shell Bethe vectors constructed within the NABA framework.
\subsection{Notation\label{S-N}} We now fix some notation and conventions that will be used throughout these notes. \begin{itemize} \item {\sl Rational functions.} We have already introduced two rational functions $g(x,y)$ and $f(x,y)$. Recall that \be{01-univ-not} g(x,y)=\frac{c}{x-y},\qquad f(x,y)=1+g(x,y)=\frac{x-y+c}{x-y}. \end{equation} Observe that $g(x,y)=-g(y,x)$. We will use these functions constantly below. \item {\sl Sets of variables.} We denote sets of variables by a bar: $\bar x$, $\bar u$, $\bar v$ etc. Individual elements of the sets are denoted by subscripts: $v_j$, $u_k$ etc. The notation $\bar u_i$ means $\bar u\setminus u_i$ etc. Instead of the standard notation $\bar u\cup\bar v$ we use braces $\{\bar u,\bar v\}$ for the union of sets. \item {\sl Shorthand notation for products.} In order to make formulas more compact we use a shorthand notation for the products of commuting operators or functions depending on one or two variables. Namely, if the functions $\lambda_i$, $g$, $f$, as well as the operators $T_{ij}$ depend on sets of variables, this means that one should take the product over the corresponding set. For example, \be{01-SH-prod} T_{ij}(\bar u)=\prod_{u_k\in\bar u} T_{ij}(u_k);\quad g(z, \bar w_i)= \prod_{\substack{w_j\in\bar w\\w_j\ne w_i}} g(z, w_j);\quad f(\bar u,\bar v)=\prod_{u_j\in\bar u}\prod_{v_k\in\bar v} f(u_j,v_k). \end{equation} Observe that $[T_{ij}(u),T_{ij}(v)]=0$ due to the commutation relations \eqref{00-CRcomp}. Therefore, the product $T_{ij}(\bar u)$ is well defined. By definition, any product over the empty set is equal to $1$. A double product is equal to $1$ if at least one of the sets is empty. The use of this shorthand notation is not a whim, but a necessity. Even within the framework of the usual ABA, we often have to deal with rather cumbersome formulas. In the NABA, this bulkiness drastically increases. The shorthand notation for products allows us to reduce the size of formulas to some extent.
Therefore, we will constantly use it despite some disadvantages (for example, the lack of information about the cardinalities of the sets in which the product is taken). We will extend this convention to new functions that will appear later on. For the moment, let us show how this convention works in particular examples. Equation \eqref{00-Eig-stateB} takes the following form \be{00-Eig-stateBsh} |\Psi\rangle=B(\bar u)|0\rangle. \end{equation} If necessary, we should add a special comment on the cardinality of the set $\bar u$. The system of Bethe equations \eqref{BE-or} reads \be{BE-orsh} \frac{a(u_j)}{d(u_j)}=\frac{f(u_j,\bar u_j)}{f(\bar u_j,u_j)}, \qquad j=1,\dots,n. \end{equation} \end{itemize} \section{Nested algebraic Bethe ansatz \label{S-NABA}} In this section, we directly proceed to the description of the NABA method. This method allows us to obtain a representation for on-shell and off-shell Bethe vectors in the models with the $\mathfrak{gl}_3$-invariant $R$-matrix. In addition, we obtain a system of Bethe equations, whose solutions determine the spectrum of the transfer matrix. The presentation follows the works \cite{Kul81,KulRes82,KulRes83,Res86}. \subsection{Basic notions \label{SS-BN}} Consider a model with the $\mathfrak{gl}_3$-invariant $R$-matrix \eqref{00-RYang} \be{01-R9} R(u,v)=\mathbb{I}+g(u,v)P, \end{equation} where the identity matrix $\mathbb{I}$ and the permutation matrix $P$ are $9\times 9$ matrices: \be{01-IP9} \mathbb{I}_{ij}^{kl}=\delta_{ij}\delta_{kl},\qquad P_{ij}^{kl}=\delta_{il}\delta_{jk},\qquad i,j,k,l=1,2,3. \end{equation} Recall that the $R$-matrix acts in the tensor product $\mathbb{C}^3\otimes\mathbb{C}^3$. The lower indices $i$ and $j$ in \eqref{01-IP9} refer to the first space, the upper indices $k$ and $l$ refer to the second space. We will also need the $\mathfrak{gl}_2$-invariant $R$-matrix, which we denote by $r(u,v)$: \be{01-R4} r(u,v)=\bs{1}+g(u,v)\bs{p}.
\end{equation} Here the identity matrix $\bs{1}$ and the permutation matrix $\bs{p}$ are $4\times 4$ matrices acting in $\mathbb{C}^2\otimes\mathbb{C}^2$: \be{01-IP4} \bs{1}_{\alpha\beta}^{\rho\mu}=\delta_{\alpha\beta}\delta_{\rho\mu},\qquad \bs{p}_{\alpha\beta}^{\rho\mu}=\delta_{\alpha\mu}\delta_{\beta\rho}, \qquad \alpha,\beta,\rho,\mu=1,2. \end{equation} The indices of the matrices are arranged similarly to equation \eqref{01-IP9}. A monodromy matrix is \be{T-matrix1} T(u)=\begin{pmatrix} T_{11}(u)& T_{12}(u)& T_{13}(u)\\ T_{21}(u)& T_{22}(u)& T_{23}(u)\\ T_{31}(u)& T_{32}(u)& T_{33}(u)\end{pmatrix}. \end{equation} It satisfies the $RTT$-relation \eqref{00-RTT}. This relation implies the set of commutation relations \eqref{00-CRcomp} for the operators $T_{ij}$. We will also use one more parametrization of the monodromy matrix \be{01-T-matrix2} T(u)=\begin{pmatrix} A(u)& \mathbb{B}(u)\\ \mathbb{C}(u)& \mathbb{D}(u) \end{pmatrix}= \begin{pmatrix} A(u)& B_{1}(u)& B_{2}(u)\\ C_{1}(u)& D_{11}(u)& D_{12}(u)\\ C_{2}(u)& D_{21}(u)& D_{22}(u)\end{pmatrix}. \end{equation} Here, in the intermediate formula, the $T$-matrix is presented as a $2\times2$ block-matrix. The block $A$ has the size $1\times 1$, the block $\mathbb{B}$ has the size $1\times 2$, the block $\mathbb{C}$ has the size $2\times 1$, and the block $\mathbb{D}$ has the size $2\times 2$. {\sl Remark.} One can also consider another embedding \be{01-T-matrix3} T(u)=\begin{pmatrix} \mathbb{A}'(u)& \mathbb{B}'(u)\\ \mathbb{C}'(u)& D'(u) \end{pmatrix}= \begin{pmatrix} A'_{11}(u)& A'_{12}(u)& B'_{1}(u)\\ A'_{21}(u)& A'_{22}(u)& B'_{2}(u)\\ C'_{1}(u)& C'_{2}(u)& D'(u)\end{pmatrix}, \end{equation} where now the block $\mathbb{A}'$ is a $2\times 2$ matrix, while the block $D'$ has the size $1\times 1$. These two embeddings are equivalent due to the automorphism \eqref{00-varphi}. For definiteness, we consider in detail parametrization \eqref{01-T-matrix2} and give short comments about parametrization \eqref{01-T-matrix3}.
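As a quick sanity check, the commutation relations \eqref{00-CRcomp} can be verified numerically for a small concrete solution of the $RTT$-relation. Below we take the one-site monodromy matrix $T(u)=KR(u,\xi)$ with a generic $c$-number twist $K$; the matrix $K$ and all numerical values are illustrative choices, not prescribed by the text:

```python
import numpy as np

c, xi = 0.7, 0.5                            # illustrative values
g = lambda u, v: c / (u - v)

I9 = np.eye(9)
P = np.zeros((9, 9))                        # permutation on C^3 x C^3
for i in range(3):
    for j in range(3):
        P[3 * i + j, 3 * j + i] = 1
R = lambda u, v: I9 + g(u, v) * P           # gl_3-invariant R-matrix

K = np.array([[1.0, 0.3, -0.2],             # generic c-number twist (illustrative)
              [0.5, 1.1, 0.4],
              [-0.3, 0.2, 0.9]])

def Tblocks(u):
    """3 x 3 array of 3 x 3 operator blocks T_{ij}(u) of T(u) = K R(u, xi)."""
    M = (np.kron(K, np.eye(3)) @ R(u, xi)).reshape(3, 3, 3, 3)
    return M.transpose(0, 2, 1, 3)          # [i, j] -> operator T_{i+1, j+1}(u)

u, v = 1.3, -0.8
Tu, Tv = Tblocks(u), Tblocks(v)
# [T_ij(u), T_kl(v)] = g(u,v) (T_kj(v) T_il(u) - T_kj(u) T_il(v)) for all indices
for i in range(3):
    for j in range(3):
        for k in range(3):
            for l in range(3):
                lhs = Tu[i, j] @ Tv[k, l] - Tv[k, l] @ Tu[i, j]
                rhs = g(u, v) * (Tv[k, j] @ Tu[i, l] - Tu[k, j] @ Tv[i, l])
                assert np.allclose(lhs, rhs)
```

Since $[R(u,v),K\otimes K]=0$ for the $\mathfrak{gl}_3$-invariant $R$-matrix, the twisted matrix $KR(u,\xi)$ still satisfies the $RTT$-relation, so all $81$ relations hold.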
\subsection{Particular case of \texorpdfstring{$\mathfrak{gl}_3$}{} invariant models\label{SS-PCM}} In this section we consider a particular case of the Bethe vectors. It will give us an idea of their construction in the general case. Generically, the operators $T_{ij}$ with $i<j$ are the creation operators. However, as we already mentioned, in some models the operator $T_{23}$ annihilates the vacuum vector\footnote{Another possibility is that $T_{12}$ annihilates the vacuum. Due to the automorphism \eqref{00-varphi} the cases $T_{23}(u)|0\rangle=0$ and $T_{12}(u)|0\rangle=0$ are equivalent.}: $T_{23}(u)|0\rangle=0$. In fact, we deal with this situation very often in models of physical interest (the $SU(3)$-invariant Heisenberg chain, the two-component Bose gas, the t-J model). Therefore, this particular case is rather important. Consider the simplest example of such a monodromy matrix: $T(u)=R(u,\xi)$, where $\xi$ is a fixed complex number. This is the monodromy matrix of the $XXX$ chain consisting of one site. Then $T_{23}(u)=g(u,\xi)E_{32}$. Obviously \be{00-actT23} T_{23}(u)|0\rangle=g(u,\xi)E_{32}|0\rangle= g(u,\xi)E_{32}\left(\begin{smallmatrix} 1\\0\\0 \end{smallmatrix}\right)=0. \end{equation} Consider now the chain with $L$ sites. Then the monodromy matrix is given by \eqref{00-MonmXXX} \be{00-Induct1} T^{(L)}(u)= T^{(L-1)}(u) T^{(1)}(u), \end{equation} where \be{00-Induct2} T^{(L-1)}(u)=R_{0L}(u,\xi_L)\dots R_{02}(u,\xi_2), \qquad T^{(1)}(u)=R_{01}(u,\xi_1). \end{equation} Hence, the operator $T_{23}^{(L)}(u)$ of the whole chain has the following representation \be{00-actInd} T_{23}^{(L)}(u)= T_{21}^{(L-1)}(u)T^{(1)}_{13}(u)+ T_{22}^{(L-1)}(u)T^{(1)}_{23}(u)+T_{23}^{(L-1)}(u)T^{(1)}_{33}(u), \end{equation} where $T_{ij}^{(L-1)}(u)$ and $T^{(1)}_{ij}(u)$ are the entries of the monodromy matrices corresponding respectively to the sub-chains of lengths $L-1$ and $1$.
Since $T_{ij}^{(L-1)}(u)$ and $T^{(1)}_{kl}(v)$ act in different spaces, they commute: $[T_{ij}^{(L-1)}(u),T^{(1)}_{kl}(v)]=0$ for arbitrary subscripts and arbitrary arguments. We know that $T_{23}^{(L-1)}(u)|0\rangle=0$ for $L=2$, since in this case we again deal with the monodromy matrix of the $XXX$ chain with one site. Assume that $T_{23}^{(L-1)}(u)|0\rangle=0$ for some $L>1$. Then it follows from \eqref{00-actInd} that $T_{23}^{(L)}(u)|0\rangle=0$. Indeed, $T_{21}^{(L-1)}(u)|0\rangle=0$ by definition, $T^{(1)}_{23}(u)|0\rangle=0$ due to \eqref{00-actT23}, and $T_{23}^{(L-1)}(u)|0\rangle=0$ due to the induction assumption. This method also allows us to find the vacuum eigenvalues $\lambda_i(u)$ of the diagonal entries $T_{ii}(u)$: \be{00-vaceig} \lambda_1(u)=f(u,\bar{\xi})=\prod_{k=1}^Lf(u,\xi_k),\qquad \lambda_2(u)=\lambda_3(u)=1. \end{equation} Observe that we have $\lambda_2(u)=\lambda_3(u)$. This is a direct consequence of the property $T_{23}(u)|0\rangle=0$. More precisely, if $T_{23}(u)|0\rangle=0$, then $\lambda_2(u) = \kappa\lambda_3(u)$, where $\kappa$ is a constant. Indeed, it follows from \eqref{00-CRcomp} that \be{01-23-32} [T_{23}(u),T_{32}(v)]=g(u,v)\bigl(T_{22}(u)T_{33}(v)-T_{22}(v)T_{33}(u)\bigr). \end{equation} Applying this equation to $|0\rangle$ we obtain \be{01-23-32v} 0=g(u,v)\bigl(\lambda_{2}(u)\lambda_{3}(v)-\lambda_{2}(v)\lambda_{3}(u)\bigr)|0\rangle. \end{equation} Hence, the ratio $\lambda_2(u)/\lambda_3(u)$ does not depend on $u$. \subsection{Action of the operators \texorpdfstring{$D_{\xi\alpha}$}{}} Consider a model in which $T_{23}(u)|0\rangle=0$ without specifying the Hilbert space $\mathcal{H}$ in which the operators $T_{ij}(u)$ act. Let the monodromy matrix be normalized in such a way that $\lambda_3(u)=1$. Then $\lambda_2(u)$ must be a constant. For simplicity we assume that $\lambda_2(u)=1$, as in \eqref{00-vaceig}. A more general case will be considered later.
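For the $XXX$-chain realization described in the previous subsection, the properties $T_{23}(u)|0\rangle=0$ and the vacuum eigenvalues \eqref{00-vaceig} are easy to confirm numerically. A minimal Python sketch (all helper names and test values are ours):

```python
import numpy as np

# Check T23(u)|0> = 0 and the vacuum eigenvalues (00-vaceig) for a short
# inhomogeneous gl_3-invariant XXX chain; test values are arbitrary.
c = 1.0
def g(u, v): return c / (u - v)
def f(u, v): return (u - v + c) / (u - v)

d = 3
def site_op(M, k, N):
    """Embed the d x d matrix M at tensor factor k of N factors."""
    out = np.eye(1)
    for m in range(N):
        out = np.kron(out, M if m == k else np.eye(d))
    return out

def perm(i, j, N):
    """Permutation of tensor factors i and j: sum_ab E_ab^(i) E_ba^(j)."""
    Pij = np.zeros((d**N, d**N))
    for a in range(d):
        for b in range(d):
            E = np.zeros((d, d)); E[a, b] = 1.0
            Pij += site_op(E, i, N) @ site_op(E.T, j, N)
    return Pij

def monodromy(u, xis):
    """T(u) = R_{0L}(u,xi_L) ... R_{01}(u,xi_1) on V_0 (x) H, H = (C^3)^L."""
    L = len(xis)
    T = np.eye(d**(L + 1))
    for k in range(L, 0, -1):
        T = T @ (np.eye(d**(L + 1)) + g(u, xis[k - 1]) * perm(0, k, L + 1))
    return T

xis = [0.2, 1.8, -0.9]               # inhomogeneities, L = 3
u = 2.6
L = len(xis); dH = d**L
T = monodromy(u, xis)
Tij = lambda i, j: T[i*dH:(i+1)*dH, j*dH:(j+1)*dH]  # blocks in the auxiliary space

vac = np.zeros(dH); vac[0] = 1.0     # |0> = e_1 (x) e_1 (x) e_1

assert np.allclose(Tij(1, 2) @ vac, 0)               # T23(u)|0> = 0
lam1 = np.prod([f(u, x) for x in xis])
assert np.allclose(Tij(0, 0) @ vac, lam1 * vac)      # lambda_1(u) = f(u, bar xi)
assert np.allclose(Tij(1, 1) @ vac, vac)             # lambda_2(u) = 1
assert np.allclose(Tij(2, 2) @ vac, vac)             # lambda_3(u) = 1
print("vacuum checks passed")
```

The same script with other chain lengths or inhomogeneities gives identical conclusions, in line with the induction argument above.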
In the case under consideration we have only two creation operators: $T_{12}(u)\equiv B_1(u)$ and $T_{13}(u)\equiv B_2(u)$ (see \eqref{01-T-matrix2}). We can try a monomial \be{01-prod-B} B_{\beta_1}(u_1)\dots B_{\beta_a}(u_a)|0\rangle,\qquad a=0,1,\dots , \end{equation} as a candidate for an eigenvector of the transfer matrix. Here every $\beta_i$ is equal to either $1$ or $2$. We should act with the transfer matrix \be{01-tr-mat} \mathop{\rm tr}\nolimits T(z)=A(z)+D_{11}(z)+D_{22}(z) \end{equation} onto this vector. We start our consideration with the action of the operators $D_{\alpha\alpha}(z)$. It follows from \eqref{00-CRcomp} that \be{01-DB} D_{\xi\alpha}(z)B_\beta(u)= B_\beta(u)D_{\xi\alpha}(z)+ g(z,u)B_\alpha(u)D_{\xi\beta}(z)+g(u,z)B_\alpha(z)D_{\xi\beta}(u). \end{equation} We see that acting with $D_{\xi\alpha}(z)$ onto the vector \eqref{01-prod-B} we may obtain unwanted terms of the second type. Indeed, the operator $D_{11}$ acting on $B_2$ gives contributions with $B_1$, and the operator $D_{22}$ acting on $B_1$ gives contributions with $B_2$. Thus, the operator structure in the vector \eqref{01-prod-B} is not invariant under the action of $D_{11}(z)+D_{22}(z)$. At the same time the action of the operator $A(z)$ does not produce unwanted terms of the second type (see section~\ref{SS-AA}). Thus, the monomial \eqref{01-prod-B} is generically not invariant under the action of $\mathop{\rm tr}\nolimits T(z)$. Therefore, it is quite natural to replace the monomial \eqref{01-prod-B} by a polynomial \be{01-Lin-comb} |\Psi_a(\bar u)\rangle=\sum_{\beta_1,\dots,\beta_a}B_{\beta_1}(u_1)\dots B_{\beta_a}(u_a)F_{\beta_1,\dots,\beta_a}|0\rangle,\qquad a=0,1,\dots . \end{equation} Here $F_{\beta_1,\dots,\beta_a}$ are some numerical coefficients. The sum is taken over all $\beta_1,\dots,\beta_a$, where each $\beta_i$ takes the values $1,2$. Let us write down \eqref{01-Lin-comb} in the form of a scalar product.
For this we introduce a two-component vector-row $\mathbb{B}(u)=\bigl(B_1(u), B_2(u)\bigr)$ with operator-valued components. Consider a tensor product \be{01-tens-prod} \mathbb{B}_1(u_1)\mathbb{B}_2(u_2)\dots \mathbb{B}_a(u_a) = \mathbb{B}(u_1)\otimes\mathbb{B}(u_2)\otimes\dots \otimes\mathbb{B}(u_a). \end{equation} This is a $2^a$-component vector-row. Then we can write down the vector $|\Psi_a(\bar u)\rangle$ as the scalar product \be{01-BVtens} |\Psi_a(\bar u)\rangle = \mathbb{B}_1(u_1)\mathbb{B}_2(u_2)\dots \mathbb{B}_a(u_a) \mathbb{F}(\bar u)|0\rangle, \end{equation} where $\mathbb{F}(\bar u)$ is a vector belonging to the space \be{01-tensprod} \underbrace{\mathbb{C}^2\otimes \dots \otimes \mathbb{C}^2 }_{a\quad\text{times}}. \end{equation} The commutation relations \eqref{01-DB} can be written as follows \be{01-DB-rmat} \mathbb{D}_0(z)\mathbb{B}_1(u)=\mathbb{B}_1(u) \mathbb{D}_0(z) r_{01}(z,u) +g(u,z)\mathbb{B}_1(z)\mathbb{D}_0(u)p_{01}, \end{equation} where $p_{01}$ is $4\times 4$ permutation matrix. Here the matrix $\mathbb{D}_0$ acts in the auxiliary space $V_0\sim \mathbb{C}^2$ and the vector $\mathbb{B}$ belongs to the auxiliary space $V_1\sim \mathbb{C}^2$. The $r$-matrix $r_{01}(z,u)$ and the matrix $p_{01}$ act in the tensor product $V_0\otimes V_1$. We call the first term in the r.h.s. of \eqref{01-DB-rmat} {\it the first scheme of commutation}. The second term in the r.h.s. of \eqref{01-DB-rmat} is called {\it the second scheme of commutation}. It follows from the commutation relations \eqref{00-CRcomp} that \begin{equation}\label{01-CRDD} [D_{\alpha\beta}(u),D_{\gamma\delta}(v)]=g(u,v)\bigl(D_{\gamma\beta}(v)D_{\alpha\delta}(u) -D_{\gamma\beta}(u)D_{\alpha\delta}(v)\bigr). \end{equation} Hence, the matrix $\mathbb{D}(u)$ satisfies the $RTT$-relation with the $R$-matrix $r(u,v)$ \be{01-rDDr} r_{12}(u,v)\mathbb{D}_1(u)\mathbb{D}_2(v) =\mathbb{D}_2(v)\mathbb{D}_1(u)r_{12}(u,v). 
\end{equation} Therefore, $\mathbb{D}(u)$ can be treated as the monodromy matrix of a model with the $\mathfrak{gl}_2$-invariant $R$-matrix. Let us act with $\mathop{\rm tr}\nolimits \mathbb{D}(z)$ onto $|\Psi_a(\bar u)\rangle$. We have \be{01-DBVtens} \mathop{\rm tr}\nolimits_0 \mathbb{D}_0(z)|\Psi_a(\bar u)\rangle = \mathop{\rm tr}\nolimits_0 \mathbb{D}_0(z)\mathbb{B}_1(u_1)\mathbb{B}_2(u_2)\dots \mathbb{B}_a(u_a) \mathbb{F}(\bar u)|0\rangle. \end{equation} Here we have stressed that $\mathbb{D}(z)$ acts in $V_0$, which is different from $V_1,\dots, V_a$. Permuting $\mathbb{D}_0(z)$ and $\mathbb{B}_1(u_1)$ we obtain \begin{multline}\label{01-DBVtens1} \mathop{\rm tr}\nolimits_0 \mathbb{D}_0(z)|\Psi_a(\bar u)\rangle = \mathop{\rm tr}\nolimits_0 \Bigl(\mathbb{B}_1(u_1) \mathbb{D}_0(z) r_{01}(z,u_1) +g(u_1,z)\mathbb{B}_1(z)\mathbb{D}_0(u_1)p_{01}\Bigr)\\ \times\mathbb{B}_2(u_2)\dots \mathbb{B}_a(u_a) \mathbb{F}(\bar u)|0\rangle. \end{multline} We have two contributions. The second one is definitely unwanted, as it contains the operators $B_\beta(z)$ (in the vector-row $\mathbb{B}_1(z)$). Let us set this term aside for a while and deal only with the wanted contributions. In other words, we use the first scheme of commutation only. In the case of the $\mathcal{R}_2$ algebra the use of the first scheme would necessarily give us wanted terms only. However, in the case of the $\mathcal{R}_3$ algebra we may still have unwanted terms of the second type. Our first goal is to get rid of them. We have \be{01-DBVtens1wt} \mathop{\rm tr}\nolimits_0 \mathbb{D}_0(z)|\Psi_a(\bar u)\rangle = \mathop{\rm tr}\nolimits_0 \mathbb{B}_1(u_1) \mathbb{D}_0(z) r_{01}(z,u_1) \mathbb{B}_2(u_2)\dots \mathbb{B}_a(u_a) \mathbb{F}(\bar u)|0\rangle+\mathcal{Z}, \end{equation} where $\mathcal{Z}$ denotes unwanted contributions.
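Before analyzing these contributions further, the relations on which this computation rests, the commutation relations \eqref{01-DB} and the $RTT$-relation \eqref{01-rDDr} for the block $\mathbb{D}$, can be verified numerically for the simplest one-site monodromy $T(u)=R(u,\xi)$. A Python sketch (names and test values ours):

```python
import numpy as np

# One-site gl_3 monodromy: T_ij(u) = delta_ij + g(u,xi) E_ji on H = C^3.
c, xi = 1.0, 0.4
def g(u, v): return c / (u - v)

def E(a, b, d=3):
    M = np.zeros((d, d)); M[a, b] = 1.0
    return M

def Top(i, j, u):
    return (1.0 if i == j else 0.0) * np.eye(3) + g(u, xi) * E(j, i)

B = lambda beta, u: Top(0, beta + 1, u)       # B_beta(u) = T_{1,beta+1}(u)
D = lambda a, b, u: Top(a + 1, b + 1, u)      # D_{ab}(u) = T_{a+1,b+1}(u)

z, u, v = 1.9, -0.6, 0.8

# commutation relations (01-DB), checked for all xi, alpha, beta in {1,2}
for x in range(2):
    for a in range(2):
        for b in range(2):
            lhs = D(x, a, z) @ B(b, u)
            rhs = (B(b, u) @ D(x, a, z)
                   + g(z, u) * B(a, u) @ D(x, b, z)
                   + g(u, z) * B(a, z) @ D(x, b, u))
            assert np.allclose(lhs, rhs)

# RTT-relation (01-rDDr) for D on V_1 (x) V_2 (x) H, with V ~ C^2
p = sum(np.kron(E(a, b, 2), E(b, a, 2)) for a in range(2) for b in range(2))
def Dfull(pos, u):
    """D(u) placed in auxiliary space 'pos' (1 or 2), identity elsewhere."""
    out = np.zeros((4 * 3, 4 * 3))
    for a in range(2):
        for b in range(2):
            aux = np.kron(E(a, b, 2), np.eye(2)) if pos == 1 else np.kron(np.eye(2), E(a, b, 2))
            out += np.kron(aux, D(a, b, u))
    return out

r12 = np.kron(np.eye(4) + g(u, v) * p, np.eye(3))
assert np.allclose(r12 @ Dfull(1, u) @ Dfull(2, v), Dfull(2, v) @ Dfull(1, u) @ r12)
print("(01-DB) and (01-rDDr) hold")
```

Since both relations are consequences of the $RTT$-relation, they hold in any representation; the one-site case is merely the cheapest place to see this.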
Clearly, the $R$-matrix $r_{01}(z,u_1)$ can be moved to the right \be{01-DBVtens2} \mathop{\rm tr}\nolimits_0 \mathbb{D}_0(z)|\Psi_a(\bar u)\rangle = \mathop{\rm tr}\nolimits_0 \mathbb{B}_1(u_1) \mathbb{D}_0(z) \mathbb{B}_2(u_2)\dots \mathbb{B}_a(u_a) r_{01}(z,u_1)\mathbb{F}(\bar u)|0\rangle+\mathcal{Z}. \end{equation} Then we repeat the procedure and finally arrive at \be{01-DBVtens3} \mathop{\rm tr}\nolimits_0 \mathbb{D}_0(z)|\Psi_a(\bar u)\rangle = \mathop{\rm tr}\nolimits_0 \mathbb{B}_1(u_1) \mathbb{B}_2(u_2)\dots \mathbb{B}_a(u_a)\mathbb{D}_0(z)r_{0a}(z,u_a) \dots r_{01}(z,u_1)\mathbb{F}(\bar u)|0\rangle+\mathcal{Z}. \end{equation} We have obtained a matrix $\widehat{\mathcal{T}}^{(a)}(z)$ \be{01-calT} \widehat{\mathcal{T}}^{(a)}_0(z)=\mathbb{D}_0(z)\mathcal{T}^{(a)}_0(z),\quad\text{where}\quad \mathcal{T}^{(a)}_0(z)=r_{0a}(z,u_a)\dots r_{01}(z,u_1), \end{equation} and the subscript $0$ stresses that the auxiliary space of this matrix is $V_0$. Recall that the matrix $\mathbb{D}_0(z)$ can be treated as the monodromy matrix satisfying the $\mathcal{R}_2$ algebra due to \eqref{01-rDDr}. Its matrix elements act in the original Hilbert space $\mathcal{H}$ as follows: \be{01-vaceig} \begin{aligned} &D_{11}(z)|0\rangle=T_{22}(z)|0\rangle=|0\rangle, \qquad D_{22}(z)|0\rangle=T_{33}(z)|0\rangle=|0\rangle,\\ &D_{12}(z)|0\rangle=T_{23}(z)|0\rangle=0, \qquad D_{21}(z)|0\rangle=T_{32}(z)|0\rangle=0. \end{aligned} \end{equation} The matrix $\mathcal{T}^{(a)}_0(z)$ is the monodromy matrix of the inhomogeneous $\mathfrak{gl}_2$-invariant $XXX$ chain of length $a$. The role of the inhomogeneity parameters is played by the parameters $\bar u=\{u_1,\dots,u_a\}$. The quantum space $\mathcal{H}^{(a)}$ of this model is the tensor product \be{01-spaceHa} \mathcal{H}^{(a)}=V_1\otimes\dots \otimes V_a, \qquad \text{where}\qquad V_j\sim \mathbb{C}^2. \end{equation} This is exactly the space containing the vector $\mathbb{F}(\bar u)$: $\mathbb{F}(\bar u)\in \mathcal{H}^{(a)}$.
A vacuum vector in the space $\mathcal{H}^{(a)}$ has the form \be{01-vacXXX} |\Omega^{(a)}\rangle =\underbrace{\left(\begin{smallmatrix} 1\\0 \end{smallmatrix}\right)\otimes \dots \otimes \left(\begin{smallmatrix} 1\\0 \end{smallmatrix}\right)}_{a\quad\text{times}}. \end{equation} If we present $\mathcal{T}^{(a)}(z)$ as \be{01-MMTa} \mathcal{T}^{(a)}(z)=\begin{pmatrix} \mathcal{A}^{(a)}(z)&\mathcal{B}^{(a)}(z)\\\mathcal{C}^{(a)}(z)&\mathcal{D}^{(a)}(z)\end{pmatrix}, \end{equation} then \be{01-EVMMTa} \mathcal{A}^{(a)}(z)|\Omega^{(a)}\rangle=f(z,\bar u) |\Omega^{(a)}\rangle, \qquad \mathcal{D}^{(a)}(z)|\Omega^{(a)}\rangle= |\Omega^{(a)}\rangle. \end{equation} Thus, $\widehat{\mathcal{T}}^{(a)}_0(z)$ is the monodromy matrix of the $\mathcal{R}_2$ algebra, being the product of two monodromy matrices whose entries act in different spaces. It remains to act with $\mathop{\rm tr}\nolimits_0 \widehat{\mathcal{T}}^{(a)}_0(z)$ on the vector $\mathbb{F}(\bar u)|0\rangle$. Due to \eqref{01-vaceig} we have \begin{multline}\label{01-actTrace} \mathop{\rm tr}\nolimits_0 \widehat{\mathcal{T}}^{(a)}_0(z)\mathbb{F}(\bar u)|0\rangle\\ =\Bigl(D_{11}(z)\mathcal{A}^{(a)}(z)+D_{12}(z)\mathcal{C}^{(a)}(z) +D_{21}(z)\mathcal{B}^{(a)}(z)+D_{22}(z)\mathcal{D}^{(a)}(z)\Bigr)\mathbb{F}(\bar u)|0\rangle\\ =\Bigl(\mathcal{A}^{(a)}(z)+\mathcal{D}^{(a)}(z)\Bigr)\mathbb{F}(\bar u)|0\rangle = \mathop{\rm tr}\nolimits\mathcal{T}^{(a)}(z)\mathbb{F}(\bar u)|0\rangle. \end{multline} Thus, if we do not want to have unwanted terms of the second type in the action of $\mathop{\rm tr}\nolimits \mathbb{D}(z)$, we should require that $\mathbb{F}(\bar u)$ be an eigenvector of the transfer matrix $\mathop{\rm tr}\nolimits\mathcal{T}^{(a)}(z)$. 
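This requirement can be illustrated numerically (a sketch with our own helper names and test values, not part of the original argument): for $a=2$ and a single operator $\mathcal{B}^{(a)}$, the vector $\mathcal{B}^{(a)}(v)|\Omega^{(a)}\rangle$ becomes an eigenvector of $\mathop{\rm tr}\mathcal{T}^{(a)}(z)$ precisely when $f(v,u_1)f(v,u_2)=1$, which fixes $v=(u_1+u_2-c)/2$.

```python
import numpy as np

c = 1.0
def g(u, v): return c / (u - v)
def f(u, v): return (u - v + c) / (u - v)

def E(a, b):
    M = np.zeros((2, 2)); M[a, b] = 1.0
    return M

def site_op(M, k, N):
    out = np.eye(1)
    for m in range(N):
        out = np.kron(out, M if m == k else np.eye(2))
    return out

def perm(i, j, N):
    # permutation of tensor factors i and j: sum_ab E_ab^(i) E_ba^(j)
    return sum(site_op(E(a, b), i, N) @ site_op(E(b, a), j, N)
               for a in range(2) for b in range(2))

u1, u2 = 0.3, 1.7
def Tcal(z):
    # T^(2)(z) = r_{02}(z,u2) r_{01}(z,u1) on V_0 (x) V_1 (x) V_2, each C^2
    I = np.eye(2**3)
    return (I + g(z, u2) * perm(0, 2, 3)) @ (I + g(z, u1) * perm(0, 1, 3))

block = lambda M, i, j: M[4*i:4*(i+1), 4*j:4*(j+1)]  # auxiliary-space blocks

omega = np.zeros(4); omega[0] = 1.0                  # |Omega^(2)> = e_1 (x) e_1
v = (u1 + u2 - c) / 2                                # solves f(v,u1) f(v,u2) = 1
assert np.isclose(f(v, u1) * f(v, u2), 1.0)

psi = block(Tcal(v), 0, 1) @ omega                   # B^(2)(v)|Omega^(2)>
assert np.linalg.norm(psi) > 1e-8

z = 2.4
transfer = block(Tcal(z), 0, 0) + block(Tcal(z), 1, 1)
tau = f(z, u1) * f(z, u2) * f(v, z) + f(z, v)        # = f(z,bar u) f(v,z) + f(z,v)
assert np.allclose(transfer @ psi, tau * psi)
print("inner on-shell check passed")
```

For an off-shell $v$ the last assertion fails, in agreement with the discussion below of the unwanted terms.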
Hence, $\mathbb{F}(\bar u)$ has the form \be{01-F-BBB} \mathbb{F}(\bar u)=\mathcal{B}^{(a)}(v_1)\dots \mathcal{B}^{(a)}(v_b)|\Omega^{(a)}\rangle, \end{equation} and the set $\bar v=\{v_1,\dots,v_b\}$ satisfies Bethe equations \be{01-BE1} f(v_j,\bar u)=\frac{f(v_j,\bar v_j)}{f(\bar v_j,v_j)},\qquad j=1,\dots,b. \end{equation} Recall that here we use the shorthand notation \eqref{01-SH-prod} for the products of the $f$-functions. Observe also that the number of the operators $\mathcal{B}^{(a)}$ cannot exceed the number $a$ of the sites of the chain. Hence, $b\le a$. Thus, we conclude that the vector $\mathbb{F}(\bar u)$ should be the eigenvector of the transfer matrix of the inhomogeneous $XXX$ chain with the inhomogeneities $\bar u$. This is the main idea of the nested algebraic Bethe ansatz. Namely, the on-shell Bethe vector of the model with the $\mathfrak{gl}_3$-invariant $R$-matrix is expressed in terms of the on-shell Bethe vector of the model with the $\mathfrak{gl}_2$-invariant $R$-matrix. We see that $\mathbb{F}(\bar u)$ depends on the set of auxiliary parameters $\bar v$, that is $\mathbb{F}(\bar u)=\mathbb{F}(\bar u;\bar v)$. Hence, the vector $|\Psi_a(\bar u)\rangle$ also depends on these parameters: $|\Psi_a(\bar u)\rangle= |\Psi_{a,b}(\bar u;\bar v)\rangle$. By construction, this vector is symmetric over the variables $\bar v$. It turns out that it is also symmetric over the variables $\bar u$; however, this symmetry is far from evident. We postpone the corresponding proof till section~\ref{SS-SOU}. For the moment we assume that $|\Psi_{a,b}(\bar u;\bar v)\rangle$ is symmetric over $\bar u$. Thus, we obtain \begin{equation}\label{01-actTrace1} \mathop{\rm tr}\nolimits_0 \mathbb{D}_0(z)|\Psi_{a,b}(\bar u;\bar v)\rangle =\tau_D(z|\bar u;\bar v)|\Psi_{a,b}(\bar u;\bar v)\rangle+\mathcal{Z}, \end{equation} where \be{01-eigvTa} \tau_D(z|\bar u;\bar v)=f(z,\bar u)f(\bar v,z)+f(z,\bar v).
\end{equation} Up to now we have used the first scheme of commutation only. Let us take into account the second scheme. It is clear that if we use the second scheme at least once, then the operators $D_{ij}(z)$ and $B_{\beta_k}(u_l)$ exchange their arguments. Therefore, after moving the matrix $\mathbb{D}_0(z)$ through the product of $\mathbb{B}_j(u_j)$ to the right it will have an argument $u_k\in\bar u$. At the same time one of the operator-valued vectors $\mathbb{B}_{j_0}$ will have the argument $z$. Other operator-valued vectors will have arguments $u_j$ such that $u_j\ne u_k$. Further arguments closely resemble those used in computing unwanted terms in the $\mathcal{R}_2$ algebra. Due to the symmetry of $|\Psi_{a,b}(\bar u;\bar v)\rangle$ over $\bar u$ it is enough to consider the case when $\mathbb{B}_1(u_1)$ loses its argument and absorbs the argument $z$, while $\mathbb{D}_0(z)$ arrives at the extreme right position with the argument $u_1$. Then at the first step we should use the second scheme in \eqref{01-DBVtens1}, otherwise $\mathbb{D}_0(z)$ never absorbs the argument $u_1$. We have \be{01-DBVtensuw} \mathop{\rm tr}\nolimits_0 \mathbb{D}_0(z)|\Psi_{a,b}(\bar u;\bar v)\rangle = g(u_1,z)\mathop{\rm tr}\nolimits_0 \mathbb{B}_1(z)\mathbb{D}_0(u_1)p_{01}\mathbb{B}_2(u_2)\dots \mathbb{B}_a(u_a) \mathbb{F}(\bar u;\bar v)|0\rangle+\mathcal{Z}, \end{equation} where $\mathcal{Z}$ now denotes all the terms which do not give contributions to the desired result. Obviously, we can move $p_{01}$ to the right \be{01-DBVuw1} \mathop{\rm tr}\nolimits_0 \mathbb{D}_0(z)|\Psi_{a,b}(\bar u;\bar v)\rangle = g(u_1,z)\mathop{\rm tr}\nolimits_0 \mathbb{B}_1(z)\mathbb{D}_0(u_1)\mathbb{B}_2(u_2)\dots \mathbb{B}_a(u_a) p_{01}\mathbb{F}(\bar u;\bar v)|0\rangle+\mathcal{Z}. \end{equation} Moving $\mathbb{D}_0(u_1)$ further to the right we should keep its argument; therefore, from now on we can use only the first scheme.
Then we obtain \begin{multline}\label{01-DBVuw2} \mathop{\rm tr}\nolimits_0 \mathbb{D}_0(z)|\Psi_{a,b}(\bar u;\bar v)\rangle = g(u_1,z)\mathop{\rm tr}\nolimits_0 \mathbb{B}_1(z)\mathbb{B}_2(u_2)\dots \mathbb{B}_a(u_a)\\ \times \mathbb{D}_0(u_1)r_{0a}(u_1,u_a)\dots r_{02}(u_1,u_2) p_{01}\mathbb{F}(\bar u;\bar v)|0\rangle+\mathcal{Z}. \end{multline} It is easy to see that \begin{multline}\label{vych} \mathbb{D}_0(u_1)r_{0a}(u_1,u_a)\dots r_{02}(u_1,u_2) p_{01}=\tfrac 1c\mathop{\rm Res} \mathbb{D}_0(z)r_{0a}(z,u_a)\dots r_{02}(z,u_2)r_{01}(z,u_1) \Bigr|_{z=u_1}\\ =\tfrac 1c\mathop{\rm Res} \mathbb{D}_0(z)\mathcal{T}^{(a)}_0(z)\Bigr|_{z=u_1}=\tfrac 1c\mathop{\rm Res} \widehat{\mathcal{T}}^{(a)}_0(z)\Bigr|_{z=u_1}. \end{multline} Hence, we obtain \begin{multline}\label{01-DBVuw3} \mathop{\rm tr}\nolimits_0 \mathbb{D}_0(z)|\Psi_{a,b}(\bar u;\bar v)\rangle = g(u_1,z) \mathbb{B}_1(z)\mathbb{B}_2(u_2)\dots \mathbb{B}_a(u_a)\tfrac 1c\mathop{\rm Res}\mathop{\rm tr}\nolimits_0 \widehat{\mathcal{T}}^{(a)}_0(z)\mathbb{F}(\bar u;\bar v)|0\rangle\Bigr|_{z=u_1}+\mathcal{Z}\\ = g(u_1,z) \mathbb{B}_1(z)\mathbb{B}_2(u_2)\dots \mathbb{B}_a(u_a)\tfrac 1c\mathop{\rm Res}\mathop{\rm tr}\nolimits_0\mathcal{T}^{(a)}_0(z)\mathbb{F}(\bar u;\bar v)|0\rangle\Bigr|_{z=u_1}+\mathcal{Z}, \end{multline} where we used \eqref{01-actTrace}. Since $\mathbb{F}(\bar u;\bar v)$ is the eigenvector of $\mathop{\rm tr}\nolimits\mathcal{T}^{(a)}_0(z)$ for any $z$ we find \be{01-DBVuw4} \mathop{\rm tr}\nolimits_0 \mathbb{D}_0(z)|\Psi_{a,b}(\bar u;\bar v)\rangle = g(u_1,z)\tfrac 1c\mathop{\rm Res}\tau_D(z|\bar u;\bar v)\Bigr|_{z=u_1} \mathbb{B}_1(z)\mathbb{B}_2(u_2)\dots \mathbb{B}_a(u_a)\mathbb{F}(\bar u;\bar v)|0\rangle+\mathcal{Z}, \end{equation} where the eigenvalue $\tau_D(z|\bar u;\bar v)$ is given by \eqref{01-eigvTa}. 
Substituting this eigenvalue into \eqref{01-DBVuw4} we eventually arrive at \be{01-DBVuw5} \mathop{\rm tr}\nolimits_0 \mathbb{D}_0(z)|\Psi_{a,b}(\bar u;\bar v)\rangle = g(u_1,z)f(u_1,\bar u_1)f(\bar v,u_1)|\Phi_{a,b}(z,u_1;\bar u;\bar v)\rangle +\mathcal{Z}, \end{equation} where \be{01-Phi} |\Phi_{a,b}(z,u_1;\bar u;\bar v)\rangle=\mathbb{B}_1(z)\mathbb{B}_2(u_2)\dots \mathbb{B}_a(u_a)\mathbb{F}(\bar u;\bar v)|0\rangle. \end{equation} Thus, using the symmetry of $|\Psi_{a,b}(\bar u;\bar v)\rangle$ over $\bar u$ we find the total action of $\mathop{\rm tr}\nolimits \mathbb{D}(z)=T_{22}(z)+T_{33}(z)$ on the vector $|\Psi_{a,b}(\bar u;\bar v)\rangle$. It is given by \be{01-totact} \mathop{\rm tr}\nolimits \mathbb{D}(z)|\Psi_{a,b}(\bar u;\bar v)\rangle=\tau_D(z|\bar u;\bar v)|\Psi_{a,b}(\bar u;\bar v)\rangle+ \sum_{k=1}^a g(u_k,z)f(u_k,\bar u_k)f(\bar v,u_k)|\Phi_{a,b}(z,u_k;\bar u;\bar v)\rangle, \end{equation} where \be{01-Phik} |\Phi_{a,b}(z,u_k;\bar u;\bar v)\rangle=|\Phi_{a,b}(z,u_1;\bar u;\bar v)\rangle\Bigr|_{u_1\leftrightarrow u_k}. \end{equation} \subsection{Action of \texorpdfstring{$A(z)$}{}\label{SS-AA}} We have not yet considered the action of the operator $T_{11}(z)= A(z)$ on the vector $|\Psi_{a,b}(\bar u;\bar v)\rangle$. This action is relatively simple and resembles the action of the operator $A$ in the case of the $\mathcal{R}_2$ algebra. Indeed, the commutation relation of $A(z)$ and $B_\beta(u)$ is \be{01-AB} A(z)B_\beta(u)=f(u,z)B_\beta(u)A(z)+g(z,u)B_\beta(z)A(u),\qquad \beta=1,2. \end{equation} Therefore, the action of $A(z)$ does not produce unwanted terms of the second type. It is easy to see that the result should have the following form: \be{01-APhi} A(z)|\Psi_{a,b}(\bar u;\bar v)\rangle=\tau_A(z|\bar u;\bar v) |\Psi_{a,b}(\bar u;\bar v)\rangle+\sum_{k=1}^a\Lambda_k |\Phi_{a,b}(z,u_k;\bar u;\bar v)\rangle, \end{equation} where $\tau_A(z|\bar u;\bar v)$ and $\Lambda_k$ are some numerical coefficients.
They can be found exactly in the same manner as in the $\mathcal{R}_2$ case. Recall this procedure. As usual, let us call the first term in the r.h.s. of \eqref{01-AB} {\it the first scheme of commutation}. Respectively, the second term in the r.h.s. of \eqref{01-AB} is called {\it the second scheme of commutation}. In the first scheme both $A$ and $B_\beta$ keep their original arguments, while in the second scheme they exchange them. Obviously, in order to obtain the contribution proportional to $\tau_A(z|\bar u;\bar v)$ we should use the first scheme of commutation only. We obtain \be{01-APhi1s} A(z)|\Psi_{a,b}(\bar u;\bar v)\rangle=f(\bar u,z)\mathbb{B}_1(u_1) \mathbb{B}_2(u_2)\dots \mathbb{B}_a(u_a)A(z) \mathbb{F}(\bar u;\bar v)|0\rangle+\mathcal{Z}, \end{equation} where $\mathcal{Z}$ denotes all unwanted terms. Acting with $A(z)$ on $|0\rangle$ we immediately obtain \be{01-L0} \tau_A(z|\bar u;\bar v)=\lambda_1(z)f(\bar u,z), \end{equation} and thus, this coefficient actually does not depend on the set $\bar v$. In order to find the coefficients $\Lambda_k$ it is enough to find one of them, say, $\Lambda_1$. Here we use the symmetry of $|\Psi_{a,b}(\bar u;\bar v)\rangle$ over $\bar u$. Then at the first step we must use the second scheme of commutation, and after this we must use only the first scheme of commutation. This gives us \be{01-APhi2s} A(z)|\Psi_{a,b}(\bar u;\bar v)\rangle=g(z,u_1)f(\bar u_1,u_1)\mathbb{B}_1(z) \mathbb{B}_2(u_2)\dots \mathbb{B}_a(u_a)A(u_1) \mathbb{F}(\bar u;\bar v)|0\rangle+\mathcal{Z}, \end{equation} where now $\mathcal{Z}$ denotes all the terms that do not give contributions to the desired result. From this, we find \be{01-Lam1} \Lambda_1=\lambda_1(u_1)g(z,u_1)f(\bar u_1,u_1). \end{equation} Thus, we have computed the action of the transfer matrix $\mathop{\rm tr}\nolimits T(z)$ on the vector $|\Psi_{a,b}(\bar u;\bar v)\rangle$. 
It is given by the sum of \eqref{01-totact} and \eqref{01-APhi}: \be{01-totact1} \mathop{\rm tr}\nolimits T(z)|\Psi_{a,b}(\bar u;\bar v)\rangle=\tau(z|\bar u;\bar v) |\Psi_{a,b}(\bar u;\bar v)\rangle+\sum_{k=1}^aM_k |\Phi_{a,b}(z,u_k;\bar u;\bar v)\rangle. \end{equation} Here \be{01-tottau} \tau(z|\bar u;\bar v)=\tau_A(z|\bar u;\bar v)+\tau_D(z|\bar u;\bar v) =\lambda_1(z)f(\bar u,z)+f(z,\bar u)f(\bar v,z)+f(z,\bar v). \end{equation} The coefficients $M_k$ are \be{01-totMk} M_k=\lambda_1(u_k)g(z,u_k)f(\bar u_k,u_k)+g(u_k,z)f(u_k,\bar u_k)f(\bar v,u_k). \end{equation} It is clear that $|\Psi_{a,b}(\bar u;\bar v)\rangle$ becomes an on-shell Bethe vector if we set $M_k=0$ for $k=1,\dots,a$ and for all complex $z$. This leads us to a new system of equations \be{01-BE2} \lambda_1(u_k)=\frac{f(u_k,\bar u_k)}{f(\bar u_k,u_k)}f(\bar v,u_k),\qquad k=1,\dots,a. \end{equation} Together with the already obtained equations \eqref{01-BE1} \be{01-BE1du} f(v_j,\bar u)=\frac{f(v_j,\bar v_j)}{f(\bar v_j,v_j)},\qquad j=1,\dots,b, \end{equation} equations \eqref{01-BE2} form a system of $a+b$ equations for the $a+b$ variables $\bar u$ and $\bar v$. This system is a particular case (corresponding to $\lambda_2(v)=\lambda_3(v)$) of the Bethe equations for the models with the $\mathfrak{gl}_3$-invariant $R$-matrix. Thus, the on-shell Bethe vectors have the form \eqref{01-BVtens} with $\mathbb{F}(\bar u;\bar v)$ given by \eqref{01-F-BBB}. The parameters $\bar u$ and $\bar v$ should satisfy the systems of equations \eqref{01-BE2} and \eqref{01-BE1du}. \subsection{General case \label{SS-GC}} All the considerations above concerned the particular case $T_{23}(u)|0\rangle=0$. What should be done in the general case? Remarkably, almost the entire scheme remains the same.
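As a concluding check of the particular case (a numerical sketch with our own names and test values, not part of the original argument): for the simplest on-shell instance $a=1$, $b=0$ and the $L=2$ chain, equation \eqref{01-BE2} reduces to $\lambda_1(u_1)=1$, solved by $u_1=(\xi_1+\xi_2-c)/2$, and $B_1(u_1)|0\rangle$ is then an eigenvector of $\mathop{\rm tr}T(z)$ with the eigenvalue \eqref{01-tottau}.

```python
import numpy as np

c = 1.0
def g(u, v): return c / (u - v)
def f(u, v): return (u - v + c) / (u - v)

d = 3
def site_op(M, k, N):
    out = np.eye(1)
    for m in range(N):
        out = np.kron(out, M if m == k else np.eye(d))
    return out

def perm(i, j, N):
    Pij = np.zeros((d**N, d**N))
    for a in range(d):
        for b in range(d):
            E = np.zeros((d, d)); E[a, b] = 1.0
            Pij += site_op(E, i, N) @ site_op(E.T, j, N)
    return Pij

xi1, xi2 = 0.2, 1.8
def T(u):
    # T(u) = R_{02}(u,xi2) R_{01}(u,xi1) on V_0 (x) H, H = C^3 (x) C^3
    I = np.eye(d**3)
    return (I + g(u, xi2) * perm(0, 2, 3)) @ (I + g(u, xi1) * perm(0, 1, 3))

dH = d * d
Tij = lambda u, i, j: T(u)[i*dH:(i+1)*dH, j*dH:(j+1)*dH]
vac = np.zeros(dH); vac[0] = 1.0

u1 = (xi1 + xi2 - c) / 2                      # solves lambda_1(u1) = 1, i.e. (01-BE2)
assert np.isclose(f(u1, xi1) * f(u1, xi2), 1.0)

psi = Tij(u1, 0, 1) @ vac                     # |Psi_{1,0}> = B_1(u1)|0>
assert np.linalg.norm(psi) > 1e-8

z = 2.9
transfer = Tij(z, 0, 0) + Tij(z, 1, 1) + Tij(z, 2, 2)
tau = f(z, xi1) * f(z, xi2) * f(u1, z) + f(z, u1) + 1.0   # (01-tottau), bar v empty
assert np.allclose(transfer @ psi, tau * psi)
print("a=1, b=0 on-shell check passed")
```

Moving $u_1$ off shell makes the last assertion fail, since the coefficient $M_1$ in \eqref{01-totact1} no longer vanishes.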
We should only assume that the vector $\mathbb{F}(\bar u;\bar v)$ in the expression \be{01-BVtensM} |\Psi_a(\bar u;\bar v)\rangle = \mathbb{B}_1(u_1)\mathbb{B}_2(u_2)\dots \mathbb{B}_a(u_a) \mathbb{F}(\bar u;\bar v)|0\rangle \end{equation} has operator-valued components, which depend on the operators $D_{\alpha\beta}$ (in particular, on $T_{23}$). In other words, the vector $\mathbb{F}(\bar u;\bar v)|0\rangle$ belongs to the tensor product $\mathcal{H}\otimes\mathcal{H}^{(a)}$ (because the action of $T_{23}$ on the vacuum $|0\rangle$ gives a vector in the space $\mathcal{H}$). We should also take into account that \be{actD11} \begin{aligned} &D_{11}(z)|0\rangle=T_{22}(z)|0\rangle=\lambda_{2}(z)|0\rangle,\\ &D_{22}(z)|0\rangle=T_{33}(z)|0\rangle=\lambda_{3}(z)|0\rangle, \end{aligned} \end{equation} and now we do not have the restriction $\lambda_2(z)/\lambda_3(z)=const$. We start with representation \eqref{01-BVtensM} and act on this vector with $\mathop{\rm tr}\nolimits \mathbb{D}(z)$. Using the first scheme of commutation only we arrive at \eqref{01-DBVtens3}: \be{01-GDBVtens3} \mathop{\rm tr}\nolimits_0 \mathbb{D}_0(z)|\Psi_a(\bar u)\rangle = \mathbb{B}_1(u_1) \mathbb{B}_2(u_2)\dots \mathbb{B}_a(u_a)\mathop{\rm tr}\nolimits_0\widehat{\mathcal{T}}^{(a)}_0(z)\mathbb{F}(\bar u;\bar v)|0\rangle+\mathcal{Z}. \end{equation} We see that $\mathbb{F}(\bar u;\bar v)|0\rangle$ should be an eigenvector of the transfer matrix $\mathop{\rm tr}\nolimits\widehat{\mathcal{T}}^{(a)}(z)$ \eqref{01-calT}. The form of equation \eqref{01-GDBVtens3} coincides with the form of \eqref{01-DBVtens3} in which the monodromy matrix $\widehat{\mathcal{T}}^{(a)}(z)$ \eqref{01-calT} first appeared. The vector $\mathbb{F}(\bar u;\bar v)|0\rangle$ also still belongs to the tensor product $\mathcal{H}\otimes\mathcal{H}^{(a)}$.
However, in the case considered above, we had a factorization: the vacuum vector $|0\rangle$ belonged to the space $\mathcal{H}$, while the vector $\mathbb{F}(\bar u;\bar v)$ belonged to the space $\mathcal{H}^{(a)}$. Now there is no such factorization, and we must consider the vector $\mathbb{F}(\bar u;\bar v)|0\rangle$ as a whole. Despite this difference, we can follow the same scheme as before. Let \be{01-GMMTa} \widehat{\mathcal{T}}^{(a)}(z)=\begin{pmatrix} \widehat{\mathcal{A}}^{(a)}(z)&\widehat{\mathcal{B}}^{(a)}(z)\\ \widehat{\mathcal{C}}^{(a)}(z)&\widehat{\mathcal{D}}^{(a)}(z)\end{pmatrix}. \end{equation} The entries of this matrix act in $\mathcal{H}\otimes\mathcal{H}^{(a)}$ with the vacuum vector $|0\rangle\otimes |\Omega^{(a)}\rangle$. The vacuum eigenvalues of the diagonal entries $\widehat{\mathcal{T}}^{(a)}_{ii}(z)$ (i.e. of $\widehat{\mathcal{A}}^{(a)}(z)$ and $\widehat{\mathcal{D}}^{(a)}(z)$) are given by the products of the vacuum eigenvalues of $D_{ii}(z)$ and $\mathcal{T}^{(a)}_{ii}(z)$: \be{01-vaceigH} \begin{aligned} &\widehat{\mathcal{A}}^{(a)}(z)|0\rangle\otimes |\Omega^{(a)}\rangle=\lambda_2(z)f(z,\bar u)|0\rangle\otimes |\Omega^{(a)}\rangle,\\ &\widehat{\mathcal{D}}^{(a)}(z)|0\rangle\otimes |\Omega^{(a)}\rangle=\lambda_3(z)|0\rangle\otimes |\Omega^{(a)}\rangle. \end{aligned} \end{equation} Thus, the eigenvectors of $\mathop{\rm tr}\nolimits\widehat{\mathcal{T}}^{(a)}(z)$ have the form similar to \eqref{01-F-BBB} \be{01-GF-BBB} \mathbb{F}(\bar u;\bar v)=\widehat{\mathcal{B}}^{(a)}(v_1)\dots \widehat{\mathcal{B}}^{(a)}(v_b)|0\rangle\otimes |\Omega^{(a)}\rangle, \end{equation} provided the set $\bar v$ satisfies Bethe equations \be{01-GBE1} \frac{\lambda_2(v_j)}{\lambda_3(v_j)}=\frac{f(v_j,\bar v_j)}{f(\bar v_j,v_j)}\frac1{f(v_j,\bar u)},\qquad j=1,\dots,b. \end{equation} Observe that now the matrix $\widehat{\mathcal{T}}^{(a)}(z)$ is no longer the monodromy matrix of the $XXX$ chain. 
It is the product of two monodromy matrices $\mathbb{D}(z)$ and $\mathcal{T}^{(a)}(z)$. Therefore, there is no restriction on the number of the operators $\widehat{\mathcal{B}}^{(a)}$ in \eqref{01-GF-BBB}. Hence, we do not have the constraint $b\le a$ that we had previously. Thus, we obtain \begin{equation}\label{01-GactTrace} \mathop{\rm tr}\nolimits \mathbb{D}(z)|\Psi_{a,b}(\bar u;\bar v)\rangle =\widehat{\tau}_D(z|\bar u;\bar v)|\Psi_{a,b}(\bar u;\bar v)\rangle+\mathcal{Z}, \end{equation} where now \be{01-GeigvTa} \widehat{\tau}_D(z|\bar u;\bar v)=\lambda_2(z)f(z,\bar u)f(\bar v,z)+\lambda_3(z)f(z,\bar v). \end{equation} Consideration of the unwanted terms of the first type produced by the action of $\mathop{\rm tr}\nolimits \mathbb{D}(z)$ can be done exactly in the same manner as before. This leads us to the analog of \eqref{01-DBVuw5} \be{01-GDBVuw5} \mathop{\rm tr}\nolimits \mathbb{D}(z)|\Psi_{a,b}(\bar u;\bar v)\rangle = \lambda_2(u_1)g(u_1,z)f(u_1,\bar u_1)f(\bar v,u_1)|\Phi_{a,b}(z,u_1;\bar u;\bar v)\rangle +\mathcal{Z}, \end{equation} where $|\Phi_{a,b}(z,u_1;\bar u;\bar v)\rangle$ is still given by \eqref{01-Phi}. The only natural difference between \eqref{01-GDBVuw5} and \eqref{01-DBVuw5} is that we have the additional factor $\lambda_2(u_1)$. Previously this factor was equal to $1$. Thus, the total action of $\mathop{\rm tr}\nolimits \mathbb{D}(z)$ on the vector $|\Psi_{a,b}(\bar u;\bar v)\rangle$ has the form \begin{multline}\label{01-totact2} \mathop{\rm tr}\nolimits \mathbb{D}(z)|\Psi_{a,b}(\bar u;\bar v)\rangle=\widehat{\tau}_D(z|\bar u;\bar v)|\Psi_{a,b}(\bar u;\bar v)\rangle\\ +\sum_{k=1}^a \lambda_2(u_k)g(u_k,z)f(u_k,\bar u_k)f(\bar v,u_k)|\Phi_{a,b}(z,u_k;\bar u;\bar v)\rangle, \end{multline} where $|\Phi_{a,b}(z,u_k;\bar u;\bar v)\rangle$ is given by \eqref{01-Phik}. Recall that this result is obtained under the assumption that $|\Psi_{a,b}(\bar u;\bar v)\rangle$ is symmetric over the set $\bar u$. This symmetry still has to be proved.
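Two properties of $\widehat{\mathcal{T}}^{(a)}(z)$ used above, namely that it satisfies the $\mathcal{R}_2$ $RTT$-relation as a product of two monodromy matrices, and the vacuum eigenvalues \eqref{01-vaceigH}, can be checked numerically for $a=1$ with a one-site $\mathfrak{gl}_3$ monodromy. A Python sketch (names and test values ours):

```python
import numpy as np

c, xi, u1 = 1.0, 0.4, -0.7
def g(u, v): return c / (u - v)
def f(u, v): return (u - v + c) / (u - v)

def E(a, b, d):
    M = np.zeros((d, d)); M[a, b] = 1.0
    return M

# D_ab(z) = T_{a+1,b+1}(z) = delta_ab + g(z,xi) E_{b+1,a+1} on H = C^3 (one site)
D = lambda a, b, z: (1.0 if a == b else 0.0) * np.eye(3) + g(z, xi) * E(b + 1, a + 1, 3)
# entries of T^(1)(z) = r(z,u1) on H^(1) = C^2 (inner chain of length a = 1)
Tc = lambda a, b, z: (1.0 if a == b else 0.0) * np.eye(2) + g(z, u1) * E(b, a, 2)

def hatT(a, b, z):
    """hat-T_ab(z) = sum_k D_ak(z) (x) T^(1)_kb(z), acting on H (x) H^(1)."""
    return sum(np.kron(D(a, k, z), Tc(k, b, z)) for k in range(2))

def bigT(pos, z):
    """hat-T(z) placed in auxiliary space 'pos' of V_1 (x) V_2 (x) (H (x) H^(1))."""
    out = np.zeros((4 * 6, 4 * 6))
    for a in range(2):
        for b in range(2):
            aux = np.kron(E(a, b, 2), np.eye(2)) if pos == 1 else np.kron(np.eye(2), E(a, b, 2))
            out += np.kron(aux, hatT(a, b, z))
    return out

z, w = 1.3, 2.2
p = sum(np.kron(E(a, b, 2), E(b, a, 2)) for a in range(2) for b in range(2))
r12 = np.kron(np.eye(4) + g(z, w) * p, np.eye(6))
assert np.allclose(r12 @ bigT(1, z) @ bigT(2, w), bigT(2, w) @ bigT(1, z) @ r12)

vac = np.zeros(6); vac[0] = 1.0                       # |0> (x) |Omega^(1)>
assert np.allclose(hatT(0, 0, z) @ vac, f(z, u1) * vac)  # lambda_2(z) f(z,u1), lambda_2 = 1
assert np.allclose(hatT(1, 1, z) @ vac, vac)             # lambda_3(z) = 1
print("hat-T RTT and vacuum checks passed")
```

Here the one-site chain has $\lambda_2=\lambda_3=1$, so the vacuum eigenvalues reduce to $f(z,u_1)$ and $1$, consistent with \eqref{01-vaceigH}.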
Considering the action of $A(z)$ on $|\Psi_{a,b}(\bar u;\bar v)\rangle$ we deal with only one new problem. Namely, we should prove that \be{01-actAF} A(z)\mathbb{F}(\bar u;\bar v)|0\rangle= \lambda_1(z)\mathbb{F}(\bar u;\bar v)|0\rangle. \end{equation} This property was obvious in the previous case, because $\mathbb{F}(\bar u;\bar v)$ did not belong to the space $\mathcal{H}$, in which the operator $A(z)$ acted. Now $\mathbb{F}(\bar u;\bar v)|0\rangle\in\mathcal{H}\otimes\mathcal{H}^{(a)}$, therefore, the property \eqref{01-actAF} should be proved. However, the proof immediately follows from proposition~\ref{00-no-color}. Indeed, since the components of $\mathbb{F}(\bar u;\bar v)|0\rangle$ depend on the operators $D_{\alpha\beta}$, they only contain quasiparticles of the second color. Thus, the action of $A(z)$ on each of these components reduces to multiplication by $\lambda_1(z)$. In all other respects the action of $A(z)$ on $|\Psi_{a,b}(\bar u;\bar v)\rangle$ can be derived along the same lines, leading eventually to equation \eqref{01-APhi}, where $\tau_A(z|\bar u;\bar v)$ and $\Lambda_k$ are given by \eqref{01-L0} and \eqref{01-Lam1}, respectively. Thus, the action of the transfer matrix $\mathop{\rm tr}\nolimits T(z)$ on the vector $|\Psi_{a,b}(\bar u;\bar v)\rangle$ reads \be{01-Gtotact} \mathop{\rm tr}\nolimits T(z)|\Psi_{a,b}(\bar u;\bar v)\rangle=\widehat{\tau}(z|\bar u;\bar v) |\Psi_{a,b}(\bar u;\bar v)\rangle+\sum_{k=1}^a\widehat{M}_k |\Phi_{a,b}(z,u_k;\bar u;\bar v)\rangle. \end{equation} Here \be{01-Gtottau} \widehat{\tau}(z|\bar u;\bar v) =\lambda_1(z)f(\bar u,z)+\lambda_2(z)f(z,\bar u)f(\bar v,z)+\lambda_3(z)f(z,\bar v), \end{equation} and the coefficients $\widehat{M}_k$ have the form \be{01-GtotMk} \widehat{M}_k=\lambda_1(u_k)g(z,u_k)f(\bar u_k,u_k)+\lambda_2(u_k)g(u_k,z)f(u_k,\bar u_k)f(\bar v,u_k).
\end{equation} Setting $\widehat{M}_k=0$ for $k=1,\dots,a$ we obtain a system of equations \be{01-GBE2} \frac{\lambda_1(u_k)}{\lambda_2(u_k)}=\frac{f(u_k,\bar u_k)}{f(\bar u_k,u_k)}f(\bar v,u_k),\qquad k=1,\dots,a. \end{equation} If the sets $\bar u$ and $\bar v$ satisfy the systems \eqref{01-GBE1} and \eqref{01-GBE2}, then $|\Psi_{a,b}(\bar u;\bar v)\rangle$ becomes the on-shell Bethe vector. \subsection{Definition of Bethe vectors\label{SS-DBV}} At this point we can turn back to the problem of off-shell Bethe vectors (or simply Bethe vectors). Now we are able to give their definition at least for the $\mathfrak{gl}_3$ based models. \begin{Def}\label{DefBV1} We call a state $|\Psi_{a,b}(\bar u;\bar v)\rangle$ an off-shell Bethe vector of the $\mathcal{R}_3$ algebra, if it has the form \eqref{01-BVtensM}, where the vector $\mathbb{F}(\bar u;\bar v)|0\rangle$ has the form \eqref{01-GF-BBB}. In other words, \be{def-OSBV} |\Psi_{a,b}(\bar u;\bar v)\rangle = \mathbb{B}_1(u_1)\mathbb{B}_2(u_2)\dots \mathbb{B}_a(u_a) \widehat{\mathcal{B}}^{(a)}(v_1)\dots \widehat{\mathcal{B}}^{(a)}(v_b) |0\rangle\otimes |\Omega^{(a)}\rangle. \end{equation} Here $\mathbb{B}(u_i)$ are the operator-valued vector rows of the original monodromy matrix \eqref{01-T-matrix2}, and $\widehat{\mathcal{B}}^{(a)}(v_k)$ are the creation operators of the auxiliary monodromy matrix \eqref{01-GMMTa}. The Bethe parameters $\bar u$ and $\bar v$ are arbitrary complex numbers. The cardinalities of the sets $\bar u$ and $\bar v$ respectively are $\#\bar u=a$ and $\#\bar v=b$, where $a,b=0,1,\dots$. \end{Def} \textsl{Remark.} Note that when we looked for the on-shell Bethe vectors we required the vector $\mathbb{F}(\bar u;\bar v)|0\rangle$ to be the eigenvector of the transfer matrix $\mathop{\rm tr}\nolimits\widehat{\mathcal{T}}^{(a)}(z)$ \eqref{01-GMMTa}. This requirement led us to the set of equations \eqref{01-GBE1}. Now we do not impose this constraint. 
Thus, $\mathbb{F}(\bar u;\bar v)|0\rangle$ is not necessarily the eigenvector of $\mathop{\rm tr}\nolimits\widehat{\mathcal{T}}^{(a)}(z)$, but it has the form \eqref{01-GF-BBB}. If the parameters $\bar u$ and $\bar v$ satisfy the system of Bethe equations \eqref{01-GBE1} and \eqref{01-GBE2}, that is \be{01-GBEfull} \begin{aligned} &\frac{\lambda_1(u_k)}{\lambda_2(u_k)}=\frac{f(u_k,\bar u_k)}{f(\bar u_k,u_k)}f(\bar v,u_k),\qquad k=1,\dots,a,\\ &\frac{\lambda_2(v_j)}{\lambda_3(v_j)}=\frac{f(v_j,\bar v_j)}{f(\bar v_j,v_j)}\frac1{f(v_j,\bar u)},\qquad j=1,\dots,b, \end{aligned} \end{equation} then the vector \eqref{def-OSBV} becomes an on-shell Bethe vector. Formally, definition~\ref{DefBV1} uniquely fixes the Bethe vector as a polynomial in the creation operators\footnote{Strictly speaking, equation \eqref{def-OSBV} also contains the neutral operators $T_{ii}$. However, their action on the vacuum vector can be replaced by the corresponding eigenvalues.} $T_{ij}$ with $i<j$ acting on the vacuum vector. However, these operators, generally speaking, do not commute with each other. Therefore, their reordering leads to new representations for the Bethe vectors. Formula \eqref{def-OSBV} is one such representation. Unfortunately, equation \eqref{def-OSBV} does not give an explicit dependence of the Bethe vector on the creation operators. We will derive such an explicit dependence later. In the meantime, as an example, let us consider a couple of the simplest cases. The simplest case is $a=0$, that is, $\bar u=\emptyset$. Then $\widehat{\mathcal{T}}^{(a)}(z)=\mathbb{D}(z)$, and hence, $\widehat{\mathcal{B}}^{(a)}(z)=T_{23}(z)$. We obtain \be{BVa0-1} |\Psi_{0,b}(\emptyset;\bar v)\rangle =T_{23}(\bar v)|0\rangle. \end{equation} We see that in this case the $\mathcal{R}_3$ Bethe vector reduces to the $\mathcal{R}_2$ Bethe vector. This is not surprising, because the state $|\Psi_{0,b}(\emptyset;\bar v)\rangle$ has quasiparticles of color $2$ only.
Thus, it should coincide with the Bethe vector of the $\mathfrak{gl}_2$ based models. Now let $a=b=1$. Then \be{Psi11-0} |\Psi_{1,1}(u;v)\rangle = \mathbb{B}(u) \widehat{\mathcal{B}}^{(1)}(v) |0\rangle\otimes |\Omega^{(1)}\rangle, \end{equation} where $|\Omega^{(1)}\rangle=\left(\begin{smallmatrix}1\\0\end{smallmatrix}\right)$. The matrix $\widehat{\mathcal{T}}^{(1)}(v)$ is given by \eqref{01-calT} at $z=v$ and $a=1$, that is \be{01-calTa1} \widehat{\mathcal{T}}^{(1)}_0(v)=\mathbb{D}_0(v)\mathcal{T}^{(1)}_0(v), \end{equation} where \be{t0a1} \mathcal{T}^{(1)}_0(v)=r_{01}(v,u)=\begin{pmatrix}\mathbf{1}&0\\0&\mathbf{1}\end{pmatrix}_0+g(v,u)\begin{pmatrix}E_{11} &E_{21}\\E_{12}& E_{22} \end{pmatrix}_0, \end{equation} and we have stressed by the subscript $0$ that the auxiliary space of this matrix is $V_0$. Thus, \be{explBhat} \widehat{\mathcal{B}}^{(1)}(v)=g(v,u)D_{11}(v)E_{21}+D_{12}(v)\bigl(1+g(v,u)E_{22}\bigr), \end{equation} and hence, the action of $\widehat{\mathcal{B}}^{(1)}(v)$ on $|\Omega^{(1)}\rangle$ is given by \be{actexplBhat} \widehat{\mathcal{B}}^{(1)}(v)\left(\begin{smallmatrix}1\\0\end{smallmatrix}\right)= g(v,u)D_{11}(v)\left(\begin{smallmatrix}0\\1\end{smallmatrix}\right)+D_{12}(v)\left(\begin{smallmatrix}1\\0\end{smallmatrix}\right). \end{equation} Equation \eqref{actexplBhat} gives us the components of the vector $\mathbb{F}(u;v)|0\rangle$ in the space $\mathcal{H}^{(1)}$: \be{compF01} \begin{aligned} &F_1(u;v)|0\rangle=D_{12}(v)|0\rangle=T_{23}(v)|0\rangle,\\ &F_2(u;v)|0\rangle=g(v,u)D_{11}(v)|0\rangle=g(v,u)T_{22}(v)|0\rangle=g(v,u)\lambda_{2}(v)|0\rangle. \end{aligned} \end{equation} Thus, we obtain the explicit expression for the Bethe vector $|\Psi_{1,1}(u;v)\rangle$: \begin{multline}\label{Psi11-01} |\Psi_{1,1}(u;v)\rangle = B_1(u)F_1(u;v)|0\rangle +B_2(u)F_2(u;v)|0\rangle\\ =T_{12}(u)T_{23}(v)|0\rangle +g(v,u)\lambda_{2}(v)T_{13}(u)|0\rangle, \end{multline} which coincides with \eqref{01-comb}.
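The small computation leading to \eqref{explBhat} and \eqref{actexplBhat} can be verified symbolically. The sketch below is a Python/SymPy illustration (not part of the original construction): it treats the entries $D_{ij}(v)$ as noncommutative symbols, uses a plain symbol $g$ for the scalar $g(v,u)$, builds $\widehat{\mathcal{T}}^{(1)}_0(v)=\mathbb{D}_0(v)r_{01}(v,u)$ in $V_0\otimes V_1$, and extracts $\widehat{\mathcal{B}}^{(1)}(v)$ together with its action on $\left(\begin{smallmatrix}1\\0\end{smallmatrix}\right)$:

```python
import sympy as sp
from sympy.physics.quantum import TensorProduct

# Entries of the block D(v) as noncommutative symbols; g stands for g(v,u).
D11, D12, D21, D22 = sp.symbols('D11 D12 D21 D22', commutative=False)
g = sp.Symbol('g')

def E(i, j):
    """2x2 elementary matrix E^{ij} (1-based indices)."""
    m = sp.zeros(2, 2)
    m[i - 1, j - 1] = 1
    return m

I2 = sp.eye(2)

# D_0(v) in the ordering V_0 (x) V_1 (acting trivially in V_1) ...
D0 = TensorProduct(sp.Matrix([[D11, D12], [D21, D22]]), I2)

# ... and r_{01}(v,u) = 1 + g(v,u) P_{01} with the permutation matrix P_{01}.
P01 = sum((TensorProduct(E(a, b), E(b, a)) for a in (1, 2) for b in (1, 2)),
          sp.zeros(4, 4))
T = sp.expand(D0 * (sp.eye(4) + g * P01))   # auxiliary monodromy matrix

# Its (1,2)-entry in V_0 is the creation operator, an operator acting in V_1.
Bhat = T[0:2, 2:4]
Bhat_expected = sp.expand(g * D11 * E(2, 1) + D12 * (I2 + g * E(2, 2)))
assert sp.expand(Bhat - Bhat_expected) == sp.zeros(2, 2)

# Acting on the auxiliary vacuum (1,0)^T reproduces the two components F_1, F_2.
w = sp.expand(Bhat * sp.Matrix([1, 0]))
assert sp.expand(w[0] - D12) == 0
assert sp.expand(w[1] - g * D11) == 0
```

The two assertions reproduce \eqref{explBhat} and the components $F_1$, $F_2$ of \eqref{compF01} before the vacuum is applied.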
This Bethe vector becomes on-shell, if $u$ and $v$ satisfy the system \eqref{01-GBEfull}. It is easy to see that for $a=b=1$ this system turns into \eqref{sysBE1}. \subsection{Remarks about different embedding\label{SS-RDE}} Let us say a few words about parametrization \eqref{01-T-matrix3}. This parametrization can also be used for constructing the Bethe vectors. The general strategy is the same in this case; however, several minor details are different. We recommend that the reader derive the formula for the Bethe vector using parametrization \eqref{01-T-matrix3} on their own. We restrict ourselves to several comments. In the case of embedding \eqref{01-T-matrix3} we deal with a $2\times 2$ matrix $\mathbb{A}'$ and a two-component vector-column \be{02-veccol} \mathbb{B}'(v)=\begin{pmatrix}B'_1(v)\\ B'_2(v)\end{pmatrix}=\begin{pmatrix}T_{13}(v)\\ T_{23}(v)\end{pmatrix}. \end{equation} Instead of \eqref{01-BVtensM} we use the following ansatz for the Bethe vectors: \be{01p-BVtens} |\Psi_{a,b}(\bar u;\bar v)\rangle = \Bigl(\mathbb{B}'_1(v_1)\mathbb{B}'_2(v_2)\dots \mathbb{B}'_b(v_b)\Bigr)^T \mathbb{F}'(\bar u;\bar v)|0\rangle. \end{equation} Here the superscript $T$ means transposition in the space $V_1\otimes\dots\otimes V_b$ (where each $V_k\sim\mathbb{C}^2$). The commutation relations between the operator-valued matrix $\mathbb{A}'$ and the operator-valued vector-column $\mathbb{B}'$ have the form \be{01p-DB-rmat} \mathbb{A}'_0(z)\mathbb{B}'_1(v)=r_{01}(v,z)\mathbb{B}'_1(v) \mathbb{A}'_0(z) +g(z,v)p_{01}\mathbb{B}'_1(z)\mathbb{A}'_0(v). \end{equation} Here, in contrast to \eqref{01-DB-rmat}, the $R$-matrix $r_{01}(v,z)$ and the permutation matrix $p_{01}$ act on other matrices from the left. Therefore, moving $\mathbb{A}'_0(z)$ through the product of $\mathbb{B}'_j(v_j)$ we obtain the product of the $R$-matrices at the extreme left position.
However, after the transposition we obtain the product of the $R$-matrices to the right of the product of the $\mathbb{B}'_j(v_j)$-operators. Then we require that the vector $\mathbb{F}'(\bar u;\bar v)|0\rangle$ be an eigenvector of the monodromy matrix $\widetilde{\mathcal{T}}^{(b)}(z)$: \be{01p-calT} \widetilde{\mathcal{T}}^{(b)}_0(z)=r'_{01}(z,v_1)\dots r'_{0b}(z,v_b)\mathbb{A}'_0(z). \end{equation} Here $r'_{0k}(u,v)=r^{t_k}_{0k}(-u,-v)$, where $t_k$ means the transposition in the space $V_k$. These matrices appear when we take the transposition of the product $r_{01}\dots r_{0b}$ in the space $V_1\otimes\dots\otimes V_b$. We leave it to the reader to prove that $r'(u,v)$ satisfies the $RTT$-relation with the $R$-matrix $r(u,v)$: \be{rtt-prim} r_{12}(u,v) r'_{13}(u,w) r'_{23}(v,w) = r'_{23}(v,w) r'_{13}(u,w) r_{12}(u,v). \end{equation} Thus, the product $r'_{01}(z,v_1)\dots r'_{0b}(z,v_b)$ also satisfies the $RTT$-relation with the $R$-matrix $r(u,v)$. The entries of $\widetilde{\mathcal{T}}^{(b)}(z)$ act in the space $\mathcal{H}^{(b)}\otimes \mathcal{H}$, where $\mathcal{H}^{(b)}=V_1\otimes\dots\otimes V_b$ has the following vacuum vector: \be{01p-vacXXX} |\widetilde\Omega^{(b)}\rangle =\underbrace{\left(\begin{smallmatrix} 0\\1 \end{smallmatrix}\right)\otimes \dots \otimes \left(\begin{smallmatrix} 0\\1 \end{smallmatrix}\right)}_{b\quad\text{times}}. \end{equation} If we set \be{01p-GMMTa} \widetilde{\mathcal{T}}^{(b)}(z)=\begin{pmatrix} \widetilde{\mathcal{A}}^{(b)}(z)&\widetilde{\mathcal{B}}^{(b)}(z)\\ \widetilde{\mathcal{C}}^{(b)}(z)&\widetilde{\mathcal{D}}^{(b)}(z)\end{pmatrix}, \end{equation} then the vector $\mathbb{F}'(\bar u;\bar v)|0\rangle$ has the following form: \be{01p-GF-BBB} \mathbb{F}'(\bar u;\bar v)|0\rangle=\widetilde{\mathcal{B}}^{(b)}(u_1)\dots \widetilde{\mathcal{B}}^{(b)}(u_a) |\widetilde{\Omega}^{(b)}\rangle\otimes |0\rangle.
\end{equation} We will see below that formulas for the Bethe vectors based on the embeddings \eqref{01-T-matrix2} and \eqref{01-T-matrix3} look very different. First, they have different ordering of the creation operators. Second, some of those operators have different arguments. Nevertheless, these different representations describe the same Bethe vector $|\Psi_{a,b}(\bar u;\bar v)\rangle$. \subsection{Remarks about \texorpdfstring{$\mathfrak{gl}_N$}{} Bethe vectors\label{SS-RBVglN}} The scheme described above does not change in the case of the models with $\mathfrak{gl}_N$-invariant $R$-matrix or its $q$-deformed analog \eqref{00-RUqglN}. We present the $N\times N$ monodromy matrix as a $2\times2$ block-matrix \be{01-T-matrixN} T(u)=\begin{pmatrix} A(u)& \mathbb{B}(u)\\ \mathbb{C}(u)& \mathbb{D}(u) \end{pmatrix}. \end{equation} Now the block $\mathbb{D}$ has the size $(N-1)\times (N-1)$. Respectively, $\mathbb{B}$ is the operator-valued vector-row with $N-1$ components \be{glNBB} \mathbb{B}(u)=\bigl(B_1(u),\dots,B_{N-1}(u)\bigr)=\bigl(T_{12}(u),\dots,T_{1,N}(u)\bigr). \end{equation} Then we look for the on-shell Bethe vectors in the form similar to \eqref{01-BVtensM} \be{01-BVglN} |\Psi\rangle = \mathbb{B}_1(u_1)\dots \mathbb{B}_a(u_a) \mathbb{F}|0\rangle. \end{equation} The vector $\mathbb{F}|0\rangle$ belongs to the space $\mathcal{H}\otimes\mathcal{H}^{(a)}$, where $\mathcal{H}^{(a)}$ is the tensor product \be{01-spaceHaM} \mathcal{H}^{(a)}=V_1\otimes\dots \otimes V_a, \qquad \text{where}\qquad V_j\sim \mathbb{C}^{N-1}. \end{equation} Otherwise, all the arguments remain unchanged. They lead us to the conclusion that the vector $\mathbb{F}|0\rangle$ must be an eigenvector of the transfer matrix of the model with $\mathfrak{gl}_{N-1}$-invariant $R$-matrix. Of course, an explicit expression for this vector is no longer given by \eqref{01-GF-BBB}, but is much more complicated. 
It is clear that using this method we obtain Bethe vectors depending on $N-1$ sets of variables\footnote{%
In the $\mathfrak{gl}_3$-based models we have two sets of variables $\bar u=\bar t^1$ and $\bar v=\bar t^2$.} $\bar t=\{\bar t^1,\dots,\bar t^{N-1}\}$. In turn, each set $\bar t^k$ consists of individual elements $\bar t^k=\{t^k_1,\dots,t^k_{a_k}\}$, where $a_k=\#\bar t^k$. Then we can refine formula \eqref{01-BVglN} as \be{01-BVglN1} |\Psi_{\bar a}(\bar t)\rangle = \mathbb{B}_1(t^1_1)\dots \mathbb{B}_{a_1}(t^1_{a_1}) \mathbb{F}(\bar t)|0\rangle, \end{equation} where $\bar a$ is a multi-index consisting of the cardinalities $\bar a=\{a_1,\dots,a_{N-1}\}$. In this formula, $\mathbb{F}(\bar t)|0\rangle$ is the Bethe vector of the monodromy matrix \be{01-calTglN} \widehat{\mathcal{T}}^{(a_1)}_0(z)=\mathbb{D}_0(z)r_{0,a_1}(z,t^1_{a_1})\dots r_{01}(z,t^1_1), \end{equation} where now the $R$-matrix $r(u,v)$ acts in $\mathbb{C}^{N-1}\otimes \mathbb{C}^{N-1}$. Equation \eqref{01-BVglN1} can be taken as the definition of the off-shell Bethe vector. This vector becomes on-shell if the Bethe parameters $\bar t$ satisfy a system of Bethe equations. It has the following form: \be{00-BE} \frac{\lambda_k(t_j^k)}{\lambda_{k+1}(t_j^k)}=\frac{f(t_j^k,\bar t_j^k)f(\bar t^{k+1},t_j^k)} {f(\bar t_j^k,t_j^k)f(t_j^k,\bar t^{k-1})}, \qquad \begin{array}{l}k=1,\dots,N-1,\\ j=1,\dots,a_k.\end{array} \end{equation} Here we set by definition $\bar t^0=\bar t^N=\emptyset$ and used the shorthand notation for the products of the $f$-functions over the sets $\bar t^k$. \section{Trace formula\label{S-TF}} \subsection{Bethe vector via trace formula\label{SS-BVTF}} The resulting formulas for Bethe vectors depend to a large extent on the embedding of the $\mathcal{R}_2$ algebra in the $\mathcal{R}_3$ algebra. Depending on the embedding, the roles of the parameters $\bar u$ and $\bar v$ are also very different.
In this section, we consider one more representation for the Bethe vectors \cite{TarV94,TarV13,BelRag08}. The main advantage of this representation is that it can be easily generalized to the case of models with $\mathfrak{gl}_N$-invariant $R$-matrix (although we will still restrict ourselves to the case $\mathfrak{gl}_3$). Besides, the Bethe parameters $\bar u$ and $\bar v$ enter this representation in a more symmetric way. Finally, the new formula for Bethe vectors will allow us to prove the symmetry of these vectors with respect to the parameters $\bar u$, which has not yet been proved. Consider a tensor product $V_{k_1}\otimes\dots\otimes V_{k_a}\otimes V_{n_1}\otimes\dots\otimes V_{n_b}$, where each $V_j\sim\mathbb{C}^3$. Let \be{Tb} \mathbb{T}_{\bar k}(\bar u)=T_{k_1}(u_1)\dots T_{k_a}(u_a),\qquad \mathbb{T}_{\bar n}(\bar v)=T_{n_1}(v_1)\dots T_{n_b}(v_b), \end{equation} and \be{Rb} \mathbb{R}_{\bar n,\bar k}(\bar v,\bar u)=\prod_{i=1}^b \prod_{j=a}^{1} R_{n_i,k_j}(v_i,u_j). \end{equation} Here every $T_j$ acts in $V_j\otimes\mathcal{H}$. Each $R$-matrix $R_{i,j}$ acts in $V_i\otimes V_j$. We would like to draw the reader's attention to the ordering of the $R$-matrices in the double product \eqref{Rb}. There the index $i$ runs in the standard increasing order, while the index $j$ runs in decreasing order. For example, for $a=b=2$, the product \eqref{Rb} reads \be{Rb-examp} \mathbb{R}_{\bar n,\bar k}(\bar v,\bar u)= R_{n_1,k_2}(v_1,u_2)R_{n_1,k_1}(v_1,u_1)R_{n_2,k_2}(v_2,u_2)R_{n_2,k_1}(v_2,u_1). \end{equation} \begin{prop}\label{P-TF} The off-shell Bethe vectors of the $\mathfrak{gl}_3$-invariant models have the following form: \be{BVtf} |\Psi_{a,b}(\bar u;\bar v)\rangle=\mathop{\rm tr}\nolimits_{\bar k,\bar n}\Bigl( \mathbb{T}_{\bar k}(\bar u)\mathbb{T}_{\bar n}(\bar v)\mathbb{R}_{\bar n,\bar k}(\bar v,\bar u) E_{k_1}^{21}\dots E_{k_a}^{21} E_{n_1}^{32}\dots E_{n_b}^{32}\Bigr)|0\rangle.
\end{equation} The trace is taken over all the spaces $V_{k_1},\dots,V_{k_a},V_{n_1},\dots,V_{n_b}$. The matrices $E_{k_j}^{21}$ and $E_{n_j}^{32}$ are the standard basis matrices that respectively act in the spaces $V_{k_j}$ and $V_{n_j}$. In contrast to the previous section, here we use superscripts to label the different standard basis matrices. \end{prop} Equation \eqref{BVtf} is known as a {\it trace formula}. We will prove that the trace formula is equivalent to the representation obtained in the previous section. \textsl{Proof.} Let us present all the monodromy matrices in \eqref{BVtf} as \be{decMon} T_{k_s}(u_s)=\sum_{i,j=1}^3T_{i,j}(u_s)E_{k_s}^{i,j},\qquad T_{n_p}(v_p)=\sum_{\alpha,\beta=1}^3T_{\alpha,\beta}(v_p)E_{n_p}^{\alpha,\beta}. \end{equation} Substituting this into the trace formula we obtain \begin{multline}\label{BVtfdec} |\Psi_{a,b}(\bar u;\bar v)\rangle=\sum_{\bar i,\bar j=1}^3 \sum_{\bar \alpha,\bar \beta=1}^3 T_{i_1,j_1}(u_1)\dots T_{i_a,j_a}(u_a)T_{\alpha_1,\beta_1}(v_1)\dots T_{\alpha_b,\beta_b}(v_b)\\ \times \mathop{\rm tr}\nolimits_{\bar k,\bar n}\Bigl( \mathbb{R}_{\bar n,\bar k}(\bar v,\bar u) E_{k_1}^{21}\dots E_{k_a}^{21} E_{n_1}^{32}\dots E_{n_b}^{32}\; E_{k_1}^{i_1,j_1}\dots E_{k_a}^{i_a,j_a} E_{n_1}^{\alpha_1,\beta_1}\dots E_{n_b}^{\alpha_b,\beta_b}\Bigr)|0\rangle, \end{multline} where we have used the cyclicity of the trace. The sum is taken over all $i_s$, $j_s$ (with $s=1,\dots,a$) and all $\alpha_p$, $\beta_p$ (with $p=1,\dots,b$). Taking the product of the $E$-matrices via $E^{ab}E^{cd}=\delta_{bc}E^{ad}$ we find that all $i_s=1$ and all $\alpha_p=2$. Then \begin{multline}\label{BVtfdec1} |\Psi_{a,b}(\bar u;\bar v)\rangle=\sum_{\bar j=1}^3 \sum_{\bar \beta=1}^3 T_{1,j_1}(u_1)\dots T_{1,j_a}(u_a)T_{2,\beta_1}(v_1)\dots T_{2,\beta_b}(v_b)\\ \times \mathop{\rm tr}\nolimits_{\bar k,\bar n}\Bigl( \mathbb{R}_{\bar n,\bar k}(\bar v,\bar u) E_{k_1}^{2,j_1}\dots E_{k_a}^{2,j_a} E_{n_1}^{3,\beta_1}\dots E_{n_b}^{3,\beta_b}\Bigr)|0\rangle.
\end{multline} To calculate the remaining trace we present the product of the $R$-matrices $\mathbb{R}_{\bar n,\bar k}(\bar v,\bar u)$ as \be{02-multi} \mathbb{R}_{\bar n,\bar k}(\bar v,\bar u)=\sum_{\bar{\lambda},\bar{\mu},\bar p,\bar q=1}^3 r^{\lambda_1\mu_1,\dots,\lambda_b\mu_b;p_1,q_1,\dots,p_a,q_a}(\bar v,\bar u) E_{n_1}^{\lambda_1,\mu_1}\dots E_{n_b}^{\lambda_b,\mu_b} E_{k_1}^{p_1,q_1}\dots E_{k_a}^{p_a,q_a} , \end{equation} where $r^{\lambda_1\mu_1,\dots,p_a,q_a}(\bar v,\bar u)$ are numeric coefficients, and the sum is taken over all $\bar{\lambda}=\{\lambda_1,\dots,\lambda_b\}$, $\bar{\mu}=\{\mu_1,\dots,\mu_b\}$, $\bar p=\{p_1,\dots,p_a\}$, and $\bar q=\{q_1,\dots,q_a\}$. Then we obtain (see appendix~\ref{A-trace} for more details) \be{rsm} \mathop{\rm tr}\nolimits_{\bar k,\bar n}\Bigl( \mathbb{R}_{\bar n,\bar k}(\bar v,\bar u)% E_{k_1}^{2,j_1}\dots E_{k_a}^{2,j_a} E_{n_1}^{3,\beta_1}\dots E_{n_b}^{3,\beta_b}\Bigr)= r^{\beta_13,\dots,\beta_b3;j_1,2,\dots,j_a 2}(\bar v,\bar u)\equiv \bs{r}^{\bar\beta,\bar j}(\bar v,\bar u). \end{equation} Now we should compute the coefficients $\bs{r}^{\bar\beta,\bar j}(\bar v,\bar u)$. For this, it is convenient to use a diagram technique \cite{Bax82,Sla18}. We present a single $R$-matrix as a vertex (see Fig.~\ref{P-Rvertex}). \begin{figure}[ht!] \begin{picture}(440,70) \put(100,10){% \begin{picture}(400,70) \put(40,0){\line(0,1){50}} \put(15,25){\line(1,0){50}} \put(18,18){$\alpha$} \put(58,28){$\beta$} \put(43,5){$i$} \put(33,40){$j$} \put(5,22){$v$} \put(37,-8){$u$} \put(-70,22){$R^{\alpha\beta; ij}(v,u)=$} \end{picture}} \put(300,10){% \begin{picture}(400,70) \put(40,0){\line(0,1){50}} \put(15,25){\line(1,0){50}} \put(71,24){$V_{n_p}$} \put(34,55){$V_{k_s}$} \put(5,22){$v$} \put(37,-8){$u$} \put(-70,22){$R_{n_p,k_s}(v,u)=$} \end{picture}} \end{picture} \caption{\label{P-Rvertex} $R$-matrix as a vertex. Horizontal edge is associated with the parameter $v$ and the space $V_{n_p}$. 
Vertical edge is associated with the parameter $u$ and the space $V_{k_s}$.} \end{figure} Observe that $R^{\alpha\beta; ij}(v,u)\ne 0$ only if either $\alpha=\beta$ and $i=j$, or $\alpha=j$ and $i=\beta$. \begin{figure}[ht!] \begin{picture}(440,70) \put(100,10){%
\begin{picture}(400,70) \put(40,0){\line(0,1){50}} \put(15,25){\line(1,0){50}} \put(16,16){$\beta$} \put(58,28){$\beta$} \put(43,5){$j$} \put(33,40){$j$} \put(5,22){$v$} \put(37,-8){$u$} \thicklines \put(35,35){\vector(0,-1){30}} \put(35.5,35){\vector(0,-1){30}} \put(54,20){\vector(-1,0){30}} \put(54,20.5){\vector(-1,0){30}} \put(44,-15){(a)} \end{picture}} \put(300,10){%
\begin{picture}(400,70) \put(40,0){\line(0,1){50}} \put(15,25){\line(1,0){50}} \put(18,17){$j$} \put(63,28){$\beta$} \put(33,3){$\beta$} \put(43,45){$j$} \put(5,22){$v$} \put(37,-8){$u$} \thicklines \qbezier(15,28)(38,26)(38,48) \qbezier(15,28.5)(38.5,26.5)(38.5,48) \put(20,28){\vector(-1,0){5}} \qbezier(42,2)(42,22)(64,22) \qbezier(42.5,2)(42.5,22.5)(64,22.5) \put(43,3){\vector(0,-1){5}} \put(20,-15){(b)} \end{picture}} \end{picture} \caption{\label{P-Mind} Two moves of the indices.} \end{figure} Thus, an index entering the vertex from the north can either continue to the south or turn to the west. Respectively, an index entering the vertex from the east can either continue to the west or turn to the south (see Fig.~\ref{P-Mind}). The product of the $R$-matrices \be{RRRR} R_{n_p,k_a}(v_p,u_a)\dots R_{n_p,k_1}(v_p,u_1) \end{equation} is given by the horizontal line (see Fig.~\ref{P-prodR}). \begin{figure}[ht!]
\begin{picture}(440,70) \put(50,0){% \begin{picture}(400,70) \multiput(40,0)(40,0){7}{\line(0,1){50}} \put(40,0){\line(0,1){50}} \put(80,0){\line(0,1){50}} \put(120,0){\line(0,1){50}} \put(15,25){\line(1,0){235}} \put(240,25){\line(1,0){60}} \put(18,18){$\alpha$} \put(43,5){$i_a$} \put(42,40){$j_a$} \put(37,-10){$V_{k_a}$} \put(37,55){$u_a$} \put(290,16){$\beta$} \put(305,21){$V_{n_p}$} \put(0,21){$v_p$} \put(83,5){$i_{a-1}$} \put(82,40){$j_{a-1}$} \put(77,-10){$V_{k_{a-1}}$} \put(77,55){$u_{a-1}$} \put(232,5){$i_{2}$} \put(230,40){$j_{2}$} \put(235,-10){$V_{k_2}$} \put(235,55){$u_2$} \put(268,5){$i_1$} \put(270,40){$j_1$} \put(275,-10){$V_{k_1}$} \put(275,55){$u_1$} \end{picture}} \end{picture} \caption{\label{P-prodR} Product of the $R$-matrices.} \end{figure} Respectively, the total product $\mathbb{R}_{\bar n,\bar k}(\bar v,\bar u)$ looks as it is shown on the Fig~\ref{P-totprodR}. \begin{figure}[ht!] \begin{picture}(450,130) \put(160,-10){% \begin{picture}(200,200) \multiput(10,20)(20,0){6}{\line(0,1){100}} \multiput(0,30)(0,20){5}{\line(1,0){120}} \put(7,132){$\scriptstyle V_{k_a}$} \put(109,132){$\scriptstyle V_{k_1}$} \put(89,132){$\scriptstyle V_{k_2}$} \put(7,12){$\scriptstyle u_{a}$} \put(109,12){$\scriptstyle u_{1}$} \put(89,12){$\scriptstyle u_{2}$} \put(-20,28){$\scriptstyle v_{b}$} \put(-20,88){$\scriptstyle v_{2}$} \put(-20,108){$\scriptstyle v_{1}$} \put(133,28){$\scriptstyle V_{n_b}$} \put(133,88){$\scriptstyle V_{n_2}$} \put(133,108){$\scriptstyle V_{n_1}$} \end{picture}} \put(80,58){$\mathbb{R}_{\bar n,\bar k}(\bar v,\bar u)=$} \end{picture} \caption{\label{P-totprodR} Graphical interpretation of $\mathbb{R}_{\bar n,\bar k}(\bar v,\bar u)$ } \end{figure} \begin{figure}[ht!] 
\begin{picture}(450,130) \put(160,-10){% \begin{picture}(200,130) \multiput(10,20)(20,0){6}{\line(0,1){100}} \multiput(0,30)(0,20){5}{\line(1,0){120}} \multiput(12,117)(20,0){6}{$\scriptstyle 2$} \multiput(118,31)(0,20){5}{$\scriptstyle 3$} \put(7,132){$\scriptstyle V_{k_a}$} \put(109,132){$\scriptstyle V_{k_1}$} \put(89,132){$\scriptstyle V_{k_2}$} \put(7,12){$\scriptstyle u_{a}$} \put(109,12){$\scriptstyle u_{1}$} \put(89,12){$\scriptstyle u_{2}$} \put(12,22){$\scriptstyle j_{a}$} \put(94,22){$\scriptstyle j_{2}$} \put(114,22){$\scriptstyle j_{1}$} \put(-20,28){$\scriptstyle v_{b}$} \put(-20,88){$\scriptstyle v_{2}$} \put(-20,108){$\scriptstyle v_{1}$} \put(-5,22){$\scriptstyle \beta_{b}$} \put(-5,82){$\scriptstyle \beta_{2}$} \put(-5,102){$\scriptstyle \beta_{1}$} \put(133,28){$\scriptstyle V_{n_b}$} \put(133,88){$\scriptstyle V_{n_2}$} \put(133,108){$\scriptstyle V_{n_1}$} \end{picture}} \put(76,60){$\bs{r}^{\bar\beta,\bar j}(\bar v,\bar u)=$} \end{picture} \caption{\label{P-tottrace}Graphical interpretation of the matrix element $\bs{r}^{\bar\beta,\bar j}(\bar v,\bar u)$. } \end{figure} Finally, the matrix element $\bs{r}^{\bar\beta,\bar j}(\bar v,\bar u)$ has a graphical representation shown on Fig.~\ref{P-tottrace}. Generically, the indices on the edges of the lattice on Fig.~\ref{P-totprodR} can take three values: $1,2,3$. However, in the case of the lattice on Fig.~\ref{P-tottrace} the value $1$ is forbidden. Indeed, we have seen that moving through any vertex, every index goes in the direction from the north-east to the south-west. \begin{figure}[ht!] 
\begin{picture}(450,140) \put(160,-10){%
\begin{picture}(200,200) \multiput(10,20)(20,0){6}{\line(0,1){100}} \multiput(0,30)(0,20){5}{\line(1,0){120}} \multiput(12,117)(20,0){6}{$\scriptstyle 2$} \multiput(118,31)(0,20){5}{$\scriptstyle 3$} \put(7,132){$\scriptstyle V_{k_a}$} \put(109,132){$\scriptstyle V_{k_1}$} \put(89,132){$\scriptstyle V_{k_2}$} \put(7,12){$\scriptstyle u_{a}$} \put(109,12){$\scriptstyle u_{1}$} \put(89,12){$\scriptstyle u_{2}$} \put(12,22){$\scriptstyle j_{a}$} \put(94,22){$\scriptstyle j_{2}$} \put(114,22){$\scriptstyle j_{1}$} \put(-20,28){$\scriptstyle v_{b}$} \put(-20,88){$\scriptstyle v_{2}$} \put(-20,108){$\scriptstyle v_{1}$} \put(-5,22){$\scriptstyle \beta_{b}$} \put(-5,82){$\scriptstyle \beta_{2}$} \put(-5,102){$\scriptstyle \beta_{1}$} \put(133,28){$\scriptstyle V_{n_b}$} \put(133,88){$\scriptstyle V_{n_2}$} \put(133,108){$\scriptstyle V_{n_1}$} \thicklines \put(30,20){\line(0,1){30}} \put(30,50){\line(1,0){20}} \put(50,50){\line(0,1){20}} \put(50,70){\line(1,0){40}} \put(90,70){\vector(-1,0){30}} \put(90,70){\line(0,1){40}} \put(90,110){\line(1,0){20}} \put(110,110){\line(0,1){10}} \put(30.5,20){\line(0,1){30}} \put(30,50.5){\line(1,0){20}} \put(50.5,50){\line(0,1){20}} \put(50,70.5){\line(1,0){40}} \put(90,70){\vector(-1,0){30}} \put(90.5,70){\line(0,1){40}} \put(90,110.5){\line(1,0){20}} \put(110.5,110){\line(0,1){10}} \end{picture}} \end{picture} \caption{\label{P-conind} Line of constant index } \end{figure} Thus, any index of an arbitrary edge has its source either on the northern or the eastern lattice boundary. But all the indices on those boundaries take the values $2$ or $3$. Thus, the index $1$ does not appear on the edges of the lattice in Fig.~\ref{P-conind}. The above consideration shows that the original $\mathfrak{gl}_3$-invariant $R$-matrix $R^{\alpha\beta;ij}(u,v)$ turns into the $\mathfrak{gl}_2$-invariant $R$-matrix $r^{\alpha\beta;ij}(u,v)$, where all the indices take values $2$ and $3$.
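This exclusion of the index $1$ can be checked numerically in the simplest case $a=b=1$, where the coefficient \eqref{rsm} is a single matrix element of $R(v,u)$. The following Python sketch is an illustration only; it assumes the rational $R$-matrix $R(v,u)=\mathbf{1}+g(v,u)P$ with $g(v,u)=c/(v-u)$, as used throughout these notes:

```python
import numpy as np

c = 1.0
def g(u, v): return c / (u - v)

def E(i, j):
    """3x3 elementary matrix E^{ij} (1-based indices)."""
    m = np.zeros((3, 3))
    m[i - 1, j - 1] = 1.0
    return m

# Rational gl_3 R-matrix R(v,u) = 1 + g(v,u) P on V_n (x) V_k.
P = sum(np.kron(E(a, b), E(b, a)) for a in (1, 2, 3) for b in (1, 2, 3))
v, u = 0.7, -0.4
R = np.eye(9) + g(v, u) * P

def coeff(beta, j):
    """r^{beta 3; j 2}(v,u) = tr_{k,n}( R_{n,k}(v,u) E_k^{2,j} E_n^{3,beta} )."""
    return np.trace(R @ np.kron(E(3, beta), E(2, j)))

# The index 1 never survives: every coefficient with beta = 1 or j = 1 vanishes.
for t in (1, 2, 3):
    assert abs(coeff(1, t)) < 1e-12
    assert abs(coeff(t, 1)) < 1e-12

# For beta, j in {2,3} the coefficients are exactly the matrix elements of the
# gl_2-invariant R-matrix 1 + g(v,u) p restricted to the indices {2,3}.
for beta in (2, 3):
    for j in (2, 3):
        expected = float(beta == 3 and j == 2) + g(v, u) * float(beta == 2 and j == 3)
        assert abs(coeff(beta, j) - expected) < 1e-12
```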
Let \be{S} \mathcal{T}^{(a)}(v_p|\bar u)=r_{n_p,k_a}(v_p,u_a)\dots r_{n_p,k_1}(v_p,u_1). \end{equation} It is easy to see that $\mathcal{T}^{(a)}(v_p|\bar u)$ coincides with the monodromy matrix introduced by \eqref{01-calT}. Then \be{r-T} \bs{r}^{\bar\beta,\bar j}(\bar v,\bar u)= \prod_{p=1}^b \mathcal{T}^{(a)}_{\beta_p,3}(v_p|\bar u), \end{equation} and we obtain \begin{equation}\label{BVtfdec3} |\Psi_{a,b}(\bar u;\bar v)\rangle=\sum_{\bar j=2}^3 \sum_{\bar \beta=2}^3 T_{1,j_1}(u_1)\dots T_{1,j_a}(u_a)T_{2,\beta_1}(v_1)\dots T_{2,\beta_b}(v_b) \prod_{p=1}^b \mathcal{T}^{(a)}_{\beta_p,3}(v_p|\bar u)|0\rangle. \end{equation} Observe that here we have changed the summation limits for $\bar j$ and $\bar\beta$. When taking the sum over $\bar j$ one should remember that the monodromy matrix $\mathcal{T}^{(a)}_{\beta_p,3}(v_p|\bar u)$ acts in the tensor product of $V_{k_s}$, $s=1,\dots,a$, where it has indices $j_s$. These indices are not shown explicitly in \eqref{BVtfdec3}. If we introduce \be{D-new} \mathbb{D}(v)=\begin{pmatrix} T_{22}(v)&T_{23}(v)\\ T_{32}(v)&T_{33}(v)\end{pmatrix}, \end{equation} and \be{tT} \widehat{\mathcal{T}}^{(a)}(v_p|\bar u)=\mathbb{D}(v_p)\mathcal{T}^{(a)}(v_p|\bar u), \end{equation} then \eqref{BVtfdec3} takes the form \begin{equation}\label{BVtfdec4} |\Psi_{a,b}(\bar u;\bar v)\rangle=\sum_{\bar j=2}^3 T_{1,j_1}(u_1)\dots T_{1,j_a}(u_a) \prod_{p=1}^b \widehat{\mathcal{T}}^{(a)}_{2,3}(v_p|\bar u)|0\rangle. \end{equation} Now it becomes obvious that this formula coincides with \eqref{01-BVtensM}, where $\mathbb{F}(\bar u;\bar v)|0\rangle$ is given by \eqref{01-GF-BBB}. Concluding this section, we note that the trace formula has a fairly obvious generalization to the case of models with $\mathfrak{gl}_N$-invariant $R$-matrix. We do not give the explicit formula, since this would require the introduction of a large number of new notations. The reader, however, can find this formula in \cite{TarV13}. 
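For $a=b=1$, the equivalence just established can also be confirmed by a direct symbolic evaluation of the trace \eqref{BVtf}. The sketch below (Python/SymPy, an illustration outside the original derivation) treats the entries $T_{ij}(u)$ and $T_{ij}(v)$ as noncommutative symbols and uses the symbol gv for $g(v,u)$; the trace reproduces $T_{12}(u)T_{23}(v)+g(v,u)T_{13}(u)T_{22}(v)$, i.e.\ \eqref{Psi11-01} before the replacement $T_{22}(v)|0\rangle=\lambda_2(v)|0\rangle$:

```python
import sympy as sp
from sympy.physics.quantum import TensorProduct

# Entries T_{ij}(u), T_{ij}(v) as noncommutative symbols; gv stands for g(v,u).
Tu = [[sp.Symbol(f'Tu{i}{j}', commutative=False) for j in (1, 2, 3)] for i in (1, 2, 3)]
Tv = [[sp.Symbol(f'Tv{i}{j}', commutative=False) for j in (1, 2, 3)] for i in (1, 2, 3)]
gv = sp.Symbol('gv')

def E(i, j):
    m = sp.zeros(3, 3)
    m[i - 1, j - 1] = 1
    return m

I3 = sp.eye(3)
Z9 = sp.zeros(9, 9)

# Space ordering V_k (x) V_n; each monodromy matrix acts in its own auxiliary space.
Tk = sum((Tu[i][j] * TensorProduct(E(i + 1, j + 1), I3) for i in range(3) for j in range(3)), Z9)
Tn = sum((Tv[i][j] * TensorProduct(I3, E(i + 1, j + 1)) for i in range(3) for j in range(3)), Z9)
P = sum((TensorProduct(E(a, b), E(b, a)) for a in (1, 2, 3) for b in (1, 2, 3)), Z9)
Rnk = sp.eye(9) + gv * P                 # R_{n,k}(v,u)
Ek21 = TensorProduct(E(2, 1), I3)
En32 = TensorProduct(I3, E(3, 2))

Psi = sp.expand((Tk * Tn * Rnk * Ek21 * En32).trace())

# Expected: T_12(u) T_23(v) + g(v,u) T_13(u) T_22(v).
expected = Tu[0][1] * Tv[1][2] + gv * Tu[0][2] * Tv[1][1]
assert sp.expand(Psi - expected) == 0
```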
\subsection{Symmetry over \texorpdfstring{$\bar u$}{}\label{SS-SOU}} The trace formula \eqref{BVtf} allows us to solve the long-standing problem of proving the symmetry of the Bethe vectors over the set $\bar u$. For this we first consider some properties of the matrix $R(u,v)P$: \be{RP} R(u,v)P=\sum_{a,b=1}^3E^{a,b}\otimes E^{b,a}+g(u,v)\mathbb{I}. \end{equation} Let $j<3$. Then \be{iden1EE} f(u,v)E^{j+1,j}\otimes E^{j+1,j} = R(u,v)P E^{j+1,j}\otimes E^{j+1,j} =E^{j+1,j}\otimes E^{j+1,j}R(u,v)P . \end{equation} Indeed, using \eqref{RP} we have, for example, \begin{multline}\label{exs1} E^{j+1,j}\otimes E^{j+1,j}R(u,v)P =\sum_{a,b=1}^3(E^{j+1,j}\otimes E^{j+1,j}) (E^{a,b}\otimes E^{b,a}) +g(u,v)E^{j+1,j}\otimes E^{j+1,j}\\ =g(u,v)E^{j+1,j}\otimes E^{j+1,j}+\sum_{a,b=1}^3E^{j+1,b}\otimes E^{j+1,a}\delta_{ja}\delta_{jb}=f(u,v)E^{j+1,j}\otimes E^{j+1,j}. \end{multline} Consider now the right action of the matrix $R_{2,1}(u_2,u_1)P_{2,1}$ on the product of the $R$-matrices $R_{n,2}(v,u_2)R_{n,1}(v,u_1)$, where the subscript $n$ refers to some space $V_n$ which is different from $V_1$ and $V_2$. Due to the Yang--Baxter equation we have \be{RRRP2} R_{n,2}(v,u_2)R_{n,1}(v,u_1)\; R_{2,1}(u_2,u_1)P_{2,1}=R_{2,1}(u_2,u_1)\;R_{n,1}(v,u_1)R_{n,2}(v,u_2)\; P_{2,1}. \end{equation} Then moving the permutation matrix to the left we exchange the spaces $V_1$ and $V_2$: \be{RRRP3} R_{n,2}(v,u_2)R_{n,1}(v,u_1)\; R_{2,1}(u_2,u_1)P_{2,1}=R_{2,1}(u_2,u_1)P_{2,1}\; R_{n,2}(v,u_1)R_{n,1}(v,u_2) . \end{equation} Thus, acting from the right on $R_{n,2}(v,u_2)R_{n,1}(v,u_1)$, the matrix $R_{2,1}(u_2,u_1)P_{2,1}$ in fact makes the replacement $u_1\leftrightarrow u_2$. Similarly $R_{2,1}(u_2,u_1)P_{2,1}$ acts on the product $T_1(u_1)T_2(u_2)$.
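The identities \eqref{RP}, \eqref{iden1EE}, and \eqref{RRRP3} are easy to verify numerically. The following Python sketch is an illustration only; it assumes the rational $R$-matrix $R(u,v)=\mathbf{1}+g(u,v)P$ with $g(u,v)=c/(u-v)$ and $f=1+g$, and checks all three relations at generic parameter values:

```python
import numpy as np

c = 1.0
def g(u, v): return c / (u - v)
def f(u, v): return 1.0 + g(u, v)

def E(i, j):
    m = np.zeros((3, 3))
    m[i - 1, j - 1] = 1.0
    return m

I3 = np.eye(3)
idx = (1, 2, 3)

P = sum(np.kron(E(a, b), E(b, a)) for a in idx for b in idx)
def R(u, v): return np.eye(9) + g(u, v) * P

u, v = 1.3, 0.2

# (RP):  R(u,v)P = sum_{a,b} E^{ab} (x) E^{ba} + g(u,v) 1.
assert np.allclose(R(u, v) @ P, P + g(u, v) * np.eye(9))

# (iden1EE):  f(u,v) E^{j+1,j} (x) E^{j+1,j} = R(u,v)P E^{j+1,j} (x) E^{j+1,j}
#                                           = E^{j+1,j} (x) E^{j+1,j} R(u,v)P.
for j in (1, 2):
    EE = np.kron(E(j + 1, j), E(j + 1, j))
    assert np.allclose(R(u, v) @ P @ EE, f(u, v) * EE)
    assert np.allclose(EE @ R(u, v) @ P, f(u, v) * EE)

# (RRRP3) in V_n (x) V_1 (x) V_2 (dimension 27).
def kron3(A, B, C): return np.kron(np.kron(A, B), C)
Pn1 = sum(kron3(E(a, b), E(b, a), I3) for a in idx for b in idx)
Pn2 = sum(kron3(E(a, b), I3, E(b, a)) for a in idx for b in idx)
P12 = sum(kron3(I3, E(a, b), E(b, a)) for a in idx for b in idx)
I27 = np.eye(27)
def Rn1(x, y): return I27 + g(x, y) * Pn1
def Rn2(x, y): return I27 + g(x, y) * Pn2
def R21(x, y): return I27 + g(x, y) * P12

w, u1, u2 = 0.9, -0.5, 2.1   # generic values of v, u_1, u_2
lhs = Rn2(w, u2) @ Rn1(w, u1) @ R21(u2, u1) @ P12
rhs = R21(u2, u1) @ P12 @ Rn2(w, u1) @ Rn1(w, u2)
assert np.allclose(lhs, rhs)
```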
For this we first write down the $RTT$-relation \be{Rtt1} R_{1,2}(u_1,u_2)T_1(u_1)T_2(u_2)=T_2(u_2)T_1(u_1)R_{1,2}(u_1,u_2), \end{equation} and multiply it from both sides by $R_{2,1}(u_2,u_1)$: \be{Rtt2} T_1(u_1)T_2(u_2)R_{2,1}(u_2,u_1)=R_{2,1}(u_2,u_1)T_2(u_2)T_1(u_1). \end{equation} Here we used the fact that $R_{1,2}(u_1,u_2)R_{2,1}(u_2,u_1)=f(u_1,u_2)f(u_2,u_1)\mathbb{I}$. Consider now how the permutation matrix acts on $T_1(u_1)T_2(u_2)$. We have \be{TT1} T_1(u_1)T_2(u_2)=\sum_{i,j,k,l}T_{ij}(u_1)T_{kl}(u_2) E^{ij}_1 E^{kl}_2. \end{equation} Then \begin{multline}\label{TT2} P_{1,2}T_1(u_1)T_2(u_2)P_{1,2}=\sum_{a,b,c,d}\sum_{i,j,k,l}T_{ij}(u_1)T_{kl}(u_2) E^{ab}_1 E^{ba}_2E^{ij}_1 E^{kl}_2 E^{cd}_1 E^{dc}_2\\ =\sum_{a,b,c,d}\sum_{i,j,k,l}T_{ij}(u_1)T_{kl}(u_2) E^{ab}_1 E^{id}_1\delta_{jc} E^{ba}_2 E^{kc}_2 \delta_{ld}\\ =\sum_{a,b,c,d}\sum_{i,j,k,l}T_{ij}(u_1)T_{kl}(u_2) E^{ad}_1 \delta_{bi}\delta_{jc} E^{bc}_2 \delta_{ak} \delta_{ld}\\ =\sum_{i,j,k,l}T_{ij}(u_1)T_{kl}(u_2) E^{kl}_1 E^{ij}_2 = T_2(u_1)T_1(u_2). \end{multline} Thus, using \eqref{Rtt2} and \eqref{TT2} we obtain \be{ttRP} T_1(u_1)T_2(u_2)\;R_{2,1}(u_2,u_1)P_{1,2}=R_{2,1}(u_2,u_1)\;T_2(u_2)T_1(u_1)\;P_{1,2}=R_{2,1}(u_2,u_1)P_{1,2}\;T_1(u_2)T_2(u_1). \end{equation} Hence, here we also deal with the replacement $u_1\leftrightarrow u_2$. Now everything is ready for the proof of the symmetry of the Bethe vector $|\Psi_{a,b}(\bar u;\bar v)\rangle$ over the parameters $\bar u$. Consider a vector \be{BVtf-f} f(u_{i+1},u_i)|\Psi_{a,b}(\bar u;\bar v)\rangle=f(u_{i+1},u_i)\mathop{\rm tr}\nolimits_{\bar k,\bar n}\Bigl( \mathbb{T}_{\bar k}(\bar u)\mathbb{T}_{\bar n}(\bar v) \mathbb{R}_{\bar n,\bar k}(\bar v,\bar u) E_{k_1}^{21}\dots E_{k_a}^{21} E_{n_1}^{32}\dots E_{n_b}^{32}\Bigr)|0\rangle, \end{equation} for some $i=1,\dots,a-1$. 
Due to \eqref{iden1EE} we have \begin{multline}\label{BVtf-f1} f(u_{i+1},u_i)|\Psi_{a,b}(\bar u;\bar v)\rangle=\mathop{\rm tr}\nolimits_{\bar k,\bar n}\Bigl( \mathbb{T}_{\bar k}(\bar u)\mathbb{T}_{\bar n}(\bar v)\mathbb{R}_{\bar n,\bar k}(\bar v,\bar u) E_{k_1}^{21}\dots E_{k_a}^{21} E_{n_1}^{32}\dots E_{n_b}^{32}\\ \times R_{k_{i+1},k_i}(u_{i+1},u_i)P_{k_{i+1},k_i}\Bigr)|0\rangle. \end{multline} The matrix $R_{k_{i+1},k_i}(u_{i+1},u_i)P_{k_{i+1},k_i}$ can first be moved to the left through the products of the matrices $E_{n_1}^{32}\dots E_{n_b}^{32}$ and $E_{k_1}^{21}\dots E_{k_a}^{21}$. Then, moving through the $R$-matrices $\mathbb{R}_{\bar n,\bar k}$ we should exchange it with the combinations $R_{n_s,k_{i+1}}(v_{n_s},u_{i+1})R_{n_s,k_i}(v_{n_s},u_i)$. Due to \eqref{RRRP3} this leads to the replacement $u_i\leftrightarrow u_{i+1}$ in $\mathbb{R}_{\bar n,\bar k}$. After this we should move $R_{k_{i+1},k_i}(u_{i+1},u_i)P_{k_{i+1},k_i}$ through the product of the $T$-matrices $\mathbb{T}_{\bar k}(\bar u)$. Here we meet the combination $T_{k_i}(u_{i})T_{k_{i+1}}(u_{i+1})$, and again we obtain the replacement $u_i\leftrightarrow u_{i+1}$ in $\mathbb{T}_{\bar k}(\bar u)$. Thus, we arrive at \begin{multline}\label{BVtf-f2} f(u_{i+1},u_i)|\Psi_{a,b}(\bar u;\bar v)\rangle=\mathop{\rm tr}\nolimits_{\bar k,\bar n}\Bigl( R_{k_{i+1},k_i}(u_{i+1},u_i)P_{k_{i+1},k_i} \mathbb{T}_{\bar k}(\bar u)\Bigr|_{u_i\leftrightarrow u_{i+1}}\\ \times\mathbb{T}_{\bar n}(\bar v)\mathbb{R}_{\bar n,\bar k}(\bar v,\bar u)\Bigr|_{u_i\leftrightarrow u_{i+1}} E_{k_1}^{21}\dots E_{k_a}^{21} E_{n_1}^{32}\dots E_{n_b}^{32} \Bigr)|0\rangle.
\end{multline} Using the cyclicity of the trace, we move $R_{k_{i+1},k_i}(u_{i+1},u_i)P_{k_{i+1},k_i}$ to its original position: \begin{multline}\label{BVtf-f3} f(u_{i+1},u_i)|\Psi_{a,b}(\bar u;\bar v)\rangle=\mathop{\rm tr}\nolimits_{\bar k,\bar n}\Bigl( \mathbb{T}_{\bar k}(\bar u)\Bigr|_{u_i\leftrightarrow u_{i+1}}\mathbb{T}_{\bar n}(\bar v)\mathbb{R}_{\bar n,\bar k}(\bar v,\bar u)\Bigr|_{u_i\leftrightarrow u_{i+1}}\\ \times E_{k_1}^{21}\dots E_{k_a}^{21} E_{n_1}^{32}\dots E_{n_b}^{32} R_{k_{i+1},k_i}(u_{i+1},u_i)P_{k_{i+1},k_i}\Bigr)|0\rangle\\ =f(u_{i+1},u_i)\mathop{\rm tr}\nolimits_{\bar k,\bar n}\Bigl( \mathbb{T}_{\bar k}(\bar u)\Bigr|_{u_i\leftrightarrow u_{i+1}}\mathbb{T}_{\bar n}(\bar v)\mathbb{R}_{\bar n,\bar k}(\bar v,\bar u)\Bigr|_{u_i\leftrightarrow u_{i+1}} E_{k_1}^{21}\dots E_{k_a}^{21} E_{n_1}^{32}\dots E_{n_b}^{32} \Bigr)|0\rangle. \end{multline} Thus, we obtain \be{BB-symu} |\Psi_{a,b}(\bar u;\bar v)\rangle=|\Psi_{a,b}(\bar u;\bar v)\rangle\Bigr|_{u_i\leftrightarrow u_{i+1}}. \end{equation} Formally, one can prove the symmetry over the set $\bar v$ in exactly the same way. However, this symmetry is obvious in the representations \eqref{01-BVtensM}, \eqref{01-GF-BBB}. \section{Recursion for the Bethe vectors\label{S-RBV}} If we restrict ourselves to the problem of the spectrum of the transfer matrix (and hence, of the Hamiltonian and other integrals of motion), then this problem is solved: we have constructed eigenvectors of the transfer matrix and found the corresponding eigenvalues. However, if we expect to use the NABA for calculating correlation functions, the results obtained so far are insufficient. As we have already mentioned, in this case one has to calculate scalar products containing off-shell Bethe vectors. The representation \eqref{def-OSBV} and the trace formula \eqref{BVtf} are not very convenient for these purposes.
Therefore, we need to obtain other representations for Bethe vectors, with a view to their further application to the calculation of scalar products. One of the steps in this direction is recursive formulas. These formulas, in particular, make it possible to prove by induction many statements concerning scalar products. In this section we derive a relation between Bethe vectors $|\Psi_{a,b}\rangle$, $|\Psi_{a-1,b}\rangle$, and $|\Psi_{a-1,b-1}\rangle$. To do this, we will use some formulas for the composite model in the $\mathcal{R}_2$ algebra. Therefore, we begin this section with a brief description of the composite model. More details can be found in \cite{IzeK84,BogIK93L,Sla07,Sla18}. \subsection{Composite model in \texorpdfstring{$\mathcal{R}_2$}{} algebra\label{A-CM}} Suppose we have two $2\times2$ monodromy matrices $T^{(1)}(v)$ and $T^{(2)}(v)$: \be{11-ABAT-2s} T^{(j)}(v)=\left( \begin{array}{cc} A^{(j)}(v)&B^{(j)}(v)\\ C^{(j)}(v)&D^{(j)}(v) \end{array}\right), \qquad j=1,2. \end{equation} We assume that both of them satisfy the $RTT$-relation \eqref{00-RTT} with the $R$-matrix \eqref{00-RYang}. We also assume that the entries of $T^{(1)}(v)$ and $T^{(2)}(v)$ act in different Hilbert spaces and that each of these matrices possesses a vacuum vector $|0\rangle^{(j)}$ with the standard properties \be{11-action-2s} A^{(j)}(v)|0\rangle^{(j)}=a^{(j)}(v)|0\rangle^{(j)},\qquad D^{(j)}(v)|0\rangle^{(j)}=d^{(j)}(v)|0\rangle^{(j)},\qquad C^{(j)}(v)|0\rangle^{(j)}=0, \end{equation} where $a^{(j)}(v)$ and $d^{(j)}(v)$ are some complex-valued functions. Then we can define off-shell Bethe vectors \be{11-PBV} B^{(j)}(\bar v)|0\rangle^{(j)}=B^{(j)}(v_1)\dots B^{(j)}(v_n)|0\rangle^{(j)}, \end{equation} for each $T^{(j)}(v)$. Here we used the shorthand notation for the product of the operators $B^{(j)}$.
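As a quick sanity check of the $RTT$ framework used here, one can verify numerically that the rational $R$-matrix satisfies the Yang--Baxter equation. The sketch below is only an illustration; it assumes that the Yang $R$-matrix \eqref{00-RYang} has the rational form $R(u,v)=\mathbb{I}+g(u,v)P$ with $g(u,v)=c/(u-v)$, and checks $R_{12}R_{13}R_{23}=R_{23}R_{13}R_{12}$ on $\mathbb{C}^2\otimes\mathbb{C}^2\otimes\mathbb{C}^2$.

```python
import numpy as np

c = 1.0
g = lambda u, v: c / (u - v)

# permutation on C^2 (x) C^2 and the rational R-matrix R(u,v) = Id + g(u,v) P
P = np.eye(4)[[0, 2, 1, 3]]
R = lambda u, v: np.eye(4) + g(u, v) * P

I2 = np.eye(2)
P23 = np.kron(I2, P)  # permutation of the second and third spaces

u1, u2, u3 = 0.3, 1.7, -2.2
R12 = np.kron(R(u1, u2), I2)              # acts in spaces 1 and 2
R23 = np.kron(I2, R(u2, u3))              # acts in spaces 2 and 3
R13 = P23 @ np.kron(R(u1, u3), I2) @ P23  # acts in spaces 1 and 3

# Yang-Baxter equation
assert np.allclose(R12 @ R13 @ R23, R23 @ R13 @ R12)
```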
Obviously, under these conditions, the matrix $T(v)$ \be{11-TTT} T(v)=T^{(2)}(v)T^{(1)}(v)= \begin{pmatrix} A(v)&B(v)\\ C(v)&D(v) \end{pmatrix} \end{equation} satisfies the $RTT$-relation and has a vacuum vector $|0\rangle=|0\rangle^{(2)}\otimes|0\rangle^{(1)}$. We can also define off-shell Bethe vectors corresponding to the matrix $T(v)$: \be{11-TBV} B(\bar v)|0\rangle=B(v_1)\dots B(v_n)|0\rangle. \end{equation} A model in which the monodromy matrix is defined in the form \eqref{11-TTT} is called composite. The matrix $T(v)$ is called the full monodromy matrix, and the matrices $T^{(j)}(v)$ are called partial monodromy matrices. Similarly, vectors \eqref{11-TBV} are called full Bethe vectors, and vectors \eqref{11-PBV} are called partial. In fact, we have already dealt with the composite model in section~\ref{S-NABA}. Indeed, the matrix $\widehat{\mathcal{T}}^{(a)}(z)$ \eqref{01-calT} is the product of two partial monodromy matrices $\mathbb{D}(z)$ and $\mathcal{T}^{(a)}_0(z)$. However, so far we have needed only fairly simple statements about the composite model. For example, we used the fact that the functions $a(v)$ and $d(v)$ of the full matrix $T(v)$ are respectively the products of the functions $a^{(j)}(v)$ and $d^{(j)}(v)$. This statement immediately follows from the representations \be{a-aad-dd} \begin{aligned} &A(v)=A^{(2)}(v)A^{(1)}(v)+B^{(2)}(v) C^{(1)}(v),\\ &D(v)=D^{(2)}(v)D^{(1)}(v)+C^{(2)}(v) B^{(1)}(v). \end{aligned} \end{equation} Now we need to express the full Bethe vectors in terms of the partial ones. Using \be{11-B-BB} B(v)=A^{(2)}(v)B^{(1)}(v)+B^{(2)}(v) D^{(1)}(v), \end{equation} we easily find \be{11-B-BBv} B(v)|0\rangle=a^{(2)}(v)|0\rangle^{(2)}\otimes B^{(1)}(v)|0\rangle^{(1)}+d^{(1)}(v) B^{(2)}(v)|0\rangle^{(2)}\otimes|0\rangle^{(1)}. \end{equation} However, if the cardinality of the set $\bar v$ is greater than $1$, then the problem of expressing $B(\bar v)|0\rangle$ in terms of $B^{(j)}(\bar v)|0\rangle^{(j)}$ becomes more involved.
It was solved in \cite{IzeK84}: \be{11-IK} B(\bar v)|0\rangle=\sum_{\bar v\mapsto\{\bar v_{\scriptscriptstyle \rm I},\bar v_{\scriptscriptstyle \rm I\hspace{-1pt}I}\}} a^{(2)}(\bar v_{\scriptscriptstyle \rm I}) d^{(1)}(\bar v_{\scriptscriptstyle \rm I\hspace{-1pt}I}) B^{(2)}(\bar v_{\scriptscriptstyle \rm I\hspace{-1pt}I})|0\rangle^{(2)}\otimes B^{(1)}(\bar v_{\scriptscriptstyle \rm I})|0\rangle^{(1)} \cdot f(\bar v_{\scriptscriptstyle \rm I\hspace{-1pt}I},\bar v_{\scriptscriptstyle \rm I}). \end{equation} Here the sum is taken over all possible partitions of the set $\bar v$ into two subsets $\bar v_{\scriptscriptstyle \rm I}$ and $\bar v_{\scriptscriptstyle \rm I\hspace{-1pt}I}$. We have also extended our convention on the shorthand notation to the products of the functions $a^{(j)}$ and $d^{(j)}$. For example, $d^{(1)}(\bar v_{\scriptscriptstyle \rm I\hspace{-1pt}I})$ means the product of $d^{(1)}(v_i)$ over $v_i\in \bar v_{\scriptscriptstyle \rm I\hspace{-1pt}I}$. A detailed proof of \eqref{11-IK} can be found in \cite{Sla18}. \subsection{First recursion for Bethe vectors\label{SS-FRBV}} We now proceed to the derivation of recursion for Bethe vectors in the $\mathcal{R}_3$ algebra. For this, we use representation \eqref{01-BVtensM} \be{BV-anz1-a} |\Psi_{a,b}(\bar u;\bar v)\rangle=\mathbb{B}_1(u_1)\mathbb{B}_2(u_2)\dots \mathbb{B}_a(u_a)\mathbb{F}^{(a)}(\bar u;\bar v)|0\rangle, \end{equation} where we equipped the vector $\mathbb{F}^{(a)}$ with the additional superscript $(a)$. This superscript stresses that $\mathbb{F}^{(a)}|0\rangle$ belongs to the space $\mathcal{H}\otimes \mathcal{H}^{(a)}$, where $\mathcal{H}^{(a)}$ is the tensor product of $a$ spaces $\mathbb{C}^2$ (see \eqref{01-spaceHa}). Our goal is to express this vector in terms of vectors $\mathbb{F}^{(a-1)}|0\rangle$, which belong to the space $\mathcal{H}\otimes \mathcal{H}^{(a-1)}$, where $\mathcal{H}^{(a-1)}$ is the tensor product of $a-1$ spaces $\mathbb{C}^2$. 
This is a typical problem of a composite model \cite{IzeK84,BogIK93L,Sla07,Sla18}. Recall that representation \eqref{BV-anz1-a} is equivalent to the following sum: \be{BV-anz1-com} |\Psi_{a,b}(\bar u;\bar v)\rangle=\sum_{\beta_1,\dots,\beta_a}B_{\beta_1}(u_1)B_{\beta_2}(u_2)\dots B_{\beta_a}(u_a) F^{(a)}_{\beta_1,\dots,\beta_a}(\bar u;\bar v)|0\rangle, \end{equation} where every $\beta_i$, $i=1,\dots,a$, takes the two values $\beta_i=1,2$. Let us write explicitly the sum over $\beta_1$: \begin{multline}\label{BV-anz1-a2} |\Psi_{a,b}(\bar u;\bar v)\rangle=B_1(u_1)\sum_{\beta_2,\dots,\beta_a}B_{\beta_2}(u_2)\dots B_{\beta_a}(u_a) F^{(a)}_{1,\beta_2,\dots,\beta_a}(\bar u;\bar v)|0\rangle\\ +B_2(u_1)\sum_{\beta_2,\dots,\beta_a} B_{\beta_2}(u_2)\dots B_{\beta_a}(u_a)F^{(a)}_{2,\beta_2,\dots,\beta_a}(\bar u;\bar v)|0\rangle, \end{multline} where the remaining sums are taken over $\{\beta_2,\dots,\beta_a\}$. Thus, we should find explicit representations for the components $F^{(a)}_{1,\beta_2,\dots,\beta_a}(\bar u;\bar v)|0\rangle$ and $F^{(a)}_{2,\beta_2,\dots,\beta_a}(\bar u;\bar v)|0\rangle$ in terms of vectors belonging to the space of lower dimension. We know that the vector $\mathbb{F}^{(a)}|0\rangle$ has the form \eqref{01-GF-BBB}, where the operator $\widehat{\mathcal{B}}^{(a)}$ is the matrix element of the monodromy matrix $\widehat{\mathcal{T}}^{(a)}$ \eqref{01-GMMTa}. We can present this monodromy matrix as follows: \be{Ta-Taa0} \widehat{\mathcal{T}}^{(a)}_0(z)=\widehat{\mathcal{T}}^{(a-1)}_0(z) r_{01}(z,u_1), \end{equation} where \be{Taa-def} \widehat{\mathcal{T}}^{(a-1)}_0(z)=\mathbb{D}_0(z)r_{0a}(z,u_a)\dots r_{02}(z,u_2)=\begin{pmatrix} \widehat{\mathcal{A}}^{(a-1)}(z)& \widehat{\mathcal{B}}^{(a-1)}(z)\\ \widehat{\mathcal{C}}^{(a-1)}(z)&\widehat{\mathcal{D}}^{(a-1)}(z)\end{pmatrix}_0. \end{equation} The entries $\widehat{\mathcal{T}}^{(a-1)}_{ij}(z)$ act in the space $\mathcal{H}\otimes\mathcal{H}^{(a-1)}$.
The space $\mathcal{H}\otimes\mathcal{H}^{(a-1)}$ has a natural vacuum vector $|\tilde 0\rangle=|0\rangle\otimes|\Omega^{(a-1)}\rangle$, where \be{03-vacXXX} |\Omega^{(a-1)}\rangle =\underbrace{\left(\begin{smallmatrix} 1\\0 \end{smallmatrix}\right)\otimes \dots \otimes \left(\begin{smallmatrix} 1\\0 \end{smallmatrix}\right)}_{a-1\quad\text{times}}. \end{equation} It is easy to see that \be{ad-aaa} \begin{aligned} \widehat{\mathcal{A}}^{(a-1)}(z)|\tilde 0\rangle&=\lambda_2(z)f(z,\bar u_1)|\tilde 0\rangle,\\ \widehat{\mathcal{D}}^{(a-1)}(z)|\tilde 0\rangle&=\lambda_3(z)|\tilde 0\rangle, \end{aligned} \end{equation} and we recall that $\bar u_1=\bar u\setminus u_1$. In turn, the matrix $r_{01}(z,u_1)$ can be considered as the monodromy matrix of the $XXX$ chain consisting of one site. We have already dealt with this monodromy matrix in section~\ref{SS-DBV} (see \eqref{t0a1}). Recall that the auxiliary space of this matrix is $V_0\sim\mathbb{C}^2$, and the quantum space is $V_1\sim\mathbb{C}^2$. It can be presented as a $2\times 2$ matrix in the space $V_0$: \be{T2T1-Taa} r_{01}(z,u_1)=\begin{pmatrix}\bs{a}(z)&\bs{b}(z)\\ \bs{c}(z)&\bs{d}(z) \end{pmatrix}= \begin{pmatrix}\mathbf{1}&0\\ 0&\mathbf{1} \end{pmatrix}+g(z,u_1)\begin{pmatrix} E_1^{11}& E_1^{21} \\ E_1^{12} & E_1^{22} \end{pmatrix}, \end{equation} where the entries act in the space $V_1\sim\mathbb{C}^2$ with the vacuum vector $\left(\begin{smallmatrix} 1\\0 \end{smallmatrix}\right)$. Obviously, \be{ad-aaa1} \begin{aligned} \bs{a}(z)\left(\begin{smallmatrix} 1\\0 \end{smallmatrix}\right)&=f(z,u_1)\left(\begin{smallmatrix} 1\\0 \end{smallmatrix}\right),\\ \bs{d}(z)\left(\begin{smallmatrix} 1\\0 \end{smallmatrix}\right)&=\left(\begin{smallmatrix} 1\\0 \end{smallmatrix}\right). \end{aligned} \end{equation} A peculiarity of this monodromy matrix is that $\bs{b}^n(z)=0$ for $n>1$, because $\bs{b}(z)=g(z,u_1)E_1^{21}$.
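The one-site properties \eqref{ad-aaa1} and the nilpotency of $\bs{b}(z)$ are easy to confirm directly. Below is a minimal numerical sketch of our own, assuming the rational functions $g(z,u)=c/(z-u)$ and $f(z,u)=(z-u+c)/(z-u)$:

```python
import numpy as np

c = 1.0
g = lambda z, u: c / (z - u)
f = lambda z, u: (z - u + c) / (z - u)

def E(i, j):
    """Matrix unit E^{ij} (1-based indices) acting in C^2."""
    m = np.zeros((2, 2))
    m[i - 1, j - 1] = 1.0
    return m

z, u1 = 0.9, -1.3
# entries of r_{01}(z, u1) as operators in the quantum space V_1
a = np.eye(2) + g(z, u1) * E(1, 1)
b = g(z, u1) * E(2, 1)
d = np.eye(2) + g(z, u1) * E(2, 2)
vac = np.array([1.0, 0.0])

assert np.allclose(a @ vac, f(z, u1) * vac)  # a(z)|0> = f(z,u1)|0>
assert np.allclose(d @ vac, vac)             # d(z)|0> = |0>
assert np.allclose(b @ b, 0.0)               # b(z)^2 = 0
```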
We can treat $\widehat{\mathcal{T}}^{(a)}(z)$ as the monodromy matrix of the composite model with $T^{(2)}(z)=\widehat{\mathcal{T}}^{(a-1)}(z)$ and $T^{(1)}(z)=r_{01}(z,u_1)$. In this model \be{ad-r2} a^{(2)}(z)=\lambda_2(z)f(z,\bar u_1), \qquad d^{(1)}(z)=1, \end{equation} due to \eqref{ad-aaa} and \eqref{ad-aaa1}. Hence, in the case under consideration equation \eqref{11-IK} takes the form \be{11-IKpart} \widehat{\mathcal{B}}^{(a)}(\bar v)|0\rangle=\sum_{\bar v\mapsto\{\bar v_{\scriptscriptstyle \rm I},\bar v_{\scriptscriptstyle \rm I\hspace{-1pt}I}\}} \lambda_2(\bar v_{\scriptscriptstyle \rm I})f(\bar v_{\scriptscriptstyle \rm I},\bar u_1)f(\bar v_{\scriptscriptstyle \rm I\hspace{-1pt}I},\bar v_{\scriptscriptstyle \rm I}) \widehat{\mathcal{B}}^{(a-1)}(\bar v_{\scriptscriptstyle \rm I\hspace{-1pt}I})|\tilde 0\rangle\otimes \bs{b}(\bar v_{\scriptscriptstyle \rm I})\left(\begin{smallmatrix} 1\\0 \end{smallmatrix}\right) . \end{equation} Here we have extended the convention on the shorthand notation \eqref{01-SH-prod} to the products of the commuting operators $\widehat{\mathcal{B}}^{(a)}(\bar v)$, $\widehat{\mathcal{B}}^{(a-1)}(\bar v)$, and $\bs{b}(\bar v)$. Formally, the sum in \eqref{11-IKpart} is taken over all possible partitions of the set $\bar v$ into two disjoint subsets $\bar v_{\scriptscriptstyle \rm I}$ and $\bar v_{\scriptscriptstyle \rm I\hspace{-1pt}I}$. However, due to the property $\bs{b}^n(z)=0$ for $n>1$ we conclude that either $\#\bar v_{\scriptscriptstyle \rm I}=0$ or $\#\bar v_{\scriptscriptstyle \rm I}=1$. In the first case we have $\bar v_{\scriptscriptstyle \rm I}=\emptyset$ and $\bar v_{\scriptscriptstyle \rm I\hspace{-1pt}I}=\bar v$. In the second case we can set $\bar v_{\scriptscriptstyle \rm I}=v_\ell$ and $\bar v_{\scriptscriptstyle \rm I\hspace{-1pt}I}=\bar v_\ell$, where $\ell=1,\dots,b$. 
Thus, we find \be{BV-BVpart} \widehat{\mathcal{B}}^{(a)}(\bar v)|0\rangle=\widehat{\mathcal{B}}^{(a-1)}(\bar v)|\tilde 0\rangle \otimes \left(\begin{smallmatrix} 1\\0 \end{smallmatrix}\right) +\sum_{\ell=1}^b \lambda_2(v_\ell)g(v_\ell,u_1)f(v_\ell,\bar u_1)f(\bar v_\ell,v_\ell) \widehat{\mathcal{B}}^{(a-1)}(\bar v_\ell)|\tilde 0\rangle \otimes \left(\begin{smallmatrix} 0\\1\end{smallmatrix}\right), \end{equation} where the first term corresponds to $\bar v_{\scriptscriptstyle \rm I}=\emptyset$ and the sum over $\ell$ corresponds to the partitions $\bar v_{\scriptscriptstyle \rm I}=v_\ell$. From this equation we find the components $F^{(a)}_{1,\beta_2,\dots,\beta_a}$ and $F^{(a)}_{2,\beta_2,\dots,\beta_a}$ in terms of components $F^{(a-1)}_{\beta_2,\dots,\beta_a}$: \be{Fa-Fa-1} \begin{aligned} &F^{(a)}_{1,\beta_2,\dots,\beta_a}(\bar u;\bar v)|0\rangle=F^{(a-1)}_{\beta_2,\dots,\beta_a}(\bar u_1;\bar v)|\tilde 0\rangle,\\ &F^{(a)}_{2,\beta_2,\dots,\beta_a}(\bar u;\bar v)|0\rangle=\sum_{\ell=1}^b \lambda_2(v_\ell)g(v_\ell,u_1)f(v_\ell,\bar u_1)f(\bar v_\ell,v_\ell) F^{(a-1)}_{\beta_2,\dots,\beta_a}(\bar u_1;\bar v_\ell)|\tilde 0\rangle. \end{aligned} \end{equation} Substituting this into \eqref{BV-anz1-a2} we arrive at \begin{multline}\label{BV-anz1-a3} |\Psi_{a,b}(\bar u;\bar v)\rangle=B_1(u_1)\sum_{\beta_2,\dots,\beta_a}B_{\beta_2}(u_2)\dots B_{\beta_a}(u_a) F^{(a-1)}_{\beta_2,\dots,\beta_a}(\bar u_1;\bar v)|\tilde 0\rangle\\ +B_2(u_1)\sum_{\ell=1}^b \lambda_2(v_\ell)g(v_\ell,u_1)f(v_\ell,\bar u_1)f(\bar v_\ell,v_\ell) \sum_{\beta_2,\dots,\beta_a} B_{\beta_2}(u_2)\dots B_{\beta_a}(u_a)F^{(a-1)}_{\beta_2,\dots,\beta_a}(\bar u_1;\bar v_\ell)|\tilde 0\rangle.
\end{multline} Then we recognise the Bethe vector $|\Psi_{a-1,b}(\bar u_1;\bar v)\rangle$ in the first line of \eqref{BV-anz1-a3} and the sum of the Bethe vectors $|\Psi_{a-1,b-1}(\bar u_1;\bar v_\ell)\rangle$ in the second line: \begin{multline}\label{Rec-Bab} |\Psi_{a,b}(\bar u;\bar v)\rangle=B_{1}(u_1)|\Psi_{a-1,b}(\bar u_1;\bar v)\rangle\\ +B_{2}(u_1)\sum_{\ell=1}^b \lambda_2(v_\ell)g(v_\ell,u_1)f(v_\ell,\bar u_1)f(\bar v_\ell,v_\ell) |\Psi_{a-1,b-1}(\bar u_1;\bar v_\ell)\rangle. \end{multline} Since $B_{1}(u)=T_{12}(u)$ and $B_{2}(u)=T_{13}(u)$, we recast \eqref{Rec-Bab} as follows: \begin{multline}\label{Rec-Bab0} |\Psi_{a,b}(\bar u;\bar v)\rangle=T_{12}(u_1)|\Psi_{a-1,b}(\bar u_1;\bar v)\rangle\\ +T_{13}(u_1)\sum_{\ell=1}^b \lambda_2(v_\ell)g(v_\ell,u_1)f(v_\ell,\bar u_1)f(\bar v_\ell,v_\ell) |\Psi_{a-1,b-1}(\bar u_1;\bar v_\ell)\rangle. \end{multline} Recursion \eqref{Rec-Bab0} allows one to build the Bethe vectors successively, starting from the case $a=0$. Indeed, for $a=0$ we have $|\Psi_{0,b}(\emptyset;\bar v)\rangle =T_{23}(\bar v)|0\rangle$ (see \eqref{BVa0-1}). Then we immediately find an explicit expression for the Bethe vector $|\Psi_{1,b}(u;\bar v)\rangle$: \begin{equation}\label{B1b} |\Psi_{1,b}(u;\bar v)\rangle=T_{12}(u)T_{23}(\bar v)|0\rangle +\sum_{\ell=1}^b \lambda_2(v_\ell)g(v_\ell,u)f(\bar v_\ell,v_\ell) T_{13}(u)T_{23}(\bar v_\ell)|0\rangle. \end{equation} To conclude this section we note that it follows from recursion \eqref{Rec-Bab0} and initial condition \eqref{BVa0-1} that the Bethe vectors are the states of fixed coloring \be{colBV} \mathop{\rm Col}\bigl(|\Psi_{a,b}(\bar u;\bar v)\rangle\bigr)=\{a,b\}. \end{equation} This statement can be easily proved via induction over $a$. 
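As the simplest illustration of this recursion, set $b=1$ in \eqref{B1b}. Then $\bar v_\ell=\emptyset$, the empty products of $f$-functions and of the operators $T_{23}$ equal $1$, and we obtain

```latex
\begin{equation}
|\Psi_{1,1}(u;v)\rangle=T_{12}(u)\,T_{23}(v)\,|0\rangle
+\lambda_2(v)\,g(v,u)\,T_{13}(u)\,|0\rangle.
\end{equation}
```

Both terms have coloring $\{1,1\}$, in accordance with \eqref{colBV}.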
\subsection{Second recursion for Bethe vectors\label{SS-SRBV}} Using the representation for the Bethe vectors described in section~\ref{SS-RDE}, one can obtain another recursion: \begin{multline}\label{Rec-Bab0p} |\Psi_{a,b}(\bar u;\bar v)\rangle=T_{23}(v_b)|\Psi_{a,b-1}(\bar u;\bar v_b)\rangle\\ +T_{13}(v_b)\sum_{j=1}^a \lambda_2(u_j)g(v_b,u_j)f(\bar v_b,u_j)f(u_j,\bar u_j) |\Psi_{a-1,b-1}(\bar u_j;\bar v_b)\rangle. \end{multline} This recursion is derived via the composite model in the same way as recursion \eqref{Rec-Bab0}; we leave this derivation to the reader. Recursion \eqref{Rec-Bab0p} allows one to build the Bethe vectors starting from \be{inBv2} |\Psi_{a,0}(\bar u;\emptyset)\rangle=T_{12}(\bar u)|0\rangle. \end{equation} \section{Explicit form of Bethe vector\label{S-EFBV}} Successive application of recursion \eqref{Rec-Bab0} allows us to guess a general explicit formula for the Bethe vector. In other words, we can now represent the off-shell Bethe vector as a polynomial in the creation operators $T_{12}$, $T_{13}$, and $T_{23}$ applied to the vacuum vector $|0\rangle$ and explicitly specify the coefficients of this polynomial. The proof of this formula relies on the recursion \eqref{Rec-Bab0}, but it is rather long; therefore, we do not give it here. Nevertheless, we consider it necessary to give this explicit representation, since it plays an important role in calculating scalar products. In the explicit formula for the Bethe vector, a partition function of the six-vertex model with domain wall boundary conditions (DWPF) appears. Therefore, we give its brief description (see \cite{BogIK93L,Sla07,BelPRS12a,Sla18} for more details). \subsection{Partition function with domain wall boundary conditions\label{SS-DWPF}} We denote the DWPF by $K_n(\bar v|\bar u)$. It depends on two sets of variables $\bar v$ and $\bar u$; the subscript indicates that $\#\bar v=\#\bar u=n$.
The function $K_n$ has the following determinant representation: \begin{equation}\label{K-def} K_n(\bar v|\bar u) =\Delta'_n(\bar v)\Delta_n(\bar u)\;\frac{f(\bar v,\bar u)}{g(\bar v,\bar u)} \det_n\left( \frac{g^2(v_j,u_k)}{f(v_j,u_k)}\right), \end{equation} where $\Delta'_n(\bar v)$ and $\Delta_n(\bar u)$ are \be{def-Del} \Delta'_n(\bar v) =\prod_{j<k}^n g(v_j,v_k),\qquad {\Delta}_n(\bar u)=\prod_{j>k}^n g(u_j,u_k). \end{equation} More explicitly, \begin{equation}\label{K-def0} K_n(\bar v|\bar u) =\frac{\prod_{j,k=1}^n(v_j-u_k+c)}{\prod_{j<k}^n(v_j-v_k)(u_k-u_j)} \det_n\left( \frac{c}{(v_j-u_k)(v_j-u_k+c)}\right). \end{equation} Obviously, $K_n(\bar v|\bar u)$ is a symmetric function of $\bar v$ and a symmetric function of $\bar u$. It decreases as $1/v_1$ (resp. as $1/u_1$), if $v_1\to\infty$ (resp. $u_1\to\infty$) and all other variables are fixed. It has simple poles at $v_j=u_k$, $j,k=1,\dots,n$. The residues at these poles are proportional to $K_{n-1}$: \be{poles-K0} K_n(\bar v|\bar u)\Bigr|_{u_n\to v_n}= g(v_n,u_n)f(\bar v_n,v_n)f(v_n,\bar u_n)K_{n-1}(\bar v_n|\bar u_n)+ \text{reg}, \end{equation} where $\text{reg}$ stands for the terms that remain regular at $u_n\to v_n$. To prove \eqref{poles-K0} it is enough to observe that the determinant in \eqref{K-def0} becomes singular at $v_n\to u_n$ due to the pole of the matrix element at the intersection of the $n$th row and the $n$th column. The singular part of the determinant \eqref{K-def0} then reduces to the product of this matrix element and the corresponding minor, leading eventually to \eqref{poles-K0}. Using this property one can decompose $K_n(\bar v|\bar u)$ over the poles at $u_n=v_i$, $i=1,\dots,n$, as follows: \be{dec-K0} K_n(\bar v|\bar u)=\sum_{i=1}^n g(v_i,u_n)f(\bar v_i,v_i)f(v_i,\bar u_n)K_{n-1}(\bar v_i|\bar u_n). \end{equation} In particular, \be{dec-K02} K_2(\bar v|\bar u)=g(v_1,u_2)g(v_2,u_1)f(v_2,v_1)f(v_1,u_1) +g(v_2,u_2)g(v_1,u_1)f(v_1,v_2)f(v_2,u_1), \end{equation} where we used \be{dec-K01} K_1(v|u)=g(v,u).
\end{equation} \subsection{Bethe vector as a sum over partitions\label{SS-BVSP}} \begin{prop}\label{P-GenFBV}\cite{BelPRS13a} Bethe vectors of the $\mathcal{R}_3$ algebra have the following form: \be{Bab} |\Psi_{a,b}(\bar u;\bar v)\rangle=\sum_{\substack{\bar u\mapsto\{\bar u_{\scriptscriptstyle \rm I},\bar u_{\scriptscriptstyle \rm I\hspace{-1pt}I}\}\\ \bar v\mapsto\{\bar v_{\scriptscriptstyle \rm I},\bar v_{\scriptscriptstyle \rm I\hspace{-1pt}I}\} }} K_n(\bar v_{\scriptscriptstyle \rm I}|\bar u_{\scriptscriptstyle \rm I})f(\bar u_{\scriptscriptstyle \rm I},\bar u_{\scriptscriptstyle \rm I\hspace{-1pt}I})f(\bar v_{\scriptscriptstyle \rm I\hspace{-1pt}I},\bar v_{\scriptscriptstyle \rm I})T_{13}(\bar u_{\scriptscriptstyle \rm I})T_{12}(\bar u_{\scriptscriptstyle \rm I\hspace{-1pt}I}) T_{23}(\bar v_{\scriptscriptstyle \rm I\hspace{-1pt}I})\lambda_{2}(\bar v_{\scriptscriptstyle \rm I})|0\rangle. \end{equation} Here $K_n(\bar v_{\scriptscriptstyle \rm I}|\bar u_{\scriptscriptstyle \rm I})$ is the DWPF \eqref{K-def}. The sum is taken over partitions $\bar u\mapsto\{\bar u_{\scriptscriptstyle \rm I},\bar u_{\scriptscriptstyle \rm I\hspace{-1pt}I}\}$ and $\bar v\mapsto\{\bar v_{\scriptscriptstyle \rm I},\bar v_{\scriptscriptstyle \rm I\hspace{-1pt}I}\}$ such that $\#\bar u_{\scriptscriptstyle \rm I}=\#\bar v_{\scriptscriptstyle \rm I}=n$ and $n=0,1,\dots,\min(a,b)$. Everywhere the convention on the shorthand notation \eqref{01-SH-prod} is used. \end{prop} {\sl Remark}. Note that the r.h.s. of \eqref{Bab} is obviously symmetric over $\bar u$ and symmetric over $\bar v$. The proof of proposition~\ref{P-GenFBV} can be found in \cite{BelPRS13a}. Actually, it is enough to show that \eqref{Bab} satisfies recursion \eqref{Rec-Bab0}. However, as we have mentioned in the beginning of this section, this proof is rather bulky, and we do not give it here. Instead we illustrate representation \eqref{Bab} by several examples for $a$ small and $b$ arbitrary. 
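Before turning to the examples, the determinant representation \eqref{K-def0} can be checked against the pole decomposition \eqref{dec-K02} in exact rational arithmetic. The sketch below is our own illustration, assuming $g(v,u)=c/(v-u)$ and $f(v,u)=(v-u+c)/(v-u)$, consistent with \eqref{K-def} and \eqref{K-def0}:

```python
from fractions import Fraction as F

c = F(1)
g = lambda v, u: c / (v - u)
f = lambda v, u: (v - u + c) / (v - u)

def K2(v1, v2, u1, u2):
    """DWPF K_2 via the determinant representation (K-def0) for n = 2."""
    pref = ((v1 - u1 + c) * (v1 - u2 + c) * (v2 - u1 + c) * (v2 - u2 + c)
            / ((v1 - v2) * (u2 - u1)))
    m = lambda v, u: c / ((v - u) * (v - u + c))
    return pref * (m(v1, u1) * m(v2, u2) - m(v1, u2) * m(v2, u1))

v1, v2, u1, u2 = F(3), F(-2), F(5), F(7)
# pole decomposition (dec-K02)
rhs = (g(v1, u2) * g(v2, u1) * f(v2, v1) * f(v1, u1)
       + g(v2, u2) * g(v1, u1) * f(v1, v2) * f(v2, u1))
assert K2(v1, v2, u1, u2) == rhs
# symmetry of K_2 in each set of variables
assert K2(v1, v2, u1, u2) == K2(v2, v1, u1, u2) == K2(v1, v2, u2, u1)
```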
First of all, it follows from \eqref{Bab} that for $a=0$ we have \be{Bbb} |\Psi_{0,b}(\emptyset;\bar v)\rangle=T_{23}(\bar v)|0\rangle, \end{equation} which coincides with \eqref{BVa0-1}. Let $a=1$. Then either $\#\bar u_{\scriptscriptstyle \rm I}=\#\bar v_{\scriptscriptstyle \rm I}=0$ or $\#\bar u_{\scriptscriptstyle \rm I}=\#\bar v_{\scriptscriptstyle \rm I}=1$. In both cases the product $f(\bar u_{\scriptscriptstyle \rm I},\bar u_{\scriptscriptstyle \rm I\hspace{-1pt}I})$ disappears from equation \eqref{Bab}, because one of the subsets of $\bar u$ is empty. We obtain \be{Baba1} |\Psi_{1,b}(u;\bar v)\rangle=T_{12}(u)T_{23}(\bar v)|0\rangle+ \sum_{\bar v\mapsto\{\bar v_{\scriptscriptstyle \rm I},\bar v_{\scriptscriptstyle \rm I\hspace{-1pt}I}\} } g(\bar v_{\scriptscriptstyle \rm I},u)f(\bar v_{\scriptscriptstyle \rm I\hspace{-1pt}I},\bar v_{\scriptscriptstyle \rm I})T_{13}(u) T_{23}(\bar v_{\scriptscriptstyle \rm I\hspace{-1pt}I})\lambda_{2}(\bar v_{\scriptscriptstyle \rm I})|0\rangle, \end{equation} where we used \eqref{dec-K01}, and the sum over partitions of the set $\bar v$ is taken under the restriction $\#\bar v_{\scriptscriptstyle \rm I}=1$. Setting in this sum $\bar v_{\scriptscriptstyle \rm I}=v_\ell$ and $\bar v_{\scriptscriptstyle \rm I\hspace{-1pt}I}=\bar v_\ell$, $\ell=1,\dots,b$, we immediately arrive at \eqref{B1b}. Let now $a=2$. Then either $\#\bar u_{\scriptscriptstyle \rm I}=\#\bar v_{\scriptscriptstyle \rm I}=0$, or $\#\bar u_{\scriptscriptstyle \rm I}=\#\bar v_{\scriptscriptstyle \rm I}=1$, or $\#\bar u_{\scriptscriptstyle \rm I}=\#\bar v_{\scriptscriptstyle \rm I}=2$. Respectively, we have three contributions to the Bethe vector \eqref{Bab}: \be{Baba2} |\Psi_{2,b}(\bar u;\bar v)\rangle=|\Psi^{(0)}\rangle+|\Psi^{(1)}\rangle+|\Psi^{(2)}\rangle. \end{equation} The contribution $|\Psi^{(0)}\rangle$ corresponds to the case $\#\bar u_{\scriptscriptstyle \rm I}=\#\bar v_{\scriptscriptstyle \rm I}=0$.
It is quite obvious that \be{Psi0} |\Psi^{(0)}\rangle=T_{12}(\bar u)T_{23}(\bar v)|0\rangle. \end{equation} The next contribution $|\Psi^{(1)}\rangle$ has the form \be{Psi1} |\Psi^{(1)}\rangle=\sum_{\substack{\bar u\mapsto\{\bar u_{\scriptscriptstyle \rm I},\bar u_{\scriptscriptstyle \rm I\hspace{-1pt}I}\}\\ \bar v\mapsto\{\bar v_{\scriptscriptstyle \rm I},\bar v_{\scriptscriptstyle \rm I\hspace{-1pt}I}\} }} g(\bar v_{\scriptscriptstyle \rm I},\bar u_{\scriptscriptstyle \rm I})f(\bar u_{\scriptscriptstyle \rm I},\bar u_{\scriptscriptstyle \rm I\hspace{-1pt}I})f(\bar v_{\scriptscriptstyle \rm I\hspace{-1pt}I},\bar v_{\scriptscriptstyle \rm I})T_{13}(\bar u_{\scriptscriptstyle \rm I})T_{12}(\bar u_{\scriptscriptstyle \rm I\hspace{-1pt}I}) T_{23}(\bar v_{\scriptscriptstyle \rm I\hspace{-1pt}I})\lambda_{2}(\bar v_{\scriptscriptstyle \rm I})|0\rangle, \end{equation} where the sum over partitions is taken under restriction $\#\bar u_{\scriptscriptstyle \rm I}=\#\bar v_{\scriptscriptstyle \rm I}=1$. Taking explicitly the sum over partitions $\bar u\mapsto\{\bar u_{\scriptscriptstyle \rm I},\bar u_{\scriptscriptstyle \rm I\hspace{-1pt}I}\}$ we obtain \begin{multline}\label{Psi11} |\Psi^{(1)}\rangle=\sum_{ \bar v\mapsto\{\bar v_{\scriptscriptstyle \rm I},\bar v_{\scriptscriptstyle \rm I\hspace{-1pt}I}\} } \Bigl(g(\bar v_{\scriptscriptstyle \rm I},u_1)f(u_1,u_2)T_{13}(u_1)T_{12}(u_2)+g(\bar v_{\scriptscriptstyle \rm I},u_2)f(u_2,u_1)T_{13}(u_2)T_{12}(u_1)\Bigr)\\ % \times\lambda_{2}(\bar v_{\scriptscriptstyle \rm I})f(\bar v_{\scriptscriptstyle \rm I\hspace{-1pt}I},\bar v_{\scriptscriptstyle \rm I})T_{23}(\bar v_{\scriptscriptstyle \rm I\hspace{-1pt}I})|0\rangle. 
\end{multline} Finally, setting here $\bar v_{\scriptscriptstyle \rm I}=v_\ell$ and $\bar v_{\scriptscriptstyle \rm I\hspace{-1pt}I}=\bar v_\ell$, $\ell=1,\dots,b$, we arrive at \begin{multline}\label{Psi12} |\Psi^{(1)}\rangle=\sum_{\ell=1}^b \Bigl(g(v_\ell,u_1)f(u_1,u_2)T_{13}(u_1)T_{12}(u_2)+g(v_\ell,u_2)f(u_2,u_1)T_{13}(u_2)T_{12}(u_1)\Bigr)\\ \times\lambda_{2}(v_\ell)f(\bar v_\ell,v_\ell)T_{23}(\bar v_\ell)|0\rangle. \end{multline} The last contribution $|\Psi^{(2)}\rangle$ in \eqref{Baba2} has the form: \be{Psi2} |\Psi^{(2)}\rangle=\sum_{\bar v\mapsto\{\bar v_{\scriptscriptstyle \rm I},\bar v_{\scriptscriptstyle \rm I\hspace{-1pt}I}\} } K_2(\bar v_{\scriptscriptstyle \rm I}|\bar u)f(\bar v_{\scriptscriptstyle \rm I\hspace{-1pt}I},\bar v_{\scriptscriptstyle \rm I})T_{13}(\bar u) T_{23}(\bar v_{\scriptscriptstyle \rm I\hspace{-1pt}I})\lambda_{2}(\bar v_{\scriptscriptstyle \rm I})|0\rangle, \end{equation} where $\#\bar v_{\scriptscriptstyle \rm I}=2$. Setting here $\bar v_{\scriptscriptstyle \rm I}=\{v_k,v_\ell\}$ and $\bar v_{\scriptscriptstyle \rm I\hspace{-1pt}I}=\bar v_{k,\ell}\equiv\bar v\setminus\{v_k,v_\ell\}$ we find \be{Psi21} |\Psi^{(2)}\rangle=\sum_{1\le\ell<k\le b }\lambda_{2}(v_k)\lambda_{2}(v_\ell) K_2(\{v_k,v_\ell\}|\bar u)f(\bar v_{k,\ell},v_\ell)f(\bar v_{k,\ell},v_k)T_{13}(\bar u) T_{23}(\bar v_{k,\ell})|0\rangle. \end{equation} Let us reproduce all the contributions $|\Psi^{(j)}\rangle$ for $j=0,1,2$ via recursion \eqref{Rec-Bab0}. For $a=2$, recursion \eqref{Rec-Bab0} has the form \begin{multline}\label{Rec-Baba2} |\Psi_{2,b}(\bar u;\bar v)\rangle=T_{12}(u_2)|\Psi_{1,b}(u_1;\bar v)\rangle\\ +T_{13}(u_2)\sum_{\ell=1}^b \lambda_2(v_\ell)g(v_\ell,u_2)f(v_\ell,u_1)f(\bar v_\ell,v_\ell) |\Psi_{1,b-1}(u_1;\bar v_\ell)\rangle, \end{multline} where we replaced $u_1\leftrightarrow u_2$ due to the symmetry of the Bethe vector over the set $\bar u$.
Using \eqref{B1b} for $|\Psi_{1,b}(u_1;\bar v)\rangle$ and $|\Psi_{1,b-1}(u_1;\bar v_\ell)\rangle$ we find \begin{multline}\label{Rec-Bab2} |\Psi_{2,b}(\bar u;\bar v)\rangle=T_{12}(u_2)\Bigl(T_{12}(u_1)T_{23}(\bar v)|0\rangle +\sum_{\ell=1}^b \lambda_2(v_\ell)g(v_\ell,u_1)f(\bar v_\ell,v_\ell) T_{13}(u_1)T_{23}(\bar v_\ell)|0\rangle\Bigr)\\ +T_{13}(u_2)\sum_{k=1}^b \lambda_2(v_k)g(v_k,u_2)f(v_k,u_1)f(\bar v_k,v_k) \Bigl(T_{12}(u_1)T_{23}(\bar v_k)|0\rangle\\ +\sum_{\substack{\ell=1 \\ \ell\ne k}}^b \lambda_2(v_\ell)g(v_\ell,u_1)f(\bar v_{\ell,k},v_\ell) T_{13}(u_1)T_{23}(\bar v_{\ell,k})|0\rangle \Bigr). \end{multline} One can easily see the contribution $|\Psi^{(0)}\rangle=T_{12}(\bar u)T_{23}(\bar v)|0\rangle$. The contribution $|\Psi^{(2)}\rangle$ comes from the double sum \begin{multline}\label{RecPsi2} |\Psi^{(2)}\rangle=T_{13}(\bar u)\sum_{k=1}^b \sum_{\substack{\ell=1 \\ \ell\ne k}}^b \lambda_2(v_k)\lambda_2(v_\ell) g(v_k,u_2)g(v_\ell,u_1) \\ \times f(v_k,u_1)f(\bar v_k,v_k)f(\bar v_{\ell,k},v_\ell) T_{23}(\bar v_{\ell,k})|0\rangle. \end{multline} Indeed, using \be{sumiden} \sum_{k=1}^b \sum_{\substack{\ell=1 \\ \ell\ne k}}^b X_{k\ell}=\sum_{1\le\ell<k\le b}(X_{k\ell}+X_{\ell k}), \end{equation} we recast \eqref{RecPsi2} as follows: \begin{multline}\label{W2} |\Psi^{(2)}\rangle=T_{13}(\bar u)\sum_{1\le\ell<k\le b}\lambda_2(v_k)\lambda_2(v_\ell)f(\bar v_{k,\ell},v_k)f(\bar v_{\ell,k},v_\ell)T_{23}(\bar v_{\ell,k})|0\rangle\\ \times\Bigl\{g(v_k,u_2)g(v_\ell,u_1)f(v_k,u_1)f(v_\ell,v_k)+g(v_\ell,u_2)g(v_k,u_1)f(v_\ell,u_1)f(v_k,v_\ell) \Bigr\}. \end{multline} Comparing the expression in braces with \eqref{dec-K02} we see that \be{dec-K02kl} g(v_k,u_2)g(v_\ell,u_1)f(v_\ell,v_k)f(v_k,u_1) +g(v_\ell,u_2)g(v_k, u_1)f(v_k,v_\ell)f(v_\ell,u_1)=K_2(\{v_k,v_\ell\}|\bar u), \end{equation} and we do reproduce \eqref{Psi21}. 
It remains to check that \be{Psi1W1} |\Psi^{(1)}\rangle=\sum_{\ell=1}^bW_\ell \;T_{23}(\bar v_\ell)|0\rangle, \end{equation} where \begin{equation}\label{W1} W_\ell = \lambda_2(v_\ell)f(\bar v_\ell,v_\ell)\Bigl(T_{12}(u_2)T_{13}(u_1)g(v_\ell,u_1) +T_{13}(u_2)T_{12}(u_1)g(v_\ell,u_2)f(v_\ell,u_1)\Bigr). \end{equation} Using the commutation relations \eqref{00-CRcomp} we find \be{T12T13} T_{12}(u_2)T_{13}(u_1)=f(u_1,u_2)T_{13}(u_1)T_{12}(u_2)+ g(u_2,u_1)T_{13}(u_2)T_{12}(u_1). \end{equation} Substituting this into \eqref{W1} we arrive at \begin{multline}\label{W1-1} W_\ell = \lambda_2(v_\ell)f(\bar v_\ell,v_\ell)\Bigl(T_{13}(u_1)T_{12}(u_2)f(u_1,u_2)g(v_\ell,u_1)\\ +T_{13}(u_2)T_{12}(u_1)\bigl[g(v_\ell,u_2)f(v_\ell,u_1)+g(u_2,u_1)g(v_\ell,u_1)\bigr]\Bigr). \end{multline} A simple straightforward calculation shows that \be{sumrat1} g(v_\ell,u_2)f(v_\ell,u_1)+g(u_2,u_1)g(v_\ell,u_1)=f(u_2,u_1)g(v_\ell,u_2). \end{equation} Substituting this into \eqref{W1-1} we obtain \eqref{Psi12}. Thus, we have convinced ourselves that recursion \eqref{Rec-Bab0} does give the Bethe vector \eqref{Bab} for $a=2$. In concluding this section, we note that explicit formulas for the $\mathcal{R}_N$ Bethe vectors in terms of polynomials in the creation operators acting on the vacuum vector were obtained in \cite{HutLPRS17}.
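The identity \eqref{sumrat1} is easy to confirm in exact rational arithmetic. The sketch below is our own illustration, assuming $g(x,y)=c/(x-y)$ and $f(x,y)=(x-y+c)/(x-y)$; it also checks the parity properties $g(-v,-u)=g(u,v)$ and $f(-v,-u)=f(u,v)$, which are used in the next subsection.

```python
from fractions import Fraction as F

c = F(1)
g = lambda x, y: c / (x - y)
f = lambda x, y: (x - y + c) / (x - y)

v, u1, u2 = F(4), F(-1), F(2)

# identity (sumrat1)
assert g(v, u2) * f(v, u1) + g(u2, u1) * g(v, u1) == f(u2, u1) * g(v, u2)

# parity properties of g and f
assert g(-v, -u1) == g(u1, v)
assert f(-v, -u1) == f(u1, v)
```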
\subsection{Alternative representation for Bethe vectors\label{SS-ARBV}} Along with the representation \eqref{Bab}, there is another representation for the Bethe vectors: \be{NBab} |\Psi_{a,b}(\bar u;\bar v)\rangle=\sum_{\substack{\bar u\mapsto\{\bar u_{\scriptscriptstyle \rm I},\bar u_{\scriptscriptstyle \rm I\hspace{-1pt}I}\}\\ \bar v\mapsto\{\bar v_{\scriptscriptstyle \rm I},\bar v_{\scriptscriptstyle \rm I\hspace{-1pt}I}\} }} K_n(\bar v_{\scriptscriptstyle \rm I}|\bar u_{\scriptscriptstyle \rm I})f(\bar u_{\scriptscriptstyle \rm I},\bar u_{\scriptscriptstyle \rm I\hspace{-1pt}I})f(\bar v_{\scriptscriptstyle \rm I\hspace{-1pt}I},\bar v_{\scriptscriptstyle \rm I})T_{13}(\bar v_{\scriptscriptstyle \rm I})T_{23}(\bar v_{\scriptscriptstyle \rm I\hspace{-1pt}I}) T_{12}(\bar u_{\scriptscriptstyle \rm I\hspace{-1pt}I})\lambda_{2}(\bar u_{\scriptscriptstyle \rm I})|0\rangle. \end{equation} Here the sum is taken over partitions $\bar u\mapsto\{\bar u_{\scriptscriptstyle \rm I},\bar u_{\scriptscriptstyle \rm I\hspace{-1pt}I}\}$ and $\bar v\mapsto\{\bar v_{\scriptscriptstyle \rm I},\bar v_{\scriptscriptstyle \rm I\hspace{-1pt}I}\}$ as in \eqref{Bab}. In order to prove \eqref{NBab} we first prove the following proposition. \begin{prop} Let us extend the action of the automorphism \eqref{00-varphi} to the vectors by \be{00v-varphi} \begin{aligned} &\varphi\bigl(|0\rangle\bigr)= |0\rangle,\\ &\varphi\bigl(T_{ij}(u)|0\rangle\bigr)=\varphi\bigl(T_{ij}(u)\bigr) |0\rangle, &\varphi\bigl(\lambda_i(u)\bigr)= \lambda_{4-i}(-u). \end{aligned} \end{equation} Then \be{act-phiBV} \varphi\bigl( |\Psi_{a,b}(\bar u;\bar v)\rangle\bigr)=|\widetilde{\Psi}_{b,a}(-\bar v;-\bar u)\rangle, \end{equation} where $|\widetilde{\Psi}_{b,a}(-\bar v;-\bar u)\rangle$ is the Bethe vector corresponding to the monodromy matrix $\tilde T(z)$. \end{prop} \textsl{Proof.} We use induction over $a$.
We have for $a=0$ and $b$ arbitrary \be{act-vpha0} \varphi\bigl( |\Psi_{0,b}(\emptyset;\bar v)\rangle\bigr)= \varphi\bigl( T_{23}(\bar v)|0\rangle\bigr)=\tilde T_{12}(-\bar v)|0\rangle=|\widetilde{\Psi}_{b,0}(-\bar v;\emptyset)\rangle, \end{equation} where we used \eqref{BVa0-1} and \eqref{inBv2}. Assume that \eqref{act-phiBV} holds for some $a-1\ge 0$ and $b$ arbitrary. Applying the automorphism $\varphi$ to the recursion \eqref{Rec-Bab0} we obtain \begin{multline}\label{phiRec-Bab0} \varphi\bigl(|\Psi_{a,b}(\bar u;\bar v)\rangle\bigr)=\varphi\bigl(T_{12}(u_1)\bigr) \varphi\bigl(|\Psi_{a-1,b}(\bar u_1;\bar v)\rangle\bigr)\\ +\varphi\bigl(T_{13}(u_1)\bigr)\sum_{\ell=1}^b \varphi\bigl(\lambda_2(v_\ell)\bigr) g(v_\ell,u_1)f(v_\ell,\bar u_1)f(\bar v_\ell,v_\ell) \varphi\bigl(|\Psi_{a-1,b-1}(\bar u_1;\bar v_\ell)\rangle\bigr). \end{multline} Due to the induction assumption we find \begin{multline}\label{phiRec-Bab1} \varphi\bigl(|\Psi_{a,b}(\bar u;\bar v)\rangle\bigr)=T_{23}(-u_1)|\widetilde{\Psi}_{b,a-1}(-\bar v,-\bar u_1)\rangle\\ +T_{13}(-u_1)\sum_{\ell=1}^b \lambda_2(-v_\ell)g(v_\ell,u_1)f(v_\ell,\bar u_1)f(\bar v_\ell,v_\ell) |\widetilde{\Psi}_{b-1,a-1}(-\bar v_\ell,-\bar u_1)\rangle. \end{multline} Setting $a'=b$, $b'=a$, $\bar u'=-\bar v$, and $\bar v'=-\bar u$, one obtains \begin{multline}\label{phiRec-Bab2} \varphi\bigl(|\Psi_{a,b}(\bar u;\bar v)\rangle\bigr)=T_{23}(v'_1)|\widetilde{\Psi}_{a',b'-1}(\bar u',\bar v'_1)\rangle\\ +T_{13}(v'_1)\sum_{\ell=1}^{a'} \lambda_2(u'_\ell)g(-u'_\ell,-v'_1)f(-u'_\ell,-\bar v'_1)f(-\bar u'_\ell,-u'_\ell) |\widetilde{\Psi}_{a'-1,b'-1}(\bar u'_\ell,\bar v'_1)\rangle.
\end{multline} Using $g(-v,-u)=g(u,v)$ and $f(-v,-u)=f(u,v)$ we arrive at \begin{multline}\label{phiRec-Bab3} \varphi\bigl(|\Psi_{a,b}(\bar u;\bar v)\rangle\bigr)=T_{23}(v'_1)|\widetilde{\Psi}_{a',b'-1}(\bar u',\bar v'_1)\rangle\\ +T_{13}(v'_1)\sum_{\ell=1}^{a'} \lambda_2(u'_\ell)g(v'_1,u'_\ell)f(\bar v'_1,u'_\ell)f(u'_\ell,\bar u'_\ell) |\widetilde{\Psi}_{a'-1,b'-1}(\bar u'_\ell,\bar v'_1)\rangle. \end{multline} One recognizes in the r.h.s. of \eqref{phiRec-Bab3} the recursion formula\footnote{% One can replace $v_b$ in \eqref{Rec-Bab0p} by any other $v_k$ due to the symmetry of Bethe vectors over $\bar v$.} \eqref{Rec-Bab0p} for $|\widetilde{\Psi}_{a',b'}(\bar u';\bar v')\rangle$. Hence, \be{phiRec-Bab4} \varphi\bigl(|\Psi_{a,b}(\bar u;\bar v)\rangle\bigr)=|\widetilde{\Psi}_{a',b'}(\bar u';\bar v')\rangle =|\widetilde{\Psi}_{b,a}(-\bar v;-\bar u)\rangle. \end{equation} \qed \medskip Now we are ready to prove representation \eqref{NBab}. We start with equation \eqref{Bab} for the Bethe vector $|\widetilde{\Psi}_{b,a}(\bar v;\bar u)\rangle$: \be{phiBab4} |\widetilde{\Psi}_{b,a}(\bar v;\bar u)\rangle=\sum_{\substack{\bar u\mapsto\{\bar u_{\scriptscriptstyle \rm I},\bar u_{\scriptscriptstyle \rm I\hspace{-1pt}I}\}\\ \bar v\mapsto\{\bar v_{\scriptscriptstyle \rm I},\bar v_{\scriptscriptstyle \rm I\hspace{-1pt}I}\} }} K_n(\bar u_{\scriptscriptstyle \rm I}|\bar v_{\scriptscriptstyle \rm I})f(\bar v_{\scriptscriptstyle \rm I},\bar v_{\scriptscriptstyle \rm I\hspace{-1pt}I})f(\bar u_{\scriptscriptstyle \rm I\hspace{-1pt}I},\bar u_{\scriptscriptstyle \rm I})\tilde T_{13}(\bar v_{\scriptscriptstyle \rm I})\tilde T_{12}(\bar v_{\scriptscriptstyle \rm I\hspace{-1pt}I}) \tilde T_{23}(\bar u_{\scriptscriptstyle \rm I\hspace{-1pt}I})\tilde\lambda_{2}(\bar u_{\scriptscriptstyle \rm I})|0\rangle. 
\end{equation} It is easy to see that equations \eqref{00-varphi}, \eqref{00v-varphi} imply $\varphi^2=1$, and hence, \be{phiBV} \varphi(|\widetilde{\Psi}_{b,a}(\bar v;\bar u)\rangle)=\varphi^2(|\Psi_{a,b}(-\bar u;-\bar v)\rangle)=|\Psi_{a,b}(-\bar u;-\bar v)\rangle. \end{equation} Thus, acting with $\varphi$ on \eqref{phiBab4} we obtain \be{phiBab1} |\Psi_{a,b}(-\bar u;-\bar v)\rangle=\sum_{\substack{\bar u\mapsto\{\bar u_{\scriptscriptstyle \rm I},\bar u_{\scriptscriptstyle \rm I\hspace{-1pt}I}\}\\ \bar v\mapsto\{\bar v_{\scriptscriptstyle \rm I},\bar v_{\scriptscriptstyle \rm I\hspace{-1pt}I}\} }} K_n(\bar u_{\scriptscriptstyle \rm I}|\bar v_{\scriptscriptstyle \rm I})f(\bar v_{\scriptscriptstyle \rm I},\bar v_{\scriptscriptstyle \rm I\hspace{-1pt}I})f(\bar u_{\scriptscriptstyle \rm I\hspace{-1pt}I},\bar u_{\scriptscriptstyle \rm I})T_{13}(-\bar v_{\scriptscriptstyle \rm I})T_{23}(-\bar v_{\scriptscriptstyle \rm I\hspace{-1pt}I}) T_{12}(-\bar u_{\scriptscriptstyle \rm I\hspace{-1pt}I})\lambda_{2}(-\bar u_{\scriptscriptstyle \rm I})|0\rangle. \end{equation} It follows from $g(-u,-v)=g(v,u)$ and $f(-u,-v)=f(v,u)$ that $K_n(-\bar v|-\bar u)= K_n(\bar u|\bar v)$. Thus, changing $\bar u\to-\bar u$ and $\bar v\to-\bar v$ in \eqref{phiBab1} we immediately arrive at \eqref{NBab}. \subsection{Commutation relations and Bethe vectors\label{SS-BCCR}} Comparing the representations \eqref{Bab} and \eqref{NBab}, we see that they involve different orderings of the operators. Nevertheless, these formulas are equivalent. Let us consider a simple but non-trivial example of the Bethe vector $|\Psi_{1,1}(u;v)\rangle$. Using \eqref{Bab} and \eqref{NBab} we respectively obtain \be{Psi11a} |\Psi_{1,1}(u;v)\rangle= T_{12}(u)T_{23}(v)|0\rangle+g(v,u)T_{13}(u)\lambda_{2}(v)|0\rangle, \end{equation} and \be{Psi11b} |\Psi_{1,1}(u;v)\rangle=T_{23}(v)T_{12}(u)|0\rangle+g(v,u)T_{13}(v)\lambda_{2}(u)|0\rangle.
\end{equation} It is easy to see that \eqref{Psi11a} and \eqref{Psi11b} do coincide. Indeed, due to the commutation relations \eqref{00-CRcomp} we have \begin{equation}\label{1223-CRcomp} [T_{12}(u),T_{23}(v)]=g(u,v)\bigl(T_{13}(u)T_{22}(v) -T_{13}(v)T_{22}(u)\bigr), \end{equation} or equivalently \begin{equation}\label{1223-CRcomp1} T_{12}(u)T_{23}(v)+g(v,u)T_{13}(u)T_{22}(v)= T_{23}(v)T_{12}(u)+g(v,u)T_{13}(v)T_{22}(u). \end{equation} Applying \eqref{1223-CRcomp1} to the vacuum vector we obtain \begin{equation}\label{1223-CRcomp2} T_{12}(u)T_{23}(v)|0\rangle+g(v,u)T_{13}(u)\lambda_{2}(v)|0\rangle= T_{23}(v)T_{12}(u)|0\rangle+g(v,u)T_{13}(v)\lambda_{2}(u)|0\rangle. \end{equation} A similar effect takes place in the case of general Bethe vectors $|\Psi_{a,b}(\bar u;\bar v)\rangle$. Namely, one can prove \cite{Sla16} that \begin{multline}\label{X-Y} \sum_{\substack{\bar u\mapsto\{\bar u_{\scriptscriptstyle \rm I},\bar u_{\scriptscriptstyle \rm I\hspace{-1pt}I}\}\\ \bar v\mapsto\{\bar v_{\scriptscriptstyle \rm I},\bar v_{\scriptscriptstyle \rm I\hspace{-1pt}I}\} }} K_n(\bar v_{\scriptscriptstyle \rm I}|\bar u_{\scriptscriptstyle \rm I}) f(\bar u_{\scriptscriptstyle \rm I},\bar u_{\scriptscriptstyle \rm I\hspace{-1pt}I}) f(\bar v_{\scriptscriptstyle \rm I\hspace{-1pt}I},\bar v_{\scriptscriptstyle \rm I})\\ \times\Bigl(T_{13}(\bar u_{\scriptscriptstyle \rm I})\,T_{12}(\bar u_{\scriptscriptstyle \rm I\hspace{-1pt}I})\,T_{23}(\bar v_{\scriptscriptstyle \rm I\hspace{-1pt}I})\,T_{22}(\bar v_{\scriptscriptstyle \rm I})- T_{13}(\bar v_{\scriptscriptstyle \rm I})\,T_{23}(\bar v_{\scriptscriptstyle \rm I\hspace{-1pt}I})\,T_{12}(\bar u_{\scriptscriptstyle \rm I\hspace{-1pt}I})\,T_{22}(\bar u_{\scriptscriptstyle \rm I})\Bigr)=0. 
\end{multline} Here the sum is taken over partitions $\bar u\mapsto\{\bar u_{\scriptscriptstyle \rm I},\bar u_{\scriptscriptstyle \rm I\hspace{-1pt}I}\}$ and $\bar v\mapsto\{\bar v_{\scriptscriptstyle \rm I},\bar v_{\scriptscriptstyle \rm I\hspace{-1pt}I}\}$ such that $\#\bar u_{\scriptscriptstyle \rm I}=\#\bar v_{\scriptscriptstyle \rm I}=n$ and $n=0,1,\dots,\min(a,b)$, where $a=\#\bar u$ and $b=\#\bar v$. On the one hand, acting with \eqref{X-Y} on $|0\rangle$ we immediately prove the equivalence of the representations \eqref{Bab} and \eqref{NBab}. On the other hand, \eqref{X-Y} can be considered as a multiple commutation relation between the products $T_{23}(\bar v)$ and $T_{12}(\bar u)$. Indeed, extracting explicitly the term $n=0$ we recast \eqref{X-Y} as follows: \begin{multline}\label{X-Y1} [T_{23}(\bar v),T_{12}(\bar u)]= \sum_{\substack{\bar u\mapsto\{\bar u_{\scriptscriptstyle \rm I},\bar u_{\scriptscriptstyle \rm I\hspace{-1pt}I}\}\\ \bar v\mapsto\{\bar v_{\scriptscriptstyle \rm I},\bar v_{\scriptscriptstyle \rm I\hspace{-1pt}I}\} \\ n>0 }} K_n(\bar v_{\scriptscriptstyle \rm I}|\bar u_{\scriptscriptstyle \rm I}) f(\bar u_{\scriptscriptstyle \rm I},\bar u_{\scriptscriptstyle \rm I\hspace{-1pt}I}) f(\bar v_{\scriptscriptstyle \rm I\hspace{-1pt}I},\bar v_{\scriptscriptstyle \rm I})\\ \times\Bigl(T_{13}(\bar u_{\scriptscriptstyle \rm I})\,T_{12}(\bar u_{\scriptscriptstyle \rm I\hspace{-1pt}I})\,T_{23}(\bar v_{\scriptscriptstyle \rm I\hspace{-1pt}I})\,T_{22}(\bar v_{\scriptscriptstyle \rm I})- T_{13}(\bar v_{\scriptscriptstyle \rm I})\,T_{23}(\bar v_{\scriptscriptstyle \rm I\hspace{-1pt}I})\,T_{12}(\bar u_{\scriptscriptstyle \rm I\hspace{-1pt}I})\,T_{22}(\bar u_{\scriptscriptstyle \rm I})\Bigr). \end{multline} Thus, we see that the explicit representations for the Bethe vectors in the $\mathfrak{gl}_3$ based models are closely related to the multiple commutation relations. 
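As a simple consistency check of \eqref{X-Y1}, set $a=b=1$. The sum then contains the single term with $n=1$, $\bar u_{\scriptscriptstyle \rm I}=u$, $\bar v_{\scriptscriptstyle \rm I}=v$, and all products of the $f$ functions over empty sets equal to $1$. Using $K_1(v|u)=g(v,u)$, equation \eqref{X-Y1} reduces to
\[
[T_{23}(v),T_{12}(u)]=g(v,u)\bigl(T_{13}(u)\,T_{22}(v)-T_{13}(v)\,T_{22}(u)\bigr),
\]
which is nothing but the commutation relation \eqref{1223-CRcomp} rewritten for the commutator $[T_{23}(v),T_{12}(u)]$, since $g(u,v)=-g(v,u)$.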
Whether this correspondence exists in the models with higher rank of symmetry is an open question. \section*{Summary} We have considered the basic principles of NABA. Our main example was the models with the $\mathfrak{gl}_3$-invariant $R$-matrix. However, in more general cases, the general scheme for obtaining off-shell and on-shell Bethe vectors persists. In particular, this scheme works in models with the $q$-deformed $R$-matrix \eqref{00-RUqglN}, as well as in models based on graded algebras \cite{KulS80,Goh02,BelRag08,HutLPRS17}. Further development of NABA is associated with the application of this method to the calculation of matrix elements of local operators and correlation functions. In these matters, however, there are still many unsolved problems. Basically, these problems are of a technical nature and involve very non-trivial representations for the Bethe vectors. These problems are partially solved in the models with the $\mathfrak{gl}_3$-invariant $R$-matrix, as well as its graded and $q$-deformed analogues \cite{PakRS15,Sla15sp,FukS17}. \section*{Acknowledgements} I am grateful to the organizers of the Les Houches summer school 2018 {\it Integrability in Atomic and Condensed Matter Physics} for their hospitality and beautiful scientific atmosphere. I also thank all listeners of the School for their attention, patience, and questions that have allowed me to improve my notes.
\section{Introduction} \hspace*{1em} As a special case of image object segmentation, object co-segmentation refers to the task of jointly discovering and segmenting the objects shared in a group of images. It has been widely used to support various computer vision applications, such as interactive image segmentation~\cite{kamranian2018iterative}, 3D reconstruction~\cite{mustafa2017semantically} and object co-localization~\cite{Wei2017Unsupervised,Han2018Robust}, to name a few. Image features that characterize the co-objects in the image group are vital for a co-segmentation task. Conventional approaches use hand-crafted cues such as color histograms, Gabor filter outputs and SIFT descriptors as feature representations~\cite{yuan2014novel,dai2013cosegmentation,lee2015multiple}. Such hand-crafted features cannot handle the challenging cases in co-segmentation well, such as background clutter and large-scale appearance variations of the co-objects in images. In recent years, deep-learning-based co-segmentation methods have attracted much attention. For example,~\cite{li2018deep,chen2018semantic} leverage a Siamese network architecture for object co-segmentation and use an attention mechanism to enhance the co-object feature representations. These methods have shown superior performance compared to the traditional methods~\cite{yuan2014novel,dai2013cosegmentation,lee2015multiple}, inspiring us to explore a deep-learning-based solution to object co-segmentation. \begin{figure}[t] \centering \includegraphics[width=0.9\columnwidth]{newnewfigure1.pdf} \caption{Object co-segmentation examples by our approach. (a) Horse group; (b) Horse group co-segmentation maps.} \label{colocosegexample} \end{figure} One critical property of object co-segmentation is that the co-objects in images belong to the same semantic category. Those co-objects usually occupy part of each image. One illustrative example is shown in Figure~\ref{colocosegexample}.
It is desirable that the deep convolutional layers used as a feature extractor be targeted at modelling the co-objects. To this end, we propose a spatial-semantic modulated network structure to model this property. The two modulators are realized by a group-wise mask learning branch and a co-category classification branch, respectively. We summarize the technical contributions of this work as follows: \begin{quote} \begin{itemize} \item We propose a spatial-semantic modulated deep network for object co-segmentation. Image features extracted by a backbone network are used to learn a spatial modulator and a semantic modulator. The outputs of the modulators guide the up-sampling of the image features to generate the co-segmentation results. The network parameter learning is formulated as a multi-task learning problem, and the whole network is trained in an end-to-end manner. \item For the spatial modulation branch, an unsupervised learning method is proposed to learn a mask for each image. With the fused multi-resolution image features as input, we formulate the mask learning as an integer programming problem. Its continuous relaxation has a closed-form solution. Each learned value indicates whether the corresponding image pixel belongs to the foreground or the background. \item In the semantic modulation branch, we design a hierarchical second-order pooling (HSP) operator to transform the convolutional features for object classification. Second-order pooling (SP) has been shown to capture high-order feature statistical dependencies~\cite{gao2019global}. The proposed HSP module has a stack of two SP layers. They are dedicated to capturing the long-range channel-wise dependency of the holistic feature representation. The output of the HSP layer is fed into a fully-connected layer for object classification and used as the semantic modulator.
\end{itemize} \end{quote} We conduct extensive evaluations on four object co-segmentation benchmark datasets~\cite{faktor2013co,rubinstein2013unsupervised}: the sub-set of MSRC, Internet, the sub-set of iCoseg, and PASCAL-VOC. The proposed model achieves significantly higher accuracy than state-of-the-art methods. In particular, on the most challenging PASCAL-VOC dataset, our method outperforms the second best-performing state-of-the-art approach~\cite{li2018deep} by $6\%$ in terms of average Jaccard index $\mathcal{J}$. The rest of this work is organized as follows: \S~\ref{sec:relatedwork} reviews work related to our study. \S~\ref{sec:proposedapproach} describes the proposed framework and its main components. Comprehensive experimental evaluations are presented in \S~\ref{sec:experiments}. Finally, we conclude this work in \S~\ref{sec:conclusions}. \section{Related Work} \label{sec:relatedwork} \subsection{Object Co-segmentation} \hspace*{1em} A more comprehensive literature review on image co-segmentation can be found in~\cite{zhu2016beyond}. Existing object co-segmentation methods can be roughly grouped into four categories: graph-based, saliency-based, joint-processing and deep-learning models. Conventional approaches such as \cite{yuan2014novel,collins2012random,lee2015multiple} assume that the pixels or superpixels of the co-objects can be grouped together, and therefore formulate co-segmentation as a clustering task that searches for the co-objects. Saliency-detection-based methods assume that the regions of interest in the images are usually the co-objects to be segmented; they conduct image co-segmentation by detecting the regions that attract human attention the most. Representative models include~\cite{tsai2018image,zhang2019co,lu2019survey}.
The work in~\cite{dai2013cosegmentation,jerripothula2017object} employs a coupled framework for the co-skeletonization and co-segmentation tasks, so that the two tasks are well informed by and benefit each other synergistically. Such joint processing exploits the inherent interdependencies of the two tasks to achieve better results. Recently, \cite{li2018deep,chen2018semantic} respectively propose end-to-end deep-learning-based methods for object co-segmentation using a Siamese encoder-decoder architecture and a semantic-aware attention mechanism. \subsection{Network Modulation} \hspace*{1em} Network modulation has proven to be an effective way to steer network parameter learning. A modulator can be modelled as parameters or outputs of an auxiliary branch that guide the parameter learning of the main branch. In the segmentation method~\cite{dai2015convolutional}, an image mask is used as a modulator for background removal. In the Mask R-CNN model~\cite{he2017mask}, a classification branch is used to guide the learning of the segmentation branch. Feature-wise linear modulation is a widely-used scheme, which has been applied to object detection~\cite{Lin2017Feature} and graph neural network learning~\cite{brockschmidt}. In visual reasoning problems, network modulation is used to encode the language information~\cite{de2017modulating,perez2018film}. The attention module in the image caption model~\cite{chen2017sca} can be viewed as a modulator. \cite{yang2018efficient} proposes to model the visual and spatial information by a modulator for video object segmentation. In~\cite{flores2019saliency}, a saliency detection branch is added to an existing CNN architecture as a modulator for fine-grained object recognition. A cross-modulation mechanism is proposed in~\cite{prol2018cross} for few-shot learning. \begin{figure*}[t] \centering \includegraphics[width=1\textwidth]{network.pdf} \caption{Overview of the proposed object co-segmentation framework.
Firstly, a group of images is fed into the backbone network to yield a set of multi-resolution CFMs. Then, the CFMs are modulated by a group of spatial heatmaps and a feature channel selector vector. The former is generated by a clustering approach that captures the coarse localizations of the co-objects in the images. Under the supervision of co-category labels, the latter is obtained by learning a group-wise semantic representation that indicates the importance of the feature channels. Finally, the multi-resolution modulated CFMs are fused in a way similar to the feature pyramid network (FPN)~\cite{Lin2017Feature} to produce the co-segmentation maps. `\textbf{conv}', `\textbf{FC}', `\textbf{up}' and `\textbf{down}' are short for convolutional, fully-connected, upsampling and downsampling layers, respectively.} \label{network} \end{figure*} \section{Proposed Approach} \label{sec:proposedapproach} \subsection{Problem Formulation} \hspace*{1em} Figure~\ref{network} presents an overview of our model. Given a group of $N$ images $\mathcal{I}=\{I^n\}_{n=1}^N$ containing co-objects of a specific category, our objective is to learn a feed-forward network $f$ that produces a set of object co-segmentation masks $\mathcal{M}=\{\textit{\textbf{M}}^n\}_{n=1}^N$: \begin{equation} \mathcal{M} = f(\mathcal{I};\bm{\theta}), \end{equation} where $\bm{\theta}$ denotes the network parameters to be optimized. The network $f$ is composed of three sub-networks: spatial modulation sub-net $f_{spa}$, semantic modulation sub-net $f_{sem}$ and segmentation sub-net $f_{seg}$. The renowned SPP-Net~\cite{he2015spatial} has shown that the convolutional feature maps (CFMs) for object recognition encode both spatial layouts of objects (by their positions) and the semantics (by strengths of their activations). Inspired by this model, we design $f_{spa}$ and $f_{sem}$ to encode the spatial and semantic information of the co-objects in $\mathcal{I}$, respectively.
The two modulators guide the learning of the convolutional layers in $f_{seg}$ to focus on the co-objects in the images. Specifically, the sub-net $f_{spa}$ learns a mask for each image to coarsely localize the co-object in it. Given the input CFMs $\{{\bm\varphi}(I^n)\}_{n=1}^N$ produced by fusing all the output CFMs of our backbone network, the sub-net $f_{spa}$ produces a set of spatial masks $\mathcal{S}=\{\textit{\textbf{S}}^n\in \Re^{w\times h}\}_{n=1}^N$ with width $w$ and height $h$: \begin{equation} \label{eq:spa modulator} \mathcal{S}=f_{spa}(\{{\bm\varphi}(I^n)\}_{n=1}^N;\bm{\theta}_{spa}), \end{equation} where $\bm{\theta}_{spa}$ denotes the corresponding network parameters to be optimized. Although the coarse spatial layout information of the co-objects in all images can be embedded into $\mathcal{S}$ in (\ref{eq:spa modulator}), the high-level semantic information that is essential to differentiate co-objects from distractors is not transferred into $\mathcal{S}$. To address this issue, we further propose $f_{sem}$ as a complement. The sub-net $f_{sem}$ learns a channel selector vector $\bm{\gamma}\in \Re^d$ with $d$ channels. The entries of $\bm{\gamma}$ indicate the importance of the feature channels, that is, \begin{equation} \label{eq:semantic modulator} \bm{\gamma}=f_{sem}(\{{\bm\phi}(I^n)\}_{n=1}^N;\bm{\theta}_{sem}), \end{equation} where $\bm{\phi}$ denotes the output CFMs with the lowest resolution generated by our backbone network, and $\bm{\theta}_{sem}$ is the corresponding sub-net parameters to be learned. $\bm\gamma$ is optimized using the co-category labels as supervision. Finally, we use the spatial and the semantic modulators as guidance to segment the co-object regions in each image $I^n$: \begin{equation} \label{eq:seg} \textit{\textbf{M}}^n = f_{seg}(I^n,\textit{\textbf{S}}^n,\bm\gamma;\bm\theta_{seg}), \end{equation} where $\bm\theta_{seg}$ denotes the parameters of the segmentation sub-net.
To be specific, we transfer the spatial and semantic guidance $\{\mathcal{S},\bm\gamma\}$ into $f_{seg}$ using a simple shift-and-scale operation on the input CFMs of $f_{seg}$: for each image $I^n\in \mathcal{I}$, its modulated feature maps are formulated as \begin{equation} \label{eq:modulator} \textit{\textbf{Y}}^n_c = \gamma_c\textit{\textbf{X}}^n_c+\textit{\textbf{S}}^n, c=1,\ldots,d, \end{equation} where $\textit{\textbf{X}}^n_c$, $\textit{\textbf{Y}}^n_c\in \Re^{w\times h}$ are the input and output CFMs in the $c_{th}$ channel, and $\gamma_c$ is the $c_{th}$ element of $\bm{\gamma}$. \begin{figure}[t] \centering \includegraphics[width=0.4\textwidth]{clustering.pdf} \caption{Schematic illustration of the proposed clustering objective for the spatial modulator. The objective considers the mutual effects between any two samples, pulling the samples of the same cluster together while pushing away the samples of different clusters.} \label{fig:cluttering} \end{figure} \subsection{Spatial Modulator} \hspace*{1em} In the sub-net $f_{spa}$ (\ref{eq:spa modulator}), the $i$-th channel feature $\textit{\textbf{x}}_i^n\in\Re^d$ of the input ${\bm\varphi}(I^n)\in\Re^{w\times h\times d}$ represents a corresponding local region in $I^n$. For clarity, we denote all the channel feature representations of $\mathcal{I}$ by $\mathcal{X}=\{\textit{\textbf{x}}_i\in\Re^d\}_{i=1}^{whN}$. The sub-net $f_{spa}$ aims at partitioning the data points in $\mathcal{X}$ into two classes $\mathcal{C}_f$, $\mathcal{C}_b$ of foreground and background. However, if $f_{spa}$ were trained in a supervised manner on a fixed set of categories, it could not generalize well to unseen categories. To this end, we propose a simple yet effective clustering approach to partitioning $\mathcal{X}$ into two clusters $\mathcal{C}_f$, $\mathcal{C}_b$ without knowing the object semantic categories.
Our unsupervised method can highlight category-agnostic co-object regions in images and hence generalizes better to unseen categories. As shown in Figure~\ref{fig:cluttering}, this can be achieved by maximizing all the distances between the foreground and the background samples while minimizing all the distances between the foreground samples and between the background samples, respectively. To this end, we define the clustering objective as follows: \begin{equation} \label{eq:spacluter} \min\{\ell_{spa}=-2\sum_{i\in \mathcal{C}_f,j\in\mathcal{C}_b}d_{ij}+\sum_{i,j\in \mathcal{C}_f}d_{ij}+\sum_{i,j\in \mathcal{C}_b}d_{ij}\}, \end{equation} where $d_{ij}=\|\textit{\textbf{x}}_i-\textit{\textbf{x}}_j\|_2^2$ is the squared Euclidean distance between samples $i$ and $j$. Since we use normalized channel features satisfying $\|\textit{\textbf{x}}_i\|_2^2=1$, $d_{ij}$ can be reformulated as \begin{equation} \label{eq:dij} d_{ij} = 2-2\textit{\textbf{x}}_i^\top \textit{\textbf{x}}_j. \end{equation} Using a cluster indicator vector $\textit{\textbf{s}}=[s_1,\ldots,s_{whN}]^\top$ subject to $\|\textit{\textbf{s}}\|_2^2=1$, where $s_i=1/\sqrt{whN}$ if $i\in \mathcal{C}_f$ and $s_i=-1/\sqrt{whN}$ if $i\in \mathcal{C}_b$, the loss function $\ell_{spa}$ in (\ref{eq:spacluter}) can be reformulated as \begin{equation} \label{eq:sparegularizor} \ell_{spa}(\textit{\textbf{s}}) = whN\textit{\textbf{s}}^\top \textit{\textbf{D}}\textit{\textbf{s}}, \end{equation} where the $(i,j)$-th entry of $\textit{\textbf{D}}$ is $d_{ij}$.
Putting (\ref{eq:dij}) into (\ref{eq:sparegularizor}) and removing the trivial constant $whN$, $\ell_{spa}$ can be reformulated as \begin{equation} \label{eq:spaloss} \ell_{spa}(\textit{\textbf{s}}) = -\textit{\textbf{s}}^\top\textit{\textbf{G}}\textit{\textbf{s}}, \end{equation} where $\textit{\textbf{G}}=\textit{\textbf{X}}^\top\textit{\textbf{X}}-\textit{\textbf{1}}$ with $\textit{\textbf{X}}=[\textit{\textbf{x}}_1,\ldots,\textit{\textbf{x}}_{whN}]$, and $\textit{\textbf{1}}$ denotes an all-ones matrix. Relaxing the elements of $\textit{\textbf{s}}$ from binary indicator values to continuous values in $[-1,1]$ subject to $\|\textit{\textbf{s}}\|_2^2=1$, the solution $\widehat{\textit{\textbf{s}}}=\arg\min_{\textit{\textbf{s}}}\ell_{spa}(\textit{\textbf{s}})$ satisfies~\cite{Ding2004K} \begin{equation} \textit{\textbf{G}}\widehat{\textit{\textbf{s}}}=\lambda_{max}\widehat{\textit{\textbf{s}}}, \end{equation} where $\lambda_{max}$ denotes the maximum eigenvalue of $\textit{\textbf{G}}$, and its corresponding eigenvector is $\widehat{\textit{\textbf{s}}}\in \Re^{whN}$. The optimal solution $\widehat{\textit{\textbf{s}}}$ is then reshaped into a set of $N$ spatial masks $\{\widehat{\textit{\textbf{S}}}^n\in\Re^{w\times h}\}_{n=1}^N$ used as the spatial guidance in (\ref{eq:modulator}). \begin{figure}[t] \centering \includegraphics[width=0.88\columnwidth]{paperfigure4.pdf} \caption{Illustration of the SP and the HSP. The sub-net $f_{sem}$ (\ref{eq:semantic modulator}) is composed of the HSP module.} \label{fig:semanticmodulator} \end{figure} \subsection{Semantic Modulator} \hspace*{1em} Figure~\ref{fig:semanticmodulator} shows the diagram of the key modules in the sub-net $f_{sem}$ (\ref{eq:semantic modulator}), including the SP and the HSP.
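Before turning to the semantic modulator, note that the spatial modulator above reduces to a single eigen-decomposition. The following NumPy sketch illustrates this closed form (array shapes and function names are ours, not from any released code):

```python
import numpy as np

def spatial_masks(X, N, w, h):
    """Spatial modulator: relaxed two-way clustering in closed form.

    X : (d, w*h*N) array whose columns are l2-normalized channel features.
    Returns N coarse masks of shape (w, h), obtained by reshaping the
    eigenvector of G = X^T X - 1 associated with the largest eigenvalue.
    """
    m = X.shape[1]
    G = X.T @ X - np.ones((m, m))   # Gram matrix minus the all-ones matrix
    # G is symmetric, so eigh applies; eigenvalues are in ascending order.
    _, vecs = np.linalg.eigh(G)
    s_hat = vecs[:, -1]             # maximizes s^T G s under ||s||_2 = 1
    return s_hat.reshape(N, w, h)

# toy usage: 5 images, a 7x7 feature grid, 16-dim channel features
X = np.random.randn(16, 7 * 7 * 5)
X /= np.linalg.norm(X, axis=0, keepdims=True)
S = spatial_masks(X, N=5, w=7, h=7)
```

Positive entries of each mask would be read as foreground; the sign ambiguity of an eigenvector would be resolved in practice, e.g. by assuming the foreground occupies the minority of pixels.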
The SP exploits the high-order statistics of the holistic representation to enhance the non-linear representative capability of the learned model~\cite{gao2019global}, while the HSP can capture the long-range dependency along the channel dimension of the group-wise feature tensors, paying more attention to the channels that are important for the classification task under the supervision of co-category labels. \textbf{SP:} Given an input feature tensor $\bm\phi\in \Re^{w\times h\times d}$, we first apply a $1\times 1$ convolution to reduce the number of channels from $d$ to $c$, lowering the computational cost of the subsequent operations. Then, we compute pairwise channel correlations of the $w\times h\times c$ tensor to yield a $c \times c$ covariance matrix. Each entry in the $c \times c$ covariance matrix measures the relevance between the feature maps of two channels; this quadratic operator models high-order statistics of the holistic representation and hence enhances the non-linear modeling capability. Afterwards, we use an FC layer to transform the $c\times c$ covariance matrix into a $1\times 1\times d$ tensor that indicates the feature channel importance. \textbf{HSP:} For each image $I^n\in \mathcal{I}$, its feature tensor $\bm\phi(I^n)$ is fed into an SP layer, outputting a $1\times 1\times d$ indicator tensor. Then, all the indicator tensors are concatenated vertically to yield a group-wise semantic representation, which is again fed into an SP layer to capture the long-range dependency along the channel dimension of the group-wise semantic representation, yielding an indicator vector $\bm\gamma$ that steers attention to the channels that are essential for co-category classification.
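To make the SP operator concrete, the following NumPy sketch maps one feature tensor to a channel-importance vector. It is our own simplification: the channel-reducing $1\times1$ convolution is assumed to have been applied already, the FC weights `W_fc` are a stand-in, and details such as mean-centering or covariance normalization are not specified in the text:

```python
import numpy as np

def second_order_pool(phi, W_fc):
    """SP: channel-wise covariance pooling followed by an FC projection.

    phi  : (w, h, c) feature tensor (channels already reduced to c).
    W_fc : (d, c*c) FC weights projecting the covariance to d channels.
    Returns a (d,) channel-importance vector.
    """
    w, h, c = phi.shape
    F = phi.reshape(w * h, c)       # treat spatial positions as samples
    cov = F.T @ F / (w * h)         # (c, c) second-order channel statistics
    return W_fc @ cov.reshape(-1)   # flatten and project to (d,)

# toy usage with hypothetical sizes
phi = np.random.randn(7, 7, 8)
W_fc = np.random.randn(32, 8 * 8) * 0.01
gamma = second_order_pool(phi, W_fc)
```

In the HSP, this operator would be applied once per image and then once more across the concatenated per-image outputs.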
\textbf{Loss:} The output $\bm\gamma$ of $f_{sem}$ is followed by an FC layer and a sigmoid layer, yielding a classifier response \begin{equation} \label{eq:cls} \widehat{\textit{\textbf{y}}}=sigmoid(\textit{\textbf{W}}\bm\gamma+\textit{\textbf{b}}), \end{equation} where $\textit{\textbf{W}}\in\Re^{L\times d}$ and $\textit{\textbf{b}}\in\Re^{L}$ are the parameters of the FC layer, and $L$ denotes the number of co-categories in the training set. The widely used cross-entropy loss function for classification is adopted to learn the indicator $\bm\gamma$ in (\ref{eq:cls}): \begin{equation} \label{eq:semloss} \ell_{sem} = - \frac{1}{L} \sum\limits_{l=1}^{L} \bigl[ y_{l} \log \widehat{y}_{l} + (1 - y_{l}) \log (1 - \widehat{y}_{l}) \bigr], \end{equation} where $\widehat{y}_{l}$ is the $l$-th entry of $\widehat{\textit{\textbf{y}}}$, i.e., the prediction for the $l$-th co-category, and $y_l\in\{0,1\}$ is the ground-truth label. \subsection{Segmentation Sub-net} \hspace*{1em} Given the input group-wise CFMs $\{\textit{\textbf{X}}^n\}_{n=1}^N$ of the images $\mathcal{I}$, the sub-net $f_{seg}$ is modulated by the outputs $\{\mathcal{S},\bm\gamma\}$ of $f_{spa}$ and $f_{sem}$, yielding a group of modulated representations $\{\textit{\textbf{Y}}^n\}_{n=1}^N$ via (\ref{eq:modulator}). Each $\textit{\textbf{Y}}^n$ is composed of a group of multi-resolution representations $\{\textit{\textbf{R}}_i^n\}_{i=1}^4$. Similar to the FPN~\cite{Lin2017Feature}, we fuse $\{\textit{\textbf{R}}_i^n\}_{i=1}^4$ from coarse to fine: for the coarser-resolution feature maps, we use a $1\times 1$ convolution layer to make the channel number equal to that of the corresponding top-down ones, followed by an upsampling layer to make their spatial resolutions the same. Then, the upsampled maps are merged with the corresponding top-down ones via element-wise addition.
The process is repeated until the finest-resolution maps are generated as $\textit{\textbf{R}}^n=\textit{\textbf{R}}_1^n\oplus\cdots\oplus\textit{\textbf{R}}_{4}^n$. Finally, the maps $\textit{\textbf{R}}^n$ are fed into a convolutional layer, followed by a $1\times 1$ convolutional layer and an upsampling layer, to generate the corresponding segmentation mask $\widehat{\textit{\textbf{M}}}^n$. Denoting the ground-truth binary co-segmentation masks in the training image group as $\mathcal{M}_{gt} =\{\textit{\textbf{M}}_{gt}^n\}_{n=1}^{N}$, the loss function for the segmentation task is formulated as a weighted cross-entropy loss for pixel-wise classification: \begin{align} \label{eq:segloss} \ell_{seg} =&- \frac{1}{NP} \sum_{n=1}^N\sum_{i=1}^{P}\{\delta^n \textit{\textbf{M}}_{gt}^{n}(i)\log \widehat{\textit{\textbf{M}}}^{n}(i)\notag\\&+ (1-\delta^n)(1-\textit{\textbf{M}}_{gt}^{n}(i))\log (1-\widehat{\textit{\textbf{M}}}^{n}(i))\}, \end{align} where $P$ is the number of pixels in each training image, $i$ denotes the pixel index, and $\delta^n$ is the ratio between all positive pixels and all pixels in image $I^n$, which balances the positive and negative samples. \subsection{Loss Function} \hspace*{1em} The three sub-nets $f_{spa}$, $f_{sem}$ and $f_{seg}$ are trained jointly by optimizing the following multi-task loss function: \begin{equation} \label{eq:loss} \ell = \ell_{spa}+\ell_{sem}+\ell_{seg}, \end{equation} where $\ell_{spa}$, $\ell_{sem}$ and $\ell_{seg}$ are defined by (\ref{eq:spaloss}), (\ref{eq:semloss}) and (\ref{eq:segloss}), respectively. \section{Experiments} \label{sec:experiments} \subsection{Implementation Details} \hspace*{1em} We leverage the HRNet \cite{sun2019deep} pre-trained on ImageNet \cite{deng2009imagenet} as the backbone network to extract the multi-resolution semantic features.
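As an aside, the class-balanced segmentation loss above can be sketched per image as follows. This is a NumPy illustration under our reading of the weighting (the foreground ratio $\delta$ weights positive pixels, $1-\delta$ weights negative pixels); in practice a framework's built-in weighted cross-entropy would be used:

```python
import numpy as np

def balanced_seg_loss(M_hat, M_gt, eps=1e-7):
    """Pixel-wise cross-entropy with a class-balancing weight delta.

    M_hat : (H, W) predicted foreground probabilities in (0, 1).
    M_gt  : (H, W) binary ground-truth mask.
    delta is the fraction of positive (foreground) pixels in the image.
    """
    delta = M_gt.mean()
    M_hat = np.clip(M_hat, eps, 1.0 - eps)   # numerical safety for log
    loss = -(delta * M_gt * np.log(M_hat)
             + (1.0 - delta) * (1.0 - M_gt) * np.log(1.0 - M_hat))
    return loss.mean()

# toy check: a confident correct prediction scores lower than a wrong one
M_gt = np.array([[1.0, 0.0], [0.0, 1.0]])
good = balanced_seg_loss(np.array([[0.9, 0.1], [0.1, 0.9]]), M_gt)
bad = balanced_seg_loss(np.array([[0.1, 0.9], [0.9, 0.1]]), M_gt)
```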
Moreover, we also report the results of using the VGG16 backbone network~\cite{simonyan2014very}, which still demonstrates competitive performance against state-of-the-art methods. Except for using the pretrained backbone network parameters as initialization, all other parameters are trained from scratch. We follow the same settings as \cite{wei2017group,wang2019robust}: the input image group $\mathcal{I}$ consists of $N=5$ images randomly selected from a group of images sharing a co-object category, and a mini-batch of $4\times \mathcal{I}$ is fed into the model simultaneously during training. All images in $\mathcal{I}$ are resized to $224\times 224$ as input, and the predicted co-segmentation maps are resized back to the original image sizes as outputs. We leverage the Adam algorithm \cite{kingma2014adam} to optimize the whole network in an end-to-end manner, where the exponential decay rates for estimating the first and second moments are set to $0.9$ and $0.999$, respectively. The learning rate starts from 1e-4 and is halved every $25,000$ steps until the model converges at about $200,000$ steps. Our model is implemented in PyTorch and an Nvidia RTX $2080$Ti GPU is adopted for acceleration. We adopt the COCO-SEG dataset released by~\cite{wang2019robust} to train our model. The dataset contains $200,000$ images belonging to $L=78$ groups, and each image has a manually labeled binary mask with co-category label information. The training process takes about $40$ hours. \subsection{Datasets and Evaluation Metrics} ~~\textbf{Datasets:} We conduct extensive evaluations on four widely-used benchmark datasets~\cite{faktor2013co,rubinstein2013unsupervised}: the sub-set of MSRC, Internet, the sub-set of iCoseg, and PASCAL-VOC. Among them, the sub-set of MSRC includes $7$ classes: bird, car, cat, cow, dog, plane, sheep, and each class contains $10$ images. The Internet has $3$ categories: airplane, car and horse.
Each class has $100$ images including some images with noisy labels. The sub-set of iCoseg contains $8$ categories, and each has a different number of images. The PASCAL-VOC is the most challenging dataset with $1,037$ images of $20$ categories selected from the PASCAL-VOC 2010 dataset~\cite{Everingham10}. \textbf{Evaluation Metrics:} We adopt two widely-used metrics to evaluate the co-segmentation results, including the \textit{precision} $\mathcal{P}$ and the \textit{Jaccard index} $\mathcal{J} $. The precision $\mathcal{P}$ measures the percentage of the correctly segmented pixels for both foreground and background, while the Jaccard index $\mathcal{J}$ is defined as the intersection area of the predicted foreground objects and the ground truth divided by their union area. \renewcommand\arraystretch{1.2} \begin{table}[t] \caption{Quantitative comparison results on the sub-set of MSRC. The bold numbers indicate the best results. }\smallskip \centering \resizebox{.95\columnwidth}{!}{ \smallskip\begin{tabular}{|l||c|c|} \hline \multicolumn{1}{|c||}{MSRC} & Ave. $\mathcal{P} ($\%$)$ &Ave. $\mathcal{J}$ ($\%$)\\ \hline \cite{vicente2011object} & 90.2 & 70.6 \\ \cite{rubinstein2013unsupervised} & 92.2 & 74.7 \\ \cite{wang2013image} & 92.2 & - \\ \cite{faktor2013co} & 92.0 & 77.0 \\ \cite{mukherjee2018object} & 84.0 & 67.0 \\ \cite{li2018deep} & 92.4 & 79.9 \\ \cite{chen2018semantic} & \textbf{95.2} & 77.7\\ \hline\hline Ours-VGG16 & 94.3 & 79.4 \\ Ours-HRNet & \textbf{95.2} & \textbf{81.9} \\ \hline \end{tabular} } \label{MSRC} \end{table} \renewcommand\arraystretch{1.2} \begin{table}[t] \caption{Quantitative comparison results on the Internet. The bold numbers indicate the best results. }\smallskip \centering \resizebox{1\columnwidth}{!}{ \smallskip\begin{tabular}{|l||cc|cc|cc|} \hline \multirow{2}{*}{~~~~~~~~~~~~~~~~~~~~~Internet} & \multicolumn{2}{|c|}{Airplane} & \multicolumn{2}{c|}{Car} & \multicolumn{2}{c|}{Horse} \\ & Ave. $\mathcal{P}$ ($\%$) & Ave. 
$\mathcal{J}$ ($\%$) & Ave. $\mathcal{P}$ ($\%$) & Ave. $\mathcal{J}$ ($\%$) & Ave. $\mathcal{P}$ ($\%$) & Ave. $\mathcal{J}$ ($\%$) \\ \hline \cite{joulin2012multi} & 47.5 & 11.7 & 59.2 & 35.2 & 64.2 & 29.5\\ \cite{rubinstein2013unsupervised} & 88.0 & 55.8 & 85.4 & 64.4 & 82.8 & 51.6\\ \cite{chen2014enriching} & 90.2 & 40.3 & 87.6 & 64.9 & 86.2 & 33.4\\ \cite{jerripothula2016image} & 90.5 & 61.0 & 88.0 & 71.0 & 88.3 & 60.0\\ \cite{quan2016object} & 91.0 & 56.3 & 88.5 & 66.8 & 89.3 & 58.1\\ \cite{sun2016learning} & 88.6 & 36.3 & 87.0 & 73.4 & 87.6 & 54.7\\ \cite{tao2017image} & 79.8 & 42.8 & 84.8 & 66.4 & 85.7 & 55.3\\ \cite{yuan2017deep} & 92.6 & 66.0 & 90.4 & 72.0 & 90.2 & 65.0\\ \cite{li2018deep} & 94.1 & 65.4 & 93.9 & \textbf{82.8} & 92.4 & 69.4\\ \cite{chen2018semantic} & - & 65.9 & - & 76.9 & - & 69.1\\ \cite{MaCoSNet} & 94.1 & 65.0 & \textbf{94.0} & 82.0 & 92.2 & 63.0\\ \hline\hline Ours-VGG16 & 94.6 & 66.7 & 89.7 & 68.1 & 93.2 & 66.2\\ Ours-HRNet & \textbf{94.8} & \textbf{69.6} & 91.6 & 82.5 & \textbf{94.4} & \textbf{70.2}\\ \hline \end{tabular} } \label{Internet} \end{table} \renewcommand\arraystretch{1.3} \begin{table*}[t] \caption{Quantitative comparison results on the sub-set of iCoseg. The bold numbers indicate the best results.}\smallskip \centering \resizebox{2.0\columnwidth}{!}{ \smallskip\begin{tabular}{|l||c|cccccccc|} \hline \multicolumn{1}{|c||}{iCoseg} & Ave. 
$\mathcal{J}$ ($\%$) & bear2 & brownbear & cheetah & elephant & helicopter & hotballoon & panda1 & panda2 \\ \hline \cite{rubinstein2013unsupervised} & 70.2 & 65.3 & 73.6 & 69.7 & 68.8 & 80.3 & 65.7 & 75.9 & 62.5\\ \cite{jerripothula2014automatic} & 73.8 & 70.1 & 66.2 & 75.4 & 73.5 & 76.6 & 76.3 & 80.6 & 71.8\\ \cite{faktor2013co} & 78.2 & 72.0 & 92.0 & 67.0 & 67.0 & \textbf{82.0} & 88.0 & 70.0 & 55.0\\ \cite{jerripothula2016image} & 70.4 & 67.5 & 72.5 & 78.0 & 79.9 & 80.0 & 80.2 & 72.2 & 61.4\\ \cite{li2018deep} & 84.2 & 88.3 & \textbf{92.0} & 68.8 & 84.6 & 79.0 & 91.7 & 82.6 & 86.7\\ \cite{chen2018semantic} & 86.0 & 88.3 & 91.5 & 71.3 & 84.4 & 76.5 & 94.0 & \textbf{91.8} & \textbf{90.3}\\ \hline\hline Ours-VGG16 & 88.0 & 87.4 & 90.3 & 84.9 & 90.6 & 76.6 & 94.1 & 90.6 & 87.5\\ Ours-HRNet & \textbf{89.2} & \textbf{91.1} & 89.6 & \textbf{88.6} & \textbf{90.9} & 76.4 & \textbf{94.2}& 90.4 & 87.5\\ \hline \end{tabular} } \label{iCoseg} \end{table*} \renewcommand\arraystretch{1.3} \begin{table*}[t] \caption{Quantitative comparison results on the PASCAL-VOC. The bold numbers indicate the best results.}\smallskip \centering \resizebox{2.1\columnwidth}{!}{ \smallskip\begin{tabular}{|l||c|c|cccccccccccccccccccc|} \hline \multicolumn{1}{|c||}{PASCAL-VOC} & Ave. $\mathcal{P}$ ($\%$) & Ave. $\mathcal{J}$ ($\%$) & A.P. & Bike & Bird & Boat & Bottle & Bus & Car & Cat & Chair & Cow & D.T. & Dog & Horse & M.B. & P.S. & P.P. 
& Sheep & Sofa & Train & TV\\ \hline \cite{faktor2013co} & 84.0 & 46 & 65 & 14 & 49 & 47 & 44 & 61 & 55 & 49 & 20 & 59 & 22 & 39 & 52 & 51 & 31 & 27 & 51 & 32 & 55 & 35\\ \cite{lee2015multiple} & 69.8 & 33 & 50 & 15 & 29 & 37 & 27 & 55 & 35 & 34 & 13 & 40 & 10 & 37 & 49 & 44 & 24 & 21 & 51 & 30 & 42 & 16\\ \cite{chang2015optimizing} &82.4 & 29 & 48 & 9 & 32 & 32 & 21 & 34 & 42 & 35 & 13 & 50 & 6 & 22 & 37 & 39 & 19 & 17 & 41 & 21 & 41 & 18\\ \cite{quan2016object} & 89.0 & 52 & - & - & - & - & - & - & - & - & - & - & - & - & - & - & - & - & - & - & - & - \\ \cite{hati2016image} & 72.5 & 25 & 44 & 13 & 26 & 31 & 28 & 33 & 26 & 29 & 14 & 24 & 11 & 27 & 23 & 22 & 18 & 17 & 33 & 27 & 26 & 25\\ \cite{jerripothula2016image} & 85.2 & 45 & 64 & 20 & 54 & 48 & 42 & 64 & 55 & 57 & 21 & 61 & 19 & 49 & 57 & 50 & 34 & 28 & 53 & 39 & 56 & 38\\ \cite{jerripothula2017object} & 80.1 & 40 & 53 & 14 & 47 & 43 & 42 & 62 & 50 & 49 & 20 & 56 & 13 & 38 & 50 & 45 & 29 & 26 & 40 & 37 & 51 & 37\\ \cite{wang2017multiple} & 84.3 & 52 & 75 & 26 & 53 & 59 & 51 & 70 & 59 & 70 & 35 & 63 & 26 & 56 & 63 & 59 & 35 & 28 & 67 & 52 & 52 & 48\\ \cite{li2018deep} & 94.2 & 65 & - & - & - & - & - & - & - & - & - & - & - & - & - & - & - & - & - & - & - & - \\ \cite{hsu2018co} & 91.0 & 60 & 77 & 27 & 70 & 61 & 58 & 79 & 76 & 79 & 29 & 75 & \textbf{28} & 63 & 66 & 65 & 37 & 42 & 75 & 67 & 68 & 51 \\ \hline\hline Ours-VGG16 & 93.7 & 66 & \textbf{83} & 35 & \textbf{75} & 69 & 58 & 87 & 77 & \textbf{80} & 26 & 86 & 7 & 74 & 79 & 71 & 45 & 39 & 81 & 68 & 83 & 59 \\ Ours-HRNet & \textbf{94.9} & \textbf{71} & 82 & \textbf{37} & 74 & \textbf{70} & \textbf{67}& \textbf{88} & \textbf{82} & 77 & \textbf{36} & \textbf{87} & 15 & \textbf{75} & \textbf{82} & \textbf{72} & \textbf{58} & \textbf{46} & \textbf{82} & \textbf{77} & \textbf{84} & \textbf{69}\\ \hline \end{tabular} } \label{PASCALVOC} \end{table*} \subsection{Results} \hspace*{1em} We quantitatively and qualitatively compare our algorithm with several 
state-of-the-art co-segmentation methods on the four benchmark datasets. \textbf{Quantitative Results:} Tables \ref{MSRC}, \ref{Internet}, \ref{iCoseg}, \ref{PASCALVOC} list the comparison results of our method against other state-of-the-art methods on the sub-set of MSRC, Internet, sub-set of iCoseg and PASCAL-VOC. For fair comparisons, the reported results of the compared methods are directly obtained from their publications. We can observe that our algorithm outperforms the other state-of-the-art methods in terms of both metrics on most object categories in each dataset. Especially on the PASCAL-VOC, which has the most challenging scenarios, the proposed algorithm achieves the best average $\mathcal{P}$ and average $\mathcal{J}$ scores of $94.9\%$ and $71\%$, respectively, outperforming the others by a large margin. Moreover, on the sub-sets of MSRC and iCoseg, our method achieves average $\mathcal{J}$ scores of $81.9\%$ and $89.2\%$, outperforming the others by about $3\%$. Besides, on the Internet, our algorithm achieves the best performance on the airplane and horse categories, as well as competitive performance on the car category, in terms of both average $\mathcal{P}$ and average $\mathcal{J}$. \begin{figure*}[t] \centering \includegraphics[width=0.497\textwidth]{figseen.pdf} \includegraphics[width=0.497\textwidth]{figunseen.pdf} \caption{Some qualitative comparison results generated by the proposed method, SAAB~\cite{chen2018semantic} and DCOS~\cite{li2018deep} for co-segmenting objects associated with the training categories and unseen categories, respectively.} \label{coseg} \end{figure*} \textbf{Qualitative Results:} Figure \ref{coseg} shows some qualitative results comparing our method with SAAB~\cite{chen2018semantic} and DCOS~\cite{li2018deep}.
These images are chosen from the four datasets and contain co-objects of seen categories (within the $78$ categories of COCO-SEG) and unseen categories (outside the categories of COCO-SEG). For the seen categories shown in Figure \ref{coseg}(a), we can observe that SAAB and DCOS cannot discover the co-objects in the dog group accurately, and two distractors (sheep) have been mis-classified as co-objects. The proposed approach does not suffer from this issue since it uses co-category labels as supervision to learn an effective semantic modulator that captures high-level semantic category information well. Besides, as shown in Figure~\ref{coseg}(a), (b), the proposed approach discovers the whole co-objects of both seen and unseen categories, because its spatial modulator is learned in an unsupervised manner that not only helps to locate the co-object regions of seen categories, but also generalizes well to unseen categories. \renewcommand\arraystretch{1.2} \begin{table}[t] \caption{Ablative experiments of the proposed model on the PASCAL-VOC. The bold numbers indicate the best results. The symbol `$-f$' denotes removing the module $f$.}\smallskip \centering \resizebox{.75\columnwidth}{!}{ \smallskip\begin{tabular}{|l||cc|} \hline \multicolumn{1}{|c||}{PASCAL-VOC} & Avg. $\mathcal{P}$ ($\%$) & Avg. $\mathcal{J}$ ($\%$)\\ \hline $f_{spa}\&f_{sem}\&f_{seg}$ & \textbf{94.9} & \textbf{71} \\ $-$ $f_{spa}$ & 94.5 & 69 \\ $-$ $f_{sem}$ & 85.0 & 38 \\ $-$ ($f_{spa}\&f_{sem}$) & 82.0 & 27 \\ \hline \end{tabular} } \label{fig:ablative} \end{table} \subsection{Ablative Study } \hspace*{1em} To further validate our main contributions, we compare different variants of our model, namely those without the spatial modulator ($-f_{spa}$), the semantic modulator ($-f_{sem}$), and both modulators ($-(f_{spa}\&f_{sem})$), respectively. Table~\ref{fig:ablative} lists the results of the ablative experiments on the PASCAL-VOC.
We can observe that without $f_{spa}$, the average $\mathcal{P}$ score drops from $94.9\%$ to $94.5\%$ while the average $\mathcal{J}$ score drops by $2\%$ from $71\%$ to $69\%$, which verifies the effectiveness of the proposed module $f_{spa}$. Moreover, without $f_{sem}$, the performance suffers a significant loss, with drops of $9.9\%$ and $33\%$ in the average $\mathcal{P}$ and $\mathcal{J}$ scores, respectively, indicating the critical role of the semantic modulator as guidance for learning an effective segmentation network for accurate co-segmentation. Besides, compared to the variant that only removes $f_{sem}$, removing both modulators $f_{spa}$ and $f_{sem}$ further degrades the performance of our model by $3\%$ and $11\%$ in terms of average $\mathcal{P}$ and average $\mathcal{J}$, respectively. These experiments validate that both modulators play a positive role in boosting the performance of our model. \section{Conclusions} \label{sec:conclusions} \hspace*{1em} In this paper, we have presented a spatial-semantic modulated deep network framework for object co-segmentation. Our model is composed of a spatial modulator, a semantic modulator and a segmentation sub-net. The spatial modulator learns a mask to coarsely localize the co-object regions in each image by capturing the correlations of image feature descriptors in an unsupervised manner. The semantic modulator learns a channel importance indicator under the supervision of co-category labels. We have proposed the HSP module to transform the input image features of the semantic modulator for classification. The outputs of the two modulators manipulate the input feature maps of the segmentation sub-net through a simple shift-and-scale operation, adapting it to segment the co-object regions. Both quantitative and qualitative evaluations on four image co-segmentation benchmark datasets have demonstrated the superiority of the proposed method over the state of the art.
\section*{Acknowledgments} This work is supported in part by National Major Project of China for New Generation of AI (No. 2018AAA0100400), in part by the Natural Science Foundation of China under Grant nos. 61876088, 61825601, in part by the Natural Science Foundation of Jiangsu Province under Grant no. BK20170040.
\section{Introduction} QCD at finite baryon density and low temperatures is one of the least understood regions of the QCD phase diagram. The reason is that the baryon chemical potential introduces a sign problem which invalidates standard stochastic methods to evaluate the path integral for the QCD partition function. The source of the problem goes back to the fluctuations of the pions when the chemical potential exceeds half the pion mass. Then the value of the average fermion determinant is strongly suppressed with respect to the average of the magnitude of the fermion determinant (for reviews and additional details see \cite{deForcrand:2010ys,Splittorff:2006vj,Splittorff:2007zh,Splittorff:2007ck,Splittorff:2006fu}). One way to evade this problem might be to use the canonical ensemble \cite{haas,deforcrand,liu,DG}. This approach only requires the QCD partition function at imaginary chemical potential which can be calculated reliably by standard methods \cite{roberge,alford,mario,philip}. However, the extraction of the canonical partition function requires the evaluation of the Fourier transform \begin{eqnarray} Z^q = \frac 1 {2\pi} \int_{-\pi}^\pi d\theta\; e^{-iq\theta} Z(\mu= i\theta T), \end{eqnarray} which in the thermodynamic limit, $q \to \infty$, leads to unmanageable cancellations. In lattice simulations, one can do better though. The fermion determinant at non-zero chemical potential is a polynomial in $\exp[\pm \mu/T] $, \begin{eqnarray} Z(\mu) = \langle \sum_{q=-N}^N D_q \; e^{\mu q/T} \rangle, \end{eqnarray} so that $Z^q =\langle D_q \rangle$. Because this is a finite polynomial the Fourier coefficients can be determined exactly by means of a discrete Fourier transform. The computation of the average of the canonical determinants, however, remains a challenge. In this paper we show that this challenge depends crucially on the value of $q$. For small $q$ the signal to noise ratio is tractable while it becomes exponentially small with increasing $q$. 
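Since the determinant is a finite Laurent polynomial in $e^{\mu/T}$, the exactness of the discrete Fourier transform is easy to demonstrate. The following sketch recovers the coefficients of a toy polynomial standing in for the fermion determinant (the polynomial is arbitrary, chosen only for illustration):

```python
import cmath
import math

def canonical_coefficients(det_at_theta, N):
    """Recover D_q, q = -N..N, from samples of the determinant at
    imaginary chemical potential, det(theta) = sum_q D_q exp(i*q*theta).

    K = 2N + 1 equally spaced points make the discrete Fourier transform
    exact for a Laurent polynomial of degree N in exp(mu/T)."""
    K = 2 * N + 1
    thetas = [2.0 * math.pi * k / K for k in range(K)]
    samples = [det_at_theta(t) for t in thetas]
    # D_q = (1/K) * sum_k det(theta_k) * exp(-i*q*theta_k)
    return {q: sum(s * cmath.exp(-1j * q * t)
                   for s, t in zip(samples, thetas)) / K
            for q in range(-N, N + 1)}

# toy stand-in for the determinant: 2 + exp(i*theta) + 0.5*exp(-2*i*theta)
toy = lambda t: 2 + cmath.exp(1j * t) + 0.5 * cmath.exp(-2j * t)
D = canonical_coefficients(toy, N=3)
```

The recovery is exact up to floating-point roundoff; the hard part in practice is not the Fourier transform but the statistical average of the resulting $D_q$.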
To address the signal to noise problem in the canonical approach we will evaluate the average magnitude of the canonical determinants to one-loop order in chiral perturbation theory and compare it to the average of the canonical determinants. This will give us information on the degree of cancellations that take place in evaluating the canonical partition function. The magnitude is obtained from the absolute value of the canonical determinants. This carries an isospin component which couples to the pions. In contrast, the charge dependence of the average of the canonical determinants themselves is not directly coupled to the pions. This difference leads to an exponentially small (in the charge) signal to noise ratio. We also obtain the full distribution of the absolute value of the canonical determinants and make comparisons with lattice results for the $q$-dependence of the canonical determinants. The problems facing the canonical approach have also been emphasized in \cite{kaplan}. Though the approach there is entirely different, the problems were traced back to the same source, namely that the average value of the baryon correlator is strongly suppressed with respect to the ``noise'' given by the absolute value squared correlator, which at large distances is dominated by the contribution of the pions. We start this paper with a discussion of canonical partition functions corresponding to non-zero isospin chemical potential (section 2). In section 3 we show that the same expressions give the magnitude of the canonical determinants at non-zero baryon number density. The relative size of the cancellations which take place in the evaluation of the canonical partition function is estimated in section 4 by modeling the nucleon contribution in terms of a resonance gas model. In section 5 we compare the $q$-dependence of the canonical partition functions with results from lattice QCD simulations.
The full distribution of the magnitude of the canonical determinants is derived in section 6 and concluding remarks are made in section 7. Additional details are worked out in three appendices. \section{Canonical partition functions for isospin charge} Before we turn to the distribution of the canonical determinants in QCD with non-zero quark charge, it is useful to compute the canonical partition functions at fixed isospin number. \vspace{2mm} In order to derive the canonical partition function with isospin charge we first recall the relation between the grand canonical partition function and the canonical partition function. For simplicity we consider two light flavors of mass $m$. The two flavor QCD partition function at non-zero isospin chemical potential $\mu$ is given by \begin{eqnarray} \label{Z11*} && Z_{1+1^*}(\mu) = \langle{\det}(D+\mu\gamma_0+m) \det(D-\mu\gamma_0+m)\rangle, \end{eqnarray} where $D$ is the Dirac operator and the average is over the Yang-Mills action. This grand canonical partition function can be decomposed in terms of canonical partition functions as \begin{eqnarray} Z_{1+1^*}(\mu)=\sum_{q=-\infty}^\infty e^{q\mu/T}Z_{1+1^*}^q, \end{eqnarray} with \begin{eqnarray} Z_{1+1^*}^q = \frac{1}{2\pi}\int_{-\pi}^\pi d\theta \ e^{-i\theta q} Z_{1+1^*}(\mu/T=i\theta). \label{Ftrans-muI} \end{eqnarray} We evaluate the canonical partition functions to one-loop order in chiral perturbation theory. At this order, the grand canonical partition function in the normal phase is given by \cite{Splittorff:2007ck} \begin{eqnarray} \label{Z11*-normal} && Z_{1+1^*}(\mu) = \langle{\det}(D+\mu\gamma_0+m) \det(D-\mu\gamma_0+m)\rangle \simeq e^{G_0|_{V=\infty}+g_0(\mu)}, \end{eqnarray} where the $\mu$ dependence of the free energy resides entirely in the part (see \cite{STV}) \begin{eqnarray} g_0(\mu) =\frac{Vm_\pi^2T^2}{\pi^2}\sum_{n=1}^\infty\frac{K_2(\frac{m_\pi n}{T})}{n^2}\cosh(\frac{2\mu n}{T}).
\end{eqnarray} The $\mu$ independent part, $G_0|_{V=\infty}$, will be suppressed throughout (the final results will be ratios where this contribution drops out). The corresponding canonical partition functions are given by the coefficients of $\exp(\mu q/T)$ in the expansion of the one-loop result for the $\mu$-dependent part of the grand canonical partition function in powers of $\exp(\mu/T)$ \begin{eqnarray} Z_{1+1^*}(\mu)&=& Z_{1+1^*}(\mu=0)\; \exp\left({\frac{Vm_\pi^2T^2}{\pi^2}\sum_{n=1}^\infty\frac{K_2(\frac{m_\pi n}{T})}{n^2} (\cosh(2\mu n/T)-1)}\right)\nn\\ &=& \sum_q e^{q\mu/T}Z^q_{1+1^*}. \label{zex} \end{eqnarray} Using the asymptotic form of the Bessel function $K_2$ (which corresponds to the non-relativistic limit) we can make an estimate for the parameter domain where the terms with $n > 1$ can be ignored. From the condition that the correction to the free energy density due to the $n=2$ term should be much less than the free energy density from the $n=1$ contribution we obtain \begin{eqnarray} \frac {q (2mT)^{3/2}}{V_3}\ll \log\frac {V_3}{q (2mT)^{3/2}}. \end{eqnarray} Therefore for a dilute pion gas, \begin{eqnarray} \frac q{V_3} \ll (2mT)^{3/2}, \label{dilute} \end{eqnarray} we can restrict ourselves to the $n=1$ term. This results in the canonical partition function \begin{eqnarray} \frac{Z_{1+1^*}^q}{Z_{1+1^*}^{q=0}} & = & \frac{ e^{-\omega}\int_{-\pi}^\pi \frac{d\theta}{2\pi} \ e^{-i\theta q} e^{\omega\cos(2\theta)}}{ e^{-\omega}\int_{-\pi}^\pi \frac{d\theta}{2\pi} \ e^{\omega\cos(2\theta)}} \end{eqnarray} with $\omega$ defined by \begin{eqnarray} \omega = \frac {V_3 m_\pi^2T }{\pi^2}K_2(m_\pi/T). \end{eqnarray} For odd values of $q$ the canonical partition functions vanish. This is natural since the pions (in the convention used here) carry two units of isospin charge. 
For even $q$ the canonical integral is given by the modified Bessel function resulting in the ratio \begin{eqnarray} \frac{Z_{1+1^*}^{q} }{Z_{1+1^*}^{q=0} }& = & \frac{I_{q/2}(\omega)}{I_0(\omega)}, \qquad q \ {\rm even}. \label{I-ratio} \end{eqnarray} Thus we have found that the distribution of the canonical partition functions over $q$ is described by the modified Bessel functions \footnote{This so-called Skellam distribution \cite{skellam} is the distribution of the difference of two independent stochastic variables distributed according to Poisson distributions. In our case the Poisson distribution for the pions and anti-pions is given by $\omega^k\exp(\pm 2\mu k/T)/k!$ as follows by expanding the exponent in (\ref{Z11*-normal}) for $n=1$.}. For $m_\pi/T\gg1$ we can also replace $K_2$ by its asymptotic form resulting in \begin{eqnarray} \frac{Z_{1+1^*}^{q} }{Z_{1+1^*}^{q=0} } & = &\frac{I_{q/2}\big(\frac{V_3 m_\pi^{3/2}T^{3/2}} {\sqrt{2\pi^3}}e^{-m_\pi/T}\big) } {I_{0}\big(\frac{V_3 m_\pi^{3/2}T^{3/2}}{\sqrt{2\pi^3}}e^{-m_\pi/T}\big)} , \qquad q \ {\rm even}. \end{eqnarray} The chemical potential corresponding to the canonical partition function is worked out in \ref{app:mufromZq}. Finally, it is instructive to sum over $q$ to obtain the grand canonical partition function \begin{eqnarray} Z_{1+1^*}(\mu) & = & \sum_{q=-\infty}^\infty e^{q\mu/T}Z_{1+1^*}^q \\ & = & Z_{1+1^*}(\mu=0) \sum_{j=-\infty}^\infty e^{2j\mu/T-\omega} I_{j}\Big(\frac{Vm_\pi^{3/2}T^{5/2}}{\sqrt{2}\pi^{3/2}}e^{-m_\pi/T}\Big) \nn \\ & = & Z_{1+1^*}(\mu=0) \; \exp\left(\frac{Vm_\pi^{3/2}T^{5/2}}{2\sqrt{2}\pi^{3/2}}(e^{(2\mu-m_\pi)/T}+e^{(-2\mu-m_\pi)/T}-2 e^{-m_\pi/T} ) \right), \nn \end{eqnarray} which, as it should, brings us back to our starting point. As we shall see below, the computation of the distribution of the canonical {\sl determinants} at non-zero quark charge has many analogies to the above computation of the canonical {\sl partition functions} as a function of isospin charge.
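The Bessel-function form of the ratio (\ref{I-ratio}) is straightforward to verify numerically by evaluating the Fourier integral of $e^{\omega\cos(2\theta)}$ directly; the sketch below implements both sides from scratch (the value of $\omega$ used in any check is arbitrary):

```python
import cmath
import math

def bessel_I(m, x, terms=40):
    """Modified Bessel function I_m(x) for integer m >= 0, from its series."""
    return sum((x / 2.0) ** (2 * k + m)
               / (math.factorial(k) * math.factorial(k + m))
               for k in range(terms))

def zq_ratio(q, omega, K=4096):
    """Numerical Fourier coefficient of exp(omega*cos(2*theta)) at charge q,
    normalised to q = 0, i.e. the left-hand side of Eq. (I-ratio)."""
    thetas = [2.0 * math.pi * k / K for k in range(K)]
    w = [math.exp(omega * math.cos(2.0 * t)) for t in thetas]
    num = sum(wi * cmath.exp(-1j * q * t) for wi, t in zip(w, thetas)) / K
    den = sum(w) / K
    return (num / den).real
```

For even $q$ the numerical value reproduces $I_{q/2}(\omega)/I_0(\omega)$, while for odd $q$ it vanishes, as stated above.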
Let us therefore briefly discuss the overall structure of these computations: For the Fourier transform (\ref{Ftrans-muI}) we need the partition function at non-zero imaginary isospin chemical potential. For real isospin chemical potential less than $m_\pi/2$ the partition function is in the normal phase and the expression (\ref{Z11*-normal}) is the 1-loop result from chiral perturbation theory for this phase. This leading order expression for small real isospin chemical potential is also the leading contribution at imaginary isospin chemical potential. For a real isospin chemical potential larger than $m_\pi/2$ the partition function is in a pion condensed phase. At imaginary isospin chemical potential this implies that there is a sub-leading saddle point, which is not taken into account above. The analytic form of this contribution is worked out using mean field chiral perturbation theory in \ref{condensed}. \section{The canonical determinants at non-zero quark charge} Let us now turn to quark chemical potential. Since the pions have zero quark charge we obviously have \begin{eqnarray} Z^{q\neq0}=0 \end{eqnarray} when evaluated in CPT. However, the fermion determinant at non-zero chemical potential can be decomposed into canonical determinants before averaging over the gauge fields \begin{eqnarray} \det(D+m+\mu\gamma_0) = \sum_q e^{\mu q/T} D_q, \end{eqnarray} with \begin{eqnarray} D_q = \frac 1{2\pi } \int_{-\pi}^\pi d\theta e^{-i\theta q} \det(D+m+i\theta T\gamma_0). \end{eqnarray} Note that $D_{q=0}$ is real. Although pions do not have baryon charge, they contribute to the magnitude of $D_q$, \begin{eqnarray} \langle |D_q|^2\rangle &=& \int \frac{d\theta_1 \ d\theta_2}{(2\pi)^2} \ e^{-i(\theta_1-\theta_2)q} \; \langle \det(D+m+i\theta_1T\gamma_0)\det(-D+m-i\theta_2T\gamma_0)\rangle\nn\\ &=& \int \frac{d\theta_1 \ d\theta_2}{(2\pi)^2} \ e^{-i(\theta_1-\theta_2)q} \; \langle \det(D+m+i\theta_1T\gamma_0)\det(D+m+i\theta_2T\gamma_0)\rangle. 
\label{absdq2} \end{eqnarray} The reason is that \begin{eqnarray} \langle \det(D+m+i\theta_1T\gamma_0)\det(D+m+i\theta_2T\gamma_0)\rangle \end{eqnarray} is the partition function at non-zero imaginary quark, $i(\theta_1+\theta_2)T$, {\sl and} isospin, $i(\theta_1-\theta_2)T$, chemical potential. The double Fourier transform in (\ref{absdq2}) singles out the contribution with isospin charge $q$ and zero baryon charge. To one-loop order in chiral perturbation theory it is given by \begin{eqnarray} &&\frac{\langle \det(D+m+i\theta_1T\gamma_0)\det(D+m+i\theta_2T\gamma_0)\rangle} {\langle \det(D+m)^2\rangle}\nn \\ & = & \frac{ \exp\Big(\frac{Vm_\pi^2T^2}{\pi^2}\sum_{n=1}^\infty\frac{K_2(\frac{m_\pi n}{T})}{n^2} \cos((\theta_1-\theta_2) n)\Big)}{ \exp\Big(\frac{Vm_\pi^2T^2}{\pi^2}\sum_{n=1}^\infty\frac{K_2(\frac{m_\pi n}{T})}{n^2}\Big)}. \label{one-loop} \end{eqnarray} For $m_\pi/T\gg 1$ when the inequality (\ref{dilute}) is satisfied, the average $\langle |D_q|^2\rangle$ normalized to the $q=0$ expression simplifies to \begin{eqnarray} \frac{\langle |D_q|^2\rangle}{\langle |D_{q=0}|^2\rangle} & = & \frac{\int d\theta_- \ e^{-iq\theta_-} e^{\frac{Vm_\pi^2T^2}{\pi^2}K_2(\frac{m_\pi}{T})\cos(\theta_-)} } {\int d\theta_- \ e^{\frac{Vm_\pi^2T^2}{\pi^2}K_2(\frac{m_\pi}{T})\cos(\theta_-)} } \nn \\ & = &\frac{I_{q}(\omega)}{I_{q=0}(\omega)}. \label{Iq-dist} \end{eqnarray} This main result is compared to lattice data in section \ref{sec:lattice}. In order to address the cancellations in the average of the canonical determinant we now evaluate also $\langle D_q^2 \rangle$. This computation follows the same lines as above and instead of (\ref{Iq-dist}) we obtain \begin{eqnarray} \langle D_q^2\rangle &=& \int \frac{d\theta_1 \ d\theta_2}{(2\pi)^2} \ e^{-i(\theta_1+\theta_2)q} \; \langle \det(D+m+i\theta_1T\gamma_0)\det(D+m+i\theta_2T\gamma_0)\rangle. 
\label{dq2} \end{eqnarray} Using the one-loop result we obtain for $m_\pi/T\gg1$, \begin{eqnarray} \frac{\langle D_q^2\rangle}{ \langle D_{q=0}^2\rangle } & = & \frac{\int d\theta_+ d\theta_- \ e^{-iq\theta_+} e^{\frac{Vm_\pi^2T^2}{\pi^2}K_2(\frac{m_\pi}{T})\cos(\theta_-)}}{\int d\theta_+ d\theta_- \ e^{\frac{Vm_\pi^2T^2}{\pi^2}K_2(\frac{m_\pi}{T})\cos(\theta_-)}} \nn \\ & = & \delta_{q0}, \label{Iq-distb} \end{eqnarray} as expected, since the double Fourier transform in (\ref{dq2}) singles out the contribution with quark charge $q$ and pions have zero baryon number. In order to understand better how $\langle D_q^2\rangle=0$ for $q\neq0$ is formed, first note that we have $\langle {D_q^*}^2\rangle = \langle D_q^2\rangle$, which can be rewritten as \begin{eqnarray} \langle {\rm Re}[D_q] {\rm Im}[D_q] \rangle =0. \end{eqnarray} Now, let us express the expectation value of $D_q^2$ in terms of the expectation value of its real and imaginary parts \begin{eqnarray} \langle D_q^2 \rangle = \frac 14\langle (D_q+D_q^* )^2\rangle + \frac 14\langle (D_q-D_q^* )^2\rangle. \end{eqnarray} Next note that both the square of the real part and the square of the imaginary part contain $|D_q|^2$: \begin{eqnarray} \left\langle(D_q^*\pm D_q)^2\right\rangle = 2\left\langle D_q^2 \pm |D_q|^2\right\rangle. \end{eqnarray} Therefore, the canonical determinants with non-zero $q$ have equal variance in the real and the imaginary direction of the complex plane (when evaluated within chiral perturbation theory). These contributions cancel in the evaluation of $\left\langle D_q^2 \right\rangle$. We can also calculate the average value of $|D_q|^2$ from the mean field expression for the free energy in the condensed phase. The calculation proceeds along the steps of \ref{condensed} and one obtains the result \begin{eqnarray} \frac{\langle| D_q|^2 \rangle}{\langle| D_{q=0}|^2 \rangle} = e^{-q^2/8 V_4F^2 T^2 -|q|m_\pi/T} \; . 
\end{eqnarray} \section{Signal to noise ratio} In the previous section we have seen that to one-loop order in chiral perturbation theory $\langle D_q^2 \rangle = 0$ for $q \ne 0$. The reason is that in this limit the partition function does not contain any baryons. The effect of nucleons can be taken into account schematically by means of the Hadron Resonance Gas model for $T\ll m_N$, resulting in the ratio of the two-flavor partition functions \begin{eqnarray} \frac{Z_q(m_N)}{Z_{q=0}(m_N)} \equiv \frac {\langle D_q^2\rangle}{\langle D_{q=0}^2\rangle} = \frac{I_{q/3}(\omega_N)}{I_{q=0}(\omega_N)}, \end{eqnarray} where \begin{eqnarray} \omega_N= \frac {Vm_N^2T^2}{\pi^2}K_2(m_N/T), \end{eqnarray} with $m_N$ the nucleon mass \footnote{A similar form of the $q$ dependence of the canonical partition function was found in \cite{BraunMunzinger,Morita}. In \cite{Shinsuke} a Gaussian form was obtained.}. Therefore the ``signal to noise'' ratio of the average canonical determinants is given by (note that $D_{q=0}$ is real and hence that $D_{q=0}^2=|D_{q=0}|^2$) \begin{eqnarray} \frac{\langle D_q^2\rangle}{\langle |D_q|^2\rangle} = \frac{I_{q/3}(\omega_N)}{I_{q/2}(\omega)}. \end{eqnarray} For large $q$ the ratio $I_{q/3}(Vx)/I_{q/2}(Vy)$ goes as $(x^{1/3}/y^{1/2})^q$. Using this we find in the dilute limit \begin{eqnarray} \frac{\langle D_q^2\rangle}{\langle |D_q|^2\rangle} \simeq e^{-q(m_N/3-m_\pi/2)/T} . \end{eqnarray} Note that the factors of $V$ cancel. Since $m_N/3>m_\pi/2$ we conclude that at fixed baryon density the signal to noise ratio becomes exponentially small in the thermodynamic limit. \section{Lattice Results for Canonical Determinants} \label{sec:lattice} The distribution of the canonical determinants has been measured in lattice QCD in \cite{DG,BDGLL,AW}. In this section we make a first qualitative comparison of the analytical prediction for the $q$ dependence of the average magnitude of the canonical determinants, Eq.~(\ref{Iq-dist}), to lattice data.
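As an aside, before turning to the data, the size of the signal to noise suppression derived in the previous section can be illustrated numerically. In the sketch below the temperature, masses, and spatial volume (in units of $T$) are purely illustrative choices, not values taken from any simulation:

```python
import math

def bessel_I(nu, x, terms=60):
    """Modified Bessel function I_nu(x) from its power series (nu >= 0)."""
    return sum((x / 2.0) ** (2 * k + nu)
               / (math.factorial(k) * math.gamma(k + nu + 1.0))
               for k in range(terms))

def omega_dilute(m, T, V3):
    """Dilute-limit weight V3 m^{3/2} T^{3/2} exp(-m/T) / sqrt(2 pi^3)."""
    return V3 * m ** 1.5 * T ** 1.5 * math.exp(-m / T) / math.sqrt(2.0 * math.pi ** 3)

# illustrative numbers in units of T (assumed, not fitted)
T, m_pi, m_N, V3 = 1.0, 3.0, 7.0, 50.0
omega_pi = omega_dilute(m_pi, T, V3)
omega_N = omega_dilute(m_N, T, V3)

def signal_to_noise(q):
    """<D_q^2>/<|D_q|^2> = I_{q/3}(omega_N) / I_{q/2}(omega_pi)."""
    return bessel_I(q / 3.0, omega_N) / bessel_I(q / 2.0, omega_pi)
```

With these numbers the ratio is already well below one at small $q$ and falls rapidly with increasing charge, in line with the exponential estimate above.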
The results we present are for ensembles of $N_f = 2$ Wilson fermions on $12^3\times 6$ lattices generated with the MILC code \cite{MILC}. A hopping parameter of $\kappa = 0.162$ is used and we show results for $\beta = 5.025$ and $\beta = 5.325$, which correspond to temperatures of $T = 100$ MeV and $T = 140$ MeV. The lattice spacing was determined from the Wilson flow and the pion masses are $m_\pi \sim 900$ MeV. The errors we show are statistical errors determined with the jackknife method. \begin{center} \begin{figure}[t!] \includegraphics[width=16cm,angle=0]{fit_12x6_k0162_b5025_T100MeV_reduced_fit_range.eps} \caption{\label{fig:lattice1} {\bf Left:} Lattice results for $\langle|D_q|^2\rangle/\langle |D_0|^2\rangle$ (points) at $T=100$ MeV together with a fit of $I_q(\omega)/I_0(\omega)$ (solid red curve) as a function of $q$. {\bf Right:} Same data after taking the logarithm, i.e.~$\log\left(\langle|D_q|^2\rangle/\langle |D_0|^2\rangle\right)$ as a function of $q$. The fit includes the range $-6\leq q\leq6$ and the fitted value is $\omega = 0.085$ (reduced $\chi^2 \simeq 0.4$). The $q$ independence of the data points for $|q|\geq13$ is due to insufficient numerical precision.} \end{figure} \end{center} \begin{center} \begin{figure}[t!] \includegraphics[width=16cm,angle=0]{fit_12x6_k0162_b5325_T140MeV_reduced_fit_range.eps} \caption{\label{fig:lattice2} As in figure \ref{fig:lattice1} but now for $T=140$ MeV: Lattice results for $\langle|D_q|^2\rangle/\langle |D_0|^2\rangle$ ({\bf left}) and for $\log(\langle|D_q|^2\rangle/\langle |D_0|^2\rangle)$ ({\bf right}).
Again the fit is based on the lower range of $q$, here $-9\leq q\leq9$ ($\omega=0.628$, reduced $\chi^2 \simeq 3.6$), where the analytical expression is expected to hold best.} \end{figure} \end{center} \vspace{-1cm} We have computed $\langle |D_q|^2\rangle/\langle |D_0|^2\rangle$ on these lattices and have used the argument, $\omega$, of the Bessel $I_q$ functions in Eq.~(\ref{Iq-dist}) as a fitting parameter. In Fig.~\ref{fig:lattice1} we show the results for a temperature of $T=100$ MeV. The data for $|q|\geq13$ are not to be considered (the accuracy of the Fourier transform used does not allow us to compute canonical determinants for $|q|\geq13$). We observe that the fit works well over 20 orders of magnitude. We have used the data in the range $|q|\leq 6$ for the fit, which returns the value $\omega = 0.085$ with a reduced $\chi^2 \simeq 0.4$. Despite the higher temperature, the fit for $T=140$ MeV shown in Fig.~\ref{fig:lattice2} is almost as good. The fitted value of $\omega$ in this case is $\omega=0.628$ with a reduced $\chi^2$ of $3.6$. For larger values of $q$ we find significant deviations between the lattice results and the analytical fits. This could be due to the larger $n$ contributions in the ratio of the canonical determinants \begin{eqnarray} \frac{\langle |D_q|^2\rangle}{\langle |D_{q=0}|^2\rangle} & = & \frac{\int \frac{d\theta_-}{2\pi} \ e^{-iq\theta_-} e^{\frac{Vm_\pi^2T^2}{\pi^2}\sum_{n=1}^\infty \frac{K_2(\frac{m_\pi n}{T})}{n^2}\cos(n\theta_-)}}{\int \frac{d\theta_-}{2\pi} \ e^{\frac{Vm_\pi^2T^2}{\pi^2}\sum_{n=1}^\infty \frac{K_2(\frac{m_\pi n}{T})}{n^2}\cos(n\theta_-)}}. \end{eqnarray} Numerically it is straightforward to keep more terms in this sum. In Fig.~\ref{fig:n2} we compare the $I_q/I_0$ distribution (obtained keeping only the $n=1$ term) to the distribution we get keeping the $n\le 2$ terms and all terms. As expected, the higher $n$ terms affect the larger $q$ values more.
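The truncated ratio above can be evaluated directly by quadrature; a sketch in Python (with \texttt{scipy}; the parameter values are illustrative and do not correspond to the lattice ensembles) is as follows. For $n_{\max}=1$ the quadrature must reproduce $I_q(\omega)/I_0(\omega)$, which serves as a consistency check:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import iv, kn

def ratio(q, m_over_T, prefac, nmax):
    """<|D_q|^2>/<|D_0|^2> from the theta_- integral, sum truncated at nmax.
    prefac stands for V*m_pi^2*T^2/pi^2 (illustrative value below)."""
    def weight(theta):
        s = sum(prefac * kn(2, n * m_over_T) / n**2 * np.cos(n * theta)
                for n in range(1, nmax + 1))
        return np.exp(s)
    # the imaginary part vanishes by symmetry, so cos(q theta) suffices
    num = quad(lambda t: np.cos(q * t) * weight(t), -np.pi, np.pi)[0]
    den = quad(weight, -np.pi, np.pi)[0]
    return num / den

m_over_T, prefac = 6.0, 2000.0    # illustrative values
omega = prefac * kn(2, m_over_T)  # coefficient of the n = 1 term
# n_max = 1 must reduce to the Bessel-function ratio I_q(omega)/I_0(omega):
print(ratio(2, m_over_T, prefac, 1), iv(2, omega) / iv(0, omega))  # agree
```

Increasing `nmax` then gives the curves compared in Fig.~\ref{fig:n2}.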
The effect is of the same magnitude as the deviation of the lattice data from the analytical prediction but the sign is opposite. Because of the rather large pion mass in the present simulation, further lattice studies are required in order to quantify the test of the analytic expression. \begin{center} \begin{figure}[t!] \includegraphics[width=10cm,angle=0]{canonical-n.eps} \caption{\label{fig:n2} The curves show $\log[\langle |D_q|^2 \rangle/ \langle |D_{q=0}|^2 \rangle]$ for $V_3 m_\pi^3= 2000$ and $T/m_\pi= 1/6$ (i.e. $ \omega =0.56/\pi^2$), for $n=1$ (blue), $n\le 2$ (red) and with all values of $n$ included (black). } \end{figure} \end{center} \section{Full Distribution of the Magnitude of the Canonical Determinants} In this section we will evaluate the average of all moments of $|D_q|^2$ in the limit of a dilute pion gas and obtain an analytical expression for the density $\rho_q(x)$. The probability density of the magnitude of the determinants is given by \begin{eqnarray} \rho_q(x)& = &\langle \delta(|D_q|^2 -x) \rangle \nn\\ &=& \Big\langle \frac 1{2\pi}\int_{-\infty}^\infty ds e^{-is(|D_q|^2 -x)}\Big\rangle \nn\\ &=& \frac 1{2\pi}\int_{-\infty}^\infty ds \sum_{p=0}^\infty\frac{\langle (-is|D_q|^2)^p \rangle}{p!} e^{ isx}. \label{dens} \end{eqnarray} Let us first consider the $p$-th moment of the absolute value squared of the fermion determinant to one loop order in chiral perturbation theory. The partition function is the same as that of a theory with $2p$ flavors and, to one loop order, each flavor contributes \cite{GL} \begin{eqnarray} -\frac 12 g_0(m_\pi^2,T,L) \end{eqnarray} to the free energy. For the $2p$-th moment we thus find for vanishing isospin chemical potential \begin{eqnarray} \langle {\det}^{2p}(D+m)\rangle = C e^{\frac 12 (4p^2 -1) g_0(m_\pi,T)}, \end{eqnarray} which are the moments of a log-normal distribution.
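Indeed, up to the $p$-independent normalization $e^{-g_0/2}$, these moments are of the form $e^{2p^2 g_0}=\langle e^{2pG}\rangle$ for a Gaussian variable $G$ of variance $g_0$, i.e., the moments of the log-normal variable $e^{2G}$. A quick Monte-Carlo sketch of this identity (in Python; the value of $g_0$ is an arbitrary illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
g0 = 0.2                       # illustrative value of g_0(m_pi, T)
G = rng.normal(0.0, np.sqrt(g0), 2_000_000)

for p in (1, 2):
    mc = np.mean(np.exp(2 * p * G))     # Monte-Carlo estimate of <exp(2 p G)>
    exact = np.exp(2 * p**2 * g0)       # Gaussian moment formula
    print(p, mc, exact)                 # mc agrees with exact to MC accuracy
```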
For convenience we use in this section a slightly different normalization of $D_q$ which includes the contribution of the neutral pions and denote the canonical determinants by $\tilde D_q$. In this normalization we obtain the moments \begin{eqnarray} \langle |\tilde D_q|^{2p}\rangle &=& e^{(p-\frac 12)\omega}\int \prod_{k=1}^p \frac{d\theta_k \ d\phi_k}{(2\pi)^2} \\ &&\times \prod_{k=1}^pe^{-i(\theta_k-\phi_k)q} e^{\frac{Vm_\pi^2T^2}{\pi^2}K_2(\frac{m_\pi}{T})[\sum_{k<l} \cos(\theta_k-\theta_l)+\cos(\phi_k-\phi_l) +\sum_{k,l}\cos(\theta_k-\phi_l)]}, \nn \end{eqnarray} where the first exponent is the contribution due to the $2p-1$ neutral pions. The second factor in the exponent can be rewritten as \begin{eqnarray} &&\sum_{k<l}[ \cos(\theta_k-\theta_l) + \cos(\phi_k-\phi_l)] +\sum_{k,l} \cos (\theta_k-\phi_l) \nn\\ && = \frac 12( \sum_{k=1}^p\cos \theta_k +\sum_{k=1}^p \cos\phi_k)^2 +\frac 12( \sum_{k=1}^p\sin \theta_k +\sum_{k=1}^p \sin\phi_k)^2 -p. \end{eqnarray} The squares can be linearized by a Hubbard-Stratonovich transformation. This results in \begin{eqnarray} \langle |\tilde D_q |^{2p}\rangle &=&\int \frac{d\alpha d\beta}{2\pi} e^{-\alpha^2/2-\beta^2/2-\omega/2} \int \prod_{k=1}^{p} \frac{d\phi_k d\theta_k}{(2\pi)^2} e^{iq(\phi_1+\cdots +\phi_p)} e^{-iq(\theta_1+\cdots +\theta_p)} \nn \\ &&\times e^{ -\omega^{1/2}\alpha(\sum_{k=1}^p\cos \theta_k +\sum_{k=1}^p \cos\phi_k) -\omega^{1/2}\beta(\sum_{k=1}^p\sin \theta_k +\sum_{k=1}^p \sin\phi_k)}. \label{mom2pe2} \end{eqnarray} The sines and cosines can be added as \begin{eqnarray} \alpha\cos \phi +\beta\sin\phi =\sqrt{\alpha^2+\beta^2} \cos(\phi+\phi_0), \end{eqnarray} where \begin{eqnarray} \cos\phi_0 = \frac \alpha{\sqrt{\alpha^2+\beta^2}}, \end{eqnarray} and the same for the $\theta$-variable.
After shifting $\phi$ and $\theta$ by $-\phi_0$ we obtain \begin{eqnarray} \langle |\tilde D_q |^{2p}\rangle &=&\int \frac{d\alpha d\beta}{2\pi} e^{-\alpha^2/2-\beta^2/2-\omega/2} \nn\\ &&\times \left [ \int \frac{d\phi}{2\pi} e^{iq \phi -\omega^{1/2} \sqrt{\alpha^2+\beta^2}\cos \phi} \right ]^p \left [ \int \frac{d\theta}{2\pi} e^{-iq \theta -\omega^{1/2} \sqrt{\alpha^2+\beta^2}\cos \theta }\right ]^p \nn \\ &=& e^{-\omega/2} \int_0^\infty r dr e^{-r^2/2} \left [ \int \frac{d\phi}{2\pi} e^{iq \phi -\omega^{1/2} r\cos \phi} \right ]^{2p} \nn\\ &=& e^{-\omega/2} \int_0^\infty r dr e^{-r^2/2} [I_q(\omega^{1/2} r)]^{2p}. \label{mom2pe4} \end{eqnarray} For $p=1$ the integral can be evaluated analytically \begin{eqnarray} e^{-\omega/2} \int_0^\infty r dr e^{-r^2/2} [I_q(\omega^{1/2} r)]^{2} =e^{\omega/2}I_q(\omega). \label{mom2pe1} \end{eqnarray} \begin{figure}[t!] \includegraphics[width=8.0cm]{rhodq.eps} \includegraphics[width=8.0cm]{ndq.eps} \caption{The distribution (left) and the cumulative distribution (right) of the absolute value squared of the canonical determinants $\tilde D_q$. Results are given for $q=0$, $q=1$ and $q=2$.} \end{figure} The normalized moments are given by \begin{eqnarray} \frac{\langle |\tilde D_q |^{2p}\rangle }{\langle |\tilde D_q |^{0}\rangle } &=& \int_0^\infty r dr e^{-r^2/2} [I_q(\omega^{1/2} r)]^{2p} . \label{mom2pe6} \end{eqnarray} The sum over $p$ in Eq. (\ref{dens}) can be evaluated analytically after inserting the expression (\ref{mom2pe6}) for the moments. This results in the average density \begin{eqnarray} \rho_q(x) &=& \frac 1{2\pi}\int_{-\infty}^\infty ds \int_0^\infty rdr e^{-r^2/2} e^{-is(I_q(\omega^{1/2} r))^2 } e^{ isx} \nn\\ &=& \int_0^\infty rdr e^{-r^2/2} \delta \big((I_q(\omega^{1/2} r))^2-x\big) \nn\\ &=& \left . \frac{y e^{-y^2/2\omega}} {2\omega I_q'(y) I_q(y) } \right |_{I_q(y)=\sqrt x}.
\end{eqnarray} This can also be written as \begin{eqnarray} P\Big [[I_q^2]^{-1}[|\tilde D_q|^2] \in [x, x+dx] \Big ] = \frac x\omega e^{-x^2/2\omega}dx, \end{eqnarray} with $[I_q^2]^{-1}$ the inverse of the function $ x \to I_q^2(x)$. This is a variant of a log-normal distribution, which can be seen by keeping only the exponential factor in the asymptotic expansion of the Bessel functions. For large determinants we thus find \begin{eqnarray} P\Big [\log|\tilde D_q| \in [x, x+dx] \Big ] \sim \frac x\omega e^{-x^2/2\omega}dx. \end{eqnarray} Another useful quantity is the cumulative distribution of the canonical determinants. It is given by \begin{eqnarray} N_q(y) &=& \int_ 0^y \rho_q(x) dx \nn \\ &=&\frac 1\omega \int_0^{\bar r(y)} rdr e^{-r^2/2\omega} \nn \\ &=& 1- e^{-\bar r^2(y)/2\omega} \end{eqnarray} with $ I_q(\bar r(y)) = \sqrt y$. For large $q$ we have that $I_q(x)\sim (x/2)^q/q!$ so that $\bar r \sim 2 qy^{1/2q}/e $, resulting in the cumulative distribution \begin{eqnarray} N_q(y) \sim 1- e^{-2q^2y^{1/q}/\omega e^2}. \end{eqnarray} \section{Conclusions} \label{sec:conc} For a fixed non-zero quark charge the canonical determinants in QCD take complex values. The average of these canonical determinants is the corresponding canonical partition function, which is a real and positive number. To address the cancellations which take place in forming the canonical partition functions we have computed the distribution of the canonical determinants by means of chiral perturbation theory. In the limit of a dilute pion gas, the result simplifies to an expression in terms of modified Bessel functions. There are strong cancellations between the real and imaginary parts of the canonical determinants which lead to an exponential suppression, $\exp(-q(m_N/3-m_\pi/2)/T)$, with respect to the average magnitude. The magnitude is strongly fluctuating as well, with a distribution that in the low-temperature limit is given by a variant of the log-normal distribution.
Moreover, we have evaluated the canonical partition functions at non-zero isospin density. Our results were obtained by means of chiral perturbation theory in the dilute limit, and, as demonstrated, the analytical form agrees qualitatively with lattice QCD results over 20 orders of magnitude. The only caveat is that we used the argument of the Bessel functions as a fitting parameter and that the value of the pion mass is outside the domain where chiral perturbation theory can be applied reliably. Consistent with the evaluation of the absolute value squared of the determinants in the dilute limit, the agreement is better for small $q$. Further lattice studies with lighter quarks are needed for a full quantitative test of the analytic results. For such a test additional terms from the low temperature expansion should be included for the larger $q$ values. We have also determined the contribution to the canonical determinants from the mean field saddle point corresponding to the Bose condensed phase. We find an exponential suppression for small $q$ which turns into a Gaussian tail for large $q$. This effect becomes more relevant for larger values of $q$. It would be most interesting to determine the distribution of the canonical determinants in improved lattice simulations with light quarks. This would allow for a quantitative test of the analytical predictions and lead to a better understanding of the large $q$ behavior of the canonical determinants. \vspace{3mm} \noindent {\bf Acknowledgments:} This work was supported by U.S. DOE Grant No. DE-FG-88ER40388 (JV), the Austrian Science Fund FWF Grant Nr.~I 1452-N27 (CG), FWF DK W1203 ``Hadrons in Vacuum, Nuclei and Stars'' (H-PS), the U.S.~National Science Foundation CAREER grant PHY-1151648 and the {\sl Sapere Aude} program of the Danish Council for Independent Research (KS).
\renewcommand{\thesection}{Appendix \Alph{section}} \setcounter{section}{0} \section{Isospin chemical potential from the canonical partition functions} \label{app:mufromZq} The isospin chemical potential corresponding to the canonical partition function (\ref{I-ratio}) is given by \begin{eqnarray} \mu_I(T,n_I) &=&\frac 12( -T \log Z_{q+2} +T \log Z_q)\nn\\ &=&-\frac T2 \log \frac{I_{(q+2)/2}(\omega)}{I_{q/2}(\omega)}. \end{eqnarray} In the thermodynamic limit the chemical potential at fixed isospin density is given by \begin{eqnarray} \mu_I(T, n_I)&=& -\frac T 2\lim_{q\to \infty } \log \frac{I_{(q+2)/2}(q \tilde \omega )}{I_{q/2}(q \tilde \omega)} \end{eqnarray} with \begin{eqnarray} \tilde \omega = \frac \omega q =\frac { m_\pi^2T }{n_I \pi^2}K_2(m_\pi/T), \end{eqnarray} and $n_I$ is the isospin charge density \begin{eqnarray} n_I = \frac {q}{V_3}. \end{eqnarray} For large $q$ we can use the uniform approximation for modified Bessel functions \begin{eqnarray} I_q(q z) \sim \frac 1{\sqrt{2\pi q }}\frac {e^{q\eta}}{(1+z^2)^{1/4}} \end{eqnarray} with \begin{eqnarray} \eta = \sqrt{1+z^2} +\log (z/2) -\log(\frac 12+ \frac 12 \sqrt{1+z^2}). \end{eqnarray} In the low temperature limit, the argument of the modified Bessel functions is small so that \begin{eqnarray} \eta \sim 1 +\frac 14 z^2 +\log(z/2), \end{eqnarray} and \begin{eqnarray} I_\nu(\nu z) \sim \frac {(z/2)^\nu}{\sqrt{2\pi\nu}} e^{\nu(1+\frac 14 z^2) -\frac 14 z^2}. \end{eqnarray} For large $q$ and low temperatures we thus have \begin{eqnarray} I_{q/2}(q\tilde \omega) &\sim& \frac {\tilde \omega^{q/2}}{\sqrt{\pi q}} e^{q/2 +\tilde \omega^2(q/2 -1)},\nn \\ I_{(q+2)/2}(q\tilde \omega) &\sim&\frac {(\tilde \omega q/(q+2))^{(q+2)/2}}{\sqrt{\pi (q+2)}} e^{(q+2)/2 +\tilde \omega^2 q^3/2(q+1)^2 }. \end{eqnarray} In the large $q$ limit $I_{(q+2)/2}(q\tilde \omega)$ simplifies to \begin{eqnarray} I_{(q+2)/2}(q\tilde \omega) &\sim&\frac {\tilde \omega^{q/2}}{\sqrt{\pi q}} e^{(q+2)/2 +\tilde \omega^2q /2 }. 
\end{eqnarray} This results in the ratio \begin{eqnarray} \frac{I_{(q+2)/2}(q\tilde \omega)}{I_{q/2}(q\tilde \omega)} \sim \tilde \omega e^{\tilde \omega^2/2}. \end{eqnarray} For $\tilde \omega \ll 1$ we obtain \begin{eqnarray} \frac{I_{(q+2)/2}(q\tilde \omega)}{I_{q/2}(q\tilde \omega)} \sim \tilde \omega . \label{uniform} \end{eqnarray} This results in the chemical potential \begin{eqnarray} \mu_I(T,n_I) &\approx& -\frac T2 \log \tilde \omega \nn\\ &=& \frac {m_\pi}2 -\frac T2 \log\left [ \frac{m_\pi^{3/2} T^{3/2}}{n_I \sqrt{2\pi^3}} \right ] . \label{mui-high} \end{eqnarray} In order to occupy the zero momentum states we need that $\mu > m_\pi/2$. The critical temperature for the formation of a pion condensate is thus given by the relation $\mu_I (T_c, n_I)=m_\pi/2$. In the low temperature limit the critical temperature in the $n=1$ approximation is thus given by \begin{eqnarray} T_c = 2^{1/3} \pi \frac {n_I^{2/3}}{m_\pi}, \label{crit} \end{eqnarray} which up to the proportionality constant agrees with the result first obtained by Einstein \cite{einstein} (See \ref{app:B}). It should be noted that for $(2mT)^{3/2}/n_I \sim O(1)$ the sum over $n$ in Eq. (\ref{zex}) cannot be truncated to its first term. A calculation of the critical temperature that includes all terms is given in \ref{app:B}. \section{Critical Temperature for an Ideal Bose Gas} \label{app:B} In this Appendix we determine the critical temperature for a noninteracting Bose gas (see for example \cite{einstein,vosk}) and explain that the low temperature approximation gives the correct scaling behavior but does not reproduce the proportionality constant. For density $n_I$, the critical temperature is given by the condition \begin{eqnarray} \frac 12 n_I = \left .\int \frac{ d^3p}{(2\pi)^3} \frac 1{e^{\beta(\sqrt{p^2+m^2_\pi} -2\mu)}-1} \right |_{\mu =m_\pi/2}. 
\end{eqnarray} In the nonrelativistic approximation, this simplifies to \begin{eqnarray} \frac 12 n_I &=& \frac 1{2\pi^2} \int p^2 dp \frac 1{e^{\beta p^2/2m }-1} \nn\\ &=& \frac 1{2\pi^2} (2mT)^{3/2}\int x^2 dx \frac 1{e^{x^2}-1} . \end{eqnarray} The integral can be evaluated as \begin{eqnarray} \int x^2 dx \frac 1{e^{x^2}-1} = \frac 14 \sqrt \pi \zeta(3/2) \approx 1.15758. \end{eqnarray} The numerical constant obtained this way differs from the constant in (\ref{crit}). However, if we make the same approximation as in the derivation of Eq. (\ref{crit}), namely replacing the integral by \begin{eqnarray} \int_0^\infty dx \frac{x^2}{e^{x^2}-1} \to \int_0^\infty dx x^2 e^{-x^2}, \end{eqnarray} which corresponds to only keeping the $n=1$ term, we obtain \begin{eqnarray} \frac 12 n_I &=& \frac 1{2\pi^2} (2mT)^{3/2}\int x^2 e^{-x^2} dx \nn \\ &=& \left( \frac{ mT}{2\pi} \right )^{3/2}. \end{eqnarray} This results in the expression (\ref{crit}) for the critical temperature. \section{Canonical Partition Function at fixed isospin charge from mean field chiral perturbation theory} \label{condensed} When the isospin chemical potential is larger than half the pion mass, $\mu > m_\pi/2$, the grand canonical partition function enters in a phase in which the negatively charged pions have condensed. In this case the mean field free energy depends on the chemical potential and the mean field partition function is given by \cite{KSTVZ} \begin{eqnarray} Z(\mu) = e^{2V_4 F^2 \mu^2(1 +m_\pi^4/16\mu^4)}. \end{eqnarray} For $\mu \le m_\pi/2$ the mean field partition function is $ \mu $-independent, and for the entire range of $\mu$ it can be written as \begin{eqnarray} Z(\mu) &=& e^{4V_4 F^2 (\mu^2 -m_\pi^2/4 )^2\theta(\mu^2-m_\pi^2/4)/2\mu^2+V_4 m_\pi^2 F^2}. \end{eqnarray} For $\mu$ close to $m_\pi/2$ it can be approximated by \begin{eqnarray} Z(\mu) &=& e^{8V_4 F^2 (|\mu| -m_\pi/2 )^2\theta(|\mu| -m_\pi/2) +V_4 m_\pi^2 F^2}. 
\label{ztreshold} \end{eqnarray} For low temperatures, the corresponding canonical partition function is given by \begin{eqnarray} \frac{Z_q}{Z_{q=0}} &=& e^{-q^2/32 V_4F^2 T^2 -|q|m_\pi/2T}. \end{eqnarray} This can be seen by evaluating \begin{eqnarray} \frac{Z(\mu)}{Z(\mu=0)} = \sum_q Z_q e^{\beta \mu q}. \end{eqnarray} For $|\mu| < m_\pi/2$ the sum over $q$ is dominated by the $q =0$ term so that the partition function does not depend on $\mu$. For $\mu > m_\pi/2$, we can do a saddle point approximation in $q$. This results in the partition function (\ref{ztreshold}). For $\mu < -m_\pi/2$, we find a saddle point at negative $q$ again reproducing (\ref{ztreshold}). The free energy is an even function of $\mu$ so that the canonical partition functions are even in $q$ as well. In the mean field approximation the distinction between even and odd $q$ has been lost and we do not find that $Z_q$ vanishes for odd $q$. The canonical partition function can be expressed in terms of the isospin density $n_I= q/V_3$, \begin{eqnarray} \frac{Z_q}{Z_0} &=& e^{-V_3 n_I^2/32 F^2 T - |n_I| V_3 m_\pi/2T }. \end{eqnarray} This is the partition function of a repulsive Bose gas with vacuum energy density given by \cite{KSTVZ} \begin{eqnarray} E_0 = \frac{n_I^2}{32 F^2} + \frac 12 n_I m_\pi . \end{eqnarray} In this case, the chemical potential for positive $q$ is given by \begin{eqnarray} \mu_I &=& -\frac T2 (\log Z_{q+2} - \log Z_q)\nn\\ &=& \frac {m_\pi}2 +\frac q{16V_3F^2}. \end{eqnarray} Both $E_0$ and $\mu_I$ have been studied in lattice simulations \cite{Detmold:2012wc} where qualitatively the same behavior was found. Suppose we create a density $n_0$ at zero temperature in the grand canonical ensemble, so that \begin{eqnarray} n_0 = 16 F^2(\mu - m_\pi/2), \end{eqnarray} and then heat the sample in the canonical ensemble at this density.
The critical temperature from this mean field result is then given by (see (\ref{crit})) \begin{eqnarray} T_c &\sim& \frac {n_0^{2/3}}{m_\pi}\nn \\ &\sim& \frac {(\mu-m_\pi/2)^{2/3}}{m_\pi}. \end{eqnarray}
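As a numerical cross-check of the truncation discussed in \ref{app:B}, the full Bose integral and its $n=1$ approximation can be compared directly (in Python with \texttt{scipy}); the truncation only changes the proportionality constant of $T_c$, not the scaling $T_c\propto n_I^{2/3}/m_\pi$:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import zeta

# Full Bose integral: int_0^inf x^2/(e^{x^2}-1) dx = (sqrt(pi)/4) * zeta(3/2)
full, _ = quad(lambda x: x**2 / np.expm1(x**2), 0.0, np.inf)
print(full, np.sqrt(np.pi) / 4 * zeta(1.5))   # both ~ 1.15758

# n = 1 truncation: int_0^inf x^2 e^{-x^2} dx = sqrt(pi)/4
trunc, _ = quad(lambda x: x**2 * np.exp(-x**2), 0.0, np.inf)
print(trunc)                                   # ~ 0.44311

# Since T_c ~ (integral)^{-2/3}, the truncation rescales T_c by
print((full / trunc) ** (2.0 / 3.0))           # zeta(3/2)^{2/3} ~ 1.90
```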
\section{Introduction} Logistics systems are becoming increasingly complex and interrelated, from the operational level to the strategic level, due to economic globalization, offshoring of production, increasing product complexity, and fast-changing trends, among others \citep{Harland2003-ez, Dolgui2020-yf, Choi2001-pl}. Under these circumstances, companies have been pursuing various types of efficiency improvements for greater competitiveness. One of the main pillars of such measures is the consolidation of logistics systems \citep{Buffa1987-ne, Hall1987-id, Cetinkaya2006-wr}. Logistics integration is now being practiced not only within companies but also between different companies, i.e., horizontal collaboration \citep{Chan2004-xx, Cruijssen2007-qq,Naesens2009-mu}. Horizontal cooperation has been attempted and practiced at various levels. \cite{Pan2019-sm} classified extant horizontal cooperation schemes into six categories, i.e., (i) single carrier collaboration \citep{Puettmann2010-jq,Hernandez2011-xq}, (ii) carrier alliance and coalition \citep{Klaas-Wissing2010-yx}, (iii) transport market place \citep{Huang2013-qf}, (iv) flow-controlling entities collaboration \citep{Cruijssen2007-qq}, (v) logistics pooling \citep{Pan2019-sm}, and (vi) Physical Internet (PI) \citep{Ballot2011-hm, Montreuil2011-hh}. These schemes connect the logistics networks of participating parties, generating larger and more complex network structures. More advanced cooperation generally allows for more global efficiency, but requires the integration of many stakeholders not only at an operational level but also at organizational and managerial levels \citep{Chan2004-xx, Cruijssen2007-qq, Chen2008-yb, Naesens2009-mu,Barthe-Delanoe2014-wz, Da_Silva_Serapiao_Leal2019-xw, Pan2019-zz, Pan2021-ib}. In addition to these implementation issues, here we raise another issue related to the increased complexity of the system.
The transportation dynamics in such interrelated and automated systems and how they respond to changes in demand are difficult to grasp intuitively. While simulating various situations can provide predictive assessments of the performance, sustainability, and robustness of logistics operations, such simulations are often not based on a phenomenological understanding and thus may be sensitive to unanticipated perturbations. We believe that understanding the basic behavior of the entire system in a logistics network is useful for (i) strategic decision making on how to proceed with horizontal cooperation, (ii) scenario design for predictive simulation, and (iii) interpretation of empirical data and simulation results. However, research based on such an approach is still scarce \citep{Treiblmaier2020-ur}. Notably, the fields of applied mathematics, statistical physics, and network science have advanced our understanding of the fundamental characteristics of traffic congestion in networks in various contexts \citep{Boccaletti2006-ru, Tadic2007-kk, Chen2011-fy}. Past literature has also underscored the usefulness of such findings in supply chain management \citep{Hearnshaw2013-kf}. Such theoretical studies have elucidated the emergence mechanisms of congestion in computer networks \citep{Ohira1998-vk}, road networks \citep{Biham1992-or}, airport networks \citep{Ezaki2014-fi}, and production networks \citep{Ezaki2015-sy}, among others. Previous studies have found critical determinants of transport performance, e.g., network topology \citep{Guimera2002-eo,Zhao2005-vk}, capacity distribution \citep{Zhao2005-vk, Wu2008-do}, and routing algorithm \citep{Echenique2004-jw, Zhang2007-vy, Ling2010-dh, Ezaki2015-yu}. These factors are taken into consideration in this study. Although such theoretical studies have provided useful implications for various applications, the conclusions are not directly applicable to the design of logistics networks for the following reasons.
First, to enable mathematical analysis, these studies used models that are too simple relative to logistics networks (e.g., without considering delivery costs or optimized route finding). Second, because the focus of these studies is often on the (very) large-scale behavior of the system, e.g., scale-free properties of networks and phase transitions, the conclusions may not hold, or may not be dominant, in a realistic system of relatively small size. In addition, for this reason, heuristic algorithms with very low computational costs were used as the routing algorithms \citep{Chen2011-fy}. However, given the distributed computations in logistics networks, high-performance routing should be assumed. On the other hand, network-level modeling of logistics systems has been performed in the context of the PI, which is a decentralized logistics network characterized by the use of modularized containers (PI containers) and standardized distribution centers (PI hubs) that connect to each other. In this context, the typical uses of mathematical models can be divided into three main categories: analytical computation, optimization, and simulation. For example, \cite{Ballot2011-hm} considered different types of network topology and evaluated the performance of the PI using an analytical computation based on the continuous approximation method \citep{Langevin1996-ip}. Optimization studies have shown the efficacy of the PI \citep{Sohrabi2011-nr, Venkatadri2016-nl} and the PI with inventory management \citep{Pan2015-sa, Ji2019-il}. Compared with these two uses of models, simulation allows for more complex settings, e.g., optimized route-finding and dynamic changes in the environment. Past studies have simulated relatively large-scale systems considering realistic operations \citep{Hakimi2012-xy, Sarraj2014-fk}, a logistics system in a road network \citep{Fazili2017-jy}, disruptions in the network \citep{Yang2017-sh}, and urban logistics operations \citep{Kim2021-gu}.
These studies have successfully provided useful insights based on realistic settings, but their approaches are not suitable for drawing out a general understanding of the network dynamics. In contrast to the previous simulation studies of the PI, we use a simple model that abstracts the essence of network logistics to investigate general and fundamental properties of the system, focusing on how the response of the system to various disturbances is affected by the network structure, capacity distribution, and type of (non-)adaptive routing algorithm. We perform extensive numerical experiments based on the model and elucidate the system behavior. This approach allows us to find and analyze the intrinsic behaviors of logistics networks that do not depend on specific details of the system. Of course, we need to be careful that our conclusions are not significantly influenced by factors that we ignored in the modeling process, which will be discussed later. We examine three types of changes in demand (two temporal changes and one permanent change), and assess the resulting traffic. We find that the network topology significantly affects performance in all cases and show that hub-and-spoke networks are not robust to demand changes in general. The remainder of the article is organized as follows. First, Sec. \ref{sec:model} defines the examined model and scenarios. Then, Sec. \ref{sec:results} shows the comprehensive simulation results. Finally, we summarize the results and discuss their implications and future prospects in Sec. \ref{sec:conclusion}. \section{Model}\label{sec:model} \subsection{Overview of model} Consider a network of transit centers. Transit centers connected by a link can send freight to each other. Here, a packet is the minimum unit of delivery, and the link capacity is the maximum number of packets that can be carried at one time on the link. The cost of delivery is also defined for each link.
If demand exceeds capacity, packets must either wait until the demand on that link falls below the capacity or use a bypass route. We place $N$ packets in the system. When each packet is generated, we randomly choose the origin and destination nodes and compute the route plan by using one of the algorithms explained in Sec. \ref{sec:algo}. We update the system in discrete time. At each time step, each packet is allowed to jump to the next node if the capacity of the link is not exceeded. When a packet arrives at the destination node, the packet is removed from the system, and a new packet is generated with a randomly chosen origin and destination. \subsection{Networks} We consider three types of networks with 25 nodes. In the square lattice network (Fig. \ref{fig:network_sch}(a)), nodes adjacent to each other on the top, bottom, left, and right are connected together. On the link, the cost and capacity values are defined in both directions separately; i.e., the cost and capacity values of the link from node 1 to node 2 may be different from those of the opposite link. The cost value of the link from node $i$ to node $j$ ($1\leq i\neq j\leq 25$), denoted by $c_{ij}$, is chosen uniformly at random from the interval $[0.5, 1.5]$. We determine the capacity values in three ways, which are explained in the next subsection. The lattice has 40 bidirectional links (= 80 directional links). This type of network is often used to model road networks \citep{Zeng2014-xg}. In the random network (Fig. \ref{fig:network_sch}(b)), the 25 nodes are randomly connected by 40 bidirectional links. To wire the links, we randomly select a pair of non-wired nodes until the total number of links reaches 40. If the generated network is disconnected (i.e., with isolated parts), then we discard it and regenerate a new random network. As with the square lattice network, the cost value is randomly drawn from the interval $[0.5,1.5]$ for each direction of each link.
This network is examined to test whether the results are affected by the regular structures of the two other networks. The hub-and-spoke network is shown in Fig. \ref{fig:network_sch}(c). This network has 25 nodes and 24 bidirectional (48 unidirectional) links. The cost value is randomly chosen, as with the two other networks. This type of network is widely used in traditional supply chains for its efficiency \citep{Abdinnour-Helm1999-ke}. \subsection{Capacity} The capacity of a link from node $i$ to node $j$ ($1\leq i\neq j\leq 25$), $C_{ij}$, defines the maximum number of packets that can pass through it at the same time. We use three types of capacity distributions (Fig. \ref{fig:capacity_sch}). The first one, which we call ``\textit{single packet per link},'' defines the capacity to be $C^{\rm Single}_{ij}=1$ (packet) for all links. The second one, ``\textit{two packets per link},'' doubles this capacity to $C^{\rm Two}_{ij}=2$ (packets) for all links. The last one, ``\textit{weighted by betweenness centrality},'' weights link capacities according to pre-estimated demand for use. The demand is estimated by the betweenness centrality \citep{Yoon2006-lp}, which is computed as follows. First, the shortest paths between all the possible pairs of nodes are determined ($25\times 24=600$ pairs). Second, for each link from node $i$ to node $j$ ($1\leq i\neq j\leq 25$), the number of shortest paths that use the link is counted and denoted by $n_{ij}$. Note that this definition gives the link (or edge) betweenness centrality, which is a derivative of the commonly used (node) betweenness centrality \citep{Barthelemy2004-bk}. Finally, the capacity is computed by \begin{equation} C^{\rm Weighted}_{ij} = 1 + \frac{n_{ij}L}{\displaystyle\sum_{i'=1}^{25}\sum_{\substack{j'=1 \\ j'\neq i'}}^{25}{n_{i'j'}}}, \end{equation} where $L$ is the total number of (unidirectional) links ($L=80$ for the square lattice and random networks and $L=48$ for the hub-and-spoke network). 
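A sketch of this capacity assignment in Python (hand-rolled shortest-path counting on the $5\times5$ lattice; unit link costs are assumed here for brevity, whereas the model draws costs from $[0.5, 1.5]$):

```python
from collections import deque
from itertools import product

# 5x5 square lattice; each undirected neighbor pair gives two directed links.
nodes = list(product(range(5), range(5)))
adj = {v: [] for v in nodes}
for (x, y) in nodes:
    for (dx, dy) in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        if 0 <= x + dx < 5 and 0 <= y + dy < 5:
            adj[(x, y)].append((x + dx, y + dy))

def bfs(src):
    """Distances and shortest-path counts from src (unit costs -> BFS)."""
    dist, sigma, queue = {src: 0}, {src: 1}, deque([src])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v], sigma[v] = dist[u] + 1, 0
                queue.append(v)
            if dist[v] == dist[u] + 1:
                sigma[v] += sigma[u]
    return dist, sigma

dist, sigma = {}, {}
for s in nodes:
    dist[s], sigma[s] = bfs(s)

# n_ij: number of shortest paths (over all 600 ordered pairs) through link i->j
n = {}
for u in nodes:
    for v in adj[u]:
        n[(u, v)] = sum(
            sigma[s][u] * sigma[v][t]
            for s in nodes for t in nodes
            if s != t and dist[s][u] + 1 + dist[v][t] == dist[s][t]
        )

L = len(n)                     # 80 directed links
total = sum(n.values())
cap = {e: 1 + n[e] * L / total for e in n}
print(L, sum(cap.values()))    # capacities sum to 2 L = 160 by construction
```

The last line checks that the weighted capacities use the same total resources as the two-packets-per-link distribution.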
The sum of the capacity values, $C^{\rm Weighted}_{ij}$, over the links coincides with that of $C^{\rm Two}_{ij}$, allowing comparisons between the capacity distributions given the total capacity resources. We use these two distributions, i.e., the \textit{two-packets-per-link} and \textit{weighted-by-betweenness-centrality} capacity distributions, as representatives of a uniform distribution and a distribution weighted by demands, respectively. The \textit{single-packet-per-link} distribution is used as the baseline against which these two distributions are compared to examine the effects of doubling the capacity. We do not limit the number of packets that nodes can hold. \subsection{Routing algorithms}\label{sec:algo} We use three types of routing algorithms: the static shortest path (SSP) algorithm, the temporary fastest path (TFP) algorithm, and the adaptive fastest path (AFP) algorithm. When a packet enters the system with randomly selected origin and destination nodes, a routing algorithm generates a path plan. Because the availability of a link changes over time due to congestion, a packet may not always be able to travel in the minimum number of time steps. A packet using the SSP algorithm always follows the pre-calculated shortest path. If a move is blocked, the packet waits until the link becomes available. The TFP algorithm considers the path plans of other existing packets and avoids links that will be blocked in the future if necessary. The AFP algorithm applies the TFP algorithm every time the situation changes (e.g., when a new packet is added to the system or the pattern of link availability is updated). In the SSP and TFP algorithms, the planned path is reserved for the packet and thus not changed on the way to the destination node. In the AFP algorithm, the planned path is updated adaptively according to the environment. The SSP algorithm follows the shortest path regardless of congestion.
If a link in the shortest path is temporarily unavailable (e.g., due to excess demand), the packet waits until the link becomes available. The shortest path between nodes is computed via Dijkstra's algorithm \citep{Dijkstra1959-is} on the network with the cost values. Note that the capacity values are not used to compute the shortest path. The TFP and AFP algorithms take the number of time steps of travel, including wait time, into consideration. They find the fastest path, i.e., the path that requires the fewest time steps to travel from the given origin to the destination. (Note that the shortest path is the path with the minimum total cost, but it does not consider the waiting cost, which is explained later.) In our model, because the cost values on the links are not significantly different from each other, the fastest path coincides with the shortest path in many cases when all the links in the network are available. To find the fastest path, we use Dijkstra's algorithm on a time-expanded graph in which a node of the network at different time points is represented by different nodes (Fig. \ref{fig:algo_sch}). A move from node $i$ to node $j$ ($1\leq i\neq j\leq 25$) available at time $t$ is represented by a transition link from node $i$ at time $t$ to node $j$ at time $t + 1$. The cost of using this transition link is $c_{ij}$. If a packet at node $i$ does not move at time $t$ (to wait until a link becomes available), the transition is represented by a transition link from node $i$ at time $t$ to node $i$ at time $t+1$. The cost of using this transition link is defined as $c_{\rm W}$. In this paper, we examine two conditions of the waiting cost: $c_{\rm{W}}=0$ and $c_{\rm{W}}=0.5$. The waiting cost represents a penalty for delayed deliveries, which increases storage cost and reduces user satisfaction.
In this time-expanded graph, we compute the minimum cost to reach each of the nodes at each time step from a given node at a given time (nodes that cannot be reached at each time step are also identified in this process). The path that reaches the destination first is defined as the fastest path. The technical details of this procedure are as follows. First, we determine the time when the use of the reserved transition links will be completed, $t'$. We construct a time-expanded graph from the current time $t_0$ to $t'$. If the number of packets that will use a link from $i$ to $j$ at time $t$ is smaller than the link capacity, we place a transition link; otherwise, we do not. At time $t'$, we also place links between connected nodes, because the standard Dijkstra's algorithm can be applied for $t\geq t'$. If more than one fastest path (with the same travel time and cost) is found, the path that approaches the destination in the shortest time is adopted. For example, if routes $1-1-2-3$ and $1-2-2-3$ are found from node 1 to node 3, the latter is adopted as the fastest path. This happens when the link from node 2 to node 3 is not available until the last time step, in which case the choice of waiting at node 1 or node 2 affects neither the cost nor the time. Note that although Dijkstra's algorithm \citep{Dijkstra1959-is} is computationally affordable for the systems used in this paper, it is not applicable when the network becomes large, in which case more efficient algorithms are needed. To generate a time-expanded graph, we first identify the links that will be available in the future, considering the already planned paths of existing packets. The TFP algorithm sets the path plan in this way and does not change it. The AFP algorithm performs these procedures every time a new packet enters the system or the pattern of blocked links changes. When such an event occurs, we clear all the planned paths.
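The fastest-path search on the time-expanded graph can be sketched as follows. This is an illustrative sketch, not the authors' implementation: it encodes blockages as a set of `(i, j, t)` triples, orders Dijkstra states by (arrival time, total cost), and omits the tie-breaking rule of the text that prefers the path approaching the destination sooner:

```python
import heapq

def fastest_path(origin, dest, cost, blocked, c_wait=0.0, horizon=100):
    """Dijkstra on the time-expanded graph.  A state is (node, time);
    moving along an unblocked link (i, j) at time t costs cost[i, j],
    and waiting in place for one step costs c_wait.  Because the heap
    is ordered by (arrival time, total cost), the first plan popped at
    the destination is the earliest-arriving, then cheapest, one."""
    pq = [(0, 0.0, origin, (origin,))]
    best = {}                 # best known cost per (node, time) state
    while pq:
        t, c, u, path = heapq.heappop(pq)
        if u == dest:
            return list(path)
        if t >= horizon or best.get((u, t), float("inf")) < c:
            continue
        moves = [(u, c_wait)]                      # wait for one step
        moves += [(j, w) for (i, j), w in cost.items()
                  if i == u and (i, j, t) not in blocked]
        for v, w in moves:
            if c + w < best.get((v, t + 1), float("inf")):
                best[(v, t + 1)] = c + w
                heapq.heappush(pq, (t + 1, c + w, v, path + (v,)))
    return None

links = {(0, 1): 1.0, (1, 2): 1.0}
fastest_path(0, 2, links, blocked=set())        # direct: 0 -> 1 -> 2
fastest_path(0, 2, links, blocked={(0, 1, 0)})  # waits one step at node 0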
Then, we re-compute the fastest path for each packet in order of proximity to the destination node (i.e., the number of links in the shortest path from the current position to the destination node). This ordering is based on the premise that (i) a packet far from the destination has a higher possibility of finding a good alternative path than does a packet close to the destination, even when a link along the shortest path is blocked by other packets, and that (ii) such an algorithm generates globally better path plans than those produced without ordering. Note that, for simplicity of argument, we do not consider total-cost-minimization algorithms given the information on unoccupied links (i.e., the temporary \textit{shortest} path (TSP) algorithm and the adaptive \textit{shortest} path (ASP) algorithm), whereas we use the TFP and AFP algorithms. Such algorithms yield paths that wait at a node when the cost of waiting is smaller than the additional cost of using a detour path. Therefore, the results are expected to interpolate qualitatively between those obtained from the SSP and TFP/AFP algorithms (with the total cost larger than that of the SSP but smaller than that of the TFP/AFP, and the total time of travel shorter than that of the SSP but longer than that of the TFP/AFP). In fact, if $c_{\rm W}$ is set to a sufficiently large value, the TSP and ASP algorithms reduce to the TFP and AFP algorithms, respectively. Conversely, if $c_{\rm W}=0$, they reduce to the SSP algorithm. \subsection{Scenarios}\label{sec:scenario} We simulated four different settings. The first one was the baseline scenario, in which the system was not perturbed. The origin and destination nodes of each packet were selected uniformly at random. We examined how the three algorithms functioned in the three types of networks for various numbers of packets, three capacity distributions, and two waiting cost values.
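The AFP re-planning step above reduces to a sort by hop count followed by sequential re-planning. In this sketch the path planner is passed in as a black box (in the full model it would be the fastest-path search, with each earlier plan reserving links for later ones); `hops`, `plan`, and the packet fields are illustrative names:

```python
def afp_replan(packets, hops, plan):
    """Discard all planned paths, then re-plan packets in order of
    proximity to their destinations (fewest shortest-path links first)."""
    for p in packets:
        p["path"] = None                     # clear stale plans
    for p in sorted(packets, key=lambda p: hops(p["pos"], p["dest"])):
        p["path"] = plan(p["pos"], p["dest"])
    return packets

# Toy check on a line graph, where the hop count is just the distance:
order = []
def plan(pos, dest):
    order.append((pos, dest))                # record the planning order
    return list(range(pos, dest + 1))

packets = [{"pos": 0, "dest": 5}, {"pos": 4, "dest": 5}]
afp_replan(packets, hops=lambda a, b: abs(b - a), plan=plan)
```

The packet one hop from its destination is planned first, consistent with the proximity ordering described in the text.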
The second one was the \textit{one-shot-demand-concentration} scenario, in which a set of $M$ ($=2$ or $5$; exceeding the link capacity) packets that could not be sent at the same time was randomly placed in addition to the $N$ regular packets already existing in the system (Fig. \ref{fig:concentration_sch}(a)). This elucidated the system behavior in response to a temporary and localized excess demand. These $M$ packets had the same origins and destinations. The origins and destinations of the packets, including the regular ones, were selected uniformly at random. We traced these $M$ packets and measured the time taken for the first and last packets to arrive at the destination. Each packet that completed its travel was removed from the system. After all the $M$ packets had arrived, we generated a new set of traced packets and repeated the measurement. The simulations were performed for the three algorithms, three types of networks, various numbers of packets, three capacity distributions, two waiting cost values, and two packet numbers, $M$. The third one was the \textit{dynamical-demand-change} scenario, in which a portion of the links became temporarily unavailable (Fig. \ref{fig:concentration_sch}(b)). Every five time steps, we selected 12.5$\%$, 25$\%$, or 50$\%$ of the links uniformly at random and blocked them for five time steps. We assumed that the routing algorithms could use the information about how long the link blockage would last. When the pattern of blockage changed, the AFP algorithm recomputed the path plans of all the packets. A packet using the SSP algorithm had to wait until a blocked link became available. The TFP algorithm took into account the blockage information available at the time (i.e., when a packet was generated) but was not flexible to new blockage patterns that would occur in the future.
The simulations were performed for the three algorithms, three types of networks, various numbers of packets, three capacity distributions, two waiting cost values, and three fractions of link blockages. The fourth one was the \textit{permanent-demand-concentration} scenario, in which the origins and the destinations of the packets were concentrated on two different single nodes (Fig. \ref{fig:concentration_sch}(c)). When an origin (destination) node was to be selected, a single node was chosen with a probability of $0.25$, while each of the other 24 nodes was selected with a probability of $0.75/24$. This demand pattern was fixed throughout the simulation. The simulations were performed for the three algorithms, three types of networks, various numbers of packets, three capacity distributions, and two waiting cost values. \subsection{Simulation conditions} We ran these procedures for $T=1000$ time steps for each condition. Because the cost values and one type of network (i.e., the random network) were generated at random, we independently generated 10 networks with cost values under the same condition and averaged the results over these 10 runs. \clearpage \section{Results}\label{sec:results} \subsection{Network statistics} Here, we report the statistics of the networks (Fig. \ref{fig:network_sch}). Figure \ref{fig:network_sch}(d--f) shows an example of the length distribution of the shortest paths computed for all the pairs of nodes for each network. The distributions for the square lattice network (Fig. \ref{fig:network_sch}(d)) and random network (Fig. \ref{fig:network_sch}(e)) both showed peaks at length 4. Because the square lattice network had no shortcut links, it had a larger percentage of node pairs with longer shortest paths than did the random network. The distribution for the hub-and-spoke network had its peak at length 5, which was the maximum value of the shortest path length (Fig. \ref{fig:network_sch}(f)).
Figure \ref{fig:network_sch}(g--i) shows the distribution of the betweenness centrality of the links, which is a measure of how likely a link was to be used in a shortest path. The distributions for the square lattice and random networks were similar, but the variance was smaller in the random network (Fig. \ref{fig:network_sch}(g, h)). Meanwhile, the betweenness centrality of the hub-and-spoke network took only two values, namely, 24 or 114, corresponding to the 40 (directional) links connecting to the peripheral nodes and the 8 links connecting to the central node, respectively. These distributions were reflected in the capacity distributions (Fig. \ref{fig:capacity_sch}(d--f)). \subsection{Fundamental analyses of system} \subsubsection{Routing algorithm and network structure} We compared the total cost and travel time of packets across the three different algorithms and waiting cost conditions for the baseline scenario for each network. Note that because the average path length, number of links, and total capacity of links differ among the three networks, we do not quantitatively compare the results between networks. First, we report the cases with $c_{\rm W}=0$ (i.e., negligible waiting cost) and the \textit{single-packet-per-link} capacity condition. Figure \ref{fig:congestion_pattern} illustrates how the three algorithms did (or did not) even out the traffic demand in each network. In the square lattice and random networks, the TFP and AFP algorithms effectively dispersed the delivery routes. In contrast, in the hub-and-spoke network, the congestion pattern remained the same regardless of the choice of routing algorithm. These properties were also reflected in the total cost and travel time of individual packets. In the square lattice and random networks, the TFP and AFP algorithms increased the travel cost slightly compared with the SSP algorithm (Fig.
\ref{fig:fundamental}(a, c)), but they significantly reduced the travel time when the number of packets in the system increased, which suggests that they were successful in finding alternative routes efficiently (Fig. \ref{fig:fundamental}(b, d)). The TFP and AFP algorithms performed similarly. Meanwhile, in the hub-and-spoke network, all three algorithms resulted in similar travel costs and times (Fig. \ref{fig:fundamental}(e, f)) because this network had no bypass route for any pair of nodes. For the same reason, the travel cost and time rapidly increased under excess demand. When the waiting cost was not negligible (e.g., $c_{\rm W}=0.5$), the cost of travel increased with travel time, significantly deteriorating the performance of the SSP algorithm (Fig. \ref{fig:fundamental_cw05}(a, c)). The presence of the waiting cost did not substantially change the travel time (Figs. \ref{fig:fundamental_cw05}(b, d, f) and \ref{fig:fundamental}(b, d, f)). As in the cases with $c_{\rm W}=0$, the performance measures of the three algorithms were similar in the hub-and-spoke network (Fig. \ref{fig:fundamental_cw05}(e, f)). Because the waiting cost did not significantly change the route-selection behavior, we focus on the cases with $c_{\rm W}=0$ in the rest of this article and show the results for $c_{\rm W}=0.5$ in the Appendices. \subsubsection{Capacity distributions} Next, we examined the effect of capacity distributions on the performance of the algorithms. The total capacities (i.e., the sum of the capacity values over all the links in a network) of the \textit{two-packets-per-link} and \textit{weighted-by-betweenness-centrality} conditions were both twice that of the \textit{single-packet-per-link} condition. These two capacity distributions reduced the travel time in all three networks and for all three algorithms with $c_{\rm W}=0$ (Fig. \ref{fig:fundamental}) and $c_{\rm W}=0.5$ (Fig. \ref{fig:fundamental_cw05}) as compared with the \textit{single-packet-per-link} distribution.
In detail, for the square lattice and random networks, they kept the travel time almost constant when the number of packets, $N$, was smaller than $\approx 50$, while the travel time was suppressed to about 30--60$\%$ of that of the \textit{single-packet-per-link} condition when $N=100$. In the hub-and-spoke network, the rapid increase in the travel time was significantly suppressed (for example, the travel time was reduced to less than half for $N=50$). The total cost was not changed when $c_{\rm W}=0$ but was reduced when $c_{\rm W}=0.5$ because it was associated with the total time. With the SSP algorithm, the total travel time decreased considerably in the square lattice and hub-and-spoke networks under the \textit{weighted-by-betweenness-centrality} condition (Figs. \ref{fig:fundamental}(b, h, n, f, l, r) and \ref{fig:fundamental_cw05}(b, h, n, f, l, r)), suggesting that the capacity resource was effectively allocated to important links. In particular, the hub-and-spoke network benefited from this capacity distribution because the demands were concentrated on a small number of links (Fig. \ref{fig:network_sch}(i)). In contrast, the difference between the two capacity conditions was not large in the random network because it had a smaller variance of the betweenness centrality than did the other two networks; thus, the shortest paths were more dispersed (Fig. \ref{fig:network_sch}(g--i)). The TFP and AFP algorithms performed similarly under the two capacity conditions except in the hub-and-spoke network (Figs. \ref{fig:fundamental}(g--j, m--p) and \ref{fig:fundamental_cw05}(g--j, m--p)). These algorithms dispersed the paths without significantly increasing the cost in the square lattice and random networks, and thus, the \textit{weighted-by-betweenness-centrality} condition was not superior to the \textit{two-packets-per-link} condition.
In the hub-and-spoke network, because these two algorithms generated paths similar to those of the SSP algorithm, the \textit{weighted-by-betweenness-centrality} capacity distribution resulted in a larger decrease in the travel time (Figs. \ref{fig:fundamental}(f, l, r) and \ref{fig:fundamental_cw05}(f, l, r)). \subsection{One-shot demand concentration} Next, we examine the response of the system to a temporary and localized increase in demand (i.e., the \textit{one-shot-demand-concentration} scenario). We placed a set of $M$ (= 2 or 5) packets having the same destination at a random position (Fig. \ref{fig:concentration_sch}(a)) and recorded the travel paths of the first and last packets to arrive. Figure \ref{fig:oneshot} shows the total costs and travel times of the first and last packets for $c_{\rm W} =0$ and $M=5$. Because the SSP algorithm generated the same paths for the set of traced packets, the total cost was the same between the first and last packets in all the networks. However, as the last packet had to wait until the previous packets had cleared out and the shortest path became available, the difference in the arrival times between the first and last packets was large (Fig. \ref{fig:oneshot}). The difference was the smallest in the \textit{two-packets-per-link} capacity distribution, while the two other capacity distributions yielded similar time differences. This is in stark contrast to the baseline scenario, in which the \textit{weighted-by-betweenness-centrality} capacity distribution performed the best in the majority of cases (Figs. \ref{fig:oneshot} and \ref{fig:oneshot_cw05}). The time difference was not substantially influenced by the number of packets in the system in the range of $1\leq N \leq 100$. The TFP and AFP algorithms reduced the difference in the arrival time in the square lattice and random networks (Fig. 
\ref{fig:oneshot}(b, d, h, j, n, p)) by dispersing the paths, thereby increasing the effective capacity of delivery from the origin to the destination. The difference in arrival time was the smallest in the \textit{two-packets-per-link} capacity distribution because detour routes were likely to exist everywhere, as the capacity resources were not concentrated on a limited number of links. The total travel cost was only slightly greater for the last packet than for the first (Fig. \ref{fig:oneshot}(a, c, g, i, m, o)). In the hub-and-spoke network, the two algorithms yielded results similar to those of the SSP algorithm because no detour routes existed. The pattern of the results remained the same when we set $c_{\rm W}=0.5$ (Fig. \ref{fig:oneshot_cw05}) except that the total cost increased with the travel time. When we set $M=2$ (and $c_{\rm W}=0$), the results were qualitatively unchanged, but the difference in the travel time between the first and last (i.e., second) packets was small (Fig. \ref{fig:oneshot_packet_size2}). \subsection{Dynamical demand change} We considered a situation where some links were randomly closed for a certain length of time for external reasons (i.e., \textit{dynamical-demand-change} scenario; Fig. \ref{fig:concentration_sch}(b); see Sec. \ref{sec:scenario} for details). Link closures that may occur in the future were not taken into account by the routing algorithms (but information about current link closures, i.e., which links were being closed and until when they were closed, was available for the TFP and AFP algorithms). We simulated the system dynamics with three percentages of blocked links, i.e., 12.5$\%$, 25$\%$, and 50$\%$. Figure \ref{fig:dynamical} shows the total cost and travel time when 25 $\%$ of the links were closed and $c_{\rm W}=0$. With the SSP algorithm, the travel time increased compared with that in the baseline scenario (Fig. \ref{fig:fundamental}). 
The results were similar between the \textit{two-packets-per-link} and \textit{weighted-by-betweenness-centrality} capacity conditions because of the large negative impact of the closure of a critical large-capacity link, which offset the advantage of the \textit{weighted-by-betweenness-centrality} capacity distribution. The TFP and AFP algorithms both reduced the travel times in the square lattice and random networks (Fig. \ref{fig:dynamical}). In particular, the AFP algorithm yielded faster travel than did the TFP algorithm by adaptively rerouting the packets. The performance measures of these algorithms were similar between the \textit{two-packets-per-link} and \textit{weighted-by-betweenness-centrality} capacity conditions. The total cost was larger for the AFP, TFP, and SSP algorithms, in that order. In the hub-and-spoke network, no substantial difference was found between these algorithms. The results remained qualitatively the same for the different percentages of link closure (i.e., $12.5\%$ and $50\%$; Figs. \ref{fig:dynamical_12_5} and \ref{fig:dynamical_50}, respectively) and for a different value of the waiting cost (i.e., $c_{\rm W}=0.5$; Fig. \ref{fig:dynamical_25_cw05}). \subsection{Permanent demand concentration} Finally, we examined the permanent change in the demand pattern (i.e., the \textit{permanent-demand-concentration} scenario; Fig. \ref{fig:concentration_sch}(c); Sec. \ref{sec:scenario}). The permanent demand concentration caused significant increases in the cost and travel time in all the cases (Figs. \ref{fig:permanent} and \ref{fig:permanent_cw05}). The TFP and AFP algorithms effectively dispersed the travel paths, thus reducing the travel time compared with the SSP algorithm in the square lattice and random networks. In the hub-and-spoke network, the three algorithms yielded similar results. The TFP and AFP algorithms performed similarly in all the networks.
The travel time was the smallest in the \textit{two-packets-per-link} capacity distribution (Figs. \ref{fig:permanent}(g--l) and \ref{fig:permanent_cw05}(g--l)), because the uniform capacity distribution allowed detour routes to be found even when the demand pattern was changed. \section{Conclusion}\label{sec:conclusion} We examined the effect of network topology on the performance of the network logistics system under perturbations. The following results were obtained consistently for the three demand change scenarios: (i) The adaptive algorithms (i.e., the TFP and AFP algorithms) performed better than did the SSP algorithm except in the hub-and-spoke network, where no difference was found among the algorithms. (ii) The capacity distribution based on the betweenness centrality (i.e., \textit{weighted-by-betweenness-centrality}) was more effective than the uniform capacity distribution with the same total capacity (i.e., \textit{two-packets-per-link}) in reducing the travel time when the demand was generated uniformly at random. In contrast, under the demand change scenarios, the \textit{two-packets-per-link} distribution reduced the travel time more than did the \textit{weighted-by-betweenness-centrality} condition. (iii) The performances of the square lattice and random networks were qualitatively similar. These results collectively suggest that networks with redundancy can respond well to changes in demand, while hub-and-spoke networks (which do not have such redundancy) cannot benefit from adaptive delivery operations. For the redundant networks (i.e., the square lattice and random networks), we also found the following results specific to the scenarios: (i) In the one-shot-demand-concentration scenario, the TFP and AFP algorithms effectively reduced the difference between the arrival times of the first and last packets of a set of packets having the same origins, destinations, and departure times.
This quantity would be a useful measure for evaluating the performance of logistics networks. (ii) In the dynamical-demand-change scenario, the AFP algorithm effectively found a temporarily better route than did the TFP algorithm, which did not change the route during the travel. In this paper, for simplicity of discussion, we focused on the total cost and time of travel as measures of performance. However, other measures, including sustainability measures \citep{Montreuil2011-hh}, should be considered in a more detailed system evaluation. For example, studies have shown that the PI can reduce greenhouse gas emissions by 14\% to 50\% \citep{Pan2013-xd}. In our model, the length of travel is roughly measured by the total cost (when we set $c_{\rm W}=0$). We found that the travel time can be reduced in redundant networks without a substantial increase in the total cost, suggesting the efficiency of the algorithms in terms of sustainability. In addition, we have only partially considered the cost of inventory by using a waiting cost. By describing it in more detail, we could also consider the problem of coordinated inventory management \citep{Darmawan2021-pl} in logistics networks with our approach. It should be noted that we set the cost of using a link at random in the range of 0.5 to 1.5. In practice, the cost of travel may differ significantly between links if multimodal transportation and/or different types of players (e.g., manufacturers, wholesalers, and retailers) are to be included. The same issue applies to the cost of waiting, $c_{\rm W}$. In this paper, we did not pursue realistic settings for these values, focusing instead on the general behavior of the system independent of the specific parameter settings. We believe that the main conclusion of the paper (i.e., redundant networks are robust) is not substantially influenced by the choice of the parameter values unless one sets extreme values.
Also, in our model, the number of packets sent in a single transport on a link was limited to a small positive integer, while many packets can be sent at once in real logistics systems. We believe that this does not substantially affect our conclusions. Even if the link capacities and the number of packets in the system were increased, a network with bypass routes would still have allowed for the efficient dispersal and delivery of packets in case of excess demand. In order to provide baseline results as a first step, the efficiency gains in logistics due to consolidation (and other delivery constraints) were not considered in the current model, but their impacts should be carefully examined in the future. For example, if an operation that increases the load factor of a single delivery by consolidating the transportation of one link with another link is possible, the routing algorithm would need to be modified. However, even in such a case, the shortest or fastest route would be the baseline choice, and the overall pattern of logistics is not expected to change significantly. From the standpoint of load factor, hub-and-spoke networks enable delivery with a high load factor by accumulating the freight from subordinate distribution centers. On the other hand, networks with many bypass routes cannot benefit from accumulation, but because the number of delivery route patterns is large, there would be many chances of consolidation between different delivery routes. Comparing these two types of networks under various conditions will be an interesting research topic in the future. In this paper, we examined three types of capacity distributions but did not try to determine the best one. Because the packets interact with each other's routes, selecting the best capacity distribution is not straightforward. For example, a previous study indicated that traffic congestion cannot be predicted simply from the betweenness centrality \citep{Holme2003-ig}.
The results will also vary depending on which measure of robustness \citep{Gu2020-bd} is adopted. Therefore, the identification of an efficient capacity allocation given the network structure and operation of a logistics system is a subject of future research. Our results suggest that hub-and-spoke networks are not robust to demand changes and cannot, in general, benefit from flexible routing in a logistics network. However, such a structure is beneficial in terms of (static) efficiency and is quite widely used in practice \citep{Abdinnour-Helm1999-ke}. Thus, how to efficiently modify part of the network to increase its robustness is a highly interesting subject for future research. From a managerial perspective, our findings suggest that establishing regular services between distribution centers and/or building new distribution centers that allow bypass routes in the network is preferable. The robustness of a supply chain network is closely related to the concept of resilience \citep{Ponomarov2009-fr, Pettit2010-mm, Tukamuhabwa2015-ff, Ivanov2017-hl, Hosseini2019-mv, Dolgui2020-yf, Chen2021-ez, Ulusan2021-mw}. Resilience is the capacity of an enterprise that enables it ``\textit{to survive, adapt, and grow in the face of turbulent change}'' \citep{Fiksel2006-oa}. For logistics networks to be resilient, their long-term stability should be guaranteed, and this is an important issue that needs to be clarified in the future. Another direction for future research is multimodal transportation networks \citep{Agamez-Arias2017-iy, Crainic2018-ys, Zhou2021-ws}. To understand a multimodal logistics network, it will be necessary to understand the entire network, which is composed of networks corresponding to the different modes, each with its own characteristics. We believe that our approach will provide useful insights into such issues. \nolinenumbers \clearpage
\section{Introduction} Question Generation (QG) from text has gained significant popularity in recent years in both academia and industry, owing to its wide applicability in a range of scenarios including conversational agents, automating reading comprehension assessment, and improving question answering systems by generating additional training data. Neural network based methods represent the state-of-the-art for automatic question generation. These models do not require templates or rules, and are able to generate fluent, high-quality questions. Most of the work in question generation takes sentences as input \citep{cardie2018harvesting,kumar2018automating,song2018leveraging,kumar2018framework}. QG at the paragraph level is much less explored and remains a challenging problem. The main challenges in paragraph-level QG stem from the larger context that the model needs to assimilate in order to generate relevant questions of high quality. Existing question generation methods are typically based on recurrent neural networks (RNNs), such as the bi-directional LSTM. Equipped with different enhancements such as the attention, copy, and coverage mechanisms, RNN-based models~\citep{du2017learning,kumar2018automating,song2018leveraging} achieve good results on sentence-level question generation. However, owing to their ineffectiveness in dealing with long sequences, paragraph-level question generation remains difficult for these models. Recently, \cite{zhao2018paragraph} proposed a paragraph-level QG model with maxout pointers and a gated self-attention encoder. To the best of our knowledge, this is the only model that is designed to support paragraph-level QG and outperforms other models on the SQuAD dataset~\citep{rajpurkar2016squad}. One straightforward extension to such a model would be to reflect the structure of a paragraph in the design of the encoder.
Our first attempt is indeed a hierarchical BiLSTM-based paragraph encoder (\hpe), wherein the hierarchy comprises the word-level encoder that feeds its encoding to the sentence-level encoder. Further, dynamic paragraph-level contextual information in the BiLSTM-\hpe is incorporated via both word- and sentence-level selective attention. However, LSTM is based on the recurrent architecture of RNNs, making the model somewhat rigid and less dynamically sensitive to different parts of the given sequence. LSTM models are also slower to train. In our case, a paragraph is a sequence of sentences and a sentence is a sequence of words. The Transformer~\citep{vaswani2017attention} is a recently proposed neural architecture designed to address some deficiencies of RNNs. Specifically, the Transformer is based on the (multi-head) attention mechanism, completely discarding recurrence in RNNs. This design choice allows the Transformer to effectively attend to different parts of a given sequence. The Transformer is also relatively much faster to train and test than RNNs. As humans, when reading a paragraph, we look for important sentences first and then important keywords in those sentences to find a concept around which a question can be generated. Taking this inspiration, we give the same power to our model by incorporating word-level and sentence-level selective attention to generate high-quality questions from paragraphs. In this paper, we present and contrast novel approaches to QG at the level of paragraphs. Our main contributions are as follows: \begin{itemize} \item We present two hierarchical models for encoding the paragraph based on its structure. We analyse the effectiveness of these models for the task of automatic question generation from paragraphs. \item Specifically, we propose a novel hierarchical Transformer architecture. At the lower level, the encoder first encodes words and produces a sentence-level representation.
At the higher level, the encoder aggregates the sentence-level representations and learns a paragraph-level representation. \item We also propose a novel hierarchical BiLSTM model with selective attention, which learns to attend to important sentences and words from the paragraph that are relevant to generate meaningful and fluent questions about the encoded answer. \item We also present attention mechanisms for dynamically incorporating contextual information in the hierarchical paragraph encoders and experimentally validate their effectiveness. \end{itemize} \section{Related Work} Question generation (QG) has recently attracted significant interest in the natural language processing (NLP)~\citep{du2017learning,kumar2018automating,song2018leveraging,kumar2018framework} and computer vision (CV)~\citep{li2018visual,fan2018question} communities. Given an input ({\em e.g.},\ a passage of text in NLP or an image in CV), optionally also an answer, the task of QG is to generate a natural-language question that is answerable from the input. Existing text-based QG methods can be broadly classified into three categories: (a) rule-based methods, (b) template-based methods, and (c) neural network-based methods. Rule-based methods~\citep{heilman2010good} perform syntactic and semantic analysis of sentences and apply fixed sets of rules to generate questions. They mostly rely on syntactic rules written by humans~\citep{heilman2011automatic} and these rules change from domain to domain. On the other hand, template-based methods~\citep{ali2010automation} use generic templates/slot fillers to generate questions. More recently, neural network-based QG methods~\citep{du2017learning,kumar2018automating,song2018leveraging} have been proposed. They employ an RNN-based encoder-decoder architecture and train in an end-to-end fashion, without the need for manually created rules or templates.
\cite{du2017learning} were the first to propose a sequence-to-sequence (Seq2Seq) architecture for QG. \cite{kumar2018automating} proposed to augment each word with linguistic features and encode the most relevant \emph{pivotal answer} in the text while generating questions. Similarly, \cite{song2018leveraging} encode ground-truth answers (given in the training data), use the copy mechanism and additionally employ context matching to capture interactions between the answer and its context within the passage. They encode the ground-truth answer when generating questions, which might not be available for the test set. \cite{zhao2018paragraph} recently proposed a Seq2Seq model for paragraph-level question generation, where they employ a maxout pointer mechanism with a gated self-attention encoder. \cite{tran2018importance} contrast recurrent and non-recurrent architectures with respect to their effectiveness in capturing hierarchical structure. In machine translation, a non-recurrent model such as the Transformer~\citep{vaswani2017attention}, which uses neither convolutions nor recurrent connections, is often expected to perform better. However, they find that a recurrent model can be more effective than the Transformer at capturing hierarchical structure. Our findings also suggest that the LSTM outperforms the Transformer in capturing the hierarchical structure. In contrast, \cite{goldberg2019assessing} report settings in which attention-based models, such as BERT, are better capable of learning hierarchical structure than LSTM-based models. \section{Hierarchical Paragraph Representation} We propose a general hierarchical architecture for better paragraph representation at the level of words and sentences. This architecture is agnostic to the type of encoder, so we base our hierarchical architectures on BiLSTM and Transformers.
We then present two decoders (LSTM and Transformer) with hierarchical attention over the paragraph representation, in order to provide the dynamic context needed by the decoder. The decoder is further conditioned on the provided (candidate) answer to generate relevant questions. \paragraph{Notation:} The question generation task consists of pairs $(\vX,\pmb{y})$ conditioned on an encoded answer $\vz$, where $\vX$ is a paragraph, and $\vy$ is the target question which needs to be generated with respect to the paragraph. Let us denote the $i$-th sentence in the paragraph by $\pmb{x}_i$, where $x_{i,j}$ denotes the $j$-th word of the sentence. We assume that the first and last words of the sentence are special beginning-of-the-sentence $<\textsc{BOS}>$ and end-of-the-sentence $<\textsc{EOS}>$ tokens, respectively. \subsection{Hierarchical Paragraph Encoder} \label{sec:hpe} Our hierarchical paragraph encoder (\hpe) consists of two encoders, {\em viz.}, a sentence-level and a word-level encoder ({\em c.f.} Figure~\ref{fig:fig1}). \paragraph{Word-Level Encoder:} The lower-level encoder \wenc encodes the words of individual sentences. This encoder produces a sentence-dependent word representation $\pmb{r}_{i,j}$ for each word $x_{i,j}$ in a sentence $\pmb{x}_i$, {\em i.e.}, $\pmb{r}_i=\wenc(\pmb{x}_i)$. This representation is the output of the last encoder block in the case of the Transformer, and the last hidden state in the case of the BiLSTM. Furthermore, we can produce a fixed-dimensional representation for a sentence as a function of $\pmb{r}_i$, {\em e.g.}, by summing (or averaging) its contextual word representations, or concatenating the contextual representations of its \bos and \eos tokens. We denote the resulting sentence representation by $\tilde{\pmb{s}}_i$ for a sentence $\pmb{x}_i$. \paragraph{Sentence-Level Encoder:} At the higher level, our HPE consists of another encoder to produce a paragraph-dependent representation of the sentences.
The inputs to this encoder are the sentence representations produced by the lower-level encoder, which are insensitive to the paragraph context. In the case of the Transformer, the sentence representation is combined with its positional embedding to take the ordering of the paragraph sentences into account. The output of the higher-level encoder is a contextual representation for each sentence, $\pmb{s}=\senc(\tilde{\pmb{s}})$, where $\pmb{s}_i$ is the paragraph-dependent representation for the $i$-th sentence. In the following two sub-sections, we present our two hierarchical encoding architectures, {\em viz.}, the hierarchical BiLSTM (Section \ref{sec:hbilstm}) and the hierarchical Transformer (Section \ref{sec:htrans}). \subsection{Dynamic Context in BiLSTM-\hpe} \label{sec:hbilstm} \begin{figure*}[htb] \centering \includegraphics[width=\textwidth]{hier_lstm_new.pdf} \caption{Our hierarchical LSTM architecture.} \label{fig:fig1} \end{figure*} In this first option ({\em c.f.} Figure \ref{fig:fig1}), we use both word-level attention and sentence-level attention in a hierarchical BiLSTM encoder to obtain the hierarchical paragraph representation. We employ the attention mechanism proposed in \citep{Luong2015EffectiveAT} at both the word and sentence levels. We employ BiLSTMs (bidirectional LSTMs) as both the word-level and the sentence-level encoders. We concatenate forward and backward hidden states to obtain sentence/paragraph representations. Subsequently, we employ a unidirectional LSTM unit as our decoder, which generates the target question one word at a time, conditioned on (i) all the words generated in the previous time steps and (ii) the encoded answer. The methodology employed in these modules is described next. \paragraph{Word-level Attention:} We use the LSTM decoder's previous hidden state and the word encoder's hidden state to compute attention over words (Figure~\ref{fig:fig1}).
We then concatenate the forward and backward hidden states of the BiLSTM encoder to obtain the final hidden state representation ($h_t$) at time step $t$. The representation ($h_t$) is calculated as: $\pmb{h}_{t} = \wenc(\mathbf{h}_{t-1}, [\pmb{e}_{t}, \pmb{f}^{w}_{t}])$, where $\pmb{e}_{t}$ represents the GloVe \citep{pennington-etal-2014-glove} embedded representation of word ($x_{i,j}$) at time step $t$ and $\pmb{f}^{w}_{t}$ is the embedded BIO feature for answer encoding.\\ The word-level attention ($\pmb{a}^{w}_{t}$) is computed as: $\pmb{a}^{w}_{t} = Softmax({[{u}^{w}_{t_i}]}^{M}_{i=1})$, where $M$ is the number of words, $ u^{w}_{t_i} = \pmb{v}^{T}_{w} \tanh(\mathbf{W}_{w} [\pmb{h}_{i},\pmb{d}_{t}])$, and $\pmb{d}_{t}$ is the decoder hidden state at time step $t$. \par We calculate the sentence representation ($\tilde{\pmb{s}}_i$) using the word-level encoder's hidden states as: $\tilde{\pmb{s}}_i= \frac{1}{|\pmb{x}_{i}|} \sum_{j} \pmb{r}_{i,j}$, where $\pmb{r}_{i,j}$ is the word encoder hidden state representation of the $j^{th}$ word of the $i^{th}$ sentence. \paragraph{Sentence-level Attention:} We feed the sentence representations $\tilde{\pmb{s}}$ to our sentence-level BiLSTM encoder ({\em c.f.} Figure~\ref{fig:fig1}). Similar to the word-level attention, we again compute an attention weight over every sentence in the input passage, using (i) the previous decoder hidden state and (ii) the sentence encoder's hidden state. As before, we concatenate the forward and backward hidden states of the sentence-level encoder to obtain the final hidden state representation. The hidden state ($\pmb{g}_{t}$) of the sentence-level encoder is computed as: $\pmb{g}_{t} = \senc(\pmb{g}_{t-1}, [\tilde{\pmb{s}}_{t}, \pmb{f}^{s}_{t}])$, where $\pmb{f}^{s}_{t}$ is the embedded feature vector denoting whether the sentence contains the encoded answer or not.
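The additive word-level attention above, $u^{w}_{t_i} = \pmb{v}^{T}_{w} \tanh(\mathbf{W}_{w} [\pmb{h}_{i},\pmb{d}_{t}])$ followed by a softmax, can be sketched in NumPy as follows. The shapes, the randomly initialized parameters, and the function names are illustrative assumptions, not the actual implementation.

```python
import numpy as np

def softmax(z):
    # numerically stable softmax
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def word_attention(H, d_t, W, v):
    """Additive attention over word-encoder states:
    u_i = v^T tanh(W [h_i; d_t]);  a = softmax([u_1, ..., u_M])."""
    scores = np.array([v @ np.tanh(W @ np.concatenate([h_i, d_t])) for h_i in H])
    return softmax(scores)

rng = np.random.default_rng(0)
M, dh, dd, da = 6, 8, 8, 10          # words, encoder dim, decoder dim, attention dim
H = rng.normal(size=(M, dh))         # word-encoder hidden states h_i
d_t = rng.normal(size=dd)            # decoder hidden state at step t
W = rng.normal(size=(da, dh + dd))   # learned in the model; random here
v = rng.normal(size=da)
a_w = word_attention(H, d_t, W, v)   # attention distribution over the M words
```

The sentence representation $\tilde{\pmb{s}}_i$ used downstream then corresponds to averaging the rows of `H` per sentence, matching the mean-pooling above.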
\par The selective sentence-level attention ($\pmb{a}^{s}_{t}$) is computed as: $\pmb{a}^{s}_{t} = Sparsemax({[{u}^{s}_{t_i}]}^{K}_{i=1})$, where $K$ is the number of sentences and $u^{s}_{t_i} = \pmb{v}^{T}_{s} \tanh(\mathbf{W}_{s} [\pmb{g}_{i},\pmb{d}_{t}])$. The final context ($\pmb{c}_t$) based on hierarchical selective attention is computed as: $\pmb{c}_{t} = \sum_{i} {a}^{s}_{t_i} \sum_{j}\overline{a}^{w}_{t_{i,j}} \pmb{r}_{i,j}$, where $\overline{a}^{w}_{t_{i,j}}$ is the word attention score obtained from $\pmb{a}^{w}_{t}$ corresponding to the $j^{th}$ word of the $i^{th}$ sentence. The context vector $\pmb{c}_t$ is fed to the decoder at time step $t$ along with the embedded representation of the previous output. \subsection{Dynamic Context in Transformer-\hpe \label{sec:htrans}} \begin{figure}[htb] \centering \includegraphics[scale=0.35]{hier_QG.pdf} \caption{Our hierarchical Transformer architecture.} \label{fig:Hierarchical transformer} \end{figure} In this second option ({\em c.f.} Figure~\ref{fig:Hierarchical transformer}), we make use of a Transformer decoder to generate the target question, one token at a time, from left to right. For generating the next token, the decoder attends to the previously generated tokens in the question, the encoded answer and the paragraph. We postulate that attention to the paragraph benefits from our hierarchical representation, described in Section \ref{sec:hpe}. That is, our model first identifies the relevance of the sentences, and then the relevance of the words within the sentences. This results in a hierarchical attention module (HATT) and its multi-head extension (MHATT), which replace the attention mechanism to the source in the Transformer decoder. We first explain the sentence and paragraph encoders (Section~\ref{sec:sentparaenc}) before moving on to the explanation of the decoder (Section~\ref{sec:decoder}) and the hierarchical attention modules (HATT and MHATT in Section~\ref{sec:mhatt}).
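Before detailing the Transformer modules, the hierarchical selective attention of the BiLSTM encoder (Section \ref{sec:hbilstm}) can be made concrete. The sketch below implements sparsemax as the simplex projection of Martins \& Astudillo (2016) and combines sentence- and word-level weights into $\pmb{c}_t$; all sizes are toy values, and the uniform word weights stand in for a learned $\pmb{a}^{w}_{t}$.

```python
import numpy as np

def sparsemax(z):
    """Sparsemax (Martins & Astudillo, 2016): Euclidean projection of the
    score vector onto the probability simplex; yields sparse weights."""
    z = np.asarray(z, dtype=float)
    zs = np.sort(z)[::-1]                    # scores in decreasing order
    css = np.cumsum(zs)
    k = np.arange(1, len(z) + 1)
    support = 1 + k * zs > css               # support condition (a prefix)
    k_z = k[support][-1]
    tau = (css[support][-1] - 1.0) / k_z     # threshold
    return np.maximum(z - tau, 0.0)

def hierarchical_context(a_s, a_w, R):
    """c_t = sum_i a^s_i * sum_j a^w_{i,j} r_{i,j}.
    a_s: (K,) sentence weights; a_w: list of (M_i,) word weights;
    R: list of (M_i, d) word-encoder states per sentence."""
    return sum(a_s[i] * (a_w[i] @ R[i]) for i in range(len(R)))

# toy paragraph: 3 sentences with 4, 5 and 3 words, hidden dimension d = 6
rng = np.random.default_rng(1)
R = [rng.normal(size=(m, 6)) for m in (4, 5, 3)]
a_s = sparsemax(rng.normal(size=3))          # sparse sentence-level attention
a_w = [np.ones(m) / m for m in (4, 5, 3)]    # uniform word attention for brevity
c_t = hierarchical_context(a_s, a_w, R)      # context vector fed to the decoder
```

Sparsemax, unlike softmax, can assign exactly zero weight to irrelevant sentences, which is why it is used at the sentence level.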
\subsubsection{Sentence and paragraph encoder \label{sec:sentparaenc}} The sentence encoder transformer maps an input sequence of word representations $\textbf{x} = (x_0,\cdots,x_n)$ to a sequence of continuous sentence representations $\textbf{r} = (r_0,\cdots,r_n)$. The paragraph encoder takes the concatenation of the word representations of the start word and end word as input and returns a paragraph representation. Each encoder layer is composed of two sub-layers, namely, a multi-head self-attention layer (Section~\ref{sec:mhatt}) and a position-wise fully connected feed-forward neural network (Section~\ref{sec:posenc}). To be able to effectively describe these modules, we will benefit first from a description of the decoder (Section~\ref{sec:decoder}). \subsubsection{Decoder} \label{sec:decoder} The decoder stack is similar to the encoder stack, except that it has an additional sub-layer (the encoder-decoder attention layer), which learns multi-head attention over the output of the paragraph encoder. The output of the paragraph encoder is transformed into a set of attention vectors $K_{encdec}$ and $V_{encdec}$. The encoder-decoder attention layer of the decoder takes the key $K_{encdec}$ and value $V_{encdec}$. The decoder stack outputs a vector of floats, which is fed to a linear layer followed by a softmax layer to obtain the probability of generating each target word. \subsubsection{The HATT and MHATT Modules} \label{sec:mhatt} Let us assume that the question decoder needs to attend to the source paragraph during the generation process.
To attend to the hierarchical paragraph representation, we replace the multi-head attention mechanism (to the source) in the Transformer by introducing a new multi-head hierarchical attention module $MHATT(q^s, K^s, q^w, K^w, V^w)$, where $q^s$ is the sentence-level query vector, $q^w$ is the word-level query vector, $K^s$ is the key matrix for the sentences of the paragraph, $K^w$ is the key matrix for the words of the paragraph, and $V^w$ is the value matrix for the words of the paragraph. The sentence-level query $q^s$ and word-level query $q^w$ are created using non-linear transformations of the state of the decoder $\pmb{h}_{t-1}$, i.e., the input vector to the softmax function when generating the previous word $w_{t-1}$ of the question. The matrices for the sentence-level key $K^s$ and word-level key $K^w$ are created using the outputs of the sentence-level and word-level encoders, respectively. We take the input vector to the softmax function, $\pmb{h}_{t-1}$, when the $t$-th word in the question is being generated. Firstly, this module attends to the paragraph sentences using their keys and the sentence query vector: $\pmb{a} = softmax(q^s K^s/d)$. Here, $d$ is the dimension of the query/key vectors; the dimension of the resulting attention vector is the number of sentences in the paragraph. Secondly, it computes an attention vector for the words of each sentence: $\pmb{b}_i = softmax(q^w K^i_w/d)$. Here, $K^i_w$ is the key matrix for the words in the $i$-th sentence; the dimension of the resulting attention vector $\pmb{b}_i$ is the number of tokens in the $i$-th sentence.
Lastly, the context vector is computed using the word values, weighted by their sentence-level and word-level attentions: $$HATT(q^s, K^s, q^w, K^w, V^w) = \sum_{i=1}^{|\pmb{a}|} a_i \big( \pmb{b}_i \cdot V^w_i \big)$$ Attention in the MHATT module is calculated as: \begin{equation} \textit{Attention}(Q_w,K_w,V_w) = \textit{softmax}\left(\frac{Q_wK_w^{T}}{\sqrt{d_{k}}}\right)V_w \end{equation} where \textit{Attention}$(Q_w,K_w,V_w)$ is a reformulation of the scaled dot-product attention of \citep{vaswani2017attention}. For multiple heads, the multi-head attention $\textbf{z}= \textit{Multihead}(Q_w,K_w,V_w)$ is calculated as: \begin{equation} \textit{Multihead}(Q_w,K_w,V_w) = \textit{Concat}(h_1,h_2,\ldots,h_n)W^O \end{equation} where $h_i = \textit{Attention}(Q_wW_i^Q,K_wW_i^K,V_wW_i^V)$, $W_i^Q\in R^{d_{model}\times d_k}$, $W_i^K \in R^{d_{model}\times d_k}$, $W_i^V \in R^{d_{model}\times d_v}$, $W^O \in R^{hd_v \times d_{model}}$, and $d_k=d_v=d_{model}/h=64$.\\ \textbf{z} is fed to a position-wise fully connected feed-forward neural network to obtain the final input representation. \subsubsection{Position-wise Fully Connected Feed-Forward Neural Network \label{sec:posenc}} The output of the HATT module is passed to a fully connected feed-forward neural network (FFNN) to calculate the hierarchical representation of the input (\textbf{r}) as: $\textbf{r} = FFNN(x) = (\max(0,xW_1+b_1))W_2+b_2$, where \textbf{r} is fed as input to the next encoder layers. The final representation \textbf{r} from the last layer of the decoder is fed to a linear layer followed by a softmax layer to calculate the output probabilities. \section{Experimental Setup} \subsection{Datasets} We performed all our experiments on the publicly available SQuAD \citep{rajpurkar2016squad} and MS MARCO~\citep{nguyen2016ms} datasets. SQuAD contains 536 Wikipedia articles and more than 100K questions posed about the articles by crowd-workers.
We split the SQuAD train set by the ratio 90\%-10\% into train and dev sets and take the SQuAD dev set as our test set for evaluation. We take an entire paragraph in each train/test instance as input in all our experiments. MS MARCO contains passages that are retrieved from web documents and the questions are anonymized versions of BING queries. We take a subset of the MS MARCO v1.1 dataset containing questions that are answerable from at least one paragraph. We split the train set as 90\%-10\% into train (71k) and dev (8k) sets and take the dev set as the test set (9.5k). Our split is the same, but our dataset also contains (paragraph, question) tuples whose answers are not a subspan of the paragraph, thus making our task more difficult. \subsection{Evaluation metrics} For evaluating our question generation model we report the standard metrics, {\em viz.}, BLEU~\citep{papineni2002bleu} and ROUGE-L~\citep{Lin2004ROUGEAP}. We performed human evaluation to further analyze the quality of the questions generated by all the models. We analyzed the quality of the generated questions with respect to (a) syntactic correctness, (b) semantic correctness, and (c) relevance to the given paragraph. \subsection{Models} We compare the QG results of our hierarchical LSTM and hierarchical Transformer with their \emph{flat} counterparts. We describe our models below: \noindent \textbf{Seq2Seq + att + AE} is the attention-based sequence-to-sequence model with a BiLSTM encoder, answer encoding and an LSTM decoder. \noindent \textbf{HierSeq2Seq + AE} is the hierarchical BiLSTM model with a BiLSTM sentence encoder, a BiLSTM paragraph encoder and an LSTM decoder conditioned on the encoded answer. \noindent \textbf{TransSeq2Seq + AE} is a Transformer-based sequence-to-sequence model with a Transformer encoder followed by a Transformer decoder conditioned on the encoded answer.
\noindent \textbf{HierTransSeq2Seq + AE} is the hierarchical Transformer model with a Transformer sentence encoder, a Transformer paragraph encoder followed by a Transformer decoder conditioned on the encoded answer. \begin{table*}[!htb] \centering \begin{tabular}{| l | c | c | c | c |c | c | } \hline Model& BLEU-1 & BLEU-2 & BLEU-3 & BLEU-4 &ROUGE-L \\ \hline Seq2Seq + att + AE & 52.86 &29.02 & 17.06& 10.26 &38.17 \\ TransSeq2Seq + AE & 42.07 & 22.03 & 12.33 & 7.45 & 36.77\\ \hline HierSeq2Seq + AE & 54.36 &\textbf{30.62} &\textbf{18.43} &\textbf{11.50} & 38.83\\ HierTransSeq2Seq + AE & \textbf{54.49} & 29.79 &17.45 &10.80 & \textbf{41.13}\\ \hline \end{tabular} \caption{Automatic evaluation results on the SQuAD dataset. For each metric, the best result is \textbf{bolded}.\label{tab:res_squad}} \end{table*} \begin{table*}[!htb] \centering \begin{tabular}{| l | c | c | c | c |c | c | } \hline Model& BLEU-1 & BLEU-2 & BLEU-3 & BLEU-4 &ROUGE-L \\ \hline Seq2Seq + att + AE & 35.90 &23.52 & 15.15& 10.10 &31.05 \\ TransSeq2Seq + AE & 24.90 & 15.32 &9.46 &6.13 & 30.76\\ \hline HierSeq2Seq + AE & \textbf{38.08} &\textbf{25.33} &\textbf{16.48} &\textbf{11.13} &\textbf{32.82}\\ HierTransSeq2Seq + AE & 31.49 & 20.05 & 12.60 & 8.68 &31.88\\ \hline \end{tabular} \caption{Automatic evaluation results on the MS MARCO dataset.
For each metric, the best result is \textbf{bolded}.\label{tab:res_msmarco}} \end{table*} \begin{table*}[!htb] \begin{center} \begin{tabular}{| l | c | c | c | c |c | c | } \hline \multirow{2}{*}{Model}& \multicolumn{2}{c|}{Syntax} & \multicolumn{2}{c|}{Semantics} & \multicolumn{2}{c|}{Relevance} \\ \cline{2-7} & Score & Kappa & Score & Kappa & Score & Kappa \\ \hline Seq2Seq + att + AE &86 &0.57 & 79.33 &0.61 &70.66 & 0.56 \\ TransSeq2Seq + AE &86 &0.66 & 84 &0.62 &50 & 0.60 \\ \hline HierSeq2Seq + AE &80 &0.49 & 73.33 &0.54 &\textbf{81.33} & 0.64 \\ HierTransSeq2Seq + AE &\textbf{90} &0.62 & \textbf{85.33} &0.65 &68 & 0.56 \\ \hline \end{tabular} \end{center} \caption{Human evaluation results (column ``Score'') as well as inter-rater agreement (column ``Kappa'') for each model on the SQuAD test set. The scores are between 0-100, 0 being the worst and 100 being the best. Best results for each metric (column) are \textbf{bolded}. The three evaluation criteria are: (1) syntactically correct ({Syntax}), (2) semantically correct ({Semantics}), and (3) relevant to the text ({Relevance}).} \label{heresults_squad} \end{table*} \begin{table*}[!htb] \begin{center} \begin{tabular}{| l | c | c | c | c |c | c | } \hline \multirow{2}{*}{Model}& \multicolumn{2}{c|}{Syntax} & \multicolumn{2}{c|}{Semantics} & \multicolumn{2}{c|}{Relevance} \\ \cline{2-7} & Score & Kappa & Score & Kappa & Score & Kappa \\ \hline Seq2Seq + att + AE &83.33 &0.68 & 69.33 & 0.65 &38.66 & 0.52 \\ TransSeq2Seq + AE &80.66 &0.73 & 73.33 &0.55 &35.33 & 0.47 \\ \hline HierSeq2Seq + AE &85.33 &0.73 & 70.66 &0.68 &\textbf{51.33} & 0.60 \\ HierTransSeq2Seq + AE &\textbf{86} &0.87 & \textbf{73.33} &0.65 &32.66 & 0.60 \\ \hline \end{tabular} \end{center} \caption{Human evaluation results (column ``Score'') as well as inter-rater agreement (column ``Kappa'') for each model on the MS MARCO test set. The scores are between 0-100, 0 being the worst and 100 being the best.
Best results for each metric (column) are \textbf{bolded}. The three evaluation criteria are: (1) syntactically correct ({Syntax}), (2) semantically correct ({Semantics}), and (3) relevant to the text ({Relevance}).} \label{heresults_msmarco} \end{table*} \section{Results and discussion} In Table \ref{tab:res_squad} and Table \ref{tab:res_msmarco} we present automatic evaluation results of all models on the SQuAD and MS MARCO datasets, respectively. We present human evaluation results in Table \ref{heresults_squad} and Table \ref{heresults_msmarco}, respectively. A number of interesting observations can be made from the automatic evaluation results in Table \ref{tab:res_squad} and Table \ref{tab:res_msmarco}: \begin{itemize} \item Overall, the hierarchical BiLSTM model HierSeq2Seq + AE shows the best performance, achieving the best results on the BLEU2--BLEU4 metrics on the SQuAD dataset, whereas the hierarchical Transformer model HierTransSeq2Seq + AE performs best on BLEU1 and ROUGE-L on the SQuAD dataset. \item Compared to the flat LSTM and Transformer models, their respective hierarchical counterparts always perform better on both the SQuAD and MS MARCO datasets. \item On the MS MARCO dataset, we observe the best consistent performance using the hierarchical BiLSTM model on all automatic evaluation metrics. \item On the MS MARCO dataset, the two LSTM-based models outperform the two Transformer-based models. \end{itemize} Interestingly, the human evaluation results, as tabulated in Table \ref{heresults_squad} and Table \ref{heresults_msmarco}, demonstrate that the hierarchical Transformer model HierTransSeq2Seq + AE outperforms all the other models on both datasets in both syntactic and semantic correctness. However, the hierarchical BiLSTM model HierSeq2Seq + AE achieves the best, and significantly better, relevance scores on both datasets.
From the evaluation results, we can see that our proposed hierarchical models demonstrate significant benefits over their respective flat counterparts. Thus, for paragraph-level question generation, the hierarchical representation of paragraphs is a worthy pursuit. Moreover, the Transformer architecture shows great potential over the more traditional RNN models such as BiLSTM, as shown in the human evaluation. Thus the continued investigation of hierarchical Transformers is a promising research avenue. In the Appendix, in Section B, we present several examples that illustrate the effectiveness of our hierarchical models. In Section C of the Appendix, we present some failure cases of our model, along with plausible explanations. \section{Conclusion} We proposed two hierarchical models for the challenging task of question generation from paragraphs, one of which is based on a hierarchical BiLSTM model and the other is a novel hierarchical Transformer architecture. We performed extensive experimental evaluation on the SQuAD and MS MARCO datasets using standard metrics. Results demonstrate the hierarchical representations to be overall much more effective than their flat counterparts. The hierarchical models for both Transformer and BiLSTM clearly outperform their flat counterparts on all metrics in almost all cases. Further, our experimental results validate that hierarchical selective attention benefits the hierarchical BiLSTM model. Qualitatively, our hierarchical models also exhibit better capability of generating fluent and relevant questions.
1905.03110
\section{Introduction} Gaussian mixture models provide accurate estimates of free energy landscapes~\cite{westerlund_inference_2018}. Determining metastable core states within a protein's free energy landscape is key to obtaining important biological insights. However, extracting such states from molecular dynamics (MD) simulations with conventional clustering methods is far from straightforward. First of all, we are interested in the metastable configurations at free energy minima, the so-called core states. Since proteins move continuously as they explore free energy landscapes, it is difficult to assess an exact state boundary. Moreover, configurations on transition pathways between metastable states generally contribute to noise when characterizing these states. On top of this, the original data is high dimensional, and the necessary dimensionality reduction results in poorly separated states. Finally, the number of metastable core states is typically not known a priori. Thus, to robustly characterize states without any knowledge of the conformational ensemble, we need a clustering method that is solely based on the data. Many popular clustering methods are based on simple geometric criteria~\cite{sorensen_method_1948, sneath_application_1957,ward_hierarchical_1963,macqueen_methods_1967}. K-means and agglomerative-Ward, for example, attempt to minimize the within-cluster variance. They work very well on datasets with well-separated spherical clusters, but fail when these assumptions are not met. Spectral clustering~\cite{ng_spectral_2001}, on the other hand, can accurately assign labels to nonconvex clusters by performing spectral embedding prior to K-means clustering. The spectral embedding involves learning the data manifold using local neighborhoods around data points. 
In general, geometric clustering methods assign labels to all points and may not accurately identify the boundary between states at the free energy barrier, which leads to noisy state definitions. An idea is to use the data density to identify clusters and make cuts at free energy barriers, as done by for example Hierarchical DBSCAN~\cite{campello_density-based_2013}, density peaks advanced clustering~\cite{rodriguez_clustering_2014,derrico_automatic_2018}, and robust density clustering~\cite{sittel_robust_2016}. Although the idea may appear simple, there are still problems to address. The choice of density basis functions, for example, greatly affects density estimation; local discrete basis functions tend to overfit the density and therefore yield poor estimates in sparsely sampled regions~\cite{westerlund_inference_2018}. Part of the introduced error can be decreased by either using adaptive basis functions~\cite{rodriguez_computing_2018}, or by optimizing the model on the full dataset, as done by Gaussian mixture models (GMMs)~\cite{dempster_EM_1977,bishop_pattern_GMM_2011}. A Gaussian mixture model, however, rests on the assumption of Gaussian shaped clusters. The number of Gaussian components is usually chosen based on how well the model fits the density, which does not necessarily coincide with the number of clusters. Therefore, methods for merging components in GMM to find the correct number of clusters have been proposed~\cite{hennig_merging_2010}. Another problem is the definition of core state boundaries, which typically are determined with a chosen cutoff~\cite{ester_density_based_1996,rodriguez_clustering_2014}. Such a cutoff does, however, not account for the possibly varying structure and hierarchical nature of a protein free energy landscape. In this paper, we propose a clustering method that makes minimal assumptions about cluster shapes or dataset structure. We call it the inflection core state (InfleCS) clustering. 
The functional form of the density landscape estimated from Gaussian mixture models is exploited to extract well-defined core states. We show that InfleCS outperforms conventional methods on three different types of toy models, describe its properties with various challenging datasets, and use it to characterize the conformational landscape spanned by molecular simulations of Ca\textsuperscript{2+}-bound Calmodulin. \section{Clustering Gaussian mixture free energy landscapes} The density landscape obtained from a Gaussian mixture model (GMM) estimator~\cite{dempster_EM_1977,westerlund_inference_2018} is used to extract core states with InfleCS. These core states correspond to metastable states in a free energy landscape along collective variables (CVs), $x\in \mathbb{R}^{N_\text{dims}}$, \begin{equation} G(x) = -k_BT\log \rho_{\boldsymbol{a},\boldsymbol{\mu},\boldsymbol{\Sigma}}(x), \label{eq:free_energy} \end{equation} where $\rho_{\boldsymbol{a},\boldsymbol{\mu},\boldsymbol{\Sigma}}(x)$ is the Gaussian mixture density at $x$. \subsection{Gaussian mixture model density estimation} A Gaussian mixture density is described by a sum of Gaussians with individual amplitudes, $\boldsymbol{a}:= (a_i)_{i=1}^{N_\text{basis}}$, means $\boldsymbol{\mu}:=(\mu_i)_{i=1}^{N_\text{basis}}$ and covariances $\boldsymbol{\Sigma}:=(\Sigma_i)_{i=1}^{N_\text{basis}}$, \begin{equation} \rho_{\boldsymbol{a},\boldsymbol{\mu},\boldsymbol{\Sigma}}(x) = \sum\limits_{i=1}^{N_{\text{basis}}} a_i \; \mathcal{N}(x|\; \mu_i,\; \Sigma_i), \label{eq:GMM_density} \end{equation} where $\mathcal{N}(x|\; \mu_i,\; \Sigma_i) = \frac{1}{\sqrt{(2\pi)^{N_\text{dims}} |\Sigma_i |}} e^{\hat{f}_i}$ is a Gaussian with inner function $\hat{f}_i = -\frac{(x-\mu_i)^T \Sigma_i^{-1}(x-\mu_i)}{2}$. The parameters of Gaussian mixture models are optimized iteratively with expectation-maximization~\cite{dempster_EM_1977,scikit-learn}.
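As a concrete illustration, Eqs.~\ref{eq:free_energy} and \ref{eq:GMM_density} can be transcribed directly; the NumPy sketch below uses toy mixture parameters and sets $k_BT=1$ in arbitrary units.

```python
import numpy as np

def gmm_density(x, amps, means, covs):
    """Gaussian mixture density rho(x) = sum_i a_i N(x | mu_i, Sigma_i),
    evaluated at points x of shape (n, d). A direct transcription of Eq. (2)."""
    x = np.atleast_2d(x)
    d = x.shape[1]
    rho = np.zeros(len(x))
    for a, mu, S in zip(amps, means, covs):
        Sinv, det = np.linalg.inv(S), np.linalg.det(S)
        diff = x - mu
        f_hat = -0.5 * np.einsum("nd,de,ne->n", diff, Sinv, diff)  # inner function
        rho += a * np.exp(f_hat) / np.sqrt((2 * np.pi) ** d * det)
    return rho

def free_energy(x, amps, means, covs, kBT=1.0):
    """G(x) = -k_B T log rho(x), Eq. (1); kBT in arbitrary units."""
    return -kBT * np.log(gmm_density(x, amps, means, covs))

# toy 1-D, two-component mixture with equal weights
amps = [0.5, 0.5]
means = [np.array([0.0]), np.array([3.0])]
covs = [np.eye(1), np.eye(1)]
xs = np.linspace(-2, 5, 50)[:, None]
G = free_energy(xs, amps, means, covs)   # free energy minima near the two modes
```

The two free energy minima sit at the mixture modes, separated by a barrier at the density minimum between them.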
The log-likelihood, $\mathcal{L}$, of the training data increases with the number of parameters. At some point, however, adding more parameters will lead to overfitting. Contrary to what was done in our original GMM free energy estimator~\cite{westerlund_inference_2018}, the number of Gaussian components, $N_\text{basis}$, that allows for a detailed description of the density without overfitting is selected using the Bayesian information criterion (BIC)~\cite{mclaclan_number_2014, schwarz_BIC_1978}. BIC adds a penalty to the log-likelihood that grows with model complexity, \begin{equation} I_{BIC} = \log(N_{\text{points}}) N_{\text{param}} - 2\mathcal{L}. \end{equation} To select a final model, we fit multiple GMMs to the data for a given range of $N_\text{basis}$ values and evaluate the $I_{BIC}$ corresponding to each GMM. The model that gives rise to the smallest $I_{BIC}$ is ultimately chosen as the one with the best fit. \begin{figure} \centering \includegraphics[width=0.6\paperwidth]{clustering_algorithm.pdf} \caption{An illustration of the InfleCS clustering method. First, the density Hessian is computed at all points. This is used to identify core state points. Connected graphs, or components, of core state points are then built using spatial proximity between core state points and transition state points. Finally, the connected components are extracted and cluster labels are assigned to the points in these components.} \label{fig:second_derivative} \end{figure} \subsection{Extracting core states from the density landscape} Each point in a free energy landscape belongs either to a metastable core state or to a transition state. To analyze the conformational ensemble, we seek well-defined core states. Such core states are easily extracted from maxima of the estimated density landscape by exploiting its functional definition. The cutoff between core state and transition state is taken at the density inflection point.
Figure~\ref{fig:second_derivative} outlines the steps involved in the clustering method. In Figure~\ref{fig:second_derivative} a), a plot of a 1-dimensional 2-component Gaussian mixture free energy landscape is shown. The density second-order derivative values are displayed by colors. All points with a negative density second derivative are labeled as core state points, while the rest are labeled as transition points. Islands of core state points are isolated by transition points, Figure~\ref{fig:second_derivative} a,b). Two core state points within the same free energy minimum are connected by requiring that no transition point lies closer to both of them. This creates connected components, which are extracted by assigning the same cluster label to all points within the same connected component, Figure~\ref{fig:second_derivative} c,d). To generalize the clustering to $N_\text{dims}$ dimensions, we derive an expression for the density Hessian (matrix of second-order partial derivatives) with respect to the CVs. The partial derivative of a Gaussian mixture density, Eq.~\ref{eq:GMM_density}, with respect to the $d$th CV, $x_d$, is \begin{equation} \frac{\partial }{\partial x_d} \rho_{\boldsymbol{a},\boldsymbol{\mu},\boldsymbol{\Sigma}} = \sum\limits_{i=1}^{N_{\text{basis}}}a_i\;\mathcal{N}(x|\; \mu_i,\; \Sigma_i)\frac{\partial \hat{f}_i}{\partial x_d}, \end{equation} where $\frac{\partial \hat{f}_i}{\partial x_d}$ is the $d$th element of the inner function gradient, $\nabla \hat{f_i} = -\Sigma_i^{-1}(x-\mu_i)$.
From this we obtain an expression for the ($d$,$d'$)th element of the GMM density Hessian, \begin{equation} \frac{\partial^2 }{\partial x_d \partial x_{d'}} \rho_{\boldsymbol{a},\; \boldsymbol{\mu}, \; \boldsymbol{\Sigma}} = \sum\limits_{i=1}^{N_\text{basis}} a_i \; \mathcal{N}(x | \; \mu_i, \; \Sigma_i)\bigg( \frac{\partial^2 \hat{f}_i}{\partial x_d \partial x_{d'}} + \frac{\partial \hat{f}_i}{\partial x_d}\frac{\partial \hat{f}_i}{\partial x_{d'}} \bigg), \end{equation} where $ \frac{\partial^2 \hat{f}_i}{\partial x_d \partial x_{d'}} = -\Sigma^{-1}_{i,(d,d')}$. The Hessian reflects the curvature of the landscape, such that a point belongs to a metastable core state if the Hessian is negative definite. Since the Hessian is symmetric, this is equivalent to all its eigenvalues being negative. Thus, the shape of the density landscape is used to label each point as core state or transition state. The continuous definition of the density makes it possible to carry out the clustering on a grid instead of the sampled data, and subsequently use the grid clustering to assign labels to the sampled data. This is done by computing the Hessian at each grid cell center and marking each cell as core state or transition state, followed by the identification and extraction of connected components. Each sampled data point is then given the cluster label of the closest grid cell. This makes the core state extraction independent of the number of data points. The grid resolution mainly affects computational efficiency, and can be determined by the user, Figure S1. Here, we specify the resolution explicitly when a grid is used. Transition points are left unassigned when identifying core states. For full clustering, the transition points are first sorted in order of descending density and one-by-one assigned to the closest cluster. The highest-density point of a cluster is taken as its cluster center.
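A minimal sketch of the core state criterion above — the analytic Hessian of the mixture density followed by the negative-definiteness test — for a hand-specified two-component mixture (the mixture parameters are illustrative):

```python
import numpy as np
from scipy.stats import multivariate_normal

def gmm_hessian(x, amplitudes, means, covariances):
    """Density Hessian of a Gaussian mixture at point x (see text)."""
    x = np.asarray(x, dtype=float)
    H = np.zeros((x.size, x.size))
    for a, mu, cov in zip(amplitudes, means, covariances):
        inv = np.linalg.inv(cov)
        grad_f = -inv @ (x - mu)  # inner-function gradient
        dens = a * multivariate_normal.pdf(x, mean=mu, cov=cov)
        # d2f/dx_d dx_d' = -inv, plus the outer product of the gradient.
        H += dens * (-inv + np.outer(grad_f, grad_f))
    return H

def is_core_state(x, amplitudes, means, covariances):
    """Negative-definite Hessian (all eigenvalues < 0) => core state."""
    eig = np.linalg.eigvalsh(gmm_hessian(x, amplitudes, means, covariances))
    return bool(np.all(eig < 0))

# Illustrative two-component mixture with density maxima near the means
# and a saddle point between them.
a = [0.5, 0.5]
mu = [np.zeros(2), np.array([3.0, 0.0])]
cov = [np.eye(2), np.eye(2)]
```

Evaluating `is_core_state` on grid cell centers reproduces the core/transition labeling step of the method; the midpoint between the two means fails the test because the density curves upward along the barrier direction.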
\subsection{Population of states} To quantify the relative size of metastable states, we estimate the population of states, $\pi$. It reports on the probability of observing a configuration in any of the metastable states. The probability of the $k$th cluster is computed by integrating the density over its spanned volume, $V_k$, \begin{equation} \pi_k = \int\limits_{V_k} \rho_{\boldsymbol{a},\boldsymbol{\mu},\boldsymbol{\Sigma}}(x) \text{d} x = \int \limits_{X} I(x\in V_k) \rho_{\boldsymbol{a},\boldsymbol{\mu},\boldsymbol{\Sigma}}(x) \text{d} x. \end{equation} Here, $X$ is the full density domain and $I(x\in V_k)$ is an indicator function which is unity if $x\in V_k$, and zero otherwise. The integral is approximated with Monte Carlo integration using $10^5$ points sampled from the density landscape. \section{Methods} \subsection{Conventional and state-of-the-art clustering methods} To evaluate the performance and properties of InfleCS, its full clustering is compared to K-means, agglomerative-Ward, spectral clustering, hierarchical DBSCAN (HDBSCAN), density peaks advanced (DPA), robust density clustering (RDC), and canonical GMM. Since the number of clusters is usually not known in real-world datasets, we use simple data-driven heuristics to determine it. Clustering and heuristics for K-means, agglomerative-Ward and GMM are obtained with scikit-learn~\cite{scikit-learn}. HDBSCAN is performed with the Python package HDBSCAN~\cite{McInnes_hdbscan_2017}, while DPA and RDC are performed with tools from the original authors~\cite{derrico_automatic_2018,sittel_robust_2016}. \subsubsection{K-means} K-means clustering repeatedly assigns each point to its nearest cluster center and updates each center to the centroid of its assigned points. The silhouette score~\cite{rousseeuw_silhouettes_1987} is used to select the number of clusters.
A high silhouette score indicates small within-cluster distances and large distances to the closest cluster, and thus a good partitioning of spherical clusters. \subsubsection{Agglomerative-Ward} Agglomerative-Ward (AW) clustering initially treats all data points as separate clusters. In each iteration, two clusters are merged to minimize within-cluster variance. This is similar in spirit to K-means, but the merges are greedy and irrevocable, whereas K-means iteratively refines its assignments. Just as for K-means, we determine the number of clusters with the silhouette score. \subsubsection{Spectral clustering} Spectral clustering makes use of local relationships by passing the data through a Gaussian kernel. Here, the Gaussian standard deviation is set to the maximum nearest neighbor distance to ensure that no point is disconnected. The processed data is used to create a random walk matrix, from which the eigenvectors corresponding to the $K$ largest eigenvalues are identified. The row-normalized matrix of eigenvectors represents the embedding on a $K$-dimensional hypersphere. The embedded points are then clustered with K-means. The silhouette score is not easily applied to spectral clustering because the clustering is done in different spaces of spectral embeddings, which requires a non-trivial normalization~\cite{ponzoni_spectrus:_2015}. Instead, the largest eigengap is used to determine the number of clusters. \subsubsection{Hierarchical DBSCAN} Hierarchical DBSCAN (HDBSCAN) extends the conventional DBSCAN clustering~\cite{ester_density_based_1996}, where core points are defined as points with at least $N_\text{neighbors}$ neighbors within a cutoff, $\varepsilon$. Connected components defining the clusters are built by connecting two core points if they belong to each other's $\varepsilon$-neighborhood.
Because the fixed cutoff makes it difficult to identify clusters of different densities, HDBSCAN~\cite{campello_density-based_2013,McInnes_hdbscan_2017} instead hierarchically represents the DBSCAN partitions of varying $\varepsilon$ in a dendrogram. Stable clusters are then extracted through local cluster tree cuts~\cite{hartigan_consistency_1981}. To successfully extract clusters, a minimum allowed cluster size is required. It takes the same value as $N_\text{neighbors}$. The default value of $N_\text{neighbors}=5$ is used on the toy model datasets. \subsubsection{Density peaks advanced clustering} The original density peaks method~\cite{rodriguez_clustering_2014} is based on density estimation with local discrete basis functions. A decision graph is used to pick cluster centers with relatively high density and large distance to the closest point of higher density. The remaining points are then assigned to the closest cluster in order of decreasing density. This subjective picking of cluster centers, however, can be hampered by ambiguity, Figure~S2. Therefore, the density peaks advanced (DPA)~\cite{derrico_automatic_2018} clustering instead represents the data hierarchically and extracts clusters based on the data and the significance of peaks. Whether or not a peak is significant is dictated by the parameter $Z$. A lower $Z$ value yields more sensitivity to density variations, which may result in identifying false clusters where the finite sampling leads to spurious fluctuations. A higher value of $Z$, on the other hand, decreases the sensitivity to density fluctuations, but may instead result in unidentified clusters. To pick $Z$, we plot the resulting number of clusters as a function of $Z$-values and choose $Z$ where the number of clusters plateaus, Figure~S3.
In addition to the hierarchical representation, DPA uses a point adaptive $k$ nearest neighbor (PA$k$~\cite{rodriguez_computing_2018}) estimator to estimate the density and free energy along the data manifold. This requires a method to accurately estimate the intrinsic dimension of the dataset~\cite{granata_accurate_2016,facco_estimating_2017}. To carry out the full DPA pipeline, we used the TWO-NN method~\cite{facco_estimating_2017}. \subsubsection{Robust density clustering} The robust density clustering~\cite{sittel_robust_2016} (RDC) was developed for clustering free energy intermediate states. First, the density is estimated by counting the number of points within a radius $R$ of each point. This yields a free energy estimate of each point. Starting with a low free energy cutoff, all points below the cutoff that are within a distance, $d_\text{lump}$, of each other are joined. The free energy cutoff is then iteratively increased by 0.1 $k_BT$ until all points are lumped together. This lumping procedure yields a separation of the clusters at free energy barriers. The lumping threshold, $d_\text{lump}$, is set to twice the mean of nearest neighbor distances~\cite{sittel_robust_2016}. Furthermore, guided by recent developments, we set $R=d_\text{lump}$~\cite{nagel_dynamical_2019}. Lastly, a minimum cluster size is required. To pick this parameter, we plot the resulting number of clusters as a function of the parameter, and choose a value where the number of clusters plateaus~\cite{nagel_dynamical_2019}, Figure~S4. \subsubsection{Canonical Gaussian mixture model clustering} Canonical Gaussian mixture model clustering is done by fitting a Gaussian mixture density to the data, where each Gaussian component represents a cluster. A data point is assigned the cluster label corresponding to the component that contributes the most to its density. The number of components, and thus number of clusters, is chosen with BIC~\cite{mclaclan_number_2014, schwarz_BIC_1978}. 
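The canonical GMM clustering baseline, including the BIC-based choice of the number of components, can be sketched with scikit-learn as follows; the three-blob dataset and the scanned component range are illustrative, not from the paper.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Three well-separated Gaussian blobs (illustrative data).
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(center, 0.2, size=(300, 2))
               for center in ([0.0, 0.0], [2.0, 0.0], [1.0, 2.0])])

# Fit candidate models and pick the one with the smallest BIC.
models = [GaussianMixture(n_components=k, random_state=0).fit(X)
          for k in range(2, 8)]
best = min(models, key=lambda m: m.bic(X))

# Each component is one cluster: predict() assigns every point to the
# component contributing most to its density at that point.
labels = best.predict(X)
n_clusters = best.n_components
```

In contrast to InfleCS, this baseline equates clusters with individual Gaussian components, so a non-Gaussian state described by several components is split into several clusters.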
\subsection{Evaluating clustering on toy models} Three two-dimensional toy models are used to compare and quantify the performance of the different clustering methods. The first toy model is a dataset with seven Gaussian clusters. The clusters are non-equidistantly spaced and have different densities. The second dataset consists of three well-separated clusters, with one clearly non-Gaussian cluster. The third toy model attempts to mimic a real-world dataset with three poorly separated and nonlinear clusters of different densities and sizes. For each toy model, the minimum number of clusters (or Gaussian components) was set to 2, and the maximum to 15, for K-means, AW, spectral clustering, GMM and InfleCS. Clustering quality can be assessed by computing the fraction of clustered points that originate from the same true class. This is the homogeneity score. A maximum homogeneity score is reached when the clustering is perfect, but also if points from one true class are divided into more than one cluster. A remedy is to instead report the fraction of points from a true class that belong to a single cluster, the completeness score. However, a perfect score is then obtained if points from different true classes are assigned to the same cluster. Since this is complementary to the homogeneity score, we use the harmonic mean of the homogeneity and completeness scores, the so-called V-measure~\cite{scikit-learn,rosenberg_vmeasure_2007}, to evaluate clustering quality. It gives a score between zero and one, where one indicates perfect clustering. To gather statistics, we repeat the sampling, clustering and V-measure evaluation 50 times for each toy model and clustering method. To further characterize InfleCS in terms of computational time and accuracy depending on the amount of added noise, number of data points, dimensionality and number of grid cells, we use simple clustering datasets (``blobs'', scikit-learn~\cite{scikit-learn}).
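The homogeneity, completeness and V-measure scores described above are available in scikit-learn; the small hypothetical example below shows how splitting one true class lowers completeness while homogeneity stays perfect:

```python
from sklearn.metrics import (completeness_score, homogeneity_score,
                             v_measure_score)

# Hypothetical labels: the clustering splits true class 0 into two
# clusters but never mixes points from different classes.
truth = [0, 0, 0, 0, 1, 1, 1, 1]
clust = [0, 0, 2, 2, 1, 1, 1, 1]

h = homogeneity_score(truth, clust)   # every cluster is pure
c = completeness_score(truth, clust)  # < 1: class 0 is split
v = v_measure_score(truth, clust)     # harmonic mean of h and c
```

Because the V-measure lies between homogeneity and completeness, both failure modes (splitting a class and merging classes) lower the final score.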
We also used a nonconvex cluster dataset (``moons'', scikit-learn~\cite{scikit-learn}) to highlight InfleCS properties. \subsection{Molecular simulations of Ca\textsuperscript{2+}-bound Calmodulin} We apply InfleCS to an ensemble of Ca\textsuperscript{2+}-bound Calmodulin (CaM) configurations~\cite{stevens_calmodulin_1983}, with the minimum number of Gaussian components set to 10 and the maximum to 25. CaM consists of 148 residues arranged in eight helices and three domains; the N-terminal and C-terminal lobes, as well as the flexible linker between them. The two lobes have two EF-hand motifs each, enabling CaM to bind four Ca\textsuperscript{2+} ions. The helices are named from A to H, from N- to C-terminus. In the Ca\textsuperscript{2+}-bound state, CaM commonly adopts a dumbbell-shaped conformation with exposed hydrophobic clefts. The exposed hydrophobic clefts facilitate binding to, and thereby regulation of, a wide range of target proteins, including ion channels and kinases~\cite{kanehisa_kegg:_2017}. We investigate the conformational dynamics of Ca\textsuperscript{2+}-bound CaM with 460 ns of previously published~\cite{westerlund_effect_2018} replica exchange solute tempering (REST) simulations~\cite{wang_replica_2011,bussi_hamiltonian_2014}. To analyze CaM, we project the protein heavy atom coordinates onto two collective variables. The first CV, the difference in distribution of reciprocal interatomic distances (DRID)~\cite{zhou_distribution_2012}, reflects the global conformational change relative to the initial crystal structure~\cite{babu_structure_1988}. In short, the distribution of inverse distances between $C_\alpha$ atoms is used to extract three features for each residue; the mean, the square root of the second central moment, and the cubic root of the third central moment. Thus, each frame $j$ is described by a feature matrix, $\boldsymbol{v}_j\in \mathbb{R}^{3\times N_{\text{Residues}}}$.
The difference relative to the initial structure, with features $\boldsymbol{v}_0$, is computed as the average residue feature distance \begin{equation} \text{DRID}_j = \frac{1}{3N_\text{Residues}} \sum\limits_{n=1}^{N_\text{Residues}} \| \boldsymbol{v}_j^n-\boldsymbol{v}_0^n \|. \end{equation} The second CV, backbone dihedral angle correlation (BDAC)~\cite{westerlund_inference_2018}, reports on secondary structure changes in the linker relative to the initial structure, \begin{equation} \text{BDAC}_j = \frac{1}{4N'_\text{Residues}}\sum\limits_{n=1}^{N'_{\text{Residues}}} (2 + \cos (\varphi_j^n - \varphi_0^n) + \cos (\psi_j^n - \psi_0^n) ). \end{equation} Here, $\varphi_j^n$ and $\psi_j^n$ are the backbone dihedral angles of linker residue $n$ in frame $j$. MDtraj~\cite{mcgibbon_mdtraj:_2015} was used to compute DRID feature vectors and backbone dihedral angles. \section{Results and discussion} \subsection{Toy models demonstrate properties of InfleCS} The first toy model consists of Gaussian clusters generated from the free energy landscape in Figure~\ref{fig:toy_models}~a.i). An example of sampled data with true cluster labels is displayed in Figure~\ref{fig:toy_models}~a.ii), and the performances of the different clustering methods on this toy model are shown in Figure~\ref{fig:toy_models}~a.iii). Because the clusters are Gaussian and have similar spatial size, most methods perform well. GMM and InfleCS in particular provide close-to-perfect clustering. \begin{figure} \centering \includegraphics[width=0.7\paperwidth]{toy_model_performance.pdf} \caption{The performance of all clustering methods (K-means, agglomerative-Ward (AW), spectral clustering, HDBSCAN, density peaks advanced (DPA), robust density clustering (RDC), Gaussian mixture model (GMM) and InfleCS) on three toy model datasets with a) Gaussian clusters, b) non-Gaussian clusters and c) hierarchical and poorly separated clusters.
For each toy model, we show i) the true free energy landscape of the toy model, ii) an example of sampled data and its true clustering and iii) box plots showing the V-measure scores for the clustering methods. Each box is constructed from 50 sampled datasets. The horizontal line of each box shows the median, while the box covers the region between the lower quartile $Q1$ (25th percentile) and the upper quartile $Q3$ (75th percentile). The whiskers mark data obtained within $Q1-1.5\,IQR$ and $Q3+1.5\,IQR$, where $IQR=Q3-Q1$ is the interquartile range. Outliers are shown as dots.} \label{fig:toy_models} \end{figure} To investigate the impact of cluster shapes on clustering performance, a second toy model with one non-Gaussian cluster is introduced, Figure~\ref{fig:toy_models}~b.i-ii). This toy model highlights how GMM, K-means and agglomerative-Ward fail because their intrinsic assumptions are not met, Figure~\ref{fig:toy_models}~b.iii). DPA clustering is highly variable, indicating that the chosen peak significance value is not optimal across sampled datasets. Interestingly, spectral clustering performs similarly to how it does on the first toy model, and the other methods, especially HDBSCAN and InfleCS, provide high-performance clustering. The assumption of a density landscape that can be accurately described by a linear combination of Gaussian components is the core of GMM density estimation and, consequently, of InfleCS clustering. Even so, most densities can be modeled by a GMM given enough sampled points and Gaussian components. Moreover, because InfleCS extracts clusters with graphs based on local relationships between data points, it is possible to identify non-Gaussian and, to some extent, even uniform and highly nonlinear clusters, as exemplified with the moons dataset, Figure~S5. Most free energy landscapes underlying conformational ensembles of biomolecules, however, do not contain uniform or highly nonlinear states.
The problem of identifying metastable states is then instead related to the hierarchical nature of the landscapes, as well as to the noisy state definitions that occur when a high-dimensional dataset is projected onto much fewer dimensions. The third toy model mimics such a dataset, where the originally high-dimensional data is projected onto a low-dimensional space with poorly separated clusters. By mere inspection of the scattered data, Figure~\ref{fig:toy_models}~c.ii), it is difficult to identify the clusters. The free energy landscape, however, clearly depicts three states, Figure~\ref{fig:toy_models}~c.i). Because the clusters are poorly separated and of different spatial sizes, the geometric (non-density-based) clustering methods completely fail, Figure~\ref{fig:toy_models}~c.iii). Furthermore, HDBSCAN, DPA and RDC yield low clustering quality due to the spatially small clusters of relatively low density. Despite the poorly separated states with different densities and sizes, InfleCS maintains high-quality clustering, Figure~\ref{fig:toy_models}~c.iii). Thus, it is the only method in this set that successfully clusters all toy model datasets. In addition to projection issues, data sampled by molecular dynamics simulations may contain spurious noise. Uniformly distributed noise is relatively well tolerated by the InfleCS clustering, Figure~S6. Another important aspect to consider is the dimensionality of the data: in some cases, more than 5 dimensions are needed to describe the conformational ensemble of a protein~\cite{facco_estimating_2017,sittel_perspective_2018}. Due to the curse of dimensionality, higher-dimensional datasets must contain more data points to allow for density estimation. Consequently, higher dimensionality notably affects the computational time, which increases with both data dimensionality and the number of data points, Figure~S7~b) and~S8.
Still, because GMMs use continuous basis functions and estimate densities globally, the estimates in sparse regions tend to be more accurate than those obtained from methods with discrete basis functions~\cite{wasserman_nonparametric_2004,westerlund_inference_2018}. Because InfleCS relies on such density estimation, it is possible to perform accurate clustering in higher dimensional space when enough data is available, Figure~S7~a). \subsection{Application to Ca\textsuperscript{2+}-bound Calmodulin ensemble} \begin{figure} \centering \includegraphics[width=0.7\paperwidth]{InfleCS_CaM.pdf} \caption{a) The estimated free energy landscape of the CaM conformational ensemble along the distribution of reciprocal interatomic distances (DRID) and the linker backbone dihedral angle correlation (BDAC). The final density landscape consists of 16 Gaussian components. The estimated parameters of the 16-component GMM on this dataset are listed under Section~S2. b) The identified core states based on the estimated density on a $100\times 100$ grid, colored by cluster labels. Transition points are shown as gray dots. c) The state populations. d) Structures of each cluster center. The ribbons are colored according to the cluster the structure belongs to, while the cylinders, helix A to H, are colored according to a rainbow. The structures are visualized with VMD~\cite{humphrey_VMD_1996}.} \label{fig:CaM_InfleCS} \end{figure} The estimated free energy landscape of CaM along the two CVs and the corresponding InfleCS clustered data are shown in Figure~\ref{fig:CaM_InfleCS} a-b). As expected, InfleCS correctly identifies the metastable states within the estimated free energy landscape. The complex nature of the CaM dataset and the stochasticity of GMM density estimation result in a relatively plateaued $I_{BIC}$ profile, Figure~S9~a). Nonetheless, by repeating the clustering 50 times, we remark that InfleCS maintains robust clustering, Figure~S9~b). 
Note that the clustering is carried out on a grid, the resolution of which has a negligible impact on the clustering, Figure~S1. The core state probabilities, Figure~\ref{fig:CaM_InfleCS} c), show that the seventh state is the most common state in this dataset. Representative structures of the most populated states are shown in Figure~\ref{fig:CaM_InfleCS} d). The most common structure is in a canonical dumbbell conformation, but we also identify two different compact conformations among these well-populated states, namely states 11 and 16. Compact states similar to state 16 have been identified in previous MD datasets~\cite{aykut_designing_2013,fiorin_unwinding_2005,shepherd_molecular_2004,wriggers_structure_1998} as well as in an experimental structure~\cite{fallon_closed_2003}. \begin{figure} \centering \includegraphics[width=0.5\paperwidth]{InfleCS_CaM_pathways.pdf} \caption{a) A free energy pathway from the most common state 7 to the compact state 11. The path goes through states 4 and 9. b) Representative structures of the involved states. The ribbons are colored according to the cluster the structure belongs to, while the cylinder, helix A, is colored red. The positively (negatively) charged residues that participate in salt-bridge formation and breaking are shown with blue (orange) sticks (GLU11, LYS77, ASP80, GLU83, ARG86, ARG90). LYS75 and GLU84 are shown with blue and orange spheres, respectively. The structures are visualized with VMD~\cite{humphrey_VMD_1996}.} \label{fig:CaM_pathway} \end{figure} With the estimated free energy landscape available, we can infer possible pathways between states and thus understand how the states are connected. For example, the eleventh state is a relatively highly populated compact state with helix A collapsed onto the linker. By mere inspection of the representative structures, it is not obvious how CaM transitions to this state from the canonical conformation (state 7).
Through visual inspection of the free energy landscape, we characterize one possible pathway between the canonical and compact state that goes through states 4 and 9, Figure~\ref{fig:CaM_pathway}~a). The representative structures corresponding to cluster centers, Figure~\ref{fig:CaM_pathway} b), indicate that the transition along this pathway likely occurs through a twisting motion of the two lobes around the linker and breaking of the linker helix. Early MD simulations and experimental studies of CaM suggest that destabilization of the linker helix is driven by electrostatic interactions~\cite{fiorin_unwinding_2005,spoel_bending_1996, shepherd_molecular_2004,torok_effects_1992,kataoka_linker_1996}. We therefore study salt bridges in the three core state ensembles along the pathway. Going from the canonical to the intermediate state, the linker helix breaks, which enables formation of a stabilizing salt bridge between LYS75 and GLU84, Figure~\ref{fig:CaM_pathway}~b) and~S10 a-c). To transition to the compact state, GLU11 is likely recruited by LYS77, which breaks the LYS75-GLU84 salt bridge (cluster 9) in favor of a new charged cluster with salt bridges between LYS77-GLU83, ASP80-ARG86, GLU11-ARG86 and GLU11-ARG90, Figure~\ref{fig:CaM_pathway} b) and~S10~c-d). LYS75 is a common target-protein binding residue~\cite{villarroel_ever_2014}. Thus, the salt bridge formations in this compact state that expose LYS75 to solvent may promote initiation of long-range interactions with target proteins. \section{Conclusions} We presented InfleCS, a clustering method that uses the shape of an estimated Gaussian mixture density to identify metastable core states. The method was shown to consistently outperform other common clustering methods on three toy models with different properties. The advantages of InfleCS for free energy landscape clustering are five-fold. First, clusters are identified at density peaks, which guarantees that clusters are metastable states.
Second, core state boundaries identified by density second derivatives result in well-defined states. Third, clusters are constructed by building graphs, thus making fewer assumptions about cluster shapes. Fourth, because the clustering method naturally involves density and free energy landscape estimation, it is possible to derive pathways between states and thus understand fundamental mechanisms. Finally, the number of clusters is naturally determined by the number of density peaks in the landscape and therefore requires no a priori system knowledge. By applying InfleCS to a conformational ensemble of Ca\textsuperscript{2+}-bound CaM, we identified a possible pathway from the canonical to a compact state through a twisting motion of the two lobes followed by salt bridge breaking and formation. This pathway highlights electrostatically driven structural rearrangements that may allow CaM to bind to a wide range of target proteins.
\section{Introduction}\label{sec:intro} Since their first observations over 100 years ago~\cite{Hess:1912srp}, much has been learned in the study of cosmic rays. Modern experiments (both direct and air-shower observations) provide invaluable information about the composition and energy spectra of cosmic rays over a huge energy range, shedding light on their origin and propagation environment~\cite{Strong:2007nh,Gaisser:2013bla,Amato:2017dbs}. The AMS-02 experiment on-board the International Space Station has measured the fluxes of a large set of cosmic-ray nuclei of Galactic origin in the GeV to TeV range~\cite{Aguilar:2015ooa,Aguilar:2015ctt,Aguilar:2016kjl,Aguilar:2016vqr,Aguilar:2017hno,Aguilar:2018njt}. The most abundant species are \emph{primary} cosmic rays, such as protons ($\sim90\%$), He ($\sim10\%$) and C, O ($\sim1\%$), that are accelerated to high energies through diffusive shock acceleration, \emph{e.g.}~in the environment of supernova remnants. In contrast, `fragile' nuclei such as Li, Be and B, which are easily destroyed in stellar processes, are of \emph{secondary} origin. The same applies to antiprotons, as any significant amount of anti-baryons is absent in stellar matter. Secondaries are produced in collisions of primaries with the interstellar medium, \emph{i.e.}~in spallation or antiparticle-production processes. Dark matter constitutes an additional primary source of antiprotons in the Galaxy, provided its interaction supports pair-annihilation into standard-model particles (see \emph{e.g.}~Ref.~\refcite{Bertone:2004pz} for a review). The blueprint of such a dark-matter candidate is a weakly interacting massive particle, with theoretically appealing features~\cite{Jungman:1995df}. In fact, antiprotons were suggested as a possible probe of such a candidate over 30 years ago~\cite{Silk:1984zy,Stecker:1985jc}.
Since then they have become a standard tool for indirect detection of dark matter~\cite{Bergstrom:1999jc,Donato:2003xg,Bringmann:2006im,Donato:2008jk,Fornengo:2013xda,Hooper:2014ysa,Pettorino:2014sua,Boudaud:2014qra,Cembranos:2014wza,Cirelli:2014lwa,Bringmann:2014lpa,Giesen:2015ufa,Jin:2015sqa,Evoli:2015vaa,Cuoco:2016eej,Cuoco:2017iax,Reinert:2017aga,Cui:2018klo,Cuoco:2019kuu,Heisig:2020nse}. However, the interpretation of cosmic-ray data is complicated by several interconnected aspects. First, the propagation of cosmic rays through the Galaxy is a highly non-trivial process whose modeling still lacks a sufficiently predictive theoretical framework, rendering a fit to the data the only way to determine its parameters. Second, the modeling of primary sources faces a similar problem. While diffusive shock acceleration generically predicts a simple power-law behavior for the injection spectra, observations~\cite{Ahn:2010gv,Adriani:2011cu,Aguilar:2015ooa,Aguilar:2015ctt} necessitate the use of different spectral indices (for protons and heavier nuclei) as well as (possibly)~\cite{Serpico:2018lkb} the introduction of spectral breaks. This has led to an increase in the number of model parameters that, again, can only be constrained by a fit to the data. Third, several sources of systematic uncertainty are at play that often arise from poorly constrained nuclear processes (in the kinematic regimes relevant for the cosmic-ray analysis). Notably, these uncertainties have been shown to exhibit sizeable correlations, calling for a careful assessment~\cite{Reinert:2017aga,Cuoco:2019kuu,Derome:2019jfs,Boudaud:2019efq,Heisig:2020nse}. With the AMS-02 experiment, cosmic-ray physics has entered a precision era. For the first time, it provides measurements at percent-level precision, allowing for welcome redundancy in the constraints on currently considered propagation models.
Strong efforts have been made to exploit such a wealth of data to draw solid conclusions on the existence of a primary antiproton contribution from dark matter. However, tackling the inverse problem of cosmic-ray propagation remains a major challenge. It requires disentangling the effects of the modeling of astrophysical cosmic-ray sources, cosmic-ray propagation, the involved uncertainties, and a possible dark-matter signal. While this effort is subject to ongoing research, this article reviews recent results on the derivation of robust limits on heavy dark matter~\cite{Cuoco:2017iax} as well as a potential hint for dark matter with a mass $\lesssim 100\,$GeV~\cite{Cuoco:2016eej,Cui:2016ppb,Cuoco:2017rxb,Reinert:2017aga,Cui:2018klo,Cuoco:2019kuu,Cholis:2019ejx,Lin:2019ljc,Heisig:2020nse}. We discuss possible degeneracies between propagation and dark-matter parameters and various sources of (correlated) systematic uncertainties. The remainder of this paper is organized as follows. In section~\ref{sec:crde} we review cosmic-ray propagation and the ways to constrain its parameters. In sections~\ref{sec:limits} and \ref{sec:exc} we discuss limits on dark matter and a tentative dark-matter excess, respectively. We take a closer look into systematic uncertainties in section~\ref{sec:uncer} before summarizing in section~\ref{sec:sum}. \section{Cosmic-ray propagation}\label{sec:crde} \subsection{Diffusion equation} The propagation of charged cosmic rays is characterized by numerous deflections on the turbulent magnetic fields in our Galaxy (see \emph{e.g.}~Refs.~\refcite{Strong:2007nh,Amato:2017dbs} and references therein). They render propagation a diffusive process with a typical residence time of several million years in the diffusion volume.
A state-of-the-art description is provided by the diffusion equation for the particle density $\psi_i$ of species $i$ per volume and absolute value of momentum $p$:~\cite{Ginzburg:1990sk,Strong:2007nh} \begin{equation} \begin{split} \label{eq:PropagationEquation} \frac{\partial \psi_i (\vec{x}, p, t)}{\partial t} = \;& q_i(\vec{x}, p) + \vec{\nabla} \cdot \left( D_{xx} \vec{\nabla} \psi_i - \vec{V} \psi_i \right) + \frac{\partial}{\partial p} p^2 D_{pp} \frac{\partial}{\partial p} \frac{1}{p^2} \psi_i \\ &- \frac{\partial}{\partial p} \left( \frac{\mathrm{d} p}{\mathrm{d} t} \psi_i - \frac{p}{3} (\vec{\nabla} \cdot \vec{V}) \psi_i \right) - \frac{1}{\tau_{f,i}} \psi_i - \frac{1}{\tau_{r,i}} \psi_i\,. \end{split} \end{equation} Here, $q_i(\vec{x}, p)$ is the source term (for primary and secondary sources). Terms proportional to $D_{xx}, \vec{V}$, and $D_{pp}$ correspond to spatial diffusion, convection and reacceleration, respectively. The second line of eq.~\eqref{eq:PropagationEquation} represents the momentum gain or loss rate~$\propto {\mathrm{d} p}/{\mathrm{d} t}$, adiabatic energy losses $\propto \vec{\nabla} \cdot \vec{V}$, and the catastrophic loss of particles by fragmentation and radioactive decay, $\propto 1/\tau_{f,i}$ and $1/\tau_{r,i}$, respectively. We discuss the various terms in more detail below. Equation~\eqref{eq:PropagationEquation} constitutes a coupled set of partial differential equations. As fragmentation and decay of heavier nuclei provide a source of lighter ones, the chain of coupled equations is usually solved starting from heavier to lighter nuclei (and from primaries to secondaries). In practice, one often employs the steady-state solution and adopts the free escape boundary condition, \emph{i.e.}~the condition of vanishing densities at the boundaries of the propagation volume.
The latter is typically chosen to be a cylindrical volume with a radius $r\sim20\,$kpc and half-height $z_\text{h}\sim2\!-\!10\,$kpc~\cite{Evoli:2019iih,Weinrich:2020ftb} centered around our Galaxy. The equation can be solved fully numerically, using computer codes such as \textsc{Galprop}~\cite{Strong:1998fr,Strong:2015zva}, \textsc{Dragon}~\cite{Evoli:2008dv,Evoli:2017vim} and \textsc{Picard}~\cite{Kissmann:2014sia}, or by (semi-)analytical methods, as implemented \emph{e.g.}~in~\textsc{Usine}~\cite{Putze:2010zn,Maurin:2018rmm}. \smallskip \textbf{Primary source terms.} Diffusive shock acceleration implies a power-law behavior of primary injection spectra (see \emph{e.g.}~Ref.~\refcite{Drury:1983zz}). However, AMS-02 data~\cite{Aguilar:2015ooa,Aguilar:2015ctt} (in parts confirming earlier indications~\cite{Ahn:2010gv,Adriani:2011cu}) suggest the introduction of spectral breaks. This concerns a high-rigidity (${\cal R}\sim300$\,GV) and a low-rigidity (${\cal R}\lesssim 10$\,GV) break~\cite{Trotta:2010mx,Evoli:2015vaa,Johannesson:2016rlh}, often modeled with smoothed transitions~\cite{Korsmeier:2016kha}. (Here, ${\cal R} = p/Z$, with $Z$ being the particle's charge number.) The spatial distribution of primary sources is highly concentrated around the Galactic disk, $\propto \exp(-|z|/z_0)$, with a characteristic half-height of $z_0\simeq0.2\,$kpc. \smallskip \textbf{Spatial diffusion.} The spatial diffusion coefficient $D_{xx}$ constitutes the central piece of the diffusion equation~\cite{Ginzburg:1964,Ginzburg:1990sk}. A commonly made assumption is a homogeneous and isotropic coefficient. In its simplest form, it is parametrized by a single power law in rigidity: \begin{equation} \label{eq:diff} D_{xx} = \beta^\eta D_0 {\cal R}^\delta \end{equation} with $\beta$ being the particle's velocity (in units of $c=1$) and $\eta=1$.
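As a concrete illustration of eq.~\eqref{eq:diff}, the short Python sketch below evaluates $D_{xx}$ for protons. The numerical values of $D_0$ and $\delta$ are illustrative placeholders of the order found in typical fits (not results quoted in this review), and $D_0$ is normalized at a reference rigidity of 1\,GV:

```python
import numpy as np

M_P = 0.938  # proton mass in GeV

def beta_proton(R):
    """Velocity (in units of c) of a proton with rigidity R in GV (|Z| = 1)."""
    p = R  # for unit charge, momentum in GeV equals rigidity in GV
    return p / np.sqrt(p**2 + M_P**2)

def D_xx(R, D0=3e28, delta=0.4, eta=1.0):
    """Diffusion coefficient of eq. (2): D_xx = beta^eta * D0 * R^delta.
    D0 (cm^2/s, at 1 GV) and delta are illustrative placeholder values."""
    return beta_proton(R)**eta * D0 * R**delta

for R in (1.0, 10.0, 100.0):
    print(f"R = {R:6.1f} GV  ->  D_xx = {D_xx(R):.2e} cm^2/s")
```

At low rigidities the $\beta^\eta$ factor suppresses diffusion, while at high rigidities the power law ${\cal R}^\delta$ dominates -- the behavior exploited when fitting $\delta$ to secondary-to-primary ratios.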
However, with the precision of AMS-02 data, for the first time, one can distinguish whether the breaks seen in the primary fluxes arise from the injection spectra or from propagation, see \emph{e.g.}~Ref.~\refcite{Serpico:2018lkb}. In fact, AMS-02 data on Li,\,Be,\,B~\cite{Aguilar:2018njt} suggest assigning the high-rigidity break (${\cal R} \sim 300\,$GV) to the diffusion coefficient rather than the injection spectra~\cite{Genolini:2017dfb}. Several microphysical mechanisms have been proposed that support such a behavior~\cite{Blasi:2012yr,Aloisio:2015rsa}. Furthermore, the inclusion of the low-rigidity break (${\cal R} \lesssim 10\,$GV) in diffusion and/or $\eta\neq1$ has recently been shown to provide good fits to AMS-02 data on B/C~\cite{Genolini:2019ewc} as well as other secondary-to-primary ratios~\cite{Weinrich:2020cmw}. Such a break may arise as a consequence of damping of small-scale magnetic turbulences (see \emph{e.g.}~Ref.~\refcite{Blasi:2012yr}). \smallskip \textbf{Convection and reacceleration.} Convective winds generated by astrophysical sources are directed perpendicular to the Galactic plane. They are parametrized by the convection velocity, $v_\text{c}$. Reacceleration in the turbulent magnetic fields introduces diffusion in momentum space. It is linked to the spatial diffusion coefficient via the velocity of Alfv\'en magnetic waves, $v_\text{A}$.~\cite{Ginzburg:1990sk,1994ApJ...431..705S} Both processes are most relevant at low rigidities. \smallskip \textbf{Secondary source terms.} The secondary source terms depend on the fluxes $\phi_i$ of projectile nuclei $i$ and the number density $n_{\mathrm{ISM},j}$ of target nuclei $j$ in the interstellar medium.
To obtain the secondary antiproton source spectrum, the energy-differential cross section, $\mathrm{d} \sigma_{i,j}/\mathrm{d} T_{\bar{p}} $, has to be convolved with the energy-dependent flux $\phi_i(T_i)$:~\cite{Shen:1968zza,Bottino:1998tw} \begin{eqnarray} \label{eq:sec_sourceTerm} q_{\bar p}^{ij}({\bm x},T_{\bar{p}}) &=& \int\limits_{T_{\rm th}}^\infty \mathrm{d} T_i \,\, 4\pi \,n_{\mathrm{ISM},j}({\bm x}) \, \phi_i (T_i) \, \frac{\mathrm{d}\sigma_{ij}}{\mathrm{d} T_{\bar{p}}}(T_i , T_{\bar{p}})\,, \end{eqnarray} where $T$ denotes the kinetic energy and the threshold for antiproton production is $T_\text{th} = 6 m_p$ (assuming no additional anti-matter in the final state). The dominant contribution to antiproton production comes from proton-proton, proton-He and He-proton collisions. See section~\ref{sec:xs} for more details. For the spallation of heavier into lighter nuclei (responsible for the production of Li, Be, B) no convolution has to be performed as, to first approximation~\cite{1995ApJ...451..275T}, the kinetic energy per nucleon is conserved, $T_i/A_i = T_k/A_k$, where $k$ is the secondary nucleus and $A$ the nucleon number.
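The convolution in eq.~\eqref{eq:sec_sourceTerm} can be sketched numerically as follows. The proton flux and the differential cross section below are schematic toy functions (not the fitted parametrizations discussed in section~\ref{sec:xs}), so only the structure of the integral, not the numbers, should be taken seriously:

```python
import numpy as np

M_P = 0.938       # proton mass in GeV
T_TH = 6 * M_P    # kinetic-energy threshold for antiproton production

def q_pbar(T_pbar, flux, dsigma_dT, n_ism=1.0, T_max=1e4, n=400):
    """Secondary source term of eq. (3): trapezoidal integration of
    4*pi * n_ISM * phi(T) * dsigma/dT_pbar over projectile energies T."""
    T = np.logspace(np.log10(T_TH), np.log10(T_max), n)
    integrand = 4 * np.pi * n_ism * flux(T) * dsigma_dT(T, T_pbar)
    return np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(T))

# Schematic toy inputs (placeholders, not fitted parametrizations):
phi_p = lambda T: 1.8e4 * T**-2.7                  # toy proton flux
dsig = lambda T, Tp: np.where(T > 10 * Tp,         # crude kinematic cut
                              3e-26 * Tp**-1.5 * T**-0.1, 0.0)

print(q_pbar(1.0, phi_p, dsig), q_pbar(10.0, phi_p, dsig))
```

Since the projectile flux falls steeply with energy, the integral is dominated by projectile energies close to the kinematic threshold for the given $T_{\bar p}$.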
\smallskip \textbf{Dark-matter source term.} If dark matter is made of a self-annihilating particle, it introduces a primary source term for antiprotons: \begin{equation} \label{eq:DM_source_term} q_{\bar{p}}^{(\mathrm{DM})}(\bm{x}, E_\mathrm{kin}) = \frac{1}{2} \left( \frac{\rho(\bm{x})}{m_\mathrm{DM}}\right)^2 \sum_f \left\langle \sigma v \right\rangle_f \frac{\mathrm{d} N^f_{\bar{p}}}{\mathrm{d} E_\mathrm{kin}} , \end{equation} where $m_\mathrm{DM}$ and $\rho(\bm{x})$ denote the dark-matter mass and density profile in the Galaxy, respectively, $\left\langle \sigma v \right\rangle_f$ the velocity-averaged annihilation cross section and $\mathrm{d} N^f_{\bar{p}}/\mathrm{d} E_\mathrm{kin}$ the corresponding antiproton energy spectrum per dark-matter annihilation.\footnote{Note that the factor $1/2$ in eq.~\eqref{eq:DM_source_term} corresponds to the case of a self-conjugate dark-matter candidate, \emph{e.g.}~a Majorana fermion. The corresponding factor is $1/4$ otherwise.} For annihilation into all pairs of standard-model particles, annihilation spectra have been computed and made publicly available for a large range of dark-matter masses~\cite{Cirelli:2010xx}. They can also be computed for arbitrary (and mixed) final states using the automated numerical tool \textsc{MadDM}~\cite{Ambrogi:2018jqj} (utilizing \textsc{Pythia}8~\cite{Sjostrand:2014zea} for showering and hadronization). For a given density profile $\rho$ and annihilation channel $f$, the source term introduces two dark-matter model parameters, $m_\mathrm{DM}$ and $\left\langle \sigma v \right\rangle_f$. While the local dark-matter density at the solar position is constrained at a level of $\sim30\%$~\cite{Salucci:2010qr}, the density profile towards the Galactic center is only loosely constrained by data, allowing for both `cuspy' profiles, such as Navarro-Frenk-White~\cite{Navarro:1995iw} or Einasto~\cite{Einasto:1965czb}, and `cored' profiles such as Burkert~\cite{Burkert:1995yz}.
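A minimal numerical sketch of eq.~\eqref{eq:DM_source_term} for a single annihilation channel, using a Navarro-Frenk-White profile normalized to an assumed local density; all parameter values (local density, scale radius, solar galactocentric distance) are illustrative assumptions, and the spectrum $\mathrm{d}N/\mathrm{d}E$ is left as an input:

```python
import numpy as np

RHO_SUN = 0.43  # GeV/cm^3: assumed local DM density (known only to ~30%)
R_SUN = 8.2     # kpc: assumed solar galactocentric distance
R_S = 20.0      # kpc: assumed NFW scale radius

def rho_nfw(r):
    """NFW profile, rho ~ 1/[(r/r_s)(1 + r/r_s)^2], normalized so that
    rho(R_SUN) = RHO_SUN."""
    norm = RHO_SUN * (R_SUN / R_S) * (1 + R_SUN / R_S)**2
    return norm / ((r / R_S) * (1 + r / R_S)**2)

def q_dm(r, dN_dE, m_dm=100.0, sigma_v=3e-26, self_conjugate=True):
    """Dark-matter source term of eq. (4) for one channel:
    m_dm in GeV, sigma_v in cm^3/s, dN_dE per annihilation."""
    prefactor = 0.5 if self_conjugate else 0.25  # Majorana vs Dirac
    return prefactor * (rho_nfw(r) / m_dm)**2 * sigma_v * dN_dE

# The rho^2 scaling strongly peaks the source towards the Galactic centre:
print(q_dm(1.0, dN_dE=1.0) / q_dm(R_SUN, dN_dE=1.0))
```

The quadratic dependence on $\rho$ is what makes gamma-ray searches towards the Galactic center so profile-sensitive, whereas the antiprotons observed locally sample a diffusion-averaged volume and depend on the profile only mildly.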
We briefly discuss the role of different profile choices for cosmic-ray antiproton observations in section~\ref{sec:limits}. Finally, we would like to mention that other exotic astrophysical sources of antiprotons have been considered in the literature, see \emph{e.g.}~Refs.~\refcite{Blasi:2009bd,Kohri:2015mga}. \subsection{Solar modulation} For observations near Earth, the inverse problem of cosmic-ray propagation is further complicated by a variety of transport processes in the heliosphere, collectively referred to as \emph{solar modulation} (see \emph{e.g.}~Ref.~\refcite{Potgieter:2013pdj} for a review). Upon entering the solar system, cosmic rays interact with the turbulent solar wind and heliospheric magnetic field. This causes a suppression of the measured flux near Earth compared to the interstellar one, in particular at small rigidities. For (anti)protons the effect becomes important for rigidities below a few tens of GV. The strength of solar modulation is correlated with the solar activity, which undergoes an 11-year cycle. Several numerical codes have been developed to solve the transport equation for heliospheric models at different levels of sophistication~\cite{Kappl:2015hxv,Vittino:2017fuh,Aslam:2018kpi,Boschini:2019ubh,Kuhlen:2019hqb}. Recent progress in constraining solar-modulation models has been due to the direct measurement~\cite{Stone_VOYAGER_CR_LIS_FLUX_2013} of interstellar fluxes by the Voyager I spacecraft, which left the heliosphere in 2012, as well as the time-dependent fluxes released by PAMELA~\cite{Adriani:2013as,Adriani:2016uhu} and AMS-02~\cite{Aguilar:2018wmi,Aguilar:2018ons}. For practical purposes, in cosmic-ray studies, one often adopts the force-field approximation~\cite{Gleeson:1968zza,Fisk_SolarModulation_1976}, describing solar modulation by a single parameter: the solar-modulation potential.
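The force-field approximation admits a compact implementation: the interstellar flux is shifted down in kinetic energy by $|Z|\varphi$ (with $\varphi$ the solar-modulation potential) and rescaled by the ratio of squared momenta. The sketch below uses a toy interstellar proton flux and an illustrative potential of 0.6\,GV, both assumptions for demonstration only:

```python
import numpy as np

M_P = 0.938  # proton mass in GeV

def p2(T, m=M_P):
    """Squared momentum for kinetic energy T (GeV)."""
    return T * (T + 2 * m)

def modulate(T_toa, lis_flux, phi=0.6, Z=1):
    """Force-field approximation: top-of-atmosphere flux at kinetic energy
    T_toa from the interstellar (LIS) flux, with potential phi in GV."""
    T_lis = T_toa + abs(Z) * phi
    return p2(T_toa) / p2(T_lis) * lis_flux(T_lis)

lis = lambda T: 1.8e4 * p2(T)**-1.35  # toy LIS proton flux

# Suppression is strongest at low energies:
for T in (0.5, 5.0, 50.0):
    print(f"T = {T:5.1f} GeV: flux ratio TOA/LIS = {modulate(T, lis)/lis(T):.3f}")
```

The single-parameter nature of the approximation is what makes it attractive in global fits, at the price of ignoring the charge-, rigidity- and time-dependent effects discussed next.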
Deviations from this simple picture have been captured \emph{e.g.}~by a charge-, rigidity- and/or time-dependent solar-modulation potential~\cite{Cholis:2015gna,Tomassetti:2017hbe,Gieseler:2017xry,Cholis:2020tpi}. For (anti)protons, the subject of this review, deviations from the force-field approximation become apparent at rigidities below roughly 5\,GV (see \emph{e.g.}~Ref.~\refcite{Cuoco:2019kuu}). \subsection{Constraining propagation} Constraining the diffusion model requires knowledge of at least one primary and one secondary cosmic-ray species. As the measured primary spectra (corrected for solar modulation; see above) are, to first approximation, identical to the ones entering the source term for secondaries, we can infer the effect of propagation from the comparison of secondaries to primaries. Approximately, $\psi_\text{s}/\psi_\text{p}\propto D_{xx}^{-1} \propto {\cal R}^{-\delta}$, in the case of simple power-law diffusion. The standard secondary-to-primary ratio used to constrain diffusion is B/C~\cite{Maurin:2001sj,Moskalenko:2001ya,Maurin:2010zp,Genolini:2015cta,Derome:2019jfs,Genolini:2019ewc}, but also Li/C, Be/C, Li/O, Be/O, B/O, N/O, $^3$He/$^4$He as well as $\bar p / p$ have been considered~\cite{Evoli:2008dv,Korsmeier:2016kha,Johannesson:2016rlh,Wu:2018lqu,Boschini:2019gow,Evoli:2019wwu,Weinrich:2020cmw}. A commonly made assumption is the universality of diffusion, requiring the validity of a single model from (anti)protons to O. This assumption is, however, subject to ongoing scrutiny (see \emph{e.g.}~the discussion in Refs.~\refcite{Johannesson:2016rlh,Weinrich:2020cmw}). Note that the diffusion-parameter inference from secondary-to-primary flux ratios introduces a high level of degeneracy between the normalization of the diffusion coefficient and the halo size, $z_\text{h}$, consequently leading to rather weak constraints on either of these quantities individually.
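The slope inference from a secondary-to-primary ratio can be illustrated with a toy fit. The mock ratio below assumes the high-rigidity behavior $\psi_\text{s}/\psi_\text{p}\propto{\cal R}^{-\delta}$ with an arbitrarily chosen $\delta$; real analyses of course fit the full propagation model rather than a single power law:

```python
import numpy as np

# Toy secondary-to-primary ratio (B/C-like); delta_true is an assumed
# input for the demonstration, not a measured value.
delta_true = 0.45
R = np.logspace(1, 3, 20)            # rigidities in GV
ratio = 0.3 * R**-delta_true         # noiseless mock data

# A straight line in log-log space recovers the diffusion slope:
slope, intercept = np.polyfit(np.log(R), np.log(ratio), 1)
delta_fit = -slope
print(f"fitted delta = {delta_fit:.3f}")
```

Such a ratio pins down the slope $\delta$ and the combination of diffusion normalization and halo size, but not $D_0$ and $z_\text{h}$ individually -- the degeneracy mentioned above.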
This is relevant for dark-matter searches -- their sensitivity is strongly affected by $z_\text{h}$. We will briefly comment on this aspect in section~\ref{sec:limits}. Independent constraints on $z_\text{h}$ can be derived from radioactive secondaries like $^{10}$Be, see \emph{e.g.}~Refs.~\refcite{Evoli:2019iih,Weinrich:2020ftb} for a recent account. \section{Constraints on dark matter}\label{sec:limits} Using cosmic-ray antiprotons for probing dark-matter annihilation in our Galaxy has a long history. In particular, the increasing level of precision of data from balloon- and space-borne experiments, like BESS~\cite{Orito:1999re,Maeno:2000qx}, AMS~\cite{Aguilar:2002ad}, BESS-Polar~\cite{Abe:2011nx}, PAMELA~\cite{Adriani:2010rc,Adriani:2012paa} and, eventually, AMS-02~\cite{Aguilar:2016kjl} has established this channel as an important tool to place constraints on dark-matter models~\cite{Bergstrom:1999jc,Donato:2003xg,Bringmann:2006im,Donato:2008jk,Fornengo:2013xda,Hooper:2014ysa,Pettorino:2014sua,Boudaud:2014qra,Cembranos:2014wza,Cirelli:2014lwa,Bringmann:2014lpa,Giesen:2015ufa,Jin:2015sqa,Evoli:2015vaa,Cuoco:2016eej,Cuoco:2017iax,Reinert:2017aga,Cui:2018klo}. A central challenge in the interpretation of the data has been the uncertainty in the diffusion model. In the pre-AMS-02 era, employing the MIN/MED/MAX benchmark scenarios~\cite{Donato:2003xg} (consistent with B/C data at that time~\cite{Maurin:2001sj}), this led to uncertainties in the upper limits on annihilation cross sections reaching up to three orders of magnitude~\cite{Fornengo:2013xda}. With the AMS-02 antiproton data, uncertainties in the measured fluxes are reduced to a few percent over a wide range of rigidities~\cite{Aguilar:2016kjl}, providing sensitivity to a primary dark-matter contribution to antiprotons as low as around 10\%.
However, not only does the new data promote dark-matter searches to a new level of sensitivity, it has also brought forward cracks in the standard minimal scenario, requiring a refinement of the propagation model. This, in turn, comes at the price of increasing the number of free parameters associated with the modeling of propagation (and/or sources). Accordingly, care has to be taken before assigning a possible anomaly to dark-matter annihilation -- or, viewed from the perspective of imposing constraints in the absence of an observed excess: we have to make sure that the dark-matter hypothesis to be excluded is not only incompatible with data for certain propagation parameters but for all choices that are allowed by current data, thereby exploring possible degeneracies in the parameter space. This task calls for a global fit of both propagation and dark-matter parameters where the former are treated as nuisance parameters. In Ref.~\refcite{Cuoco:2017iax} we have performed such an analysis. As opposed to constraining the propagation model by an independent measurement of B/C, in this analysis, we choose a minimal set of primary and secondary fluxes that allows us to constrain both the propagation and dark-matter model while not relying on the assumption of universality. It includes the proton~\cite{Aguilar:2015ooa} and He~\cite{Aguilar:2015ctt} fluxes and the antiproton-to-proton flux ratio ($\bar p/p$)~\cite{Aguilar:2016kjl}. We will refer to this setup as the `minimal network' scenario in the following. The analysis assigns two (smoothed) breaks to the primary injection spectra, allowing for individual slopes for protons and He, and, accordingly, no break in diffusion. We employ convection and reacceleration and use \textsc{Galprop} for the numerical solution of the diffusion equation. Solar modulation is modeled by the force-field approximation constrained by additional proton and He data from Voyager~\cite{Stone_VOYAGER_CR_LIS_FLUX_2013}.
However, for the considered range of dark-matter masses (200\,GeV--50\,TeV) uncertainties from solar modulation have little impact on the results. For the derivation of cross-section upper limits we employ a frequentist analysis and construct the profile likelihood as a function of the dark-matter parameters by profiling over all propagation parameters. This is a non-trivial task. In particular, it requires careful exploration of the best-fit regions in the propagation parameter space for annihilation cross sections around the exclusion limit to be derived. Note that an insufficient parameter sampling in this region would result in an overly strong limit. The results for annihilation into a pair of $W$-bosons as well as other non-leptonic channels are shown in figure~\ref{fig:limits}. For the $WW$ and $b\bar b$ channels, they exclude the thermal cross section, $\left\langle \sigma v \right\rangle\sim 3\times 10^{-26}\,\text{cm}^3/\text{s}$, for masses up to 800\,GeV. Note that very similar results have been obtained in Ref.~\refcite{Reinert:2017aga}, in the region 200\,GeV--3\,TeV considered in both analyses. That study uses the proton flux (instead of $\bar p/p$) and constrains propagation by the independent measurement of the B/C flux ratio. Figure~\ref{fig:limits} also provides information about further uncertainties that go beyond the uncertainties on the propagation parameters already taken into account in the course of profiling. These concern the parametrizations of secondary antiproton production cross sections as well as changes in the propagation setup, like setting individual key parameters in the fit to fixed (extreme) values. Most significantly, this concerns a fixed half-height, set to $z_\text{h}=2\,$kpc and $z_\text{h}=10\,$kpc. In almost the entire range the latter two choices provide the upper and lower boundary, respectively, of the dark blue shaded band that represents the envelope of all chosen setups.
However, in the light of recent analyses of the fluxes of unstable secondary cosmic-ray nuclei~\cite{Weinrich:2020ftb,Evoli:2019iih} these values already appear somewhat extreme. Unlike dark-matter searches in gamma rays that utilize the morphology of the signal to discriminate signal from background, cosmic-ray observations are much less sensitive to the dark-matter density profile in the Galaxy. This is quantified in the right panel of figure~\ref{fig:limits} where the cosmic-ray limits are shown for the default (cuspy) Navarro-Frenk-White profile as well as for a (cored) Burkert profile with core radii of 5 and 10\,kpc. The difference in the limits between the former and the latter amounts to a factor of around 2. In comparison, the limits from gamma-ray observations of the Galactic center by H.E.S.S.~\cite{Abdallah:2016ygi} vary by two orders of magnitude between these choices (\emph{cf.}~green curves and shaded region). \begin{figure}[t] \vspace{0.5cm} \centering \setlength{\unitlength}{1\textwidth} \begin{picture}(1,0.313) \put(0.00,-0.01){\includegraphics[width=1\textwidth]{figs/BR_Fig_1.pdf}} \end{picture} \caption{ Upper limits at 95\% C.L. on the dark-matter annihilation cross section. Left panel:~Limits for non-leptonic annihilation channels. The dark and light blue shaded bands provide an estimate for additional systematics and the uncertainty from the local dark-matter density (linearly added) for annihilation into $W$-bosons. Right panel: Comparison of limits from cosmic-ray antiprotons with gamma-ray observations of the Galactic center by H.E.S.S.~\cite{Abdallah:2016ygi} and dwarf spheroidal galaxies by Fermi-LAT~\cite{Fermi-LAT:2016uux} (dSphs) for annihilation into $W$-bosons and for various choices of the dark-matter density profile. The figure is taken from Ref.~{\protect\refcite{Cuoco:2017iax}}.
} \label{fig:limits} \end{figure} \smallskip So far we have parametrized the dark-matter signal by its mass and cross section assuming 100\% annihilation into single final-state channels. However, we have not yet derived implications for realistic particle-physics models of dark matter. The arguably most literal realization of a weakly interacting massive particle is obtained in the framework of minimal dark matter~\cite{Cirelli:2005uq}, which supplements the standard model by just an $SU(2)$ multiplet. This theoretically appealing model comes with only one free parameter (the tree-level dark-matter mass) which, in principle, can be fixed by the relic density constraint. Notably, the fermion doublet, triplet and quintuplet (with hypercharge 1/2, 0 and 0, respectively) are of particular interest. The former two represent limiting cases of the supersymmetric standard model with pure higgsino and pure wino dark matter, respectively. The latter is the simplest representation for which dark matter is stable without imposing an additional $Z_2$ symmetry. For this model, indirect detection constitutes the prime search strategy. The cosmologically preferred range of masses appears mostly out of reach for upcoming collider searches, while direct detection cross sections vanish at tree level. \begin{figure}[t] \vspace{0.5cm} \centering \setlength{\unitlength}{1.0\textwidth} \begin{picture}(1,0.322) \put(0.00,-0.01){\includegraphics[width=1\textwidth]{figs/BR_Fig_2.pdf}} \end{picture} \caption{ 95\% CL exclusion limits on the annihilation cross section for minimal dark matter. The left and right panels show the fermion triplet (wino) and fermion quintuplet, respectively. The blue curves show the upper limits from AMS-02 antiprotons. The dark and light blue shaded bands indicate the systematic uncertainties and uncertainties from the local dark-matter density (added linearly), respectively. The solid black curves show the cross-section prediction within the model.
The vertical green shaded bands (around 2850\,GeV and 9.4\,TeV, respectively) denote the cosmologically preferred regions. For the limit setting, the candidate's relic density is set to the observed value in the entire range. The figure is taken from Ref.~{\protect\refcite{Cuoco:2017iax}}. } \label{fig:MDM} \end{figure} In all three cases, annihilation into a pair of electroweak vector bosons, $VV=WW, ZZ, Z\gamma $, dominates the dark-matter induced antiproton source term. Note that its cross section is significantly enhanced due to non-perturbative effects (Sommerfeld enhancement). We show the corresponding limits (blue solid curves and shaded bands) and model predictions~\cite{Hryczuk:2011vi,Cirelli:2015bda} (black solid curves) for the triplet and quintuplet in figure~\ref{fig:MDM}. In both cases, the cosmologically preferred regions~\cite{Cirelli:2007xd,Cirelli:2015bda} (satisfying the relic density constraint through thermal freeze-out; vertical green shaded bands) are challenged by observations of cosmic-ray antiprotons. In particular, in the case of the wino, consistency with data requires the full exploitation of the uncertainty band towards the most conservative edge. The higgsino case (not shown here; see Ref.~\refcite{Cuoco:2017iax} for details) is less constrained, leaving the cosmologically preferred region unchallenged. Note that the wino is also in tension with results of searches for gamma-line signatures in observations of the Galactic center~\cite{Abramowski:2013ax}. However, this conclusion only holds for the choice of a cuspy density profile, \emph{cf.}~the related discussion above. For cored profiles, these searches are not sensitive to the model~\cite{Cuoco:2017iax}. The situation is, however, expected to change with the upcoming Cherenkov Telescope Array experiment~\cite{Consortium:2010bc} as recently projected in Ref.~\refcite{Rinchiuso:2020skh} (see also~Ref.~\refcite{Hryczuk:2019nql}).
\section{The antiproton excess}\label{sec:exc} While cosmic-ray antiprotons allow us to place strong limits on heavy dark matter, $m_\text{DM}>200\,$GeV, the constraints significantly weaken towards smaller masses. In fact, the data support a preference for a dark-matter contribution in this region. This spectral feature was first found in Ref.~\refcite{Cuoco:2016eej} employing the minimal network scenario (\emph{cf.}~section~\ref{sec:limits}) and in Ref.~\refcite{Cui:2016ppb} using B/C to constrain propagation. Subsequently, the excess has been analyzed by several groups using various analysis setups~\cite{Cuoco:2017rxb,Reinert:2017aga,Cui:2018klo,Cuoco:2019kuu,Cholis:2019ejx,Lin:2019ljc,Heisig:2020nse}. While the significance of the excess is subject to controversy, ranging from around $1\sigma$ to above $5\sigma$ in the aforementioned studies, a common picture of the preferred dark-matter properties has, in fact, emerged. It hints at a dark-matter mass of $40\!-\!130\:\text{GeV}$ and an annihilation cross section around the thermal one, $\langle \sigma v\rangle\sim 10^{-26}\:\text{cm}^3\text{s}^{-1}$. Following the analysis setup of Ref.~\refcite{Cuoco:2016eej}, in Ref.~\refcite{Cuoco:2017rxb} we have performed joint fits for a variety of non-leptonic channels. The left panel of figure~\ref{fig:fitreg} shows the respective best-fit regions in the dark-matter mass versus cross-section plane. The displayed channels all provide a similar improvement of the fit (formally above $4\sigma$) except for $t\bar t$, which performs somewhat worse (around $3\sigma$).
\begin{figure}[t] \centering \setlength{\unitlength}{1\textwidth} \begin{picture}(1,0.36) \put(0.005,-0.01){\includegraphics[width=0.99\textwidth]{figs/BR_Fig_3.pdf}} \end{picture} \caption{Left: Best-fit regions (1--3$\sigma$ frequentist contours) for a dark-matter component of the cosmic-ray antiproton flux assuming 100\% annihilation into $gg$~(cyan), $WW^{(*)}$~(green), $b\bar b$~(red), $ZZ^{(*)}$~(blue), $hh$~(pink) and $t\bar t$~(orange). Right:~Same for a combined fit of cosmic-ray antiprotons, the gamma-ray Galactic center excess and gamma-ray observations of dwarf spheroidal galaxies. The figures are taken from Refs.~{\protect\refcite{Cuoco:2017rxb,Cuoco:2017okh}}. } \label{fig:fitreg} \end{figure} Intriguingly, dark matter with very similar properties\footnote{See \emph{e.g.}~Ref.~\refcite{Calore:2014nla} showing a pattern very similar to the best-fit regions in the left panel of figure~\ref{fig:fitreg}.} has also been found to fit the gamma-ray Galactic center excess seen in the Fermi-LAT data~\cite{Goodenough:2009gk,Vitale:2009hr,Hooper:2010mq,Hooper:2011ti,Abazajian:2012pn,Hooper:2013rwa,Gordon:2013vta,Abazajian:2014fta,Daylan:2014rsa,Calore:2014xka,TheFermi-LAT:2015kwa,Karwin:2016tsw}.${}^,$\footnote{The origin of the excess is, however, controversially discussed in the literature~\cite{Petrovic:2014uda,Petrovic:2014xra,Cholis:2015dea,Bartels:2015aea,Lee:2015fea,Fermi-LAT:2017yoi,Leane:2019xiy,Chang:2019ars,Leane:2020pfc}.} In fact, a simultaneous fit of antiprotons and gamma rays reveals very good compatibility of the two observations when interpreted as a signal from dark matter~\cite{Cuoco:2017rxb}. This is, in particular, true for annihilation into a pair of $b$-quarks, $W$- and $Z$-bosons, as well as Higgses. Annihilation into gluons is slightly disfavored as both signals individually prefer somewhat different regions of dark-matter mass, while annihilation into top-quarks fits neither observation to the same level as the lighter final states.
The combined fit (including a further likelihood contribution from gamma-ray observations of dwarf spheroidal galaxies~\cite{Fermi-LAT:2016uux}) is shown in the right panel of figure~\ref{fig:fitreg}. The preference for somewhat smaller annihilation cross sections in the combined fit is driven by the gamma-ray observations. It takes advantage of the relatively large uncertainty in the local dark-matter density (taken into account in the fit), pulling it towards larger values. While figure~\ref{fig:fitreg} shows the best-fit regions for individual final-state channels, realistic dark-matter models often feature annihilation into an admixture of final states. In Ref.~\refcite{Cuoco:2017rxb} we have performed a dedicated analysis of the Higgs portal dark-matter model~\cite{Silveira:1985rk}, where (for dark-matter masses below the Higgs mass) annihilation proceeds via an intermediate Higgs in the $s$-channel. In this case, the composition of final-state particles is determined by their couplings to the Higgs and the kinematically accessible phase space. Hence, it is solely a function of the dark-matter mass. The analysis shows that the model provides an excellent fit to the antiproton excess (alone and in combination with the gamma-ray Galactic center excess; see also Ref.~\refcite{Cuoco:2016jqt}) for a dark-matter mass around 60\,GeV, where annihilation into $b\bar b$ and $WW^*$ dominates. This is interesting since, independently of this observation, the resonantly enhanced region, $m_\text{DM}\simeq m_h/2$, is one of the few regions in which the model survives a set of other constraints~\cite{Cuoco:2017rxb}. \begin{figure}[t] \centering \setlength{\unitlength}{1\textwidth} \begin{picture}(1,0.445) \put(0,-0.01){\includegraphics[width=1\textwidth]{figs/190301472_Fig_2.pdf}} \end{picture} \caption{Antiproton-over-proton ratio for the respective best-fit points without (left) and with (right) a dark-matter component (annihilation into $b\bar b$).
The data is fitted in the range $R=(5\!-\!300)$\,GV (between the dotted lines). The figure is taken from Ref.~{\protect\refcite{Cuoco:2019kuu}}. } \label{fig:pbar_spectra_fits} \end{figure} \section{Systematic uncertainties of the antiproton excess}\label{sec:uncer} In this section, we take a closer look at the origin of the discrepancy in the signifi\-cance of the antiproton excess mentioned in the last section. This motivates us to take a deeper look into systematic uncertainties. As shown in figure~\ref{fig:pbar_spectra_fits} (taken from Ref.~\refcite{Cuoco:2019kuu}) the preference for dark matter arises from a relatively subtle effect, namely a spectral feature around 10--20\,GV, best seen in the residuals of the left panel. As shown in the right panel, the tension with data can be reconciled (\emph{cf.}~the residuals) by a spiky contribution from dark matter that amounts to around 10\% of the total flux only (red solid curve in the main plot on the right). Note that the relative uncertainty on the measurement of the $\bar p / p$ flux ratio is around 4\%. However, the global fit analysis induces a complex network of constraints on the free parameters and, hence, requires a careful assessment of the various sources of systematics and their possible correlations. In the following, we discuss systematic errors that could have `faked' the signal. Notably, this concerns uncertainties in the production cross sections of secondary antiprotons as well as correlations in the AMS-02 systematic error. \subsection{Uncertainties in the antiproton production cross sections}\label{sec:xs} The inclusive antiproton production cross sections entering the secondary antiproton source term, eq.~\eqref{eq:sec_sourceTerm}, cannot be computed from first principles. There are, however, two different frameworks to utilize measurements of the antiproton production cross sections at laboratory experiments for making corresponding predictions. 
Historically, the first framework is the introduction of an analytic parametrization of the fully differential Lorentz-invariant cross section using a functional form that is theoretically motivated by the scaling hypothesis~\cite{Taylor:1975tm}. Its parameters are then fitted to data. This framework was first employed in Ref.~\refcite{Tan:1983de}. While, at the time, its use involved a significant degree of extrapolation into unconstrained territory, in particular towards high energies, the framework underwent a continuous refinement supported by new data that became available~\cite{Duperray:2003bd,diMauro:2014zea,Kappl:2014hha,Winkler:2017xor,Korsmeier:2018gcy}. Notably, this involved a careful assessment of uncertainties~\cite{diMauro:2014zea}, a dedicated modeling of the hyperon and antineutron contributions~\cite{Kappl:2014hha} and the introduction of scaling violations towards high energies~\cite{Winkler:2017xor}. Most recently, the parametrizations of Refs.~\refcite{diMauro:2014zea} and~\refcite{Winkler:2017xor} have been re-fitted including new data from the NA61 and LHCb experiments in Ref.~\refcite{Korsmeier:2018gcy}. The second path to computing the antiproton source term is the use of Monte Carlo generators of hadronic interactions that have been calibrated on a wide range of data from accelerator experiments. This approach has, for example, been pursued in~Refs.~\refcite{Simon_Antiproton_CS_Scaling_1998,Donato:2001ms} utilizing DTUNUC and more recently in Ref.~\refcite{Kachelriess:2015wpa} using QGSJET-II-04 and EPOS-LHC, which underwent a re-tuning based on LHC data~\cite{Pierog:2013ria}. While this approach is favorable for high energies, it is rather unwarranted for the kinetic energies relevant here, $T_{\bar p}\lesssim 10$\,GeV, as the underlying theoretical framework neglects several reaction mechanisms of potential importance in this regime, such as Reggeon exchanges or intranuclear cascading~\cite{Kachelriess:2015wpa}.
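Whichever framework is adopted, the resulting cross sections enter the secondary source term through a convolution with the propagated proton flux. The following is a minimal numerical sketch of this folding, with a toy power-law flux and a toy cross-section shape as placeholders; none of the parametrizations above is implemented here:

```python
import numpy as np

# Toy version of the secondary source term,
#   q(T_pbar) ~ 4*pi*n_H * Integral dT_p  Phi_p(T_p) * dsigma/dT_pbar(T_p, T_pbar),
# with placeholder shapes for the proton flux and the differential
# production cross section (illustration only).

N_H = 1.0  # ISM hydrogen density [cm^-3], order of magnitude

def proton_flux(T_p):
    """Toy interstellar proton flux with a ~T^-2.7 power law [a.u.]."""
    return T_p ** -2.7

def dsigma_dT(T_p, T_pbar):
    """Toy differential cross section [a.u.], zero below a schematic
    production threshold that rises with the antiproton energy."""
    return np.where(T_p > 10.0 * T_pbar, 1.0 / T_p, 0.0)

def trapz(y, x):
    """Simple trapezoidal rule (kept explicit for portability)."""
    return np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x))

def source_term(T_pbar, T_p_grid):
    integrand = proton_flux(T_p_grid) * dsigma_dT(T_p_grid, T_pbar)
    return 4.0 * np.pi * N_H * trapz(integrand, T_p_grid)

T_p = np.logspace(0.0, 5.0, 2000)  # proton kinetic energies [GeV]
q = [source_term(T, T_p) for T in (1.0, 10.0, 100.0)]
```

The steeply falling projectile flux makes the source term drop quickly with the antiproton energy, which is why the low-energy behavior of the cross sections matters most in this context.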
A comparison of the resulting source terms from the process $p p \to \bar p X$ for different parametrizations is shown in the left panel of figure~\ref{fig:pbar_sec_source}. It includes the parametrizations of Refs.~\refcite{diMauro:2014zea} and~\refcite{Winkler:2017xor} after (before) the parameter reevaluation in Ref.~\refcite{Korsmeier:2018gcy} [denoted by `Param.~I' (`di Mauro') and `Param.~II' (`Winkler'), respectively] as well as the model of Ref.~\refcite{Kachelriess:2015wpa} (denoted by `KMO'). Furthermore, it shows the $2\sigma$ uncertainties derived in Ref.~\refcite{Korsmeier:2018gcy}. The reevaluated models are roughly consistent with each other within errors. While, towards high energies, the parametrization of Ref.~\refcite{diMauro:2014zea} changed significantly with the parameter update, the one of Ref.~\refcite{Winkler:2017xor} was only affected mildly. Notably, it is consistent with the Monte Carlo-based prediction of Ref.~\refcite{Kachelriess:2015wpa} at high energies. As expected, large discrepancies emerge between the latter and all analytic parametrizations at low energies. \begin{figure}[t] \centering \setlength{\unitlength}{1\textwidth} \begin{picture}(0.96,0.43) \put(0,-0.04){\includegraphics[width=0.5\textwidth]{figs/1802_03030_Fig_5a.pdf}} \put(0.505,-0.04){\includegraphics[width=0.5\textwidth]{figs/1802_03030_Fig_9.pdf}} \end{picture} \caption{Secondary antiproton source term as a function of the antiproton kinetic energy. Left panel: Contribution from $pp\to\bar p +X$ for the parametrizations of Refs.~{\protect\refcite{diMauro:2014zea}} and~{\protect\refcite{Winkler:2017xor}} with re-fitted parameters~\cite{Korsmeier:2018gcy} (denoted by `Param.~I' and `Param.~II', respectively) as well as with the original parameters (denoted by `di Mauro' and `Winkler', respectively). Furthermore, the Monte Carlo model of Ref.~{\protect\refcite{Kachelriess:2015wpa}} is shown (denoted by `KMO').
Right panel: The total source term as well as all its sub-contributions. The shaded bands in both panels report the $2 \sigma$ uncertainty for prompt $\bar p$ production. The additional outer lines in the bottom right panel (showing the relative uncertainty on the total source term) denote the uncertainty due to isospin effects and to hyperon decay. The figure is taken from Ref.~{\protect\refcite{Korsmeier:2018gcy}}. } \label{fig:pbar_sec_source} \end{figure} The second most relevant antiproton production processes are $p$He and He\hspace{0.1ex}$p$ scattering. Note that in all parametrizations these processes have been modeled solely based on re-scaling and extrapolation from $pp$ scattering as well as proton scattering off heavier nuclei, such as C\@. Only recently, the LHCb experiment has provided measurements of the cross section for $p\,\text{He} \to \bar p +X$ utilizing the SMOG device~\cite{Aaij:2018svt}. As shown in Ref.~\refcite{Reinert:2017aga}, the measurements are in remarkable agreement with the predictions of the model from Ref.~\refcite{Winkler:2017xor}. This is an important result, providing confidence in the underlying framework. The data has first been included in a cross-section fit within the reevaluation of Ref.~\refcite{Korsmeier:2018gcy}. The total secondary antiproton source term as well as all its sub-contributions are summarized in the right panel of figure~\ref{fig:pbar_sec_source} (for the parametrization Param.~I). The corresponding relative uncertainty on the total source term is around 10--20\,\%. As the fit typically imposes strong correlations between the cross-section parameters~\cite{Winkler:2017xor,Korsmeier:2018gcy}, it is crucial to take into account the corresponding error correlation matrix in cosmic-ray fits. 
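Schematically, taking such correlations into account amounts to replacing a diagonal $\chi^2$ by one built from a covariance matrix. Below is a minimal sketch with toy numbers, assuming an exponential correlation kernel in log-rigidity with correlation length $l$ and illustrative 4\% errors; these are not the actual covariances of any of the cited fits:

```python
import numpy as np

# Toy correlated chi^2 for a set of rigidity bins. All numbers are
# illustrative; the kernel choice (exponential in log-rigidity) is one
# simple way to encode a finite correlation length.

def covariance(sigma, log_R, l):
    """C_ij = sigma_i * sigma_j * exp(-|x_i - x_j| / l)."""
    dx = np.abs(log_R[:, None] - log_R[None, :])
    return np.outer(sigma, sigma) * np.exp(-dx / l)

def chi2(residuals, cov):
    """chi^2 = r^T C^{-1} r, solved without explicit matrix inversion."""
    return residuals @ np.linalg.solve(cov, residuals)

R = np.logspace(0.0, 2.0, 30)           # rigidity bins [GV]
log_R = np.log10(R)
sigma = 0.04 * np.ones_like(R)          # toy 4% relative errors
residuals = 0.04 * np.sin(3.0 * log_R)  # a smooth, feature-like residual

chi2_uncorrelated = chi2(residuals, np.diag(sigma**2))
chi2_correlated = chi2(residuals, covariance(sigma, log_R, l=0.5))
```

A smooth residual is weighted very differently once bin-to-bin correlations are included, which is the mechanism by which correlations can make or break the significance of a spectral feature.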
The effect of uncertainties in the antiproton production cross section on the tentative dark-matter signal has been studied using B/C to constrain diffusion~\cite{Reinert:2017aga,Cui:2018klo} as well as following the minimal network scenario described in section~\ref{sec:limits} (employing a joint fit of $p$, He and $\bar p / p$)~\cite{Cuoco:2019kuu}. The analysis in Ref.~\refcite{Reinert:2017aga} finds a significant reduction of the global significance of the excess to around $1\sigma$ once the above uncertainties are taken into account. In this study, correlations in the cross-section parameters have been translated into a covariance matrix for the rigidity bins of the measured flux in a Gaussian approximation. In Ref.~\refcite{Cuoco:2019kuu} we have followed the same approach and, in addition, performed a combined fit of cross-section and propagation parameters. While both approaches give comparable results, the significance of the excess is much less affected by the inclusion of cross-section uncertainties in this setup. The different sensitivity to cross-section uncertainties in the two setups might in part be explained by the total flux-normalization freedom in the setup of Ref.~\refcite{Cuoco:2019kuu}, which effectively renders a fully correlated contribution to the cross-section uncertainty redundant. \subsection{Correlation in the AMS-02 data}\label{sec:corr} In the rigidity region of the antiproton excess (around $10\!-\!20$\,GV) the experimental systematic uncertainties reported by AMS-02 dominate over the statistical ones. While these systematics are expected to be subject to sizeable correlations in rigidity, so far this information has not been provided by the AMS-02 collaboration. Hence, a common treatment has been to simply add statistical and systematic uncertainties in quadrature, considering them to be fully uncorrelated.
However, such a treatment can effectively overestimate the uncertainties significantly if a sizeable fraction of the errors is correlated over a wide (or even the full) range of rigidities (the latter of which would amount to an overall normalization uncertainty). In fact, the goodness-of-fit achieved in the analyses discussed above~\cite{Cuoco:2016eej,Cuoco:2017rxb,Cuoco:2019kuu}, typically $\chi^2/\text{d.o.f.}\sim 0.2$, points in this direction. Furthermore, unaccounted error correlations on intermediate scales (\emph{i.e.}~over an intermediate number of rigidity bins) could induce unwanted features, falsely interpreted as an excess in the data. Therefore, a realistic assessment of the underlying correlations is of paramount importance for data interpretation. A first attempt to model these correlations in a data-driven manner has been provided in Ref.~\refcite{Cuoco:2019kuu} utilizing covariance functions characterized by a correlation length. Notably, this analysis has revealed that correlations potentially have a dramatic effect on the significance of the antiproton excess. The modeling in terms of covariance functions has been further refined in Refs.~\refcite{Derome:2019jfs,Boudaud:2019efq} by assigning individual correlation lengths to each sub-contribution of the systematic error. However, the derivation of the individual correlation lengths had to rely on `educated guesses'~\cite{Derome:2019jfs}. \begin{figure*}[t] \centering \setlength{\unitlength}{1\textwidth} \begin{picture}(0.99,0.43) \put(-0.0055,-0.02){\includegraphics[width=0.53\textwidth, trim= {3.3cm 2.2cm 3cm 0.8cm}, clip]{figs/plot_AMS_unc_pbar.pdf}} \put(0.506,-0.02){\includegraphics[width=0.53\textwidth, trim= {3.3cm 2.2cm 3cm 0.8cm}, clip]{figs/plot_AMS_unc_pbarp.pdf}} \end{picture} \caption{ Reconstructed relative systematic uncertainties of the AMS-02 antiproton flux (left) and $\bar{p}/p$ flux ratio data (right).
The individual contributions listed in the legend are ordered according to their size at 10\,GV, as indicated by the arrow. The figure is taken from Ref.~{\protect\refcite{Heisig:2020nse}}. \label{fig:systematicerrors} } \end{figure*} While this is \emph{a priori} due to the limited public availability of information from the collaboration, some correlations entering the analysis had, in fact, not been rigorously computed until very recently~\cite{Heisig:2020nse}. This concerns the uncertainties in the cross sections for (anti)proton absorption in the detector material that the measured fluxes are corrected for. As figure~\ref{fig:systematicerrors} shows, these turn out to be the dominant uncertainties around 10--20\,GV, \emph{i.e.}~in the region most relevant for the excess, both for the antiproton flux and the $\bar{p}/p$ flux ratio. At the same time, measurements of the involved nucleon-carbon\footnote{The AMS-02 detector is dominantly composed of carbon.} absorption cross sections from laboratory experiments often date back to the 1950s to 1980s and may involve unaccounted systematics. Hence, in Ref.~\refcite{Heisig:2020nse} we performed a detailed reevaluation of these cross sections within the Glauber-Gribov theory of inelastic scattering~\cite{Glauber1959,Gribov:1968jf,Pumplin:1968bi}. The theory links the nuclear absorption cross section to the nucleon-nucleon scattering cross sections and nuclear density functions, which are subject to independent experimental measurements. It thus enables a welcome redundancy in the parameter determination that we have exploited in a global fit of the data, see figure~\ref{fig:pbCxs}. Most significantly, the fit allowed us to compute the correlations in the absorption cross-section uncertainties. Note that the respective correlation matrix cannot be characterized by a constant correlation length.
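The structure of such a Glauber-type calculation can be illustrated schematically: the absorption cross section follows from integrating the absorption probability over the impact parameter, with a nuclear thickness function normalized to the mass number. The Gaussian profile and all parameter values below are toy choices, not those of the actual fit:

```python
import numpy as np

# Schematic Glauber-type absorption cross section on carbon:
#   sigma_abs = Integral d^2b [1 - exp(-sigma_NN * T_A(b))],
# with a Gaussian thickness function normalized to the mass number A.
# All parameter values are illustrative only.

A = 12            # mass number of carbon
R_A = 2.7         # Gaussian radius scale [fm] (toy value)
SIGMA_NN = 4.0    # nucleon-nucleon inelastic cross section [fm^2] (~40 mb)

def thickness(b):
    """Gaussian thickness function with normalization Int d^2b T_A(b) = A."""
    return A / (2.0 * np.pi * R_A**2) * np.exp(-b**2 / (2.0 * R_A**2))

def sigma_abs():
    b = np.linspace(0.0, 20.0, 4000)  # impact parameter [fm]
    integrand = 2.0 * np.pi * b * (1.0 - np.exp(-SIGMA_NN * thickness(b)))
    return np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(b))

# Shadowing: overlapping nucleons imply sigma_abs < A * SIGMA_NN.
sigma_abs_pC = sigma_abs()
```

The inequality in the last comment follows from $1-e^{-x}<x$; it is the redundancy between the nucleon-level inputs and the nuclear result that makes a global fit of this kind constraining.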
Compared to the estimate in Ref.~\refcite{Boudaud:2019efq}, it tends to provide stronger large-scale correlations, \emph{i.e.}~correlations over a wider range of rigidity bins. \begin{figure*}[t] \centering \setlength{\unitlength}{1\textwidth} \begin{picture}(0.52,0.35) \put(-0.0055,-0.03){\includegraphics[width=0.55\textwidth, trim= {3.3cm 2.2cm 3cm 2cm}, clip]{figs/plot_sigmaabs_pbC_rev2020-03_ne.pdf}} \end{picture} \caption{ Absorption cross section for $\bar p$C as a function of the antiproton momentum $p_\text{lab}$. The solid dark green curve and green shaded band denote the best-fit cross section and its $1\sigma$ uncertainty, respectively, within our global fit. Data points (containing $1\sigma$ error bars) of different experiments are denoted by individual symbols. For comparison, the corresponding cross section used in the AMS-02 analyses is shown (dashed curve and gray shaded band). The figure is taken from Ref.~{\protect\refcite{Heisig:2020nse}}. \label{fig:pbCxs} } \end{figure*} For the antiproton flux, the second most relevant uncertainty stems from the effective acceptance, which derives from a comparison of the detector response between data and Monte Carlo simulation. It is the only relevant uncertainty that exhibits strong correlations on short scales, \emph{i.e.}~between a few neighboring bins~\cite{Boudaud:2019efq,Heisig:2020nse}. This contribution is, however, absent in the $\bar p/p$ flux ratio as it cancels out. Considering all the uncertainties discussed so far allows us to revisit our conclusions on the possible existence of a dark-matter contribution in cosmic-ray antiprotons. As shown in Ref.~\refcite{Heisig:2020nse}, when fitting the $\bar p$ flux, the preference for a signal is entirely gone once uncertainties from the antiproton production cross section and the correlations in the AMS-02 data are taken into account.
This is independent of whether the propagation model is constrained in a joint fit together with proton and He (minimal network setup of Ref.~\refcite{Cuoco:2019kuu}) or through a fit of the B/C flux ratio (setup of Ref.~\refcite{Reinert:2017aga}). In particular, the effective acceptance error, which exclusively introduces sizable small-scale correlations, appears to be capable of `absorbing' the sharp spectral feature seen as the excess. However, when fitting $\bar p/p$, additional freedom in the diffusion model at low rigidities is needed [in this case parametrized by $\eta\neq1$; \emph{cf.}~eq.~\eqref{eq:diff}] to eliminate the preference for dark matter. This appears to be in line with the conclusions drawn in~Ref.~\refcite{Boudaud:2019efq}, showing that antiprotons can be well fitted when allowing either for $\eta\neq1$ or for a low-rigidity break in diffusion (while taking into account the above-discussed uncertainties, although estimated slightly differently). However, we emphasize that neither analysis excludes the possibility of a dark-matter signal. Note that the correlations in the effective acceptance error (derived in Ref.~\refcite{Heisig:2020nse} in a data-driven approach) play a crucial role in the analysis, as this error enters all measured (absolute) fluxes. First-hand information about this contribution would hence be a valuable input to the discussion. \section{Summary and conclusions}\label{sec:sum} Cosmic-ray antiprotons constitute a remarkable diagnostic tool for the study of astroparticle-physics processes in our Galaxy. Certainly, the bulk of the measured antiprotons is consistent with a secondary origin arising from collisions of primary cosmic-ray nuclei with the interstellar gas.
However, with new data from the AMS-02 experiment, uncertainties are -- for the first time -- at the percent level, equipping us with encouraging prospects to pinpoint a possible primary component of antiprotons, either of astrophysical origin or of exotic nature, such as dark-matter annihilation. In this article, we reviewed recent developments in the search for the latter. For heavy dark matter, $m_\text{DM}>200\,\text{GeV}$, a joint fit of propagation and dark-matter parameters in the `minimal network' scenario (using $\bar p /p $, $p$ and He only) has led to strong limits on the annihilation cross section, excluding the canonical value, $3\times 10^{-26}\,\text{cm}^3/\text{s}$, up to dark-matter masses around 800\,GeV for a variety of non-leptonic channels. This analysis considers uncertainties in the propagation model by profiling over its parameters in the fit. Being largely insensitive to the choice of the dark-matter density profile in the Galaxy, the limits are robust and among the strongest current limits on self-annihilating dark matter. Similar results have been obtained using B/C to constrain propagation. Probing smaller dark-matter masses, around or below 100\,GeV, the data supports a possible hint of an annihilation signal. This excess -- originating from a subtle spectral anomaly around $\mathcal{R}= 10\!-\!20\:\text{GV}$ -- has been seen by several groups using different analysis setups. Interestingly, the signal is compatible with a thermal annihilation cross section for frozen-out dark matter as well as with a dark-matter interpretation of the gamma-ray Galactic center excess. While the different studies agree on the preferred dark-matter properties, they draw very different conclusions on the significance of the excess, calling for further scrutiny of the finding. Several systematic uncertainties have been assessed that could have `faked' the signal.
An important one comes from our limited knowledge of the secondary antiproton production cross sections in the kinematic regime relevant for the antiproton source term. With new experimental results (most recently, LHCb data on $p \,\text{He}\to \bar p +X$) and recent progress on the modeling of the hyperon and antineutron contributions as well as scaling violations, their description has significantly improved over the last couple of years. However, the models largely rely on input from old data, in particular in the low-energy regime. The involved experimental systematics, \emph{e.g.}~regarding the hyperon contribution, are often poorly known. Another important aspect is the presence of correlations in the experimental errors of the AMS-02 measurements, which have not been provided by the collaboration. In the rigidity region of interest, around 10--20\,GV, errors of the antiproton flux (or flux-ratio) are dominated by systematics. The most relevant ones are uncertainties in the cross sections for cosmic-ray absorption in the AMS-02 detector the measured fluxes are corrected for. A careful reevaluation of these absorption cross sections, that led to the computation of the corresponding error correlations, has only been performed and made publicly available recently. Remarkably, the consideration of all these uncertainties eliminates the statistical preference for an additional contribution from dark matter in various analysis setups. While these findings cast severe doubts on the robustness of the excess, the situation is not fully conclusive. Correlated uncertainties in the `effective acceptance' (that have as well been incorporated but could only be estimated on the basis of the limited information publicly available) and the modeling of the diffusion coefficient at low energies also play an important role. They motivate further investigations. 
Fully exploiting the wealth of data from AMS-02 requires various improvements on both the experimental and theoretical sides. First, the provision of the covariances for key systematic errors such as the effective acceptance would settle doubts about estimates made outside the collaboration. Second, taking advantage of the recent progress in the computation of cosmic-ray absorption cross sections, a reevaluation of the measured fluxes and their uncertainties could be performed by the collaboration. Third, to further gain sensitivity to the low-rigidity behavior of the antiproton spectrum, solar modulation may have to be incorporated beyond an improved force-field approximation, making use of the time-resolved data provided. Finally, an independent test of the excess and its dark-matter interpretation can only be done by a multi-messenger approach. In particular, observations of low-energy antideuterons provide a low-background search channel with promising prospects for future experiments~\cite{Aramaki:2015pii,Korsmeier:2017xzj,vonDoetinchem:2020vbj}. \section*{Acknowledgments} I thank Alessandro Cuoco and Martin W.~Winkler for valuable comments on the manuscript. I acknowledge support from the F.R.S.-FNRS, of which I am a postdoctoral researcher.
2012.04021
\section{Introduction} Two-dimensional nanostructured materials are promising cost-efficient solutions for developing novel flat optoelectronic applications \cite{sun2016optical,gupta2015recent,zhang2016van,yu2020two}. Their controllable size allows for electronic wave function confinement, which is desirable for designing semiconducting devices \cite{zhang2016van}. Among the materials that share this property, hexagonal boron nitride (h-BN) \cite{bechelany2008preparation,han2008structure,warner2010atomic,zhang2015two} and graphene \cite{novoselov2004electric,geim2007rise,geim2009graphene} sheets stand out. Nanoribbons are obtained by extracting a strip of the corresponding material, and they present a quasi-one-dimensional nature \cite{son2006energy,han2007energy,son2006half}. Even after producing the graphene or h-BN nanoribbon, the lattice parameter of the derived system differs by only 0.03 \r{A} from that of the original one \cite{bokai2020hybrid}. Despite the structural similarities between h-BN and graphene, it is well known that their electronic nature is substantially different \cite{sachs2011adhesion}. Recently, several experimental \cite{wang2020towards,liu2013plane,yang2013epitaxial,song2010large,GONZALEZORTIZ2020100107,beniwal2017graphene,wu2012nitrogen,maeda2017orientation} and theoretical \cite{leon2019interface,brugger2009comparison,kaloni2012electronic,giovannetti2007substrate,slawinska2010energy,dos2019defective,dos2019electronic,zhang2015two} studies were carried out to propose routes for the precise control of the graphene and h-BN bandgaps. Among these routes, substitutional doping and the reshaping of the lattice structure have been widely used \cite{doi:10.1021/jp402297n,chen2018carbon,bokdam2011electrostatic,doi:10.1021/acs.nanolett.6b03709,doi.org/10.1002/asia.201500450,enyashin2011graphene,hirsch2010era,gomes2013stability,zhao2013local}.
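The close lattice matching of the two materials is a one-line arithmetic check; the lattice constants below are approximate literature values assumed for illustration (not taken from the cited references):

```python
# Approximate lattice constants (assumed values for illustration).
a_graphene = 2.46  # graphene lattice constant [angstrom]
a_hBN = 2.50       # h-BN lattice constant [angstrom]

# Relative mismatch of the two in-plane lattice constants.
mismatch = (a_hBN - a_graphene) / a_graphene
print(f"lattice mismatch: {mismatch:.1%}")
```

With these values the mismatch comes out at the percent level, consistent with the small difference quoted above.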
The creation of 2D in-plane graphene/h-BN heterojunctions with controlled domain sizes was experimentally demonstrated by using lithography patterning and sequential CVD growth steps \cite{liu2013plane}. Through this technique, the shapes of graphene and h-BN domains were controlled precisely, and sharp graphene/h-BN interfaces were created. Other materials such as MoS$_2$, AlN, and GaN have some structural properties similar to graphene \cite{splendiani2010emerging,lopez2013ultrasensitive}. However, monolayer graphene and h-BN have a lattice mismatch of only about 1.5 \% \cite{C7RA00260B}. Moreover, in the h-BN monolayer, the difference in on-site energy between nitrogen and boron atoms leads to a wide bandgap of about 5.9 eV, which can help to open a bandgap in graphene when a heterojunction between the two materials is produced \cite{C7RA00260B}. Herein, motivated by the recent achievements in the synthesis of in-plane h-BN/graphene heterojunctions \cite{liu2013plane,wang2020towards}, we used density functional theory (DFT) calculations to study the structural and electronic properties of zigzag and armchair graphene (h-BN) nanoribbons containing h-BN (graphene) domains of different sizes. Our findings revealed that the in-plane heterojunctions of h-BN/graphene nanoribbons present a ferromagnetic behavior, which can be useful for magnetic applications at the nanoscale. \section{Details of Modeling} The DFT calculations of the electronic and structural properties of the in-plane h-BN/graphene nanoribbon heterojunctions were performed using the SIESTA code \cite{soler2002siesta}. These calculations were conducted within the framework of the generalized gradient approximation (GGA) with localized basis sets \cite{PhysRevLett.77.3865}, and the exchange-correlation functional was described by the Perdew–Burke–Ernzerhof (PBE) scheme \cite{PhysRevLett.77.3865,PhysRevLett.80.891}.
To treat the electron-core interaction, we used the Troullier–Martins norm-conserving pseudopotentials \cite{PhysRevB.64.235111}. Polarization effects were included, and the Kohn–Sham orbitals were expanded in a double-$\zeta$ basis \cite{PhysRevB.64.235111}. The structural relaxation of all model h-BN/graphene heterojunctions studied here was carried out until the force on each atom was less than $10^{-3}$ eV/\r{A} and the total energy change between self-consistency steps reached a value less than or equal to $10^{-5}$ eV. The Brillouin zone was sampled by a fine $15\times 15\times 1$ grid, and a mesh cutoff of 200 Ry was used to determine the self-consistent charge density. A supercell geometry was adopted with a vacuum distance of 30 \r{A} to avoid interactions between each structure and its periodic images. Figure \ref{fig:systems} illustrates the model graphene/h-BN heterojunctions studied here. Figure \ref{fig:systems}(a) represents the heterojunctions in which a zigzag h-BN nanoribbon contains a central graphene domain (named from now on $n$-ZZGBNNRs), and Figure \ref{fig:systems}(b) depicts the cases where a graphene nanoribbon is endowed with an h-BN domain (named from now on $m$-ZZGBNNRs). The indexes $n$ and $m$ denote the number of zigzag lines in the graphene and h-BN domains, respectively. The number of zigzag lines, which sets the domain size, was varied from 1 up to 12. The inset panels in Figure \ref{fig:systems} present the C-C, B-C, and B-N bond lengths for the ground-state structures of each case. \begin{figure}[!htb] \centering \includegraphics[width=0.9\linewidth]{figure1.pdf} \caption{Diagrammatic representation of (a) $n$-ZZGBNNRs and (b) $m$-ZZGBNNRs.} \label{fig:systems} \end{figure} \section{Results} We begin our discussion by presenting the total charge density for all the model heterojunctions studied here. Figures \ref{fig:charge}(a) and \ref{fig:charge}(b) illustrate the cases $n$-ZZGBNNRs and $m$-ZZGBNNRs, respectively.
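The convergence criteria quoted above can be mimicked with a toy relaxation loop; the quadratic energy model below is a stand-in for the real DFT total energy (illustration only, not part of the SIESTA workflow):

```python
import numpy as np

# Toy structural relaxation illustrating the stopping criteria used above:
# every force component below 1e-3 eV/angstrom and an energy change per
# step below 1e-5 eV. The quadratic model stands in for the DFT energy.

F_TOL = 1e-3  # force tolerance [eV/angstrom]
E_TOL = 1e-5  # energy-change tolerance [eV]

def energy(x):
    return float(np.sum((x - 1.0) ** 2))  # toy surface, minimum at x = 1

def forces(x):
    return -2.0 * (x - 1.0)               # F = -dE/dx

x = np.zeros(3)                           # toy "atomic coordinates"
e_old = energy(x)
for step in range(10000):
    f = forces(x)
    x += 0.1 * f                          # steepest-descent step
    e_new = energy(x)
    if np.max(np.abs(f)) < F_TOL and abs(e_old - e_new) < E_TOL:
        break                             # both criteria satisfied
    e_old = e_new
```

Requiring both a force and an energy criterion, as done in the actual calculations, guards against stopping in a flat region of the energy surface where forces are still finite.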
As a general trend, one can note in these figures that the charge density is localized at the edges of the nanoribbons, regardless of the domain size. In Figure \ref{fig:charge}(a) one can see that the left edges of the nanoribbons possess a higher charge concentration than the right ones. This charge-distribution signature is related to the distinct atomic configurations at the edges. On the left edge, there are boron atoms with only one electron occupying the p-orbital, whereas the right edge is dominated by nitrogen atoms with three electrons occupying the p-orbital. In this sense, the left edge has a more pronounced electronegativity, which contributes to the symmetry breaking in the charge distribution pattern illustrated in Figure \ref{fig:charge}. On the other hand, in the $m$-ZZGBNNR cases (Figure \ref{fig:charge}(b)), one can see that the charge density is symmetrically distributed over the ribbon's edges since their terminations are identical, i.e., only carbon atoms with the same electronegativity are present. \begin{figure}[!htb] \centering \includegraphics[width=0.9\linewidth]{figure2.pdf} \caption{Schematic representation of the total charge density, obtained from the spin density difference $\rho(\uparrow)-\rho(\downarrow)$.} \label{fig:charge} \end{figure} Figures \ref{fig:bgn} and \ref{fig:bgm} show the band structures for the $n$-ZZGBNNR and $m$-ZZGBNNR systems, respectively, for the cases presented in Figure \ref{fig:charge}. The related partial density of states (PDOS) is depicted alongside the band structures. In these figures, one can note the clear symmetry breaking between the two spin channels --- spin-up (orange lines) and spin-down (red lines) --- that cross the Fermi level. Such a spin splitting of the bands suggests a strong ferromagnetic behavior. This symmetry breaking in the energy levels is a consequence of the charge density localization pattern presented in Figure \ref{fig:charge}(a).
Importantly, this ferromagnetic behavior was not predicted in previous theoretical studies using tight-binding and plane-wave DFT methods \cite{leon2019interface}. The PDOS shows that the major contribution to the intragap levels comes from boron and nitrogen atoms. These levels belong to the chemical species that compose the ribbon's edges, where the net charge is concentrated due to the presence of dangling bonds. In this sense, the edge states are crucial in characterizing the electronic transport and the magnetic moment of these h-BN/graphene heterojunctions. In Figure \ref{fig:bgm}, we note that the energy levels near the Fermi level for the $m$-ZZGBNNR band structures present a higher degree of symmetry when contrasted with the $n$-ZZGBNNR cases. As expected, the PDOS shows a major contribution of carbon atoms to these levels. The overall ferromagnetic behavior is weaker in the $m$-ZZGBNNR cases due to the higher degree of symmetry shown by the energy levels that cross the Fermi level. \begin{figure}[!htb] \centering \includegraphics[width=0.9\linewidth]{figure3.pdf} \caption{Electronic band structure for the $n$-ZZGBNNRs related to the cases presented in Figure \ref{fig:charge}(a).} \label{fig:bgn} \end{figure} \begin{figure}[!htb] \centering \includegraphics[width=0.9\linewidth]{figure4.pdf} \caption{Electronic band structure for the $m$-ZZGBNNRs related to the cases presented in Figure \ref{fig:charge}(b).} \label{fig:bgm} \end{figure} \section{Conclusion} In summary, the electronic and structural properties of in-plane heterojunctions of h-BN/graphene nanoribbons were numerically studied using DFT calculations. The model heterojunctions studied were composed of a zigzag h-BN nanoribbon containing a central graphene domain (named $n$-ZZGBNNRs) and a zigzag graphene nanoribbon endowed with an h-BN domain (named $m$-ZZGBNNRs). The indexes $n$ and $m$ denote the number of zigzag lines in the graphene and h-BN domains, respectively.
These indexes define the domain size. Our computational protocol was based on performing DFT calculations on heterojunctions of different domain sizes. Results showed that the charge density is localized at the edges of the heterojunctions, regardless of the domain size. As a consequence, these heterojunctions presented a ferromagnetic behavior, which can be interesting for magnetic applications in flat optoelectronics. \section{Acknowledgments} The authors gratefully acknowledge the financial support from the Brazilian Research Councils CNPq, CAPES, and FAPDF, and from CENAPAD-SP for providing the computational facilities. W.F.G. gratefully acknowledges the financial support from FAP-DF grant $0193.0000248/2019-32$. L.A.R.J. gratefully acknowledges the financial support from CNPq grant $302236/2018-0$. R.T.S.J. gratefully acknowledges, respectively, the financial support from CNPq grant $465741/2014-2$, CAPES grants $88887.144009/2017-00$, and FAP-DF grants $0193.001366/2016$ and $0193.001365/2016$. L.A.R.J. gratefully acknowledges the financial support from DPI/DIRPE/UnB (Edital DPI/DPG $03/2020$) grant $23106.057541/2020-89$ and from IFD/UnB (Edital $01/2020$) grant $23106.090790/2020-86$. D.A.S.F. acknowledges the financial support from the Edital DPI - UnB N. $04/2019$, from CNPq (grants $305975/2019-6$ and $420836/2018-7$) and FAP-DF grants $193.001.596/2017$ and $193.001.284/2016$. \bibliographystyle{iopart-num}
\chapter{Hardware} \label{app:hardware} This appendix gives an overview of the processors used throughout this work and their relevant properties. Note that, while the single-threaded peak performance is, where appropriate, based on the processors' maximum turbo frequency, the multi-threaded peak performance is instead computed from the base frequency. Furthermore, we only list the vector instructions that allow a processor to reach its theoretical peak performance. \section{\hwstyle Harpertown E5450} \label{hardware:E5450} \href{http://ark.intel.com/products/33083/Intel-Xeon-Processor-E5450-12M-Cache-3_00-GHz-1333-MHz-FSB}{\nolinkurl{http://ark.intel.com/products/33083/Intel-Xeon-Processor-E5450-}\\\nolinkurl{12M-Cache-3_00-GHz-1333-MHz-FSB}} Our {\namestyle Harpertown E5450}s were part of our compute cluster. Because they were disposed of in mid~2016, they are only used in a part of this work's performance analyses. \begin{hwtable} Name &\namestyle Intel\textsuperscript\textregistered{} Xeon\textsuperscript\textregistered{} Processor E5450 \\ Codename &\namestyle Harpertown \\ Lithography &\SI{45}{\nano\meter} \\ Release &Q4 2007 \\ Cores / Threads &4 / 4 \\ Base Frequency &\SI{3.00}{\giga\hertz} \\ Peak Performance &\SI{12}{\giga\flops\per\second} (single-threaded) \\\nopagebreak &\SI{48}{\giga\flops\per\second} (all cores) \\ Peak Bandwidth &\SI{10.6}{\giga\byte\per\second} \\ L2~cache &\SI6{\mebi\byte} {\em per 2~cores}, 24-way set associative\\ L1d~cache &\SI{32}{\kibi\byte} per core, 8-way set-associative \\ Vector Instructions &1 SSE FMUL + 1 SSE FADD per cycle \\\nopagebreak &$= \SI4{\flops\per\cycle}$ \\ \end{hwtable} \section{\hwstyle Sandy Bridge-EP E5-2670} \label{hardware:E5-2670} \href{http://ark.intel.com/products/64595/Intel-Xeon-Processor-E5-2670-20M-Cache-2_60-GHz-8_00-GTs-Intel-QPI}{\nolinkurl{http://ark.intel.com/products/64595/Intel-Xeon-Processor-E5-2670-}\\\nolinkurl{20M-Cache-2_60-GHz-8_00-GTs-Intel-QPI}} Our {\namestyle Sandy Bridge-EP E5-2670}s are part of our compute cluster. \intel{} \turboboost is disabled on these machines unless otherwise stated. \begin{hwtable} Name &\namestyle Intel\textsuperscript\textregistered{} Xeon\textsuperscript\textregistered{} Processor E5-2670 \\ Codename &\namestyle Sandy Bridge-EP \\ Lithography &\SI{32}{\nano\meter} \\ Release &Q1 2012 \\ Cores / Threads &8 / 16 \\ Base Frequency &\SI{2.60}{\giga\hertz} \\ Max Turbo Frequency &\SI{3.30}{\giga\hertz} ({\em disabled unless otherwise stated})\\ Peak Performance &\SI{20.8}{\giga\flops\per\second} (single-threaded) \\\nopagebreak &\SI{166.4}{\giga\flops\per\second} (all cores) \\ Peak Bandwidth &\SI{51.2}{\giga\byte\per\second} \\ L3~cache &\SI{20}{\mebi\byte} shared, 20-way set associative \\ L2~cache &\SI{256}{\kibi\byte} per core, 8-way set associative \\ L1d~cache &\SI{32}{\kibi\byte} per core, 8-way set-associative \\ Vector Instructions &1 AVX FMUL + 1 AVX FADD per cycle \\\nopagebreak &$= \SI8{\flops\per\cycle}$ \\ \end{hwtable} \section{\hwstyle Ivy Bridge-EP E5-2680 v2} \label{hardware:E5-2680 v2} \href{http://ark.intel.com/products/75277/Intel-Xeon-Processor-E5-2680-v2-25M-Cache-2_80-GHz}{\nolinkurl{http://ark.intel.com/products/75277/Intel-Xeon-Processor-E5-2680-}\\\nolinkurl{v2-25M-Cache-2_80-GHz}} Our {\namestyle Ivy Bridge-EP E5-2680 v2}s are part of our compute cluster.
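The peak-performance entries in these tables follow directly from frequency, \flops{} per cycle, and core count: single-threaded peaks use the maximum turbo frequency where enabled, while multi-threaded peaks use the base frequency, as noted at the beginning of this appendix. As a minimal numeric cross-check (a sketch; the values are copied from the tables in this appendix):

```python
# Sanity check of the peak-performance entries:
# GFLOPs/s = frequency (GHz) x FLOPs/cycle (x cores for the all-core peak).
def peak_gflops(freq_ghz, flops_per_cycle, cores=1):
    return freq_ghz * flops_per_cycle * cores

# Sandy Bridge-EP E5-2670: 1 AVX FMUL + 1 AVX FADD = 8 FLOPs/cycle,
# 2.60 GHz base frequency (Turbo Boost disabled), 8 cores.
print(peak_gflops(2.60, 8))      # -> 20.8 (single-threaded)
print(peak_gflops(2.60, 8, 8))   # -> 166.4 (all cores)
```

The same arithmetic reproduces the other tables, e.g., the Haswell-EP E5-2680~v3 with 2 AVX FMA = 16 \flops{} per cycle: $3.30 \times 16 = 52.8$ and $2.50 \times 16 \times 12 = 480$.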
\begin{hwtable} Name &\namestyle Intel\textsuperscript\textregistered{} Xeon\textsuperscript\textregistered{} Processor E5-2680 v2\\ Codename &\namestyle Ivy Bridge-EP \\ Lithography &\SI{22}{\nano\meter} \\ Release &Q3 2013 \\ Cores / Threads &10 / 20 \\ Base Frequency &\SI{2.80}{\giga\hertz} \\ Max Turbo Frequency &\SI{3.60}{\giga\hertz} \\ Peak Performance &\SI{28.8}{\giga\flops\per\second} (single-threaded) \\\nopagebreak &\SI{224}{\giga\flops\per\second} (all cores) \\ Peak Bandwidth &\SI{59.7}{\giga\byte\per\second} \\ L3~cache &\SI{25}{\mebi\byte} shared, 20-way set associative \\ L2~cache &\SI{256}{\kibi\byte} per core, 8-way set associative \\ L1d~cache &\SI{32}{\kibi\byte} per core, 8-way set-associative \\ Vector Instructions &1 AVX FMUL + 1 AVX FADD per cycle \\\nopagebreak &$= \SI8{\flops\per\cycle}$ \\ \end{hwtable} \section{\hwstyle Haswell-EP E5-2680 v3} \label{hardware:E5-2680 v3} \href{http://ark.intel.com/products/81908/Intel-Xeon-Processor-E5-2680-v3-30M-Cache-2_50-GHz}{\nolinkurl{http://ark.intel.com/products/81908/Intel-Xeon-Processor-E5-2680-}\\\nolinkurl{v3-30M-Cache-2_50-GHz}} Our {\namestyle Haswell-EP E5-2680 v3}s are part of our compute cluster. 
\begin{hwtable} Name &\namestyle Intel\textsuperscript\textregistered{} Xeon\textsuperscript\textregistered{} Processor E5-2680 v3\\ Codename &\namestyle Haswell-EP \\ Lithography &\SI{22}{\nano\meter} \\ Release &Q3 2014 \\ Cores / Threads &12 / 24 \\ Base Frequency &\SI{2.50}{\giga\hertz} \\ Max Turbo Frequency &\SI{3.30}{\giga\hertz} \\ Peak Performance &\SI{52.8}{\giga\flops\per\second} (single-threaded) \\\nopagebreak &\SI{480}{\giga\flops\per\second} (all cores) \\ Peak Bandwidth &\SI{68}{\giga\byte\per\second} \\ L3~cache &\SI{30}{\mebi\byte} shared, 20-way set associative \\ L2~cache &\SI{256}{\kibi\byte} per core, 8-way set associative \\ L1d~cache &\SI{32}{\kibi\byte} per core, 8-way set-associative \\ Vector Instructions &2 AVX FMA per cycle \\\nopagebreak &$= \SI{16}{\flops\per\cycle}$ \\ \end{hwtable} \section{\hwstyle Broadwell i7-5557U} \label{hardware:i7-5557U} \href{https://ark.intel.com/products/84993/Intel-Core-i7-5557U-Processor-4M-Cache-up-to-3_40-GHz}{\nolinkurl{https://ark.intel.com/products/84993/Intel-Core-i7-5557U-}\\\nolinkurl{Processor-4M-Cache-up-to-3_40-GHz}} Our {\namestyle Broadwell i7-5557U} is part of a {\namestyle MacBook Pro}. 
\begin{hwtable} Name &\namestyle Intel\textsuperscript\textregistered{} Core\texttrademark{} i7-5557U Processor \\ Codename &\namestyle Broadwell-U \\ Lithography &\SI{14}{\nano\meter} \\ Release &Q1 2015 \\ Cores / Threads &2 / 4 \\ Base Frequency &\SI{3.10}{\giga\hertz} \\ Max Turbo Frequency &\SI{3.40}{\giga\hertz} \\ Peak Performance &\SI{54.4}{\giga\flops\per\second} (single-threaded) \\\nopagebreak &\SI{99.2}{\giga\flops\per\second} (all cores) \\ Peak Bandwidth &\SI{25.6}{\giga\byte\per\second} \\ L3~cache &\SI4{\mebi\byte} shared, 16-way set associative \\ L2~cache &\SI{256}{\kibi\byte} per core, 8-way set associative \\ L1d~cache &\SI{32}{\kibi\byte} per core, 8-way set-associative \\ Vector Instructions &2 AVX FMA per cycle \\\nopagebreak &$= \SI{16}{\flops\per\cycle}$ \\ \end{hwtable} \subsection{\blasl1} \routinedoc{dcopy, arguments={ n=dimension $n$, x=vector $\dv x \in \R^n$, incx=increment for \dv x, y=vector $\dv y \in \R^n$, incy=increment for \dv y }, description={double-precision vector copy}, operations={$\dv y \coloneqq \dv x$}, flops=0, datavol=$2 n$, datamov=$2 n$, } \routinedoc{dswap, arguments={ n=dimension $n$, x=vector $\dv x \in \R^n$, incx=increment for \dv x, y=vector $\dv y \in \R^n$, incy=increment for \dv y }, description={double-precision vector swap}, operations={${\dv x, \dv y \coloneqq \dv y, \dv x}$}, flops=0, datavol=$2 n$, datamov=$4 n$, } \routinedoc{daxpy, arguments={ n=dimension $n$, alpha=scalar $\alpha$, x=vector $\dv x \in \R^n$, incx=increment for \dv x, y=vector $\dv y \in \R^n$, incy=increment for \dv y }, description={double-precision scaled vector addition}, operations={$\dv y \coloneqq \alpha \dv x + \dv y$}, flops=$2 n$, datavol=$2 n$, datamov=$3 n$, } \routinedoc{ddot, arguments={ n=dimension $n$, x=vector $\dv x \in \R^n$, incx=increment for \dv x, y=vector $\dv y \in \R^n$, incy=increment for \dv y }, description={double-precision inner vector product}, operations={${\alpha \coloneqq \dm[height=0, ']x
\dv y}$}, flops=$2 n$, datavol=$2 n$, datamov=$2 n$, } \subsection{\blasl2} \routinedoc{dgemv, arguments={ trans=\dm A is transposed, m=dimension $m$, n=dimension $n$, alpha=scalar $\alpha$, A=matrix $\dm A \in \R^{m \times n}$, ldA=leading dimension for \dm A, x={vector $\dv x \in \begin{cases} \R^n &\text{if } \code{trans} = \code N\\ \R^m &\text{else} \end{cases}$}, incx=increment for \dv x, beta=scalar $\beta$, y={vector $\dv y \in \begin{cases} \R^m &\text{if } \code{trans} = \code N\\ \R^n &\text{else} \end{cases}$}, incy=increment for \dv y }, description={double-precision matrix-vector product}, operations={ {$\dv y \coloneqq \alpha \dm A \matmatsep \dv x + \beta\dv y$}, {$\dv y \coloneqq \alpha \dm[']A \dv x + \beta\dv y$} }, flops=$2 m n$, datavol={$\begin{array}{ll} m n + m &\text{if } \code{trans} = \code N\\ m n + n &\text{else} \end{array}$}, datamov={$\begin{array}{ll} m n + 2 m &\text{if } \code{trans} = \code N\\ m n + 2 n &\text{else} \end{array}$}, } \routinedoc{dger, arguments={ m=dimension $m$, n=dimension $n$, alpha=scalar $\alpha$, x=vector $\dv x \in \R^m$, incx=increment for \dv x, y=vector $\dv y \in \R^n$, incy=increment for \dv y, A=matrix $\dm A \in \R^{m \times n}$, ldA=leading dimension for \dm A }, description={double-precision vector outer product}, operations={${\dm A \coloneqq \alpha \dv x \dm[height=0, ']y + \dm A}$}, flops=$2 m n$, datavol=$m n + m + n$, datamov=$2 m n + m + n$, } \routinedoc{dtrsv, arguments={ uplo=\dm[lower]A is lower- or upper-triangular, trans=\dm[lower]A is transposed, diag=\dm[lower]A is unit triangular, n=dimension $n$, A=matrix $\dm[lower]A \in \R^{n \times n}$, ldA=leading dimension for \dm[lower]A, x=vector $\dv x \in \R^n$, incx=increment for \dv x }, description={double-precision triangular linear system solve}, operations={ {$\dv x \coloneqq \dm[lower, inv]A \dv x$}, {$\dv x \coloneqq \dm[lower, inv']A \dv x$} }, flops=$n^2$, datavol={$\frac12 n (n + 1) + n$}, datamov={$\frac12 n (n + 1) + 2 n$} }
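To make the argument conventions of these routines concrete, the following sketch (plain Python, {\em not} the actual \blas) emulates \dgemv{} for the non-transposed case on a flat, column-major array; the parameter names mirror the argument lists above, and the addressing uses \code{A[i + j * ldA]} and \code{x[i * incx]} exactly as described in the storage-format section.

```python
# Sketch of dgemv's semantics (trans = 'N'): y := alpha*A*x + beta*y,
# with A stored column-major in a flat list. This is an illustration of
# how ldA, incx, and incy address the data, not a real BLAS kernel.
def dgemv_n(m, n, alpha, A, ldA, x, incx, beta, y, incy):
    for i in range(m):
        acc = 0.0
        for j in range(n):
            acc += A[i + j * ldA] * x[j * incx]  # a_ij lives at A[i + j*ldA]
        y[i * incy] = alpha * acc + beta * y[i * incy]
    return y

# A = [[1, 2], [3, 4]] stored column by column (ldA = m = 2):
A = [1.0, 3.0, 2.0, 4.0]
y = dgemv_n(2, 2, 1.0, A, 2, [1.0, 1.0], 1, 0.0, [0.0, 0.0], 1)
print(y)  # [3.0, 7.0]
```

The two nested loops also make the minimal \flop-count of $2 m n$ from the table visible: one multiply and one add per matrix element.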
\subsection{\blasl3} \routinedoc{dgemm, arguments={ transA=\dm A is transposed, transB=\dm B is transposed, m=dimension $m$, n=dimension $n$, k=dimension $k$, alpha=scalar $\alpha$, A={matrix $\dm A \in \begin{cases} \R^{m \times k} &\text{if } \code{transA} = \code N\\ \R^{k \times m} &\text{else} \end{cases}$}, ldA=leading dimension for \dm A, B={matrix $\dm B \in \begin{cases} \R^{k \times n} &\text{if } \code{transB} = \code N\\ \R^{n \times k} &\text{else} \end{cases}$}, ldB=leading dimension for \dm B, beta=scalar $\beta$, C={matrix $\dm C \in \R^{m \times n}$}, ldC=leading dimension for \dm C }, description={double-precision matrix-matrix product}, operations={ {$\dm C \coloneqq \alpha \dm A \matmatsep \dm B + \beta \dm C$}, {$\dm C \coloneqq \alpha \dm A \matmatsep \dm[']B + \beta \dm C$}, {$\dm C \coloneqq \alpha \dm[']A \matmatsep \dm B + \beta \dm C$}, {$\dm C \coloneqq \alpha \dm[']A \matmatsep \dm[']B + \beta \dm C$} }, flops=$2 m n k$, datavol=$m k + k n + m n$, datamov=$m k + k n + 2 m n$, } \routinedoc{dsymm, arguments={ side=\dm A is on the left or right of \dm B, uplo=\dm A is in lower- or upper-triangular storage, m=dimension $m$, n=dimension $n$, alpha=scalar $\alpha$, A={matrix $\dm A \in \begin{cases} \R^{m \times m} &\text{if } \code{side} = \code L\\ \R^{n \times n} &\text{else} \end{cases}$}, ldA=leading dimension for \dm A, B={matrix $\dm B \in \R^{m \times n}$}, ldB=leading dimension for \dm B, beta=scalar $\beta$, C={matrix $\dm C \in \R^{m \times n}$}, ldC=leading dimension for \dm C }, description={double-precision symmetric matrix-matrix product}, operations={ {$\dm C \coloneqq \alpha \dm A \matmatsep \dm B + \beta \dm C$}, {$\dm C \coloneqq \alpha \dm B \matmatsep \dm A + \beta \dm C$} }, flops={$\begin{array}{ll} 2 m^2 n &\text{if } \code{side} = \code L\\ 2 m n^2 &\text{else} \end{array}$}, datavol={$\begin{array}{ll} \frac12 m (m + 1) + 2 m n &\text{if } \code{side} = \code L\\ \frac12 n (n + 1) + 2 m n &\text{else} \end{array}$}, 
datamov={$\begin{array}{ll} \frac12 m (m + 1) + 3 m n &\text{if } \code{side} = \code L\\ \frac12 n (n + 1) + 3 m n &\text{else} \end{array}$}, } \routinedoc{dtrmm, arguments={ side=\dm[lower]A is on the left or right of \dm B, uplo=\dm[lower]A is lower- or upper-triangular, transA=\dm[lower]A is transposed, diag=\dm[lower]A is unit triangular, m=dimension $m$, n=dimension $n$, alpha=scalar $\alpha$, A={matrix $\dm[lower]A \in \begin{cases} \R^{m \times m} &\text{if } \code{side} = \code L\\ \R^{n \times n} &\text{else} \end{cases}$}, ldA=leading dimension for \dm[lower]A, B={matrix $\dm B \in \R^{m \times n}$}, ldB=leading dimension for \dm B }, description={double-precision triangular matrix-matrix product}, operations={ {$\dm B \coloneqq \alpha \dm[lower]A \matmatsep \dm B$}, {$\dm B \coloneqq \alpha \dm[lower, ']A \dm B$}, {$\dm B \coloneqq \alpha \dm[upper]A \matmatsep \dm B$}, {$\dm B \coloneqq \alpha \dm[upper, ']A \dm B$}, {$\dm B \coloneqq \alpha \dm B \matmatsep \dm[lower]A$}, {$\dm B \coloneqq \alpha \dm B \matmatsep \dm[lower, ']A$}, {$\dm B \coloneqq \alpha \dm B \matmatsep \dm[upper]A$}, {$\dm B \coloneqq \alpha \dm B \matmatsep \dm[upper, ']A$} }, flops={$\begin{array}{ll} m^2 n &\text{if } \code{side} = \code L\\ m n^2 &\text{else} \end{array}$}, datavol={$\begin{array}{ll} \frac12 m (m + 1) + m n &\text{if } \code{side} = \code L\\ \frac12 n (n + 1) + m n &\text{else} \end{array}$}, datamov={$\begin{array}{ll} \frac12 m (m + 1) + 2 m n &\text{if } \code{side} = \code L\\ \frac12 n (n + 1) + 2 m n &\text{else} \end{array}$}, } \routinedocforward\dsyrk{ssyrk, arguments={uplo=, trans=, n=, k=, alpha=, A=, ldA=, beta=, C=, ldB=}, description={single-precision symmetric rank-k update}, } \routinedoc{dsyrk, arguments={ uplo=\dm C has lower- or upper-triangular storage, trans=\dm A is transposed, n=dimension $n$, k=dimension $k$, alpha=scalar $\alpha$, A={matrix $\dm A \in \begin{cases} \R^{n \times k} &\text{if } \code{trans} = \code N\\ \R^{k \times n} 
&\text{else} \end{cases}$}, ldA=leading dimension for \dm A, beta=scalar $\beta$, C={symmetric matrix $\dm C \in \R^{n \times n}$}, ldC=leading dimension for \dm C }, description={double-precision symmetric rank-k update}, operations={ {$\dm C \coloneqq \alpha \dm A \matmatsep \dm[']A + \beta \dm C$}, {$\dm C \coloneqq \alpha \dm[']A \dm A + \beta \dm C$}, }, flops={$n (n + 1) k$}, datavol={$\frac12 n (n + 1) + n k$}, datamov={$n (n + 1) + n k$}, } \routinedocforward\dsyrk{cherk, arguments={uplo=, trans=, n=, k=, alpha=, A=, ldA=, beta=, C=, ldC=}, description={single-precision complex Hermitian rank-k update}, } \routinedocforward\dsyrk{zherk, arguments={uplo=, trans=, n=, k=, alpha=, A=, ldA=, beta=, C=, ldC=}, description={double-precision complex Hermitian rank-k update}, } \routinedoc{dsyr2k, arguments={ uplo=\dm C has lower- or upper-triangular storage, trans=\dm A is transposed, n=dimension $n$, k=dimension $k$, alpha=scalar $\alpha$, A={matrix $\dm A \in \begin{cases} \R^{n \times k} &\text{if } \code{trans} = \code N\\ \R^{k \times n} &\text{else} \end{cases}$}, ldA=leading dimension for \dm A, B={matrix $\dm B \in \begin{cases} \R^{n \times k} &\text{if } \code{trans} = \code N\\ \R^{k \times n} &\text{else} \end{cases}$}, ldB=leading dimension for \dm B, beta=scalar $\beta$, C={symmetric matrix $\dm C \in \R^{n \times n}$}, ldC=leading dimension for \dm C }, description={double-precision symmetric rank-2k update}, operations={ {$\dm C \coloneqq \alpha \dm A \matmatsep \dm[']B + \alpha \dm B \matmatsep \dm[']A + \beta \dm C$}, {$\dm C \coloneqq \alpha \dm[']A \dm B + \alpha \dm[']B \dm A + \beta \dm C$} }, flops={$2 n (n + 1) k$}, datavol={$\frac12 n (n + 1) + 2 n k$}, datamov={$n (n + 1) + 2 n k$}, } \routinedocforward\dtrsm{strsm, arguments={side=, uplo=, transA=, diag=, m=, n=, alpha=, A=, ldA=, B=, ldB=}, description={single-precision triangular linear system solve with multiple right hand sides}, } \routinedoc{dtrsm, arguments={ side=\dm[lower]A is on the left or right of
\dm B, uplo=\dm[lower]A is lower- or upper-triangular, transA=\dm[lower]A is transposed, diag=\dm[lower]A is unit triangular, m=dimension $m$, n=dimension $n$, alpha=scalar $\alpha$, A={matrix $\dm[lower]A \in \begin{cases} \R^{m \times m} &\text{if } \code{side} = \code L\\ \R^{n \times n} &\text{else} \end{cases}$}, ldA=leading dimension for \dm[lower]A, B={matrix $\dm B \in \R^{m \times n}$}, ldB=leading dimension for \dm B }, description={double-precision triangular linear system solve with multiple right hand sides}, operations={ {$\dm B \coloneqq \alpha \dm[lower, inv]A \dm B$}, {$\dm B \coloneqq \alpha \dm[lower, inv']A \dm B$}, {$\dm B \coloneqq \alpha \dm[upper, inv]A \dm B$}, {$\dm B \coloneqq \alpha \dm[upper, inv']A \dm B$}, {$\dm B \coloneqq \alpha \dm B \matmatsep \dm[lower, inv]A$}, {$\dm B \coloneqq \alpha \dm B \matmatsep \dm[lower, inv']A$}, {$\dm B \coloneqq \alpha \dm B \matmatsep \dm[upper, inv]A$}, {$\dm B \coloneqq \alpha \dm B \matmatsep \dm[upper, inv']A$} }, flops={$\begin{array}{ll} m^2 n &\text{if } \code{side} = \code L\\ m n^2 &\text{else} \end{array}$}, datavol={$\begin{array}{ll} \frac12 m (m + 1) + m n &\text{if } \code{side} = \code L\\ \frac12 n (n + 1) + m n &\text{else} \end{array}$}, datamov={$\begin{array}{ll} \frac12 m (m + 1) + 2 m n &\text{if } \code{side} = \code L\\ \frac12 n (n + 1) + 2 m n &\text{else} \end{array}$}, } \routinedocforward\dtrsm{ctrsm, arguments={side=, uplo=, transA=, diag=, m=, n=, alpha=, A=, ldA=, B=, ldB=}, description={single-precision complex triangular linear system solve with multiple right hand sides}, } \routinedocforward\dtrsm{ztrsm, arguments={side=, uplo=, transA=, diag=, m=, n=, alpha=, A=, ldA=, B=, ldB=}, description={double-precision complex triangular linear system solve with multiple right hand sides}, } \subsection*{\codestyle\bf\llap{\routine(}\arglist)} \label{routine:\routine} {\it\description} ] \def\empty{}\small\singlespacing \expandafter\ifx\note\pgfkeysnovalue\else 
\paragraph{Note\strut} \note \fi \ifx\operations\empty\else \paragraph{Operations\strut} \operations \fi { \raggedright \hbadness=10000 \hangafter=1 \renewcommand\newline{ \par \settowidth\hangindent{\hspace\argwidth: } \makebox[\hangindent]{}% } \paragraph{Arguments\strut} \arguments } \expandafter\ifx\flops\pgfkeysnovalue\else \paragraph{Minimal FLOP-count\strut} \flops \fi \expandafter\ifx\datavol\pgfkeysnovalue\else \paragraph{Data volume\strut} \datavol \fi \expandafter\ifx\datamov\pgfkeysnovalue\else \paragraph{Minimal data movement\strut} \datamov \fi \end{multicols} \filbreak }} \newcommand\routinedocforward[2]{ \pgfkeys{ /routine, #2, name/.get=\routine, arglist/.get=\arglist, description/.get=\description, } \subsection*{\codestyle\bf\routine(\arglist)} \label{routine:\routine} {\it\description.} See #1. \filbreak } \subsection*{Reference Implementations} The \blas and \lapack reference implementations~\cite{blasweb, lapackweb} are fully functional and well-documented and thus of great value as references for routine interfaces and semantics. However, on their own they only attain poor performance, and should therefore not be used in production codes. All routines in the \blas reference implementation are single-threaded and unoptimized. The central kernel \dgemm, for instance, is realized as a simple triple loop that reaches around \SI6{\percent} of modern processors' single-threaded theoretical peak performance---optimized implementations are commonly $15\times$~faster on a single core and provide excellent multi-threaded scalability. Since \lapack primarily relies on a tuned \blas implementation for speed, the reference implementation can in principle reach good performance. However, as its documentation states, this requires careful tuning of its block sizes, whose default values are generally too low on contemporary processors. 
Optimized implementations may further improve \lapack's performance through faster algorithms, tuned unblocked kernels (e.g., \dtrti2, \dpotf2), and algorithm-level parallelism (e.g., task-based algorithms-by-blocks). Throughout this work, we use reference \blas and \lapack version~3.5.0. \subsection*{\namestyle OpenBLAS} {\namestyle OpenBLAS}~\cite{openblasweb} is a high-performance open-source \blas and \lapack implementation that is currently developed and maintained at the {\namestyle Massachusetts Institute of Technology}. It provides optimized and multi-threaded \blas kernels for a wide range of architectures, and offers tuned versions of core \lapack routines, such as \dlauum, \dtrtri, \dpotrf, and \dgetrf. {\namestyle OpenBLAS} is based on the discontinued {\namestyle GotoBLAS2}, adopting its approach and much of its source-code; it includes assembly kernels for more recent architectures, such as \sandybridgeshort and \haswellshort, as well as {\namestyle AMD} processors. Throughout this work, we use {\namestyle OpenBLAS} version~0.2.15. \subsection*{\namestyle BLIS} The {\namestyle BLAS-like Library Instantiation Software} ({\namestyle BLIS})~\cite{blis1, blis2, blis3, blisweb} is a fairly recent framework for dense linear algebra libraries that is actively developed at the {\namestyle University of Texas at Austin}. While it comes with its own API, which is a superset, generalization, and extension of the \blas, it contains a compatibility layer offering the original de-facto standard \blas interface. {\namestyle BLIS} builds upon the {\namestyle GotoBLAS} approach, yet restructures and solidifies it to make all but a tiny ``micro-kernel'' architecture-independent. While its performance is so far generally lower than that of \openblas (see examples in \cref{sec:model:args}), its ambitious goal is to significantly speed up both the development of new application-specific kernels, and the adaptation to other architectures.
Although multi-threading was introduced into {\namestyle BLIS}~\cite{blis3} soon after its inception, its flexible threading model lacked a simple end-user interface (such as following the environment variable \code{OMP\_NUM\_THREADS}) until November~2016 (commit \href{https://github.com/flame/blis/commit/6b5a4032d2e3ed29a272c7f738b7e3ed6657e556}{\sf 6b5a403}). As a result, we only present single-threaded results for {\namestyle BLIS}. Throughout this work we use {\namestyle BLIS} version~0.2.0. \subsection*{\namestyle MKL} \intel's {\namestyle Math Kernel Library} ({\namestyle MKL})~\cite{mklweb} is a high-performance library for \intel processors that covers \blas and \lapack, as well as other high-performance computations, such as for Fast Fourier Transforms (FFT) and Deep Neural Networks (DNN). While {\namestyle MKL} is a closed-source library, it recently began offering free developer licenses. In terms of performance, it is in most scenarios superior to open-source libraries such as \openblas and \blis (see examples in \cref{sec:model:args}). Throughout this work we use {\namestyle MKL} version~11.3. \subsection*{\namestyle Accelerate} \apple's framework {\namestyle Accelerate}~\cite{accelerateweb} is a high-performance library that ships with {\namestyle macOS} and, among others, provides full \blas and \lapack functionality. Its performance is for many cases comparable to \openblas or slightly better. \subsection*{Other Implementations} The following notable \blas and \lapack implementations are not used throughout this work: \begin{itemize} \item The {\namestyle Automatically Tuned Linear Algebra Software} ({\namestyle ATLAS})~\cite{atlas1, atlas2, atlas3, atlasweb} is a high-performance \blas implementation that relies on auto-tuning. While {\namestyle ATLAS} kernels typically do not reach the performance of hand-tuned implementations such as \openblas, \blis, and \mkl, it provides good performance for new and exotic architectures with little effort.
\item {\namestyle GotoBLAS2}~\cite{gotoblas1, gotoblas2, gotoblasweb} is a high-performance \blas implementation that was developed at the {\namestyle Texas Advanced Computing Center}. Since its discontinuation, much of its code-base was picked up by its successor \openblas in~2011, and its approach was refined and generalized in \blis. \item {\namestyle IBM}'s {\namestyle Engineering and Scientific Subroutine Library} ({\namestyle ESSL}) \cite{esslweb} provides a high-performance \blas implementation and parts of \lapack for {\namestyle POWER}-based systems, such as {\namestyle Blue Gene} supercomputers. \end{itemize} \section{Storage Format} \label{app:libs:store} \input{applibs/store} \section{\namestyle Basic Linear Algebra Subprograms} \label{app:libs:blas} \input{applibs/blas} \section{\namestyle Linear Algebra PACKage} \label{app:libs:lapack} \input{applibs/lapack} \section{Implementations} \label{app:libs:libs} \input{applibs/libs} } \subsection{Scalars} Each scalar operand (e.g., $\alpha \in \R$) is passed as a single argument, (e.g., \code{double *alpha}). Complex scalars are stored as two consecutive elements of the basis data-type (\code{float} or \code{double}) that represent the real and imaginary parts. \subsection{Vectors} Each vector operand (e.g., $\dv x \in \R^n$) is specified by three arguments: \begin{itemize} \item A size argument (e.g., \code{int *n}) determines the length of the vector. One size argument can describe multiple vectors (and/or matrices) with the same size. \item A data argument (e.g., \code{double *x}) points to the vector's first element in memory. \item An increment argument (e.g., \code{int *incx}) identifies the stride between consecutive elements of the vector. For instance, a contiguously stored vector has an increment of~1. Note that most routines allow negative increments. In this case, the vector is stored in reverse, and the data argument points to the vector's last element---the first memory location. 
\end{itemize} To summarize, vector element~$x_i$ is stored at \code{x[i * incx]} if \code{incx} is positive and \code{x[(i - n + 1) * incx]} otherwise. \subsection{Matrices} Each matrix (e.g., $\dm[width=.7]A \in \R^{m \times n}$) is specified by four arguments: \begin{itemize} \item Two size arguments (e.g., \code{int *m} and \code{int *n}) determine the matrix height~($m$) and width~($n$). One size argument can describe the dimensions of multiple matrices (and/or vectors), or both dimensions of a square matrix. \item A data argument (e.g., \code{double *A}) points to the first matrix element in memory (e.g., $a_{00}$). The following elements of the first column (e.g., $a_{i0}$) are stored consecutively in memory as a vector with increment~1. \item A leading dimension argument (e.g., \code{int *ldA}) describes the distance in memory between matrix columns. It can hence be understood and used as the increment argument for the matrix rows as vectors. The term ``leading dimension'' comes from the concept that a referenced matrix is part of a larger, contiguously stored ``leading'' matrix. This makes it possible to operate on sub-matrices or tensor panels as shown throughout this work. Leading dimensions must be at least equal to the height of the matrix (e.g., $m$). \end{itemize} To summarize, matrix element~$a_{ij}$ is stored at \code{A[i + j * ldA]}. \subsection{Compute-Bound Efficiency} \label{sec:term:eff:compbound} A computation is compute-bound on a hardware platform if the memory operations to load and store the involved data can be amortized by floating-point operations, i.e., the available memory bandwidth is sufficient for all transfers and the speed at which the processor performs \flops is the bottleneck. An operation is theoretically compute-bound when \[ \text{arithmetic intensity} \geq \frac\pperf\pbw \enspace.
\] Furthermore, a computation's \definition[(compute-bound)\\efficiency]{compute-bound efficiency} (or simply {\em efficiency}) is given by \begin{equation}\label{eq:term:eff} \text{compute-bound efficiency} \defeqq \frac{\text{attained performance}}\pperf \enspace. \end{equation} This unit-less metric between 0 and~1 indicates how well the available hardware resources are utilized: While a value close to~1 corresponds to near-optimal utilization, lower values indicate untapped resource potential. \begin{example}{Compute-bound efficiency}{term:eff} The matrix-matrix multiplication $\dm C \coloneqq \dm A \matmatsep \dm B + \dm C$ (\dgemm[NN]) with $\dm A,\allowbreak \dm B,\allowbreak \dm C \in \R^{1000 \times 1000}$ has an arithmetic intensity of (see \cref{ex:term:ai}) \[ \SIvar{1000 \times \frac1{16}}{\flops\per\Byte} = \SI{62.5}{\flops\per\Byte} \enspace. \] On a single core of a \sandybridge with a peak floating-point performance of \SI{20.8}{\giga\flops\per\second} (\turboboost disabled) and a measured single-core peak bandwidth of \SI{16.25}{\gibi\byte\per\second} (\cref{ex:term:peakbw}), this operation is clearly compute bound: \[ \frac {\SI{20.8}{\giga\flops\per\second}} {\SI{16.25}{\gibi\byte\per\second}} \approx \SI{1.28}{\flops\per\Byte} < \SI{62.5}{\flops\per\Byte} \enspace. \] If the \dgemm[NN] runs at \SI{19.61}{\giga\flops\per\second} (\cref{ex:term:perf}), it reaches an efficiency of \[ \frac{\text{attained performance}}\pperf = \frac {\SI{19.61}{\giga\flops\per\second}} {\SI{20.8}{\giga\flops\per\second}} \approx \SI{94.27}\percent \enspace. \] \end{example} There are many different ways to look at efficiency other than the ratio of attained performance to peak performance.
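The numbers in the example above can be reproduced mechanically; a minimal numeric sketch, using the single-core \sandybridgeshort{} figures from the example:

```python
# Reproducing the compute-bound efficiency example: dgemm with n = 1000
# on one Sandy Bridge core (peak 20.8 GFLOPs/s, measured single-core
# peak bandwidth 16.25 GiB/s; both values taken from the text).
pperf = 20.8                      # GFLOPs/s
pbw = 16.25                       # GiB/s

flops = 2 * 1000**3               # minimal FLOP-count of dgemm
data_bytes = 4 * 1000**2 * 8      # minimal data movement: 4 n^2 doubles
intensity = flops / data_bytes    # 62.5 FLOPs/Byte

assert intensity > pperf / pbw    # compute-bound: 62.5 > ~1.28

efficiency = 19.61 / pperf        # attained / peak
print(round(100 * efficiency, 2)) # -> 94.28 (the text truncates to 94.27%)
```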
Rewriting the definition of efficiency as \begin{align*} \text{efficiency} &= \frac{\text{attained performance}}\pperf \\ &= \frac {\text{cost} / \text{runtime}} {\text{cost} / \text{optimal runtime}} \\ &= \frac{\text{optimal runtime}}{\text{runtime}} \enspace, \end{align*} it is expressed as the ratio of the minimum time required to perform the operation's minimal \flops on the given hardware to the computation's runtime. If we reorganize it as \begin{align*} \text{efficiency} &= \frac{\text{attained performance}}\pperf \\ &= \frac{\text{cost} / \text{runtime}}\pperf \\ &= \frac{\text{cost}}{\text{runtime} \times \pperf} \\ &= \frac{\text{cost}}{\text{available \flops}} \enspace, \end{align*} it can be seen as the ratio of the operation's minimal \flop-count to how many \flops the processor could theoretically perform during the computation's runtime. \begin{example}{Expressing compute-bound efficiency}{term:eff2} In \cref{ex:term:eff} the \dgemm[NN] took \SI{102}\ms, while the \sandybridge with a peak performance of \SI{20.8}{\giga\flops\per\second} (\turboboost disabled) could have performed the required $\SIvar{2 \times 1000^3}\flops = \SI{2e9}\flops$ in \[ \frac{\SI{2e9}\flops}{\SI{20.8}{\giga\flops\per\second}} \approx \SI{96.15}\ms \enspace . \] Hence, the computation's efficiency can be computed as \[ \frac{\text{optimal runtime}}{\text{runtime}} = \frac{\SI{96.15}\ms}{\SI{102}\ms} \approx \SI{94.26}\percent \enspace. \] We can also consider that in the \SI{102}{\ms} that the \dgemm[NN] took, the \sandybridgeshort core could have performed \[ \SI{102}\ms \times \SI{20.8}{\giga\flops\per\second} \approx \SI{2.12e9}\flops \enspace. \] Once again we obtain the same efficiency, as a \flop-count ratio: \[ \frac{\text{cost}}{\text{available \flops}} = \frac{\SI{2e9}\flops}{\SI{2.12e9}\flops} \approx \SI{94.26}\percent \enspace. 
\] \end{example} \subsection{Bandwidth-Bound Efficiency} \label{sec:term:eff:bwbound} A computation is bandwidth-bound on a hardware platform if the memory operations cannot load and store the involved data as fast as the processor's floating-point units can process it, i.e., the memory bandwidth is the bottleneck and the compute units are partially idle. An operation is theoretically bandwidth-bound when \[ \text{arithmetic intensity} \leq \frac\pperf\pbw \enspace. \] Furthermore, a computation's \definition{bandwidth-bound efficiency} is defined as \begin{equation} \label{eq:term:eff:bwbound} \text{bandwidth-bound efficiency} \defeqq \frac{\text{attained bandwidth}}\pbw \enspace. \end{equation} A bandwidth-bound efficiency close to~1 indicates a good utilization of the processor's main-memory bandwidth, while smaller values signal underutilization. \begin{example}{Bandwidth-bound efficiency}{term:bwbeff} The vector inner product $\alpha \coloneqq \dm[height=0, ']x \matvecsep \dv y$ (\ddot) with $\dv x, \dv y \in \R^{\num{100000}}$ has an arithmetic intensity of \SIvar{\frac18}{\flops\per\Byte} (\cref{ex:term:ai}) and is thus clearly bandwidth-bound. If on one core of a \sandybridge, it attains a bandwidth of \SI{11.49}{\gibi\byte\per\second} (\cref{ex:term:bw}), relative to the processor's empirical peak bandwidth of \SI{16.25}{\gibi\byte\per\second} (\cref{ex:term:peakbw}), it performed at a bandwidth-bound efficiency of \[ \frac{\text{attained bandwidth}}\pbw = \frac {\SI{11.49}{\gibi\byte\per\second}} {\SI{16.25}{\gibi\byte\per\second}} \approx \SI{70.71}\percent \enspace. \] \end{example} \subsection{The Roofline Model} \label{sec:term:roofline} The \definition{Roofline model}~\cite{roofline1} plots the performance of computations (in \si{\giga\flops\per\second}) against their arithmetic intensity (in \si{\flops\per\Byte}). 
In addition to data-points from measurements, two lines are added to such a plot to indicate the theoretically attainable performance depending on the arithmetic intensity: The product of peak bandwidth and arithmetic intensity (in units: $\si{\gibi\byte\per\second} \times \si{\flops\per\Byte} = \si{\gibi\flops\per\second} \approx \SI{1.07}{\giga\flops\per\second}$) constitutes a straight line through the origin with the bandwidth as a gradient (visually: \tikz\draw[thick, darkred] (0, 0) -- (1.5ex, 1.5ex);) that represents the bandwidth-bound performance limit; and the peak floating-point performance is a constant line (\tikz\draw[thick, darkred] (0,0) (0, 1.5ex) -- (3ex, 1.5ex);). Together these two lines form the roofline-shaped performance limit (\tikz\draw[thick, darkred] (0, 0) -- (1.5ex, 1.5ex) -- (4.5ex, 1.5ex);) that gives the visualization its name: \begin{equation}\label{eq:term:roofline} \text{performance limit} = \min\left(\begin{array}c \pbw \times \text{intensity},\\ \pperf \end{array}\right) \enspace. \end{equation} Comparing the attained performance of a computation to this limit yields the computation's efficiency---bandwidth-bound below the left part of the ``roof'' and compute-bound below the right part. \input{appterm/figures/roofline} \begin{example}{The roofline model}{term:roofline} \Cref{fig:term:roofline} presents the Roofline model for one core of a \sandybridge. This processor has a single-core peak performance of \SI{20.8}{\giga\flops\per\second} (\turboboost disabled), and we use the measured single-core peak bandwidth of \SI{16.25}{\gibi\byte\per\second} (\cref{ex:term:peakbw}).
Together these two factors impose the performance limit~(\ref*{plt:term:roofline:peak}) \[ \min(\SI{16.25}{\gibi\byte\per\second} \times \text{arithmetic intensity}, \SI{20.8}{\giga\flops\per\second}) \enspace. \] \Cref{fig:term:roofline} also contains the measured performance of representative \blasl1, 2, and~3 operations, whose arithmetic intensity was determined in \cref{ex:term:ai}. \begin{itemize} \item The vector inner product $\alpha \coloneqq \dm[height=0, ']x \matvecsep \dv y$ (\ddot) with $\dv x, \dv y \in \R^n$ (\ref*{plt:term:roofline:ddot}) has an arithmetic intensity of \SIvar{\frac18}{\flops\per\Byte}, making it clearly bandwidth-bound below the left part of the ``roofline''. The attained (bandwidth-bound) efficiency, which is given by the ratio of the measured performance~(\ref*{plt:term:roofline:ddot}) to the attainable peak performance~(\ref*{plt:term:roofline:peak}), is quite high at~\SI{87.93}\percent. \item The matrix-vector multiplication $\dv y \coloneqq \dm A \matvecsep \dv x + \dv y$ (\dgemv) with $\dm A \in \R^{n \times n}$ and $\dv x, \dv y \in \R^n$ (\ref*{plt:term:roofline:dgemv}) has an arithmetic intensity of $\approx \SIvar{\frac14}{\flops\per\Byte}$, making it also bandwidth-bound. The (bandwidth-bound) efficiency (\ref*{plt:term:roofline:dgemv} divided by \ref*{plt:term:roofline:peak}) is between~\SI{45.32}{\percent} (for $n = 100$) and \SI{76.66}{\percent} (for~$n = 2000$). \item The matrix-matrix multiplication $\dm C \coloneqq \dm A \matmatsep \dm B + \dm C$ (\dgemm[NN]) with $\dm A, \dm B, \dm C \in \R^{n \times n}$ (\ref*{plt:term:roofline:dgemm}) has a higher arithmetic intensity of \SIvar{\frac n{16}}{\flops\per\Byte}, which makes it theoretically compute-bound on our system for~$n \geq 21$. In the memory-bound domain it reaches its peak (memory-bound) efficiency (\ref*{plt:term:roofline:dgemm} divided by~\ref*{plt:term:roofline:peak}) of \SI{50.15}{\percent} at~$n = 20$.
Within the compute-bound domain, its (compute-bound) efficiency grows to \SI{74.32}{\percent} at problem size~$n = 100$; beyond this size it keeps growing and converges to its peak of \SI{93.70}{\percent} for matrices of size~$n = 2000$. \end{itemize} \end{example} \section{Workload} \label{sec:term:workload} \input{appterm/workload} \section{Runtime} \label{sec:term:time} \input{appterm/time} \section{Performance and Attained Bandwidth} \label{sec:term:perf} \input{appterm/perf} \section{Hardware Constraints} \label{sec:term:hw} \input{appterm/hardware} \section{Efficiency} \label{sec:term:eff} \input{appterm/eff} \section{Other Metrics} \label{sec:term:other} \input{appterm/othermetrics} } \subsection{Floating-Point Operations} \label{sec:term:flops} Most scientific computations, as complex as they may be, perform their work through a small set of elementary arithmetic operations on floating-point representations of real numbers, such as scalar additions or multiplications\footnote{% Exceptions that work on integer data or other structures include graph algorithms and discrete optimization. }---the so-called \definition[\flops: floating-point operations\\single- and double-precision]{floating-point operations} ({\em\flops}).\footnote{% Not to be confused with floating-point operations {\em per second} (\si{\flops\per\second}). } Contemporary hardware offers two floating-point precisions standardized in IEEE~754~\cite{ieee754}: {\em single-precision} and {\em double-precision}. They differ in the range of representable numbers, their representation accuracy, and their implementation in hardware. While we distinguish between single-precision \flops and double-precision \flops, throughout this work we are mostly concerned with double-precision computations. Hence, ``\flops'' without further specification refers to double-precision floating-point operations, and \R is used to denote double-precision numbers.
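As an aside, the minimal \flop-counts used throughout this appendix are simple closed-form functions of the operand sizes, so they are easy to tabulate programmatically. The following Python sketch encodes the standard counts assumed in this appendix; the function names are ours, not part of any library:

```python
# Minimal flop-counts of common dense linear algebra operations,
# as closed-form functions of the operand sizes (standard counts,
# as assumed throughout this appendix).

def flops_ddot(n):
    """Inner product of two n-vectors: one multiply and one add per entry."""
    return 2 * n

def flops_dgemm(n):
    """n x n matrix-matrix multiplication (conventional O(n^3) algorithm)."""
    return 2 * n ** 3

def flops_dtrsm(n, m):
    """Triangular solve with an n x n system and m right-hand sides."""
    return n ** 2 * m

def flops_dpotrf(n):
    """Cholesky decomposition of an n x n SPD matrix: n(n+1)(2n+1)/6."""
    return n * (n + 1) * (2 * n + 1) // 6    # ~ n^3 / 3
```

For instance, `flops_dgemm(1000)` evaluates to $\SI{2e9}\flops$, and `flops_dpotrf(1000)` to 333\,833\,500, in good agreement with the $\frac13 n^3$ approximation.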
As commonly practiced in dense linear algebra, we assume that the multiplication of two $n \times n$ matrices requires \SIvar{2 n^3}\flops{}---it has an asymptotic \definition[matrix-matrix multiplication: $O(n^3)$]{complexity} of $O(n^3)$. While algorithms with lower asymptotic complexities (such as the {\em Strassen algorithm} with a complexity of $O(n^{2.807})$~\cite{strassen} or the {\em Coppersmith-Winograd algorithm} with a complexity of $O(n^{2.376})$~\cite{coppersmith}) have been known for decades, due to considerably higher constant factors they found little to no application in high-performance computing until recently~\cite{blisstrassen}. The \flop-count of most dense linear algebra operations such as the matrix-matrix multiplication is \definition[data-independence]{data-independent}, i.e., the operand entries do not affect what arithmetic operations are performed.\footnote{% Exceptions may be caused by corrupted input, such as \code{NaN}s, or floating-point exceptions, such as division by~0 or under-/overflows. } In particular, this means that all multiplications with 0's are explicitly performed no matter how sparse an operand is (i.e., how few non-zero entries it has). A notable exception to the data-independence is the class of numerical eigensolvers, whose \flop-counts depend on the eigenspectrum of the input matrix; however, we do not study eigensolvers in further detail in this work. Assuming the cubic complexity of the matrix-matrix multiplication, the data-independence allows us to compute the \definition[cost = minimal FLOP-count]{minimal FLOP-count}---also referred to as {\em cost}---for most operations solely based on their operands' sizes. \begin{example}{Minimal \flop-counts}{term:flops} The vector inner product $\alpha \coloneqq \dm[height=0, ']x \matvecsep \dv y$ (\ddot) with $\dv x, \dv y \in \R^n$ costs \SIvar{2 n}\flops: one multiplication and one addition per vector entry.
The solution of a triangular linear system with multiple right-hand-sides $\dm[width=.4]B \coloneqq \dm[lower, inv]A \dm[width=.4]B$ (\dtrsm) with $\dm[lower]A\lowerpostsep \in \R^{n \times n}$ and $\dm[width=.4]B \in \R^{n \times m}$ requires \SIvar{n^2 m}\flops. The Cholesky decomposition of a symmetric positive definite (SPD) matrix $\dm[lower]L \dm[upper, ']L \coloneqq \dm A$ (\dpotrf) with $\dm A \in \R^{n \times n}$ costs \[ \SIvar{\frac16 n (n + 1) (2 n + 1)}\flops \approx \SIvar{\frac13 n^3}\flops \enspace. \] \end{example} Note that an operation's minimal \flop-count only provides a lower bound for routines implementing it; reasons for exceeding this bound range from technical limitations to cache-aware data movement patterns and algorithmic schemes that perform extra \flops to use faster compute kernels. \subsection{Data Volume and Movement} \label{sec:term:datamovement} The largest portion of a scientific computation's memory footprint is typically occupied by its numerical data consisting of floating-point numbers. A real number in single- and double-precision requires, respectively, 4 and~\SI8\bytes, whereas complex numbers are represented as two consecutive real numbers and thus require twice the space. Since throughout this work we mostly use double-precision numbers---conventionally called ``\definition[$\SI1\double = \SI8\bytes$]{doubles}''---we can proceed with the assumption that each number takes up \SI8\bytes. In dense linear algebra, the \definition[data volume in \bytes]{data volume} (in~\bytes) involved in a computation is determined almost exclusively by the involved matrix operands. 
For instance, a square matrix of size $1000 \times 1000$ consists of $\SI{e6}\doubles = \SI{8e6}\bytes \approx \SI{7.63}{\mebi\byte}$;\footnote{% We use the 1024-based binary prefixes for data volumes: $\SI{1024}\bytes = \SI1{\kibi\byte}$ (``kibibyte''), $\SI{1024}{\kibi\byte} = \SI1{\mebi\byte}$ (``mebibyte''), and $\SI{1024}{\mebi\byte} = \SI1{\gibi\byte}$ (``gibibyte''). } vector and scalar operands in comparison take up little space: A vector of size~1000 requires $\SI{8000}\bytes = \SI{7.81}{\kibi\byte}$, and a scalar fits in just \SI8\bytes. While a computation's data volume describes how much data is involved in an operation, it says nothing about how often it is accessed. For this purpose we introduce the concept of \definition{data movement} that quantifies how much data is read from or written to memory. A computation's data movement is commonly higher than its data volume, because (parts of) the data are accessed multiple times. While the actual data movement of any dense linear algebra operation is highly implementation dependent, we can easily derive the \definition{minimal data movement} from the operation's mathematical formulation by summing the size of all input and output operands, counting the operands that are both input and output twice. \begin{example}{Data volume and movement}{term:datamov} The vector inner product $\alpha \coloneqq \dm[height=0, ']x \matvecsep \dv y$ (\ddot) with $\dv x, \dv y \in \R^n$ involves a data volume of $\SIvar{2 n}\doubles = \SIvar{16 n}\bytes$ (ignoring the scalar $\alpha$); since both \dv x and \dv y need only be read once, the data movement is also \SIvar{16 n}\bytes. The matrix-matrix product $\dm C \coloneqq \dm A \matmatsep \dm B + \dm C$ (\dgemm[NN]) with $\dm A,\allowbreak \dm B,\allowbreak \dm C \in \R^{n \times n}$ involves a data volume of $\SIvar{3 n^2}\doubles = \SIvar{24 n^2}\bytes$; however, since $\dm C$ is updated, the minimal data movement is $\SIvar{4 n^2}\doubles = \SIvar{32 n^2}\bytes$.
The Cholesky decomposition $\dm[lower]L \dm[upper, ']L \coloneqq \dm A$ (\dpotrf) with $\dm A \in \R^{n \times n}$ uses only the lower-triangular part of the symmetric matrix \dm A,\footnotemark{} and \dm A is decomposed in place, i.e., it is overwritten by \dm[lower]L\lowerpostsep upon completion. Hence the data volume is $\SIvar{\frac12 n (n + 1)}\doubles \approx \SIvar{4 n^2}\bytes$, while the minimal data movement is at least $\SIvar{2 \cdot \frac12 n (n + 1)}\doubles \approx \SIvar{8 n^2}\bytes$. \end{example} \footnotetext{% Space for the whole matrix is allocated, but the strictly upper-triangular part is not accessed. } Note that the minimal data movement is a strict lower bound when none of the involved data is in any of the processor's caches. Furthermore, depending on the operation and the cache sizes, it may not be attainable in implementations. \subsection{Arithmetic Intensity} \label{sec:term:ai} Dividing an operation's minimal \flop-count by its minimal data movement yields its \definition{arithmetic intensity}: \begin{equation} \label{eq:term:ai} \text{arithmetic intensity} \defeqq \frac{\text{minimal \flop-count}}{\text{minimal data movement}} \enspace. \end{equation} A low arithmetic intensity means that few operations are performed per memory access, thus making the data movement a likely bottleneck; a high arithmetic intensity, on the other hand, indicates that a lot of work is performed per data element, thus making the floating-point computations the potential bottleneck. Arithmetic intensity divides dense linear algebra operations into two groups: While for \blasl1 (vector-vector) and~2 (matrix-vector) operations the intensity is quite small and independent of the problem size, it is considerably larger for \blasl3 (matrix-matrix) and dense \lapack-level operations, for which it increases linearly with the problem size.
\begin{example}{Arithmetic intensity}{term:ai} The vector inner product $\alpha \coloneqq \dm[height=0, ']x \matvecsep \dv y$ (\ddot) with $\dv x, \dv y \in \R^n$ is a \blasl1 operation that performs \SIvar{2 n}{\flops} over \SIvar{2 n}{\doubles} of data movement. Hence its arithmetic intensity is \[ \frac{\text{minimal \flop-count}}{\text{minimal data movement}} = \frac{\SIvar{2 n}\flops}{\SIvar{2 n}\doubles} = \SIvar{\frac18}{\flops\per\Byte} \enspace. \] The matrix-vector multiplication $\dv y \coloneqq \dm A \matvecsep \dv x + \dv y$ (\dgemv[N]) with $\dm A \in \R^{n \times n}$ and $\dv x, \dv y \in \R^n$ is a \blasl2 operation that performs \SIvar{2 n^2}{\flops} over \SIvar{n^2 + 3 n}{\doubles} of data movement ($\dv y$ is both read and written). Therefore, its arithmetic intensity is \[ \frac{\text{minimal \flop-count}}{\text{minimal data movement}} = \frac{\SIvar{2 n^2}\flops}{\SIvar{n^2 + 3 n}\doubles} \approx \SIvar{\frac14}{\flops\per\Byte} \enspace. \] The matrix-matrix multiplication $\dm C \coloneqq \dm A \matmatsep \dm B + \dm C$ (\dgemm[NN]) with $\dm A,\allowbreak \dm B,\allowbreak \dm C \in \R^{n \times n}$ is a \blasl3 operation that performs \SIvar{2 n^3}{\flops} over \SIvar{4 n^2}{\doubles} of data movement ($\dm C$ is both read and written). Hence, its arithmetic intensity \[ \frac{\text{minimal \flop-count}}{\text{minimal data movement}} = \frac{\SIvar{2 n^3}\flops}{\SIvar{4 n^2}\doubles} = \SIvar{\frac n{16}}{\flops\per\Byte} \] grows linearly with the problem size~$n$ and already exceeds the intensity of \dgemv for matrices as small as $5 \times 5$. \end{example} We revisit the arithmetic intensity in \cref{sec:term:eff}, where it determines whether a computation's performance is limited by the processor's memory subsystem or its floating-point units.
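These intensities, together with the roofline criterion discussed earlier, can be evaluated in a few lines of Python. This is an illustrative sketch, not library code: the hardware numbers are the single-core \sandybridgeshort values from the efficiency examples, and the function names are ours.

```python
# Arithmetic intensities of the ddot/dgemv/dgemm examples, and the
# roofline criterion deciding bandwidth- vs. compute-bound.
# Hardware numbers: single-core Sandy Bridge, peak bandwidth
# 16.25 GiB/s, peak performance 20.8 GFLOPs/s (TurboBoost disabled).

BYTES_PER_DOUBLE = 8
PBW = 16.25 * 1024 ** 3   # peak bandwidth [bytes/s]
PPERF = 20.8e9            # peak performance [flops/s]

def intensity(flops, doubles_moved):
    """Minimal flop-count over minimal data movement [flops/byte]."""
    return flops / (doubles_moved * BYTES_PER_DOUBLE)

def ddot_intensity(n):        # BLAS-1: 2n flops, 2n doubles moved
    return intensity(2 * n, 2 * n)

def dgemv_intensity(n):       # BLAS-2: 2n^2 flops, n^2 + 3n doubles
    return intensity(2 * n ** 2, n ** 2 + 3 * n)

def dgemm_intensity(n):       # BLAS-3: 2n^3 flops, 4n^2 doubles
    return intensity(2 * n ** 3, 4 * n ** 2)

def is_compute_bound(intens, pbw=PBW, pperf=PPERF):
    """Roofline criterion: compute-bound right of the ridge point."""
    return intens > pperf / pbw
```

With these numbers the ridge point $\pperf/\pbw$ evaluates to roughly \SIvar{1.19}{\flops\per\Byte}, so \ddot (constant intensity $\frac18$) and \dgemv (approaching $\frac14$) are firmly bandwidth-bound, while \dgemm crosses into the compute-bound domain already for very small~$n$.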
\section*{About This Document} \def\gettexliveversion#1, #2 (#3)#4\relax{#2} \newcommand\pdftexver{\expandafter\gettexliveversion\pdftexbanner\relax\xspace} This document was written in \href{https://www.latex-project.org/}{\LaTeXe} and typeset with \href{http://www.tug.org/applications/pdftex/}{pdfTeX} \pdftexver on \today. It relies on the following packages: \href{http://ctan.org/pkg/microtype}{\code{microtype}} for micro-typography; \href{http://ctan.org/pkg/listings}{\code{listings}} and \href{http://ctan.org/pkg/tcolorbox}{\code{tcolorbox}} for algorithms, listings, and examples; \href{http://ctan.org/pkg/pgf}{\code{tikz}} and \href{http://ctan.org/pkg/pgfplots}{\code{pgfplots}} for graphics and plots; \href{http://ctan.org/pkg/drawmatrix}{\code{drawmatrix}} for matrix visualizations; \href{http://ctan.org/pkg/cleveref}{\code{cleveref}} and \href{http://ctan.org/pkg/hyperref}{\code{hyperref}} for references and hyperlinks; and \href{http://ctan.org/pkg/biblatex}{\code{biblatex}} for the bibliography. \subsection{Timing Kernels in \lapack's \texorpdfstring\dgeqrf{dgeqrf}} \label{sec:cache:qr:alg} \input{cache/alg} \subsection{Cache-Aware Timings} \label{sec:cache:qr:timings} \input{cache/timings} \subsection{Modeling the Cache} \label{sec:cache:qr:cache} \input{cache/cache} \subsection{Varying the Setup} \label{sec:cache:qr:res} \input{cache/qrresults} \section{Application to Other Algorithms} \label{sec:cache:algs} \input{cache/otheralgs} \section{Feasibility on Modern Hardware} \label{sec:cache:new} \input{cache/new} \section{Summary} \label{sec:cache:conclusion} \input{cache/conclusion} } \subsection{In- and Out-of-Cache Timings} \label{sec:cache:icoc} \input{cache/figures/ooc} Out-of-cache timings are hardware-independent and, just as on the \harpertownshort, serve as an upper bound on the \sandybridgeshort and \haswellshort.
This is illustrated in \cref{fig:cache:ooc} for the inversion of a lower-triangular matrix $\dm[lower]A \in \R^{3200 \times 3200}$ with \dtrtri[LN] (\cref{alg:dtrtriLN2}) and block size~$b = 64$ on the \haswellshort, and the QR~decomposition of $\dm A \in \R^{2400 \times 2400}$ with \dgeqrf (\cref{alg:dgeqrf}) and $b = 32$ on the \sandybridgeshort{}---the chosen matrices comprise around \SI{40}{\mebi\byte} and thus exceed the \sandybridgeshort's and \haswellshort's last-level cache~(L3) of, respectively, \SIlist{20,30}{\mebi\byte}. The out-of-cache timings indeed consistently overestimate the in-algorithm timings---by up to~\SI{347}{\percent} for the last call to \refdtrmmRUNN in the QR~decomposition \dgeqrf on the \sandybridgeshort (\cref{fig:cache:ooc:dgeqrf:sandybridge} is clipped at~\SI{175}\percent). As such, these measurements serve well as an upper bound on the in-algorithm timings. \input{cache/figures/ic} For the same scenarios, \cref{fig:cache:ic} presents the error of our previous in-cache setup with respect to the in-algorithm timings: While we expect our setup to yield faster kernel executions than the in-algorithm timings, on the \sandybridge (with \turboboost disabled) the in-cache timings are still up to~\SI{.51}{\percent} slower than the in-algorithm timings (not accounting for the small unblocked \dgeqr2); on the \haswell (with \turboboost enabled), the relative errors for \dtrtri[LN] and \dgeqrf reach, respectively, \SIlist{1.67;3.44}\percent. \input{cache/figures/ictb} Further investigation reveals that the processor's \intel{} \turboboost is a source of complication for our measurements: As \cref{fig:cache:ictb} shows, enabling \turboboost on the \sandybridge leads to overestimations of the \dtrtri[LN]'s and \dgeqrf's most compute-intensive operations (i.e., the \refdtrmmLLNN and the two \dgemm{}s (\ref*{plt:dgemmTN}, \ref*{plt:dgemmNT})), by up to, respectively, \SIlist{3.20;2.79}\percent.
While \turboboost increases the overestimation of individual kernels, this phenomenon's origin lies in the processor's cache hierarchy: Within an algorithm, each kernel is invoked with a distinct cache precondition, i.e., with only portions of its operands in the processor's caches. Since our algorithm-independent measurements clearly do not match such preconditions, we attempted to construct conditions in which the kernel executes at its absolute peak performance with different cache setups: \begin{itemize} \item First, as before, we used simple repeated execution of the kernel without any modification of the cache in between. \item Next, we accessed the kernel operands in various orders prior to the invocation. E.g., for a \dgemm $\dm[width=.25]C \coloneqq \dm[width=.8]A \matvecsep \dm[width=.25, height=.8]B + \dm[width=.25]C$, we attempted all permutations of access orders, such as \dm[width=.8]A--\dm[width=.25, height=.8]B--\dm[width=.25]C and \dm[width=.25]C--\dm[width=.8]A--\dm[width=.25, height=.8]B. \item Finally, we refined the access granularity and attempted to bring operands into cache not as a whole but only partially: For a kernel with one operand larger than the cache and the other operand(s) only a fraction of that size (e.g., the \dgemm[TN] (\ref*{plt:dgemmTN}) in \dgeqrf: $\dm[width=.25]C \coloneqq \dm[width=.8]A \matvecsep \dm [width=.25, height=.8]B + \dm[width=.25]C$ where \dm[width=.25]B and \dm[width=.25]C are of width~$b$ and close to the problem size~$n$ in height), we bring the entire small operand(s) into cache but only portions of the large one. \input{cache/figures/acc} \Cref{fig:cache:acc} presents which operand portions we chose to load into the cache.
These choices are based on the assumption that any kernel implementation likely traverses the input matrix from the top-left \tsearrow to the bottom-right.\footnote{% Exceptions are, e.g., \dtrsm[RLNN] ($B \coloneqq B A^{-1}$) and \dtrsm[LUNN] ($B \coloneqq A^{-1} B$), which must traverse the triangular~$A$ from the bottom-right to the top-left---in these cases the accessed matrix portions are mirrored accordingly. } Therefore, we bring a column panel of the operand, a row panel, a square block, or any combination of these into the processor's caches. While doing so, we varied the sizes~$s_1$, $s_2$, and~$s_3$ of the accessed operand portions. \end{itemize} While in some scenarios changing the in-cache setup for kernel invocations reduced the runtime overestimation, the effects were not consistent across different algorithms, kernels, processors, and \blas implementations. Altogether, it was not possible to determine general, algorithm-independent in-cache setups that yield a clear lower bound on the in-algorithm timings. \subsection{Algorithm-Aware Timings} \label{sec:cache:algaware} Since our above attempts at algorithm-independent in-cache timings did not yield the required lower bound on in-algorithm timings, the only alternative is to tailor the timing setups to individual algorithms. We might, for instance, set up each kernel timing with several preceding kernel invocations from within the algorithm. The \definition{algorithm-aware timings} obtained in this way yield accurate estimates for the in-algorithm timings and rid us of the need to combine in- and out-of-cache estimates. \input{cache/figures/exact} \begin{example}{Algorithm-aware timings}{cache:algaware} \Cref{fig:cache:exact} presents the accuracy of algorithm-aware timings as estimates for in-algorithm timings for the inversion of a lower-triangular matrix (\dtrtri[LN]) and the QR~decomposition (\dgeqrf) on a \sandybridge (with \turboboost enabled) using single-threaded \openblas.
The algorithm-aware timings were created by preceding each measured kernel invocation with the calls from the corresponding blocked algorithm that were executed since that kernel's last invocation. \Cref{fig:cache:exact:dtrtri} shows that for \dtrtri[LN] the algorithm-aware timings are with few exceptions within~\SI1{\percent} of the in-algorithm timings with an average absolute relative error (ARE) of~\SI{.54}\percent. As seen in \cref{fig:cache:exact}, for \dgeqrf the relative error is overall larger yet similarly spread around~\SI0{\percent} with an average ARE of~\SI{.84}\percent. \end{example} While this approach yields accurate estimates, when the kernel invocations for each algorithm execution are timed separately and each measurement is preceded by a setup of one or more kernels, the timing procedure effectively takes longer than executing and measuring the target algorithm repeatedly. As a result, this method is at the same time highly accurate and impractical, which is why we do not pursue it further. \subsection{Cholesky Decomposition: \texorpdfstring{\dpotrf[U]}{dpotrf}} \label{sec:cache:dpotrfU} \input{cache/figures/cholUalg} First, we consider \lapack's upper triangular Cholesky decomposition \dpotrf[U] \[ \dm[lower, ']U \dm[upper]U \coloneqq \dm A \] of a symmetric positive definite $\dm A \in \R^{n \times n}$ in upper triangular storage. \Cref{alg:dpotrfU} presents the blocked algorithm employed in this routine, which is the transpose of \dpotrf's algorithm for the lower-triangular case (\cref{alg:chol2} on \cpageref{algs:chol}). As the algorithm traverses \dm A, both the size and shape of~\dm[mat02, width=1.25]{A_{02}} (the largest operand) change noticeably: It starts as a row panel, then grows to a square matrix, and finally shrinks to a column panel. \dm[mat02, width=1.25]{A_{02}}'s size determines the workload performed by the algorithm's large \refdgemmTN, which is reflected in the in-algorithm timings in \cref{fig:cache:dpotrfU:instr}.
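To illustrate, the evolution of $A_{02}$'s shape during the traversal can be sketched in a few lines of Python. The index convention used here (0-based step~$k$, $A_{02}$ of size $k b \times (n - (k + 1) b)$, with $n$ divisible by~$b$) is our own reading of the partitioning geometry, not code taken from \lapack:

```python
def a02_shape(n, b, k):
    """Shape (rows, cols) of the sub-matrix A02 at step k of the blocked
    upper-triangular Cholesky traversal (our own sketch of the
    partitioning; 0-based steps, n assumed divisible by b)."""
    return (k * b, n - (k + 1) * b)

def a02_shapes(n, b):
    """A02's shape at every step of the traversal."""
    return [a02_shape(n, b, k) for k in range(n // b)]
```

For $n = 2400$ and $b = 32$, $A_{02}$ is a wide row panel ($32 \times 2336$) early on, roughly square ($1184 \times 1184$) at step~37, and a tall column panel ($2336 \times 32$) near the end; its area, and with it the workload of the large \dgemm, peaks mid-traversal.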
\input{cache/figures/cholres} In our experiments, we execute \dpotrf[U] on a \harpertown with single-threaded \openblas, $\dm A \in \R^{2400 \times 2400}$,\footnote{% For $n = 2400$, the upper-triangular portion of~$A$ takes up about \SI{12}{\mebi\byte}---twice the size of the L2~cache. } and block size $b = 32$. \Cref{fig:cache:dpotrf:res} presents the relative performance difference with respect to in-algorithm timings for both repeated execution timings and our final estimates. Our estimates yield improvements for the \refdsyrkUT and \refdpotfU involving large matrices in the middle of \dm A's traversal. In the beginning of the traversal, the estimates are generally too pessimistic because some matrices are (partially) brought into cache by prefetching, which is not accounted for in our estimates. On average the relative error is reduced from~\SIrange{11.11}{7.87}{\percent}, i.e., by a factor of~1.41. However, note that the improvement is only visible in the averaged per-kernel relative error: Since the runtime of large \dgemm[TN]~(\ref*{plt:dgemmTN}) is overestimated, the accumulated runtime estimate for the entire algorithm actually becomes less accurate. \subsection{Inversion of a Triangular Matrix: \texorpdfstring{\dtrtri[LN]}{dtrtri}} \label{sec:cache:dtrtriLN} \input{cache/figures/trinvalg} We now take a closer look at \lapack's inversion of a lower-triangular matrix \dtrtri[LN] \[ \dm[lower]A \coloneqq \dm[lower, inv]A \] with $\dm A \in \R^{n \times n}$, whose blocked algorithm is presented in \cref{alg:dtrtriLN2}. In contrast to the previous operations, this algorithm traverses \dm A \tnwarrow from the bottom-right to the top-left, thereby operating on sub-matrices of increasing size. \Cref{fig:cache:dtrtriLN:instr} shows the in-algorithm timings for the algorithm, which are dominated by \refdtrmmLLNN. 
\input{cache/figures/trinvres} We execute \dtrtri[LN] on a \harpertown with single-threaded \openblas, $\dm A \in \R^{2400 \times 2400}$, and block size $b = 32$. \Cref{fig:cache:dtrtriLN:res} compares the performance measurements from repeated execution and our final estimates to in-algorithm timings: The improvements of our estimates are most significant in \refdtrmmLLNN (which performs the most computation) and \refdtrtiLN; the error is reduced from an average of~\SIrange{6.70}{3.37}{\percent}---a total improvement of~$1.99\times$. \subsection{Summary} We have seen that, on a \harpertown, the accuracy of our runtime estimates for kernels within blocked algorithms is increased by taking the state of the L2~cache throughout the algorithm execution into consideration. For different algorithms, problem sizes, block sizes, \blas implementations, and thread counts, we have seen improvements between~$1.15\times$ (with all 4~cores) and~$2.99\times$. \chapter{Conclusion} \label{ch:conclusion} This dissertation set out to predict the performance of dense linear algebra algorithms. It targeted two types of algorithms that require different prediction approaches: blocked algorithms and tensor contractions. For blocked algorithms, we accomplished accurate performance predictions through automatically generated performance models for compute kernels. Our predictions both reliably identify the fastest blocked algorithm from potentially large numbers of available alternatives, and select a block size for near-optimal algorithm performance. Our approach's main advantage is its separation of the model generation and the performance prediction: While the generation may take several hours, thousands of algorithm executions are afterwards predicted within seconds. A discussed downside to the approach, however, is that it does not account for algorithm-dependent caching effects.
For tensor contractions, we established performance predictions that identify the fastest among potentially hundreds of alternative \blas-based contraction algorithms. By using cache-aware micro-benchmarks instead of our performance models, our solution is highly accurate even for contractions with severely skewed dimensions. Furthermore, since these micro-benchmarks only execute a tiny fraction of each tensor contraction, they provide performance predictions orders of magnitude faster than empirical measurements. Together, our model generation framework and micro-benchmarks form a solid foundation for accurate and fast performance prediction for dense linear algebra algorithms. \section{Outlook} The techniques presented in this dissertation offer numerous opportunities for applications and extensions: \begin{itemize} \item Our methods can be applied to predict the performance of various types of algorithms and operations, such as recursive algorithms and algorithms-by-blocks. \item For dense eigenvalue solvers, our models can predict the two most computationally intensive stages: the reduction to tridiagonal form and the back-transformation. By additionally estimating the data-dependent performance of tridiagonal eigensolvers, one can predict the solution of complete eigenproblems. \item Beyond individual operations, our predictions can be applied to composite operations and algorithms, such as matrix chain multiplications or least squares solvers. \item Our models were designed to provide estimates for configurable yet limited ranges of problem sizes. For extrapolations to larger problems they should be revised to ensure that local performance phenomena do not distort faraway estimates. \item Computations on distributed memory systems, accelerators, and graphics cards can be predicted by combining our techniques with models for data movement and communication.
\end{itemize} \chapter*{Abstract} \input{abstract/abstract} \chapter*{Acknowledgments} First and foremost, I would like to express my sincere gratitude to my advisor Paolo Bientinesi. While guiding me through my studies, he always embraced my own ideas and helped me shape and develop them in countless discussions. While he granted me freedom in many aspects of my work, he always had time for anything between a quick exchange of thoughts and extensive brainstorming sessions. Beyond our professional relationship, we enjoyed twisty puzzles and board games in breaks from work, long game nights, and annual trips to SPIEL. I consider myself lucky to have spent my time as a doctoral student with him and his research group. The HPAC group proved to be much more than a collection of researchers working on remotely associated projects; my colleagues were not only a source of incredibly valuable discussions and feedback regarding my work, we also indulged in various unrelated arguments and exchanges over lunch and on many other occasions. My thanks go to Edoardo Di~Napoli, Diego Fabregat-Traver, Paul Springer, Jan Winkelmann, Henrik Barthels, Markus Höhnerbach, Sebastian Achilles, William McDoniel, and Caterina Fenu, as well as our former group members Matthias Petschow, Roman Iakymchuk, Daniel Tameling, and Lucas Beyer. I am grateful for financial support from the {\namestyle Deutsche Forschungs\-gemeinschaft} (DFG) through grant GSC~111 (the graduate school AICES) and the {\namestyle Deutsche Telekomstiftung}. Their programs not only funded my work, but opened further opportunities in the form of seminars and workshops, and connected me with like-minded students from various disciplines. The {\namestyle\rwth IT Center} provided and maintained an extremely reliable infrastructure central to my work: the {\namestyle \rwth Compute Cluster}.
I thank its staff not only for ensuring smooth operations but also for their competent and detailed responses to my many inquiries and requests regarding our institute's cluster partition. The AICES service team did their best to shield me from the bureaucracy of contracts, stipends, and reimbursements. I am grateful they allowed me to focus solely on my research. Even more important than a gratifying work environment is forgetting about it every once in a while. My friends played a bigger role in this effort than probably most of them know, whether we were simply hanging out or playing games, swimming, climbing, or playing badminton, or teaching swimming and working as lifeguards. You are too many to enumerate, but you know who you are. Finally, but most importantly, none of this would have been possible without the endless and uncompromising support of my parents. You are the reason I grew into the person I am today. Danke! \tableofcontents \subsection{Motivation: Blocked Algorithms} \label{sec:intro:blocked:algs} \definitionp[blocked algorithm]{Blocked algorithms} are commonly used to exploit the performance of optimized \blasl3 kernels\footnote{% The {\namestyle Basic Linear Algebra Subprograms} (BLAS) form the basis for high performance in dense linear algebra. See \cref{app:term,app:libs}. } in other matrix operations, such as decompositions, inversions, and reductions. Every blocked algorithm traverses its input matrix (or matrices) in steps of a fixed \definition{block size}; in each step of this traversal, it exposes a set of \definition[sub-matrices\\updates]{sub-matrices} to which it applies a series of {\em updates}. Through these updates, it progresses with the computation and obtains a portion of the operation's result; once the matrix traversal completes, the entire result is computed. \input{intro/figures/blocked} \footnotetextbefore{% \Cref{app:libs} gives an overview of the \blas and \lapack routines used throughout this work.
When specified, the subscripts indicate the values of the flag arguments, which identify the variant of the operation; e.g., in \dpotrf[L] the \code L corresponds to the argument \code{uplo} indicating a lower-triangular decomposition. } \begin{example}{Blocked algorithms for the Cholesky decomposition}{intro:chol} \newcommand\Azz{\dm[mat00, lower]{A_{00}}\xspace}% \newcommand\Aoz{\dm[mat10, height=.5]{A_{10}}\xspace}% \newcommand\Aoo{\dm[mat11, size=.5, lower]{A_{11}}\xspace}% \newcommand\Atz{\dm[mat20, height=1.25]{A_{20}}\xspace}% \Cref{algs:chol} illustrates blocked algorithms for a simple yet representative operation: the lower-triangular Cholesky decomposition \[ \dm[lower]L \dm[upper, ']L \coloneqq \dm A \] of a symmetric positive definite (SPD) matrix $\dm A \in \R^{n \times n}$ in lower-triangular storage (\lapack: \dpotrf[L]\footnotemark). For this operation there exist three different blocked algorithms. Each algorithm traverses \dm A diagonally from the top-left to the bottom-right \tsearrow and computes the Cholesky factor~\dm[lower]L in place. At each step of the traversal, the algorithm exposes the sub-matrices shown in \cref{algs:chol:traversal} and makes progress by applying the algorithm-dependent updates in \cref{alg:chol1,alg:chol2,alg:chol3}. Before these updates, the sub-matrix~\Azz, which in the first step is of size $0 \times 0$, already contains a portion of the Cholesky factor~\dm[lower]L; after the updates, the sub-matrices~\Aoz and~\Aoo also contain their portions of~\dm[lower]L, and in the next step become part of~\Azz. Once the traversal reaches the bottom-right corner (i.e., \Azz is now of size $n \times n$), the entire matrix is factorized. 
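To make the structure of such a blocked traversal concrete, the following Python/NumPy sketch (purely illustrative, not taken from any library, and not necessarily matching any one of the three variants in \cref{algs:chol}) implements a right-looking formulation of the blocked lower-triangular Cholesky decomposition; a general solve stands in for the triangular-solve kernel (\blas: \dtrsm), and the symmetric trailing update corresponds to \dsyrk:

```python
import numpy as np

def blocked_cholesky(A, b=2):
    """Blocked lower-triangular Cholesky L L^T = A (right-looking variant).

    Only the lower triangle is read and written, mirroring
    lower-triangular storage."""
    A = np.tril(np.asarray(A, dtype=float))  # copy; discard upper triangle
    n = A.shape[0]
    for k in range(0, n, b):                 # diagonal traversal
        e = min(k + b, n)
        # Factor the diagonal block (unblocked, cf. LAPACK's dpotf2).
        A[k:e, k:e] = np.linalg.cholesky(A[k:e, k:e])
        if e < n:
            L11 = A[k:e, k:e]
            # Panel update A21 := A21 L11^{-T} (dtrsm; general solve here).
            A[e:, k:e] = np.linalg.solve(L11, A[e:, k:e].T).T
            # Trailing update A22 := A22 - A21 A21^T, lower part (dsyrk).
            A[e:, e:] -= np.tril(A[e:, k:e] @ A[e:, k:e].T)
    return A
```

For a symmetric positive definite input, the result coincides with the unblocked factorization, and the loop directly reflects the traversal described above: after each step, the first $e$ columns hold their portion of~$L$.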
\end{example} Blocked algorithms pose two \definition[optimization challenges:\\alternative algorithms]{optimization challenges}: \begin{itemize} \item For each operation there typically exist several {\em alternative algorithms}, which are mathematically equivalent in exact arithmetic; however, even if such algorithms perform the same number of floating point operations, they may differ significantly in performance. \item For each algorithm, the \definition{block size} influences the number of traversal steps and the sizes and shapes of the exposed sub-matrices, and thus the performance of the kernels applied to them. \end{itemize} What makes matters more complicated is that the optimal choice depends on various factors, such as the hardware, the number of threads, the kernel implementations, and the problem size. \input{intro/figures/chol_vars} \footnotetextbefore{% \Cref{app:hardware} provides an overview of the processors used throughout this work. } \begin{example}{Performance of alternative algorithms}{intro:chol:var} \Cref{fig:intro:chol:vars} shows the performance of the three blocked Cholesky decompositions from \cref{algs:chol} with block size~$b = 128$ and increasing problem size~$n$ on a 12-core \haswell\footnotemark{} with single- and multi-threaded \openblas. In both the single- and multi-threaded scenarios, algorithm~3~(\ref*{plt:chol3}) is the fastest among the three alternatives for all problem sizes. On a single core and for problem size $n = 4152$, it is \SIlist{27.40;12.89}{\percent} faster than, respectively, algorithms~1~(\ref*{plt:chol1}) and~2~(\ref*{plt:chol2}), and it reaches up to \SI{91.01}{\percent} of the processor's theoretical peak performance (red line \legendline[very thick, darkred] at the top of the plot).
On all 12~of the processor's cores, algorithm~3~(\ref*{plt:chol3}) still reaches an efficiency of~\SI{69.70}\percent, and outperforms algorithms~1~(\ref*{plt:chol1}) and~2~(\ref*{plt:chol2}) by, respectively, $5.21\times$ and~$1.92\times$. Although algorithm~3~(\ref*{plt:chol3}) is clearly the fastest in this and many other scenarios, \lapack's \dpotrf[L] implements algorithm~2~(\ref*{plt:chol2}). For other operations, the choice becomes more complicated, since no single algorithm is the fastest for all problem sizes and scenarios. For instance, for the single-threaded inversion of a lower-triangular matrix $\dm[lower]A \coloneqq \dm[lower, inv]A$, two different algorithms are the fastest for small and large matrices, with the performance differing by up to~\SI{13}{\percent} in either direction (\cref{sec:pred:var:trinv}). \end{example} \input{intro/figures/chol_b} \begin{example}{Influence of the block size on performance}{intro:chol:b} Let us consider the blocked Cholesky decomposition algorithm~3~(\ref*{plt:chol3} in \cref{fig:intro:chol:vars}) with fixed problem sizes~$n = 1000$, 2000, 3000, and~4000 and varying block size~$b$. \Cref{fig:intro:chol:b} presents the performance of these algorithm executions for 1 and 12~threads on the \haswell using \openblas: Single-threaded, the optimal block size increases from~$b = 96$ for~$n = 1000$ to~$b = 184$ for~$n = 4000$. On 12~cores, on the other hand, the performance is less smooth and the optimal choices for~$b$ are between~56 and~112. \Cref{fig:intro:chol:b} demonstrates the importance of selecting the block size dynamically: If we use~$b = 184$, which is optimal for~$n = 4000$ on one core, for~$n = 1000$ on 12~cores we only reach \SI{77.62}{\percent} of the algorithm's optimal performance. On the other hand, \lapack's default block size~$b = 64$ (which is close to the optimal~$b = 56$ for~$n = 1000$ on 12~cores) would reach \SI{95.95}{\percent} of the optimal single-threaded performance for~$n = 4000$.
\end{example} \subsection{Prediction through Performance Models} \label{sec:intro:blocked:pred} Naturally, both the best algorithm and its optimal block size for a given scenario (operation, problem size, hardware, kernel library, multi-threading) can be determined through exhaustive performance measurements; however, this is extremely time-consuming and thus often impractical. Instead, we aim to determine the optimal configuration {\em without executing} any of the alternative algorithms. For this purpose, we use the hierarchical structure of blocked algorithms: Their entire computation is performed in a series of calls to a few kernel routines; hence, by accurately estimating the runtime of these kernels, we can predict an entire algorithm's runtime and performance. In order to estimate the kernel runtimes, let us study how these kernels are used: In each algorithm execution, the same set of kernels is invoked repeatedly---once for each step of the blocked matrix traversal. Each invocation, however, works on operands of different size depending on the progress of the algorithm's traversal, the input problem size, and the block size. In short, we need to estimate the performance of only a few kernels, yet with potentially wide ranges of operand sizes. Our solution is \definition{performance modeling}, as detailed in \cref{ch:model}: Based on a detailed study of how a kernel's arguments (i.e., flags, operand sizes, etc.) affect its performance, we design performance models in the form of piecewise multi-variate polynomials. These models are generated automatically once for each hardware and software setup and subsequently provide accurate performance estimates at a tiny fraction of the kernel's runtime. Using such estimates, we \definition[performance prediction]{predict} the {\em performance} of blocked algorithms, as presented in \cref{ch:pred}.
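To illustrate the idea at toy scale, the following Python sketch fits a runtime model to kernel measurements and predicts a blocked algorithm's runtime by summing per-step kernel estimates; the measurement data is fabricated, and a single univariate polynomial stands in for the piecewise multi-variate models of \cref{ch:model}:

```python
import numpy as np

# Hypothetical dgemm-like runtimes (seconds) for square operand size n;
# in practice, such measurements are collected automatically once per
# hardware and software setup.
sizes = np.array([64.0, 128.0, 256.0, 384.0, 512.0])
times = 2e-11 * sizes**3 + 1e-8 * sizes**2  # stand-in measurements

# Fit one polynomial piece t(n) = sum_i c_i n^i to the measurements.
t_model = np.poly1d(np.polyfit(sizes, times, deg=3))

def predict_blocked_runtime(n, b):
    """Estimate a blocked algorithm's total runtime by summing the
    model's estimates for the trailing-matrix update at each step."""
    total = 0.0
    for k in range(0, n, b):
        m = n - min(k + b, n)  # trailing-matrix size after this step
        if m > 0:
            total += float(t_model(m))
    return total

# Rank candidate block sizes without executing the algorithm itself.
best_b = min(range(32, 257, 32),
             key=lambda b: predict_blocked_runtime(1024, b))
```

With real measurements, the same ranking mechanism selects among both block sizes and alternative algorithms; the simplistic cost function here (only the trailing update, one operand dimension) is where the actual piecewise multi-variate models come in.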
These fast predictions prove to be highly accurate, and allow us both to rank the blocked algorithms for a given operation according to their performance and to find near-optimal values for the algorithmic block sizes. While our models yield accurate performance estimates for individual kernel executions, they do not capture the performance influence of \definition{caching} between kernels. Prior to the invocation of each compute kernel in an algorithm, typically only a portion of its operands is in cache, and loading operands from main memory increases the kernel runtime. \Cref{ch:cache} investigates how caching effects can be accounted for in blocked algorithms, and attempts to combine pure in- and out-of-cache estimates into more accurate predictions. However, while the results look promising on a rather old \harpertown, the analysis reveals that on modern processors the effect of caching on kernel performance is so complex that accounting for it in algorithm-independent performance models to further improve our prediction accuracy is infeasible. \chapter{Introduction} \chapterlabel{intro} { \tikzsetexternalprefix{externalized/intro-} \input{intro/intro.tex} \section[Performance Modeling for Blocked Algorithms] {Performance Modeling\newline for Blocked Algorithms} \label{sec:intro:blocked} \input{intro/blocked} \section{Micro-Benchmarks for Tensor Contractions} \label{sec:intro:tensor} \input{intro/tensors} \section{Related Work} \label{sec:intro:relwork} \input{intro/relwork} } \subsection{Dense Linear Algebra Libraries and Algorithms} \label{sec:relwork:libsalgs} We begin with a brief history of the fundamental DLA libraries \blas and \lapack and prominent implementations in \cref{sec:relwork:libs}.
We then focus on blocked algorithms and their tuning opportunities in \cref{sec:relwork:blocked}, and finally give an overview of alternative algorithms and libraries for distributed-memory and accelerator hardware in, respectively, \cref{sec:relwork:altalgs,sec:relwork:dist}. \subsubsection{\blas and \lapack} \label{sec:relwork:libs} The development of standardized DLA libraries began in~1979 with the inception of the {\namestyle Basic Linear Algebra Subprograms} (\definition{\blas})~\cite{blasl1}, a \fortran interface specification for, initially, various ``Level~1'' scalar and vector operations. It was subsequently extended to kernels for ``Level~2'' matrix-vector~\cite{blasl2} and ``Level~3'' matrix-matrix~\cite{blasl3} operations in, respectively, 1988 and~1990. The aim of the \blas specification is to enable performance-portable applications: DLA codes reach high performance on different hardware by using architecture-specific \blas implementations. Although computer architectures have evolved dramatically in the last~40 years, this principle of performance portability is still at the core of all current DLA libraries. The \blas specification is accompanied by a reference implementation~\cite{blasweb} that, while fully functional and well documented, is deliberately simple and thus slow; to reach high performance, users instead link with optimized \definition[open-source implementations]{\blas implementations}. The oldest {\em open-source} implementation still in use is the {\namestyle Automatically Tuned Linear Algebra Software} (\atlas)~\cite{atlas1, atlas3, atlas2, atlasweb}, first released in 1997; this auto-tuning-based library's main strength is that it yields decent performance on a wide range of hardware platforms with little developer and user effort. The first major open-source implementation hand-tuned for modern processors with cache hierarchies was {\swstyle GotoBLAS}~\cite{gotoblas1, gotoblas2, gotoblasweb}.
It reaches up to around \SI{90}{\percent} of a processor's peak floating-point performance for both sequential and multi-threaded Level~3 kernels and good bandwidth-bound performance for Level~1 and~2 operations. After {\swstyle GotoBLAS}'s discontinuation in~2010, its code-base and approach were picked up and extended to more recent processors in the \openblas library~\cite{openblasweb}, which is currently the fastest open-source implementation for many architectures. Also inspired by {\swstyle GotoBLAS}'s approach is the fairly recent {\namestyle \blas-like Library Instantiation Software} (\blis)~\cite{blis3, blis1, blis2, blisweb}, an open-source framework that provides optimized kernels for basic DLA operations, such as the \blas, based on one hand-tuned micro-kernel per architecture. In addition to open-source implementations, many hardware \definition[vendor implementations]{vendors} maintain and distribute their own high-performance {\em\blas}, e.g., \intel's {\namestyle Math Kernel Library} (\mkl)~\cite{mklweb}, \apple's framework \accelerate~\cite{accelerateweb}, and {\namestyle IBM}'s {\namestyle Engineering and Scientific Subroutine Library} (\essl)~\cite{esslweb}. \blas forms the basis for DLA libraries covering more advanced operations. The earliest library built on top of first \blasl1 and later Level~2 was {\swstyle LINPACK}~\cite{linpack, linpackweb}, a package of solvers for linear equations and least-squares problems from the~1970s and~1980s. {\swstyle LINPACK} together with {\swstyle EISPACK}~\cite{eispack, eispackweb}, a collection of eigenvalue solvers, was superseded by the {\namestyle Linear Algebra PACKage} (\definition{\lapack})~\cite{lapack, lapackweb} in~1992. \lapack has since been extended with new features and algorithms, and is still under active development. 
Just like \blas, \lapack functions as a de-facto standard interface specification for many advanced DLA operations; libraries such as \openblas and \mkl adopt its interface and provide tuned implementations of various routines. For more details on \blas and \lapack, and their kernels and implementations used throughout this work, see \cref{app:libs}. \subsubsection{Blocked Algorithms} \label{sec:relwork:blocked} \lapack uses \definition{blocked algorithms} for most of its dense operations. The core idea behind these algorithms is to leverage a processor's cache hierarchy by increasing the spatial and temporal locality of operands, and by casting most of an operation's computation in terms of \blasl3 kernels. As a result, complex operations can reach performance levels close to the hardware's theoretical peak. However, for each operation, there typically exist multiple \definition{alternative blocked algorithms}, of which \lapack offers only one, but not always the fastest. The alternative algorithms for a given operation can be derived from its mathematical formulation systematically~\cite{derivingbalgs} and automatically~\cite{loopgen, pmegen}. Based on these principles, \libflame~\cite{libflameref, libflame, libflameweb} offers many alternative algorithms for each operation, and for several operations provides more efficient default algorithms than \lapack. In this work we consider \libflame's blocked algorithms for various operations, and aim to predict which of them is most efficient for given scenarios. Another caveat of blocked algorithms is their \definition[block size tuning]{block sizes}, which need to be carefully {\em tuned} to maximize performance.
Since this is a well-known aspect of blocked algorithms~\cite{rooflinedla, blocksizetuning}, \lapack encapsulates and exposes all its tuning parameters in \ilaenv, a central routine that is used to configure the library at compile time; for many operations the block sizes used by \lapack's reference implementation of \ilaenv (64~for most algorithms) have been too small on recent hardware for quite some time. Although the necessity of optimizing block sizes is well understood and taken care of by implementations such as \mkl, it remains non-trivial, and in fact few end-users and application-developers are aware of it. The automated model-based optimization of the block size for blocked algorithms is the second major goal of this work. \subsubsection{Alternatives to Blocked Algorithms} \label{sec:relwork:altalgs} An alternative to blocked algorithms is \definition{recursive algorithms}, which avoid both the algorithm selection and block-size optimization. They are also known as ``cache oblivious'' algorithms~\cite{cacheoblivious2, cacheoblivious1} since they minimize the data-movement between cache levels~\cite{dlarec}. Recursion has been suggested for many DLA operations, such as the LU~decomposition~\cite{lurec, lurec2}, the Cholesky decomposition~\cite{cholrec}, triangular matrix inversion~\cite{trinvrec}, two-sided linear systems~\cite{sygstrec}, tall-and-skinny QR~factorization~\cite{qrrec}, and Sylvester-type equation solvers~\cite{recsy, recsyweb}. However, since no readily-available recursion-based library comparable to \lapack existed, we developed the {\namestyle Recursive \lapack collection} (\definition{\relapack})~\cite{relapack, relapackweb}. \relapack provides recursive implementations for 48~\lapack routines, and outperforms not only the reference implementation but in many cases also optimized libraries such as \openblas and \mkl. 
A second alternative to blocked algorithms tailored to shared-memory systems is task-based \definition{algorithms-by-blocks}, also known as ``block algorithms'' or ``tiled algorithms''. However, these algorithms not only introduce a specialized storage scheme of matrices ``by block'', but also require custom task scheduling mechanisms. Implementations of such schedulers include {\namestyle QUARK}~\cite{quark} as part of {\namestyle PLASMA}~\cite{plasma}, {\namestyle DAGuE}~\cite{dague}, {\namestyle SMPSs}~\cite{smpssdla}, and {\namestyle SuperMatrix}~\cite{supermatrix}. \subsubsection{Distributed-Memory and Accelerators} \label{sec:relwork:dist} \definition[distributed memory]{Distributed-memory} systems and super-computers are indispensable for large-scale DLA computations. The first noteworthy extension of the \blas and the \lapack to this domain was the {\namestyle Scalable Linear Algebra PACKage} (\scalapack)~\cite{scalapack, scalapackweb}, written in \fortran and based on \blas, \lapack, and the {\namestyle Message Passing Interface} (MPI). However, {\namestyle ScaLAPACK} is only sparingly updated (last in~2012), and, instead, the state of the art for distributed-memory DLA is {\namestyle Elemental}~\cite{elemental, elementalweb}, an actively developed \cpplang~library, based on \libflame's methodology and on object-oriented and templated programming techniques. Since \definition{accelerators} such as {\namestyle Xeon-Phi} coprocessors and graphics processors lend themselves well to compute-intensive operations, they are a natural target for DLA codes. While some classic \blas implementations such as \atlas, \blis, and \mkl can be used on the x86-based {\namestyle Xeon Phi}s, separate libraries are required for graphics processors: {\namestyle NVIDIA}'s {\namestyle cuBLAS}~\cite{cublasweb} provides high-performance \blas kernels for {\langstyle CUDA}-enabled graphics cards, and {\namestyle clBLAS}~\cite{clblasweb} targets {\langstyle OpenCL}-capable devices.
Furthermore, {\namestyle Matrix Algebra on GPU and Multicore Architectures} ({\namestyle MAGMA})~\cite{magma, magmaweb} targets \blas and \lapack operations on heterogeneous systems (e.g., CPU + GPU). \subsection{Performance Measurements and Profiling} \label{sec:relwork:meas} Runtime measurements of both application codes and algorithms are crucial in the investigation of performance behaviors and bottlenecks, as well as in optimization and tuning in general; hence, numerous tools facilitate such measurements. Simple timers are accessible in virtually any language and environment: e.g., \code{time} in Unix, \code{rdtsc} in x86~assembly, \code{gettimeofday()} in~\clang, \code{omp\_get\_wtime()} in {\namestyle OpenMP}, \code{tic} and \code{toc} in \matlab, and \code{timeit} in \python. Several more advanced tools \definition[profiling]{profile} executions of functions and communications in applications by tracing or sampling: e.g., {\namestyle gprof}~\cite{gprof, gprofweb}, {\namestyle VAMPIR}~\cite{vampirweb}, {\namestyle TAU}~\cite{tau, tauweb}, {\namestyle Scalasca}~\cite{scalasca, scalascaweb}, and \intel's {\namestyle VTune}~\cite{vtuneweb}. While such tools are invaluable in the performance analysis of application codes, their generality makes them somewhat unwieldy for our purposes of investigating DLA kernel performance. Therefore, we designed {\namestyle Experimental Linear Algebra Performance Studies} (\definition{\elaps})~\cite{elaps, elapsweb}, a framework for performance measurements and analysis of DLA routines and algorithms, further detailed in \cref{sec:meas:elaps}. \subsection{Performance Modeling and Predictions} \label{sec:relwork:model} Predicting and modeling application performance is an important aspect of high-performance computing, and the term ``performance modeling'' is used to describe many different techniques and approaches. This section gives a brief overview of such approaches with focus on methods for DLA algorithms.
The well-established \definition{Roofline model}~\cite{roofline1} does not predict performance, but relates an algorithm's attained performance to the hardware's potential: As detailed in \cref{sec:term:roofline}, it allows us to evaluate an execution's resource efficiency by relating the algorithm's arithmetic intensity and its performance to the hardware's peak main-memory bandwidth and floating-point performance. It has been applied, implemented, and extended in numerous publications, such as~\cite{rooflinecache, rooflinetoolkit, roofline2}. Notably, \citeauthor*{rooflinedla} use the roofline model (the arithmetic intensity in particular) to optimize the block size for a blocked matrix inversion algorithm~\cite{rooflinedla}. Model-based performance tuning of \blas implementations was suggested for both \atlas~\cite{atlasmodel} and \blis~\cite{blismodel}, showing that near-optimal \blas performance can be reached without measurement-based autotuning: Instead, they select blocking sizes according to, e.g., the \blas implementation and the target processor's cache sizes. Note that these approaches are used to tune \blas kernels, and do not actually predict their performance; hence they cannot serve as a basis for our predictions. Previous work in our research group by \citeauthor*{roman1} constructed accurate \definition[analytical models]{analytical performance models} for small DLA kernels~\cite{romandis, roman1}. These models target problems that fit within a \harpertown's last-level cache (L2), and are based on the number of memory-stalls and arithmetic operations as well as their overlap incurred by specific kernel implementations. As such, they require not only a deep understanding of the processor architecture, but also a detailed analysis of the kernel implementation. While the resulting models yield accurate predictions within a few percent of reference measurements, they are not easily extended to larger problems and other operations.
Therefore, this work instead considers automatically generated, measurement-based models. \Citeauthor*{blis3model} construct \definition[piecewise models]{piecewise} runtime and energy {\em models}---somewhat similar to those presented in this work---for the \blis implementations of \dgemm and \dtrsm~\cite{blis3model} on a {\hwstyle Sandy Bridge-EP E5-2620}. However, their approach is based on extensive knowledge of \blis~\cite{blismodel}, and their models only represent one degree of freedom (by considering only square matrices or operations on panel matrices with fixed width/height). Their average runtime model accuracy for \dgemm and \dtrsm is, respectively, \SIlist{1.5;4.5}\percent, with local errors of up to, respectively, \SIlist{4.5;7}\percent. \citeauthor*{blischolmodel} extend this work to multi-threaded \dgemm, \dtrsm, and \dsyrk in order to predict the performance of a blocked Cholesky decomposition algorithm with fixed block size~\cite{blischolmodel}; their average runtime prediction errors are \SIlist{3.7;2.4}\percent, depending on the parallelization within \blis. In contrast to these publications, the modeling framework presented in this work, which was developed around the same time, is fully automated, applicable to any \blas- or \lapack-like routine, not limited to one implementation and hardware, and offers models with multiple degrees of freedom. In a separate effort \citeauthor*{tridiagmodel} constructs measurement-based, yet hardware- and \definition{implementation-independent models} in the form of a series of univariate polynomials (one kernel argument is represented by the polynomial, the other varied in the series) for several \blasl3 kernels~\cite{tridiagmodel, qrmodel}. These models are used to predict the performance of both a blocked reduction to tridiagonal form~\cite{tridiagmodel} and a blocked multishift QR~algorithm~\cite{qrmodel}. 
The resulting prediction error on an unspecified {\namestyle AMD Opteron} is reported to be below~\SI{10}{\percent} for the single-threaded tridiagonalization, and is on average around~\SI{10}{\percent} for the QR~algorithm using multi-threaded \blas. In contrast, the more general piecewise models proposed in this work yield considerably smaller prediction errors for various blocked algorithms. Several research projects model the performance of \definition[distributed memory]{distributed-memory} applications. A general-purpose approach by \citeauthor*{alex1} builds basic performance models for kernels in application codes based on performance profiling~\cite{alex2, alex1}, allowing one to investigate the complexity and scalability of application components. In the field of distributed-memory DLA, most modeling efforts target \scalapack using domain-specific knowledge through, e.g., polynomial fitting~\cite{scalapackpolfit} or hierarchical modeling of kernels~\cite{scalapckhierarchmodel}. \subsection{Tensor Contractions} \label{sec:relwork:tensor} Tensor contractions are at the core of scientific computations in fields such as machine learning~\cite{tensorml}, general relativity~\cite{generalrelativity, generalrelativity2}, and quantum chemistry~\cite{ccd2, ccd1}. Since, generally speaking, such contractions are high-dimensional matrix-matrix multiplications, they are closely related to \blasl3 operations, and in fact most contractions can be cast in terms of one or more calls to \dgemm, either by adding loops or transpositions; this is implemented in many frameworks, such as the {\namestyle Tensor Contraction Engine} (TCE)~\cite{tce, tceweb}, the {\namestyle Cyclops Tensor Framework} (CTF)~\cite{cyclops, cyclopsweb}, the \matlab{} {\namestyle Tensor Toolbox}~\cite{matlabtt, matlabttweb}, and {\namestyle libtensor}~\cite{libtensor, libtensorweb}.
In contrast to these implementations, which rely on a single algorithm for each contraction (potentially selected through heuristics), previous work in our group by \citeauthor*{tensorgen} investigated the automated generation of all alternative \blas-based algorithms~\cite{tensorgen}. \Cref{ch:tensor} picks up this work and presents a performance prediction framework for such algorithms that allows us to automatically identify the fastest algorithm~\cite{tensorpred}. More recent and ongoing work in our group by \citeauthor*{gett} attempts to break the barrier between contraction algorithms and \dgemm implementations. Following the structured design of \blis~\cite{blis1}, they propose code generators that provide high-performance algorithms tailored to specific contraction problems, reaching close-to-optimal performance~\cite{gett}. Their tools construct numerous alternative implementations, and identify the fastest through a combination of heuristics and micro-benchmarks. \subsection{Motivation: Tensor Contraction Algorithms} \label{sec:intro:tensor:algs} Computationally, tensor contractions are generalizations of matrix-vector and matrix-matrix products to operands of higher dimensionality. While \blas covers contractions of up to two-dimensional operands (i.e., matrices), there are no equivalently established and standardized high-performance libraries for general tensor contractions. Fortunately, just as matrix-matrix products can be decomposed into sequences of matrix-vector products, higher-dimensional tensor contractions can be cast in terms of matrix-matrix or matrix-vector kernels. (A broader overview of alternative approaches is given in \cref{sec:relwork:tensor}.)
\input{intro/figures/tensor_algs} \begin{example}{Tensor contraction algorithms}{intro:tensor:algs} Let us consider the contraction $C_{abc} \coloneqq A_{ai} B_{ibc}$ (in Einstein notation), which is visualized as follows: \[ \begin{tikzpicture}[baseline=(c.base)] \begin{drawcube} \node[anchor=east] at (-1, 0, 1) {$\scriptstyle a$}; \node[anchor=north] at (0, -1, 1) {$\scriptstyle b$}; \node[anchor=north west] at (1, -1, 0) {$\scriptstyle c$}; \node (c) {$C$}; \end{drawcube} \end{tikzpicture} \coloneqq \begin{tikzpicture}[baseline=(c.base)] \begin{drawsquare} \node[anchor=east] at (-1, 0, 0) {$\scriptstyle a$}; \node[anchor=north] at (0, -1, 0) {$\scriptstyle i$}; \node {$A$}; \end{drawsquare} \end{tikzpicture} \matmatsep \begin{tikzpicture}[baseline=(c.base)] \begin{drawcube} \node[anchor=east] at (-1, 0, 1) {$\scriptstyle i$}; \node[anchor=north] at (0, -1, 1) {$\scriptstyle b$}; \node[anchor=north west] at (1, -1, 0) {$\scriptstyle c$}; \node {$B$}; \end{drawcube} \end{tikzpicture} \enspace. \] The entries~$C$\code{[a,b,c]} of the resulting three-dimensional tensor $C \in \R^{a \times b \times c}$ are computed as \[ \forall \code a \forall \code b \forall \code c :\ C\text{\code{[a,b,c]}} \coloneqq \sum_\code i A\text{\code{[a,i]}} B\text{\code{[i,b,c]}} \enspace. \] As further described in \cref{sec:tensor:alggen}, this contraction can be performed by a total of 36~alternative algorithms, each consisting of one or more \code{\bf for}-loops with a single \blas kernel at its core. Three examples of such algorithms using \blasl1, 2, and~3 kernels are shown in \cref{fig:intro:tensor:algs}. These algorithms use \matlab's ``\code:'' slicing notation\footnotemark{} to access matrices and vectors within the tensors~$A$, $B$, and~$C$; the resulting operand shapes within the tensors passed to the \blas kernel are shown alongside the algorithms. 
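The equivalence of these formulations is easy to check numerically. The following sketch (purely illustrative, with NumPy standing in for \blas calls; it is not the generated code) computes the contraction via the elementwise definition, via a single matrix-matrix product on a flattened view of~$B$, and via a loop of matrix-matrix products over the \code c~dimension:

```python
import numpy as np

rng = np.random.default_rng(1)
na, nb, nc, ni = 5, 4, 3, 6
A = rng.standard_normal((na, ni))
B = rng.standard_normal((ni, nb, nc))

# Reference: the elementwise definition C[a,b,c] = sum_i A[a,i] B[i,b,c].
C_ref = np.einsum('ai,ibc->abc', A, B)

# Single-kernel algorithm: since i is the leading dimension of B, the
# contraction is one matrix-matrix product (dgemm) on a flattened view.
C_gemm = (A @ B.reshape(ni, nb * nc)).reshape(na, nb, nc)

# Loop-based algorithm: one matrix-matrix product per slice B[:, :, c].
C_loop = np.empty((na, nb, nc))
for c in range(nc):
    C_loop[:, :, c] = A @ B[:, :, c]

assert np.allclose(C_gemm, C_ref) and np.allclose(C_loop, C_ref)
```

All variants produce the same tensor; they differ only in which kernel performs the work and on operands of which shapes, which is precisely what makes their performance diverge.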
\end{example} \footnotetext{% The index ``\code:'' in a tensor refers to all elements along that dimension, e.g., $A$\code{[a,:]} is the \code a-th row of~$A$. } Each tensor contraction can be computed via \blas kernels through many---even hundreds---of algorithms, each with its own performance behavior. The \definition[optimization challenge:\\alternative algorithms\\skewed dimensions]{optimization challenge} of identifying the fastest among such a set of {\em alternative algorithms} is especially difficult due to {\em skewed dimensions} (i.e., one or more dimensions are extremely small), which are commonly encountered in practice and for which most \blas implementations are typically not optimized. \input{intro/figures/tensor_perf} \begin{example}{Performance of contraction algorithms}{intro:tensor:perf} Let us consider the tensor contraction $C_{abc} \coloneqq A_{ai} B_{ibc}$ from \cref{ex:intro:tensor:algs} with tensors $A \in \R^{n \times 8}$, $B \in \R^{8 \times n \times n}$, and thus $C \in \R^{n \times n \times n}$; for~$n = 100$, this can be visualized as follows: \[ \begin{tikzpicture}[baseline=(c.base)] \begin{drawcube} \node[anchor=east] at (-1, 0, 1) {$\scriptstyle a$}; \node[anchor=north] at (0, -1, 1) {$\scriptstyle b$}; \node[anchor=north west] at (1, -1, 0) {$\scriptstyle c$}; \node (c) {$C$}; \end{drawcube} \end{tikzpicture} \coloneqq \begin{tikzpicture}[baseline=(a.base), x={(.08, 0)}] \begin{drawsquare} \node[anchor=east] at (-1, 0, 0) {$\scriptstyle a$}; \node[anchor=north] at (0, -1, 0) {$\scriptstyle i$}; \node (a) {$A$}; \end{drawsquare} \end{tikzpicture} \begin{tikzpicture}[baseline=(a.base), y={(0, .08)}] \begin{drawcube} \node[anchor=east] at (-1, 0, 1) {$\scriptstyle i$}; \node[anchor=north] at (0, -1, 1) {$\scriptstyle b$}; \node[anchor=north west] at (1, -1, 0) {$\scriptstyle c$}; \node (b) {$B$}; \end{drawcube} \end{tikzpicture} \enspace.
\] \Cref{fig:intro:tensor:perf1} presents the performance of all 36~algorithms for this contraction on a \harpertown with single-threaded \openblas. While the two \dgemm-based algorithms~(\ref*{plt:intro:tensor:dgemm}) are clearly faster than the others, they differ in performance by up to \SI{23.32}\percent; with other kernels the differences are even more extreme, exceeding a factor of~60 for the \daxpy-based algorithms~(\ref*{plt:intro:tensor:daxpy}). \Cref{fig:intro:tensor:perf2} showcases the performance of algorithms for the more complex contraction $C_{abc} \coloneqq A_{ija} B_{jbic}$ on all 10~cores of an \ivybridge using multi-threaded \openblas. In this scenario, the performance of the \dgemm-based algorithms alone differs by up to~$3\times$. \end{example} One could argue that only \dgemm-based algorithms are viable candidates to achieve the best performance; while this observation is true for the most part, due to skewed dimensions even the performance of these algorithms alone can differ dramatically. Furthermore, some contractions (e.g., $C_a \coloneqq A_{iaj} B_{ji}$) cannot be implemented via \dgemm in the first place. Therefore, we aim at the accurate prediction of any \blas-based contraction, irrespective of which kernel is used. \subsection{Prediction through Micro-Benchmarks} \label{sec:intro:tensor:pred} At first sight, the situation seems similar to the selection of blocked algorithms: We want to avoid exhaustive performance measurements and select the best algorithm {\em without executing} any of the alternatives; our strategy is once again to predict each algorithm's performance by estimating its invoked kernel's runtime.
However, while performance models accurately estimate the performance of such kernels for many operand sizes, they perform rather poorly for operations with skewed dimensions: For extremely thin or small operands, \blas kernels exhibit strong size-dependent performance fluctuations, which are impractical to capture and represent in performance models.
While we cannot rely on performance models, analyzing the structure of tensor contraction algorithms suggests a different approach: In contrast to blocked algorithms, a contraction algorithm performs its entire computation in a series of calls to a \definition[single kernel\\fixed size\\micro-benchmarks]{single \blas kernel} with operands of {\em fixed size}.
Based on this observation, we estimate the performance of such calls by constructing a small set of {\em micro-benchmarks} that execute the kernel only a few times, and thus perform only a fraction of the algorithm's computation.
Since memory locality plays an especially important role in contractions with skewed dimensions, we carefully recreate the state of the processor's caches within the micro-benchmarks to time the kernel in conditions analogous to those in the actual algorithm.
Based on such micro-benchmarks, we can predict the total runtime of contraction algorithms for tensors of various shapes and sizes.
These predictions reliably single out the fastest algorithm from a set of alternatives several orders of magnitude faster than a single algorithm execution.
\subsubsection{Background and System Noise} \label{sec:meas:fluct:noise}
The potentially most disturbing, yet also quite easily avoidable source of fluctuations are other \definition{background processes} competing for the processor's resources.
\input{meas/figures/fluct} \begin{example}{Influence of background noise}{meas:fluct} \Cref{fig:meas:fluct} presents the runtime of 1000~repetitions of the matrix-matrix multiplication $\dm C \coloneqq \dm A \matmatsep \dm B + \dm C$ (\dgemm[NN]) with $\dm A, \dm B, \dm C \in \R^{100 \times 100}$ on a \broadwell (as part of a {\namestyle MacBook Pro}) with \apple's framework \accelerate, and a \sandybridge (as part of \rwth's computing cluster) with \mkl.
On the \broadwell~(\ref*{plt:ibacc:circ}) with various other applications running in the background (e.g., browser and music player), the fluctuations are enormous: The measurement standard deviation is over $4\times$~the mean runtime.
On the \sandybridge~(\ref*{plt:sbmkl:circ}) with no other user applications running during measurements, the fluctuations are already much smaller at \SI{2.36}{\percent}~of the average time.
For larger problem sizes, the fluctuations are considerably smaller, and quickly fall below \SI{.1}\percent.
\end{example}
While this type of fluctuation can be avoided to some extent by ensuring that no other applications run during measurements, it cannot be avoided altogether even with exclusive access to dedicated high-performance hardware---the remaining fluctuations are known as \definition{system noise}.
Hence, for our experiments, models, and micro-benchmarks, all our measurements are repeated at least five times and \definition{summary statistics} of the runtime (or performance) are presented, such as the minimum or median.
\subsubsection{\intel{} \turboboost} \label{sec:meas:fluct:turbo}
Compute-bound dense linear algebra computations, such as \blasl3 and \lapack-level routines, benefit directly from increased processing frequencies.
Therefore, they usually trigger \intel{} \turboboost and constantly run at the maximum turbo frequency if possible.
Since this frequency cannot be sustained indefinitely on most machines, the processor frequency is eventually lowered and henceforth fluctuates to keep the hardware within its power and thermal limits.
\input{meas/figures/turbo} \begin{example}{\turboboost}{meas:turbo} \Cref{fig:meas:turbo} presents the runtime of repeated matrix-matrix multiplications $\dm C \coloneqq \dm A \matmatsep \dm B + \dm C$ (\dgemm[NN]) with $\dm A, \dm B, \dm C \in \R^{1300 \times 1300}$ alongside the processor's temperature and frequency\footnotemark{} on both cores of a \broadwell with multi-threaded \accelerate; in this experiment, no other resource-intensive programs run in the background.
In the beginning, the processor is at a cool \SI{53}{\celsius}~(\ref*{plt:meas:turbo:temp}) and each \dgemm[NN] takes about \SI{60}{\ms}~(\ref*{plt:meas:turbo:time}) at the maximum turbo frequency of \SI{3.4}{\GHz}~(\ref*{plt:meas:turbo:freq}).
The processor temperature increases steadily up to \SI{105}{\celsius} around repetition~200 (\SI{12}{\second} into the experiment); at this point the frequency is reduced and continuously adjusted between \SIlist{3;3.2}{\GHz} such that this temperature threshold is not exceeded.
This change in frequency, as well as its fluctuations towards the end, have a direct effect on the \dgemm[NN]'s runtime: It increases by about~\SI{10}{\percent} to roughly~\SI{67}\ms.
\end{example} \footnotetext{% Obtained through the \intel {\namestyle Power Gadget}. }
The behavior of \turboboost depends enormously on the computation environment: While on a workstation or laptop system the processor temperature increases rapidly and the maximum turbo frequency is not sustained for long, on dedicated high-performance compute clusters, efficient cooling allows for the processor to operate at the maximum turbo frequency for much longer, if not indefinitely.
However, even in our main computing facilities at the {\namestyle\rwth IT Center}, we observed notable fluctuations of the frequency below its maximum with negative impacts on our measurement quality and stability. Throughout this work, we consider processors with and without enabled \turboboost. While the performance of these two cases is not directly comparable, we consider our methodologies for both scenarios. In particular, \turboboost is disabled on our \sandybridge (unless otherwise stated) and enabled on our \haswell---an overview of all hardware configurations is given in \cref{app:hardware}. \subsubsection{Distinct Long-Term Performance Levels} \label{sec:meas:fluct:longterm} Even with \turboboost disabled, a processor's speed is not always fixed to its base frequency and we instead observed jumps between two or more \definition{performance levels}. \input{meas/figures/longterm} \begin{example}{Performance levels}{meas:longterm} \Cref{fig:meas:longterm} presents the runtime of 1000~repetitions of the matrix-matrix multiplication $\dm[width=.05]C \coloneqq \dm A \matvecsep \dm[width=.05]B + \dm[width=.05]C$ (\dgemm[NN]) with $\dm A \in \R^{4000 \times 4000}$ and $\dm[width=.05]B, \dm[width=.05]C \in \R^{4000 \times 200}$ on a \sandybridge and a \haswell (both with \turboboost disabled) with single-threaded \openblas. On both systems, we can clearly make out two distinct runtime levels: on the \sandybridgeshort, the measurements jump between \SIlist{354;359}\ms, which are \SI{1.4}{\percent}~apart, and on the \haswellshort with twice the floating-point performance per cycle, the two levels at~\SIlist{205;213}{\ms} differ by~\SI{3.9}\percent. There is no discernible pattern to the jumps between these levels and the processors commonly stay at the same level for~\SI{10}{\second} or longer (50~repetitions at \SI{200}{\ms} each). 
\end{example}
Since we found no means to eradicate this type of fluctuations, we adapt our measurement setups to account for them: Whenever we have more than one measurement point (e.g., varying the routines or problem sizes), we not only repeat each measurement several times in isolation, but also shuffle the repetitions.
As a result, the repetitions for each data point are spread across the entire experiment duration and summary statistics such as the minimum and median yield a stable runtime estimate for only one performance level.
In summary, we can avoid or account for various types of fluctuations within our measurements.
\section{Performance Effects for Dense Linear Algebra Kernels} \label{sec:meas:effects} \input{meas/effects} \subsection{Library Initialization Overhead} \label{sec:meas:effects:init} \input{meas/init} \subsection{Fluctuations} \label{sec:meas:effects:fluct} \input{meas/fluct} \subsection{Thread Pinning} \label{sec:meas:effects:pin} \input{meas/pin} \subsection{Caching} \label{sec:meas:effects:caching} \input{meas/caching} \subsection{Summary} \label{sec:meas:effects:sum} \input{meas/effectssum} \section{Measurements and Experiments: \elaps} \label{sec:meas:elaps} \input{meas/elapsintro} \subsection{The \sampler} \label{sec:meas:sampler} \input{meas/sampler} \subsection{The \elaps{} \python Framework} \label{sec:meas:elapslib} \input{meas/elaps} \section{Summary} \label{sec:meas:conclusion} \input{meas/conclusion} }
\subsubsection{Alignment to Cache-Lines}
Data is moved through the memory hierarchy in blocks of \SI{64}{\bytes} ($= \SI8\doubles$) called \definition{cache-lines}.\footnote{% The cache-line size is generally not fixed, but for most processors it is \SI{64}{\Byte}. }
Hence, using multiples of the cache-line size as memory access strides typically shows a more regular and often better performance compared to other strides.
\input{model/figures/ld8} \footnotetextbefore{% Since $A$ and~$B$ have 256~rows, the leading dimensions are at least~256. } \begin{example}{Aligning leading dimensions to cache-lines}{model:args:ld:8} \Cref{fig:model:ld8} shows the runtime of \displaycall\dtrsm{ \arg{side}L, \arg{uplo}L, \arg{transA}N, \arg{diag}N, \arg m{256}, \arg n{256}, \arg{alpha}{1.0}, \arg AA, \arg{ldA}{\it\color{blue}ld}, \arg BB, \arg{ldB}{\it\color{blue}ld} } i.e., $\dmB \coloneqq \dmAi \dmB$ with $\dmA\lowerpostsep, \dmB \in \R^{256 \times 256}$, for leading dimensions\footnotemark{} $ld = 256, \ldots, 320$ in steps of~1 on a \sandybridge and a \haswell with single-threaded \openblas, \blis, and \mkl.
For all setups, the \dtrsm[LLNN]'s runtime exhibits some regular pattern in terms of the leading dimension arguments---with an average amplitude of~\SI{2.19}\percent.
However, the patterns are quite different: While \openblas's runtime on the \sandybridgeshort~(\ref*{plt:sbopen}) drops equally at every even leading dimension, \mkl on the \haswellshort~(\ref*{plt:hwmkl}) dips only at multiples of~4, and on the \sandybridgeshort~(\ref*{plt:sbmkl}) it has stronger dips at multiples of~8.
\blis, on the other hand, shows the exact opposite behavior: On both platforms~(\ref*{plt:sbblis}, \ref*{plt:hwblis}) its runtime spikes slightly at multiples of~8.
Independent of the specific behavior of each setup, a smooth runtime curve is obtained when only multiples of~8 are considered as leading dimensions.
\end{example}
To avoid small performance irregularities, we will generate our models using \definition[use multiples of the cache-line size]{multiples of the cache-line size} for leading dimensions---in double-precision: multiples of~8.
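In code, this padding rule amounts to rounding a leading dimension up to the next multiple of 8~doubles; a minimal sketch (the helper name is ours, not part of any library):

```python
LINE_BYTES = 64    # cache-line size on most current processors
DOUBLE_BYTES = 8

def pad_ld(min_ld):
    """Smallest leading dimension >= min_ld whose row stride
    (DOUBLE_BYTES * ld bytes) is a whole number of cache-lines."""
    elems = LINE_BYTES // DOUBLE_BYTES  # 8 doubles per 64-byte line
    return ((min_ld + elems - 1) // elems) * elems

assert pad_ld(256) == 256   # already aligned
assert pad_ld(257) == 264   # padded up to the next multiple of 8
```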
\subsubsection{Set-Associative Cache Conflicts} \label{sec:model:args:ld512} The Level~1 and~2 caches in our processors are \definition{8-way set-associative}: They are divided into sets of 8~cache-lines, and when a cache-line is loaded, its address's least significant bits determine which of the sets it is assigned to; within the set, an architecture-dependent cache replacement policy determines in which of the 8~slots it is stored. When the address space is accessed contiguously, consecutive cache-lines are loaded into consecutive sets, and the cache is filled evenly. In the worst case, however, the address space is accessed with a stride equal to the number of sets, and all loaded cache-lines are associated to the same set: Only 8~cache-lines are cached, and each additional line results in a \definition{cache conflict miss} causing a recently loaded line to be evicted. This effect should be avoided whenever possible. On recent \intel{} {\namestyle Xeon} processors, the Level~1 data cache~(L1d) fits \SI{32}{\kibi\byte} organized as 64~sets of 8~cache-lines. A memory location with address~$a$ is a part of cache-line~$\lfloor a / 64 \rfloor$ (due to the size of \SI{64}{\Byte} per line) and assigned to set $\lfloor a / 64 \rfloor \bmod 64$ (due to the capacity of 64~sets). The Level~2 cache (L2) in turn fits \SI{256}{\kibi\byte} in 1024~sets; here address~$a$ is assigned to set $\lfloor a / 64 \rfloor \bmod 1024$. In a double-precision matrix stored with leading dimension~$ld$, consecutive elements in each row are $8 ld$~\bytes apart ($\SI1\double = \SI8\bytes$). 
Hence, for $ld = 512$, the consecutive row elements starting at address~$a_0$ are stored at~$a_i = a_0 + 8 ld \cdot i = a_0 + 4096 i$, and associated to the same set in the L1d~cache: \begin{align*} \left\lfloor \frac{a_i}{64} \right\rfloor \bmod 64 &= \left\lfloor \frac{a_0 + 4096 i}{64} \right\rfloor \bmod 64 \\ &= \left(\left\lfloor \frac{a_0}{64} \right\rfloor + 64 i \right) \bmod 64 \\ &= \left\lfloor \frac{a_0}{64} \right\rfloor \bmod 64. \end{align*} The same problem occurs for leading dimensions that are multiples of~512, and even below~512 powers of~2 have a similar effect: E.g., with $ld = 256$ the elements of a row are associated to only two of the cache's 64~sets. Similarly, for the L2~cache with 1024~sets, consecutive row-elements are associated to the same cache set for leading dimensions that are multiples of~8192, and multiples of~4096 utilize only two sets. \input{model/figures/ld512} \begin{example}{Cache conflict misses caused by leading dimensions}{model:args:ld:512} \Cref{fig:model:ld512} shows the runtime of \displaycall\dtrsm{ \arg{side}L, \arg{uplo}L, \arg{transA}N, \arg{diag}N, \arg m{256}, \arg n{256}, \arg{alpha}{1.0}, \arg AA, \varg{ldA}{ld}, \arg BB, \varg{ldB}{ld} } i.e., $\dmB \coloneqq \dmAi \dmB$ with $\dmA\lowerpostsep, \dmB \in \R^{256 \times 256}$, for leading dimensions $ld = 256, \ldots, 8320$ in steps of~128 on a \sandybridge and a \haswell with single-threaded \openblas, \blis, and \mkl. For most setups the runtime spikes above the baseline at multiples of~512. However, the average magnitude of these spikes ranges from~\SI{.14}{\percent} for \blis on the \sandybridgeshort~(\ref*{plt:sbblis}) to~\SI{8.37}{\percent} for \openblas on the \haswellshort~(\ref*{plt:hwopen}). Especially for \openblas~(\ref*{plt:sbopen}, \ref*{plt:hwopen}), there are additional, yet lower spikes of \SI{1.40}{\percent} at multiples of~256. 
Furthermore, on the \haswellshort for both \openblas~(\ref*{plt:hwopen}) and \blis~(\ref*{plt:hwblis}) the spikes are especially high at $ld = 4096$ and~8192, exceeding the baseline by, respectively, \SIlist{6.55;11.24}\percent.
\end{example}
To prevent distortions from unfortunate leading dimensions in our model generation altogether, we will \definition{avoid multiples of~256} for these arguments.
Note that by using leading dimensions that are multiples of~8, yet not of~256 in our measurements, our models will not yield accurate predictions for kernel invocations that do not follow this pattern.
However, predicting the performance of such unfortunate invocations, which can be systematically avoided, is not part of our models' purpose and would exceed the scope of this work.
\subsubsection{Small-Scale Behavior} \label{sec:model:args:size:small}
Optimizations of compute kernels commonly involve vectorization and loop unrolling of length~4 or~8.
These optimizations typically have a direct influence on a kernel's runtime for small variations of the size arguments.
\input{model/figures/size8} \begin{example}{Small variations of size arguments}{model:args:size:8} \Cref{fig:model:size8} shows the runtime of \displaycall\dtrsm{ \arg{side}L, \arg{uplo}L, \arg{transA}N, \arg{diag}N, \varg mn, \varg nn, \arg{alpha}{1.0}, \arg AA, \arg{ldA}{400}, \arg BB, \arg{ldB}{400} } i.e., $\dmB \coloneqq \dmAi \dmB$ with $\dmA\lowerpostsep, \dmB \in \R^{n \times n}$, for $n = 256, \ldots, 320$ in steps of~1 on a \sandybridge and a \haswell with single-threaded \openblas, \blis, and \mkl.
All setups show periodic patterns in their runtimes.
While these patterns differ between the implementations, most have local runtime minima at multiples of~4, and all of them have minima at multiples of~8.
\end{example}
To avoid runtime artefacts introduced by vectorization and loop unrolling, we will build our models on measurements that \definition{use multiples of~8} for all size arguments.
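Both leading-dimension rules derived above (multiples of~8 for cache-line alignment, avoiding multiples of~256 to limit set conflicts) can be checked numerically with the L1d set-index computation from \cref{sec:model:args:ld512}; a minimal sketch (constants as for the \intel L1d cache described there):

```python
LINE = 64      # cache-line size in bytes
SETS = 64      # L1d: 32 KiB, 8-way set-associative, 64-byte lines

def l1d_set(addr):
    """Set index of the cache-line holding byte address addr."""
    return (addr // LINE) % SETS

def sets_touched(ld, elements=16, base=0):
    """Distinct L1d sets used by consecutive row elements of a
    double-precision matrix stored with leading dimension ld."""
    return {l1d_set(base + 8 * ld * i) for i in range(elements)}

assert len(sets_touched(512)) == 1    # all row elements conflict in one set
assert len(sets_touched(256)) == 2    # only two of the 64 sets are used
assert len(sets_touched(520)) == 16   # multiple of 8, not of 256: even use
```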
\subsubsection{Piecewise Polynomial Behavior} \label{sec:model:args:size:large} Since an operation's minimal \flop-count is generally a (multivariate) polynomial function of the size arguments, one might expect that (for compute-bound kernels) it translates directly into an equally polynomial runtime. However, since a kernel's performance is generally not constant for varying operand sizes, a single polynomial is often insufficient to accurately represent a kernel's runtime for large ranges of problem sizes. \input{model/figures/size} \begin{example}{Polynomial fitting for size arguments}{model:args:size} \Cref{fig:model:size} shows the runtime of \displaycall\dtrsm{ \arg{side}L, \arg{uplo}L, \arg{transA}N, \arg{diag}N, \varg m{n}, \varg n{n}, \arg{alpha}{1.0}, \arg AA, \arg{ldA}{1000}, \arg BB, \arg{ldB}{1000} } i.e., $\dmB \coloneqq \dmAi \dmB$ with $\dmA\lowerpostsep, \dmB \in \R^{n \times n}$, with $n = 24, \ldots, 536$ in steps of~16 on a \sandybridge and a \haswell with single-threaded \openblas, \blis, and \mkl. At first sight, the runtime for all setups follows a smooth cubic behavior---perfectly in line with the operation's minimal cost of \SIvar{n^3}\flops. However, if for each setup we fit the measurements with a single cubic polynomial that minimizes the least-squares relative error (details in~\cref{sec:model:fit}), we are left with the approximation error shown in~\cref{fig:model:size:err1}. The absolute relative approximation error\footnotemark{} lies between \SI{.86}{\percent} for \blis on the \sandybridgeshort~(\ref*{plt:sbblis}) and \SI{11.22}{\percent} for \openblas on the \haswellshort~(\ref*{plt:hwopen}); on average it is~\SI{5.30}\percent. If we look closer at the approximation errors in \cref{fig:model:size:err1}---especially for \openblas on the \haswellshort~(\ref*{plt:hwopen})---we observe a piecewise smooth(er) behavior. 
Motivated by this observation, we now fit not one polynomial to each data-set but two: one for the first half ($n \leq 280$) and one for the second half ($n \geq 280$).
For this two-split polynomial fit the approximation error is shown in~\cref{fig:model:size:err2}: The largest error is now reduced to~\SI{5.25}{\percent} for \mkl on the \haswellshort~(\ref*{plt:hwmkl}), and the average error is~\SI{2.55}{\percent}---less than half of the original approximation error.
(Based on a more detailed analysis, a better splitting point than $\frac{24+536}2 = 280$ could have been chosen, but as \cref{fig:model:size:err1} shows, such choices would be notably different for each setup.)
Within the new approximation, the error for the second polynomial ($n \geq 280$) is already quite low---on average~\SI{.38}\percent.
Hence, in a second step, we further subdivide only the first half of the domain ($n \leq 280$) at~$n = 152$, and generate a new approximation consisting of three polynomials.
As \cref{fig:model:size:err3} shows, the error of this approximation is below~\SI{1.28}{\percent}~(\ref*{plt:hwmkl}) in all cases and on average~\SI{.71}\percent.
\end{example} \footnotetext{% For a polynomial~$p(x)$ fit to measurements~$y_1, \ldots, y_N$ in points~$x_1, \ldots, x_N$ we consider the error $1 / N \sum_{i=1}^N \lvert y_i - p(x_i) \rvert / y_i$. Note that the least-squares fitting minimizes not this sum of absolute relative errors but the sum of squared relative errors. }
To account for the not purely polynomial influence of a kernel's size arguments on its runtime, we will represent it in our models through \definition{piecewise polynomials}.
Details on such piecewise polynomial representations and their automated generation are given in \cref{sec:model:fit,sec:model:adaptive,sec:model:config}.
\subsection{Configuration Parameters}
The adaptive refinement is controlled by a total of eight \definition{configuration parameters}.
They make it possible to control the model accuracy, but also affect the time spent on the required measurements.
The eight parameters regulate the model generation as follows:
\begin{itemize}
\item To represent the runtime of a kernel, the monomial basis for the fitted polynomials needs to at least cover the kernel's asymptotic complexity (i.e., its minimal \flop-count). To better represent performance variations, however, the maximum degree of the monomials can be increased in each dimension (i.e., size argument). We refer to this increase as \definition[overfitting:\\between 0 and~2]{overfitting}; practical values are {\em between 0 and~2}.
\item To fit a polynomial to a routine's runtime, the number of sampling points along each dimension needs to be at least one more than the corresponding polynomial degree. However, since this minimal number of points yields a polynomial that fits the measurements perfectly, we cannot use it to compute an approximation error. We hence increase the number of sampling points per dimension by at least one, and additional points can be added to further improve the approximation accuracy; we refer to the total number of points added as \definition[oversampling:\\between 1 and~10]{oversampling}; practical values are {\em between 1 and~10}.
\item We introduced two alternatives to \definition[distribution grid:\\Cartesian or Chebyshev]{distribute} sampling points on {\em grids} that cover the domains of problem sizes: a {\em Cartesian} grid and a {\em Chebyshev} grid.
\item For each sampling point, we perform several \definition[measurement repetitions:\\between 5 and~20]{measurement repetitions}; practical values are {\em between 5 and~20}.
\item From the repetitions, we compute several runtime summary statistics: minimum, median, maximum, average, and standard deviation. One of these is selected as the \definition[reference statistic:\\minimum or median]{reference statistic}; practical choices are the {\em minimum and median}.
\item From the absolute relative errors in the reference statistic for all sampling points, we compute the \definition[error measure:\\average, maximum, or 90th~percentile]{error measure}, which is the {\em average, maximum, or 90th~percentile} of these relative errors.
\item The first termination criterion for the adaptive refinement process is the approximation accuracy: The refinement stops when the computed error measure is below a \definition[target error bound:\\between {\SIlist[detect-all=true]{1;5}\percent}]{target error bound}; practical values for this bound are {\em between \SIlist[detect-all=true]{1;5}\percent}.
\item The second termination criterion is the size of the domains: The refinement stops when a new domain is smaller than a \definition[minimum width:\\32 or~64]{minimum width} along all dimensions; typical values are {\em 32 and~64}.
\end{itemize}
\subsection{Trade-Off and Configuration Selection}
In the following, we analyze the accuracy of our models and their generation cost, and select a configuration to generate the models for the performance predictions in \cref{ch:pred}.
We consider the model generation for \displaycall\dtrsm{ \arg{side}L, \arg{uplo}L, \arg{transA}N, \arg{diag}N, \arg m{\it\color{blue}m}, \arg n{\it\color{blue}n}, \arg{alpha}{1.0}, \arg AA, \arg{ldA}{5000}, \arg BB, \arg{ldB}{5000} } i.e., $\dmB[height=.5] \coloneqq \dmAi[size=.5] \dmB[height=.5]$ with $\dmA[size=.5] \in \R^{m \times m}$ and $\dmB[height=.5] \in \R^{m \times n}$, for sizes $m \in [24, 536]$ and $n \in [24, 4152]$ on a \sandybridge and a \haswell using single-threaded \openblas, \blis, and \mkl.
For each setup, our first step is to exhaustively measure the \dtrsm[LLNN]'s runtime 15~times in all points $(m, n)$ in the domain $[24, 536] \times [24, 4152]$ at which both~$m$ and~$n$ are multiples of~8---a total of \num{504075} measurements.
These measurements are used both as the basis for our model generation and to evaluate the model accuracy across the entire domain (contrary to the model generation, which can only evaluate the error in its sampling points).
\input{model/tables/config}
We generate models for all 2880~configurations obtained from combining the parameter values shown in \cref{tbl:model:config}.
These configurations result in a wide range of models with significantly different accuracies and generation costs.
To evaluate them, we quantify the \definition{model error} as the averaged relative error of the predicted minimum runtime~$p(\x_i)$ relative to the measured minimum~$y_i$ across all $N = \num{33605}$ points~$\x_i$ of the domain: \[ \text{model error} \defeqq \frac1N \sum_{i=1}^N \frac{\lvert p(\x_i) - y_i \rvert}{y_i} \enspace; \] furthermore, we define the \definition{model cost} as the total runtime of the required measurements used as samples.
\input{model/figures/modelplots} \input{model/tables/modelplots} \begin{example}{Model accuracy}{model:acc} \Cref{fig:model:modelplots} shows the structure and point-wise accuracy of the four models with minimum and maximum accuracy and cost for single-threaded \openblas on a \sandybridge; \cref{tbl:model:modelplots} lists the corresponding configurations.
Both the cheapest and the least accurate model use only a single polynomial for the entire domain, and thus offer only poor accuracy.
The expensive and accurate models, on the other hand, subdivide the domain repetitively, and thus find a better fitting piecewise polynomial.
\end{example}
\input{model/figures/tradeoff}
The accuracy and cost of all 2880~generated models for each setup are presented in \cref{fig:model:tradeoff:full}; in this plot, the preferable models with low error and cost are found close to the origin.
All setups share the same general trend: Models with low accuracy are quite cheap, while models with high accuracy are more expensive.
Hence, we are faced with a \definition[trade-off:\\accuracy vs.~cost]{trade-off between accuracy and cost}.
However, the configuration selection is not straightforward: Models with practically identical accuracy are up to a factor of~16 apart in generation cost, and a cheap and accurate configuration for one setup may be neither for other setups.
In the following, we describe how we approach the search-space of all considered configurations, and identify a desirable default configuration that we subsequently use to generate the models for all setups and kernels needed for our performance predictions in \cref{ch:pred}.
Before we begin to reduce our search space, we notice that on the \haswellshort, the models for both \blis~(\ref*{plt:hwblis:circ}) and \mkl~(\ref*{plt:hwmkl:circ}) are on average less than half as accurate as for the other setups.
The cause is a rather jagged performance behavior, which is difficult to represent accurately.
Hence, to identify a good default configuration, we consider only the \sandybridgeshort~(\ref*{plt:sbopen:circ}, \ref*{plt:sbblis:circ}, \ref*{plt:sbmkl:circ}) and \openblas on the \haswellshort~(\ref*{plt:hwopen:circ}).
Our first step is to \definition{prune by accuracy}: We discard any configuration that for any of the considered setups yields a model error larger than $1.5\times$ the minimum error for that setup; in other words, all remaining configurations generate models that are at most \SI{50}{\percent} less accurate than the most accurate model.
This step reduces the number of potential configurations from 2880 to~163; all remaining configurations use an oversampling value of~3 or higher, and a target error bound of~\SI1\percent.
\Cref{fig:model:tradeoff:within2err} shows the 163~remaining models' accuracy and cost.
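The accuracy-pruning step can be expressed compactly; a minimal sketch (the data layout and configuration names are hypothetical) that keeps only configurations within $1.5\times$ of each setup's best model error:

```python
def prune_by_accuracy(errors, factor=1.5):
    """errors: {config: {setup: model_error}}; keep configurations whose
    error stays within factor x the per-setup minimum for every setup."""
    setups = next(iter(errors.values())).keys()
    best = {s: min(e[s] for e in errors.values()) for s in setups}
    return [c for c, e in errors.items()
            if all(e[s] <= factor * best[s] for s in setups)]

errors = {
    'cfg1': {'sb-openblas': 0.010, 'hw-openblas': 0.012},
    'cfg2': {'sb-openblas': 0.011, 'hw-openblas': 0.030},  # too inaccurate
    'cfg3': {'sb-openblas': 0.014, 'hw-openblas': 0.015},
}
assert prune_by_accuracy(errors) == ['cfg1', 'cfg3']
```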
\input{model/tables/tradeofffinal}
Our second step is to similarly \definition{prune by cost}: We discard any configuration that for any considered setup takes longer than the first quartile in generation time for that setup; in other words, the remaining models are all within the \SI{25}{\percent} that are generated the fastest.
This step further reduces the number of potential configurations from 163 to~14, as shown in \cref{fig:model:tradeoff:belowmedcost}.
The parameter values for the 14~remaining configurations are shown in \cref{tbl:model:tradeoff:final}.
For each parameter, we can find one value that is common to at least 8~of the 14~configurations (highlighted in {\bf bold}).
We choose our \definition{default configuration} by selecting this most common value for each parameter.
It corresponds to line~(10) in \cref{tbl:model:tradeoff:final} (highlighted in {\bf\color{blue}blue}), and is marked for each setup in \cref{fig:model:tradeoff:belowmedcost}.
Note that it also serves as a good choice between accuracy and cost for \blis~(\ref*{plt:hwblis:circ}) and \mkl~(\ref*{plt:hwmkl:circ}) on the \haswellshort, which were not included in the pruning process.
\subsection{Variations of the Default Configuration}
While this configuration was found to yield good accuracies at reasonable costs for almost all encountered kernels, it proves to be quite expensive for kernels with \definition[3D case (\dgemm)]{three degrees of freedom}, which for the predictions in \cref{ch:pred} only applies to {\em\dgemm} with its three size arguments~\code m, \code n, and~\code k.
To reduce the modeling cost for this kernel, we adjust the default configuration by reducing the overfitting from~2 to~0 and increasing the minimum width from~32 to~64.
Furthermore, the performance of \blas kernels becomes less smooth when we bring \definition{multi-threading} into the picture.
Hence, to avoid excessive partitioning as seen in \cref{fig:model:modelplots:maxcost}, we increase the minimum width for all models to~64, and for \dgemm to~256. \chapter{Performance Modeling} \chapterlabel{model} { \input{model/commands} \input{model/intro} \section{Kernel Argument Analysis} \label{sec:model:args} \input{model/args} \subsection{Flag Arguments} \label{sec:model:args:flag} \input{model/arg-flag} \subsection{Scalar Arguments} \label{sec:model:args:scalar} \input{model/arg-scalar} \subsection{Leading Dimension Arguments} \label{sec:model:args:ld} \input{model/arg-ld} \subsection{Increment Arguments} \label{sec:model:args:inc} \input{model/arg-inc} \subsection{Size Arguments} \label{sec:model:args:size} \input{model/arg-size} \subsection{Data Arguments} \label{sec:model:args:data} \input{model/arg-data} \subsection{Summary} \label{sec:model:args:sum} \input{model/arg-sum} \section{Model Generation} \label{sec:model:generation} \input{model/generation} \subsection{Model Structure} \label{sec:model:structure} \input{model/structure} \subsection{Sample Distribution} \label{sec:model:grids} \input{model/grids} \subsection{Repeated Measurements and Summary Statistics} \label{sec:model:stat} \input{model/stat} \subsection{Relative Least-Squares Polynomial Fitting} \label{sec:model:fit} \input{model/fit} \subsection{Adaptive Refinement} \label{sec:model:adaptive} \input{model/adaptive} \section{Model Generator Configuration} \label{sec:model:config} \input{model/config} \section{Summary} \label{sec:model:sum} \input{model/model-sum} } \subsection{Varying Problem Size} \label{sec:pred:chol:n} \input{pred/figures/cholperf} In our first analysis, we use only one of the \sandybridgeshort's 8~cores and vary the problem size between~$n = 56$ and~4152 in steps of~64 while keeping the block size fixed at~$b = 128$. \Cref{fig:pred:chol:time_perf} shows the runtime and performance of predictions and measurements for this setup side-by-side. 
(Since the red line \legendline[very thick, darkred] at the top of the performance plots indicates the processor's theoretical peak performance, such plots can also be interpreted as compute-bound efficiencies with \SI0{\percent}~at the bottom and \SI{100}{\percent}~at the top.)
The predictions give a good idea of the algorithm behavior: While the runtime increases cubically with the problem size~$n$, the performance is low for small matrices and increases steadily towards \SI{18}{\giga\flops\per\second}.
At first sight, the predictions match the measurements well.
\input{pred/figures/cholerr}
To further study the accuracy of our predictions, the top half of \cref{fig:pred:chol:err} presents the prediction errors.
As one might expect, \cref{fig:pred:chol:err:time} indicates that with increasing problem size, the magnitude of the runtime prediction error increases for all summary statistics---most notably for the maximum~(\ref*{plt:max}).
Since in contrast the performance prediction error~(\cref{fig:pred:chol:err:perf}) is not affected by the decomposition's cubic runtime, we instead observe the largest prediction errors for the smallest problem size~$n = 56$.
Furthermore, we find that the minimum performance prediction error~(\ref*{plt:min}) seems to alternate between two separate levels: one around \SI0{\mega\flops\per\second} and one close to \SI{200}{\mega\flops\per\second}.
This behavior, which is also already somewhat visible in \cref{fig:pred:chol:perf:meas,fig:pred:chol:err:time}, is caused by measurement fluctuations as discussed in \cref{sec:meas:fluct:longterm}.
We gain more insight from the prediction errors when we compare them to the predicted quantities.
For this purpose, the bottom half of \cref{fig:pred:chol:err} presents the relative runtime and performance prediction errors.
The relative errors for these two metrics are almost identical up to a change in the sign---since the runtime is generally slightly underestimated, the performance is somewhat overestimated. Focusing on the runtime in \cref{fig:pred:chol:re:time}, we notice that the average standard deviation ARE is~\SI{194.70}\percent~(\ref*{plt:std}), which, as in \cref{ex:pred:err}, exceeds the error of the other prediction statistics by far. Furthermore, the previously addressed measurement fluctuations are also clearly visible in the maximum~(\ref*{plt:max}) as variations with a magnitude of~\SI{1.5}\percent. The minimum~(\ref*{plt:min}), median~(\ref*{plt:med}), and mean~(\ref*{plt:avg}) AREs, on the other hand, quickly fall below~\SI2{\percent} for matrices larger than~$n = 200$ and further below~\SI1{\percent} beyond~$n \approx 1000$; across all chosen problem sizes, the average AREs for the minimum, median, and mean runtime are, respectively, \SIlist{.78;.91;.90}\percent. Among the eight metrics presented in \cref{fig:pred:chol:time_perf,fig:pred:chol:err}, we gained the most insight from 1)~the performance prediction (\cref{fig:pred:chol:perf:pred}), which gives a good idea of both the algorithm's performance and efficiency, and 2)~the relative runtime prediction error (\cref{fig:pred:chol:re:time}), which provides not only an accuracy measure independent of the operation, the algorithm, and the actual performance, but also indicates whether the runtime is under- or overestimated. Hence, we use these two types of plots in our following analyses. \subsection{Varying Block Size} \label{sec:pred:chol:b} \input{pred/figures/cholnb} In our next analysis, we fix the problem size to~$n = 3000$ and vary the block size between~$b = 24$ and~536 in steps of~8. \Cref{fig:pred:chol:b} presents the performance prediction and the relative runtime prediction error for this scenario using single-threaded \openblas on the \sandybridgeshort.
The performance prediction (\cref{fig:pred:chol:b:perf}) exhibits the typical trade-off for any blocked algorithm: While for both small and large block sizes the algorithm attains rather poor performance, in between it reaches up to \SI{17.91}{\giga\flops\per\second}, which corresponds to an efficiency of~\SI{85.10}\percent. The cause for this trade-off and the selection of block sizes are addressed in detail in \cref{sec:pred:b}. Compared to our previous performance predictions (\cref{fig:pred:chol:perf:pred}), \cref{fig:pred:chol:b:perf} exhibits a far wider spread of the summary statistics for large block sizes. In particular, the predicted minimum performance~(\ref*{plt:min}) drops drastically, which immediately causes the mean performance~(\ref*{plt:avg}) to decrease and the predicted standard deviation~(\ref*{plt:stdf}) to increase enormously. The relative runtime prediction error (\cref{fig:pred:chol:b:re}) indicates that the predicted performance fluctuations are not present in the performance measurements: The maximum and mean relative errors (\ref*{plt:max} and \ref*{plt:avg}) increase drastically for large block sizes, suggesting that the model generation was influenced by large outlier measurements. (A repetition of the generation process would likely encounter different outliers and distort these statistics at other block sizes.) The minimum~(\ref*{plt:min}) and median~(\ref*{plt:med}), on the other hand, are with few exceptions predicted within~\SI1\percent; their average prediction AREs are \SI{.36}{\percent} (minimum \ref*{plt:min}) and \SI{.42}{\percent} (median \ref*{plt:med}). \subsection{Varying Problem Size and Block Size} \label{sec:pred:chol:nb} \input{pred/figures/cholheatmap} If we vary both the problem size~$n$ and the block size~$b$, we can visualize the runtime prediction ARE as a set of heat-maps as shown in \cref{fig:pred:chol:heatmap}.
Note that these plots are based on a total of \num{39690}~measurements of the algorithm's runtime (65~problem sizes, $\approx 65$~block sizes, 10~repetitions) that took over 4~hours. The performance models for the kernels needed for the predictions (\dpotf[L]2, \dtrsm[RLTN], and \dsyrk[LN]), on the other hand, were generated in just under 10~minutes and produced our predictions in under \SI{20}\second. The standard deviation ARE is once again too large to fit the chosen scale and is hence not shown. Furthermore, as already seen in \cref{fig:pred:chol:b}, the maximum prediction becomes rather inaccurate for large~$n$ and~$b$, which also has a negative impact on the mean prediction. On the other hand, both the minimum and median predictions are overall quite accurate with an average ARE of only~\SI{.45}\percent. Since in the following we compare multiple alternative algorithms and hardware/software setups, we limit our focus to a single statistic. While in the previous analysis the runtime minimum or median were predicted with equivalent accuracy, in practice the expected performance is better represented by the median runtime.\footnote{% In scenarios other than our considered single-node computations, different measures might be preferable; e.g., the 90th~percentile runtime. } Hence, from now on we use the \definition[accuracy measure: relative median runtime prediction error]{relative median runtime prediction error}~\Q t{med}{RE} as our {\em prediction accuracy measure}. \subsection{Other Data-Types} \label{sec:pred:chol:dt} \input{pred/tables/cholfp} \input{pred/figures/cholfp} So far, we have considered the Cholesky decomposition of real double-precision matrices; however, the same algorithm is also applicable to other data-types. For the four de-facto standard numerical data-types (real and complex\footnote{% For the complex cases, the Cholesky decomposition is of the form $L L^H \coloneqq A$, where $A$~must be Hermitian positive definite (HPD).
} floating-point numbers in single- and double-precision) \cref{tab:pred:chol:fp} summarizes the algorithm's \blas and \lapack kernels, and \Cref{fig:pred:chol:fp} presents our model's performance predictions and their accuracy. (For each data-type, we generated a separate set of performance models.) In the performance predictions (\cref{fig:pred:chol:fp:perf}), we observe that the real double-precision version~(\ref*{plt:dt:d}) is most efficient (with respect to its theoretical peak performance); this was to be expected because \openblas is most optimized for this data-type. In contrast, it is somewhat surprising that, while single-precision complex~(\ref*{plt:dt:c}) is noticeably more performant than single-precision real~(\ref*{plt:dt:s}), double-precision complex~(\ref*{plt:dt:z}) does not exceed an efficiency of~\SI{50}\percent. Although the algorithm's performance for the four data-types differs significantly, \cref{fig:pred:chol:fp:perf} reveals that our models predict the runtime for all of them equally well. Moreover, for the comparatively inefficient double-precision complex variant~(\ref*{plt:dt:z}), the prediction is already notably accurate for small problem sizes below~$n = 1000$. With equally accurate predictions demonstrated for all four data-types, we will in the following focus on real operations in double-precision. \subsection{Multi-Threaded \blas} \label{sec:pred:chol:mt} \input{pred/figures/cholp} Finally, we consider how multi-threading (through \openblas) impacts the algorithm's performance and our predictions' accuracy. For this purpose, \cref{fig:pred:cholp} presents the predicted performance of the Cholesky decomposition and the prediction accuracy with 1, 2, 4, and 8~threads on the 8-core \sandybridgeshort. (For each of these four levels of parallelism, a separate set of performance models was generated.)
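In the multi-threaded setting, efficiency is always relative to the combined theoretical peak of the cores in use; a sketch of this normalization (the per-core peak below is illustrative, not the \sandybridgeshort's actual figure):

```python
def efficiency(perf_gflops, n_threads, peak_per_core_gflops):
    """Fraction of the combined theoretical peak attained on n_threads cores."""
    return perf_gflops / (n_threads * peak_per_core_gflops)

# e.g., 18 GFLOPS/s on one core with a hypothetical 20 GFLOPS/s per-core peak
print(efficiency(18.0, 1, 20.0))  # 0.9
# the same absolute performance spread over 8 cores is far less efficient
print(efficiency(18.0, 8, 20.0))  # 0.1125
```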
The predictions show that, while the performance grows with the number of threads, the efficiency decreases from~\SI{87.74}{\percent} with one thread to at most~\SI{70.78}{\percent} with eight threads. Furthermore, the performance curves become less smooth with increased parallelism. Considering our predictions' accuracy, we notice that for small problem sizes below~$n = 500$, the prediction ARE increases significantly when more threads are added. Beyond this point, however, the predictions for 1~(\ref*{plt:nt:1}) and 2~threads~(\ref*{plt:nt:2}) are both highly accurate with an average ARE of~\SI{.46}{\percent}; the predictions for 4~(\ref*{plt:nt:4}) and 8~threads~(\ref*{plt:nt:8}) are slightly less accurate and the AREs fluctuate around~\SI1\percent. Note that the large fluctuations within the ARE for the multi-threaded algorithms are caused by the combination of the block size~$b = 128$ and the chosen problem sizes in steps of~64. While with 8~threads~(\ref*{plt:nt:8}) these fluctuations are represented by our predictions to some degree, with 2~(\ref*{plt:nt:2}) and 4~threads~(\ref*{plt:nt:4}), they are most striking for large problem sizes, where our models do not predict such fluctuations. \subsection{Summary} \label{sec:pred:chol:sum} We studied the blocked Cholesky decomposition algorithm~3 on a \sandybridge using \openblas with varying problem and block sizes, data-types, and kernel parallelism. We analyzed this algorithm's measured and predicted runtime and performance to evaluate the accuracy of our predictions, and selected the relative median runtime prediction error~\Q t{med}{RE} as our primary accuracy measure. \subsection{Single-Threaded \blas} \label{sec:pred:acc:st} We begin with a study of the single-threaded prediction accuracy with \lapack's default block size ($b = 64$, except for \dgeqrf with~$b = 32$).
While these are generally sub-optimal configurations and often even sub-optimal algorithms for the performed operations, this configuration is unfortunately still encountered frequently in application codes that use the reference \lapack implementation. As such, it forms quite a canonical reference for the evaluation of our predictions. \input{pred/figures/accst} \input{pred/tables/accst} \Cref{fig:pred:lapack:st} presents the relative runtime prediction error~\Q t{med}{RE} for this scenario. For all algorithms and setups, our predictions are mostly within \SI5{\percent}~of the measured runtime, and in many situations considerably closer. The runtime prediction ARE averaged across all problem sizes for each routine and setup is summarized in \cref{tbl:pred:acc:st}: It ranges from~\SIrange{.71}{3.93}\percent, and its average and median are, respectively, \SIlist{1.91;1.69}\percent. Overall, the predictions are slightly more accurate on the \sandybridge (average $\Q t{med}{ARE} = \SI{1.66}\percent$) with the lowest average $\Q t{med}{ARE} = \SI{1.22}\percent$ for \openblas~(\ref*{plt:sbopen}); on the \haswell (average $\Q t{med}{ARE} = \SI{2.16}\percent$), the predictions are least accurate for \mkl~(\ref*{plt:hwmkl}) with an average of $\Q t{med}{ARE} = \SI{2.26}\percent$. Most routines are predicted equally well (with an average \Q t{med}{ARE} around \SI{1.5}\percent) with two exceptions: \dsygst[1L] (average $\Q t{med}{ARE} = \SI{2.63}\percent$) and \dgeqrf (average $\Q t{med}{ARE} = \SI{2.87}\percent$). \begin{itemize} \item For the two-sided linear system solver \dsygst, \cref{fig:pred:accst:dsygst} reveals that for most setups, the predictions consistently underestimate the algorithm runtime for large problem sizes~$n$.
A quick calculation shows that this effect is related to the size of the last-level cache~(L3): On the \haswellshort, the problem emerges beyond~$n \approx 2000$, at which point the two operands~\dm A (symmetric in lower-triangular storage) and~\dm[lower]L\lowerpostsep take up $\SIvar{2 \times \frac{2000^2}2}\doubles \approx \SI{30.52}{\mebi\byte}$---slightly more than the L3~cache of \SI{30}{\mebi\byte}. On the \sandybridgeshort with \SI{20}{\mebi\byte} of L3~cache, the effect is accordingly already visible beyond~$n \approx 1600$. The cause for the underestimation of large problems is as follows: Our models are based on repeated kernel measurements, which operate on cached (``warm'') data as long as all of the kernel's arguments fit in the cache. However, each traversal step of \dsygst[1L] (\cref{alg:dsygst}) uses two separate kernels (namely \dsyrk[LN] and \dtrsm[LLNN]) that operate on the trailing parts of \dm A and \dm[lower]L\lowerpostsep{}---since these do not fit in the cache simultaneously, they are mutually evicted by these kernels, and hence have to be loaded from main memory repeatedly (``cold'' data). To summarize, our models estimate fast operations on cached data, while in the algorithm the operations are slower due to cache misses. A more detailed study of caching effects within blocked algorithms and attempts to account for them are presented in \cref{ch:cache}. Note that only \dsygst is affected by caching effects on this scale because all other routines involve only one dense operand. \item For the QR~decomposition \dgeqrf, \cref{fig:pred:accst:dgeqrf} reports that the runtime for almost all setups is consistently underestimated---especially for small problems.
The cause is the transposed matrix copy and addition (see \cref{alg:dgeqrf}), which account for about~\SI4{\percent} of the runtime for small problems ($n \approx 250$) and \SI1{\percent} for large problems ($n \approx 4000$): The copy, performed by a sequence of $b = 32$~\dcopy{}s, is underestimated by~$2\times$ to~$7\times$ because our models do not account for caching effects; the addition, which is inlined as two nested loops, is not accounted for at all. \end{itemize} \subsection{Multi-Threaded \blas} \label{sec:pred:acc:mt} We study the multi-threaded prediction accuracy for the same six \lapack algorithms using all available cores of the processors, i.e., 8~threads on the \sandybridge and 12~threads on the \haswell. In contrast to the single-threaded predictions, we use a block size of~$b = 128$ for all algorithms---while this configuration is certainly not optimal for all algorithms and problem sizes, it generally yields better performance than \lapack's default values. \input{pred/figures/accmt} \input{pred/tables/accmt} \Cref{fig:pred:lapack:mt} presents the relative runtime prediction errors~\Q t{med}{RE} for this scenario, and \cref{tbl:pred:acc:mt} summarizes their averaged AREs~\Q t{med}{ARE}. Compared to the single-threaded case, the prediction errors are across the board around $2.5\times$~larger with a total average of $\Q t{med}{ARE} = \SI{4.85}\percent$. The predictions are roughly equally accurate across the two architectures and the two \blas implementations. Considering \cref{fig:pred:lapack:mt}, we note fluctuations in the prediction errors of up to~\SI{10}\percent, most notably for \dsygst[1L] and \dtrtri[LN] using \mkl on the \haswellshort~(\ref*{plt:hwmkl}).
As observed in \cref{sec:pred:chol:mt}, these fluctuations are an artefact of the block size~$b = 128$ interacting with the considered problem sizes in steps of~64: Between consecutive problem sizes, the remaining matrix portions in the last step of the matrix traversal alternate between widths~56 and~120. As in the single-threaded case, the QR~decomposition's runtime is underestimated on average by~\SI{8.00}\percent, due to the \dcopy{}s and the inlined matrix addition. Since these operations---especially the latter---cannot make use of the multi-threaded parallelism, their impact increases significantly with the number of available cores. Furthermore, several individual algorithms and setups are consistently under- or overestimated: e.g., \openblas on the \sandybridge~(\ref*{plt:sbopen}) for \dlauum[L] and \dpotrf[L]. These problems arise from the multi-threaded implementations of \dgemm, whose irregular performance is not well represented in our models: Since \blas implementations distribute computations among threads along a certain dimension of the operation, for a small dimension (such as the block size), only a subset of the available threads is used. When the small dimension is increased, more threads are activated and the performance increases suddenly. \subsection{Summary} \label{sec:pred:acc:sum} This section has shown that across experiments on two processor architectures, three \blas implementations, and six blocked \lapack algorithms, our models yield accurate predictions that are on average within~\SI{1.91}{\percent} (single-threaded) and \SI{4.85}{\percent} (multi-threaded) of reference measurements. Encouraged by these accuracy results, the following sections use performance predictions to target our main goals of algorithm selection and block-size optimization.
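The alternating trailing-block widths noted in this section follow directly from the arithmetic of the traversal; a sketch, assuming problem sizes starting at $n = 56$ in steps of~64 and block size $b = 128$:

```python
b = 128                             # block size
sizes = range(56, 56 + 6 * 64, 64)  # n = 56, 120, 184, 248, 312, 376
trailing = [n % b for n in sizes]   # width of the last traversal step

print(trailing)  # [56, 120, 56, 120, 56, 120]
```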
\section{Performance Prediction} \label{sec:pred:pred} \input{pred/pred} \section{Accuracy Quantification} \label{sec:pred:acc} \input{pred/acc} \section[Accuracy Case Study: Cholesky Decomposition] {Accuracy Case Study:\newline Cholesky Decomposition} \label{sec:pred:chol} \input{pred/chol} \section[Accuracy Study: Blocked \lapack Algorithms] {Accuracy Study:\newline Blocked \lapack Algorithms} \label{sec:pred:lapack} \input{pred/lapack} \section{Algorithm Selection} \label{sec:pred:var} \input{pred/var} \subsection{Cholesky Decomposition} \label{sec:pred:var:chol} \input{pred/varchol} \subsection{Triangular Inversion} \label{sec:pred:var:trinv} \input{pred/vartrinv} \subsection{Sylvester Equation Solver} \label{sec:pred:var:sylv} \input{pred/varsylv} \subsection{Summary} \label{sec:pred:var:sum} \input{pred/varsum} \section{Block Size Optimization} \label{sec:pred:b} \input{pred/b} \subsection{Cholesky Decomposition} \label{sec:pred:b:chol} \input{pred/bchol} \subsection{Triangular Inversion} \label{sec:pred:b:trinv} \input{pred/btrinv} \subsection{\lapack Algorithms} \label{sec:pred:b:lapack} \input{pred/blapack} \section{Summary} \label{sec:pred:conclusion} \input{pred/conclusion} } \subsubsection{Algorithms} The solution to the triangular Sylvester equation is computed by traversing \dmC from the bottom left to the top right. However, in contrast to the previous operations, this traversal does not need to follow \dmC's diagonal; in fact \dmC can be traversed in various ways: Two algorithms traverse \dmC vertically, two horizontally (using $3 \times 1$ and $1 \times 3$ partitions), and 14~diagonally (exposing $3 \times 3$ sub-matrices), making a total of 18~algorithms. Furthermore, as detailed in the following, the Sylvester equation requires two layers of blocked algorithms, resulting in a total of \definition[Sylvester equation:\\64~``complete'' algorithms]{64~``complete'' algorithms}.
\input{pred/figures/sylv1dalgs} \Cref{algs:sylv1d} presents the four algorithms that traverse \dmC vertically or horizontally, thereby exposing $3 \times 1$ or $1 \times 3$ sub-matrices; each of these algorithms consists of one call to \dgemm[NN] and the solution of a sub-problem (another triangular Sylvester equation). To obtain a ``complete'' algorithm, two of these algorithms with orthogonal traversals are combined---the first traverses the full~\dmC and invokes the second to solve the sub-problem in each iteration; the second, in turn, solves its small $b \times b$ sub-problem using \lapack's unblocked \dtrsyl[NN1]. E.g., one can use algorithm~$m1$ to traverse \dmC vertically and in each step apply algorithm~$n2$ to traverse the middle panel~\dm[mat11, height=.2, width=.8]{C_1} horizontally. We call the resulting ``complete'' algorithm~$m1n2$, and see that eight such combinations are possible: $m1n1$, $m1n2$, $m2n1$, $m2n2$, $n1m1$, $n1m2$, $n2m1$, and~$n2m2$. Note that in principle the block sizes for the two layered blocked algorithms can be chosen independently; however, we limit our study to a single block size for both layers. \input{pred/figures/sylv2dalgs} Beyond the combination of the vertically and horizontally traversing algorithms above, an additional 14~algorithms traverse the matrix diagonally (with potentially different block sizes~$b_m$ and~$b_n$ for dimensions~$m$ and~$n$), and operate on a set of $3 \times 3$ sub-matrices in each iteration; \cref{algs:sylv2d} presents a sample of two of these algorithms (all 14~algorithms are found in \libflame~\cite{libflameweb}). Each algorithm consists of a sequence of \dgemm[NN]{}s and three solutions of sub-problems that are also triangular Sylvester equations. While the sub-problem involving \dm[mat11, size=.5]{B_{11}} of size $b_m \times b_n$ is directly solved by the unblocked \dtrsyl[NN1], the other two involve potentially large yet thin panels of~\dmC.
Complete algorithms are constructed by solving each of these sub-problems with an appropriate vertical or horizontal traversal algorithm.\footnote{% Setting one of the block sizes of a diagonally traversing algorithm to the corresponding matrix size results in one of the vertical or horizontal traversal algorithms. } Since each of the 14~algorithms has two such sub-problems, for each of which we can choose from two algorithms, we end up with a total of $14 \cdot 2 \cdot 2 = 56$~possible combinations. Together with the eight combinations of only vertical and horizontal traversal algorithms, this results in a grand total of 64~different ``complete'' blocked algorithms. \subsubsection{Algorithm Selection} \input{pred/figures/varsylv} \Cref{fig:pred:var:sylv} presents performance predictions and measurements for the Sylvester equation solver for problem sizes between~$n = 56$ and~4152 in steps of~64 and block size~$b = 64$ on a \haswell using \openblas. Since the executions for this setup take between 40~minutes and 2~hours for each algorithm, we only measured the eight algorithms based exclusively on orthogonal matrix traversals. Our predictions, which are generated up to $1500\times$~faster at roughly \SI5{\second}~per algorithm, indicate that, in terms of performance, these eight algorithms are spread evenly across the entire set of 64~``complete'' algorithms. For the single-threaded scenario, the predictions in \cref{fig:pred:var:sylv:pred:1} suggest that algorithms~$n2m2$~(\ref*{plt:sylvn2m2}) and $m1n1$~(\ref*{plt:sylvm1n1}) are, respectively, the fastest and slowest, and differ in performance by~\SI{9.99}\percent. The measurements in \cref{fig:pred:var:sylv:meas:1} confirm that, while algorithm~$n2m2$~(\ref*{plt:sylvn2m2}) is indeed the fastest, algorithm~$n1m1$~(\ref*{plt:sylvn1m1}) is the slowest.
While the performance of algorithms~$m1n1$~(\ref*{plt:sylvm1n1}) and $n1m1$~(\ref*{plt:sylvn1m1}) is predicted to be almost identical, the measurements show that $m1n1$~(\ref*{plt:sylvm1n1}) is in fact up to \SI{3.00}{\percent} faster than $n1m1$~(\ref*{plt:sylvn1m1}). Furthermore, while the remaining algorithms are correctly placed between the fastest and the slowest, they are not accurately ranked. The predictions and measurements for the multi-threaded scenario in \cref{fig:pred:var:sylv:pred:12,fig:pred:var:sylv:meas:12} are at first sight surprising: Compared to the single-threaded case, the attained performance is considerably lower. For matrices of size~$n = 4000$, the algorithms reach roughly \SI8{\giga\flops\per\second}, which corresponds to merely~\SI{1.67}{\percent} of the processor's 12-core peak performance of \SI{480}{\giga\flops\per\second} (without \turboboost). An analysis revealed that the source of the drastic increase in runtime is the \blasl1 kernel \dswap, which the unblocked \dtrsyl\footnote{% Technically within \code{dlasy2}, which is called from \dtrsyl. } uses to swap two vectors of length~4: Although the workload for this operation is tiny, with multiple threads \openblas (version~0.2.15) activates its parallelisation, which for a copy operation on only~\SI{64}{\bytes} introduces an overhead of over~$200\times$ the kernel's single-threaded runtime. (The problem was subsequently fixed in \openblas version~0.2.16 (March 2016) and is not present in \mkl.) While the multi-threaded predictions for all 64~algorithms indicate virtually identical performance and thus do not allow a meaningful performance ranking, they support the crucial insight that, using \openblas~0.2.15, the triangular Sylvester equation is, without exception, solved considerably faster on a single core than on 12~cores.
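The count of 64~``complete'' algorithms used throughout this section can be spelled out explicitly; a sketch of the combinatorics:

```python
# layered combinations of the four 1D traversal algorithms (m1, m2, n1, n2):
# a vertical outer traversal with a horizontal inner one, or vice versa
vertical, horizontal = ["m1", "m2"], ["n1", "n2"]
combos_1d = [v + h for v in vertical for h in horizontal] \
          + [h + v for h in horizontal for v in vertical]

# each of the 14 diagonal algorithms has two panel sub-problems,
# each solvable by one of two 1D traversal algorithms
combos_2d = 14 * 2 * 2

print(len(combos_1d), combos_2d, len(combos_1d) + combos_2d)  # 8 56 64
```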
} \section{Algorithm Generation} \label{sec:tensor:alggen} \input{tensor/alggen} \section{Runtime Prediction} \label{sec:tensor:pred} \input{tensor/pred} \subsection{Example Contraction: \texorpdfstring{$C_{abc} \coloneqq A_{ai} B_{ibc}$}{C\_abc := A\_ai B\_ibc}} \label{sec:tensor:extc} \input{tensor/predex} \subsection{Repeated Execution} \label{sec:tensor:repeat} \input{tensor/repeat} \subsection{Operand Access Distance} \label{sec:tensor:accdist} \input{tensor/accdist} \subsection{Cache Prefetching} \label{sec:tensor:prefetch} \input{tensor/prefetch} \subsection{Prefetching Failures} \label{sec:tensor:prefetchfail} \input{tensor/prefetchfail} \subsection{First Loop Iterations} \label{sec:tensor:firstiter} \input{tensor/firstiter} \section{Results} \label{sec:tensor:results} \input{tensor/results} \section{Summary} \label{sec:tensor:conclusion} \input{tensor/conclusion} } \subsection{Changing the Setup for \texorpdfstring{$C_{abc} \coloneqq A_{ai} B_{ibc}$}{C\_abc := A\_ai B\_ibc}} \label{sec:ai_ibc2} \input{tensor/figures/ai_ibc2} We consider the previously studied contraction with an entirely different setup: We use $a = b = c = 128$ and $i = 8, \ldots, 1000$ in steps of~8 on an \ivybridge with single-threaded \mkl. For this scenario, \cref{fig:tensor:ai_ibc2} presents the performance predictions and measurements for all 36~algorithms (see \cref{sec:tensor:extc}). Although everything, ranging from the problem sizes to the machine and \blas library, was changed in this setup, the predictions are of equivalent quality and our tool correctly determines that the \dgemm-based algorithms (\ref*{plt:ai_ibc:c_gemm}, \ref*{plt:ai_ibc:b_gemm}) not only perform best and equally well but also reach over~\SI{75}{\percent} of the \ivybridgeshort's theoretical peak performance of \SI{28.8}{\giga\flops\per\second}.
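The \dgemm-based algorithms exploit that this contraction is a single matrix-matrix product once the free indices $b$ and $c$ are flattened; a \code{numpy} sketch with arbitrary small sizes:

```python
import numpy as np

a, i, b, c = 4, 8, 5, 6
A = np.random.rand(a, i)
B = np.random.rand(i, b, c)

# fold (b, c) into one dimension, multiply once, and restore the shape
C = (A @ B.reshape(i, b * c)).reshape(a, b, c)

# reference: explicit contraction over i
C_ref = np.einsum("ai,ibc->abc", A, B)
assert np.allclose(C, C_ref)
```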
\subsection{Vector Contraction: \texorpdfstring{$C_a \coloneqq A_{iaj} B_{ji}$}{C\_a := A\_iaj B\_ji}} \label{sec:noblas3} \input{tensor/algs/iaj_ji} \input{tensor/figures/iaj_ji} For certain contractions (e.g., those involving vectors), \dgemm cannot be used as a compute kernel, and algorithms can only be based on \blasl1 or~2 kernels. One such scenario is encountered in the contraction $C_a \coloneqq A_{iaj} B_{ji}$, for which our generator yields 8~algorithms: \begin{itemize} \item 4 \ddot-based: \tensoralgname{aj}{dot}~(\ref*{plt:iaj_ji:aj_dot}), \tensoralgname{ja}{dot}~(\ref*{plt:iaj_ji:ja_dot}),\\ \tensoralgname{ai}{dot}~(\ref*{plt:iaj_ji:ai_dot}), \tensoralgname{ia}{dot}~(\ref*{plt:iaj_ji:ia_dot}); \item 2 \daxpy-based: \tensoralgname{ij}{axpy}~(\ref*{plt:iaj_ji:ij_axpy}), \tensoralgname{ji}{axpy}~(\ref*{plt:iaj_ji:ji_axpy}), and \item 2 \dgemv-based (see \cref{algs:iaj_ji}): \tensoralgname j{gemv}~(\ref*{plt:iaj_ji:j_gemv}), \tensoralgname{i'}{gemv}~(\ref*{plt:iaj_ji:i'_gemv}). \end{itemize} Note that since the last algorithm operates on slices \tind A{i,:,:}, which do not have a contiguously-stored dimension, a \code{copy} kernel (indicated by the apostrophe in the algorithm name) is required before each \dgemv[N] (\cref{alg:iaj_ji:i'-gemv}). \Cref{fig:tensor:iaj_ji} presents the predicted and measured performance for these algorithms. Our predictions clearly identify the fastest algorithm \tensoralgname j{gemv}~(\ref*{plt:iaj_ji:j_gemv}) across the board. Furthermore, the next group of four algorithms is also correctly recognized, and the low performance of the second \dgemv[N]-based algorithm \tensoralgname{i'}{gemv}~(\ref*{plt:iaj_ji:i'_gemv}) (due to the overhead of the involved copy operation) is correctly predicted as well.
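The fastest algorithm, \tensoralgname j{gemv}, amounts to one accumulated matrix-vector product per slice over~$j$; a \code{numpy} sketch (sizes arbitrary):

```python
import numpy as np

i, a, j = 6, 5, 4
A = np.random.rand(i, a, j)
B = np.random.rand(j, i)

# algorithm "j gemv": for each j, one gemv with the (a x i) slice A[:, :, j]^T
C = np.zeros(a)
for jj in range(j):
    C += A[:, :, jj].T @ B[jj, :]

# reference: explicit contraction over i and j
assert np.allclose(C, np.einsum("iaj,ji->a", A, B))
```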
\subsection{Challenging Contraction: \texorpdfstring{$C_{abc} \coloneqq A_{ija} B_{jbic}$}{C\_abc := A\_ija B\_jbic}} \label{sec:ijb_jcid} \input{tensor/algs/ijb_jcid} We now turn to a more complex example inspired by space-time continuum computations in the field of general relativity~\cite{generalrelativity}: $C_{abc} \coloneqq A_{ija} B_{jbic}$. For this contraction, we generated a total of 176~different algorithms: \begin{itemize} \item 48 \ddot-based~(\ref*{plt:ijb_jcid:dot}), \item 72 \daxpy-based~(\ref*{plt:ijb_jcid:axpy}), \item 36 \dgemv-based~(\ref*{plt:ijb_jcid:gemv}), \item 12 \dger-based~(\ref*{plt:ijb_jcid:ger}), and \item 8 \dgemm-based:\\ \tensoralgname{cj'}{gemm}~(\ref*{plt:ijb_jcid:cj'_gemm}), \tensoralgname{jc'}{gemm}~(\ref*{plt:ijb_jcid:jc'_gemm}), \tensoralgname{ci'}{gemm}~(\ref*{plt:ijb_jcid:ci'_gemm}), \tensoralgname{i'c}{gemm}~(\ref*{plt:ijb_jcid:i'c_gemm}),\\ \tensoralgname{bj'}{gemm}~(\ref*{plt:ijb_jcid:bj'_gemm}), \tensoralgname{jb'}{gemm}~(\ref*{plt:ijb_jcid:jb'_gemm}), \tensoralgname{bi'}{gemm}~(\ref*{plt:ijb_jcid:bi'_gemm}), \tensoralgname{i'b}{gemm}~(\ref*{plt:ijb_jcid:i'b_gemm}). \end{itemize} All \dgemm-based (see \cref{algs:ijb_jcid}) and several of the \dgemv-based algorithms involve copy operations to ensure that each matrix has a contiguously-stored dimension as required by the \blas interface. Once again, we consider a challenging scenario where both contracted indices are of size $i = j = 8$ and the free indices $a = b = c$ vary between~8 and~1000. \input{tensor/figures/ijb_jcid} \Cref{fig:tensor:ijb_jcid:pred} presents the predicted performance of the 176~algorithms, where algorithms based on \blasl1 and~2 are grouped by kernel. Even with the copy operations, the \dgemm-based algorithms are the fastest. However, within these 8~algorithms, the performance differs by more than~\SI{20}\percent.
\Cref{fig:tensor:ijb_jcid:meas} compares our predictions with corresponding performance measurements\footnote{% Slow tensor contraction algorithms were stopped before reaching the largest problem size by limiting the total measurement time per algorithm to~\SI{15}\min. }: Among the \dgemm-based algorithms, our predictions clearly separate the bulk of fast algorithms from the slightly less efficient ones. \input{tensor/figures/ijb_jcid10} \paragraph{Multi-Threading} Our contraction algorithms can profit from shared memory parallelism through multi-threaded \blas kernels. To focus on the impact of parallelism, we increase the contracted tensor dimension sizes to~$i = j = 32$ and use all 10~cores of the \ivybridge with multi-threaded \openblas. \Cref{fig:tensor:ijb_jcid10} presents performance predictions and measurements for this setup: Our predictions accurately distinguish the three groups of \dgemm-based implementations, and algorithms \tensoralgname{i'c}{gemm}~(\ref*{plt:ijb_jcid:i'c_gemm}) and \tensoralgname{i'b}{gemm}~(\ref*{plt:ijb_jcid:i'b_gemm}) (see \cref{algs:ijb_jcid}), which reach \SI{170}{\giga\flops\per\second}, are correctly identified as the fastest. \tensoralgname{jb'}{gemm}~(\ref*{plt:ijb_jcid:jb'_gemm}) on the other hand merely reaches \SI{60}{\giga\flops\per\second}. This $3\times$~difference in performance among \dgemm-based algorithms emphasizes the importance of selecting the right algorithm. \subsection{Efficiency Study} \input{tensor/figures/eff} The above study provided evidence that our automated approach successfully identifies the most efficient algorithm(s). In the following we show how much faster this approach is compared to empirical measurements. For this purpose, we once more consider the contraction $C_{abc} \coloneqq A_{ai} B_{ibc}$ with $i = 8$ and varying $a = b = c$ on a \harpertown with \openblas. 
\Cref{fig:tensor:eff} presents the speedup of our micro-benchmark over corresponding algorithm measurements: Generally, our predictions are several orders of magnitude faster than such algorithm executions. For $a = b = c = 1000$, this relative improvement is smallest for the \dgemm-based algorithms~(\ref*{plt:eff:gemm}) at $1000\times$, because each \dgemm performs a significant portion of the computation; for the \dger-based algorithms~(\ref*{plt:eff:ger}), it lies between $\num{6000}\times$ and $\num{10000}\times$, and for the \dgemv-based algorithms~(\ref*{plt:eff:gemv}), the gain is $\num{5e5}\times$ to $\num{e6}\times$; finally, for the \blasl1-based algorithms~(\ref*{plt:eff:axpy}, \ref*{plt:eff:dot}), where each kernel invocation only performs a tiny fraction of the contraction, our predictions are \num{1e6} to \num{1e9}~times faster than the algorithm executions.
\section{Introduction} We study a 2D problem of diffraction by a segment bearing im\-pe\-dan\-ce boundary conditions on both sides. This problem can be considered as a cross-section of a 3D problem of diffraction by an infinitely long strip of finite width and zero thickness. The governing equation is the Helmholtz equation, i.e.\ the stationary problem is studied. No restriction is imposed on the relation between the wavelength and the width of the strip (the length of the segment). The impedances of the two sides are assumed to be equal. The problem of diffraction by a segment has been studied extensively, but the vast majority of papers are related to the case of ideal (Dirichlet or Neumann) boundary conditions. A problem with ideal boundary conditions (an ideal segment) admits separation of variables in elliptical coordinates. As a result, the solution is expressed in terms of Mathieu functions \cite{Sieger}. However, this solution is not attractive for applications or for analytical studies. Numerous attempts have been made to obtain a solution analogous to Sommerfeld's formula for the half-plane \cite{Somm}. A review of these attempts can be found in \cite{Luneburg}. Unfortunately, it has been found that the elegant approach based on Riemann surfaces and the Sommerfeld integral cannot be successfully applied to the segment problem. A good practical way to treat the segment problem, at least in the short-wave approximation, is the diffraction series approach. For an ideal segment this approach has been developed in \cite{Sch,HasWein} and in many other papers. Some mathematically important results for the ideal strip problem have been obtained in \cite{Williams,Latta,Gorenflo}, where the problem of diffraction by an ideal strip was reduced to the inverse monodromy problem for a confluent Heun equation. Thus the problem of diffraction by an ideal strip has been solved at least in the mathematical sense. 
One of the authors contributed to this branch \cite{Shanin2001,Shanin2003a,Shanin2003b}. The problem of diffraction by an impedance segment seems much more complicated. At high frequencies the method of diffraction series can be applied to this problem \cite{Herman}. Otherwise one needs to solve an appropriate integral equation \cite{Senior} numerically. There also exist hybrid techniques, which combine analytical and numerical approaches; such techniques can reduce the computational time significantly \cite{Burnside,Sahalos,Ikiz}. Besides, some approximate analytical methods, e.~g.\ an approximate Wiener--Hopf technique \cite{Serbest}, can be applied to this problem. Still, the analytical theory of scattering by an impedance segment is far from complete. Here we present some results that seem important and enable one to perform efficient calculations. The first part of the paper describes the preliminary steps. Namely, the problem is formulated and symmetrized. After sym\-met\-ri\-za\-ti\-on, the sym\-met\-ri\-cal and the antisymmetrical problems are studied in parallel (they are slightly different). Following \cite{Nobl}, for each of these two diffraction problems a functional problem is formulated. Then, auxiliary functional problems are formulated, and the {\em embedding formula\/} expressing the directivity in terms of the auxiliary solutions is derived. The embedding formula is useful since it represents the directivity (which is a function of the angle of incidence and the angle of scattering) as a combination of functions each depending on a single variable. The method of embedding formulae has been applied to many diffraction problems with different sets of auxiliary problems. In \cite{Williams} an embedding formula was derived for diffraction by an ideal strip; problems with grazing incidence were taken to generate the auxiliary solutions. 
In \cite{Biggs1, Biggs2, Biggs3} an embedding formula was obtained for diffraction by thin breakwaters using intricate manipulations of integral equations. An embedding formula was also derived for planar cracks in \cite{Shanin2001b}; edge Green's functions were used to generate the auxiliary problems. In the current research we do not use this approach and simply introduce auxiliary functional problems with a proper behaviour at infinity. Then, following the procedure developed in \cite{Hurd}, matrix Riemann--Hilbert problems are formulated for the auxiliary functional problems. The second part of the paper will be dedicated to solving the matrix Riemann--Hilbert problems using a novel technique, that of the OE--equation. \section{Formulation of diffraction problem} Consider a 2D plane $(x,y)$. The scatterer is the segment $y = 0$, $-a< x < a$. Everywhere outside this segment the Helmholtz equation is valid: \begin{equation} \Delta u + k_0^2 u = 0 \label{eq0101} \end{equation} where $u(x,y)$ is a field variable, and $k_0$ is a parameter. We assume that $k_0$ has a vanishingly small positive imaginary part in order to use the limiting absorption principle. The choice of time dependence is such that a wave traveling in the positive $x$-direction has the form $e^{i k_0 x}$. The total field is the sum of the incident wave $u^{\rm in}$ and the scattered wave $u^{\rm sc}$: \[ u = u^{\rm in} + u^{\rm sc}, \] where \begin{equation} u^{\rm in} = \exp \{ - i k_0 (x \cos \theta^{\rm in} + y \sin \theta^{\rm in} ) \} \label{eq0102} \end{equation} is a plane wave. Here $\theta^{\rm in}$ is the angle of incidence; $0 \le \theta^{\rm in} \le \pi/2$. The total field should be continuous up to each face of the scatterer and obey impedance boundary conditions on the faces: \begin{equation} \pm \frac{\partial u}{\partial y} (x , \pm 0) = \eta \, u (x , \pm 0) , \qquad -a < x < a. \label{eq0103} \end{equation} Here $\eta$ is the impedance parameter. 
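As an illustration of the boundary condition (\ref{eq0103}) -- not part of the segment problem itself -- one can check numerically that on an {\em infinite} impedance plane a plane wave is reflected with coefficient $R = (\eta + i k_0 \sin\theta^{\rm in})/(i k_0 \sin\theta^{\rm in} - \eta)$. The formula for $R$ is derived here only for this sketch; all numerical values are arbitrary:

```python
import cmath
import math

k0 = 2.0                 # wavenumber (arbitrary sample value)
eta = 1.5 - 0.5j         # impedance parameter, Im[eta] <= 0
th = 0.7                 # angle of incidence (arbitrary)

s = 1j * k0 * math.sin(th)
R = (eta + s) / (s - eta)   # reflection coefficient (derived for this sketch)

def u(x, y):
    """Incident plus reflected field above an infinite impedance plane."""
    return cmath.exp(-1j * k0 * x * math.cos(th)) * (
        cmath.exp(-1j * k0 * y * math.sin(th))
        + R * cmath.exp(1j * k0 * y * math.sin(th)))

# check du/dy(x, +0) = eta * u(x, 0) with a central difference
x, h = 0.3, 1e-6
dudy = (u(x, h) - u(x, -h)) / (2 * h)
assert abs(dudy - eta * u(x, 0.0)) < 1e-6
```

The check confirms that the chosen $R$ makes the field satisfy the impedance condition on the plane; for the finite segment no such closed-form reflection coefficient exists, which is what the rest of the paper addresses.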
The condition of energy conservation or dissipation requires \begin{equation} {\rm Im}[\eta] \le 0 . \label{eq0104} \end{equation} The total field should obey Meixner's conditions near the vertices $(\pm a,0)$. Namely, the integral of the ``energy'' combination $|\nabla u|^2 + |u|^2$ over any finite neighborhood of a vertex should be finite. Later on, Meixner's condition will be reformulated as a restriction imposed on the growth of the field near the vertices. The scattered field $u^{\rm sc}$ should also obey Sommerfeld's radiation condition in the standard form: \begin{equation} \left( \frac{\partial u^{\rm sc}}{\partial r} - i k_0 u^{\rm sc} \right) = o(e^{i k_0 r} (k_0 r)^{-1/2}), \label{eq0105} \end{equation} where $r = \sqrt{x^2 + y^2}$. Thus, the scattered field for large $r$ can be written as follows: \begin{equation} u^{\rm sc} (r , \theta) = \frac{\exp\{i k_0 r\}}{\sqrt{2 \pi k_0 r}} S (\theta, \theta^{\rm in}) + o(e^{i k_0 r} (k_0 r)^{-1/2}). \label{eq0106} \end{equation} Here $\theta = \arctan (y/x)$, and $S(\theta, \theta^{\rm in})$ is the {\em directivity} of the scattered field. Finding this directivity is the main goal of this research. \section{Symmetrization} Since the impedances of the faces of the scatterer are chosen to be equal, the problem can be split into symmetrical and antisymmetrical parts: \begin{equation} u^{\rm sc}(x,y) = u^{\rm a}(x,y) + u^{\rm s}(x,y), \label{eq0201} \end{equation} where \[ u^{\rm a}(x,y) = - u^{\rm a}(x,-y), \qquad u^{\rm s}(x,y) = u^{\rm s}(x,-y) \] are the antisymmetrical and symmetrical parts, respectively. 
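The decomposition (\ref{eq0201}) is the standard even/odd splitting with respect to $y$; a short numerical sanity check (the field trace $f$ below is an arbitrary stand-in function):

```python
import cmath

def f(y):
    """Arbitrary stand-in for a field trace in y."""
    return cmath.exp(0.3j * y) + (1 + 2j) * y**2

def sym(y):   # symmetrical part
    return 0.5 * (f(y) + f(-y))

def asym(y):  # antisymmetrical part
    return 0.5 * (f(y) - f(-y))

for y in (-1.2, 0.4, 2.5):
    assert abs(sym(y) + asym(y) - f(y)) < 1e-12   # parts sum back to f
    assert abs(sym(-y) - sym(y)) < 1e-12          # even in y
    assert abs(asym(-y) + asym(y)) < 1e-12        # odd in y
```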
The symmetrical and antisymmetrical parts correspond to the incident waves \[ u^{\rm in,s} = \frac{1}{2}[ \exp \{ - i k_0 (x \cos \theta^{\rm in} + y \sin \theta^{\rm in} ) \} + \exp \{ - i k_0 (x \cos \theta^{\rm in} - y \sin \theta^{\rm in} ) \} ], \] \[ u^{\rm in,a} = \frac{1}{2}[ \exp \{ - i k_0 (x \cos \theta^{\rm in} + y \sin \theta^{\rm in} ) \} - \exp \{ - i k_0 (x \cos \theta^{\rm in} - y \sin \theta^{\rm in} ) \} ], \] respectively. The problems for $u^{\rm a}$ and $u^{\rm s}$ can be formulated as mixed boundary value problems in the half-plane $y> 0$. Boundary conditions for $u^{\rm a}$ are as follows: \begin{equation} \left[ \frac{\partial}{\partial y} - \eta \right] u^{\rm a}(x,+0) = i k_0 \sin \theta^{\rm in} \exp \{ -i k_0 x \cos \theta^{\rm in} \} \qquad |x| < a, \label{eq0202} \end{equation} \begin{equation} u^{\rm a}(x,0) = 0, \qquad |x| > a. \label{eq0203} \end{equation} Boundary conditions for $u^{\rm s}$ are as follows: \begin{equation} \left[ \frac{\partial}{\partial y} - \eta \right] u^{\rm s}(x,+0) = \eta \exp \{ -i k_0 x \cos \theta^{\rm in} \} \qquad |x| < a, \label{eq0204} \end{equation} \begin{equation} \frac{\partial}{\partial y}u^{\rm s}(x,+0) = 0, \qquad |x| > a. \label{eq0205} \end{equation} Below we study the symmetrical and the antisymmetrical problems separately (in parallel). In both cases we are interested in the field for $y > 0$ only. The directivity of the scattered field is the sum of the symmetrical and antisymmetrical parts: \begin{equation} S(\theta, \theta^{\rm in}) = S^{\rm s}(\theta, \theta^{\rm in}) + S^{\rm a}(\theta, \theta^{\rm in}), \label{obvious01} \end{equation} where the last two terms are defined similarly to (\ref{eq0106}). \section{Local behavior of wave fields near the edges} Here we study the growth of the solutions near the vertices. This growth is limited by Meixner's conditions. 
\begin{figure}[ht] \centerline{\epsfig{file=fig01.eps}} \caption{Local coordinates} \label{fig01} \end{figure} Introduce local cylindrical variables $(\rho_\pm , \phi_\pm)$ (Fig.~\ref{fig01}). Consider the {\em total\/} field in the {\bf antisymmetrical case}, i.\ e.\ consider the function $u = u^{\rm a} + u^{\rm in, a}$. Meixner's series for a solution has the form \begin{equation} u(\rho, \phi) = \sum_m \sum_n (k_0 \rho)^{\nu_m} \log^n (k_0 \rho) f_{m,n} (\phi), \label{eq0301} \end{equation} where $\rho= \rho_{\pm}$, $\phi = \phi_{\pm}$, $f_{m,n} (\phi) = f^\pm_{m,n}(\phi_\pm)$. This series is substituted into the Helmholtz equation and into the boundary conditions. Also, some terms of the series are prohibited by Meixner's condition mentioned above. As a result, we get the following asymptotic expansion of the field: \[ u = c (k_0 \rho)^{1/2} \sin(\phi/2) - \frac{2 c \eta}{3 \pi k_0 }(k_0 \rho)^{3/2} \phi \cos (3\phi/2) \] \begin{equation} - \frac{2 c \eta}{3 \pi k_0 } (k_0 \rho)^{3/2} \log(k_0 \rho) \sin (3 \phi/2) + O(\log^2(k_0 \rho) (k_0 \rho)^{5/2}). \label{eq0307} \end{equation} Now consider the {\bf symmetrical case}, i.\ e.\ let $u = u^{\rm s} + u^{\rm in, s}$. The asymptotics in this case is as follows: \begin{equation} u = d - \frac{\eta d}{\pi} \rho \log (k_0 \rho) \cos(\phi) + \frac{\eta d}{k_0 \pi}\rho \phi \sin(\phi) + O((k_0 \rho)^2 \log^2 (k_0 \rho) ). \label{eq0305} \end{equation} Note that the constants $c$ and $d$ in (\ref{eq0307}) and (\ref{eq0305}) are undetermined. Of course, both constants take different values at the two edges, i.\ e.\ in total we introduce four constants $c_\pm$ and $d_\pm$ here. \section{Formulation of Wiener--Hopf functional problems} \subsection{Antisymmetrical case} Consider domain $\Omega$ shown in Fig.~\ref{fig02}. 
This domain is bounded by a part of the $x$-axis, two small arcs (of radii $\epsilon \to 0$) encircling the vertices, and a large arc (of radius $R \to \infty$) mimicking infinity. Consider two functions, both solutions of the Helmholtz equation (\ref{eq0101}) in $\Omega$. The first function is $u^{\rm a}$ (the scattered field in the antisymmetrical case), and the second function is an outgoing or decaying plane wave $w$: \begin{equation} w = w(k,x,y) = \exp \left\{ i \left(k x + \xi(k) y \right) \right\}, \label{eq0601} \end{equation} \begin{equation} \xi(k) \equiv \sqrt{k_0^2 - k^2}, \label{eq0603b} \end{equation} where $k$ is a real parameter. The branch of the square root $\xi$ is chosen in such a way that for $|k| < {\rm Re}[k_0]$ the values of the square root are close to the positive real axis. By continuity, the values of the square root for $|k| > {\rm Re}[k_0]$ are close to the positive imaginary axis (the real axis passes below the point~$k_0$ due to the limiting absorption principle). Note that $w$ is a solution of the Helmholtz equation for each value of the parameter $k$. \begin{figure}[ht] \centerline{\epsfig{file=fig02.eps}} \caption{Contour for the Green's formula} \label{fig02} \end{figure} Apply Green's formula to these two functions in $\Omega$: \begin{equation} \int_{\partial \Omega} \left[ \frac{\partial u^{\rm a}}{\partial n} w - \frac{\partial w}{\partial n} u^{\rm a} \right] dl = 0. \label{eq0511} \end{equation} Since the function $u^{\rm a}$ obeys the radiation condition, the integral over the large arc tends to zero as $R \to \infty$. The integrals over the small arcs tend to zero as $\epsilon \to 0$ due to the local asymptotic expansions at the vertices. Thus, only the integral over the parts of the $x$-axis should be considered. 
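The branch convention just described for $\xi(k)$ can be realized numerically: giving $k_0$ a small positive imaginary part (the limiting absorption principle) makes the principal complex square root select exactly this branch for real $k$. A minimal sketch with an arbitrary sample value of $k_0$:

```python
import cmath

k0 = 2.0 + 1e-9j   # limiting absorption: small positive imaginary part

def xi(k):
    """sqrt(k0^2 - k^2) on the branch of the paper: close to positive real
    for |k| < Re k0, close to positive imaginary for |k| > Re k0 (real k)."""
    return cmath.sqrt(k0 * k0 - k * k)

assert abs(xi(0.0) - k0) < 1e-8                          # xi(0) = k0
assert xi(1.0).real > 0 and abs(xi(1.0).imag) < 1e-8     # propagating range
assert xi(3.0).imag > 0 and abs(xi(3.0).real) < 1e-8     # evanescent range
assert abs(xi(3.0) - xi(-3.0)) < 1e-12                   # even in k on the real axis
```

With this branch the plane wave $w = e^{i(kx + \xi(k)y)}$ is outgoing for $|k| < {\rm Re}[k_0]$ and decaying in $y$ for $|k| > {\rm Re}[k_0]$, as required.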
Define the following values: \begin{equation} \check U_- (k)= \int \limits_{-\infty}^{-a} \left[ \frac{\partial u^{\rm a}}{\partial n} w - \frac{\partial w}{\partial n} u^{\rm a} \right] dx = \int \limits_{-\infty}^{-a} \frac{\partial u^{\rm a}(x,+0)}{\partial y} e^{i k x} dx , \label{eq0602} \end{equation} \begin{equation} \check U_0 (k)= \int \limits_{-a}^a \left[ \frac{\partial u^{\rm a}(x,+0)}{\partial n} w(x,+0) - \frac{\partial w (x,+0)}{\partial n} u^{\rm a}(x,+0) \right] dx , \label{eq0603} \end{equation} \begin{equation} \check U_+ (k)= \int_a^{\infty} \left[ \frac{\partial u^{\rm a}}{\partial n} w - \frac{\partial w}{\partial n} u^{\rm a} \right] dx = \int \limits_a^{\infty} \frac{\partial u^{\rm a}(x,+0)}{\partial y} e^{i k x} dx . \label{eq0604} \end{equation} According to (\ref{eq0511}), the following functional equation is valid for all real $k$: \begin{equation} \check U_- (k) + \check U_0 (k) + \check U_+ (k) =0 . \label{eq0605} \end{equation} Expression (\ref{eq0603}) can be transformed using (\ref{eq0202}): \[ \check U_0(k) = (\eta - i \xi(k)) \int \limits_{-a}^a u^{\rm a}(x,+0) e^{i k x} dx + \] \begin{equation} \frac{k_0 \sin \theta^{\rm in}}{k - k_*} \left( \exp \{ i (k - k_*) a \}- \exp \{ -i (k - k_*) a \} \right), \label{eq0603a} \end{equation} where $$ k_* = k_0 \cos \theta^{\rm in}. $$ Define the values \begin{equation} U_- (k) \equiv \check U_- (k) - \frac{k_0 \sin \theta^{\rm in}}{k - k_*} \exp \{ -i (k - k_*) a \} \label{eq0602d} \end{equation} \begin{equation} U_0 (k) \equiv (\eta - i \xi(k)) \int \limits_{-a}^a u^{\rm a}(x,+0) e^{i k x} dx \label{eq0603d} \end{equation} \begin{equation} U_+ (k) \equiv \check U_+ (k) + \frac{k_0 \sin \theta^{\rm in}}{k - k_*} \exp \{ i (k - k_*) a \}. \label{eq0604d} \end{equation} According to (\ref{eq0605}), these values obey the functional equation \begin{equation} U_- (k) + U_0 (k) + U_+ (k) =0 . 
\label{eq0605t} \end{equation} Functions $\check U_j$, $j = -,0,+$ are defined as Fourier transforms taken on some parts of the real axis. Thus, standard theorems can be used to establish properties of these functions as well as the properties of $U_j$: \begin{itemize} \item[\bf Property 1 ] Function $U_- (k)$ defined by (\ref{eq0602d}) and (\ref{eq0602}) can be analytically continued onto the whole lower half-plane from the real axis, and it is regular there. Note that since we assume that $k_0$ has a negligibly small positive imaginary part, the important point $k = - k_0$ belongs to the lower half-plane, and the function $U_- (k)$ is regular at this point. \item[\bf Property 2 ] Similarly, function $U_+ (k)$ defined by (\ref{eq0604d}) and (\ref{eq0604}) can be analytically continued onto the whole upper half-plane including $k = k_0$, and it is regular everywhere in the upper half-plane except a pole at $k = k_*$. At this pole function $U_+$ has a prescribed residue equal to $k_0 \sin \theta^{\rm in}$. \item[\bf Property 3 ] Function \begin{equation} \tilde U_0 (k) = \left( \eta - i \xi(k) \right)^{-1} U_0 (k) \label{eq0808a} \end{equation} is regular on the whole complex plane $k$. 
\item[\bf Property 4 ] Applying Watson's lemma to the integral representations (\ref{eq0602}), (\ref{eq0603}), (\ref{eq0604}), we can get the following growth estimates as $|k| \to \infty$ in the domains of {\em a priori\/} regularity of the unknown functions: \begin{equation} U_+(k) = O(k^{-1/2} e^{i k a}), \qquad {\rm Arg}[e^{- i \pi /2} k] \le \pi/2 , \label{eq0609a} \end{equation} \begin{equation} U_-(k) = O(k^{-1/2} e^{-i k a}), \qquad {\rm Arg}[e^{ i \pi /2} k] \le \pi/2 , \label{eq0610a} \end{equation} \begin{equation} U_0(k) = O(k^{-1/2} e^{-i k a}), \quad {\rm Arg}[e^{- i \pi /2} k] \le \pi/2 , \label{eq0611a} \end{equation} \begin{equation} U_0(k) = O(k^{-1/2} e^{i k a}), \qquad {\rm Arg}[e^{ i \pi /2} k] \le \pi/2 . \label{eq0612a} \end{equation} Note that the estimates (\ref{eq0609a}), (\ref{eq0610a}) require some algebra to derive. \end{itemize} Introduce cuts ${\cal G}_1$ and ${\cal G}_2$ going from $-k_0$ and $k_0$ to infinity (see Fig.~\ref{fig04_a}). These cuts go along the lines corresponding to the values of the square root $\pm \sqrt{k_0^2 - k^2}$ taken for real $k$. \begin{figure}[ht] \centerline{\epsfig{file=fig04_a.eps}} \caption{Cuts ${\cal G}_1$ and ${\cal G}_2$} \label{fig04_a} \end{figure} Function $U_-$ can be naturally continued to the lower half-plane, function $U_+$ can be naturally continued to the upper half-plane, and function $U_0$ can be continued to the whole plane with the cuts ${\cal G}_1$ and ${\cal G}_2$. However, using the relations \[ U_- (k) = - U_0 (k) - U_+ (k), \qquad U_+ (k) = - U_0 (k) - U_- (k), \] the function $U_-$ can be continued to the upper half-plane with a cut ${\cal G}_2$, and the function $U_+$ can be continued to the lower half-plane with a cut ${\cal G}_1$. Moreover, it is possible to study the Riemann surface of each function from the set $(U_-, U_+,U_0)$ and to prove that all branch points are of order two and are located at~$\pm k_0$. 
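The explicit pole terms introduced in (\ref{eq0602d}) and (\ref{eq0604d}) originate from the Fourier integral of the incident-wave right-hand side of (\ref{eq0202}); the closed form appearing in (\ref{eq0603a}) can be checked against direct numerical integration. An illustrative check (arbitrary sample values, not part of the derivation):

```python
import cmath
import math

k0, a = 2.0, 1.0
th_in = 0.8
k_star = k0 * math.cos(th_in)
k = 1.3                       # any real k != k_star
d = k - k_star

# closed form of the source term in the text
closed = k0 * math.sin(th_in) / d * (cmath.exp(1j * d * a) - cmath.exp(-1j * d * a))

# direct Fourier integral of i*k0*sin(th_in)*exp(-i*k_star*x) over (-a, a),
# computed with the composite midpoint rule
n = 20000
h = 2 * a / n
num = sum(1j * k0 * math.sin(th_in) * cmath.exp(1j * d * (-a + (m + 0.5) * h))
          for m in range(n)) * h

assert abs(num - closed) < 1e-6
```

The simple pole at $k = k_*$ and its residue $k_0 \sin\theta^{\rm in}$, stated in Property~2, are read off directly from this closed form.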
These properties enable us to formulate a functional problem for the functions $U_\pm$: \begin{problem} Find functions $U_+ (k)$, $U_- (k)$, regular in the complex plane with the cuts ${\cal G}_1$ and ${\cal G}_2$, such that \begin{itemize} \item function $U_-$ is regular in the lower half-plane; \item function $U_+$ is regular in the upper half-plane except a simple pole at $k = k_*$ with a residue equal to $k_0 \sin \theta^{\rm in}$; \item function $(\eta - i \xi(k) )^{-1} U_0(k)$ is regular on the whole plane (here $U_0$ is defined as $U_0 \equiv - (U_+ + U_-)$); \item functions $U_+$, $U_-$, $\tilde U_0$ obey the growth restrictions (\ref{eq0609a}), (\ref{eq0610a}), (\ref{eq0611a}), (\ref{eq0612a}). \end{itemize} \label{functional_problem_A1} \end{problem} The formulation of the functional problem means that we forget about the definition of the unknown functions through the wave fields and look for functions $U_+ (k)$, $U_- (k)$ obeying Problem~\ref{functional_problem_A1} and being otherwise arbitrary. Suppose that a solution of the functional problem has been found. Let us describe the link between the directivity $S^{\rm a} (\theta)$ of the antisymmetrical problem and the solution of the functional problem. Apply Green's formula (\ref{eq0511}) to the domain $\Omega$, take $u^{\rm a}$ as $u$, and $u^{\rm in,a}(x,y)$ as $w$. The integral over the large arc tends to a constant linked with the directivity. The result is as follows: \begin{equation} S^{\rm a} (\theta,\theta^{\rm in}) = -e^{- i \pi /4} k_0 \sin \theta \,\, \tilde U_0 (- k_0 \cos(\theta)). \label{eq0617} \end{equation} Note that $\tilde U_0$ depends on $\theta^{\rm in}$ implicitly. 
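Once the functional problem is solved, formula (\ref{eq0617}) produces the directivity by sampling $\tilde U_0$ at $k = -k_0\cos\theta$. A sketch of this final step, with a purely illustrative placeholder for $\tilde U_0$ (the actual function must of course come from solving the functional problem):

```python
import cmath
import math

k0 = 2.0

def U0_tilde(k):
    """Placeholder stand-in for the solution of the functional problem;
    the real U0_tilde is NOT known in closed form."""
    return 1.0 / (k - 3j)

def directivity_a(theta):
    """S^a(theta): sample U0_tilde at k = -k0*cos(theta), as in the text."""
    return (-cmath.exp(-1j * math.pi / 4) * k0 * math.sin(theta)
            * U0_tilde(-k0 * math.cos(theta)))

# the factor sin(theta) makes S^a vanish along the plane of the segment
assert abs(directivity_a(0.0)) < 1e-12
assert abs(directivity_a(math.pi)) < 1e-12
```

The vanishing at $\theta = 0, \pi$ is built into (\ref{eq0617}) and is consistent with the antisymmetrical field being zero on the continuation of the segment.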
\subsection{Functional problem for the symmetrical case} In the symmetrical case define functions $V_-(k)$, $V_+ (k)$, $V_0(k)$ by formulae \begin{equation} V_- (k) = \int_{-\infty}^{-a} \exp \{ i k x \} u^{\rm s} (x,+0) dx - \frac{i}{k-k_*}\exp\{-i(k-k_*)a\}, \label{eq0602e} \end{equation} \begin{equation} V_0 (k) = \frac{ i \left(\eta - i \xi(k) \right)}{\eta \xi(k)} \int \limits_{-a}^a\exp \{ i k x \}\frac{\partial{ u^{\rm s}(x, +0)}}{\partial y}dx , \label{eq0603e} \end{equation} \begin{equation} V_+ (k) = \int_{a}^\infty \exp \{ i k x \} u^{\rm s} (x,+0) dx + \frac{i}{k-k_*}\exp\{i(k-k_*)a\}, \label{eq0604e} \end{equation} which are similar to (\ref{eq0602d}), (\ref{eq0603d}), (\ref{eq0604d}). A functional equation is valid for these functions: \begin{equation} V_- (k) + V_0 (k) + V_+ (k) = 0. \label{eq0605a} \end{equation} The growth estimations for the new unknown functions are as follows: \begin{equation} V_+(k) = O(k^{-1} e^{i k a}), \qquad {\rm Arg}[e^{- i \pi /2} k] \le \pi/2 , \label{eq0609b} \end{equation} \begin{equation} V_-(k) = O(k^{-1} e^{-i k a}), \qquad {\rm Arg}[e^{ i \pi /2} k] \le \pi/2 , \label{eq0610b} \end{equation} \begin{equation} V_0(k) = O(k^{-1} e^{-i k a}), \quad {\rm Arg}[e^{- i \pi /2} k] \le \pi/2 , \label{eq0611b} \end{equation} \begin{equation} V_0(k) = O(k^{-1} e^{i k a}), \qquad {\rm Arg}[e^{ i \pi /2} k] \le \pi/2 . 
\label{eq0612b} \end{equation} The functional problem for the functions $V_\pm$ is as follows: \begin{problem} Find functions $V_+ (k)$, $V_- (k)$, regular in the complex plane with the cuts ${\cal G}_1$ and ${\cal G}_2$, such that \begin{itemize} \item function $ V_- (k)$ is regular in the lower half-plane; \item function $V_+ (k)$ is regular in the upper half-plane except a simple pole at $k = k_*$ with a residue equal to $i$; \item function \begin{equation} \tilde V_0 = \frac{\eta\xi(k)}{i(\eta - i \xi(k) )} V_0(k) \end{equation} is regular in the whole plane (here $V_0$ is defined as $V_0 \equiv - (V_+ + V_-)$); \item functions $V_+$, $V_-$, $\tilde V_0$ obey the growth restrictions (\ref{eq0609b}), (\ref{eq0610b}), (\ref{eq0611b}), (\ref{eq0612b}). \end{itemize} \label{functional_problem_S1} \end{problem} The expression for the directivity of the symmetrical problem is as follows: \begin{equation} S^{\rm s} (\theta, \theta^{\rm in}) = e^{- i \pi /4} \tilde V_0 (- k_0 \cos(\theta)). \label{eq0617a} \end{equation} \section{Auxiliary Wiener--Hopf functional problem and embedding formula} \subsection{Auxiliary functions. Antisymmetrical problem} Consider Problem~\ref{functional_problem_A1}. Here we modify this functional problem and formulate a problem for auxiliary functions. The following modifications are made. First, two pairs of auxiliary functions are introduced: $(U^1_-, U^1_+)$ and $(U^2_-, U^2_+)$. This enables us to construct a basis of solutions for the family of initial functional problems indexed by the parameter $\theta^{\rm in}$. Second, the functions $U^{1,2}_+$ are required to have no poles (i.\ e.\ the conditions of analyticity become stricter). Third, faster growth at infinity is allowed (i.\ e.\ the growth restrictions become weaker). 
\begin{problem} Find functions $U^{1,2}_+ (k)$, $U^{1,2}_- (k)$, regular in the complex plane with the cuts ${\cal G}_1$ and ${\cal G}_2$, such that \begin{itemize} \item functions $U^{1,2}_-$ are regular in the lower half-plane; \item functions $U^{1,2}_+$ are regular in the upper half-plane; \item functions \begin{equation} \tilde U^{1,2}_0 = (\eta - i \xi(k) )^{-1} U^{1,2}_0(k) \end{equation} are regular on the whole plane (here the functions $U^{1,2}_0$ are defined as $U^{1,2}_0 \equiv - (U^{1,2}_+ + U^{1,2}_-)$); \item functions $U^{1,2}_+$, $U^{1,2}_-$, $\tilde U^{1,2}_0$ obey the growth restrictions (\ref{eq0609c}), (\ref{eq0610c}), (\ref{eq0611c}), (\ref{eq0612c}) formulated below. \end{itemize} \label{functional_problem_A2} \end{problem} The growth restrictions for this functional problem have the following form: \begin{equation} U^{j}_+(k) = \delta_{j,2} (e^{-i \pi /2 } k)^{1/2} e^{i k a} + O(k^{-1/2} e^{i k a}), \qquad {\rm Arg}[e^{- i \pi /2} k] \le \pi/2 , \label{eq0609c} \end{equation} \begin{equation} U^{j}_-(k) = \delta_{j,1} (e^{i \pi /2 } k)^{1/2} e^{-i k a} + O(k^{-1/2} e^{-i k a}), \qquad {\rm Arg}[e^{ i \pi /2} k] \le \pi/2 , \label{eq0610c} \end{equation} \begin{equation} \tilde U^{j}_0(k) = - \delta_{j,1} ( e^{- i \pi /2 } k)^{-1/2} e^{- i k a} + O(k^{-3/2} e^{-i k a}), \quad {\rm Arg}[e^{- i \pi /2} k] \le \pi/2 , \label{eq0611c} \end{equation} \begin{equation} \tilde U^{j}_0(k) = - \delta_{j,2} ( e^{i \pi /2 } k)^{-1/2} e^{i k a} + O(k^{-3/2} e^{i k a}), \qquad {\rm Arg}[e^{ i \pi /2} k] \le \pi/2 , \label{eq0612c} \end{equation} where $j = 1,2$, and $\delta$ is the Kronecker delta. Organize the solution of the auxiliary functional problem as a matrix \begin{equation} {\rm U}(k) = \left( \begin{array}{cc} U^1_-(k) & U^1_+ (k) \\ U^2_-(k) & U^2_+ (k) \end{array} \right). \label{eq0613} \end{equation} Let us show that the solution of Problem~\ref{functional_problem_A2} is unique. Namely, let there exist two such solutions, ${\rm U}$ and $\bar {\rm U}$. 
Consider the expression ${\rm J} = \bar {\rm U} {\rm U}^{-1}$. This expression is equal to \begin{equation} {\rm J} = \frac{1}{D} \left( \begin{array}{cc} D_{1,1} & D_{1,2} \\ D_{2,1} & D_{2,2} \end{array} \right) \label{eq0614} \end{equation} where \[ D = | {\rm U} |, \] \[ D_{1,1} = \left| \begin{array}{cc} \bar U^1_-(k) & \bar U^1_+ (k) \\ U^2_-(k) & U^2_+ (k) \end{array} \right|, \] \[ D_{1,2} = \left| \begin{array}{cc} U^1_-(k) & U^1_+ (k) \\ \bar U^1_-(k) & \bar U^1_+ (k) \end{array} \right|, \] \[ D_{2,1} = \left| \begin{array}{cc} \bar U^2_-(k) & \bar U^2_+ (k) \\ U^2_-(k) & U^2_+ (k) \end{array} \right|, \] \[ D_{2,2} = \left| \begin{array}{cc} U^1_-(k) & U^1_+ (k) \\ \bar U^2_-(k) & \bar U^2_+ (k) \end{array} \right|, \] and $|\cdot|$ denotes the determinant of a matrix. All five determinants can be analyzed as follows. Consider $D$ as an example. Study two representations of this determinant (they are equivalent due to the linear relation between $U^j_-$, $U^j_+$, and $\tilde U^j_0$): \begin{equation} D = - ( \eta - i \xi(k) ) \left| \begin{array}{cc} U^1_- & \tilde U^1_0 \\ U^2_- & \tilde U^2_0 \end{array} \right| = -( \eta - i \xi(k) ) \left| \begin{array}{cc} \tilde U^1_0 & U^1_+ \\ \tilde U^2_0 & U^2_+ \end{array} \right|. \label{eq0615} \end{equation} The first representation can be used to study the behaviour of \[ \tilde D(k) \equiv - ( \eta - i \xi(k) )^{-1} D(k) \] in the lower half-plane, and the second representation can be used to study the behaviour of the same function in the upper half-plane. One can see that $\tilde D$ is analytic in both half-planes and tends to the constant $-1$ at infinity in both half-planes. Thus, according to Liouville's theorem, \[ \tilde D \equiv -1. \] A similar reasoning can be applied to each of the four other determinants. The result is \[ {\rm J}(k) \equiv {\rm I}, \] where ${\rm I}$ is the identity matrix, i.\ e.\ the solution is unique. 
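The cofactor representation (\ref{eq0614}) of ${\rm J} = \bar{\rm U}\,{\rm U}^{-1}$ used in this uniqueness argument is purely algebraic and can be verified on arbitrary $2\times 2$ matrices (random complex entries below are stand-ins for the analytic functions):

```python
import random

def det(m):
    """Determinant of a 2x2 matrix given as a list of rows."""
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def rc():
    return complex(random.uniform(-1, 1), random.uniform(-1, 1))

random.seed(0)
U    = [[rc(), rc()], [rc(), rc()]]   # rows play (U^1_-, U^1_+), (U^2_-, U^2_+)
Ubar = [[rc(), rc()], [rc(), rc()]]

D   = det(U)
D11 = det([Ubar[0], U[1]])
D12 = det([U[0], Ubar[0]])
D21 = det([Ubar[1], U[1]])
D22 = det([U[0], Ubar[1]])

# J = Ubar * U^{-1}, computed directly via the adjugate of U
inv = [[ U[1][1] / D, -U[0][1] / D],
       [-U[1][0] / D,  U[0][0] / D]]
J = [[sum(Ubar[i][k] * inv[k][j] for k in range(2)) for j in range(2)]
     for i in range(2)]

assert abs(J[0][0] - D11 / D) < 1e-9
assert abs(J[0][1] - D12 / D) < 1e-9
assert abs(J[1][0] - D21 / D) < 1e-9
assert abs(J[1][1] - D22 / D) < 1e-9
```

The analytic content of the uniqueness proof is, of course, in showing that each ratio $D_{i,j}/D$ is entire and bounded; the identity above is only the algebraic skeleton.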
Note that the determinant $D(k)$ has no zeros except the zeros of the function $\eta - i \xi(k)$. \subsection{Auxiliary functions. Symmetrical problem} Similarly to the antisymmetrical case, introduce an auxiliary functional problem for the symmetrical case. \begin{problem} Find functions $V^{1}_+ (k)$, $V^{2}_+ (k)$, $V^{1}_- (k)$, $V^{2}_- (k)$, regular in the complex plane with the cuts ${\cal G}_1$ and ${\cal G}_2$, such that \begin{itemize} \item functions $V^{j}_-$ are regular in the lower half-plane; \item functions $V^{j}_+$ are regular in the upper half-plane; \item functions \begin{equation} \label{eq0605_s} \tilde V^{j}_0 \equiv -\frac{ \xi(k)}{ i \left(\eta - i \xi(k) \right)} (V^{j}_- + V^{j}_+) \end{equation} are regular on the whole plane; \item functions $V^{j}_+$, $V^{j}_-$, $\tilde V^{j}_0$ obey the growth restrictions (\ref{eq0609s}), (\ref{eq0610s}), (\ref{eq0611s}), (\ref{eq0612s}) formulated below. \end{itemize} \label{functional_problem_S2} \end{problem} The growth conditions for this functional problem have the following form: \begin{equation} V^{j}_+(k) = \delta_{j,2} e^{i k a} + O(k^{-1}e^{i k a}), \qquad {\rm Arg}[e^{- i \pi /2} k] \le \pi/2 , \label{eq0609s} \end{equation} \begin{equation} V^{j}_-(k) = \delta_{j,1} e^{-i k a} + O(k^{-1} e^{-i k a}), \qquad {\rm Arg}[e^{ i \pi /2} k] \le \pi/2 , \label{eq0610s} \end{equation} \begin{equation} \tilde V^{j}_0(k) = - \delta_{j,1} e^{- i k a} + O(k^{-1} e^{-i k a}), \quad {\rm Arg}[e^{- i \pi /2} k] \le \pi/2 , \label{eq0611s} \end{equation} \begin{equation} \tilde V^{j}_0(k) = - \delta_{j,2} e^{i k a} + O(k^{-1}e^{i k a}), \qquad {\rm Arg}[e^{ i \pi /2} k] \le \pi/2 . \label{eq0612s} \end{equation} The solution of the functional problem can be organized as a matrix \begin{equation} {\rm V}(k) = \left( \begin{array}{cc} V^{1}_-(k) & V^{1}_+ (k) \\ V^{2}_-(k) & V^{2}_+ (k) \end{array} \right). 
\label{eq0613_s} \end{equation} Using a representation similar to (\ref{eq0614}), one can show that Problem~\ref{functional_problem_S2} has a unique solution. \subsection{Embedding formula} Consider the {\bf antisymmetrical} case. Let the row vector $(U_- , U_+)$ be a solution of Problem~\ref{functional_problem_A1}, and let ${\rm U}(k)$ be a solution of Problem~\ref{functional_problem_A2} in the matrix form (\ref{eq0613}). Find functions $r_1(k)$ and $r_2(k)$ such that \begin{equation} (U_-(k), U_+(k)) = (r_1(k) , r_2(k)) \left( \begin{array}{cc} U^1_-(k) & U^1_+ (k) \\ U^2_-(k) & U^2_+ (k) \end{array} \right). \label{eq2001} \end{equation} Due to Cramer's rule, \begin{equation} r_1 = \frac{D_1}{D}, \qquad r_2 = \frac{D_2}{D}, \label{eq2002} \end{equation} where \begin{equation} D_1 = \left| \begin{array}{cc} U_-(k) & U_+ (k) \\ U^2_-(k) & U^2_+ (k) \end{array} \right|, \qquad D_2 = \left| \begin{array}{cc} U^1_-(k) & U^1_+ (k) \\ U_-(k) & U_+ (k) \end{array} \right|. \label{eq2003} \end{equation} The determinant $D$ was calculated in the previous section using representation (\ref{eq0615}). The determinants $D_1$, $D_2$ can be analyzed similarly to $D$; namely, there exist two representations of each determinant enabling one to study these determinants in the upper and lower half-planes: \begin{equation} D_1 = - ( \eta - i \xi(k) ) \left| \begin{array}{cc} U_- & \tilde U_0 \\ U^2_- & \tilde U^2_0 \end{array} \right| = -( \eta - i \xi(k) ) \left| \begin{array}{cc} \tilde U_0 & U_+ \\ \tilde U^2_0 & U^2_+ \end{array} \right|, \label{eq2003add01} \end{equation} \begin{equation} D_2 = - ( \eta - i \xi(k) ) \left| \begin{array}{cc} U^1_- & \tilde U^1_0 \\ U_- & \tilde U_0 \end{array} \right| = -( \eta - i \xi(k) ) \left| \begin{array}{cc} \tilde U^1_0 & U^1_+ \\ \tilde U_0 & U_+ \end{array} \right|. 
\label{eq2003add02} \end{equation} Using these representations and applying Liouville's theorem, one can prove that \begin{equation} D_1 = \frac{\left( \eta - i \sqrt{k_0^2 - k^2} \right)}{k - k_*}R_1, \label{eq2003add03} \end{equation} \begin{equation} D_2 = \frac{\left( \eta - i \sqrt{k_0^2 - k^2} \right)}{k - k_*}R_2, \label{eq2003add04} \end{equation} where $R_1$, $R_2$ are constants. They can be obtained by calculating the residues of the determinants $D_1$, $D_2$ at the point $k=k_*$. These residues can be found either from (\ref{eq2003add01}), (\ref{eq2003add02}) or from (\ref{eq2003add03}), (\ref{eq2003add04}). Comparing these representations, we obtain \begin{equation} R_1 = -\sqrt{k_0^2 - k_*^2}\, \tilde U^2_0(k_*), \quad R_2 = \sqrt{k_0^2 - k_*^2} \, \tilde U^1_0(k_*). \end{equation} Substituting $r_1$ and $r_2$ into (\ref{eq2001}), we obtain the embedding formula: \begin{equation} \tilde U_{0}(k,k_*) =\frac{\xi(k_*)}{k-k_*}\left(\tilde U_0^{1}(k_*)\tilde U^{2}_0(k) - \tilde U_0^{1}(k)\tilde U^{2}_0(k_*)\right). \label{embedding_a} \end{equation} According to the embedding formula, we can focus our efforts on finding the solution of Problem~\ref{functional_problem_A2}, namely the functions $\tilde U^{j}_0(k)$, $j= 1,2$. A similar procedure yields an embedding formula for the {\bf symmetrical case}: \begin{equation} \tilde V_{0}(k,k_*) =\frac{i\eta}{(k-k_*)}\left(\tilde V^{2}_0(k_*)\tilde V_0^{1}(k) - \tilde V^{2}_0(k)\tilde V_0^{1}(k_*) \right). \label{embedding_s} \end{equation} \section{Matrix Riemann--Hilbert formulation for auxiliary functional problems} \subsection{Antisymmetrical problem} Here we present a matrix Riemann--Hilbert formulation for the antisymmetrical case. Let us make some preliminary steps. Consider the cuts ${\cal G}_1$ and ${\cal G}_2$ (see Fig.~\ref{fig04}, left). 
The values on the left shores (when going from $\pm k_0$ to $\infty$) of the cuts are denoted by symbols with lower index $L$; the values on the right shores are denoted by index~$R$. Consider the bypasses about $\pm k_0$ going from a point on the left shore to the right shore, i.\ e.\ in the positive direction. Our current aim is to describe the transformation of the matrix ${\rm U}$ occurring as a result of the bypass. Namely, let us prove that \begin{equation} {\rm U}_R(k) = {\rm U}_L(k)\, {\rm M}_1(k), \qquad k \in {\cal G}_1 , \label{eq0801} \end{equation} \begin{equation} {\rm U}_R(k) = {\rm U}_L(k)\, {\rm M}_2(k), \qquad k \in {\cal G}_2 , \label{eq0802} \end{equation} with \begin{equation} {\rm M}_1 (k) = \left( \begin{array}{cc} 1 & 2 i \xi/(\eta - i \xi) \\ 0 & (\eta + i \xi)/(\eta - i \xi) \end{array} \right) , \label{eq0806} \end{equation} \begin{equation} {\rm M}_2 (k) = \left( \begin{array}{cc} (\eta + i \xi)/(\eta - i \xi) & 0 \\ 2 i \xi/(\eta - i \xi) & 1 \end{array} \right) . \label{eq0805} \end{equation} The analytic continuation of the square root $\xi(k)\equiv \sqrt{k_0^2 - k^2}$ on the cuts ${\cal G}_{1,2}$ is defined as follows. This square root is equal to $k_0$ for $k =0$. Then, introduce the paths shown in Fig.~\ref{fig04} (right). These paths go from zero to the left shores of ${\cal G}_{1,2}$. The values of the square root on ${\cal G}_{1,2}$ are taken as the result of the continuation along these paths. The values of the square root are taken for ${\rm M}_{1,2}$ from the left shores. \begin{figure}[ht] \centerline{\epsfig{file=fig04.eps}} \caption{(left) Bypasses around $k_0$ and $-k_0$. (right) Analytical continuation of the square roots} \label{fig04} \end{figure} Derive (\ref{eq0802}). Consider contour ${\cal G}_{2}$ associated with matrix ${\rm M}_2$.
Continue functional equation (\ref{eq0605}): \begin{equation} (U_-^j(k))_L = - U^j_+(k) - \left(\eta - i \xi(k) \right)\tilde U^j_0 (k), \label{eq0803} \end{equation} \begin{equation} (U_-^j(k))_R = - U^j_+(k) - \left(\eta + i \xi(k)\right)\tilde U^j_0 (k). \label{eq0804} \end{equation} Then, \[ (U_-^j (k))_R = \frac{\eta + i \xi(k)}{\eta - i \xi(k)} (U_-^j(k))_L + \frac{2 i \xi(k)}{\eta - i \xi(k)} U_+^j (k). \] Note that functions $U^j_+$ and $\tilde U_0^j$ are not labeled as $R$ or $L$, since they do not change their values after the considered bypass. Thus, relations (\ref{eq0802}) and (\ref{eq0805}) are valid. Similarly one can prove (\ref{eq0801}) and (\ref{eq0806}). Reformulate the growth restrictions (\ref{eq0611c}) and (\ref{eq0612c}) according to (\ref{eq0605}) as follows: \begin{equation} U^{j}_- = i \, \delta_{j,1} ( e^{- i \pi /2 } k)^{1/2} e^{- i k a} + O(k^{-1/2} e^{-i k a}), \quad {\rm Arg}[e^{- i \pi /2} k] \le \pi/2 , \label{eq0807} \end{equation} \begin{equation} U^{j}_+ = i \, \delta_{j,2} ( e^{ i \pi /2 } k)^{1/2} e^{i k a} + O(k^{-1/2} e^{i k a}), \qquad {\rm Arg}[e^{ i \pi /2} k] \le \pi/2 . \label{eq0808} \end{equation} Both restrictions are related to the continuations along the paths shown in Fig.~\ref{fig04}. Now we can formulate a Riemann--Hilbert problem for ${\rm U}$: \begin{problem} Find a matrix function ${\rm U}(k)$ of elements (\ref{eq0613}) such that \begin{itemize} \item it is regular on the plane cut along the lines ${\cal G}_{1,2}$; \item it obeys functional equations (\ref{eq0801}), (\ref{eq0802}) with coefficients (\ref{eq0805}), (\ref{eq0806}) on the cuts; \item it obeys growth restrictions (\ref{eq0609c}), (\ref{eq0610c}), (\ref{eq0807}), (\ref{eq0808}); \item functions $U^j_+ (k) + U^j_- (k)$, $j = 1,2$ have zeros at $k = k' \equiv \sqrt{k_0^2 + \eta^2}$; \item functions $U^j_{\pm}$ grow no faster than a constant near the points $\pm k_0$. 
\end{itemize} \label{WHH_with_zeros} \end{problem} The fourth condition (concerning zeros at $\pm k'$) is difficult to take into account, so we would like to eliminate it. Consider the Riemann surface of the function $\sqrt{k_0^2 - k^2}$ cut along the lines ${\cal G}_{1,2}$. The surface is split into two sheets by the cuts. The sheet to which the point $\sqrt{k_0^2 - 0^2} = k_0$ belongs will be called the physical sheet. Consider the function $\eta - i \sqrt{k_0^2 - k^2}$ on this surface. Note that this function has two zeros only on one sheet (on the physical one or on the other one). If the zeros belong to the physical sheet, deform the contours ${\cal G}_{1,2}$ such that: \begin{itemize} \item the end points remain the same; \item contour ${\cal G}_2$ remains symmetrical to ${\cal G}_1$ with respect to zero; \item the zeros of $\eta - i \sqrt{k_0^2 - k^2}$ no longer belong to the physical sheet. \end{itemize} A scheme of such a contour deformation is shown in Fig.~\ref{fig05}. If the zeros do not belong to the physical sheet from the very beginning, then no deformation is needed. The domain of $\eta$ for which the zeros of $\eta - i \sqrt{k_0^2 - k^2}$ belong to the physical sheet (and the deformation is needed) is \begin{equation} {\rm Im}[\eta] <0,\quad {\rm Re}[\eta] <0, \end{equation} i.\ e.\ it is the third quadrant of the complex plane. Denote the resulting contours (deformed if the deformation is needed or undeformed otherwise) by ${\cal G}_{1,2}'$. {\bf Remark. } The positions of the points $k'$ on the Riemann surface of $\sqrt{k_0^2 - k^2}$ can be found from condition (\ref{eq0104}). Namely, the boundary between the allowed values of $\eta$ and the prohibited values is the real axis. Consider the function $k' = k' (\eta)$. This function maps the real axis of $\eta$ into the parts ${\cal G}_1'' = (-\infty , - k_0 )$, ${\cal G}_2''= (k_0, \infty)$ of the real axis. Consider the Riemann surface of $\sqrt{k_0^2 - k^2}$ cut along ${\cal G}_{1,2}''$.
The surface will be split into two sheets. Again, call the sheet containing the point $\sqrt{k_0^2 -0^2} = k_0$ the physical sheet. The boundary ${\rm Im}[\eta] = 0$ corresponds to the cuts ${\cal G}_{1,2}''$. The area ${\rm Im}[\eta] < 0$ corresponds to the unphysical sheet. \begin{figure}[ht] \centerline{\epsfig{file=fig05.eps,width = 14cm}} \caption{Deformation of the cuts ${\cal G}_{1,2}$} \label{fig05} \end{figure} Formulate the functional problem for the contours ${\cal G}_{1,2}'$. According to the principles of analytical continuation, relations (\ref{eq0801}), (\ref{eq0802}) remain valid with the same matrices (\ref{eq0805}), (\ref{eq0806}). Thus, the formulation of the problem is almost the same: \begin{problem} Find a matrix function ${\rm U}(k)$ of elements (\ref{eq0613}) such that \begin{itemize} \item it is regular on the plane cut along the lines ${\cal G}_{1,2}'$; \item it obeys functional equations (\ref{eq0801}), (\ref{eq0802}) with coefficients (\ref{eq0805}), (\ref{eq0806}) on the cuts; \item it obeys growth restrictions (\ref{eq0609c}), (\ref{eq0610c}), (\ref{eq0807}), (\ref{eq0808}); \item functions $U^j_{\pm}$ grow no faster than a constant near the points $\pm k_0$. \end{itemize} \label{WHH} \end{problem} \subsection{Symmetrical problem} Similarly to the antisymmetrical case, there are two functional equations describing the transformation of unknown functions at the cuts: \begin{equation} {\rm V}_R(k) = {\rm V}_L {\rm N}_1(k), \qquad k \in {\cal G}_1 , \label{eq0801sym} \end{equation} \begin{equation} {\rm V}_R(k) = {\rm V}_L {\rm N}_2(k), \qquad k \in {\cal G}_2 . 
\label{eq0802sym} \end{equation} \begin{equation} {\rm N}_1 (k) = \left( \begin{array}{cc} 1 & -2 \eta/(\eta - i \zeta) \\ 0 & (\eta + i \zeta)/(i \zeta - \eta ) \end{array} \right) , \label{eq0806sym} \end{equation} \begin{equation} {\rm N}_2 (k) = \left( \begin{array}{cc} (\eta + i \zeta)/(i \zeta - \eta) & 0 \\ -2 \eta/(\eta - i \zeta) & 1 \end{array} \right) . \label{eq0805sym} \end{equation} Reformulate growth restrictions (\ref{eq0611s}), (\ref{eq0612s}) according to (\ref{eq0605_s}) as follows: \begin{equation} V^{j}_- = \, \delta_{j,1}e^{- i k a} + O(k^{-1} \log(k)e^{-i k a}), \quad {\rm Arg}[e^{- i \pi /2} k] \le \pi/2 , \label{eq0807sym} \end{equation} \begin{equation} V^{j}_+ = \, \delta_{j,2} e^{i k a} + O(k^{-1} \log(k) e^{i k a}), \quad {\rm Arg}[e^{ i \pi /2} k] \le \pi/2 . \label{eq0808sym} \end{equation} Finally, formulate a functional problem for ${\rm V}$. \begin{problem} Find a matrix function ${\rm V}(k)$ of elements (\ref{eq0613_s}) such that \begin{itemize} \item it is regular on the plane cut along the lines ${\cal G}_{1,2}$; \item it obeys functional equations (\ref{eq0801sym}), (\ref{eq0802sym}) with coefficients (\ref{eq0806sym}), (\ref{eq0805sym}) on the cuts; \item it obeys growth restrictions (\ref{eq0609s}), (\ref{eq0610s}), (\ref{eq0807sym}), (\ref{eq0808sym}); \item functions $ V^{j}_{\pm}$ grow no faster than $(\sqrt{k_0 \mp k})^{-1/2}$ near the points $\pm k_0$. \end{itemize} \label{WHH_sym} \end{problem} \section{Conclusion} The problem of diffraction by an impedance strip is symmetrized and reduced to two Wiener--Hopf functional problems (Problems \ref{functional_problem_A1} and \ref{functional_problem_S1}) leading to directivities $S^{\rm a}(\theta,\theta^{\rm in})$ and $S^{\rm s}(\theta,\theta^{\rm in})$. Then auxiliary functional problems (Problems~\ref{functional_problem_A2} and~\ref{functional_problem_S2}) are introduced.
Using the embedding formulae (\ref{embedding_a}) and (\ref{embedding_s}) a simple connection with Problems \ref{functional_problem_A1} and \ref{functional_problem_S1} is established. Riemann--Hilbert problems (Problems~\ref{WHH} and \ref{WHH_sym}) for the auxiliary solutions are formulated. In the second part of the paper a family of Riemann--Hilbert problems indexed by an artificial parameter will be introduced. A differential equation will be built with respect to this parameter. A novel technique of OE-equations will be applied to solve this equation and to find the solution of the original problem. Some numerical results will be presented. \section*{Acknowledgements} The work is supported by the grants RFBR 14-02-00573, Scientific Schools-283.2014.2, RF Government grant 11.G34.31.0066. The authors are grateful to the participants of the seminar on wave diffraction held in the S.Pb. branch of Steklov Mathematical Institute of RAS (the chairman is Prof. V.\ M.\ Babich) for interesting discussions.
\section{Introduction} Quantum electrodynamics (QED) can be regarded as the oldest and possibly most accurate and successful quantum field theory (QFT) there is. The renormalisation of QED, by the pioneers Dyson, Feynman, Schwinger, Tomonaga and others \cite{Schweber:1994qa}, gave birth to the successful application of quantum field theory to all of particle physics, culminating in the Standard Model (SM) in the sixties \cite{Glashow:1961tr,Weinberg:1967tq,Salam:1968rm} and finally the Higgs-boson discovery in 2012 \cite{ATLAS:2012yve,CMS:2012qbp}. Since the QED coupling constant is small, $\alpha \equiv \frac{e^2}{4 \pi} \approx \frac{1}{137}$, perturbation theory is a reliable tool for many cases. A topical example is the anomalous magnetic moment of the muon $a_\mu = (g_\mu-2)/2$ with the theory average $a_\mu =116591810(43) \times 10^{-11}$ \cite{Aoyama:2020ynm} very close to the experimental average $a_\mu =116592061(41) \times 10^{-11}$ \cite{Muong-2:2021ojo}, currently with some tension. The application of QED to particle decays comes with additional subtleties which can be traced back to two idealisations, infinite space and infinitely precise measurement apparatuses, which do not hold in practice, leading to infrared (IR) divergences. In well-defined observables IR-divergences cancel and the understanding thereof is based on cancellation-theorems \cite{Bloch:1937pw,Kinoshita:1962ur,Lee:1964is} relying on first principles such as unitarity. IR-sensitivity, leading to large logs, can invalidate the naive counting in perturbation theory. In $d \Gamma( B \to \pi e^+ \bar \nu)/d E_\pi$ for example, one will find $\alpha \to \alpha \ln m_b/m_e \approx 0.05$ to all orders in perturbation theory. The focus of these notes is on conceptual matters of QED in weak decays, illustrated with examples. In the remaining two paragraphs we briefly comment on important topics not covered in this text.
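As a numerical aside, the size of the tension between the two $a_\mu$ averages quoted above follows from combining the uncertainties in quadrature. The back-of-the-envelope sketch below uses only the two numbers quoted above.

```python
import math

# World averages in units of 1e-11: (central value, uncertainty)
theory = (116591810, 43)   # theory average quoted above
exper = (116592061, 41)    # experimental average quoted above

diff = exper[0] - theory[0]
sigma = math.hypot(theory[1], exper[1])   # combine uncertainties in quadrature
print(f"a_mu(exp) - a_mu(th) = {diff}e-11, {diff / sigma:.1f} standard deviations")
```

With the quoted numbers this gives a difference of $251 \times 10^{-11}$, about $4.2$ standard deviations, which quantifies the "some tension" above.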
In reporting experimental results in flavour physics the QED-radiation is regarded as a background and is effectively removed by using Monte-Carlo programs such as PHOTOS \cite{PHOTOS} or PHOTONS++ in SHERPA \cite{Schonherr:2008av}. These tools are based on versions of scalar QED (point-like approximations). The cross-validation of these programs seems essential in ensuring the precision extraction of CKM matrix elements (e.g. $|\VCKM{u(c)b}|$) or the testing of lepton flavour universality \cite{Bifani:2018zmi} (e.g. $R_K = \Gamma[B \to K \mu^+ \mu^-]/\Gamma[B \to K e^+ e^-]$ with tensions since 2014 up to its latest measurement \cite{LHCb:2021lvy}). This topic certainly deserves further comment and study.\footnote{Let us add that one needs to distinguish kaon physics from $D$- and $B$-physics in this respect. In the former case the situation is better as the logs are not that large, structure-dependent analyses in chiral perturbation theory exist and experiment is more inclusive in the photon such that Monte-Carlo tools are not indispensable in principle.} Somewhat related, QED is also important in the context of initial-state radiation at $e^+ e^-$ colliders \cite{Frixione:2022ofv} and is the main protagonist in QED in strong backgrounds \cite{Fedotov:2022ely}. We will not review the infrared problems of quantum chromodynamics (QCD) but refer the reader to an excellent list of text books \cite{Sterman:1993hfp,Weinberg:1995mt,Muta:1998vi,Smilga:2001ck,Collins:2011zzd} and review articles \cite{Sterman:1995fz,Sterman:2004pd,Agarwal:2021ais}. We content ourselves with emphasising that QCD is conceptually very different from QED in that there is a mass gap for the observable hadronic spectrum. All particle masses are proportional to a non-perturbative scale $\Lambda_{\textrm{QCD}} = {\cal O}( 200\,\mbox{MeV})$ with the exception of the pseudo-Goldstone boson, due to chiral symmetry breaking, for which $m_\pi^2 = m_q {\cal O}(\Lambda_{\textrm{QCD}})$.
The challenge in QCD is to establish factorisation theorems whereby collinear divergences arising from a hard kernel, computed with quarks and gluons, are absorbed in a meaningful way into hadronic objects such as the parton distribution functions or jets. These short notes are organised as follows. In Sec.~\ref{sec:IR} we describe the origin of infrared divergences and the cancellation thereof in observables. Three examples, $e^+ e^- \to hadrons$, $\pi^+ \to \ell^+ \bar \nu$ and $B \to \pi \ell^+ \bar \nu$, of increasing complexity are reviewed in Sec.~\ref{sec:examples} at the level of the point-like approximation. Aspects of going beyond this approximation are discussed in Sec.~\ref{sec:beyond} and we end with conclusions in Sec.~\ref{sec:conclusions}. Formal matters such as the Low-theorem, the KLN-theorem and coherent states are summarised or extended in Apps.~ \ref{app:Low}, \ref{app:KLN} and \ref{app:coherent}. Some more practical aspects related to QED, such as infrared singularities at one-loop, numerical handling of singularities and terminology can be found in Apps.~ \ref{app:regions}, \ref{app:pragmatic} and \ref{app:terminology} respectively. \section{Infrared Divergences and Infrared-sensitivity} \label{sec:IR} IR-divergences are associated with massless particles and there are two known mechanisms enforcing massless particles: Goldstone bosons and gauge bosons (with unbroken gauge symmetry and without confinement).\footnote{The fermion mass in QCD can be put to zero and remains zero in perturbation theory due to chiral symmetry but the zero value in itself does not stand out by any mechanism.}$^,$\footnote{ Not so long ago it was understood that the photon can be viewed as a Goldstone boson of a higher form symmetry \cite{Gaiotto:2014kfa}. This would bring down the number of mechanisms to one and further unify the picture.
} The Goldstone effective theory, chiral perturbation theory in QCD, is largely free from IR-divergences as the shift symmetry enforces derivative interactions which tame the IR-behaviour. Now, the only gauge boson of the type mentioned is our well-known photon and this makes QED a unique laboratory for IR-problems.\footnote{To some extent this also applies to the graviton and gravity, as already studied by Weinberg \cite{Weinberg:1965nx}; see \cite{Cachazo:2014fwa} for renewed interest.} Before venturing any further it is advisable to review the basics of IR-divergences. Since real and virtual photon radiation are connected by cancellation theorems it is sufficient, at first, to consider real radiation only. \begin{figure}[h] \begin{centering} \includegraphics[width=7.0 cm]{figs/photon-emission.pdf} \caption{\small Photon-emission from an external electron in a generic process. \label{fig:real-emission}} \end{centering} \end{figure} Disregarding ultraviolet (UV) divergences, the only type of divergences that can arise in Feynman diagrams are due to propagators going on-shell, which are of the IR-type. At LO this is particularly simple as we may just consider real emission of a photon from a charged particle, e.g. a lepton such as the electron $e^-$, as depicted in Fig.~\ref{fig:real-emission}. The denominator of the propagator $\frac{1}{(p+k)^2 - m_e^2}$, for on-shell $p$, behaves like \begin{equation} \label{eq:basic} (p+k)^2 - m_e^2 = 2 p \cdot k = 2 E_\gamma E_e( 1 - \beta \cos \theta) \;, \end{equation} where $k = E_\gamma(1,0,0,1)$, $p = (E_e, \kappa \hat{n} )$, $E_e = \sqrt{m_e^2 + \kappa^2}$, $\beta = \kappa/E_e$ and $\theta$ is the angle between the unit vector $\hat{n}$ and the $z$-axis. The propagator is singular if either the photon energy $E_\gamma$ approaches zero or, in the limit $m_e \to 0$, the angle $\theta$ approaches zero. These divergences are known as soft and collinear respectively.
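The kinematic identity \eqref{eq:basic} can be confirmed symbolically. The sketch below uses the mostly-minus metric and places $\hat{n}$ in the $x$-$z$ plane; the variable names are ours, not part of the text.

```python
import sympy as sp

E_g, kappa, m_e, theta = sp.symbols('E_gamma kappa m_e theta', positive=True)
E_e = sp.sqrt(m_e**2 + kappa**2)          # on-shell electron energy

# Four-vectors: k = E_gamma (1,0,0,1) and p = (E_e, kappa * n) with
# n = (sin(theta), 0, cos(theta)), so theta is the angle to the z-axis
k = sp.Matrix([E_g, 0, 0, E_g])
p = sp.Matrix([E_e, kappa * sp.sin(theta), 0, kappa * sp.cos(theta)])
g = sp.diag(1, -1, -1, -1)                # mostly-minus metric
dot = lambda a, b: (a.T * g * b)[0, 0]

beta = kappa / E_e
lhs = dot(p + k, p + k) - m_e**2          # (p+k)^2 - m_e^2 with p^2 = m_e^2, k^2 = 0
rhs = 2 * E_g * E_e * (1 - beta * sp.cos(theta))
assert sp.simplify(lhs - rhs) == 0
```

The simplification uses $p^2 = m_e^2$ and $k^2 = 0$, so the left-hand side indeed reduces to $2 p \cdot k$.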
In $d=4$ they lead to logarithmic singularities $\ln m_\gamma$ and $\ln m_e$.\footnote{A photon mass $m_\gamma$ is introduced to regulate the soft divergence, in addition to \eqref{eq:basic}, which in dimensional regularisation would map into $\frac{1}{\eps_{\text{IR}}}$. Note that the photon mass also regularises the collinear divergences.} In certain regions of phase space these divergences combine and lead to soft-collinear divergences $\ln m_\gamma \, \ln m_e$. Generally, at $n$-loops there are terms of the order $\ln^k m_\gamma \, \ln^l m_e$ with $l \leq n$ and $k+l \leq 2n$. It seems worthwhile to briefly digress on the collinear term $\ln m_e$. For finite lepton mass this is a physical effect, see for example the previously mentioned sizeable $\alpha \ln m_b/m_e$-terms in $B \to \pi e^+ \bar \nu$.\footnote{ In QCD $\ln m_q$ terms are either absorbed into hadronic quantities such as distribution amplitudes, parton distribution functions or jets in the context of what is known as \emph{factorisation theorems} or, if this cannot be done, then the variable is not IR-safe. This might indicate a problem of applying perturbation theory in a non-perturbative regime.} The question of for which observables QED is well-defined for zero lepton masses gave rise to the KLN-theorem (cf. App.~\ref{app:KLN} for further comments). We shall assume lepton masses to be non-zero; special emphasis will be given to $\ln m_\ell$-terms, to which we refer as \textbf{ hard-collinear logs} (cf. also App.~\ref{app:terminology}) and which are a physical effect. This contrasts with the terms caused by zero-energy photons, to which we refer as IR-divergences (and interchangeably as soft-divergences), following the main literature. \subsection{Observables are infrared finite} Of course physical observables have to be free of divergences and this is where one expects deep physical principles to dictate cancellations. Cancellations segregate observable from non-observable quantities.
First, IR-divergences are interlinked with the very definition of what a particle is and the measurement process itself. How can one distinguish a single electron from an electron with an ultrasoft photon (or a highly relativistic electron with a photon emitted at an infinitesimally small angle)? That is indeed where the resolution lies: what is measurable needs to be assessed carefully. One needs to come back to the idealisations mentioned in the introduction: infinite space and infinite detector resolution. The true IR-divergences (i.e. excluding collinear ones) are effectively regulated by the introduction of an energy scale, say $\delta$ (which has to be larger than the actual detector resolution scale). There are two main approaches: \begin{enumerate} \item The fixed particle Fock-space is abandoned in favour of so-called \textbf{ coherent states} which take into account that charged particles are surrounded by a soft photon-cloud \cite{Chung:1965zza,Kibble:1968oug,Kibble:1968npb,Kibble:1968lka,Kulish:1970ut}. In 1970 Kulish and Faddeev \cite{Kulish:1970ut} showed that the coherent state approach leads to a finite $S$-matrix in QED, which is gauge invariant with a separable Hilbert space.\footnote{There is no successful version of this approach for perturbative QCD, for early attempts see \cite{Giavarini:1987ts} and for recent improved understanding of the underlying issues thereof cf. \cite{Gonzo:2019fai}. From a purely conceptual viewpoint this is not crucial as the $S$-matrix of QCD is defined with respect to its physical states, the hadronic states, and it comes with all its good properties. The $S$-matrix elements can be extracted from (non-perturbative) correlation functions via the LSZ-formalism (as shown to be valid by the Haag-Ruelle scattering theory \cite{Duncan}).
On a pragmatic level, in collider physics, quarks and gluons hadronise into jets.} \item Second, one defines \textbf{ observables} which are \textbf{ inclusive enough} such that these divergences cancel. This approach was pioneered by Bloch and Nordsieck in 1937 \cite{Bloch:1937pw}, extended in the sixties by the KLN-theorem \cite{Kinoshita:1962ur,Lee:1964is} (cf. App.~\ref{app:KLN}) to additionally include collinear singularities and applied to correlation functions in the form of the Kinoshita-Poggio-Quinn-theorem \cite{Kinoshita:1962ur,Poggio:1976qr,Sterman:1976jh} (cf. Sec.~\ref{sec:inclusive}). As a rule of thumb, the more inclusive a quantity is, the fewer divergences or IR-sensitive terms there are. \end{enumerate} The second approach can, reassuringly, be seen as a limit of the former. In view of it being more general we consider it worthwhile to first discuss the coherent state approach. Our brief summary is largely based on the excellent presentation in Duncan's book \cite{Duncan} and some more context can be found in App.~\ref{app:coherent}. Let us concretely assume that the detector can only capture photons with an energy above $ \delta$ and that events containing such photons are rejected. Thus it is advisable to replace the electron state, to which we adhere for illustration, by a state with any number of photons with energies smaller than the detector cut-off \begin{equation} \label{eq:coherent} \state{e^-(\vec{q})} \to \state{e^-(\vec{q})}_n \equiv \state{e^-(q),\gamma(k_1) \dots \gamma(k_n) }_{(E_\gamma)_i < \delta} \;, \end{equation} and (formally) \begin{equation} \state{e^-(\vec{q})} = \sum_{n \geq 0,\vec{q}} c_n(\vec{q}) \state{e^-(\vec{q})}_n \;, \end{equation} is the coherent state, with appropriate $c_n(\vec{q})$, which can be written as an exponential of an integral over the creation operators, cf. App.~\ref{app:coherent}.
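A standard property of such coherent states (stated here without derivation) is that the photon multiplicity follows a Poisson distribution. The toy check below uses an illustrative mean soft-photon number $\bar n$, standing in for the weights $|c_n|^2$; it is not computed from any process.

```python
import math

nbar = 0.3   # toy mean number of soft photons below the cut-off delta (illustrative)

# Poisson weights P_n = exp(-nbar) nbar^n / n!, playing the role of |c_n|^2
P = [math.exp(-nbar) * nbar**n / math.factorial(n) for n in range(60)]

# Total probability sums to one and the mean photon number is nbar
assert abs(sum(P) - 1.0) < 1e-12
assert abs(sum(n * p for n, p in enumerate(P)) - nbar) < 1e-12
```

The normalisation check mirrors the statement that the total transition probability sums over all soft-photon multiplicities.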
Denoting by $P_n$ the probability of $n$-fold soft-photon emission, the total probability is a sum of all possibilities $ P_{\textrm{tot}} = \sum_{n \geq 0} P_n$. When the total transition probability $P_{\textrm{tot}}$ of all $n$-states \eqref{eq:coherent} is considered, the momentum space integrals are cut off below at $\delta$ and are thus manifestly IR-finite (no soft-divergences). The $S$-matrix is well-defined, as mentioned above, and the IR-divergences are absorbed into the definition of the states. It seems worthwhile to point out that this bears some resemblance to the absorption of the UV divergences into the parameters of the theory, which in turn also originate from an idealisation, namely that space-time is a continuum. How does this connect to the Bloch-Nordsieck mechanism? Reassuringly, upon expanding to finite order in $\alpha$ one recovers the Bloch-Nordsieck solution. More concretely, in order to compute the ${\cal O}(\alpha)$ corrections to a decay process $i \to f$ one has to consider its radiative counterpart $i \to f (\gamma_{E_\gamma < \delta})$. In the total transition probability one can show that the IR-divergences cancel diagram by diagram, as beautifully illustrated in many textbooks, e.g. $e^+ e^- \to \bar q q $ in \cite{Muta:1998vi}. These cancellations have been shown to hold to all orders in QED by exponentiation \cite{Yennie:1961ad,Weinberg:1965nx}.\footnote{The case of QCD, which is beyond the scope of these notes, is complicated as the simple combinatorics in QED are spoiled by zero-mass charged particles (the gluons) and the colour structure.
The Bloch-Nordsieck mechanism is replaced in perturbation theory by the KLN-theorem, whose features are briefly discussed in App.~\ref{app:KLN}, and for the more involved case of hadrons in final states we refer to the textbooks \cite{Sterman:1993hfp,Collins:2011zzd}.} \paragraph{No fixed particle-number $S$-matrix:} Let us briefly digress and motivate why the $S$-matrix of the fixed particle-number Fock space does not exist, as it turns out to be zero. We provide three different viewpoints: \begin{enumerate} \item The IR-divergences, caused by the absence of a mass gap, can be seen as an indication of the ill-defined fixed particle-number Fock-space $S$-matrix. It is instructive to give a mass to the particle causing the trouble, the photon. The IR-divergences $\ln m_\gamma$ exponentiate such that the $S$-matrix, $S \propto \exp( |a| \ln m_\gamma + \dots) \to 0 $, assumes zero value in the limit of zero photon mass. Hence, the $S$-matrix is infinite at fixed order and zero at all orders! Thus the asymptotic completeness of the in and out Hilbert spaces ceases to make sense as there is no $S$-matrix connecting the two. \item Another way to look at it is to realise that due to the massless photons the single particle pole, assumed by the LSZ-formalism, is softened by the presence of radiative corrections $(p^2-m^2)^{-1} \to (p^2-m^2)^{-1 + \alpha |A| }$ into a branch cut, as first shown by Schroer in a 2D model \cite{Schroer:1963gw} (he came up with the term ``infraparticle''). This makes the particle of mass $m$ disappear from the $S$-matrix when multiplied by the LSZ-factor $p^2 - m^2$ upon taking the on-shell limit $p^2 \to m^2$. Moreover, Buchholz \cite{Buchholz:1986uj} has shown, using very general arguments, that a charged particle obeying Gauss' law cannot be a discrete eigenstate of the momentum squared operator, which goes hand in hand with the branch cut. A notable aspect is that the coefficient $|A|$ is gauge dependent, e.g.
\cite{Jackiw:1968jpv}, and another sign that there is a problem. \item Whereas the $S$-matrix is gauge invariant in perturbation theory, this is not the entire story as has recently been shown using asymptotic symmetries \cite{Kapec:2017tkm}. The common lore is that local gauge symmetries give rise to global charge conservation only and that they are not really symmetries in the observable sense. However, asymptotically (that is at spatial infinity) there are infinitely many symmetries, so-called asymptotic symmetries \cite{Strominger:2017zoo}, known as large (i.e. non-local) gauge symmetries. In a very interesting paper \cite{Kapec:2017tkm} it has been shown that the vanishing of the fixed particle number $S$-matrix can be understood as due to non-invariance under these asymptotic symmetries. Closing the circle, it is found that once gauge invariance is enforced, the coherent states emerge! \end{enumerate} Now, is it considered a problem that the fixed particle number $S$-matrix is not defined? For mathematical physics, yes. The fact that the electron does not correspond to an isolated particle in the spectrum is known as the IR-problem of QED (e.g. \cite{Mund:2021zhx} also for historic references and discussion of this notorious problem). The pragmatic particle physicist, or advocate of the Bloch-Nordsieck- and KLN-approach, would simply point to the fact that the fixed particle number $S$-matrix is not an observable but rather an intermediate auxiliary quantity. Concluding, in practice the infrared problem of QED is bypassed in the pragmatic approach by IR-regularisation (e.g. $m_\gamma \neq 0$) and removing the regulator ($m_\gamma \to 0$) in observables such as decay rates. Let us add that in practice, for a number of reasons (e.g. no additional scale), dimensional regularisation is the choice of most practitioners.
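The statement of point 1 above, that the $S$-matrix is "infinite at fixed order and zero at all orders", can be made tangible with a toy exponentiated soft factor $(m_\gamma/\Lambda)^{|a|}$. The constants below are illustrative and not tied to any real process.

```python
import math

a, Lam = 0.05, 1.0   # illustrative exponent |a| and hard scale Lambda

def resummed(m_gamma):
    # exp(|a| ln(m_gamma/Lam)) = (m_gamma/Lam)^|a| -> 0 as m_gamma -> 0
    return (m_gamma / Lam) ** a

def order_n_term(m_gamma, n):
    # n-th term of the fixed-order expansion: each term diverges as m_gamma -> 0
    return (a * math.log(m_gamma / Lam)) ** n / math.factorial(n)

for mg in (1e-2, 1e-8, 1e-20):
    print(mg, abs(order_n_term(mg, 1)), resummed(mg))
```

As the regulator is removed, every fixed-order term grows logarithmically while the resummed factor tends smoothly to zero, mimicking the vanishing fixed particle-number $S$-matrix.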
\section{Decay Rates and their Infrared-effects} \label{sec:examples} Following the discussion on the origin of IR-divergences and why they disappear from observables we discuss these mechanisms in three practical examples with decreasing level of inclusiveness and increasing level of IR-effects. Namely, the (inclusive) $e^+ e^- \to hadrons$ cross section, the leptonic decay $\pi^+ \to \ell^+ \bar \nu$ and the semileptonic case $B \to \pi \ell^+ \bar \nu$. In the latter two cases, the hadrons will be treated in the point-like approximation with comments beyond this treatment deferred to Sec.~\ref{sec:beyond}. For most practical applications first order ${\cal O}(\alpha)$ is sufficient. At the amplitude level we therefore need ${\cal O}(e^{0,1,2})$, denoted by ${\cal A}^{(0,1,2)}$, corresponding to tree, real and virtual. We refer to ${\cal A}^{(0)}$ and ${\cal A}^{(2)}$ as the non-radiative and to ${\cal A}^{(1)}$ as the radiative amplitude. The cancellation of IR-divergences is then a result of $\textrm{Re}[{\cal A}^{(0)} ({\cal A}^{(2)})^*]$ versus $|{\cal A}^{(1)}|^2$ when properly integrated over phase space. Let us rephrase this in terms of a generic decay $i \to f$ at the level of the rates \begin{alignat}{3} \label{eq:IRtype} &d \Gamma(i \to f) &\;\propto\;& 1 +& & \frac{\alpha}{\pi}(A_V \ln m_\gamma + B_V \ln m_{\gamma,f} \ln m_f + C_V \ln m_f + {\cal O}(1) ) d \Phi_f \;, \nonumber \\[0.1cm] &d \Gamma(i \to f \gamma) &\;\propto\;& & & \frac{\alpha}{\pi}(A_R \ln m_\gamma + B_R \ln m_{\gamma,f} \ln m_f + C_R \ln m_f + {\cal O}(1) ) d \Phi_{f} d\Phi_\gamma \;, \end{alignat} where $d \Phi$ is the phase space measure, $m_f$ is a small mass of a final state particle (e.g. an electron mass) and $m_{\gamma,f}$ stands for either $m_f$ or $m_\gamma$. The subscripts $V$ and $R$ denote virtual and real and $A$, $B$ and $C$ stand for soft, soft-collinear and hard-collinear divergences.
Integrating over the entire photon phase space \begin{eqnarray} \label{eq:diff-cancel} d \Gamma(i \to f) + \int d\Phi_\gamma \, d \Gamma(i \to f\gamma ) \propto (1 + \frac{\alpha}{\pi} ( C \ln m_f + {\cal O}(1) ) ) \, d \Phi_f \;, \end{eqnarray} with all the soft-divergences cancelling, whereas the collinear logs cancel or not, \begin{equation} C = ( C_V + \int d\Phi_\gamma C_R) = \left\{\begin{array}{ll} \textrm{zero} & \textrm{collinear-safe differential variables} \\[0.1cm] \textrm{non-zero} & \textrm{non-collinear-safe differential variables} \end{array} \right. \;, \end{equation} depending on the differential variables (cf. Sec.~\ref{sec:semileptonic} for a concrete example). The further statement of the cancellation-theorems (Bloch-Nordsieck and KLN) is that if one integrates over the remaining phase space $d \Phi_{f}$, then (in the total rate) \begin{equation} \label{eq:total-cancel} \Gamma(i \to f) + \Gamma(i \to f\gamma) \propto 1 + \frac{\alpha}{\pi}{\cal O}(1) \;, \end{equation} all IR-divergences are absent, schematically: $([A,B,C]_V+ [A,B,C]_R)^{\textrm{inc}} = 0$. This picture is broken in practice by the following two sources: \begin{itemize} \item [i) ] The experiment is not fully photon-inclusive and rejects hard photons with $E_\gamma > \delta$ where $\delta$ is the previously discussed threshold which is (slightly) larger than the actual detector resolution.\footnote{If one of the final state particles is very light then one might think of applying cuts on the angle because of the angular resolution as well.
As long as the mass of the charged particle is finite one can separate it from the collinear photon(s) by a magnetic field.} This leads to the replacements \begin{alignat}{2} \label{eq:IRtypeII} & (A_V+ A_R) \ln m_\gamma &\;\to\;& (A_V(\delta)+ A_R(\delta)) \ln \delta \;,\nonumber \\[0.1cm] & (B_V+ B_R) \ln m_\gamma \ln m_f &\;\to\;& (B_V(\delta)+ B_R(\delta)) \ln \delta \ln m_f \;, \nonumber \\[0.1cm] & C \ln m_f &\;\to\;& C(\delta) \ln m_f \;, \end{alignat} where $C(\delta) \neq 0$ irrespective of whether the differential variables are collinear-safe or not. The functions $A(\delta),B(\delta),C(\delta)$ are polynomial in $\delta$. \item [ii) ] The rate can be differential in some final state kinematics and therefore not a total rate as in \eqref{eq:diff-cancel}. In this case the unitarity argument, on which the cancellation is based, does not necessarily hold since the kinematics make the sum too restrictive. The (non)-cancellation needs to be reassessed: depending on the kinematic variables, hard-collinear effects $\ln m_f$ do or do not cancel. \end{itemize} \begin{table}[h] \begin{center} \begin{tabular}{l | l | l | l | r} \textbf{type} & \textbf{i) diff. in $\gamma$} & \textbf{ii) diff. in $f$} & \textbf{IR-terms} & \textbf{Sec.~}\\ \hline $e^+ e^- \to hadrons $ & no & no & none & \ref{sec:inclusive} \\[0.1cm] $\pi^+ \to \ell^+ \bar \nu $ & yes & no & $A,B, (C)$ Eqs.\mbox{(\ref{eq:IRtype},\ref{eq:IRtypeII})} & \ref{sec:leptonic} \\[0.1cm] $B \to \pi \ell^+ \bar \nu $ & yes & yes & $A,B,C$ Eqs.\mbox{(\ref{eq:IRtype},\ref{eq:IRtypeII})} & \ref{sec:semileptonic} \\[0.1cm] \end{tabular} \end{center} \caption{Types of observables considered where diff. is short for differential in $\gamma$ or $f$ (final states) and i) and ii) refer to the itemised conditions above. The bracket around (C) in row 2 will, hopefully, become clear upon reading Sec.~\ref{sec:leptonic}.
\label{tab:cases}} \end{table} \subsection{A classic example of infrared finiteness: $e^+ e^- \to hadrons$} \label{sec:inclusive} Here we briefly deviate from the QED-course as we consider the finiteness of corrections in the strong coupling constant to $e^+ e^- \to hadrons$. An analogue in QED would be the somewhat exotic $ \nu \bar \nu\to Z \to \ell^+ \ell^-$. Now, by the optical theorem the total cross-section \begin{equation} \label{eq:e+e-} \sigma_{\textrm{tot}} (e^+ e^- \to hadrons)(q^2) \propto \textrm{Im} [ \Pi(q^2) ] \;, \end{equation} is related to the imaginary part of the vacuum polarisation $\Pi(q^2)$ \begin{equation} \label{eq:vacpol} \left(q_\mu q_\nu - q^2 g_{\mu\nu} \right){\Pi}(q^2) = i \int d^4 x e^{i x \cdot q} \matel{0}{T j_\mu (x) j_\nu(0) }{0} \;, \end{equation} where $j_\mu = \sum_f e Q_f \bar f \gamma_\mu f$ is the electromagnetic current and $Q_f$ the electromagnetic charge. On the non-perturbative level there is no question as to whether this quantity is well-defined because of the mass gap. In particular, in the large-$N_c$ limit \begin{equation} \textrm{Im} [ \Pi(s) ] = \pi \sum_{V = \rho^0,\omega ..} \delta(s-m_V^2) f_V^2 \;, \end{equation} with $f_V$ the vector meson decay constants and, most importantly, $m_{\rho^0} \approx 770\,\mbox{MeV}$ the lowest mass, exhibiting the mass gap. The question we would like to address is whether it is finite to all orders in perturbation theory using quarks and gluons as degrees of freedom. According to the cancellation-theorems and the discussion outlined in the beginning of this section this must be the case since this is a fully inclusive observable (and conditions i) \& ii) are not met).
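To put a number on the quantity whose IR-finiteness is at stake: at parton level $R = \sigma(e^+ e^- \to hadrons)/\sigma(e^+ e^- \to \mu^+ \mu^-) = N_c \sum_f Q_f^2$, and the IR-finite ${\cal O}(\alpha_s)$ correction multiplies it by $(1 + \alpha_s/\pi)$. A minimal numerical sketch (the value $\alpha_s = 0.3$ is an illustrative assumption, not taken from the text):

```python
from math import pi

# Parton-level R-ratio with the three light flavours u, d, s active:
# R = N_c * sum_f Q_f^2 * (1 + alpha_s/pi); the O(alpha_s) term is the
# finite remnant of the individually IR-divergent virtual and real cuts.
N_C = 3
CHARGES = {"u": 2/3, "d": -1/3, "s": -1/3}

def r_ratio(alpha_s: float) -> float:
    r_lo = N_C * sum(q**2 for q in CHARGES.values())  # = 2 for u, d, s
    return r_lo * (1 + alpha_s / pi)

print(r_ratio(0.0))   # LO value N_c * 6/9 = 2
print(r_ratio(0.3))   # with the illustrative alpha_s
```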
Alternatively, this can be established on grounds of the Kinoshita-Poggio-Quinn-theorem \cite{Kinoshita:1962ur,Poggio:1976qr,Sterman:1976jh} which states: \emph{In massless renormalisable theories the one-particle irreducible correlation functions are IR-finite for non-exceptional (external) Euclidean momenta.}\footnote{Non-exceptional momenta configurations are such that no subset of momenta adds to zero.} Renormalisability is important as it settles power counting for the proof and the Euclidean momenta condition avoids particles going on-shell. This applies to the case at hand since $\textrm{Im} [ \Pi(q^2) ] = \frac{1}{2i}( \Pi(q^2 + i0) - \Pi(q^2 - i0))$ and $q^2 \pm i0$ effectively counts as off-shell (or Euclidean in practice). Hence $\sigma_{\textrm{tot}}(q^2)$ must be IR-finite (in perturbation theory), as found in many explicit computations, for any $q^2 > 0$ in particular. \begin{figure}[h] \begin{centering} \includegraphics[width=10.0 cm]{figs/e+e-.pdf} \caption{\small Strong coupling corrections to the vacuum polarisation $\Pi(q^2)$ \eqref{eq:vacpol}, at $O(\alpha_s)$, which necessarily involves quarks and gluons (partons). As its imaginary part corresponds to the total cross section \eqref{eq:e+e-} the cuts give rise to various subprocesses which include the virtual and real parts. The dashed or blue cuts correspond to the virtual and the real parts respectively. \label{fig:real-emission}} \end{centering} \end{figure} One can learn a fair amount by considering the one-loop corrections (depicted in Fig.~\ref{fig:real-emission}) since the imaginary part is proportional to the discontinuity and the latter is proportional to the sum of all cuts by the Cutkosky rules (e.g. \cite{Sterman:1993hfp}). The different types of cuts include the radiative and non-radiative parts, cf. figure caption. Each one of these cuts is IR-divergent but they cancel in the sum, as dictated by the arguments given above.
That individual contributions behave very differently from the total contribution is not restricted to IR-effects but can also appear in the power-behaviour of a heavy quark mass or an external momentum in case they are assumed to be large. \subsection{Leptonic decay of the type $\pi^+ \to \ell^+ \bar \nu$} \label{sec:leptonic} We now turn to the simple example of an exclusive decay, the pion decay $\pi^+ \to \ell^+ \bar \nu$. The photon energy cut-off $E_\gamma < \delta$ (in say the pion restframe) will introduce the $\ln \delta$-terms as in \eqref{eq:IRtypeII}. This will lead to soft- and soft-collinear terms as indicated in Tab.~\ref{tab:cases}. The hard-collinear logs ($C$-type in \eqref{eq:IRtype}) are a bit peculiar in this decay in the SM since the amplitude is ${\cal O}(m_\ell )$ (and therefore automatically finite in the limit $m_\ell \to 0$). This helicity suppression is relieved for $S\!-\! P$ interactions and we thus include them alongside the $V\!-\! A$ structure in order to illustrate the straightforward nature of the hard-collinear logs in this example. In turn these logs have to disappear in the photon-inclusive limit $2 m_\pi \delta \to m_\pi^2 - m_\ell^2$. All of this will be made explicit below. The four-Fermi effective Lagrangian, including $S\!-\! P$- and $V\!-\! A$-interactions, reads \begin{equation} \label{eq:Leff} {\cal L}^{\textrm{eff}} = 4 \sqrt{2} G_F\left( C_{V\mi A} \bar u \gamma_\mu d_L \bar{\ell} \gamma^\mu \nu_L + C_{S\mi P} \bar u d_L \bar{\ell} \nu_L \right)\;, \end{equation} where $2 f_L \equiv (1-\gamma_5) f$ and in the SM $(C_{V\mi A},C_{S\mi P}) =( \VCKM{ud},0)$.
The LO amplitude is given by \begin{alignat}{2} \label{eq:AmpLepLO} & {\cal A}^{(0)}(\pi^+ \to \ell^+ \bar \nu) &\;\propto\; & C_{V\mi A} (L_0)_\mu H^\mu _0 + C_{S\mi P} L_0 H_0 \nonumber \\[0.1cm] & &\;=\; & i ( C_{V\mi A} m_\ell F_\pi - C_{S\mi P} G_\pi ) L_0 \;, \end{alignat} where the leptonic matrix element reads \begin{equation} L_0^{(\mu)} \equiv \matel{\bar{ \nu} \ell^+ }{\bar \ell \Gamma^{(\mu)} \nu } {0} = \bar u(p_\nu) \Gamma^{(\mu)} \nu(p_\ell) \;, \end{equation} with $\Gamma = (1-\gamma_5)$, $ \Gamma_\mu = \gamma_\mu \Gamma$ and the hadronic matrix elements are \begin{alignat}{3} \label{eq:Fpi} & \matel{0} {A^a_{5 \, \mu } }{ \pi^b (p)} &\;=\;& \delta^{ab} (H_0)_{\mu} &\;=\;& \phantom{-}i \delta^{ab} F_{\pi} p_{\mu} \;, \nonumber \\[0.1cm] & \matel{0} {P^a }{ \pi^b (p)} &\;=\;& \delta^{ab} H_0 &\;=\;& - i \delta^{ab} G_{\pi} \;, \quad G_{\pi} = \frac{F_{\pi} m_\pi^2}{2 m_q} = \frac{- \vev{\bar qq}}{2 F_\pi} \;, \end{alignat} with $A^a_{5 \, \mu} = \bar q T^a \gamma_{\mu} \gamma_{5} \, q$, $P^a = \bar q T^a \gamma_{5} \, q$ and $T^a$ the adjoint $SU(2)$-representation matrix corresponding to $q = (u,d)$ (with (u)p and (d)own quarks). Note that use of the equation of motion was made for the $V\!-\! A$-part in \eqref{eq:AmpLepLO} which makes the $m_\ell$-suppression factor explicit. The LO decay rate is given by \begin{equation} \Gamma(\pi^+ \to \ell^+ \bar \nu)^{(0)} = \frac{G_F^2 }{ \pi m_\pi^3} | C_{V\mi A} m_\ell F_\pi - C_{S\mi P} G_\pi |^2 |\vec{p}_\ell|^2 \;, \end{equation} where the lepton momentum, in the pion's restframe, is \begin{equation} \quad |\vec{p}_\ell| = \frac{\lambda^{1/2}(m_\pi^2,m_\ell^2,0)}{2 m_\pi} = \frac{m_\pi}{2} \left( 1- \frac{m^2_\ell}{m_\pi^2} \right) \;, \end{equation} and $\lambda(x,y,z) \equiv x^2+y^2+z^2 - 2x y -2 x z -2 y z$ denotes the K\"all\'en function.
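The helicity suppression of the $V\!-\! A$-part can be quantified directly from the LO rate: for $C_{S\mi P} = 0$ all $\ell$-independent factors drop out of the ratio $\Gamma(\pi^+ \to e^+ \bar \nu)/\Gamma(\pi^+ \to \mu^+ \bar \nu)$, leaving $m_\ell^2 |\vec{p}_\ell|^2$. A minimal numerical sketch (approximate mass values in GeV inserted by hand):

```python
# Tree-level (no QED) helicity suppression in pi -> l nu (V-A only, C_{S-P} = 0):
# Gamma^(0) is proportional to m_l^2 |p_l|^2 with |p_l| = (m_pi/2)(1 - m_l^2/m_pi^2),
# so G_F, F_pi and 1/m_pi^3 drop out of the e/mu ratio.
M_PI, M_E, M_MU = 0.13957, 0.000511, 0.10566  # GeV, approximate mass values

def gamma_lo(m_l: float, m_pi: float = M_PI) -> float:
    """LO rate up to lepton-independent factors."""
    p_l = 0.5 * m_pi * (1.0 - m_l**2 / m_pi**2)   # lepton momentum, pion restframe
    return m_l**2 * p_l**2

ratio = gamma_lo(M_E) / gamma_lo(M_MU)
print(ratio)   # ~ 1.28e-4: the electron mode is strongly suppressed
```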
Notably $F_\pi \approx 92 \,\mbox{MeV}$, a non-perturbative parameter of QCD known as the pion decay constant, is the order parameter of the spontaneous breaking of chiral symmetry $SU(2)_L \times SU(2)_R \to SU(2)_V$ (in the $m_q \to 0$ limit). When QED corrections are considered it ceases to be an observable and is essentially degraded to the status of a wave function renormalisation constant. This can be seen from the explicit results in the nice review \cite{Gasser:2010wz} where $F_\pi$ is found to be gauge dependent and divergent in the $m_\pi \to 0$ limit. Unlike in QCD, in QED the chiral logs $\ln m_\pi $ are not protected by powers in the pion mass since $F_\pi$ is not an observable. This is a point we will come back to at the end of the section. \begin{figure}[h] \begin{centering} \includegraphics[width=10.0 cm]{figs/leptonic.pdf} \caption{\small Real emission diagram of the pion decay. The diagram in the centre is the so-called contact term and does appear for the $V\!-\! A$- but not the $S\!-\! P$-interaction. The real amplitude is given in \eqref{eq:leptonic-real}. \label{fig:leptonic}} \end{centering} \end{figure} Next we discuss how to incorporate radiative corrections in the point-like approximation. This is a straightforward exercise in effective field theory. The hadronic operators are matched to pions ($\matel{0}{\pi^a}{\pi^b} = \delta^{ab}$) \begin{equation} A^a_{5 \, \mu} \to - F_\pi D_\mu \pi^a \;, \quad P^a \to - i G_\pi \pi^a \;, \end{equation} such that the LO matrix element \eqref{eq:Fpi} is reproduced. The momentum dependence in the axial current \eqref{eq:Fpi} enforces a covariant derivative, $D_\mu \pi^a = (\partial_\mu + i e Q_{\pi^a} A_\mu) \pi^a$, which gives rise to a so-called contact term.
The leading radiative amplitude is given by \begin{alignat}{2} \label{eq:leptonic-real} & {\cal A}^{(1)}(\pi^+ \to \ell^+ \bar \nu \gamma) \propto \sum_{i } C_i & & \left( \hat{Q}_{\ell_1} \bar u \frac{2 \epsilon^* \cdot \hat{\ell}_1+ \slashed{\epsilon}^* \slashed{k} }{ 2 k \cdot \hat{\ell}_1 } (\Gamma \cdot H_0)_i v + \hat{Q}_{\bar{\ell}_2} \bar u (\Gamma \cdot H_0)_i \frac{2 \epsilon^* \cdot \hat{\ell}_2+ \slashed{k} \slashed{\epsilon}^* }{2 k \cdot \hat{\ell}_2 } v \; + \right. \nonumber \\[0.1cm] & & & \left. \hat{Q}_{\pi} (L_0 \cdot H_0)_i|_{p \to \bar{p}} \frac{\epsilon \cdot (\hat{p} + \hat{\bar{p}} ) }{ 2 k \cdot \hat{p} } + \hat{Q}_{\pi} (L_0 \cdot H_0)_i|_{p \to \epsilon^*} \right) \;, \end{alignat} where $\bar{p} = p - k$, $ \ell,\nu \to \ell_1 ,\ell_2$ in order to be more general, $i = S \!-\! P, V \!-\! A$ and the conventions are the same as in \cite{Isidori:2020acz}: $\hat{Q}_j = \pm Q_j$ and $\hat{p}_j = \pm p_j$ for out(in)-going states. Let us give some more detail on the notation. The last term in \eqref{eq:leptonic-real}, and centre of Fig.~\ref{fig:leptonic}, is the so-called contact term, only present for $V\!-\! A$ as mentioned above. In addition, the following compact notation has been introduced \begin{equation} (L_0 \cdot H_0)_i = \left\{\begin{array}{ll} (L_0)_\mu H^\mu _0 & i = V \!-\! A \\[0.1cm] L_0 H _0 & i = S \!-\! P \end{array} \right. \;, \end{equation} likewise for $L_0 \to \Gamma$. The terms of the Low-theorem (cf. App.~\ref{app:Low}) are explicit which include the ${\cal O}(E_\gamma^{-1})$ eikonal terms \begin{equation} \label{eq:eikonal} {\cal A}^{(1)} = {\cal A}^{(0)} \sum_i \hat{Q}_i \frac{\epsilon^* \cdot \hat{p}_i}{k \cdot \hat{p}_i} + {\cal O}(E_\gamma^0) \;, \end{equation} and the ${\cal O}(E_\gamma^0)$-term related to the angular momentum can be seen in the leptonic parts.
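The gauge-invariance condition discussed next can be previewed on the eikonal part \eqref{eq:eikonal}: substituting $\epsilon^* \to k$ collapses each term to $\hat{Q}_i$, so the eikonal factor vanishes precisely when $\sum_i \hat{Q}_i = 0$. A minimal numerical sketch with hypothetical momenta (values not taken from the text):

```python
# Ward-identity check on the eikonal factor S(eps) = sum_i Qhat_i (eps.p_i)/(k.p_i):
# for eps -> k every term reduces to Qhat_i, hence S(k) = sum_i Qhat_i.
def mdot(a, b):
    """Minkowski product, signature (+,-,-,-)."""
    return a[0]*b[0] - a[1]*b[1] - a[2]*b[2] - a[3]*b[3]

def eikonal(eps, k, charges, momenta):
    return sum(q * mdot(eps, p) / mdot(k, p) for q, p in zip(charges, momenta))

# hypothetical kinematics: pion at rest (incoming, Qhat = -1 in the hatted
# convention), outgoing charged lepton (Qhat = +1), lightlike photon momentum k
p_pi  = (0.1396, 0.0, 0.0, 0.0)
p_ell = (0.1098, 0.02, -0.01, 0.015)
k     = (0.05, 0.03, 0.04, 0.0)            # k.k = 0: on-shell photon

s_conserved = eikonal(k, k, [-1.0, +1.0], [p_pi, p_ell])
s_violated  = eikonal(k, k, [-1.0, +0.5], [p_pi, p_ell])
print(s_conserved)   # 0: charges sum to zero -> gauge invariant
print(s_violated)    # nonzero: broken charge conservation spoils eps -> k
```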
Gauge invariance amounts to ${\cal A}^{(1)}|_{\epsilon \to k} = 0$ and does hold provided $\sum_i \hat{Q}_i = \hat{Q}_{\ell_1} + \hat{Q}_{\bar{\ell}_2} + \hat{Q}_{\pi} = 0$ (which is nothing but charge conservation). The latter has to be imposed in gauge-fixed perturbation theory but would be automatic in a manifestly gauge invariant formalism such as the path-integral used in lattice simulations. Hence the radiative amplitude is gauge invariant and thus the virtual (or non-radiative) amplitude must be as well. In particular in the virtual amplitude the gauge dependence of the ${\cal O}(\alpha)$ pion decay constant cancels against the lepton-pion and lepton radiative corrections.\footnote{In fact in the virtual case one finds that the covariant gauge-fixing parameter $\xi$ appears in the form ${\cal A}^{(2)} \propto \xi (\sum_i \hat{Q}_i)^2 + \dots$ and is again effectively absent because of charge conservation \cite{Isidori:2020acz}. This time the charge condition is quadratic of course.} As previously said, we present the $S\!-\! P$- and $V\!-\! A$-interaction separately as they exhibit different features. \subsubsection{Leading logs with $S\!-\! P$-interaction} \label{sec:SP} For the $S\!-\! P$-interaction ($C_{S\mi P} \neq 0, \, C_{V\mi A}=0$) we may parameterise the ${\cal O}(\alpha)$ rate as follows\footnote{Soft logs are proportional to the LO rate but not the hard-collinear ones, which arise in differential distributions (cf. \eqref{eq:magic} in the next section).} \begin{equation} \label{eq:leptonic-rate} \Gamma(\pi^+ \to \ell^+ \bar \nu) = \Gamma(\pi^+ \to \ell^+ \bar \nu)^{(0)}(1 + \frac{\alpha}{4 \pi}\left(F_{\textrm{soft}}(\hat{m}^2_\ell , 2 \hat{\delta}) + F_{\textrm{coll}}(\hat{\delta}) \ln \hat{m}_\ell + \textrm{non-log} \right)) \;, \end{equation} where ``non-log'' stands for anything that is neither a soft, soft-collinear nor hard-collinear log. Hatted quantities, except charges, are understood to be divided by the pion mass in this section.
The quantity $\delta $ is the previously introduced photon energy cut-off and its photon-inclusive limit is $2 \hat{\delta} \to 1-\hat{m}_\ell^2$. Below we discuss both $F_{\textrm{soft}}$ and $F_{\textrm{coll}}$ without resorting to the full computation. \begin{itemize} \item \emph{The soft and soft-collinear terms} are universal and given by \begin{equation} \label{eq:soft} F_{\textrm{soft}}(x,y) = - (4 \frac{1+x^2}{1-x^2} \ln x^2 + 8) \ln y \;, \end{equation} and its exponentiation is well established \cite{Yennie:1961ad,Weinberg:1965nx} \begin{equation} \label{eq:resum} \Gamma( \alpha \to \beta) = \Gamma( \alpha \to \beta)^{\textrm{LO}} \exp ( - A \ln \frac{\lambda}{\Lambda}) \;, \quad \end{equation} where $\lambda$ and $\Lambda$ are IR and UV cut-offs. These are to be replaced in practice with $ \delta$ and the largest scale in the problem; beyond that they are defined only up to so-called finite terms, undetermined in the leading log approximation.\footnote{We will have more to say on how this happens in computations in Sec.~\ref{sec:semileptonic}. The breaking of Lorentz-invariance by a photon energy cut-off imposed in a specific frame poses a practical challenge.} Now, the factor $A$ has a pleasing form \begin{equation} \label{eq:A} A = \frac{e^2}{8 \pi^2} \sum_{i,j} \hat{Q}_i\hat{Q}_j \frac{1}{2 \beta_{ij}} \ln \frac{1+\beta_{ij}}{1-\beta_{ij}} \;, \end{equation} where the sum is over the charged particles in the decay and \begin{equation} \beta_{ij} = \frac{ \beta_i + \beta_j}{1+\beta_i \beta_j} \;, \end{equation} is the relativistic addition of the velocities of the $i,j$-particles in the $ij$-restframe. With $\beta_{ii}= 1$ for $i = \pi^+,\ell^+$ (since the relative velocities are zero) and with $\beta_{\ell \pi} = (1 - \hat{m}_\ell^2)/(1+\hat{m}_\ell^2)$ one recovers \eqref{eq:soft}. It is instructive to reproduce the leading term from the eikonal part \eqref{eq:eikonal} which is of course what the original papers did.
Following \cite{Isidori:2020acz} we denote the decay rate as \begin{alignat}{2} \label{eq:schema} & d \Gamma &\;=\;& d \Gamma^{\textrm{LO}} + \frac{\alpha}{\pi} \sum_{i,j} \hat{Q}_i \hat{Q}_j ( {\cal H}_{ij} + {\cal F}_{ij}(\delta) ) d\phi_f = d \Gamma^{\textrm{LO}}(1 + \Delta_{\text{rel}} \, d \phi_f) \;, \end{alignat} where ${\cal H}$ and ${\cal F}$ stand for the non-radiative and the radiative part respectively and $\Delta_{\text{rel}} $ is the relative correction, not to be confused with the photon energy cut-off, which is a function of the non-trivial differential variables $d \phi_f = \prod_{i=1}^{n_f} d\vartheta_i$ (with $n_f = 0$ and $n_f=2$ in the leptonic and semileptonic case respectively). Making use of gauge invariance by choosing the Feynman gauge $\xi =1$ and performing the polarisation sum $\sum_{\lambda} \epsilon^*_\mu(\lambda) \epsilon_\nu(\lambda) = - g_{\mu \nu} + (1-\xi) k_\mu k_\nu/k^2 \to -g_{\mu\nu}$ over the eikonal part, one gets \begin{alignat}{1} & {\cal F}_{ij}(\delta) = (2 \pi)^2 \int_{\delta} \frac{- {p}_i \cdot {p}_j }{( k \cdot{p}_i )( k \cdot {p}_j ) } d\Phi_\gamma = - K_R(\delta) I_{ij}^{(0)} + \textrm{non-soft} \;, \label{eq:F02} \end{alignat} where ``non-soft'' stands for finite non-logarithmic regularisation dependent terms. The $K_R(\delta)$-term is the regularisation dependent energy integral and $ I_{ij}^{(0)}$ an angular integral. In the leading log approximation $K_R(\delta)$ and $ I_{ij}^{(0)}$ are separately Lorentz invariant \cite{Isidori:2020acz}. This is non-trivial since the photon energy cut-off introduces a preferred frame and complicates the analytic evaluation of the non-approximated integrals.
More concretely, \begin{equation} K_R(\delta) =\int _{0}^{\delta}\frac{dE_{\gamma }}{E_{\gamma }} = \left\{\begin{array}{ll} -\frac{1}{2} \ln \frac{m_\gamma}{\mu} +\ln \left ( \frac{ \delta}{\mu } \right ) +{\cal O}(m_\gamma ) & m_\gamma \textrm{-reg} \\ - \frac{1}{2\epsilon} +\ln \left ( \frac{2\delta }{\mu } \right ) + {\cal O}(\epsilon ) & \text{dim-reg} \end{array} \right. \;, \end{equation} given in dimensional regularisation $d = 4 - 2 \epsilon$ and photon mass regularisation (cf. App.~D of \cite{Isidori:2020acz} for some more detail). The angular integral produces a term \begin{equation} \label{eq:soft-integral} I_{ij} = \int d \Omega \frac{E_\gamma^2 p_i \cdot p_j}{ (k \cdot p_i ) (k \cdot p_j)} = \frac{1}{2 \beta_{ij}} \ln \frac{1+\beta_{ij}}{1-\beta_{ij}} = 1 + {\cal O}(\beta_{ij}) \;, \end{equation} which matches the expression in \eqref{eq:A} and thus reproduces \eqref{eq:soft} as outlined earlier. \item \emph{The hard-collinear logs} can be obtained from the splitting function, which has been verified in \cite{Isidori:2020acz} for the more advanced semileptonic case.
The formula for the collinear logs reads \begin{alignat}{2} \label{eq:collin} & \Delta_{\text{rel}}|_{\ln m_\ell} &\;=\;& - \frac{\alpha}{\pi} \hat{Q}^2_{\ell^+} \ln \hat{m}_\ell \left( \frac{d\Gamma^{\textrm{LO}}} {d \phi_f} \right)^{-1} \int^1_{z(\hat{\delta})} d z P_{f \to f \gamma}(z) \frac{d \Gamma^{\textrm{LO}} }{d \phi_f } ( f_i (z)\vartheta_i ) \nonumber \\[0.1cm] & &\;\to\;& - \frac{\alpha}{\pi} \hat{Q}^2_{\ell^+} \ln \hat{m}_\ell \int^1_{1- 2 \hat{\delta}} {d z} P_{f \to f \gamma}(z) \nonumber \\[0.1cm] & &\;=\;& - \frac{\alpha}{\pi} \hat{Q}^2_{\ell^+} \ln \hat{m}_\ell \left( \frac{3}{2} - 2 \hat{\delta}(2 - \hat{\delta} ) \right)\;, \end{alignat} (and thus $F_{\textrm{coll}} (\hat{\delta}) = -4 \hat{Q}^2_{\ell^+} ( \frac{3}{2} - 2 \hat{\delta}(2 - \hat{\delta} ))$) with fermion splitting function \begin{equation} \label{eq:P} P_{f \to f \gamma}(z) = \frac{1 + z^2}{(1- z)_+} + \frac{3}{2} \delta(1-z) \;, \end{equation} where $\delta(1-z)$ is a Dirac delta function and $\frac{1}{(1-z)_+}$ is the plus distribution $\int_0^1 dz \frac{f(z) }{(1-z)_+}= \int_0^1 dz \frac{ f(z)- f(1)}{1-z}$.\footnote{This is just one specific way to regularise. Alternatively one may use for instance ${P}_{f \to f \gamma}(z) = \lim_{z^* \to 0} \left[ \frac{1 + z^2}{(1- z)}\theta((1-z^*)-z) +( \frac{3}{2} + 2 \ln z^*)\delta(1-z) \right] $.} For the leptonic case the formula is trivial since there are no phase space variables. Crucially, in the photon-inclusive limit $2 \hat{\delta} \to 1$ the hard-collinear logs cancel, $ F_{\textrm{coll}} (\frac{1}{2}) =0$, in accordance with the KLN-theorem. This has to hold since $\int_0^1 dz P_{f \to f \gamma}(z) = 0$ which in turn follows from the conservation of the electromagnetic current (as it is related to the current's anomalous dimension which vanishes). \end{itemize} \subsubsection{Leading order result with $V\!-\!
A$-interaction as in the Standard Model} \label{sec:lepVA} The Standard Model computation ($C_{S\mi P} = 0,\, C_{V\mi A} \neq 0$) has of course been obtained a long time ago \cite{Kinoshita:1959ha,Marciano:1993sh}; we quote \begin{equation} \label{eq:leptonic-rate} \Gamma(\pi^+ \to \ell^+ \bar \nu) = \Gamma(\pi^+ \to \ell^+ \bar \nu)^{(0)}(1 + \frac{\alpha}{4 \pi}\left(- 3 \ln \hat{m}_W^2 + F(\hat{m}^2_\ell , 2 \hat{\delta}) \right)) \;, \end{equation} and comment on the various terms further below. In \eqref{eq:leptonic-rate} $- 3 \ln {\hat{m}_W^2}$ incorporates the matching to the $M_W$-scale \cite{Marciano:1993sh}. The explicit radiative function $ F(x,y)$ is given by \cite{Carrasco:2015xwa} \begin{alignat}{2} & F(x,y) &\;=\;& 4 \frac{1+x^2}{1-x^2} Li_2(y) + \ln x^2 + \frac{2-10 x^2}{1-x^2} \ln x^2 - 4 \frac{1+x^2}{1-x^2} Li_2(1-x^2) -3 \nonumber \\[0.1cm] & &\;+\;& \frac{3+ y^2 + 4 y(x^2-1)}{2(1-x^2)^2} \ln (1\!-\! y) + \frac{y(4 \!-\! y \!-\! x^2)}{(1-x^2)^2} \ln x^2 + \frac{ y(22-3 y - 28 x^2)}{2(1-x^2)^2} \nonumber \\[0.1cm] & &\;+\;& F_{\textrm{soft}}(x,y) \;. \end{alignat} In the photon-inclusive case, $ F_{\textrm{inc}}(x) \equiv F(x,1-x^2)$, the radiative function assumes the form \begin{alignat}{2} \label{eq:F2} & F_{\textrm{inc}}(x) &\;=\;& - 8 \ln(1 - x^2) - \frac{3 x^2}{(1-x^2)^2} \ln x^2 - 8 \frac{1+x^2}{1-x^2} Li_2(1-x^2) \nonumber \\[0.1cm] & &\;+\;& \frac{13 - 19x^2}{2(1-x^2)} + \frac{6 - 14 x^2 - 4 (1+x^2) \ln (1 \!-\! x^2)}{ 1-x^2} \ln x^2 \;. \end{alignat} Let us now turn our focus to the logs as in the previous section: \begin{itemize} \item \emph{The soft and soft-collinear terms} are universal and $F_{\textrm{soft}}(x,y) $ is indeed the same function as in \eqref{eq:soft}. \item \emph{Hard-collinear logs}, of the type $\ln m_\ell$, are not present. The LO $V\!-\! A$-amplitude is ${\cal O}(m_\ell)$-suppressed, and this is enough to guarantee their absence at ${\cal O}(\alpha)$, which can be seen as follows.
In the real radiation rate the $\ln m_\ell$-terms arise from the eikonal part \eqref{eq:eikonal}, which is proportional to the LO amplitude of ${\cal O}(m_\ell)$, and thus the logs can at worst be of the form $m^2_\ell \ln m_\ell^2$ in the rate. Since the $\ln m_\ell$-terms in the virtual and the real part of the ${\cal O}(\alpha)$ rate have to cancel, the virtual rate cannot contain them either. We conclude that ${\cal O}(\alpha) m_\ell^2 \ln m_\ell$ are the leading logs of this type. Since the limit $m_\ell \to 0$ is not divergent these logs do not have to cancel, contrary to the $S\!-\! P$-case. Inspection of \eqref{eq:F2} shows that they do indeed \emph{not cancel} since $F = - 3\hat{m}_\ell^2 \ln \hat{m}_\ell^2+ \dots$. It seems worthwhile to briefly pause and reflect. If the ``naive'' equation of motion, linking $V \!-\! A$ to $S \!-\! P$, were to apply, it would be possible to obtain the $S\!-\! P$-computation from the $V\!-\! A$ one. This holds for the real part but not the virtual part as in this case the photon in the derivative interaction is \emph{not} an external on-shell particle. The moral of the story is that collinear logs only cancel if they have to, by the principle of unitarity which underlies the KLN-theorem. \item \emph{A different type of collinear log:} We may however turn the tables and consider the decay $\tau^- \to \pi^- \bar \nu$ and regard $\ln m_\pi$ as the collinear log. The amplitude, which is identical to the one for the leptonic decay, is not ${\cal O}(m_\pi)$-suppressed; thus there will be $\ln m_\pi$ terms in the real and the virtual part of the rate and they have to cancel in the total rate. There are some differences in the integration over phase space for the radiative part but not for the relevant eikonal terms.
Inspecting \eqref{eq:F2}, taking the $1/x \to 0$ limit and adding the log in \eqref{eq:leptonic-rate}, one collects $\frac{\alpha}{4 \pi}(6 + 16 + 6 + 0 + 0 -28 ) \ln m_\pi = 0$ and it is seen that the logs do cancel, as they have to! \end{itemize} \subsection{Semileptonic decay of the type $B \to \pi \ell^+ \bar \nu$ } \label{sec:semileptonic} The new element in the semileptonic decay $B \to \pi \ell^+ \bar \nu$ is the extra meson in the final state, leading to two non-trivial kinematic variables. They can be chosen to be the Dalitz-plot variables or the more commonly used lepton-pair momentum squared $q^2 = (\ell_1+ \ell_2)^2$ and the angle $\theta$ of a lepton to the decay axis in the $q$-restframe (as depicted in Fig.~\ref{fig:semileptonic}). Hence the LO decay is differential, unlike in the leptonic case (cf. for instance App.~B.1 in \cite{Isidori:2020acz} for the explicit result). A noticeable aspect is that QED, unlike the weak interaction term \eqref{eq:Leff}, gives rise to higher moments in the lepton angle \cite{Gratrex:2015hna}. In many ways the QED-treatment of the semileptonic decay $B \to \pi \ell^+ \bar \nu$ in the point-like approximation is similar to the leptonic decay and we shall be brief on those matters. There are also new aspects which bring in a certain amount of complication, which we identify and examine more closely: \begin{enumerate} \item The role of the pion decay constant $F_\pi$ is taken by two form factors $f^{B \to \pi}_\pm(q^2)$, \begin{alignat}{2} & \matel{0} {A^a_{5 \, \mu } }{ \pi^b (p)} &\;=\;& i \delta^{ab} F_{\pi} p_{\mu} \to \nonumber \\[0.1cm] & \matel{\pi }{ V_\mu }{ B } &\;=\;& f^{B \to \pi}_+(q^2) (p_B\!+\! p_\pi)_\mu + f^{B \to \pi}_-(q^2) (p_B\!-\! p_\pi)_\mu \;. \end{alignat} Often in the literature the form factor is taken to be a constant, which is a good approximation in $K \to \pi \ell^+ \bar \nu$ but less so for $B \to \pi \ell^+ \bar \nu$.
Expanding the form factor in $q^2$, as in \cite{Isidori:2020acz}, leads to a more involved effective theory which goes beyond the point-like approximation. The effect of the expansion is most prominent when the photon energy cut is large due to migration of radiation (for which we refer the reader to the plots in App.~A in \cite{Isidori:2020acz}).\footnote{The flavour changing neutral current (FCNC) case is peculiar in that for $B^0 \to \pi^0 \ell^+ \ell^-$ the form factor expansion amounts to the replacement of the constant form factor by $f_\pm \to f_\pm(q^2)$, whereas in the charged case $B^+ \to \pi^+ \ell^+ \ell^-$ the expansion is necessary and could be relevant because of the migration of radiation in conjunction with resonance-contributions entering non-resonant bins.} \item For the radiative matrix element the $(q^2 ,\theta)$-variables have to be adapted because of the additional photon. We follow the discussion in \cite{Isidori:2020acz} (replacing the kaon by the pion) where the following kinematic variables \begin{equation} \label{eq:tr} \{ q^2_{\AAA}, c_{\AAA} \} = \left\{ \begin{array}{ll} q_\ell^2 = (\ell_1+\ell_2)^2\;, ~ & c_{\ell} = - \left(\frac{\vec{\ell_1}\cdot \vec{p}_{\pi}}{ |\vec{\ell_1}| | \vec{p}_{\pi}| } \right)_{q-\textrm{RF}} \\ q^2_0 = (p_B-p_\pi)^2~, & c_{0}= - \left(\frac{\vec{\ell_1}\cdot \vec{p}_{\pi}}{ |\vec{\ell_1}| | \vec{p}_{\pi}| } \right)_{q_0 -\textrm{RF}} \;, \end{array} \right. \end{equation} are defined with $q-\textrm{RF}$ and $q_0-\textrm{RF}$ denoting the $q$ and $q_0 \equiv q+ k$ restframes respectively. Note that the $(q_0^2, c_{0})$-variables, unlike at an $e^+ e^-$-collider, are difficult to measure at a hadron collider, where the components of the $B$-momentum are unknown. \item The LO amplitude is not ${\cal O}(m_\ell)$-suppressed and a priori it is only the total (photon-inclusive) rate which is well-defined in the $m_\ell \to 0$ limit.
As previously stated, for finite $m_\ell$, as in the real world, this leads to a sizeable and measurable effect. This raises the interesting question as to whether any of the differential variables in \eqref{eq:tr} are collinear-safe (i.e. $m_\ell \to 0$ can be taken). \item The photon interacts with many particle-pairs and this complicates the analytic evaluation of the phase-space integrals as one can choose the restframe only once. As previously discussed, the energy- and soft-integrals \eqref{eq:soft-integral} are separately Lorentz-invariant in the soft-limit and can therefore each be evaluated using a separate preferred frame \cite{Isidori:2020acz}. \end{enumerate} Now, point 4 is partly covered in App.~\ref{app:pragmatic} and all aspects of point 1 are covered in \cite{Isidori:2020acz}. Let us just briefly mention that as long as a constant form factor is assumed or the mesons are neutral, the computation of the real and virtual amplitude is very similar to the leptonic case, albeit technically more involved. Points 2 and 3 deserve a closer look and are the topic of the next section. \subsubsection{(Non)-collinear safe differential variables} The soft-divergences, which have to cancel at the differential level, can of course be derived using the same techniques as for the lepton case \eqref{eq:resum} (with relevant practical remarks deferred to App.~\ref{app:pragmatic}). The hard-collinear divergences have been isolated using the phase space slicing technique. They cancel charge by charge in the photon-inclusive total rate in accordance with \eqref{eq:total-cancel}. \begin{figure}[h] \begin{centering} \includegraphics[width=7.0 cm]{figs/semileptonic.pdf} \caption{\small Sketch of semileptonic (non-radiative) decay $B \to \pi \ell^+ \bar \nu$ with the definition of the lepton angle $\theta$ (and $q^2 = (p_{\ell^+} + p_\nu)^2$).
The definition of these variables needs to be revised when the photon emission is considered in addition \eqref{eq:tr}.} \label{fig:semileptonic} \end{centering} \end{figure} Let us now turn to the question, phrased in point 3, of whether or not these logs cancel in one of the differential variables defined in \eqref{eq:tr}. It is found by explicit computation that the $\ln m_\ell$-terms cancel in the $(q_0^2, c_{0})$- but not the $(q^2, c_{\ell})$-variables \cite{Isidori:2020acz}. We wish to discuss this result from a physics viewpoint. The cancellation of soft-divergences at the differential level is quite plausible since the soft photon does not make a difference to the radiative versus the non-radiative decay topology. For the (energetic) collinear photon this is not the case. The topologies of the radiative and non-radiative amplitude are rather different and a priori one would not expect cancellations. In the total rate these cancellations are non-trivial and based on unitarity as emphasised earlier. Thus it is natural to ask whether it can be understood from this viewpoint. The answer is affirmative. \begin{itemize} \item The $q_0^2$-variable is the invariant mass squared of the total lepton-photon system and for fixed $q_0^2$ one may interpret it as a decay of a boson of mass $\sqrt{q_0^2}$ into the two leptons and the photon, e.g. $W^+(q_0) \to \ell^+ \bar \nu (\gamma)$. This decay is not differential (in its non-radiative part), just as in the leptonic case, and thus the $\ln m_\ell$ terms have to cancel by virtue of the KLN-theorem. \item Alternatively one may regard $q_0^2$ as the analogue of a jet where radiative and non-radiative parts are not distinguished and the problem of discerning the lepton from the lepton with a photon emitted at an infinitesimally small angle does not pose itself. This is the pedestrian version of the IR-safety criterion which states that an observable $\Phi_n$ of $n$-particles is collinear-safe if (e.g.
\cite{Contopanagos:1996nh}) \begin{alignat}{1} & \Phi_n(p_1, \dots p_i,p_j, \dots p_n) \to \Phi_{n-1}(p_1, \dots p_{ij}, \dots p_n) \;, \quad p_i \parallel p_j \;, \quad p_{ij} = p_i+p_j \;, \end{alignat} is smooth. Clearly this is the case for the $q_0^2$-variable but not for the $q^2$-variable when taken differential. \end{itemize} \paragraph{Cancellation of hard-collinear logs in total rate:} It is instructive to illustrate the cancellation of the hard-collinear terms in the total rate. Applying formula \eqref{eq:collin} to the case where we keep the differential variable $q^2$ one gets \begin{equation} \frac{d\Gamma}{d q^2} |_{\ln \hat{m}_\ell} = - \frac{\alpha}{\pi} \hat{Q}^2_{\ell^+} \ln \hat{m}_\ell \int^1_{q^2/m_B^2} \frac{d z}{z} P_{f \to f \gamma}(z) \frac{d \Gamma^{\textrm{LO}} }{dq^2 } ( q^2/z) +{\cal O}(1) \;, \end{equation} where the factor $1/z$ is a Jacobian from the change of variables $q_0^2 = q^2/z$ (with $z$ the energy fraction of the lepton after collinear splitting). The lower integration boundary of $z$ is the photon-inclusive limit, neglecting ${\cal O}(m_{\pi,\ell})$-terms. If we perform the integration over the $q^2$ phase space then the $\ln \hat{m}_\ell $-terms have to cancel according to the KLN-theorem. This is indeed the case, \begin{alignat}{2} \label{eq:magic} &\Gamma^{\textrm{tot}} |_{\ln \hat{m}_\ell} = \int_{0}^{m_B^2} dq^2 \frac{d\Gamma}{d q^2} |_{\ln \hat{m}_\ell} &\;\propto\;& \int_0^1 \frac{dz}{z} \int_0^{z m_B^2} dq^2 P_{f \to f \gamma}(z) \frac{d\Gamma^{\textrm{LO}}}{d q^2}(q^2/z) \nonumber \\[0.1cm] & &\;=\;& \int_0^1 dz P_{f \to f \gamma}(z) \int_0^{ m_B^2} d q_0^2 \frac{d\Gamma^{\textrm{LO}}}{d q^2}(q_0^2) =0 \;, \end{alignat} where in the first equation the order of integration has been exchanged and in the second equation the change of variables $q_0^2 = q^2/z$ was performed. This is, of course, precisely the collinear-safe variable $q_0^2$.
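The collinear-safety criterion can also be illustrated numerically: for an exactly collinear photon, the photon-inclusive variable $q_0^2$ is unchanged when the photon is merged into the lepton, whereas $q^2$ jumps. A minimal sketch (the four-momenta below are arbitrary illustrative values, not taken from the text):

```python
import numpy as np

def minkowski_sq(p):
    """Invariant square with the (+,-,-,-) metric."""
    return p[0]**2 - np.dot(p[1:], p[1:])

# illustrative massless four-momenta (E, px, py, pz)
ell = np.array([3.0, 0.0, 0.0, 3.0])   # charged lepton
k   = np.array([1.0, 0.0, 0.0, 1.0])   # photon, exactly collinear to the lepton
nu  = np.array([2.0, 0.0, 1.2, -1.6])  # neutrino

q2_rad     = minkowski_sq(ell + nu)         # q^2: photon excluded
q02_rad    = minkowski_sq(ell + nu + k)     # q_0^2: photon included
q02_merged = minkowski_sq((ell + k) + nu)   # (n-1)-particle configuration

assert np.isclose(q02_rad, q02_merged)      # q_0^2 is collinear-safe
assert not np.isclose(q2_rad, q02_rad)      # q^2 jumps under the merging
```

In the language of the criterion, $\Phi_{n-1}$ evaluated on the merged momentum $p_\ell + k$ reproduces $q_0^2$ exactly, while $q^2$ is discontinuous at vanishing photon angle, which is the origin of the uncancelled $\ln m_\ell$-terms in that variable.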
The vanishing of the integral, $\int_0^1 dz\, P_{f \to f \gamma}(z) = 0$, has been previously discussed in \eqref{eq:P}. In conclusion, the hard-collinear logs vanish in the full rate, independent of the specific decay. The assumption is of course that the splitting function reproduces all the logs. This fails if the $m_\ell \to 0$ limit can be taken, such as in the leptonic decay in the SM (cf. Sec.~\ref{sec:lepVA}), where the amplitude and the leading logs are ${\cal O}(m_\ell)$-suppressed. \section{Structure-dependent QED corrections - Resolving the Hadrons} \label{sec:beyond} \subsection{Summary on status of structure-dependent QED corrections} \label{sec:SDcorr} The field of QED corrections to hadronic decays including structure-dependent corrections (i.e. going beyond the point-like approximation) is not yet at a mature stage. The physical picture is well-motivated by the hydrogen atom, where the proton and electron make up a charge-neutral object but photonic interaction nevertheless plays an important role. Thus a photon can hardly be expected not to interact with a neutral $B$-meson composed of a $b$- and a $d$-valence quark. It is precisely for this meson that one can expect the largest effects, as it is composed of a heavy and a light quark. There are various reasons why this is a difficult task. One of them is of course the cancellation of IR-divergences, which forces one to consider real radiation, a task that goes beyond standard flavour physics and interferes with confinement at long distances. Amongst the continuum methods there are chiral perturbation theory, light-cone approaches such as soft-collinear effective theory (SCET) and heavy quark symmetry. QED in chiral perturbation theory is well established \cite{Cetal01,CGH08,Descotes-Genon:2005wrq}, and its main challenge is the determination of the counterterms (which seem to follow the pattern of vector resonance saturation as in QCD).
In SCET the leptonic FCNC decay $B_s \to \mu^+ \mu^-$ has been investigated in \cite{BBS17,Beneke:2019slt}, with the main parametric uncertainty coming from the QCD $B$-meson distribution amplitude. Hadronic decays of the type $B \to K \pi$ have been investigated in \cite{Beneke:2020vnb}, and the definition of the charged light-meson distribution amplitudes is non-trivial \cite{Beneke:2019slt}. A remarkable aspect is that so far in SCET only virtual contributions have been considered; real radiation is only incorporated via the universal soft-photon part \eqref{eq:resum}. Heavy quark symmetry has been found to be constraining in $B \to D^{(*)} \ell \nu (\gamma)$ decays \cite{Papucci:2021ztr} (in the appropriate kinematic region). Lattice QCD + QED comes with its own challenges, such as accommodating the massless photon in a finite box (cf. \cite{Patella:2017fgk} for a review). There are, by now, four main programs. In QED$_L$ the spatial zero mode of the photon, which is in tension with the finite volume, is removed at the cost of locality, and non-gauge-invariant interpolating operators are used for the charged mesons \cite{Carrasco:2015xwa}. In this approach finite-volume effects on hadronic observables (hadron masses and leptonic decay rates) are power-like rather than exponentially suppressed. In the context of leptonic decays, the leading universal finite-volume effects have been determined up to $O(1/L)$ in \cite{Lubicz:2016xro} and up to $O(1/L^2)$ in \cite{DiCarlo:2021apt}, including structure-dependent contributions. Only virtual corrections are computed on the lattice; for the real corrections the point-like approximation is proposed, which is a good enough approach for $K^+,\pi^+ \to \ell^+ \nu$. First lattice results have been reported in \cite{Giusti:2017dwk,DiCarlo:2019thl} for these decays. A modification of this approach is QED$_\infty$, where finite-volume effects are exponentially suppressed \cite{Feng:2018qpx}.
This approach needs to be adapted case by case and has been applied to the pion mass difference \cite{Feng:2021zek}. Another approach is to work with a massive photon, emulating the continuum approach, which does not require cutting out the zero mode but introduces another scale into the problem \cite{Endres:2015gda}. First results on hadron masses have been reported in \cite{Clark:2022wjy}. A fully gauge-invariant approach to lattice QCD, building upon ideas of Dirac and others and known as $C^*$ boundary conditions, has been proposed \cite{Lucini:2015hfa}. Again, results on hadron mass determination have been reported \cite{Hansen:2018zre}. \subsection{Cancellation of hard-collinear logs for structure dependent contribution} \label{sec:noSDlogs} Technicalities aside, one may in particular be concerned that hard-collinear logs ${\cal O}(\alpha) \ln m_e/m_b$, originating from structure-dependent corrections, could lead to large uncertainties, as they are currently unknown. Fortunately, a rigorous result forbidding those logs can be established based on gauge invariance \cite{Isidori:2020acz}. The basic idea of the proof is that, for a light particle such as the electron, one has $\ell_e = k + {\cal O}(m_e^2)$ in the collinear region, which lends itself to the use of gauge invariance. We will sketch some more detail by decomposing the radiative amplitude (${\cal A}^{(1)} \to {\cal A}$ for brevity) \begin{equation} {\cal A} = \epsilon^* \!\cdot \! ( {\cal A}_e + ({\cal A} - {\cal A}_e)) \;, \qquad \epsilon^* \!\cdot \! {\cal A}_e \propto \hat{Q}_{e} \frac{\epsilon^* \cdot \ell_e}{k \cdot \ell_e} \;, \end{equation} such that the entire eikonal term of the electron is in ${\cal A}_e$. Squaring this matrix element, summing over polarisation in the Feynman gauge (cf. Sec.~\ref{sec:SP}) and integrating over the photon phase space one gets three terms \begin{equation} \int d \Phi_\gamma \, {\cal A}\!\cdot \! {\cal A}^* = \int d \Phi_\gamma ( ({\cal A} - {\cal A}_e)\!\cdot \!
({\cal A} - {\cal A}_e)^* + 2 \textrm{Re}[{\cal A}_e \!\cdot \! {\cal A}^*] - {\cal A}_e \!\cdot \! {\cal A}^*_e) \;. \end{equation} The first is by construction finite in the collinear region of the lepton $\ell_e$. The second has no hard-collinear logs since it is proportional to \begin{equation} \ell_e \cdot {\cal A} = k \cdot {\cal A} + {\cal O}(m_{e}^2) = {\cal O}(m_{e}^2) \;, \end{equation} in the collinear region. The third one gives rise to the collinear logs. First, we learn that the $\ln m_{e}$-terms are necessarily proportional to $\hat{Q}_{e}^2$ (as manifested in the splitting function approach). Second, and more importantly, there cannot be any further hard-collinear logs in the structure-dependent part. This is the case since the addition of a structure-dependent term merely changes ${\cal A} \to {\cal A} + \delta {\cal A}$, where $\delta {\cal A}$ is itself gauge invariant: it is finite in the first term, does not change the conclusion for the second, and does not enter the third at all. The result is unchanged when spin is considered, as explicitly shown for spin-$1/2$ and argued for any spin in \cite{Isidori:2020acz}. Hence the result is: \emph{any gauge invariant addition (to the point-like approximation) can at most lead to logs of the form ${\cal O}(\alpha) m_e^2 \ln m_e$.} These terms are not sizeable and in particular vanish in the chiral limit $m_e \to 0$. This result has been verified in the derivative expansion of the form factor, which is a particular approach that goes beyond the point-like approximation. This is fortunate as it puts $R_K$, or more generally tests of lepton universality, on much firmer grounds, since Monte Carlo tools such as PHOTOS do not (yet) incorporate structure-dependence. \section{Discussions and Conclusions} \label{sec:conclusions} QED corrections have a long history. In particular, electromagnetic corrections have been a vehicle for the development of quantum mechanics and QFT.
The massless photon leads to IR-effects which have a high degree of universality. The Bloch-Nordsieck cancellation mechanism from 1937 predates the solid development of QED in the 1940s and is a strong indication of universality in the IR-domain. The IR-effects are interlinked with the measurement process and give rise to the largest QED corrections. We have reviewed the very basics of IR-divergences in Sec.~\ref{sec:IR}, along with the connection to the elegant coherent states formalism. How IR-effects affect predictions was the topic of Sec.~\ref{sec:examples}, including three examples of increasing IR-sensitivity: the (inclusive) $e^+ e^- \to \textrm{hadrons}$ cross section, the leptonic decay $\pi^+ \to \ell^+ \bar \nu$ and the semileptonic case $B \to \pi \ell^+ \bar \nu$ in Secs.~\ref{sec:inclusive}, \ref{sec:leptonic} and \ref{sec:semileptonic} respectively. We have highlighted the peculiarity of the leading collinear logs in the leptonic decay in the Standard Model and clarified the importance of the choice of kinematic variables in the differential distribution of the semileptonic decay types. Going beyond the point-like approximation, taking into account structure dependence, is the next step in the precision physics program of weak decays and the topic of Sec.~\ref{sec:beyond}. Different methods and approaches have briefly been discussed in Sec.~\ref{sec:SDcorr}. The text ends in Sec.~\ref{sec:noSDlogs} with the model-independent demonstration, based on gauge invariance, that the structure-dependent part does not lead to new hard-collinear logs. This is fortunate, as it will considerably reduce the uncertainty in many important observables such as the precision determination of heavy-light CKM-elements and tests of lepton flavour universality. However, the implementation of these corrections in experiment will necessitate the development or extension of Monte Carlo tools. This demands a joint effort of theory and experiment.
\vspace{6pt} \paragraph{Acknowledgement} RZ is supported by an STFC Consolidated Grant, ST/P0000630/1. I am grateful to Saad Nabeebaccus and Matt Rowe for careful reading of the notes and for comments. Correspondence with Matteo Di Carlo and Adrian Signer is further acknowledged. These notes were originally prepared for the EuroPLEx Summer School 2021, which fell victim to substantial shortening due to Covid. I intend to update these notes with regard to structure dependence in the foreseeable future.
\section{Introduction} Semantic segmentation assigns all pixels in an image to semantic classes; it gives finely-detailed pixel-level information to visual data and can build a valuable module for higher-level applications such as image answering, event detection, and autonomous driving. Conventional semantic segmentation techniques for images have been mostly built using handcrafted features on conditional random fields (CRFs) \citep{russell2009associative,ladicky2010and, yao2012describing}. Recently deep convolutional neural networks (DCNNs) have achieved great success in classification \citep{krizhevsky2012imagenet, simonyan2014very, szegedy2015going,he2015deep}, so they have been widely applied to semantic segmentation approaches \citep{long2015fully, chen2014semantic,noh2015learning,zheng2015conditional,lin2015efficient,liu2015semantic,yu2015multi}. \begin{figure}[ht]\centering \begin{tabular}{cccccc} \includegraphics[width=0.88in]{frame0152.jpg} \includegraphics[width=0.88in]{frame0252.jpg} \includegraphics[width=0.88in]{frame0038.jpg}\\ \includegraphics[width=0.88in]{frame0152_color.png} \includegraphics[width=0.88in]{frame0252_color.png} \includegraphics[width=0.88in]{frame0038_color.png} \\ (a)\\ \includegraphics[width=0.88in]{frame0001.jpg} \includegraphics[width=0.88in]{frame0031.jpg} \includegraphics[width=0.88in]{frame0061.jpg}\\ \includegraphics[width=0.88in]{frame0001_color.png} \includegraphics[width=0.88in]{frame0031_color.png} \includegraphics[width=0.88in]{frame0061_color.png} \\ (b) \end{tabular} \caption{Problems when the pre-trained DCNN model is applied directly to a video frame (top: a frame, bottom: the result). Different colors of results represent different classes. (a) Objects segmented into different classes. (b) Label wavers between visually-similar categories (from left to right: frame 1 - mixed, frame 31 - dog, frame 61 - horse). 
{\bf Best viewed in color.}} \end{figure} A video consists of a sequence of images, so image-based models can be applied to each of them. For instance, there was an attempt to apply object and region detectors pre-trained on still images to a video \citep{zhang2015semantic}; they used pre-trained object detectors to generate rough object region proposals in each frame. Similarly, we adopt an approach that employs an image-trained model to process a video, but instead of a conventional object detector, we apply a DCNN semantic segmentation model to each frame. However, the DCNN model can show spatially inconsistent labels as previously reported in \citep{qi2015semantic} when it is applied to an image. This inconsistency is exacerbated for video due to various factors such as motion blur, video compression artifacts, and sudden defocus \citep{kalogeiton2015analysing}. When the model is applied directly to video, the labeling result for an object can be segmented into different classes, and can waver between visually confusing categories among frames (Fig. 1). Human vision also experiences such difficulty of recognition under certain circumstances. Our framework is motivated by the following question: `How does a human recognize a confusing object?' When a human has difficulty identifying the object, she can guess its identity by using learned models and focusing on the object for a while. Then, she can recognize the object by referring to the moments during which it is unambiguous; i.e., she tunes her model to the specific object while regularizing small appearance changes. In a way analogous to this process, our framework takes advantage of multiple frames of a video by emphasizing confidently-estimated (CE) frames to adapt a pre-trained model to a video. 
\begin{figure}[t] \begin{center} \begin{overpic}[width=1.00\linewidth]{idea4.eps} \put(2,52){\footnotesize{Video frames}} \put(8.8,18){\footnotesize{Globally confident}} \put(8.8,11.5){\footnotesize{Locally confident}} \put(24,52){\footnotesize{DCNN Semantic Segmentation}} \put(72,52.5){\footnotesize{Output}} \put(79,28.5){\footnotesize{Select frames}} \put(47,28.5){\footnotesize{Adapt model}} \put(54,17){\footnotesize{Set labels}} \put(40,3){\footnotesize{{\bf Generate self-adapting dataset}}} \end{overpic} \end{center} \caption{Main framework of our method. {\bf Best viewed in color.}} \end{figure} The key idea of our method is to propagate the belief of CE frames to the other frames by fine-tuning DCNN model; we apply a pre-trained DCNN model to each frame to guess the object's class, and collect frames in which the estimation is globally confident or locally confident. Then, we use the set of CE frames as a training set to fine-tune the pre-trained model to the instances in a video (Fig. 2). We restrict the DCNN model to be video-specific rather than general purpose; i.e., we make the model focus on the specific instances in each video. In our procedures, we only use the label of CE regions, and let the uncertainly-estimated (UE) regions be determined by the CE frames. We also incorporate weak labels (i.e., manually-annotated class labels of objects in the video) to prevent a few incorrect labels from degrading the model. Our procedures to generate a self-adapting dataset and to use CE frames to update the model can recover the uncertain or misclassified parts of UE frames that include multiple objects. We also propose an online approach that is implemented in a way similar to object tracking, because object tracking and online video object segmentation are closely related in that both tasks should trace the appearance change of an object while localizing it. Then we combine the batch and online results to improve the motion-consistency of segmentation. 
We validate our proposed method on the Youtube-Object-Dataset \citep{prest2012learning,jain2014supervoxel, zhang2015semantic}. In experiments our model greatly improved the pre-trained model, mitigating its drawbacks even when the weak labels were not used. \section{Related work} Recent image semantic segmentation techniques have been propelled by the great advances of DCNNs for the image classification task \citep{krizhevsky2012imagenet, simonyan2014very, szegedy2015going,he2015deep}. Based on the classification network, \cite{long2015fully} extended a convolutional network to a fully-convolutional end-to-end training framework for pixel-wise dense prediction. \cite{chen2014semantic} used a hole algorithm to efficiently compute dense feature maps, and combined the output of the network with a fully-connected CRF. Several follow-up studies proposed more-sophisticated combinations of CRF frameworks with DCNNs \citep{zheng2015conditional,lin2015efficient,liu2015semantic}. \cite{yu2015multi} proposed a modified architecture that aggregates multi-scale context by using dilated convolutions specifically designed for dense prediction. Due to the difficulty of pixel-wise annotation for frames, most video semantic object segmentation techniques have been built in a weakly-supervised setting, in which only video-level class labels of the objects appearing in a video are given. \cite{hartmann2012weakly} trained weakly-supervised classifiers for several spatiotemporal segments and used graphcuts to refine them. \cite{tang2013discriminative} determined positive concept segments by using a concept-ranking algorithm to compare all segments in positive videos to the negative videos. \cite{liu2014weakly} proposed a label transfer scheme based on nearest neighbors. The natural limitation of the weakly-supervised approach is that it has no information about the location of target objects.
Because a video is a sequence of images, \cite{zhang2015semantic} used object detectors that had been pre-trained on still images, and applied them to a video to localize object candidates; in each frame the method generates several object proposals by using object detectors that correspond to the given labels and by using rough segmentation proposals based on objectness. Although the object detection gives the spatial information as a bounding box around objects that have a semantic class label, the information is not sufficient for pixel-wise segmentation. Thus we use a DCNN semantic segmentation model pre-trained on images to give the pixel-wise spatial extent of the object and its semantic label at the same time, and adapt the image-based DCNN model to the input video. In contrast to most existing approaches that focus on modeling temporal consistency at pixel or region levels, our framework does not necessarily assume that the neighboring frames should be similar, and because it samples several frames that may capture different appearances of an object, our framework is relatively insensitive to sudden changes of the object. Another related topic is video object segmentation with semi-supervised video \citep{ali2011flowboost,ramakanth2014seamseg,tsai2012motion,badrinarayanan2010label,jain2014supervoxel, fathi2011combining}, which is given pixel-wise annotation of certain frames. Especially, \cite{jain2014supervoxel} proposed a method that employs supervoxels to overcome the myopic view of consistency in pairwise potentials by incorporating additional supervoxel-level higher-order potential. \cite{fathi2011combining} developed an incremental self-training framework by iteratively labeling the least uncertain frame and updating similarity metrics. 
The framework is similar to ours in that we update a model based on previously-estimated frames, although we neither assume pixel-wise annotation nor require superpixels, which often wrongly capture the boundary of the object. We also update the pre-trained DCNN model, in contrast to the method of \cite{fathi2011combining} that updates a simple similarity metric based on hand-crafted features such as SIFT and a color histogram. \section{Method: AdaptNet} We assume that a video includes at least a few CE frames and that they are helpful for improving the results of the UE frames. The main idea of our method is to propagate the belief of those CE frames by fine-tuning the DCNN model. Thus our main framework consists of the following steps: selection of CE frames, label map generation, and adaptation of the model to the input video. We describe the detailed algorithm of these steps in the following subsections. We first propose a batch (offline) framework, then an online framework. We combine those two results to improve the motion-consistency of segmentation by incorporating optical flow. At the end of the section, we mention a simple extension to an unsupervised batch algorithm. \subsection{Batch} Let $\mathcal{F}$ denote a set of frame indices, and $\mathcal{W}$ denote a set of given weak labels of the input video. We begin by applying a pre-trained DCNN model $\theta$ to each frame $f\in \mathcal{F}$, then use softmax to compute the probability $P(x_i|\theta)$ that the $i$-th pixel is a member of each class $x_i\in\mathcal O$, where $\mathcal O$ denotes the set of object classes and background. The semantic label map $S$ can be computed using the argmax operation for every $i$-th pixel: $S(i)=\arg\max_{x_i} P(x_i|\theta)$. To adapt the DCNN model $\theta$ to the input video, we collect a self-adapting dataset $\mathcal{G}$ that consists of CE frames and corresponding label maps.
We collect globally-CE and locally-CE frames and compute the respective label maps $G^g$ and $G^l$ to construct the self-adapting dataset. The procedures to select the frames and to compute the labels are described in the following and summarized in Algorithm 1. We first perform connected-component analysis on each class map of $S$ to generate a set $\mathcal{R}$ of object candidate regions. For each $k$-th segmented label map $R_k\in \mathcal{R}$ we measure the confidence $C(R_k)$ of the estimated regions, where the $C(\cdot)$ operator takes a label map as input and computes the average probability that the pixels labeled as objects in the label map have the corresponding class labels. Then we generate the label map $G^g_f$ by setting the label of the region only when its confidence exceeds a high threshold $t_o$. We also set the background label for every pixel for which the probability $P(x_i=bg|\theta)$ of being background exceeds threshold $t_b$. To complete $G^g_f$, the remaining uncertain regions must be processed. For this purpose, we let the remaining pixels have the {\it ``ignored''} label. The uncertain {\it ``ignored''} pixels are not considered during computation of the loss function for model update. We also ignore all pixels that have labels that are not in the set $\mathcal{W}$ of given weak labels of a video. We add globally-CE frames with $G^g_f$ that has at least one confident region (i.e., $C(G^g_f)>0$) to the self-adapting dataset $\mathcal{G}$. Because the selected frames might be temporally unevenly distributed, the model can be dominated by frames that are selected during a short interval. To mitigate the resulting drawback and regularize the model, we also select the locally-CE frames that have best object confidences during every period $\tau_b$ although the frames do not include globally-CE object regions. 
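The confidence operator $C(\cdot)$ used in the selection rules above can be sketched in a few lines of NumPy; the array shapes and label conventions here are our illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def confidence(prob, label_map):
    """C(.): average probability of the assigned class over object pixels.

    prob:      (H, W, K) per-pixel softmax output of the DCNN.
    label_map: (H, W) integers; 0 = background, -1 = "ignored",
               positive values = object classes.  (Assumed conventions.)
    """
    mask = label_map > 0                  # object pixels only
    if not mask.any():
        return 0.0                        # no object region at all
    h, w = np.nonzero(mask)
    return float(prob[h, w, label_map[mask]].mean())
```

A region $R_k$ would then enter $G^g_f$ when its confidence exceeds $t_o$, and the frame-level scores $C(G^g_f)$ and $C(G^l_f)$ drive the globally- and locally-CE selections.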
We determine the locally-CE frame and its label map $G^l$ as follows: we generate a label map $G^l_f$ for every frame $f$ by keeping the label of all pixels only if the label $S(i)$ is included in $\mathcal{W}$, while setting the background as before. We measure the confidence of a frame by computing $C(G^l_f)$ and we regard as locally-CE the frame that has the highest confidence during every section of $\tau_b$ frames. If the locally-CE frame is not already selected as a globally-CE frame, we add it to the self-adapting dataset $\mathcal{G}$. \begin{algorithm}[t] \caption{AdaptNet-Batch} \label{Alg1} \begin{algorithmic}[1] \STATE Given: DCNN model $\theta$, a set of weak labels $\mathcal{W}$ \STATE Local best confidence $d=0$ \STATE {\bf for} $f \in \mathcal{F}$ {\bf do} \STATE ~~~~Initialize $G^g_f, G^l_f$ to "ignored" labels \STATE ~~~~Compute $P({\bf x}|\theta)$ and $S=\arg\max_{\bf x}P({\bf x}|\theta)$ \STATE ~~~~Compute set $\mathcal{R}$ of connected components in $S$ \STATE ~~~~{\bf for} $R_k \in \mathcal{R}$ {\bf do} \STATE ~~~~~~~~{\bf if} $S(i) \notin \mathcal{W}, i \in R_k$ {\bf then} {\bf continue} \STATE ~~~~~~~~{\bf if} $C(R_k)>t_o$ {\bf then} Set $G^g_f(i)=S(i), \forall i \in R_k$ \STATE ~~~~~~~~Set $G^l_f(i)=S(i), \forall i \in R_k$ \STATE ~~~~Set $G^g_f(i)=G^l_f(i)=0, \forall i, s.t. 
P(x_i=bg|\theta)>t_b$ \STATE ~~~~{\bf if} $C(G^g_f)>0$ {\bf then} $\mathcal{G}\leftarrow \mathcal{G}\cup\{G^g_f\}$ \STATE ~~~~{\bf if} $C(G^l_f)>d$ {\bf then} \STATE ~~~~~~~~Update $t = f$ and $d=C(G^l_f)$ \STATE ~~~~{\bf if} $f$ mod $\tau_b = 0$ {\bf then} \STATE ~~~~~~~~{\bf if} $G^g_t \notin \mathcal{G}$ {\bf then} \STATE ~~~~~~~~~~~~$ \mathcal{G}\leftarrow \mathcal{G}\cup\{G^l_t\}$ \STATE ~~~~~~~~Initialize $d=0$ \STATE Finetune DCNN model $\theta$ to $\theta'$ using the set $\mathcal{G}$ \end{algorithmic} \end{algorithm} Given the self-adapting dataset $\mathcal{G}$ constructed from the above procedures, we finally adapt the DCNN model $\theta$ to the video by fine-tuning the model to $\theta'$ based on the dataset. Then, we compute the new label map by applying $\theta'$ to every frame. \begin{figure*}[ht] \begin{center} \begin{overpic}[width=0.76\linewidth]{motion3.eps} \put(-3,12.5){\footnotesize{$S^b$}} \put(-3,3.5){\footnotesize{$S^o$}} \put(15.3,19){\footnotesize{O.F.}} \put(16.5,1.5){\footnotesize{$c(m_f,m_{f+1})$}} \end{overpic} \end{center} \caption{Motion-consistent combination. Top row: consecutive frames of a video and color-coded optical flows (O.F.) between frames. Middle row: corresponding results of the batch model ($S^b$). Bottom row: corresponding results of the online model ($S^o$). We compute the consistency $c(m_f,m_{f+1})$ between those results by following optical flow, then find the most consistent path of selected models. {\bf Best viewed in color.}} \end{figure*} \subsection{Online} The main difference between the batch and online frameworks is the generation of the self-adapting dataset. We adopt an online framework similar to object tracking using DCNN \citep{nam2015learning} because object tracking and online video object segmentation are closely related in that they should trace an object's appearance change while localizing the object's region.
\cite{nam2015learning} pre-trained domain-independent layers from training videos and collected two sets of frames (i.e., long-term and short-term sets) to fine-tune the model to be domain-specific. Similarly, to update the model periodically we collect and maintain two sets of frames, ${\mathcal T}_l$ and ${\mathcal T}_s$, instead of $\mathcal{G}$; ${\mathcal T}_l$ maintains $\tau_l$ globally-CE frames and ${\mathcal T}_s$ maintains $\tau_s$ locally-CE frames separately. ${\mathcal T}_l$ is implemented as a priority queue to collect globally-CE frames; ${\mathcal T}_s$ is a basic queue to deal with local variations. After collecting two sets of frames for a certain period $\tau_b$, we use both ${\mathcal T}_l$ and ${\mathcal T}_s$ as the self-adapting dataset to update the parameters of the model $\theta$. We iterate those procedures until the end of the video. The detailed procedures are described in Algorithm 2. Note that the first 11 lines in the algorithm are the same as the batch procedures. \begin{algorithm}[t] \caption{AdaptNet-Online} \label{Alg2} \begin{algorithmic}[1] \STATE Given: DCNN model $\theta$, a set of weak labels $\mathcal{W}$ \STATE Local best confidence $d=0$ \STATE {\bf for} $f \in \mathcal{F}$ {\bf do} \STATE ~~~~Initialize $G^g_f, G^l_f$ to "ignored" labels \STATE ~~~~Compute $P({\bf x}|\theta)$ and $S=\arg\max_{\bf x}P({\bf x}|\theta)$ \STATE ~~~~Compute set $\mathcal{R}$ of connected components in $S$ \STATE ~~~~{\bf for} $R_k \in \mathcal{R}$ {\bf do} \STATE ~~~~~~~~{\bf if} $S(i) \notin \mathcal{W}, i \in R_k$ {\bf then} {\bf continue} \STATE ~~~~~~~~{\bf if} $C(R_k)>t_o$ {\bf then} Set $G^g_f(i)=S(i), \forall i \in R_k$ \STATE ~~~~~~~~Set $G^l_f(i)=S(i), \forall i \in R_k$ \STATE ~~~~Set $G^g_f(i)=G^l_f(i)=0, \forall i, s.t.
P(x_i=bg|\theta)>t_b$ \STATE ~~~~{\bf if} $C(G^g_f)>0$ {\bf then} \STATE ~~~~~~~~{\bf if} $|{\mathcal T}_l|>\tau_l$ {\bf then} ${\mathcal T}_l$.dequeue \STATE ~~~~~~~~~~~~ ${\mathcal T}_l$.enqueue($G^g_f$) \STATE ~~~~{\bf if} $C(G^l_f)>d$ {\bf then} \STATE ~~~~~~~~Update $t=f$ and $d=C(G^l_f)$ \STATE ~~~~{\bf if} $f$ mod $\tau_s = 0$ {\bf then} \STATE ~~~~~~~~{\bf if} $G^g_t \notin {\mathcal T}_l$ {\bf then} \STATE ~~~~~~~~~~~~{\bf if} $|{\mathcal T}_s|>\tau_s$ {\bf then} ${\mathcal T}_s$.dequeue \STATE ~~~~~~~~~~~~${\mathcal T}_s$.enqueue($G^l_t$) \STATE ~~~~~~~~Initialize $d=0$ \STATE ~~~~{\bf if} $f$ mod $\tau_b=0$ {\bf then} \STATE ~~~~~~~~Finetune DCNN model $\theta$ to $\theta'$ using ${\mathcal T}_s \cup {\mathcal T}_l$ \STATE ~~~~~~~~Set $\theta \leftarrow \theta'$ \end{algorithmic} \end{algorithm} \subsection{Motion-Consistent Combination} The batch algorithm generally works better than the online algorithm, because the former uses a global update with a larger pool of CE frames and has a longer-range dependency than the latter; i.e., the batch algorithm makes its decision after processing the whole video. However, videos may exist in which the online algorithm shows better results for certain local frames. Thus we can combine the two results to improve the motion-consistency of segmentation by incorporating dense optical flow. We cast the combination as the problem of selecting the best model in every frame as follows: let $m_f\in\{S_f^{b},S_f^{o}\}, \forall f$ be a variable that selects a labeled result between batch $S_f^b$ and online $S_f^o$, and $c(m_f,m_{f+1})$ measure the consistency between two consecutive labeled frames. We can formulate the motion-consistent model selection problem as \begin{equation*} \argmax_{\bf m}\sum_f c(m_f,m_{f+1}), \end{equation*} where ${\bf m} = \{m_1, m_2,...,m_{|{\mathcal F}|}\}$ is the set of selected models.
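The model-selection objective above is a chain-structured maximization, so it can be solved exactly by a Viterbi-style dynamic program over the per-frame choice between the batch and online results. A minimal sketch, assuming the pairwise scores $c(m_f,m_{f+1})$ have been precomputed into $2\times 2$ tables (our packaging, not the authors'):

```python
import numpy as np

def select_models(consistency):
    """Return the most consistent per-frame choice (0 = batch, 1 = online).

    consistency[f][i][j] scores choosing model i at frame f and
    model j at frame f+1, i.e. c(m_f, m_{f+1}).
    """
    score = np.zeros(2)   # best cumulative score ending in each model
    back = []             # backpointers, one 2-vector per transition
    for c in consistency:
        cand = score[:, None] + np.asarray(c, dtype=float)  # (i, j) grid
        back.append(cand.argmax(axis=0))   # best predecessor for each j
        score = cand.max(axis=0)
    path = [int(score.argmax())]           # best final choice
    for b in reversed(back):               # backtrack to the first frame
        path.append(int(b[path[-1]]))
    return path[::-1]
```

Since the objective is a sum over consecutive pairs only, this recovers the globally most consistent sequence in time linear in the number of frames.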
We measure the consistency $c(m_f,m_{f+1})$ by the overlap $o(m_f,m_{f+1})$ of object regions between consecutive labeled frames warped by following dense optical flow \citep{farneback2003two} (Fig. 3). Because the optical flow can be noisy we give a small preference $\epsilon$ for the transition from batch to batch result. That is, \begin{displaymath} c(m_f,m_{f+1}) = \left\{ \begin{array}{ll} o(m_f,m_{f+1})+\epsilon & \textrm{if } m_f=S_f^b \\ & \land ~ m_{f+1}=S_{f+1}^b \\ o(m_f,m_{f+1}) & \textrm{otherwise.} \end{array} \right. \end{displaymath} Note that this problem can be easily solved using {\it dynamic programming}. \begin{table*}[ht] \centering \caption{Intersection-over-union overlap on Youtube-Object-Dataset 2014 \citep{jain2014supervoxel}} \begin{tabular}{lcccccccccc|c} \Xhline{2\arrayrulewidth} & Aero & Bird & Boat & Car & Cat & Cow & Dog & Horse & Motor & Train & Avg. \\ \hline Base-context & 0.808 & 0.642 & 0.627 & 0.746 & 0.622 & 0.646 & 0.670 & 0.414 & 0.570 & 0.607 & 0.635 \\ \hline Base-front-end & 0.828 & 0.725 & 0.657 & 0.797 & 0.616 & 0.646 & 0.671 & 0.462 & 0.674 & 0.624 & 0.670 \\ \hline SCF \citep{jain2014supervoxel} & \bf 0.863 & \bf 0.810 & 0.686 & 0.694 & 0.589 & 0.686 & 0.618 & 0.540 & 0.609 & 0.663 & 0.672 \\ \hline Our-Unsupv-batch & 0.829 & 0.783 & 0.699 & 0.812 & 0.688 & 0.675 & 0.701 & 0.505 & 0.705 & 0.702 & 0.710 \\ Our-Weak-on & 0.819 & 0.774 & 0.686 & 0.791 & 0.676 & 0.680 & 0.710 & 0.540 & 0.693 & 0.679 & 0.705 \\ Our-Weak-batch & 0.830 & 0.788 & 0.708 & 0.817 & 0.688 & 0.685 & 0.732 & 0.589 & 0.711 & 0.718 & 0.727 \\ Our-Weak-comb & 0.830 & 0.788 & 0.710 & 0.817 & 0.713 & 0.696 & 0.732 & 0.595 & 0.711 & 0.718 & 0.731 \\ Our-Unsupv-CRF & 0.844 & 0.808 & 0.710 & 0.822 & 0.696 & 0.688 & 0.717 & 0.514 & 0.714 & 0.702 & 0.722 \\ Our-Weak-on-CRF & 0.837 & 0.794 & 0.690 & 0.797 & 0.694 & 0.690 & 0.726 & 0.553 & 0.704 & 0.673 & 0.716 \\ Our-Weak-batch-CRF & 0.844 & \bf 0.810 & 0.723 & \bf 0.827 & 0.698 & 0.700 & \bf 0.745 & 0.610 & 
\bf 0.722 & \bf 0.729 & 0.741 \\ Our-Weak-comb-CRF & 0.844 & \bf 0.810 & \bf 0.725 & \bf 0.827 & \bf 0.722 & \bf 0.709 & \bf 0.745 & \bf 0.611 & \bf 0.722 & \bf 0.729 & \bf 0.744 \\ \Xhline{2\arrayrulewidth} \end{tabular} \end{table*} \subsection{Unsupervised Video} We briefly mention our method for processing an unsupervised video. Our framework can be easily applied to unsupervised videos by bypassing line 8 in Algorithm 1. This deletion means that we do not care whether the class actually appears in the video, thus we set all the labels of CE regions even if the labels are incorrect. We found that most of the videos processed in this way show similar results to those of weakly-supervised video, because the labels of pixels determined with very high probability usually correspond to the correct labels. Nevertheless, a few exceptions that correspond to incorrect labels occur, which can degrade the model and decrease the accuracy compared with the weakly-supervised setting. \subsection{Post-processing} Because the output of DCNN is insufficient to exactly delineate the object, we use the fully-connected CRF \citep{koltun2011efficient}. We simply use the output of DCNN for the unary term and use colors and positions of pixels for the computation of pairwise terms as \cite{chen2014semantic} did. Finally, we refine the label map through morphological operations (i.e., dilation and erosion). \section{Experiments} \subsection{Implementation details} We tested the `{\it front-end}' and `{\it context}' DCNN models pre-trained in \cite{yu2015multi} and observed that the {\it front-end} model shows better results than the {\it context} model under our setting on Youtube-Object-Dataset (Table 1). Thus we used the {\it front-end} model as our baseline model $\theta$. The {\it front-end} model is a modified version of the VGG-16 network \citep{simonyan2014very} and is extended for dense prediction. 
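For concreteness, the two-queue bookkeeping of Algorithm 2 (a bounded priority queue ${\mathcal T}_l$ of globally-CE frames, keyed on confidence, and a bounded FIFO queue ${\mathcal T}_s$ of locally-CE frames) can be sketched as follows; the class, method, and attribute names are our own, not from the paper, and the scalar confidence plays the role of $C(\cdot)$.

```python
import heapq
from collections import deque

class SelfAdaptingSets:
    """Bounded long-term (priority) and short-term (FIFO) frame pools,
    mirroring the T_l / T_s bookkeeping of Algorithm 2 (illustrative)."""

    def __init__(self, tau_l=10, tau_s=5):
        self.tau_l, self.tau_s = tau_l, tau_s
        self.T_l = []          # min-heap of (confidence, frame_id, label_map)
        self.T_s = deque()     # FIFO of locally-CE label maps

    def push_global(self, confidence, frame_id, label_map):
        # keep only the tau_l most confident globally-CE frames
        heapq.heappush(self.T_l, (confidence, frame_id, label_map))
        if len(self.T_l) > self.tau_l:
            heapq.heappop(self.T_l)   # T_l.dequeue: drop the least confident

    def push_local(self, label_map):
        # keep only the tau_s most recent locally-CE frames
        self.T_s.append(label_map)
        if len(self.T_s) > self.tau_s:
            self.T_s.popleft()        # T_s.dequeue: drop the oldest

    def dataset(self):
        # T_s U T_l, used to fine-tune theta every tau_b frames
        return [m for _, _, m in self.T_l] + list(self.T_s)
```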
In practice, we resize each frame such that its long side is 500 pixels, then pad the frame by reflecting it about each image boundary to be 900$\times$900 pixels. We use threshold values $t_o=0.75$ and $t_b=0.8$; the value for background is set slightly higher than for foreground to leave room for additional foreground pixels (e.g., pixels around objects). We set the local period $\tau_b=30$ for both algorithms and $\tau_l=10$, $\tau_s=5$ for our online algorithm, and the small preference $\epsilon=0.02$ for the motion-consistent combination. For model update, we fine-tuned all the layers with dropout ratio of 0.5 for the last two layers and iterated for the number of frames in the self-adapting dataset with batch size of 1. We set the learning rate to 0.001, momentum to 0.9, and weight decay to 0.0005. Due to the lack of a validation dataset, we use the fixed CRF parameters used in \citep{noh2015learning} during post-processing. Our implementation is based on the Caffe library \citep{jia2014caffe} equipped with an Nvidia GTX Titan X GPU. \subsection{Evaluation} \begin{table*}[t] \centering \caption{Intersection-over-union overlap on Youtube-Object-Dataset 2015 \citep{zhang2015semantic}} \begin{tabular}{lcccccccccc|c} \Xhline{2\arrayrulewidth} & Aero & Bird & Boat & Car & Cat & Cow & Dog & Horse & Motor & Train & Avg. 
\\ \hline \citep{zhang2015semantic} & 0.758 & 0.608 & 0.437 & 0.711 & 0.465 & 0.546 & 0.555 & 0.549 & 0.424 & 0.358 & 0.541\\ \hline Base-front-end& 0.786 & 0.727 & 0.632 & 0.866 & 0.583 & 0.657 & 0.632 & 0.403 & 0.635 & 0.626 & 0.655 \\ \hline Our-Unsupv-batch & 0.791 & 0.766 & 0.681 & 0.876 & 0.679 & 0.711 & 0.656 & 0.432 & 0.689 & 0.650 & 0.693\\ Our-Weak-on & 0.795 & 0.773 & 0.665 & 0.883 & 0.641 & 0.691 & 0.684 & 0.488 & 0.658 & 0.652 & 0.693 \\ Our-Weak-batch & 0.794 & 0.786 & 0.685 & 0.870 & 0.667 & 0.738 & 0.734 & 0.567 & 0.694 & 0.672 & 0.721 \\ Our-Weak-comb & 0.794 & 0.786 & 0.684 & 0.870 & 0.668 & 0.738 & 0.737 & 0.580 & 0.694 & \bf 0.673 & 0.722 \\ Our-Unsupv-CRF & 0.808 & 0.791 & 0.695 & 0.879 & 0.683 & 0.729 & 0.669 & 0.441 & 0.690 & 0.651 & 0.704 \\ Our-Weak-on-CRF & \bf 0.813 & 0.796 & 0.672 & \bf 0.884 & 0.643 & 0.702 & 0.703 & 0.507 & 0.671 & 0.631 & 0.702 \\ Our-Weak--batch-CRF & 0.809 & \bf 0.809 & \bf 0.699 & 0.870 & 0.674 & \bf 0.756 & 0.751 & 0.580 & \bf 0.695 & 0.651 & 0.729\\ Our-Weak-comb-CRF & 0.809 & 0.807 & 0.698 & 0.870 & \bf 0.675 & \bf 0.756 & \bf 0.754 & \bf 0.595 & \bf 0.695 & 0.645 & \bf 0.730 \\ \Xhline{2\arrayrulewidth} \end{tabular} \end{table*} We evaluate the proposed method on the Youtube-Object-Dataset \citep{jain2014supervoxel,zhang2015semantic} that contains subset of classes in the PASCAL VOC 2012 segmentation dataset, on which the baseline model is pre-trained. The Youtube-Object-Dataset was originally constructed by \cite{prest2012learning}. \cite{jain2014supervoxel} annotated pixel-level ground truth for every 10-th frame for the first shot of each video. The dataset consists of 126 videos with 10 object classes. Due to inconsistent numbers of annotations, \cite{zhang2015semantic} modified the dataset by resampling 100 frames (sampled every other frame) for each video and annotating missed frames for one in every 10 frames. We use those two versions of dataset to compare with the two existing methods respectively. 
We use the intersection-over-union overlap to measure the accuracy. The results on the dataset used in \citep{jain2014supervoxel} are shown in Table 1. We first report the accuracies of the two baseline DCNN models proposed in \citep{yu2015multi}, denoted by Base-front-end and Base-context in the table. Note that the Base-front-end model \citep{yu2015multi} showed higher accuracy (0.670) for the dataset under our setting than did the Base-context model (0.635), which attaches several context layers to the {\it front-end} model. It is interesting that the {\it front-end} model applied to each frame of unsupervised videos showed almost the same average accuracy as the results of SCF \citep{jain2014supervoxel}, which is built based on semi-supervised video. Our method with weak-supervision (Our-Weak-comb-CRF) further improved the accuracy to 0.744, which exceeds that of SCF (average 0.672). Our online algorithm under weak-supervision (Our-Weak-on) improved the accuracy by 3.5\%, and the batch algorithm (Our-Weak-batch) improved it by 5.7\%, which is better than the online algorithm due to the global update. Note that our motion-consistent combination (Our-Weak-comb) achieved a small improvement on a few classes by selecting results that are more motion-consistent, yielding a temporally consistent video segmentation result. The large improvement of our model over the baseline model mostly originates from the correction of confusing parts of frames by the newly updated model based on CE frames. Some representative results of ours are shown in Figure \ref{fig:qualitative}\footnote{More video results and the datasets we used are available at the project webpage: {\it https://seongjinpark.github.io/AdaptNet/}}. We also report our results without any supervision (Our-Unsupv-batch), described in Sec 3.4, to show the efficiency of our method. It achieved slightly lower accuracy (0.710) than the algorithm with weak-supervision, but had 3.8\% higher average accuracy than SCF.
We report only the batch algorithm for unsupervised videos because the online algorithm is more vulnerable than the batch algorithm to incorrect labels, especially for the short videos that the dataset includes. Post-processing increased the accuracy by about $1.1\sim1.4$\%. Our result achieved state-of-the-art overlap accuracy for all classes except `aeroplane'. We observed that the slightly inferior accuracy on this class occurs mainly because our method gives a less accurate object boundary for this class. This problem occurs because we used the same CRF parameters for all classes, as we could not cross-validate the parameters for each class, whereas SCF is given a manual delineation of the object in the first frame. In Table 2, we also report our results on the dataset used in \cite{zhang2015semantic} to compare with the framework that uses models pre-trained on images to segment weakly-supervised video. \cite{zhang2015semantic} constructed the dataset (Youtube-Object-Dataset 2015) by modifying the Youtube-Object-Dataset 2014. Because neither the dataset nor the source code is provided by the authors, we manually built the dataset by following the procedures explained in the paper. Our results in Table 2 show tendencies similar to those in Table 1, from the online to the combined model, which greatly improve on the baseline model. In this analysis, the difference between the accuracies of our model and that of the existing method was much larger than in Table 1 because \cite{zhang2015semantic} used a conventional object detector on weakly-supervised video, whereas we use a semantic segmentation model based on a DCNN.
\begin{figure*}[t]\centering \begin{tabular}{ccccc} Aeroplane & Bird & Boat & Car & Cat \\ \includegraphics[width=1.26in]{0037_org.png}& \includegraphics[width=1.26in]{0152_org.png}& \includegraphics[width=1.26in]{0002_org.png}& \includegraphics[width=1.26in]{0101_org.png}& \includegraphics[width=1.26in]{0031_org.png}\\ \includegraphics[width=1.26in]{0037_comb.png}& \includegraphics[width=1.26in]{0152_comb.png}& \includegraphics[width=1.26in]{0002_comb.png}& \includegraphics[width=1.26in]{0101_comb.png}& \includegraphics[width=1.26in]{0031_comb.png}\\ \includegraphics[width=1.26in]{0037_comb_CRF.png}& \includegraphics[width=1.26in]{0152_comb_CRF.png}& \includegraphics[width=1.26in]{0002_comb_CRF.png}& \includegraphics[width=1.26in]{0101_comb_CRF.png}& \includegraphics[width=1.26in]{0031_comb_CRF.png}\\ Cow & Dog & Horse & Motorbike & Train\\ \includegraphics[width=1.26in]{0065_org.png}& \includegraphics[width=1.26in]{0170_org.png}& \includegraphics[width=1.26in]{0078_org.png}& \includegraphics[width=1.26in]{0058_org.png}& \includegraphics[width=1.26in]{0174_org.png}\\ \includegraphics[width=1.26in]{0065_comb.png}& \includegraphics[width=1.26in]{0170_comb.png}& \includegraphics[width=1.26in]{0078_comb.png}& \includegraphics[width=1.26in]{0058_comb.png}& \includegraphics[width=1.26in]{0174_comb.png}\\ \includegraphics[width=1.26in]{0065_comb_CRF.png}& \includegraphics[width=1.26in]{0170_comb_CRF.png}& \includegraphics[width=1.26in]{0078_comb_CRF.png}& \includegraphics[width=1.26in]{0058_comb_CRF.png}& \includegraphics[width=1.26in]{0174_comb_CRF.png}\\ \end{tabular} \caption{Representative results of proposed method compared with baseline model. The results of Top: Base-front-end, Middle: Our-Weak-comb, Bottom: Our-Weak-comb-CRF. Semantic labels are overlaid on images with different colors corresponding to different class labels. We only highlight the boundary of correct class. 
{\bf Best viewed in color.}} \label{fig:qualitative} \end{figure*} \subsection{Limitation} The limitation of our method arises when a video does not meet our assumption that at least one frame has a correct label, or that at least one object region corresponding to the pre-trained object classes is estimated. Such cases mostly occur due to the very small size of objects in an image. The absence of a frame to improve the other frames yields the same result as the baseline model. We plan to address these problems in future work. \section{Conclusion} We proposed a novel framework for video semantic object segmentation that adapts the pre-trained DCNN model to the input video. To fine-tune the extensively-trained model to be video-specific, we constructed a self-adapting dataset that consists of several frames that help to improve the results of the UE frames. In experiments, the proposed method improved the results by using the fine-tuned model to re-estimate the misclassified parts. It also achieved state-of-the-art accuracy by a large margin. We plan to extend the framework to semi-supervised video to increase the accuracy. We also expect that the efficient self-adapting framework can be applied to generate a huge accurately-labeled video dataset, and thus be used to advance image semantic segmentation. \bibliographystyle{model2-names}
1711.08225
\section{Introduction} \label{sec:1} Research on pedestrian dynamics has systematically analysed the influence of group behaviour only in recent years (see, e.g.,~\cite{TheraulazGroup,DBLP:journals/prl/BandiniGV14,VonKruchten2017}): although observations and experiments agree on some aggregated and microscopic effects of the presence of groups (e.g. group members walk slower than individuals), there is still a need for additional insights, for instance on the spatial patterns assumed by groups in their movement and in general on the interaction among different factors influencing overall pedestrian dynamics (e.g.~do obstacles still make egress from a room smoother in the presence of groups?). Models incorporating mechanisms reproducing the cohesion of group members, in fact, are only partially able to reproduce overall phenomena related to the presence of groups in the simulated population of pedestrians (see, e.g.,~\cite{ITSC2014-groups}, in which groups preserve their cohesion and move slower than individuals) and they would benefit from additional insights on how members manage their movements balancing (for instance) goal orientation, the tendency to stay close to other members, and opportunities offered by the presence of lanes. In this framework, the present work discusses results of experiments carried out to investigate the potentially combined impact of counter-flow situations~\cite{Zhang2012} and grouping \cite{zanlungo2014potential}. Experiment 1~\cite{feliciani2016empirical} tested the impact of four different configurations of counter-flow in a corridor setting (from uni-directional to fully balanced bi-directional flow). Experiment 2~\cite{gorrini2016social} replicated the same procedures, and about half of the participants were paired to compose dyads (the simplest and most frequent type of group), asking them to walk close to their companion. In the following, both the experimental procedures and the achieved results will be presented in detail.
\section{Description of Experiments} \label{sec:exp} The two experiments were performed on June 13, 2015 at the Research Center for Advanced Science and Technology of The University of Tokyo (Tokyo, Japan). The experiments were executed in a corridor-like setting composed as in Fig.~\ref{fig:experiments-setting}. The central area of $10\times3$ m$^2$ was recorded for the tracking of participants and was surrounded by two start areas of $12\times 3$ m$^2$ and two buffer zones of 2 m length that allowed participants to reach a stable speed at the measurement area. Participants were asked to wear coloured caps so that trajectories could be automatically recorded with the software \emph{PeTrack}~\cite{boltes2013collecting}. \begin{figure}[t] \begin{center} \includegraphics[width=.99\textwidth]{experiments-setting.png} \caption{A screenshot from the video of the experiments and a schematic representation of the setting.}\label{fig:experiments-setting} \end{center} \end{figure} Each experiment was composed of four procedures, in which 54 male students participated. To achieve a more consistent dataset, every procedure was iterated four times. The aim of the whole investigation was to test the following hypotheses: (Hp1) the increase of flow ratio negatively impacts the speed of pedestrians; (Hp2) the cohesion of dyad members affects their speed; (Hp3) the cohesion of dyads leads to a lower pedestrian flow at a macroscopic level. With \emph{flow ratio} we denote the ratio between the \emph{minor flow} and the \emph{total flow} in bidirectional scenarios. The flow ratio was managed as the independent variable among the four experimental procedures, as graphically exemplified in Fig.~\ref{fig:experiments-procedures}. At the beginning of each iteration and according to the tested flow ratio, pedestrians were placed in the marked positions of the start areas. Fig.~\ref{fig:experiments-procedures} exemplifies the arrangement in all experimental procedures.
In the case of \expr{2}, roughly 44\% of the participants (24 out of the 54 total) were configured as dyads. These were formed by coupling two random members and asking them to walk close to their companion, whenever possible, during the iteration. As shown in Fig.~\ref{fig:experiments-procedures}, dyads could initially be arranged in either a \emph{line abreast} or a \emph{river-like} pattern, except in \proc{2} where only the latter was possible for dyads belonging to the minor flow. In the other cases the choice was purely random. \begin{figure}[t] \begin{center} \includegraphics[width=.99\textwidth]{experiments-description.png} \caption{Experiments and procedures tested in this investigation.}\label{fig:experiments-procedures} \end{center} \end{figure} \section{Data Analysis} Individual data analysis on each of the two experiments has already been described in~\cite{feliciani2016empirical} and~\cite{gorrini2016social}. As expected, both facing a counter-flow and being part of a group were found to influence the walking speed of pedestrians. In addition, there is a difference between the behaviour found in balanced and unbalanced configurations of counter-flow in terms of lane formation and the amount of lateral motion required to avoid conflicts with participants from the opposite direction~\cite{feliciani2016empirical,Feliciani2017new}. In this paper we compare the results of the two experiments, with the aim of verifying whether their spatial patterns and different speeds affect the dynamics at a more macroscopic level. \subsection{Microscopic Analysis on Dyads} A comparison of speeds between dyads and individuals in \expr{2} shows that dyads are slower in procedures characterised by a counter-flow situation. In the presence of a uni-directional flow essentially free of collisions (\proc{1}), on the other hand, the difference is rather small (see Fig.~\ref{fig:speeds}).
This suggests that the bi-directional flow mainly affects the spatial pattern of the dyad members, who more frequently switch from the desired \emph{line-abreast} pattern to a \emph{river-like} one. Moreover, it is also observed that group members perceived a noticeably higher density during the procedures with counter-flow: Fig.~\ref{fig:densities} shows the distributions of local densities for all procedures of \expr{2}, calculated using the \emph{Voronoi} method~\cite{tordeux2015quantitative} (density values used here are instantaneous and collected from the time the first participant enters the measurement area to the time the last one leaves it). While the average density is almost equal in \proc{1}, the difference already becomes noticeable in the second one. Later on we will show that this is because dyad members tend to walk close to each other, whereas the other individual pedestrians more frequently take advantage of unoccupied gaps in front of them. \begin{figure}[t] \begin{center} \subfigure[]{\includegraphics[width=.48\textwidth]{speeds.png}\label{fig:speeds}} \subfigure[]{\includegraphics[width=.48\textwidth]{densities.png}\label{fig:densities}} \caption{Distribution of pedestrian speeds (a) and local densities (b) among procedures of \expr{2}. Black dots indicate the mean, red lines the median and the box size defines the standard deviation.} \end{center} \end{figure} A first analysis of the distributions of the relative positions of dyad members, with respect to their centroid, has shown a decrease of the distance between them with increasing counter-flow~\cite{gorrini2016social}. Moreover, it is observed that conflicts arising from the bi-directional flow frequently lead dyad members to assume a river-like pattern, which is barely visible in the first procedure. As with the analysis of speed and local density distributions, no significant difference arises between \proc{3 and 4} of the second experiment.
The relation between density, speed and relative positions is shown in Fig.~\ref{fig:dyads_densitySpeed}(a) and (b). While a dependency between density and the angular arrangement (i.e.~spatial pattern) of dyads was not found to be significant, it is apparent that points of high density are mostly close to the center (the few outliers are probably due to a transient stretched river-like pattern) and that they describe a pattern with an elliptical shape, whose long axis is aligned with the walking direction. The same regularity is also visible in the speed data: points associated with higher speeds are located in the outer part of the dataset, while close to the center speeds are lower, about 0.6 m/s. \begin{figure}[t] \begin{center} \subfigure[]{\includegraphics[width=.45\textwidth]{all_density.jpeg}} \subfigure[]{\includegraphics[width=.45\textwidth]{all_speed.jpeg}} \subfigure[]{\includegraphics[width=.45\textwidth]{all_eDist_density.png}} \subfigure[]{\includegraphics[width=.45\textwidth]{all_eDist_speed.png}} \caption{(a -- b) Relative position of dyads according to their centroid. Positions are rotated so that the movement direction is up. Colors indicate the information of local density (a) or instantaneous speed (b). (c -- d) Relations between density, distance and speed of dyads.}\label{fig:dyads_densitySpeed} \end{center} \end{figure} The elliptical appearance of the diagrams in Fig.~\ref{fig:dyads_densitySpeed}(a) and (b) is not surprising, and it reflects the physics of pedestrian movement already considered in former works on the modelling side (e.g.~\cite{Chraibi2010}). According to these data, it is possible to define a distance metric that applies a distortion on the y-axis and helps to analyse the relation between density, speed and \emph{elliptical distance}: $$f(x,y) = \sqrt{x^2 + \left(\frac{y}{2}\right)^2} $$ The outcome of this analysis is shown in Fig.~\ref{fig:dyads_densitySpeed} (c) and (d).
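The distorted metric is straightforward to evaluate; a minimal sketch:

```python
import math

def elliptical_distance(x, y):
    """Distance of a dyad member from the dyad centroid, with the
    along-motion axis (y) compressed by a factor of 2 so that
    iso-distance contours follow the observed elliptical shape:
    f(x, y) = sqrt(x^2 + (y/2)^2)."""
    return math.sqrt(x * x + (y / 2.0) ** 2)

# A member 1 m to the side counts the same as one 2 m ahead:
# elliptical_distance(1.0, 0.0) == elliptical_distance(0.0, 2.0) == 1.0
```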
It is fair to state that the defined elliptical distance between dyad members acts as a mediator between the fundamental characteristics of the dynamics (more logically, the local density leads group members to walk closer, and not vice-versa). On the other hand, this analysis suggests that there is a positive effect of the density on the cohesion of dyad members, which consequently affects their instantaneous speed. However, the relation between walking speed and elliptical distance is less clear, and the points in Fig.~\ref{fig:dyads_densitySpeed}~(d) lie in a rather large area which is difficult to describe using a linear function. Considering these observations, we can say that models of dyads should be able to reproduce a growing trend between elliptical distance and speed. An additional analysis carried out on microscopic data about the instantaneous position of pedestrians focuses on the position of the other nearby pedestrians: this kind of analysis, shown in Fig.~\ref{fig:lane_formation}, highlights a different kind of behaviour between members of dyads and individuals with regard to the lane formation phenomenon. We focus in particular on \proc{4} since it is the most interesting one in terms of macroscopic results. It must be said that lane formation is a rather fuzzy concept, and several methods have been proposed in the literature as attempts at its quantification: \cite{feliciani2016empirical}, for example, analyse the \emph{rotation} of the pedestrian directions to achieve a numerical value describing the stability of lanes. We also do not try to provide a definition of lane, but the data describing the proxemic behaviour of individuals and dyads, respectively shown in Fig.~\ref{fig:lane_formation}(a) and (b), show that, on one hand, there is a clear following behaviour for the individuals, where the most frequent position of neighbouring pedestrians is in a spot about 1 m ahead.
On the other hand, members of dyads mostly try to keep a line-abreast pattern: the most frequent positions for neighbours are in fact on the side rather than ahead of the considered pedestrian. In other words, lanes composed of dyad members tended to be wider, which led, on average, to a less efficient utilisation of the available space. \begin{figure}[t] \begin{center} \subfigure[]{\includegraphics[width=.45\textwidth]{lane_singles.png}\label{fig:lane_singles}} \subfigure[]{\includegraphics[width=.45\textwidth]{lane_dyads.png}\label{fig:lane_dyads}} \caption{Distribution of relative positions of neighbour pedestrians, according to the position of each individual (a) or dyad member (b).}\label{fig:lane_formation} \end{center} \end{figure} \subsection{Effects of dyads at a macroscopic scale}\label{sec:lane} Previous results highlighted the effects of density and counter-flow situations on the behaviour of dyads at a very detailed scale, whereas here we present the aggregated effect of these microscopic observations on the overall pedestrian flow at different levels of density. The presence of groups in \proc{1} did not lead to significant differences, since only a simple free-flow situation emerged from it. In counter-flow situations, instead, differences become apparent, and the most interesting result is represented by the scenario with a perfectly balanced counter-flow, whose data are reported in Fig.~\ref{fig:FD}. The diagram shows very little difference at low densities, but starting from 0.5 peds/m$^2$ the specific pedestrian flow observed in the experiment with dyads grows at a slower rate compared to \expr{1}. The range of observed densities does not reach a critical density in either experiment, but the trend of the diagrams supports the conjecture that the situation of \expr{2} would lead to a lower maximum flow.
\begin{figure}[t] \begin{center} \subfigure{\includegraphics[width=.45\textwidth]{fd_speed.png}} \subfigure{\includegraphics[width=.45\textwidth]{fd_flow.png}} \caption{Comparison of fundamental diagrams in the form density--speed (left) and density--flow (right) of \expr{1 and 2} - \proc{4}.}\label{fig:FD} \end{center} \end{figure} \section{Conclusions and Future Works} The paper has presented original results of analyses of pedestrian dynamics achieved through an experimental observation aimed at characterising the influence of dyads, both at the micro and the macroscopic level. Micro-level results underline that different counter-flow situations affect local density, and that groups walk slower than singletons, depending also on their spatial patterns under variable density conditions. The introduction of dyads in the pedestrian demand leads to a higher level of measurable density in analogous initial conditions and to a more chaotic macro-level dynamics characterised by fragmented lanes, inducing a lower observed specific flow. Future work aims, on the one hand, to transfer the achieved results to modelling activities in the presence of groups (preliminary results are discussed in another paper in this volume~\cite{TGF2017-Yiping}); on the other hand, additional observations and experiments would be needed to further investigate whether previously observed aggregated phenomena still hold in the presence of groups.
1711.08394
\section{Introduction} The advent of high-precision space-based broadband optical photometry with satellites (or ensembles of satellites) such as the \textit{Microvariability and Oscillations of STars} (MOST; \citealt{2003PASP..115.1023W}) and \textit{BRIght Target Explorer} (BRITE; \citealt{2014PASP..126..573W}) missions has opened the door to a brand new picture of the photospheres of hot massive stars. In particular, recent studies have led to the detection of co-rotating bright spots on the surface of a few O stars \citep{2014MNRAS.441..910R, 2017arXiv171008414R}. The existence of such spots has been proposed \citep{1996ApJ...462..469C, 2017MNRAS.470.3672D} to explain the formation of \textit{co-rotating interaction regions} (CIRs), which in turn are postulated \citep{1986A&A...165..157M} to lead to recurring \textit{discrete absorption components} (DACs) which migrate through the velocity space of the absorption troughs of ultraviolet resonance lines, as revealed in timeseries of spectra obtained by the \textit{International Ultraviolet Explorer} (IUE; e.g., \citealt{1989ApJS...69..527H, 1996A&AS..116..257K}). While the physical origin of these bright spots remains contested, one popular hypothesis contends that they are caused by small-scale magnetic fields which can be generated in the subsurface convection zone due to the iron opacity bump (FeCZ; \citealt{2011A&A...534A.140C}). $\epsilon$ Ori and $\kappa$ Ori are two bright early B supergiants (respectively B0Ia and B0.5Ia) which have been observed by BRITE. Together, they constitute an interesting testbed for the study of photospheric perturbations such as bright spots, for a few reasons. First, given their magnitudes (respectively, $m_V$ = 1.69 and $m_V$ = 2.06; \citealt{2002yCat.2237....0D}), they are prime candidates for high signal-to-noise ratio (SNR) observations, allowing us to place very tight constraints on their properties. 
Secondly, their current evolutionary stage is of interest for the study of spots\footnote{It should be noted, however, that B stars are also known to be the theatre of various forms of variability, from SPB/$\beta$ Cephei pulsations to rotational modulation \citep{2011MNRAS.413.2403B}.} as the envelopes of hot supergiants are expected to host more convection than their main sequence counterparts \citep{2009A&A...499..279C}. Finally, if one naively posits that the properties of putative bright spots should be intimately related to stellar parameters, it would therefore be expected that if $\epsilon$ Ori and $\kappa$ Ori show signatures of co-rotating bright surface spots, these spots would have similar characteristics, an assertion that we can test. Conversely, any departure from that expectation informs us about the nature of these photospheric structures. $\epsilon$ Ori is a known variable star. \citet{2002A&A...388..587P} have traced the evolution of a DAC in its ultraviolet lines for at least 17h, and periods ranging from about 0.8 to 19 days have been recovered from its optical spectra (e.g., \citealt{2013AJ....145...95T}). From these, a rotational period of either $\sim$4 days or $\sim$18 days is inferred. From its radius ($24.0 R_\odot$; \citealt{2006A&A...446..279C}) and its projected rotational velocity (60 km/s; \citealt{2014A&A...562A.135S}), we derive a maximum rotational period of about 20 days. On the other hand, repeating the same calculation for $\kappa$ Ori ($ R_* = 22.2 R_\odot$; $v \sin i = 54$ km/s), we obtain a maximum period of about 21 days. \section{Observations} So far, the Orion field has been observed five times by BRITE (2013, 2014, 2015, 2016 and 2017) in both red and blue wavebands. However, we focus here on the first two observing runs. The details of the observations are presented in Table~\ref{tab:obsrun}. \begin{table} \caption{Details of the two Orion observing runs presented in this study. 
The satellites are UniBRITE (UBr; red waveband), BRITE Austria (BAb; blue waveband), BRITE Lem (BLb; blue waveband), BRITE Heweliusz (BHr; red waveband) and BRITE Toronto (BTr; red waveband).}\label{tab:obsrun} \begin{tabular}{llll} Run & Starting date & Total length (days) & Telescopes\\ Orion I & 2013-11-07 & 131 & UBr, BAb\\ Orion II & 2014-09-24 & 174 & BAb, BLb, BTr, BHr\\ \end{tabular} \end{table} Sample light curves are shown in Fig.~\ref{fig:lc}. The main characteristic that we can observe (for both stars) is that there appear to be significant variations with a maximum amplitude of roughly 30 mmag. These variations do not exhibit an obvious periodic behaviour. The point-to-point precision is of millimagnitude order. \begin{figure} \includegraphics[width=\textwidth]{epsori_2014_lc.jpg} \caption{Orion II light curves (red and blue filters) of $\epsilon$ Ori. The blue light curve is shifted downwards. We can see that both light curves exhibit similar variations, with a maximum amplitude of roughly 30 mmag. However, no obvious repeatable pattern is observed.} \label{fig:lc} \end{figure} \section{Preliminary analysis} We perform a period search on these light curves to find any periodicity. While neither star shows clear, periodic variations, various frequencies are detected (see Fig.~\ref{fig:ft}), and a time-frequency analysis suggests that these frequencies appear and disappear over time. In the case of $\epsilon$ Ori, clumps of frequencies around $\sim$0.25 c/d, $\sim$ 0.5 c/d, $\sim$0.75 c/d and $\sim$1 c/d (therefore, roughly the first few integer multiples of a base frequency of around 0.25 c/d) are detected at any given time.
At face value, this seems somewhat consistent with the type of observational signatures historically associated with the bright spot/CIR phenomenology (e.g., \citealt{2011ApJ...735...34C}), in which case this base frequency could be of rotational origin (meaning that $P \simeq 4$ days, matching one of the possible rotational periods previously suggested by the optical spectra). The result of this analysis on a portion of the blue Orion II light curve is shown in Fig.~\ref{fig:stft}. \begin{figure} \centering \begin{minipage}{0.48\textwidth} \includegraphics[width=\textwidth]{hd37128_blue_2014.jpg} \caption{Periodogram of the blue light curve of $\epsilon$ Ori from the Orion II run; we can see groupings of frequencies around 0.25 c/d, 0.5 c/d and 1 c/d.} \label{fig:ft} \end{minipage} \quad \begin{minipage}{0.48\textwidth} \includegraphics[width=\textwidth]{hd37128_stft_blue_2014.jpg} \caption{Time-frequency analysis of a portion of about 30 days of the Orion II blue light curve of $\epsilon$ Ori performed with an 8-day window; frequencies can notably be found around 0.25 c/d, 0.5 c/d and 1 c/d. The integrated periodogram is shown on the top.} \label{fig:stft} \end{minipage} \end{figure} However, more analysis is required in order to lend credence to the bright spot scenario. In particular, pulsations must first be investigated in depth before ruling them out as being responsible for the observed variability. A similar analysis was performed on the light curves of $\kappa$ Ori, revealing frequencies at around 0.4 c/d and 1.2 c/d. While it is too early to conclude whether these periods are due to rotational modulation, it should be noted that if the base frequencies of $\epsilon$ Ori and $\kappa$ Ori are indeed of rotational origin, both stars can then be inferred to be viewed at a rather small inclination, which might be problematic.
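The period limits used above follow from $P_{\rm max} = 2\pi R_*/(v \sin i)$, reached for $i = 90^{\circ}$; if the true period $P$ is known, $\sin i = P/P_{\rm max}$. A minimal numerical restatement, using the radii and $v \sin i$ values cited in the introduction (the inclination estimate assumes the $\sim$4-day period of $\epsilon$ Ori is rotational):

```python
# Maximum rotational period P_max = 2*pi*R / (v sin i), reached when i = 90 deg.
import math

R_SUN_KM = 6.957e5  # nominal solar radius in km

def max_period_days(radius_rsun, vsini_kms):
    """Upper limit (i = 90 deg) on the rotational period, in days."""
    return 2.0 * math.pi * radius_rsun * R_SUN_KM / vsini_kms / 86400.0

p_eps = max_period_days(24.0, 60.0)   # eps Ori: ~20 days
p_kap = max_period_days(22.2, 54.0)   # kap Ori: ~21 days

# If the 0.25 c/d base frequency of eps Ori (P ~ 4 d) is rotational,
# the implied inclination is small:
i_eps = math.degrees(math.asin(4.0 / p_eps))

print(round(p_eps, 1), round(p_kap, 1), round(i_eps))  # 20.2 20.8 11
```

The $\sim$$11^{\circ}$ inclination implied for $\epsilon$ Ori illustrates why a rotational interpretation of the 0.25 c/d base frequency might be problematic.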
\section{Conclusions and future work} While this study remains in the early stages and much more work is required, preliminary results show encouraging parallels with the observational signatures typically ascribed to rotational modulation due to co-rotating bright surface spots. If this scenario is favored, these observations can help us learn a great deal about the properties and the nature of such photospheric perturbations. In particular, the bright spot/magnetic spot connection can hopefully be further investigated, as both stars are ideal candidates for ultra-deep magnetometry given their magnitudes and low projected rotational velocities (e.g., \citealt{2016MNRAS.456....2W}). Furthermore, advances in modelling will also prove invaluable in constraining the properties of bright spots and associated wind structures; building on our recent work \citep{2017MNRAS.470.3672D}, the next logical step will be to produce hydrodynamical models of co-rotating interaction regions in three dimensions, studying, among other things, the effects of inclination on observational signatures. Meanwhile, however, a more in-depth period analysis, including the data from all 5 BRITE Orion runs, will be necessary to more robustly establish whether the observed variations are indeed consistent with bright spots. \acknowledgements{ADU gratefully acknowledges support from the \textit{Fonds qu\'{e}b\'{e}cois de la recherche en nature et technologies}.} \bibliographystyle{ptapap}
\section*{~} As one of the three clock-synchronization algorithms studied for wireless sensor networks (WSNs) under unknown delay in \cite{leng10}, Leng and Wu proposed a generalization of the maximum-likelihood-like estimator (MLLE) of Noh \textit{et al}. \cite{noh07:_novel}. To overcome the drawback of the MLLE that it can utilize only the time stamps in the first and the last of $N$ message exchanges, they extend the gap $\alpha$ between two subtracted time stamps from $N{-}1$ to a range of $\left[1,\ldots,N{-}1\right]$ so that the generalized MLLE can take into account more time stamps in estimating the clock skew. Specifically, the time stamps in two-way message exchanges are modeled as \cite[Eqs.~(1)~and~(2)]{leng10} \begin{align} \label{eq:T1T2} T_{2,i} & = \beta_{1} T_{1,i} + \beta_{0} + \beta_{1} \left(d + X_{i}\right) \\ \label{eq:T3T4} T_{3,i} & = \beta_{1} T_{4,i} + \beta_{0} - \beta_{1} \left(d + Y_{i}\right) \end{align} where $\beta_{0}$ and $\beta_{1}$ denote the clock offset and clock skew of the child node $S$ with respect to the parent node $P$, respectively; $d$ represents the fixed portion of one-way propagation delay, while $X_{i}$ and $Y_{i}$ are its variable portions (see Fig.~1 of \cite{leng10}). Based on \eqref{eq:T1T2} and \eqref{eq:T3T4}, they construct new sequences $D_{r,j}{\triangleq}T_{r,\alpha+j}{-}T_{r,j}$ ($j{=}1,\ldots,N{-}\alpha$ and $r{=}1,2,3,4$) and model them as follows \cite[Eqs.~(10)~and~(11)]{leng10}: \begin{align} \label{eq:D1D2} D_{2,j} & = \beta_{1} D_{1,j} + \beta_{1} \left(X_{\alpha+j} - X_{j}\right) \\ \label{eq:D3D4} D_{3,j} & = \beta_{1} D_{4,j} - \beta_{1} \left(Y_{\alpha+j} - Y_{j}\right) \end{align} for $j{=}1,\ldots,N{-}\alpha$. Noting that $\left(X_{\alpha+j}{-}X_{j}\right){\sim}\mathcal{N}(0,2\sigma^{2})$ and $\left(Y_{\alpha+j}{-}Y_{j}\right){\sim}\mathcal{N}(0,2\sigma^{2})$ because $X_{j}$ and $Y_{j}$ are i.i.d.
zero-mean Gaussian random variables with variance $\sigma^{2}$, they obtain the maximum-likelihood estimator (MLE) for $\beta_{1}$ given by \cite[Eq.~(13)]{leng10} \begin{equation} \label{eq:skew_est} \hat{\beta}_{1} = \dfrac{1}{\hat{\theta}_{1}} = \dfrac{\sum_{j=1}^{N-\alpha}\left(D_{2,j}^{2}+D_{3,j}^{2}\right)}{\sum_{j=1}^{N-\alpha}\left(D_{1,j}D_{2,j}+D_{4,j}D_{3,j}\right)} . \end{equation} The major problem in the derivation of the MLE for $\beta_{1}$ given in \eqref{eq:skew_est} is that, even though $X_{j}$ and $Y_{j}$ are i.i.d. Gaussian random variables, the noise components $\left(X_{\alpha+j}{-}X_{j}\right)$ and $\left(Y_{\alpha+j}{-}Y_{j}\right)$ are not, in general, independent: For $m,n{\in}\left\{1,\ldots,N-\alpha\right\}$ and $m{\neq}n$, \begin{align} \label{eq:noise_correlation} \MoveEqLeft \operatorname{E}\left[\left(X_{\alpha+m}-X_{m}\right)\left(X_{\alpha+n}-X_{n}\right)\right] \notag \\ & = -\operatorname{E}\left[X_{\alpha+m}X_{n}\right] - \operatorname{E}\left[X_{m}X_{\alpha+n}\right] \notag \\ & = \begin{cases} -\sigma^{2}, & \mbox{if } \alpha=\left|m-n\right| \\ 0, & \mbox{otherwise} \end{cases} , \end{align} and the same goes for $\left(Y_{\alpha+j}{-}Y_{j}\right)$. Note that, if the noise components were independent of one another as claimed in \cite{leng10}, the expectation in \eqref{eq:noise_correlation} would have to be zero. The consequence of \eqref{eq:noise_correlation} is that $\alpha$ should be greater than $\frac{N-1}{2}$, i.e., \begin{align} \label{eq:gap_range} \alpha \in \left\{\left\lceil\frac{N}{2}\right\rceil,\ldots,N-1\right\} \end{align} in order to maintain the validity of the derivation of the MLE for $\beta_{1}$ \cite[Eq.~(13)]{leng10} and its performance bound \cite[Eq.~(29)]{leng10}: If $\alpha{\leq}\frac{N-1}{2}$, there exists at least one pair of $m$ and $n$ satisfying $\alpha=\left|m-n\right|$ so that the noise components are no longer independent of one another. For example, let $n$ be 1. Then $m{=}\alpha{+}1$ satisfies this condition.
Because $\alpha{\leq}\frac{N-1}{2}$ and \[ m = \alpha + 1 \leq \dfrac{N-1}{2} + 1 = \dfrac{N+1}{2} = N - \dfrac{N-1}{2} \leq N - \alpha , \] $m$ belongs to $\left\{1,\ldots,N-\alpha\right\}$. Fig.~\ref{fig:noise_correlation_effect} clearly shows the effect of the noise correlation on the mean square error (MSE) of clock-skew estimation and the relationship between $\alpha$ and $N$ when SNR{=}30 dB and $H{=}G{=}10$. In the figure, GE1 denotes the simulation results of the generalized MLLE for time stamps and resulting sequences generated according to the original models of \eqref{eq:T1T2} through \eqref{eq:D3D4}; GE2, on the other hand, denotes the results for the time sequences in \eqref{eq:D1D2} and \eqref{eq:D3D4} with the noise components $\left(X_{\alpha+j}-X_{j}\right)$ and $\left(Y_{\alpha+j}-Y_{j}\right)$ replaced by two newly-generated i.i.d. zero-mean Gaussian random variables with variance $2\sigma^{2}$.\footnote{It does not correspond to any model of two-way message exchanges and is given just for the purpose of comparison.} \begin{figure}[!tb] \centering \includegraphics[width=\linewidth]{skew_est_snr30} \caption{Effect of noise correlation on the MSE of estimated clock skew.} \label{fig:noise_correlation_effect} \end{figure} If $\alpha$ is greater than $\frac{N-1}{2}$, we can see that the results of GE1 closely match the performance bounds (i.e., PB$_{\rm g}$) because there is no issue of noise correlation; for example, when $\alpha$ is 10, the results of GE1 match the performance bounds for $N$ up to 20. In contrast, the results for GE2, a fictitious model, attain the performance bounds irrespective of the value of $\alpha$, because there is no noise correlation at all. It is interesting, though, that the results of GE1 for $\alpha\leq\frac{N-1}{2}$ show even better performance than the performance bounds.
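The correlation structure in \eqref{eq:noise_correlation} is easy to verify numerically: drawing i.i.d. $X_{j}$ and forming $W_{j} = X_{\alpha+j} - X_{j}$, the empirical product moment is $-\sigma^{2}$ exactly when $\left|m-n\right| = \alpha$ and zero otherwise. A minimal Monte Carlo check, with the illustrative choice $N = 12$ and $\alpha = 3$ so that, e.g., the pair $(m,n) = (1,4)$ is correlated:

```python
import numpy as np

rng = np.random.default_rng(0)
N, alpha, sigma = 12, 3, 1.0
trials = 200_000

X = rng.normal(0.0, sigma, size=(trials, N))   # i.i.d. variable delays X_j
W = X[:, alpha:] - X[:, :N - alpha]            # W_j = X_{alpha+j} - X_j

# |m - n| = alpha  ->  E[W_m W_n] = -sigma^2  (m = 1, n = 4 in 1-based indexing)
corr_pair = np.mean(W[:, 0] * W[:, 3])
# |m - n| != alpha ->  E[W_m W_n] = 0         (m = 1, n = 3)
uncorr_pair = np.mean(W[:, 0] * W[:, 2])

print(corr_pair, uncorr_pair)   # corr_pair ~ -sigma**2, uncorr_pair ~ 0
```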
With the valid range of $\alpha$ given by \eqref{eq:gap_range}, the selection of the optimal $\alpha$ given in Eqs.~(32)~and~(33) of \cite{leng10} should be modified accordingly. Because $\Phi(\alpha_{r})$ in Eq.~(32) of \cite{leng10} is concave downward for the whole range of real-valued $\alpha_{r}{\in}\left[\left\lceil\frac{N}{2}\right\rceil,N{-}1\right]$, $\alpha^{*}_{r}$ in Eq.~(33) of \cite{leng10} is now simplified as follows\footnote{See \cite[Appendix~A]{leng10} for details.}: \begin{align} \alpha^{*}_{r} = \dfrac{1}{3}N + \sqrt{\dfrac{1}{9}N^{2} - \dfrac{2\beta_{1}^{2}\sigma^{2}}{\beta_{1}^{2}H^{2}+G^{2}}} \end{align} \balance
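As a sanity check of \eqref{eq:skew_est} itself, synthetic two-way exchanges generated from models \eqref{eq:T1T2} and \eqref{eq:T3T4} confirm that, with $\alpha$ in the valid range \eqref{eq:gap_range}, the estimator recovers $\beta_{1}$ closely. All parameter values below are illustrative, not taken from \cite{leng10}:

```python
import numpy as np

rng = np.random.default_rng(1)
N, alpha = 20, 12                       # alpha in the valid range {ceil(N/2), ..., N-1}
beta0, beta1, d, sigma = 5e-3, 1.0002, 1e-3, 1e-5

T1 = np.arange(N, dtype=float)          # child -> parent send times, 1 s apart
T4 = T1 + 0.5                           # parent -> child send times
X = rng.normal(0.0, sigma, N)
Y = rng.normal(0.0, sigma, N)
T2 = beta1 * T1 + beta0 + beta1 * (d + X)    # model (1)
T3 = beta1 * T4 + beta0 - beta1 * (d + Y)    # model (2)

# Differenced sequences D_{r,j} = T_{r,alpha+j} - T_{r,j}, cf. (3) and (4)
D = {r: T[alpha:] - T[:N - alpha]
     for r, T in enumerate((T1, T2, T3, T4), start=1)}

# MLE for the clock skew, Eq. (5)
beta1_hat = (np.sum(D[2]**2 + D[3]**2)
             / np.sum(D[1] * D[2] + D[4] * D[3]))
print(abs(beta1_hat - beta1) < 1e-4)    # True: the skew is recovered
```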
\section{INTRODUCTION} Characterizing the mechanical properties of materials has many applications in industrial testing and bioengineering. These can be inferred by tracking the deformation of a specimen under known loads. Optical coherence tomography (OCT) is a powerful modality for the role of displacement tracking, due to its high resolution and non-invasive 3D imaging capability, where its use is known as optical coherence elastography (OCE)\cite{Kennedy2017}. The key to performing OCE is calculating local displacements through a sequence of temporal images. This problem is actively studied in other image processing applications: it is known as `optical flow' in video compression and as `particle image velocimetry' in fluid dynamics. A common approach is block matching, whereby sub-images are matched through some metric, such as cross correlation. The accuracy of block matching is limited by noise in the image sequence, which can be especially high in cases of rapid acquisitions. Additionally, if the displacement between frames were known to high accuracy, then the noise could be reduced, in a manner similar to B-scan averaging. These concepts motivate a simultaneous approach to imaging, whereby denoising and motion estimation are combined. An instance of this same concept was realised by Buades et al.\cite{Buades2016}, applied to video enhancement, where they performed patch-based denoising on motion compensated frames. In a second step, they then recalculated the displacement on the cleaned frames, followed by another motion compensated denoising. In this article, we propose to perform motion compensated denoising of OCT images by applying the BM4D method \cite{Maggioni2013} to warped frames from a temporal sequence. These enhanced images will then form the basis for a second motion estimation step.
To enable good spatial resolution throughout the specimen, we will apply the ISAM method \cite{Ralston2007}, which we implement through the non-uniform fast Fourier transform (NUFFT) \cite{Fessler2003a}. \section{METHODOLOGY} \subsection{Motion Compensated Denoising}\label{sec:method} \begin{figure} \begin{center} \begin{tabular}{c} \includegraphics[height=7cm]{flow_diagram} \end{tabular} \end{center} \caption[example] { \label{fig:flow_diagram} Flow diagram of simultaneous displacement estimation and image enhancement.} \end{figure} A diagrammatic representation of the method is shown in Figure~\ref{fig:flow_diagram}. This shows the potential to iterate the displacement estimation and subsequent motion compensated denoising several times. In practice, we find that a single iteration, up to the re-estimation of the motion, is sufficient. In this case, the steps may be described as: \begin{enumerate} \item Form initial images from standard reconstruction, including any appropriate preprocessing steps such as dispersion encoded artifact removal\cite{Hofer2009} or ISAM\cite{Ralston2007} resampling. \item Estimate relative displacement between neighboring frames with block matching, as a linear operation, $U$. \item Warp each neighboring frame to common spatial position through applying $U$. \item Perform motion compensated denoising, through the BM4D method \cite{Maggioni2013}. \item Apply adjoint of linear warping on denoised images, through $U^T$. \item Recalculate displacement on denoised frames. \item Return to step 3 or terminate. \end{enumerate} When this process is repeated for several iterations, one should ensure that the linear warping operation, $U$, is applied to the original preprocessed images, and not the denoised frames. Otherwise, loss of information is likely. 
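The steps above can be sketched in a compact, runnable form. Everything in this sketch is a simplified stand-in: a brute-force global-shift search replaces the cross-correlation block matching, a rigid `scipy` shift replaces the linear warping operator $U$, and light smoothing of the motion-compensated average replaces BM4D (the adjoint of step 5 is implicit, since the stand-in returns a single denoised reference). Only the structure of the loop, including the caveat of re-warping the original frames, follows the text.

```python
import numpy as np
from scipy.ndimage import shift, uniform_filter

def block_match(ref, frame):
    """Stand-in displacement estimator: brute-force search for a single
    global integer (dy, dx) shift (the paper uses cross-correlation
    block matching with sub-pixel Gaussian regression instead)."""
    best, best_err = (0, 0), np.inf
    for dy in range(-3, 4):
        for dx in range(-3, 4):
            err = np.sum((ref - np.roll(frame, (dy, dx), axis=(0, 1))) ** 2)
            if err < best_err:
                best, best_err = (dy, dx), err
    return best

def warp(frame, disp):
    """Stand-in for the linear warping operator U: a rigid sub-image shift."""
    return shift(frame, disp, order=1, mode='nearest')

def denoise(stack):
    """Stand-in for BM4D: lightly smooth the motion-compensated average."""
    return uniform_filter(stack.mean(axis=0), size=3)

def motion_compensated_denoise(frames, n_iter=1):
    # Step 2: estimate displacements relative to the first frame.
    disps = [block_match(frames[0], f) for f in frames]
    for _ in range(n_iter):
        # Step 3: warp every frame to the common spatial position.
        warped = np.stack([warp(f, d) for f, d in zip(frames, disps)])
        # Step 4: motion compensated denoising of the aligned stack.
        clean = denoise(warped)
        # Step 6: re-estimate motion against the ORIGINAL frames
        # (cf. the caveat above about not warping denoised frames).
        disps = [block_match(clean, f) for f in frames]
    return clean, disps
```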
\subsection{Linear Sub-pixel Displacement Estimation} A key element of our framework is the means to calculate the local displacement throughout a sequence of images, from which we can derive a linear operator, $U$. For the displacement estimation, we use multi-resolution cross correlation block matching from the PIVlab\cite{Thielicke2014} Matlab toolbox. This software implements sub-pixel estimation through 2D Gaussian regression\cite{Nobach2005}. We then use linear interpolation to upsample from the block-size resolution to the resolution of the image. After the dense local deformations have been found, we form the operator, $U$, to bi-linearly interpolate a pixel's value to its new sub-pixel location. To preserve linearity, we have to ensure the displacements are constrained to the image view. With this, we do not apply any deformation to pixels that are not present in both images. \section{EXPERIMENTATION} \subsection{Displacement Estimation Validation} \label{sec:valid} In this first experiment, we wish to validate our simultaneous flow estimation and image denoising. To this end, we induce known motion to a sample, and evaluate the accuracy in both image quality and local displacement. Our sample in this experiment is a 2\% agarose gel with dispersed latex micro-beads, through which a 2 mm cross section was recorded. The specimen was placed on a motorized micrometer uniaxial stage, which was programmed to move laterally in steps of 10 $\mu$m across 5 positions. The imaging equipment used was a Wasatch Photonics 800 nm OCT system. We adjusted the system so that the focal point lay at the centre of the sample. This allowed the simple application of ISAM\cite{Ralston2007}, which has the effect of refocussing throughout the sample, and acts as our preprocessing operation. The original reconstruction of the sample and its ISAM counterpart are shown in Figure~\ref{fig:sample_latex}.
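One way to realize the warping as an explicit linear operator $U$, so that the adjoint $U^T$ needed in step 5 comes for free, is to assemble a sparse matrix of bilinear interpolation weights. The sketch below illustrates this construction; it is not the PIVlab pipeline. Rows whose displaced source point leaves the image are left empty, mirroring the constraint that out-of-view pixels receive no deformation.

```python
import numpy as np
from scipy.sparse import csr_matrix

def bilinear_warp_operator(shape, disp):
    """Assemble sparse U so that (U @ img.ravel()) samples img at the
    displaced sub-pixel locations.  disp has shape (2, H, W): per-pixel
    (dy, dx).  Rows whose source point falls outside the image are left
    zero, so those pixels receive no deformation contribution."""
    H, W = shape
    yy, xx = np.mgrid[0:H, 0:W].astype(float)
    sy, sx = yy + disp[0], xx + disp[1]            # source coordinates
    inside = (sy >= 0) & (sy <= H - 1) & (sx >= 0) & (sx <= W - 1)
    r = np.flatnonzero(inside)
    y0 = np.floor(sy.ravel()[r]).astype(int).clip(0, H - 2)
    x0 = np.floor(sx.ravel()[r]).astype(int).clip(0, W - 2)
    fy = sy.ravel()[r] - y0                        # fractional offsets
    fx = sx.ravel()[r] - x0
    rows, cols, vals = [], [], []
    for dy, dx, w in [(0, 0, (1 - fy) * (1 - fx)), (0, 1, (1 - fy) * fx),
                      (1, 0, fy * (1 - fx)),       (1, 1, fy * fx)]:
        rows.append(r)
        cols.append((y0 + dy) * W + (x0 + dx))
        vals.append(w)
    return csr_matrix((np.concatenate(vals),
                       (np.concatenate(rows), np.concatenate(cols))),
                      shape=(H * W, H * W))
```

With this, `U @ frame.ravel()` warps a frame and `U.T @ img.ravel()` applies the adjoint used after denoising.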
\begin{figure} \begin{center} \begin{tabular}{cc} standard IFFT reconstruction & ISAM \\ \includegraphics[height=6cm]{valid_full_raw} &\includegraphics[height=6cm]{valid_full_isam_2} \end{tabular} \end{center} \caption[example] { \label{fig:sample_latex} Latex micro-bead specimen, and its ISAM reconstruction.} \end{figure} The regions from Figure~\ref{fig:sample_latex} are shown in Figure~\ref{fig:latex_roi} for various methods. These are: a single temporal frame; the B-scan average of 20 frames; the average of the 5 temporal frames after linear warping; and the result of our proposed motion compensated denoising using BM4D. \begin{figure}[htb!] \begin{center} \begin{tabular}{c|c|c|c|c|} & single frame & 20 frame average & 5 frame warped average & 5 frame warped denoised \\ \hline \rotatebox{90}{ROI 1}& \includegraphics[height=3.5cm]{valid_raw}& \includegraphics[height=3.5cm]{valid_oracle}& \includegraphics[height=3.5cm]{valid_avg}& \includegraphics[height=3.5cm]{valid_denoise}\\ \hline \rotatebox{90}{ROI 2}& \includegraphics[height=3.5cm]{valid_b_raw}& \includegraphics[height=3.5cm]{valid_b_oracle}& \includegraphics[height=3.5cm]{valid_b_avg}& \includegraphics[height=3.5cm]{valid_b_denoise}\\ \hline \end{tabular} \end{center} \caption[example] {\label{fig:latex_roi} Results within ROIs shown in Figure~\ref{fig:sample_latex}, showing denoising effect.} \end{figure} From Figure~\ref{fig:latex_roi}, the effect of our approach can be visualized. The single frame exhibits a significant amount of noise compared to the 20 frame average, especially in ROI 2. Simply applying B-scan averaging to warped frames results in a very poor image, where the brightest features are preserved, but most speckle structure is lost. In the case of the denoised images, ROI 1 appears very similar to the ground truth, with good preservation of speckle structure.
In ROI 2, the micro-beads are well preserved, and the image is clearer than the single frame, albeit with some block-like artifacts. \begin{figure}[htb!] \begin{center} \begin{tabular}{c|c|c|c|c|} & displacement 1 & displacement 2 & displacement 3 & displacement 4 \\ \hline \rotatebox{90}{initial estimate}& \includegraphics[height=3.5cm]{valid_f1_r1}& \includegraphics[height=3.5cm]{valid_f2_r1}& \includegraphics[height=3.5cm]{valid_f3_r1}& \includegraphics[height=3.5cm]{valid_f4_r1}\\ \hline \rotatebox{90}{denoised estimate}& \includegraphics[height=3.5cm]{valid_f4_r2}& \includegraphics[height=3.5cm]{valid_f2_r2}& \includegraphics[height=3.5cm]{valid_f3_r2}& \includegraphics[height=3.5cm]{valid_f1_r2}\\ \hline \end{tabular} \end{center} \caption[example] {\label{fig:result_latex_flow} Estimated displacements in ROI 2, before and after motion compensated denoising.} \end{figure} Displacements between the 5 temporal frames are shown in Figure~\ref{fig:result_latex_flow} within ROI 2, which is a challenging region due to its high level of noise. The top row shows the initial local displacement estimates from the preprocessed images, whilst the bottom row shows the re-estimated displacements after motion compensated denoising. It is clear that the uniformity of the motion estimation is dramatically enhanced in the denoised case. \begin{table} \centering \begin{tabular}{c|c|c|c} method & image RMSE & x-displacement RMSE & y-displacement RMSE \\ \hline original & 0.203 & 1.03 & 0.489 \\ warped mean & 6.10 & - & - \\ proposed framework & \textbf{0.137} & \textbf{0.909} & \textbf{0.329} \end{tabular} \caption{\label{tab:results} Quantitative results from reconstructions. All values are root-mean-squared errors (RMSE) against the fully sampled ISAM reference image.} \end{table} Quantitative results are given in Table~\ref{tab:results}.
Here the root-mean-squared error (RMSE) is calculated against the ground-truth image and across all local displacement estimates, to evaluate both criteria. Firstly, it is evident that applying a naive average throughout the warped frames heavily degrades the image fidelity. On the other hand, applying BM4D throughout these frames reduces the error by 33\% against the ground truth. In terms of displacement estimation, our approach also offers a significant gain in accuracy of 12\% and 33\% in the lateral and axial dimensions respectively. From the qualitative and quantitative results, we have shown that this proposed approach successfully enhances both image quality and displacement estimation simultaneously. \subsection{Complex Displacement Example} In the second case, we wish to evaluate the displacement from a more complex motion, which would be applicable to elastography. In this case, we apply a uniaxial stress on a sample of PDMS resin, by means of a syringe attached to a pressure controller --- shown in Figure~\ref{fig:setup}. \begin{figure} \begin{center} \begin{tabular}{cc} \includegraphics[height=7cm]{syringe.png} &\includegraphics[height=5cm]{straight_oct} \end{tabular} \end{center} \caption[example] { \label{fig:setup} Setup with syringe for loading sample, along with standard IFFT reconstruction, to exhibit artifacts from the scan.} \end{figure} In this setup, the sample is placed in a glass Petri dish, and imaged from below. The first glass interface leads to a large artifact after applying IFFT --- visible in Figure~\ref{fig:setup} --- due to only the real part of the interferometric signal being recorded. Due to a dispersion mismatch between the reference and sample arms, this appears as a blurred fringe, and can be removed by cancelling the contribution from the glass interference\cite{Hofer2009}, located in the negative optical delay. As in the stage example, we also apply ISAM resampling to complete the preprocessing.
As with the validation experiment in Section~\ref{sec:valid}, we record OCT data from 5 temporal positions, in which we apply a differing force to the syringe. Ideally, this will induce a smooth non-linear deformation map throughout the sample, as it is elastically loaded. The displacement estimates from the first pass, and after motion compensated denoising, are shown in Figure~\ref{fig:result_squash_flow}. \begin{figure}[htb!] \begin{center} \begin{tabular}{c|c|c|c|c|} & displacement 1 & displacement 2 & displacement 3 & displacement 4 \\ \hline \rotatebox{90}{initial estimate}& \includegraphics[height=3.5cm]{flow1_round1}& \includegraphics[height=3.5cm]{flow2_round1}& \includegraphics[height=3.5cm]{flow3_round1}& \includegraphics[height=3.5cm]{flow4_round1}\\ \hline \rotatebox{90}{denoised estimate}& \includegraphics[height=3.5cm]{flow1_round2}& \includegraphics[height=3.5cm]{flow2_round2}& \includegraphics[height=3.5cm]{flow3_round2}& \includegraphics[height=3.5cm]{flow4_round2}\\ \hline \end{tabular} \end{center} \caption[example] {\label{fig:result_squash_flow} Local displacement estimates from loaded sample, before and after motion compensated denoising.} \end{figure} There are a couple of observations that can be made from Figure~\ref{fig:result_squash_flow}. Firstly, the level of noise in the bottom row is significantly reduced, whilst the structure of the scatterers is well preserved. Secondly, although there are several spurious displacement estimations in the top row, these are on the whole corrected in the bottom row, which has a much smoother deformation field with the same underlying pattern.
Future work includes extending the method to use phase information available from SD-OCT, to increase the sub-pixel accuracy, and evaluating it with mechanical testing experiments. \section*{ACKNOWLEDGMENTS} The authors sincerely thank Graham Anderson from the University of Edinburgh, for assistance creating the beaded gel phantom. This work was supported by the UK Engineering and Physical Sciences Research Council (EPSRC) MechAScan project: EP/P031250/1. \bibliographystyle{apalike}
\section{Introduction} \label{sec:into} The axial, scalar and tensor charges of the nucleon are needed to interpret the results of many experiments and probe new physics. In this paper, we extend the calculations presented in Refs.~\cite{Bhattacharya:2015wna,Bhattacharya:2015esa,Bhattacharya:2016zcn} by analyzing eleven ensembles of $2+1+1$ flavors of highly improved staggered quarks (HISQ)~\cite{Follana:2006rc} generated by the MILC collaboration~\cite{Bazavov:2012xda}. These now include a second physical mass ensemble at $a=0.06$~fm, and an ensemble with $a=0.15$~fm and $M_\pi \approx 310$~MeV. We have also increased the statistics significantly on six other ensembles using the truncated solver with bias correction method~\cite{Bali:2009hu,Blum:2012uh}. The resulting high-statistics data provide better control over various sources of systematic errors, in particular the two systematics: (i) excited-state contamination (ESC) in the extraction of the ground-state matrix elements of the various quark bilinear operators and (ii) the reliability of the chiral-continuum-finite volume (CCFV) extrapolation used to obtain the final results that can be compared to phenomenological and experimental values. With improved simultaneous CCFV fits, we obtain $g_A^{u-d} =1.218(25)(30)$, $g_S^{u-d} =1.022(80)(60)$ and $g_T^{u-d} = 0.989(32)(10)$ for the isovector charges in the $\overline{MS}$ scheme at 2~GeV. The first error includes statistical and all systematic uncertainties except that due to the ansatz used for the final CCFV extrapolation, which is given by the second error estimate. We also update our estimates for the connected contributions to the flavor diagonal charges $g_{A,T}^{u}$ and $g_{A,T}^{d} $, and the isoscalar combination $g_T^{u+d} $. Throughout the paper, we present results for the charges of the proton, which by convention are called nucleon charges in the literature. 
From these, results for the neutron, in our isosymmetric formulation with $m_u = m_d$, are obtained by the $u \leftrightarrow d$ interchange. The axial charge, $g_A^{u-d}$, is an important parameter that encapsulates the strength of weak interactions of nucleons. It enters in many analyses of nucleon structure and of the Standard Model (SM) and beyond-the-SM (BSM) physics. For example, it impacts the extraction of the Cabibbo-Kobayashi-Maskawa (CKM) matrix element $V_{ud}$, tests the unitarity of the CKM matrix, and is needed for the analysis of neutrinoless double-beta decay. Also, the rate of proton-proton fusion, the first step in the thermonuclear reaction chains that power low-mass hydrogen-burning stars like the Sun, is sensitive to it. The current best determination of the ratio of the axial to the vector charge, $g_A/g_V$, comes from measurements of neutron beta decay using polarized ultracold neutrons (UCN) by the UCNA collaboration, $1.2772(20)$~\cite{Mendenhall:2012tz,Brown:2017mhw}, and by PERKEO II, $1.2761{}^{+14}_{-17}$~\cite{Mund:2012fq}. Note that, in the SM, $g_V=1$ up to second order corrections in isospin breaking~\cite{Ademollo:1964sr,Donoghue:1990ti} as a result of the conservation of the vector current. Given the accuracy with which $g_A^{u-d}$ has been measured in experiments, our goal is to calculate it directly with $O(1\%)$ accuracy using lattice QCD. The result presented in this paper, $g_A^{u-d}=1.218(25)(30)$, is, however, about $1.5\sigma$ ($5\%$) smaller than the experimental value. In Sec.~\ref{sec:comparison}, we compare with the result $g_A^{u-d} = 1.271(13)$ by the CalLat collaboration. We show that the data on seven HISQ ensembles analyzed by both collaborations agree within $1\sigma$ and the final difference is due to the chiral and continuum extrapolation: the fits are weighted differently by the data points that are not common.
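The quoted $1.5\sigma$ ($5\%$) tension follows from adding the two quoted lattice errors in quadrature with the experimental uncertainty; a quick arithmetic check:

```python
import math

g_latt, err_stat, err_ccfv = 1.218, 0.025, 0.030   # this work: 1.218(25)(30)
g_exp, err_exp = 1.2772, 0.0020                    # UCNA value of g_A/g_V

sigma = math.sqrt(err_stat**2 + err_ccfv**2 + err_exp**2)
n_sigma = (g_exp - g_latt) / sigma
rel = (g_exp - g_latt) / g_exp

print(round(n_sigma, 1), f"{100 * rel:.0f}%")   # 1.5 5%
```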
Based on the analysis of the size of the various systematics in Sec.~\ref{sec:errors}, and on the comparison with the CalLat calculation, we conclude that our analysis of errors is realistic. Our goal, therefore, is to continue to quantify and control the various sources of errors to improve precision. The Standard Model does not contain fundamental scalar or tensor interactions. However, loop effects and new interactions at the TeV scale can generate effective interactions at the hadronic scale that can be probed in decays of neutrons, and at the TeV scale itself at the LHC. Such scalar and tensor interactions contribute to the helicity-flip parameters $b$ and $b_\nu$ in the neutron decay distribution~\cite{Bhattacharya:2011qm}. Thus, by combining the calculation of the scalar and tensor charges with the measurements of $b$ and $b_\nu$ in low energy experiments, one can put constraints on novel scalar and tensor interactions at the TeV scale as described in Ref.~\cite{Bhattacharya:2011qm}. To optimally bound such scalar and tensor interactions using measurements of $b$ and $b_\nu$ parameters in planned experiments targeting $10^{-3}$ precision~\cite{abBA,WilburnUCNB,Pocanic:2008pu}, the level of precision required in $g_S^{u-d}$ and $g_T^{u-d}$ is at the $10\%$ level as explained in Refs.~\cite{Bhattacharya:2011qm,abBA,WilburnUCNB,Pocanic:2008pu}. Future higher-precision measurements of $b$ and $b_\nu$ would require correspondingly higher-precision calculations of the matrix elements to place even more stringent bounds on TeV-scale couplings. In a recent work~\cite{Bhattacharya:2015wna}, we showed that lattice-QCD calculations have reached a level of control over all sources of systematic errors needed to yield the tensor charge with the required precision. The errors in the scalar 3-point functions are about a factor of 2 larger.
In this paper we show that by using the truncated solver method with bias correction~\cite{Bali:2009hu,Blum:2012uh} (for brevity called TSM henceforth) to obtain high statistics on all ensembles, we are also able to control the uncertainty in $g_S^{u-d}$ to the required 10\% level. These higher-statistics results also improve upon our previous estimates of the axial and the tensor charges. The matrix elements of the flavor-diagonal tensor operators are needed to quantify the contributions of the $u,\ d, \ s, \ c$ quark electric dipole moments (EDM) to the neutron electric dipole moment (nEDM)~\cite{Bhattacharya:2015wna,Pospelov:2005pr}. The nEDM is a very sensitive probe of new sources of $T$ and $CP$ violation that arise in most extensions of the Standard Model designed to explain nature at the TeV scale. Planned experiments aim to reduce the current bound on the nEDM of $2.9 \times 10^{-26}\ e$~cm~\cite{Baker:2006ts} to around $ 10^{-28}\ e$~cm. Improving the bound will put stringent constraints on many BSM theories provided the matrix elements of novel $CP$-violating interactions, of which the quark EDM is one, are calculated with the required precision. In Refs.~\cite{Bhattacharya:2015wna,Bhattacharya:2016zcn}, we showed that the disconnected contributions are negligible, so we update the connected contributions to the flavor diagonal tensor charges for the light $u$ and $d$ quarks that are taken to be degenerate. The tensor charges are also extracted as the zeroth moment of the transversity distributions. These are measured in many experiments, including Drell-Yan and semi-inclusive deep inelastic scattering (SIDIS), and describe the net transverse polarization of quarks in a transversely polarized nucleon. There exists an active program at Jefferson Lab (JLab) to measure them~\cite{Dudek:2012vr}.
It is, however, not straightforward to extract the transversity distributions from the data taken over a limited range of $Q^2$ and Bjorken $x$; consequently, additional phenomenological modeling is required. Lattice-QCD results for $g_T^{u}$, $g_T^{d}$, $g_T^{s}$ and $g_T^{u-d}$ are the most accurate at present, as already discussed in Ref.~\cite{Bhattacharya:2016zcn}. Future experiments at JLab and other experimental facilities worldwide will significantly improve the extraction of the transversity distributions, and, together with accurate lattice-QCD calculations of the tensor charges, elucidate the structure of the nucleon in terms of quarks and gluons. The methodology for calculating the isovector charges in an isospin symmetric theory, that is, measuring the contribution to the matrix elements of the insertion of the zero-momentum bilinear quark operators in one of the three valence quarks in the nucleon, is well developed~\cite{Bhattacharya:2015wna,Bhattacharya:2015esa,Bhattacharya:2016zcn,Lin:2012ev,Syritsyn:2014saa,Constantinou:2014tga}. Calculation of the flavor-diagonal charges is similar, except that it receives additional contributions from contractions of the operator as a vacuum quark loop that interacts with the nucleon propagator through the exchange of gluons. In Ref.~\cite{Bhattacharya:2015wna}, we showed that these contributions to $g_T^{u,d,s}$ are small, $O(0.01)$, and consistent with zero within errors. Thus, within current error estimates, the connected contributions alone provide reliable estimates for the flavor-diagonal charges $g_{T}^{u,d} $ and the isoscalar combination $g_T^{u+d} $. A detailed analysis of disconnected contributions to the axial, scalar and tensor charges will be presented in a separate paper. This paper is organized as follows. In Sec.~\ref{sec:Methodology}, we describe the parameters of the gauge ensembles analyzed and the lattice methodology.
The fits used to isolate excited-state contamination are described in Sec.~\ref{sec:excited}. The renormalization of the operators is discussed in Sec.~\ref{sec:renorm}. Our final results for the isovector charges and the connected parts of the flavor-diagonal charges are presented in Sec.~\ref{sec:results}. Our estimation of errors is revisited in Sec.~\ref{sec:errors}, and a comparison with previous works is given in Sec.~\ref{sec:comparison}. In Sec.~\ref{sec:est}, we provide constraints on novel scalar and tensor interactions at the TeV scale using our new estimates of the charges and precision beta decay experiments and compare them to those from the LHC. Our final conclusions are presented in Sec.~\ref{sec:conclusions}. \section{Lattice Methodology} \label{sec:Methodology} \begin{table*}[tbp] \begin{center} \renewcommand{\arraystretch}{1.2} \begin{ruledtabular} \begin{tabular}{l|ccc|cc|cccc} Ensemble ID & $a$ (fm) & $M_\pi^{\rm sea}$ (MeV) & $M_\pi^{\rm val}$ (MeV) & $L^3\times T$ & $M_\pi^{\rm val} L$ & $\tau/a$ & $N_\text{conf}$ & $N_{\rm meas}^{\rm HP}$ & $N_{\rm meas}^{\rm LP}$ \\ \hline $a15m310 $ & 0.1510(20) & 306.9(5) & 320.6(4.3) & $16^3\times 48$ & 3.93 & $\{5,6,7,8,9\}$ & 1917 & 7668 & 122,688 \\ \hline $a12m310 $ & 0.1207(11) & 305.3(4) & 310.2(2.8) & $24^3\times 64$ & 4.55 & $\{8,10,12\}$ & 1013 & 8104 & 64,832 \\ $a12m220S$ & 0.1202(12) & 218.1(4) & 225.0(2.3) & $24^3\times 64$ & 3.29 & $\{8, 10, 12\}$ & 946 & 3784 & 60,544 \\ $a12m220 $ & 0.1184(10) & 216.9(2) & 227.9(1.9) & $32^3\times 64$ & 4.38 & $\{8, 10, 12\}$ & 744 & 2976 & 47,616 \\ $a12m220L_O$ & 0.1189(09) & 217.0(2) & 227.6(1.7) & $40^3\times 64$ & 5.49 & $\{8,10,12,14\}$ & 1010 & 8080 & 68,680 \\ $a12m220L$ & & & & & & $\{8,10,12,14\}$ & 1000 & 4000 & 128,000 \\ \hline $a09m310 $ & 0.0888(08) & 312.7(6) & 313.0(2.8) & $32^3\times 96$ & 4.51 & $\{10,12,14,16\}$ & 2263 & 9052 & 114,832 \\ $a09m220 $ & 0.0872(07) & 220.3(2) & 225.9(1.8) & $48^3\times 96$ & 4.79 & $\{10,12,14,16\}$ & 964 
& 7712 & 123,392 \\ $a09m130 $ & 0.0871(06) & 128.2(1) & 138.1(1.0) & $64^3\times 96$ & 3.90 & $\{10,12,14\}$ & 883 & 7064 & 84,768 \\ $a09m130W$ & & & & & & $\{8,10,12,14,16\}$ & 1290 & 5160 & 165,120 \\ \hline $a06m310 $ & 0.0582(04) & 319.3(5) & 319.6(2.2) & $48^3\times 144$& 4.52 & $\{16,20,22,24\}$ & 1000 & 8000 & 64,000 \\ $a06m310W$ & & & & & & $\{18,20,22,24\}$ & 500 & 2000 & 64,000 \\ $a06m220 $ & 0.0578(04) & 229.2(4) & 235.2(1.7) & $64^3\times 144$& 4.41 & $\{16,20,22,24\}$ & 650 & 2600 & 41,600 \\ $a06m220W$ & & & & & & $\{18,20,22,24\}$ & 649 & 2596 & 41,546 \\ $a06m135 $ & 0.0570(01) & 135.5(2) & 135.6(1.4) & $96^3\times 192$& 3.7 & $\{16,18,20,22\}$ & 675 & 2700 & 43,200 \\ \end{tabular} \end{ruledtabular} \caption{Parameters, including the Goldstone pion mass $M_\pi^{\rm sea}$, of the eleven 2+1+1-flavor HISQ ensembles generated by the MILC collaboration and analyzed in this study are quoted from Ref.~\cite{Bazavov:2012xda}. All fits are made versus $M_\pi^{\rm val}$ and finite-size effects are analyzed in terms of $M_\pi^{\rm val} L$. Estimates of $M_\pi^{\rm val}$, the clover-on-HISQ pion mass, are the same as given in Ref.~\cite{Bhattacharya:2015wna} and the error is governed mainly by the uncertainty in the lattice scale. In the last four columns, we give, for each ensemble, the values of the source-sink separation $\tau$ used in the calculation of the three-point functions, the number of configurations analyzed, and the number of measurements made using the high precision (HP) and the low precision (LP) truncation of the inversion of the clover operator. The second set of calculations, $a09m130W$, $a06m310W$ and $a06m220W$, was done with the larger smearing size $\sigma$ that is given in Table~\protect\ref{tab:cloverparams}.
The new $a12m220L$ simulations replace $a12m220L_O$ for reasons explained in the text.} \label{tab:ens} \end{center} \end{table*} \begin{table}[htbp] \centering \begin{ruledtabular} \begin{tabular}{l|lc|c|c} \multicolumn1c{ID} & \multicolumn1c{$m_l$} & $c_{\text{SW}}$ & Smearing & RMS smearing \\ & & & Parameters & radius \\ \hline $a15m310 $ & $-0.0893$ & 1.05094 & \{4.2, 36\} & 4.69 \\ \hline $a12m310 $ & $-0.0695$ & 1.05094 & \{5.5, 70\} & 5.96 \\ $a12m220S$ & $-0.075$ & 1.05091 & \{5.5, 70\} & 5.98 \\ $a12m220 $ & $-0.075$ & 1.05091 & \{5.5, 70\} & 5.96 \\ $a12m220L$ & $-0.075$ & 1.05091 & \{5.5, 70\} & 5.96 \\ \hline $a09m310 $ & $-0.05138$ & 1.04243 & \{7.0,100\} & 7.48 \\ $a09m220 $ & $-0.0554$ & 1.04239 & \{7.0,100\} & 7.48 \\ $a09m130 $ & $-0.058$ & 1.04239 & \{5.5, 70\} & 6.11 \\ $a09m130W$ & $-0.058$ & 1.04239 & \{7.0,100\} & 7.50 \\ \hline $a06m310 $ & $-0.0398$ & 1.03493 & \{6.5, 70\} & 7.22 \\ $a06m310W $ & $-0.0398$ & 1.03493 & \{12, 250\} & 12.19 \\ $a06m220 $ & $-0.04222$ & 1.03493 & \{5.5, 70\} & 6.22 \\ $a06m220W $ & $-0.04222$ & 1.03493 & \{11, 230\} & 11.24 \\ $a06m135 $ & $-0.044$ & 1.03493 & \{9.0,150\} & 9.56 \\ \end{tabular} \end{ruledtabular} \caption{The parameters used in the calculation of the clover propagators. The hopping parameter for the light quarks, $\kappa_l$, in the clover action is given by $2\kappa_{l} = 1/(m_{l}+4)$. $m_l$ is tuned to achieve $M_\pi^{\rm val} \approx M_\pi^\text{sea}$. The parameters used to construct Gaussian smeared sources, $\{\sigma, N_{\text{KG}}\}$, are given in the fourth column where $N_{\text{KG}}$ is the number of applications of the Klein-Gordon operator and the width of the smearing is controlled by the coefficient $\sigma$, both in Chroma convention~\cite{Edwards:2004sx}. The resulting root-mean-square radius of the smearing, defined as $\sqrt{\int r^2 \sqrt{S^\dag S} dr /\int \sqrt{S^\dag S} dr} $, is given in the last column. 
} \label{tab:cloverparams} \end{table} The parameters of the eleven ensembles used in the analysis are summarized in Table~\ref{tab:ens}. They cover a range of lattice spacings ($0.06 \, \raisebox{-0.7ex}{$\stackrel{\textstyle <}{\sim}$ } a \, \raisebox{-0.7ex}{$\stackrel{\textstyle <}{\sim}$ } 0.15$~fm), pion masses ($135 \, \raisebox{-0.7ex}{$\stackrel{\textstyle <}{\sim}$ } M_\pi \, \raisebox{-0.7ex}{$\stackrel{\textstyle <}{\sim}$ } 320$~MeV) and lattice sizes ($3.3\, \raisebox{-0.7ex}{$\stackrel{\textstyle <}{\sim}$ } M_\pi L\, \raisebox{-0.7ex}{$\stackrel{\textstyle <}{\sim}$ } 5.5$) and were generated using 2+1+1 flavors of HISQ fermions~\cite{Follana:2006rc} by the MILC collaboration~\cite{Bazavov:2012xda}. Most of the details of the methodology, and the strategies for the calculations and the analyses are the same as described in Refs.~\cite{Bhattacharya:2015wna,Bhattacharya:2016zcn}. Here we will summarize the key points to keep the paper self-contained and highlight the new features and analysis. We construct the correlation functions needed to calculate the matrix elements using Wilson-clover fermions on these HISQ ensembles. Such a mixed-action, clover-on-HISQ, formulation is nonunitary and suffers from the problem of exceptional configurations at small, but {\it a priori} unknown, quark masses. We monitor all correlation functions for such exceptional configurations in our statistical samples. For example, evidence of exceptional configurations on three $a15m310$ lattices prevents us from analyzing ensembles with smaller $M_\pi$ at $a = 0.15$~fm using the clover-on-HISQ approach. The same holds for the physical mass ensemble $a12m130$. The parameters used in the construction of the 2- and 3-point functions with clover fermions are given in Table~\ref{tab:cloverparams}.
The Sheikholeslami-Wohlert coefficient~\cite{Sheikholeslami:1985ij} used in the clover action is fixed to its tree-level value with tadpole improvement, $c_\text{sw} = 1/u_0^3$, where $u_0$ is the fourth root of the plaquette expectation value calculated on the hypercubic (HYP) smeared~\cite{Hasenfratz:2001hp} HISQ lattices. The masses of light clover quarks were tuned so that the clover-on-HISQ pion masses, $M^{\rm val}_\pi$, match the HISQ-on-HISQ Goldstone ones, $M_\pi^{\rm sea}$. Both estimates are given in Table~\ref{tab:ens}. All fits in $M_\pi^2$ to study the chiral behavior are made using the clover-on-HISQ $M^{\rm val}_{\pi}$ since the correlation functions, and thus the chiral behavior of the charges, have a greater sensitivity to it. Henceforth, for brevity, we drop the superscript and denote the clover-on-HISQ pion mass as $M_\pi$. Performing fits using the HISQ-on-HISQ values, ${M_\pi^{\rm sea}}$, does not change the estimates significantly. The highlights of the current work, compared to the results presented in Ref.~\cite{Bhattacharya:2016zcn}, are as follows: \begin{itemize} \item The addition of a second physical pion mass ensemble $a06m135$ and the coarse $a15m310$ ensemble. \item The new $a12m220L$ simulations replace the older $a12m220L_O$ data. In the $a12m220L_O$ calculation, the HP analysis had only been done for $\tau=10$, while in the new $a12m220L$ data the HP calculation has been done for all values of source-sink separation $\tau$, and the bias correction applied. We have also increased the number of LP measurements on each configuration, and both HP and LP source points are chosen randomly within and between configurations.
Even though the results from the two calculations are consistent, as shown in Tables~\ref{tab:2ptmulti},~\ref{tab:results3bareu-d} and~\ref{tab:results3bareu+d}, for the two reasons stated above we will, henceforth, only use the $a12m220L$ data in the analysis of the charges and other quantities in this and future papers. \item All ensembles are analyzed using the TSM method with much higher statistics as listed in Table~\ref{tab:ens}. Our implementation of the TSM method is described in Refs.~\cite{Bhattacharya:2015wna,Yoon:2016dij}. \item The new high statistics data for ensembles $a09m310$, $a09m220$ and $a09m130W$ were generated using the smearing parameter $\sigma=7$. This corresponds to an r.m.s. radius of $\approx 7.5$ in lattice units, or roughly 0.66~fm. As discussed in Sec.~\ref{sec:excited} and shown in Figs.~\ref{fig:gA2v3a12}--\ref{fig:gT2v3a06}, increasing $\sigma$ from $5.5$ to $7.0$ reduces the ESC at a given source-sink separation $\tau$.\looseness-1 \item The two-point correlation functions are analyzed keeping up to four states in the spectral decomposition. Previous work was based on keeping two states.\looseness-1 \item The three-point functions are analyzed keeping up to three states in their spectral decomposition. Previous work was based on keeping two states. \end{itemize} We find that the new higher-precision data significantly improve the ESC fits and the final combined CCFV fit used to obtain results in the limits $a \to 0$, the pion mass $M_\pi \to 135$~MeV and the lattice volume $M_\pi L \to \infty$.
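To make the structure of such a combined fit concrete, the sketch below performs a least-squares CCFV extrapolation on synthetic data. The ansatz $g(a,M_\pi,L) = c_0 + c_1 a + c_2 M_\pi^2 + c_3 M_\pi^2 e^{-M_\pi L}$ is a generic illustrative form, not necessarily the one used in our analysis, and all numerical values are invented for illustration:

```python
import numpy as np

# Hypothetical CCFV ansatz (illustrative form only):
#   g(a, Mpi, L) = c0 + c1*a + c2*Mpi^2 + c3*Mpi^2*exp(-Mpi*L)
def design_matrix(a, mpi, mpi_l):
    return np.column_stack([np.ones_like(a), a, mpi**2,
                            mpi**2 * np.exp(-mpi_l)])

# Synthetic data points: a in fm, Mpi in GeV, Mpi*L dimensionless.
rng = np.random.default_rng(0)
a     = np.array([0.15, 0.12, 0.12, 0.09, 0.09, 0.06, 0.06])
mpi   = np.array([0.31, 0.31, 0.22, 0.31, 0.135, 0.31, 0.135])
mpi_l = np.array([3.9, 4.6, 4.4, 4.5, 3.9, 4.5, 3.7])
c_true = np.array([1.27, -0.30, 0.50, -2.0])
g_data = design_matrix(a, mpi, mpi_l) @ c_true \
         + 0.002 * rng.standard_normal(a.size)

# Least-squares fit, then evaluation at the physical point:
# a -> 0, Mpi -> 0.135 GeV, Mpi*L -> infinity (the exp term drops out).
c_fit, *_ = np.linalg.lstsq(design_matrix(a, mpi, mpi_l), g_data, rcond=None)
g_phys = c_fit[0] + c_fit[2] * 0.135**2
```

Because the ansatz is linear in the coefficients, a single linear least-squares solve determines all four simultaneously; in the actual analysis the data are correlated and the fits are done with the full error propagation.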
\subsection{Correlation Functions} \label{sec:CorrelationFunctions} We use the following interpolating operator $\chi$ to create$/$annihilate the nucleon state: \begin{align} \chi(x) = \epsilon^{abc} \left[ {q_1^a}^T(x) C \gamma_5 \frac{(1 \pm \gamma_4)}{2} q_2^b(x) \right] q_1^c(x) \,, \label{eq:nucl_op} \end{align} with $\{a, b, c\}$ labeling the color indices, $C=\gamma_0 \gamma_2$ the charge conjugation matrix, and $q_1$ and $q_2$ denoting the two different flavors of light quarks. The nonrelativistic projection $(1 \pm \gamma_4)/2$ is inserted to improve the signal, with the plus and minus signs applied to the forward and backward propagation in Euclidean time, respectively~\cite{Gockeler:1995wg}. At zero momentum, this operator couples only to the spin-$\frac{1}{2}$ state. The zero momentum 2-point and 3-point nucleon correlation functions are defined as \begin{align} {\mathbf C}_{\alpha \beta}^{\text{2pt}}(\tau) &= \sum_{\mathbf{x}} \langle 0 \vert \chi_\alpha(\tau, \mathbf{x}) \overline{\chi}_\beta(0, \mathbf{0}) \vert 0 \rangle \,, \label{eq:corr_fun2} \\ {\mathbf C}_{\Gamma; \alpha \beta}^{\text{3pt}}(t, \tau) &= \sum_{\mathbf{x}, \mathbf{x'}} \langle 0 \vert \chi_\alpha(\tau, \mathbf{x}) \mathcal{O}_\Gamma(t, \mathbf{x'}) \overline{\chi}_\beta(0, \mathbf{0}) \vert 0 \rangle \,, \label{eq:corr_fun3} \end{align} where $\alpha$ and $\beta$ are spinor indices. The source is placed at time slice $0$, $\tau$ is the sink time slice, and $t$ is an intermediate time slice at which the local quark bilinear operator $\mathcal{O}_\Gamma^q(x) = \bar{q}(x) \Gamma q(x)$ is inserted. The Dirac matrix $\Gamma$ is $1$, $\gamma_4$, $\gamma_i \gamma_5$ and $\gamma_i \gamma_j$ for scalar (S), vector (V), axial (A) and tensor (T) operators, respectively. In this work, subscripts $i$ and $j$ on gamma matrices run over $\{1,2,3\}$, with $i<j$. 
The nucleon charges $g_\Gamma^q$ are obtained from the ground state matrix element $ \langle N(p, s) \vert \mathcal{O}_\Gamma^q \vert N(p, s) \rangle$, which, in turn, is extracted using the spectral decomposition of the 2- and 3-point correlation functions. They are related as \begin{align} \langle N(p, s) \vert \mathcal{O}_\Gamma^q \vert N(p, s) \rangle = g_\Gamma^q \bar{u}_s(p) \Gamma u_s(p) \end{align} with spinors satisfying \begin{equation} \sum_s u_s(p) \bar{u}_s(p) = \frac{E_{\mathbf{p}} \gamma_4 - i\vec{\gamma}\cdot \vec{p} + M_N} {2 E_{\mathbf{p}}}\,. \end{equation} To extract the charges, we construct the projected 2- and 3-point correlation functions \begin{align} C^{\text{2pt}}(t) & = {\langle \Tr [ \mathcal{P}_\text{2pt} {\mathbf C}^{\text{2pt}}(t) ] \rangle} \label{eq:2pt_proj} \\ C_{\Gamma}^{\text{3pt}}(t, \tau) & = \langle \Tr [ \mathcal{P}_{\rm 3pt} {\mathbf C}_{\Gamma}^{\text{3pt}}(t, \tau) ]\rangle \, . \label{eq:3pt_proj} \end{align} The operator $\mathcal{P}_\text{2pt} = (1 \pm \gamma_4)/2$ is used to project on to the positive parity contribution for the nucleon propagating in the forward (backward) direction. For the connected 3-point contributions, $\mathcal{P}_{\rm 3pt} = \mathcal{P}_\text{2pt}(1+i\gamma_5\gamma_3)$ is used. Note that the $C_{\Gamma}^{\text{3pt}}(t, \tau)$ defined in Eq.~\eqref{eq:3pt_proj} becomes zero if $\Gamma$ anticommutes with $\gamma_4$, so only $\Gamma = 1$, $\gamma_4$, $\gamma_i \gamma_5$ and $\gamma_i \gamma_j$ elements of the Clifford algebra survive. The fits used to extract the masses, amplitudes and matrix elements from the 2- and 3-point functions, defined in Eqs.~\eqref{eq:2pt_proj} and~\eqref{eq:3pt_proj}, are discussed in Sec.~\ref{sec:excited}. \subsection{High Statistics Using the Truncated Solver Method} \label{sec:TSM} We have carried out high-statistics calculations on all the ensembles using the truncated solver method with bias correction~\cite{Bali:2009hu,Blum:2012uh}.
In this method, correlation functions are constructed using quark propagators inverted with high precision (HP) and low precision (LP) using the multigrid algorithm. The bias-corrected correlators on each configuration are then given by \begin{align} C^\text{imp}& = \frac{1}{N_\text{LP}} \sum_{i=1}^{N_\text{LP}} C_\text{LP}(\mathbf{x}_i^\text{LP}) \nonumber \\ +& \frac{1}{N_\text{HP}} \sum_{i=1}^{N_\text{HP}} \left[ C_\text{HP}(\mathbf{x}_i^\text{HP}) - C_\text{LP}(\mathbf{x}_i^\text{HP}) \right] \,, \label{eq:2-3pt_TSM} \end{align} where $C_\text{LP}$ and $C_\text{HP}$ are the 2- and 3-point correlation functions constructed using LP and HP quark propagators, respectively, and $\mathbf{x}_i^\text{LP}$ and $\mathbf{x}_i^\text{HP}$ are the source positions for the two kinds of propagator inversion. The LP stopping criterion, defined as $r_{\rm LP} \equiv |{\rm residue}|_{\rm LP}/|{\rm source}|$, varied between $ 10^{-3}$ and $5 \times 10^{-4}$, while that for the HP calculations varied between $10^{-7}$ and $10^{-8}$. As discussed in Ref.~\cite{Yoon:2016dij}, to reduce statistical correlations between measurements, $N_\text{HP}$ maximally separated time slices were selected randomly on each configuration and on each of these time slices, $N_\text{LP}/N_\text{HP}$ LP source positions were again selected randomly. The number of sources, $N_\text{LP}$ and $N_\text{HP}$, used are given in Table~\ref{tab:ens}. An important conclusion based on all our calculations with $O(10^5)$ measurements of nucleon charges and form factors carried out so far (see Refs.~\cite{Bhattacharya:2015wna,Bhattacharya:2016zcn,Yoon:2016dij,Yoon:2016jzj,Rajan:2017lxk}) is that the difference between the LP and the bias-corrected estimates (or the HP) is smaller than the statistical errors. To further reduce the computational cost, we also used the coherent sequential source method discussed in Ref.~\cite{Yoon:2016dij}.
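The bias-corrected average of Eq.~\eqref{eq:2-3pt_TSM} can be sketched as follows, with synthetic numbers standing in for actual correlator measurements; the bias value and source counts are invented for illustration:

```python
import numpy as np

def tsm_estimate(c_lp, c_hp, c_lp_at_hp):
    """Bias-corrected TSM average of Eq. (eq:2-3pt_TSM): the cheap LP
    average plus an HP-LP correction evaluated on the HP source positions."""
    return np.mean(c_lp) + np.mean(c_hp - c_lp_at_hp)

# Toy model: the LP solves carry a systematic offset `bias`; the correction
# term removes it up to statistical noise.
rng = np.random.default_rng(1)
truth, bias = 1.0, 0.05
c_lp = truth + bias + 0.01 * rng.standard_normal(128)  # N_LP sources
c_hp = truth + 0.01 * rng.standard_normal(4)           # N_HP sources
c_lp_at_hp = c_hp + bias                               # LP solves, HP sources

est_biased = np.mean(c_lp)   # plain LP average retains the bias
est = tsm_estimate(c_lp, c_hp, c_lp_at_hp)
```

The correction term needs only $N_\text{HP} \ll N_\text{LP}$ expensive solves because its variance is set by the (small) LP bias rather than by the gauge noise.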
Typically, we constructed four HP or LP sequential sources on four sink time slices, and added them to obtain the coherent source. A single inversion was then performed to construct the coherent sequential propagator. This was then contracted with the four original propagators to construct four measurements of each three-point function. All of these propagators were held in the computer memory to remove the I/O overhead. Our final errors are obtained using a single-elimination jackknife analysis over the configurations, that is, we first construct the average defined in Eq.~\eqref{eq:2-3pt_TSM} on each configuration. Because of this ``binning'' of the data, we do not need to correct the jackknife estimate of the error for correlations between the $N_\text{LP}$ LP measurements per configuration. \section{Excited-State Contamination} \label{sec:excited} To extract the nucleon charges we need to evaluate the matrix elements of the currents between ground-state nucleons. The lattice nucleon interpolating operator given in Eq.~\eqref{eq:nucl_op}, however, couples to the nucleon, all its excitations and multiparticle states with the same quantum numbers. Previous lattice calculations have shown that the ESC can be large. In our earlier works~\cite{Bhattacharya:2015wna,Bhattacharya:2016zcn,Yoon:2016jzj,Yoon:2016dij}, we have shown that this can be controlled to within a few percent using the strategy summarized below. The overlap between the nucleon operator and the excited states in the construction of the two- and three-point functions is reduced by using tuned smeared sources when calculating the quark propagators on the HYP smeared HISQ lattices. We construct gauge-invariant Gaussian smeared sources by applying the three-dimensional Laplacian operator, $\nabla^2$, $N_{\rm GS}$ times, i.e., by applying $(1 + \sigma^2\nabla^2/(4N_{\rm GS}))^{N_{\rm GS}}$ to a delta function source.
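A one-dimensional toy version of this iterated smearing operator illustrates how it builds an approximately Gaussian profile out of a delta-function source. The 1D geometry and the single-component source are simplifications of the actual three-dimensional, gauge-covariant, spin-color--valued construction, so the resulting 1D r.m.s. radius differs from the 3D values quoted in Table~\ref{tab:cloverparams}:

```python
import numpy as np

# 1D toy of the smearing operator (1 + sigma^2 lap/(4 N))^N applied to a
# delta-function source; `smear` applies the single-step operator N times.
def smear(src, sigma, n_iter):
    for _ in range(n_iter):
        lap = np.roll(src, 1) + np.roll(src, -1) - 2.0 * src  # 1D Laplacian
        src = src + sigma**2 * lap / (4.0 * n_iter)
    return src

L = 128
src = np.zeros(L)
src[L // 2] = 1.0                     # delta-function source
s = smear(src, sigma=5.5, n_iter=70)  # {sigma, N} as used on a12m310

# r.m.s. radius of the resulting, approximately Gaussian, profile
r = np.arange(L) - L // 2
rms = np.sqrt(np.sum(r**2 * s) / np.sum(s))
```

For large $N_{\rm GS}$ the operator approaches $e^{\sigma^2\nabla^2/4}$, a heat kernel, which is why the smeared profile is well described by a Gaussian whose width is set by $\sigma$.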
The input smearing parameters $\{\sigma, N_{\rm GS}\}$ for each ensemble are given in Table~\ref{tab:cloverparams} along with the resulting root-mean-square radius defined as $\sqrt{\int r^2 \sqrt{S^\dag S} dr /\int \sqrt{S^\dag S} dr }$. We find that, as a function of distance $r$, the modulus of the sum of the values of the twelve spin-color components at each site, $\sqrt{S^\dag S}$, is well described by a Gaussian, and we use this ansatz to fit the data. The results for the root-mean-square radius given in Table~\ref{tab:cloverparams} show weak dependence on the lattice spacing or the pion mass for fixed $\sigma$, and are roughly equal to the input $\sigma$. Throughout this work, the same smearing is used at the source and sink points. The analysis of the two-point functions, $C^\text{2pt}$, was carried out keeping four states in the spectral decomposition: \begin{align} C^\text{2pt} &(t,\bm{p}) = \nonumber \\ &{|{\cal A}_0|}^2 e^{-M_0 t} + {|{\cal A}_1|}^2 e^{-M_1 t}\,+ \nonumber \\ &{|{\cal A}_2|}^2 e^{-M_2 t} + {|{\cal A}_3|}^2 e^{-M_3 t}\,, \label{eq:2pt} \end{align} where the amplitudes and the masses of the four states are denoted by ${\cal A}_i$ and $M_i$, respectively. In fits including more than two states, the estimates of $M_i$ and the ${\cal A}_i$ for $i \ge 2$ were sensitive to the choice of the starting time slice $t_{\rm min}$, and the fits were not always stable. The fits were stabilized using the empirical Bayesian procedure described in Ref.~\cite{Yoon:2016jzj}. Examples of the quality of the fits are shown in Figs.~22--29 in Ref.~\cite{Rajan:2017lxk}. The new results for masses and amplitudes obtained from 2-, 3- and 4-state fits are given in Table~\ref{tab:2ptmulti}. In Fig.~\ref{fig:2pta09m130}, we compare the efficacy of different smearing sizes in controlling excited states in the 2-point data on the three ensembles $a09m130$, $a06m310$ and $a06m220$. 
In each case, the onset of the plateau with the larger smearing size occurs at earlier Euclidean time $t$; however, the statistical errors at larger $t$ are larger. The more critical observation is that, while the estimates of $M_0$ overlap, the mass gaps $a\Delta M_i$ are significantly different in the two cases. Thus the excited-state parameters are not well determined even with our high-statistics data of $O(10^5)$ measurements. More importantly, except for the $a06m310$ case, the mass gap $a \Delta M_1$ obtained is much larger than $2 a M_\pi$, the value expected if $N\pi\pi$ is the lowest excitation. Based on these observations, we conclude that to resolve the excited-state spectrum will require a coupled-channel analysis with much higher-statistics data. The results of different fits for the bare charges extracted from the three-point data, given in Table~\ref{tab:results3bareu-d}, indicate that these differences in the mass gaps do not significantly affect the extraction of the charges. At the current level of precision, the variations in the values of the mass gaps and the corresponding values for the amplitudes compensate each other in fits to the 2- and 3-point data.\looseness-1 \begin{figure*}[tb] \centering \subfigure{ \includegraphics[width=0.45\linewidth]{figs/meff_smearing} \qquad \includegraphics[width=0.45\linewidth]{figs/meff_smearing_wide} } \caption{Illustration of the data for the nucleon $M_{\rm eff}$ versus Euclidean time $t$ and the results of the 4-state fit to the 2-point correlation function. We compare the data obtained with two different smearing sizes on three ensembles. In the right panel we also show results for the $a06m135$ ensemble. The onset of the plateau in $M_{\rm eff}$ occurs at earlier $t$ with the larger smearing size but the errors at larger $t$ are also larger.
\label{fig:2pta09m130}} \end{figure*} The analysis of the zero-momentum three-point functions, $C_\Gamma^{(3\text{pt})} (t;\tau)$ was carried out retaining three-states in its spectral decomposition: \begin{align} &C^\text{3pt}_{\Gamma}(t_f,t,t_i) = \nonumber\\ & |{\cal A}_0|^2 \langle 0 | \mathcal{O}_\Gamma | 0 \rangle e^{-aM_0 (t_f - t_i)} +{}\nonumber\\ & |{\cal A}_1|^2 \langle 1 | \mathcal{O}_\Gamma | 1 \rangle e^{-aM_1 (t_f - t_i)} +{}\nonumber\\ & |{\cal A}_2|^2 \langle 2 | \mathcal{O}_\Gamma | 2 \rangle e^{-aM_2 (t_f - t_i)} +{}\nonumber\\ & {\cal A}_1{\cal A}_0^* \langle 1 | \mathcal{O}_\Gamma | 0 \rangle e^{-aM_1 (t_f-t)} e^{-aM_0 (t-t_i)} +{}\nonumber\\ & {\cal A}_0{\cal A}_1^* \langle 0 | \mathcal{O}_\Gamma | 1 \rangle e^{-aM_0 (t_f-t)} e^{-aM_1 (t-t_i)} +{}\nonumber\\ & {\cal A}_2{\cal A}_0^* \langle 2 | \mathcal{O}_\Gamma | 0 \rangle e^{-aM_2 (t_f-t)} e^{-aM_0 (t-t_i)} +{}\nonumber\\ & {\cal A}_0{\cal A}_2^* \langle 0 | \mathcal{O}_\Gamma | 2 \rangle e^{-aM_0 (t_f-t)} e^{-aM_2 (t-t_i)} +{}\nonumber\\ & {\cal A}_1{\cal A}_2^* \langle 1 | \mathcal{O}_\Gamma | 2 \rangle e^{-aM_1 (t_f-t)} e^{-aM_2 (t-t_i)} +{}\nonumber\\ & {\cal A}_2{\cal A}_1^* \langle 2 | \mathcal{O}_\Gamma | 1 \rangle e^{-aM_2 (t_f-t)} e^{-aM_1 (t-t_i)} + \ldots \,, \label{eq:3pt} \end{align} where the source point is at $t_i$, the operator is inserted at time $t$, and the nucleon state is annihilated at the sink time slice $t_f$. The source-sink separation is $\tau \equiv t_f-t_i$. The state $|0\rangle$ represents the ground state and $|n\rangle$, with $n > 0$, the higher states. The ${\cal A}_i$ are the amplitudes for the creation of state $i$ with zero momentum by the nucleon interpolating operator $\chi$. To extract the matrix elements, the amplitudes ${\cal A}_i$ and the masses $M_i$ are obtained from the 4-state fits to the two-point functions. 
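The two-state truncations of Eqs.~\eqref{eq:2pt} and~\eqref{eq:3pt} can be sketched as model functions; a $3^\ast$-type fit would add the $|2\rangle$ terms in the same fashion. All parameter values below are invented for illustration:

```python
import numpy as np

# Two-state truncations of the spectral decompositions of the 2- and
# 3-point functions (Eqs. eq:2pt and eq:3pt), with real O01 = O10.
def c2pt(t, A0, A1, M0, M1):
    return A0**2 * np.exp(-M0 * t) + A1**2 * np.exp(-M1 * t)

def c3pt(t, tau, A0, A1, M0, M1, O00, O11, O01):
    return (A0**2 * O00 * np.exp(-M0 * tau)
            + A1**2 * O11 * np.exp(-M1 * tau)
            + A0 * A1 * O01 * (np.exp(-M1 * (tau - t) - M0 * t)
                               + np.exp(-M0 * (tau - t) - M1 * t)))

# Illustrative (invented) parameters in lattice units.
A0, A1, M0, M1 = 1.0, 0.7, 0.45, 0.90
O00, O11, O01 = 1.27, 1.10, -0.15

# The ratio C3pt(t, tau)/C2pt(tau) approaches the ground-state matrix
# element O00 = <0|O|0> only as t and tau-t become large; at tau = 10 the
# midpoint value still sits below O00 (ESC from below).
tau = 10
t = np.arange(1, tau)
R = c3pt(t, tau, A0, A1, M0, M1, O00, O11, O01) / c2pt(tau, A0, A1, M0, M1)
```

In the actual analysis these model functions, with ${\cal A}_i$ and $M_i$ fixed from the two-point fits, are fit simultaneously to the data at all values of $t$ and $\tau$.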
Note that the insertion of the nucleon at the sink time slice $t_f$ and the insertion of the current at time $t$ are both at zero momentum. Thus, by momentum conservation, only the zero momentum projections of the states created at the source time slice contribute to the three-point function. We calculate the three-point correlation functions for a number of values of the source-sink separation $\tau$ that are listed in Table~\ref{tab:ens}. To extract the desired matrix element $\langle 0 | \mathcal{O}_\Gamma | 0 \rangle$, we fit the data at all $\tau$ and $t$ simultaneously using the ansatz given in Eq.~\eqref{eq:3pt}. In this work, we examine three kinds of fits: $2^\ast$-, 2- and $3^\ast$-state fits. The $2^\ast$-state fit corresponds to keeping terms of the type $\matrixe{0}{\mathcal{O}_\Gamma}{0}$ and $\matrixe{0}{\mathcal{O}_\Gamma}{1}$. The 2-state fits also include $\matrixe{1}{\mathcal{O}_\Gamma}{1}$, and the $3^\ast$-state fits further add the $\matrixe{0}{\mathcal{O}_\Gamma}{2}$ and $\matrixe{1}{\mathcal{O}_\Gamma}{2}$ type terms.\looseness-1 In the simultaneous fit to the data versus $t$ and multiple $\tau$ to obtain $\matrixe{0}{\mathcal{O}_\Gamma}{0}$, we skip $\mathop{t_{\rm skip}}\nolimits$ points adjacent to the source and the sink to remove points with the largest ESC. The same $\mathop{t_{\rm skip}}\nolimits$ is used for each $\tau$. The $\mathop{t_{\rm skip}}\nolimits$ selected is a compromise between wanting to include as many points as possible to extract the various terms given in Eq.~\eqref{eq:3pt} with confidence, and the errors in and stability of the full covariance matrix used in the fit. In particular, the choice of $\mathop{t_{\rm skip}}\nolimits$ on the $a=0.06$~fm ensembles is the smallest value for which the covariance matrix was invertible and reasonable. These values of $\mathop{t_{\rm skip}}\nolimits$, tuned for each ensemble, are given in Table~\ref{tab:results3bareu-d}.
To visualize the ESC, we plot the data for the following ratio of correlation functions \begin{equation} R_\Gamma(t,\tau) = \frac{C_{\Gamma}^{\text{3pt}}(t, \tau) }{C^{\text{2pt}}(\tau)} \to g_\Gamma \,, \label{eq:ratio} \end{equation} in Figs.~\ref{fig:gA2v3a12}--\ref{fig:gT2v3a06} and show the various fits corresponding to the results in Table~\ref{tab:results3bareu-d}. In the limit $t \to \infty$ and $\tau-t \to \infty$, this ratio converges to the charge $g_\Gamma $. At short times, the ESC is manifest in all cases. For sufficiently large $\tau$, the data should exhibit a flat region about $\tau/2$, and the value should become independent of $\tau$. The current data for $g_A$, $g_S$ and $g_T$, with $\tau$ up to about 1.4~fm, do not provide convincing evidence of this desired asymptotic behavior. To obtain $\matrixe{0}{\mathcal{O}_\Gamma}{0}$, we use the three-state ansatz given in Eq.~\eqref{eq:3pt}. On the three ensembles, $a09m130$, $a06m310$ and $a06m220$, we can compare the data with two different smearing sizes given in Table~\ref{tab:ens}. We find a significant reduction in the ESC in the axial and scalar charges on increasing the smearing size. Nevertheless, the 2- and $3^\ast$-state fits and the two calculations give consistent estimates for the ground state matrix elements. The agreement between these four estimates has increased our confidence in the control over ESC. The results for $g_S^{u-d}$, obtained using $2$-state fits, have larger uncertainty as discussed in Sec.~\ref{sec:poor}, but are again consistent except those from the $a06m220$ ensemble. This higher statistics study of the ESC confirms many features discussed in Ref.~\cite{Bhattacharya:2016zcn}: \begin{itemize} \item The ESC is large in both $g_A^{u-d}$ and $g_S^{u-d}$, and the convergence to the $\mathop{\tau \to \infty}\nolimits$ value is monotonic and from below. 
\item The ESC in $g_T^{u-d}$ is $\lesssim 10\%$ for $\tau > 1$~fm, and the convergence to the $\mathop{\tau \to \infty}\nolimits$ value is also monotonic but from above. \item The ESC in $g_A^{u-d}$ and $g_S^{u-d}$ is reduced on increasing the size of the smearing, but $g_T^{u-d}$ is fairly insensitive to the smearing size. \item For a given number of measurements at the same $\tau$ and $t$, the statistical precision of $g_T^{u-d}$ is slightly better than that of $g_A^{u-d}$. The data for $g_S^{u-d}$ is noisy, especially at the larger values of $\tau$. On many ensembles, it does not exhibit a monotonic increase with $\tau$. To get $g_S^{u-d}$ with the same precision as $g_A^{u-d}$ would currently require $\approx 5$ times the statistics. \item The data for each charge and for each source-sink separation $\tau$ becomes symmetric about $\tau/2$ with increasing statistical precision. This is consistent with the $\cosh \Delta M(t-\tau/2)$ behavior predicted by Eq.~\eqref{eq:3pt} for each transition matrix element. \item The variations in the results with the fit ranges selected for fits to the two-point functions and the number, $\mathop{t_{\rm skip}}\nolimits$, of points skipped in the fits to the three-point data decrease with the increased statistical precision. \item Estimates from the $2$- and the $3^\ast$-state fits overlap for all fourteen measurements of $g_A^{u-d}$ and $g_T^{u-d}$. \item The $3^\ast$-state fits for $g_S^{u-d}$ are not stable in all cases and many of the parameters are poorly determined. To extract our best estimates, we use 2-state fits. \item The largest excited-state contribution comes from the $\langle 0 | O_\Gamma | 1 \rangle$ transition matrix elements. We, therefore, discuss a poor person's recipe to get estimates based on the $2^\ast$ fits in Sec.~\ref{sec:poor} that are useful when data at only one value of $\tau$ are available.
\end{itemize} Our conclusion on ESC is that with $O(10^5)$ measurements, $3^\ast$ fits, the choice of smearing parameters used and the values of $\tau$ simulated, the excited-state contamination in $g_A^{u-d}$ and $g_T^{u-d}$ has been controlled to within a couple of percent, i.e., the size of the quoted errors. The errors in $g_S^{u-d}$ are at the 5\%--10\% level, and we take results from the 2-state fit as our best estimates. In general, for calculations by other groups when data with reasonable precision are available only at a single value of $\tau$, we show that the $2^\ast$ fit gives a much better estimate than the plateau value. \subsection{A poor person's recipe and $g_S^{u-d}$} \label{sec:poor} Our high statistics calculations allow us to develop the following poor person's recipe for estimating the ground state matrix element when data are available only at a single value of $\tau$. To illustrate this, we picked two values with $\tau \approx 1$~fm ($\tau =\{6,7\}, \{8,10\}, \{10,12\}, \{16,18,20\}$ in lattice units for the $a\approx 0.15, 0.12, 0.09, 0.06$ ensembles) for which we have reasonably precise data at all values of $t$ and for all three isovector charges. We then compared the estimates of the charges from the $2^\ast$ fit to data at these values of $\tau$ with our best estimate from the $3^\ast$ fit (2-state for $g_S^{u-d}$) to the data at multiple $\tau$ and $t$. Fits for all ensembles are shown in Figs.~\ref{fig:gA2v3a12}--\ref{fig:gT2v3a06} and the results collected in Table~\ref{tab:results3bareu-d}. In the case of $g_A^{u-d}$ and $g_T^{u-d}$ we get overlapping results converging to the $3^\ast$ value. This suggests that, within our statistical precision, all the excited-state terms that behave as $\cosh \Delta M(t-\tau/2)$ in the spectral decomposition are well-approximated by the single term proportional to $\langle 0|{ \cal{O}} | 1 \rangle$ in the $2^\ast$ fit. Isolating this ESC is, therefore, essential.
Also, the remainder, the sum of all the terms independent of $t$, is small. This explains why the values of the excited state matrix elements $\langle 1| {\cal{O} } | 1 \rangle$ and $\langle 0| {\cal{O} } | 2 \rangle$, given in Table~\ref{tab:bareEME}, are poorly determined. We further observe that in our implementation of the lattice calculations---HYP smoothing of the lattices plus the Gaussian smearing of the quark sources---the product $(M_1-M_0) \times \tau$ is $ \gtrsim 1$ for $\tau \approx 1$~fm, i.e., $(M_1-M_0) \gtrsim 200$~MeV. Since this condition holds for the physical nucleon spectrum, it is reasonable to expect that the charges extracted from a $2^\ast$ fit to data with $\tau \gtrsim 1$~fm are a good approximation to the $\mathop{\tau \to \infty}\nolimits$ value, whereas the value at the midpoint $t=\tau/2$ (called the plateau value) is not. This is supported by the data for $g_A^{u-d}$ and $g_T^{u-d}$ shown in Table~\ref{tab:results3bareu-d}; there is much better consistency between the $3^\ast$ results and $2^\ast$ fits to data with a single value of $\tau \gtrsim 1$~fm than with the plateau value. In this work, the reason for considering such a recipe is that estimates of $g_S^{u-d}$ have much larger statistical errors, because of which the data at the larger values of $\tau$ do not, in all cases, exhibit the expected monotonic convergence in $\tau$ and have large errors. As a result, increasing $n$ in an $n$-state fit to data with multiple values of $\tau$ does not always give a better or more converged value. We, therefore, argue that to obtain the best estimates of $g_S^{u-d}$ one can make judicious use of this recipe, i.e., use $2^\ast$ fits to the data with the largest value of $\tau$ that conforms with the expectation of monotonic convergence from below. In our case, based on such analyses, we conclude that the 2-state fits are more reliable than $3^\ast$ fits for $g_S^{u-d}$.
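The $2^\ast$ fit at a single $\tau$ that underlies this recipe can be sketched on synthetic data as follows. The masses are taken as fixed inputs from two-point fits, so the ESC enters only through the single $\cosh \Delta M(t-\tau/2)$ term; the values of $\tau$, $\Delta M$, the noise level, and the input charge below are illustrative assumptions, not numbers from our analysis.

```python
# Hedged sketch of a 2*-style fit at a single source-sink separation tau:
# the ratio data R(t) are modeled as the ground-state charge g0 plus a single
# <0|O|1> transition term with the cosh(DeltaM (t - tau/2)) t-dependence
# discussed in the text. DeltaM = M1 - M0 is fixed from the two-point fit.
import numpy as np
from scipy.optimize import curve_fit

tau = 12          # source-sink separation in lattice units (illustrative)
dM = 0.35         # M1 - M0 in lattice units, fixed input (illustrative)

def ratio_2star(t, g0, r01):
    # g0  ~ <0|O|0>, the ground-state charge
    # r01 ~ amplitude of the <0|O|1> transition term
    return g0 + r01 * np.cosh(dM * (t - tau / 2.0))

# synthetic "data": g0 = 1.25 with a negative transition term, plus noise
rng = np.random.default_rng(0)
t = np.arange(2, tau - 1)            # skip points near source and sink (t_skip)
truth = ratio_2star(t, 1.25, -0.04)
err = np.full_like(truth, 0.005)
data = truth + rng.normal(0.0, err)

popt, pcov = curve_fit(ratio_2star, t, data, sigma=err,
                       p0=[1.2, -0.01], absolute_sigma=True)
g0_fit, r01_fit = popt
print(f"g0 = {g0_fit:.3f} +/- {pcov[0, 0] ** 0.5:.3f}")
```

The fitted $g_0$ plays the role of the $\tau \to \infty$ estimate; the plateau value at the midpoint $t=\tau/2$, where the cosh equals one, is instead biased by the full transition term.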
These fourteen values of $g_S^{u-d}$ used in the final analysis are marked with the superscript ${}^\dag$ in Table~\ref{tab:results3bareu-d}. The same strategy is followed for obtaining the connected contribution to the isoscalar charges, $g_{S}^{u+d}$, that are given in Table~\ref{tab:results3bareu+d}. \begin{table*} \centering \begin{ruledtabular} \begin{tabular}{c|ccc|ccc|ccc} ID & $g_A^{u}$ & $g_A^{d}$ & $g_A^{u-d}$ & $g_S^{u}$ & $g_S^{d}$ & $g_S^{u-d}$ & $g_T^{u}$ & $g_T^{d}$ & $g_T^{u-d}$ \\ \hline $a15m310 $ & 0.937(06) & $-$0.313(04) & 1.250(07) & 3.10(08) & 2.23(06) & 0.87(03) & 0.901(06) & $-$0.219(04) & 1.121(06) \\ \hline $a12m310 $ & 0.946(15) & $-$0.328(09) & 1.274(15) & 3.65(13) & 2.69(09) & 0.96(05) & 0.859(12) & $-$0.206(07) & 1.065(13) \\ $a12m220S$ & 0.934(43) & $-$0.332(27) & 1.266(44) & 5.23(49) & 4.23(40) & 1.00(26) & 0.816(44) & $-$0.249(33) & 1.065(39) \\ $a12m220 $ & 0.947(22) & $-$0.318(13) & 1.265(21) & 4.83(35) & 3.72(29) & 1.11( 9) & 0.847(17) & $-$0.201(11) & 1.048(18) \\ $a12m220L$ & 0.942(09) & $-$0.347(08) & 1.289(13) & 4.21(29) & 3.34(26) & 0.87(04) & 0.846(11) & $-$0.203(05) & 1.069(11) \\ \hline $a09m310 $ & 0.930(07) & $-$0.308(04) & 1.238(08) & 3.60(12) & 2.58(10) & 1.02(03) & 0.824(07) & $-$0.203(03) & 1.027(07) \\ $a09m220 $ & 0.945(12) & $-$0.334(06) & 1.279(13) & 4.46(19) & 3.41(16) & 1.05(04) & 0.799(10) & $-$0.203(05) & 1.002(10) \\ $a09m130 $ & 0.919(20) & $-$0.350(16) & 1.269(28) & 5.87(49) & 4.71(41) & 1.16(13) & 0.765(20) & $-$0.196(10) & 0.961(22) \\ $a09m130W$ & 0.935(14) & $-$0.336(08) & 1.271(15) & 5.28(17) & 4.23(14) & 1.05(06) & 0.797(12) & $-$0.203(06) & 1.000(12) \\ \hline $a06m310 $ & 0.923(25) & $-$0.320(15) & 1.243(27) & 4.48(33) & 3.24(24) & 1.24(11) & 0.785(20) & $-$0.197(11) & 0.982(20) \\ $a06m310W$ & 0.906(22) & $-$0.310(16) & 1.216(21) & 4.06(16) & 2.94(11) & 1.12(07) & 0.784(15) & $-$0.192(08) & 0.975(16) \\ $a06m220 $ & 0.912(13) & $-$0.323(13) & 1.235(18) & 4.40(13) & 3.29(09) & 1.11(07) & 
0.779(10) & $-$0.197(10) & 0.975(12) \\ $a06m220W$ & 0.917(24) & $-$0.341(15) & 1.257(24) & 4.32(21) & 3.55(18) & 0.77(09) & 0.764(21) & $-$0.198(11) & 0.962(22) \\ $a06m135 $ & 0.917(22) & $-$0.323(13) & 1.240(26) & 5.26(22) & 4.26(15) & 1.00(13) & 0.768(17) & $-$0.183(10) & 0.952(19) \\ \end{tabular} \end{ruledtabular} \caption{Results for the bare connected contributions to the various charges.} \label{tab:resultsbare} \end{table*} \subsection{Transition and excited state matrix elements} \label{sec:excitedME} The only transition matrix element that has been estimated with some degree of confidence is $\langle 0 | \mathcal{O}_\Gamma | 1 \rangle$ as can be inferred from the results given in Table~\ref{tab:bareEME}. Also including information from Figs.~\ref{fig:gA2v3a12}--\ref{fig:gT2v3a06}, our qualitative conclusions on it are as follows: \begin{itemize} \item Estimates of $\langle 0 | \mathcal{O}_A | 1 \rangle$ vary between $-0.1$ and $-0.3$ and account for the negative curvature evident in the figures. All ground-state estimates of $g_A^{u-d}$ converge from below. \item Estimates of $\langle 0 | \mathcal{O}_S | 1 \rangle$ vary between $-0.2$ and $-0.5$ and account for the larger negative curvature observed in the figures. All ground-state estimates of $g_S^{u-d}$ also converge from below. \item Estimates of $\langle 0 | \mathcal{O}_T | 1 \rangle$ vary between 0.1 and 0.3 and account for the positive curvature evident in the figures. The ground-state estimates of $g_T^{u-d}$ converge from above in all cases. \end{itemize} Our long term goal is to improve the precision of these calculations to understand and extract an infinite volume continuum limit value for the transition matrix elements. \subsection{A caveat in the analysis of the isoscalar charges $g_{A,S,T}^{u+d}$ keeping only the connected contribution} \label{sec:PQ} In this paper, we have analyzed only the connected contributions to the isoscalar charges $g_{A,S,T}^{u+d}$. 
The disconnected contributions are not included as they are not available for all the ensembles, and are analyzed for different, typically smaller, values of the source-sink separation $\tau$ because of the lower quality of the statistical signal. Since the proper way to extract the isoscalar charges is to first add the connected and disconnected contributions and then perform the fits using the lattice QCD spectral decomposition to remove excited state contamination, analyzing only the connected contribution introduces an approximation. Isoscalar charges without a disconnected contribution can be defined in a partially quenched theory with an additional quark with flavor $u^\prime$. In this theory, however, the Pauli exclusion principle does not apply between the $u$ and $u^\prime$ quarks. The upshot is that the spectrum of states in the partially quenched theory is larger; for example, an intermediate $u^\prime u d$ state would be the analogue of a $\Lambda$ baryon\footnote{We thank Stephen Sharpe for providing a diagrammatic illustration of such additional states.}. Thus, the spectral decompositions of this partially quenched theory and of QCD are different. The problem arises because our $n$-state fits assume the QCD spectrum: we take the amplitudes and masses of states from the QCD 2-point function when fitting the 3-point function using Eq.~\eqref{eq:3pt}. One could fit the 3-point functions leaving all the parameters in Eq.~\eqref{eq:3pt} free, but then even 2-state fits become poorly constrained with current data. We assume that, in practice, the effect of using the QCD rather than the partially quenched QCD spectra to fit the connected contribution versus $t$ and $\tau$ to remove ESC is smaller than the quoted errors. First, the difference between the plateau value in our largest $\tau$ data and the $\tau \to \infty$ value is a few percent effect, so any additional systematic is well within the quoted uncertainty.
Furthermore, for the tensor charges the disconnected contribution is tiny and consistent with zero, so this caveat can be ignored for them. For the axial and scalar charges, the disconnected contribution is between 10\% and 20\% of the connected one, so we are neglecting possible systematic effects due to extrapolating the connected and disconnected contributions separately. \begin{table*} \centering \begin{ruledtabular} \begin{tabular}{c|ccc|cc|ccc} & \multicolumn{3}{c|} {Axial} & \multicolumn{2}{c|} {Scalar} & \multicolumn{3}{c} {Tensor} \\ ID & $\langle 0 | \mathcal{O}_A | 1 \rangle$ & $\langle 1 | \mathcal{O}_A | 1 \rangle$ & $\langle 0 | \mathcal{O}_A | 2 \rangle$ & $\langle 0 | \mathcal{O}_S | 1 \rangle$ & $\langle 1 | \mathcal{O}_S | 1 \rangle$ & $\langle 0 | \mathcal{O}_T | 1 \rangle$ & $\langle 1 | \mathcal{O}_T | 1 \rangle$ & $\langle 0 | \mathcal{O}_T | 2 \rangle$ \\ \hline $a15m310 $ & $-$0.044( 37) & $-$2.06(1.3) & $-$0.08( 5) & $-$0.37( 3) & $ $ 3.6(4.6) & 0.31( 4) & $-$2.72(1.2) & $-$0.18( 7) \\ \hline $a12m310 $ & $-$0.208( 94) & $ $1.40(2.4) & $ $0.07( 4) & $-$0.72( 9) & $ $ 8.5(10.)
& 0.32( 8) & $-$0.82(2.2) & $ $0.08( 4) \\ $a12m220S$ & $-$0.119( 77) & $ $1.46(60) & $ $0.03(10) & $-$0.42(13) & $ $ 3.8(5.7) & 0.19( 8) & $ $0.13(62) & $ $0.10(11) \\ $a12m220 $ & $-$0.047( 52) & $ $0.33(76) & $-$0.08( 5) & $-$0.38(11) & $-$ 2.8(3.6) & 0.21( 5) & $ $0.07(59) & $ $0.12( 4) \\ $a12m220L$ & $-$0.084( 25) & $-$0.21(73) & $-$0.05( 3) & $-$0.38(12) & $ $ 4.6(2.7) & 0.19( 2) & $-$0.04(43) & $ $0.09( 4) \\ \hline $a09m310 $ & $-$0.095( 20) & $-$1.45(1.9) & $ $0.11( 6) & $-$0.39( 4) & $ $ 0.7(1.5) & 0.20( 2) & $ $0.17(1.1) & $ $0.04( 6) \\ $a09m220 $ & $-$0.153( 34) & $-$0.44(98) & $ $0.07( 4) & $-$0.47( 5) & $ $ 1.4(1.0) & 0.16( 3) & $ $0.44(60) & $ $0.13( 3) \\ $a09m130 $ & $-$0.092( 26) & $ $0.65(19) & $ $0.03( 4) & $-$0.42( 7) & $ $ 2.0(1.2) & 0.17( 3) & $ $0.78(14) & $ $0.08( 4) \\ $a09m130W$ & $-$0.098( 26) & $-$0.46(94) & $ $0.06( 6) & $-$0.28( 4) & $ $ 2.2(2.2) & 0.18( 3) & $ $0.37(71) & $ $0.11( 6) \\ \hline $a06m310 $ & $-$0.075( 41) & $ $0.18(51) & $-$0.00( 1) & $-$0.41( 6) & $ $ 1.2(1.4) & 0.14( 5) & $-$0.20(60) & $-$0.08( 9) \\ $a06m310W$ & $-$0.093(124) & $-$0.56(4.5) & $-$0.02(35) & $-$0.44( 9) & $ $10.6(15.) & 0.22(12) & $ $0.41(3.9) & $ $0.04(36) \\ $a06m220 $ & $-$0.184( 40) & $ $0.43(38) & $ $0.28(13) & $-$0.32( 4) & $-$ 0.3(1.1) & 0.09( 4) & $ $0.33(32) & $ $0.05(12) \\ $a06m220W$ & $-$0.249(127) & $ $1.2(2.2) & $ $0.32(25) & $-$0.33(14) & $ $23.4(20.) 
& 0.29(13) & $-$1.86(3.0) & $-$0.17(25) \\ $a06m135 $ & $-$0.137( 47) & $ $0.81(41) & $ $0.20(13) & $-$0.32( 6) & $ $ 2.4(3.1) & 0.12( 5) & $ $0.82(39) & $ $0.07(12) \\ \end{tabular} \end{ruledtabular} \caption{Estimates of the leading ratios $\langle 0 | \mathcal{O}_\Gamma | 1 \rangle /\langle 0 | \mathcal{O}_\Gamma | 0 \rangle$, $\langle 1 | \mathcal{O}_\Gamma | 1 \rangle /\langle 0 | \mathcal{O}_\Gamma | 0 \rangle$, and $\langle 0 | \mathcal{O}_\Gamma | 2 \rangle /\langle 0 | \mathcal{O}_\Gamma | 0 \rangle$ for the transition and excited state matrix elements in the case of the isovector charges. For the scalar charge, $\langle 0 | \mathcal{O}_\Gamma | 2 \rangle /\langle 0 | \mathcal{O}_\Gamma | 0 \rangle$ is not given since our final results are from the $2$-state fit that are marked with ${}^\dag$ in Table~\protect\ref{tab:results3bareu-d}. } \label{tab:bareEME} \end{table*} \section{Renormalization of Operators} \label{sec:renorm} The renormalization constants $Z_A$, $Z_V$, $Z_S$ and $Z_T$ of the isovector quark bilinear operators are calculated in the regularization-independent symmetric momentum-subtraction (RI-sMOM) scheme~\cite{Martinelli:1994ty,Sturm:2009kb}. We followed the methodology given in Refs.~\cite{Bhattacharya:2015wna,Bhattacharya:2016zcn} and refer the reader to it for details. Results based on the six ensembles, {\it a12m310, a12m220, a09m310, a09m220, a06m310} and {\it a06m220}, obtained in Refs.~\cite{Bhattacharya:2015wna,Bhattacharya:2016zcn} are summarized in Table~\ref{tab:Zfinal} along with the new results on the $a15m310$ ensemble. 
We briefly summarize the method below for completeness.\looseness-1 The calculation was done as follows: starting with the lattice results obtained in the RI-sMOM scheme at a given Euclidean four-momentum squared $Q^2$, we first convert them to the $\overline{\text{MS}}$ scheme at the same scale (horizontal matching) using two-loop perturbative relations expressed in terms of the coupling constant $\alpha_{\overline{\text{MS}}}(Q^2)$~\cite{Gracey:2011fb}. This estimate, at $\mu^2=Q^2$, is then run in the continuum in the $\overline{\text{MS}}$ scheme to $2\mathop{\rm GeV}\nolimits$ using the 3-loop anomalous dimension relations for the scalar and tensor bilinears~\cite{Gracey:2000am,Agashe:2014kda}. These data are labeled by the $Q^2$ in the original RI-sMOM scheme and suffer from artifacts due to nonperturbative effects and the breaking of the Euclidean $O(4)$ rotational symmetry down to the hypercubic group. To get the final estimate, we fit these data versus $Q^2$ using an ansatz motivated by the form of possible artifacts, as discussed in Refs.~\cite{Bhattacharya:2015wna,Bhattacharya:2016zcn}. We find that the final renormalization factors on ensembles at fixed $a$ show no significant dependence on $M_\pi$. We, therefore, average the results at different $M_\pi$ to get the mass-independent values at each $a$. In Table~\ref{tab:Zfinal}, we also give the results for the ratios $Z_A/Z_V$, $Z_S/Z_V$, and $Z_T/Z_V$ that show much smaller $O(4)$ breaking, presumably because some of the systematics cancel. From the individual data and the two ratios, $Z_\Gamma /Z_V$ and $g_\Gamma/g_V^{u-d}$, we calculate the renormalized charges in two ways: $Z_\Gamma \times g_\Gamma$ and $(Z_\Gamma /Z_V) \times (g_\Gamma/g_V^{u-d})$, with $Z_V g_V^{u-d} = 1$ following from the conservation of the vector current. These two sets of renormalized charges are given in Table~\ref{tab:resultsrenormIV}.
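As a minimal numerical sketch of this last step, the snippet below forms the renormalized charge both ways, $Z_\Gamma \times g_\Gamma$ and $(Z_\Gamma/Z_V)\times(g_\Gamma/g_V^{u-d})$, combining relative errors in quadrature as is done for Table~\ref{tab:resultsrenormIV}. The input numbers are illustrative placeholders, and the error propagation ignores correlations.

```python
# Hedged sketch: renormalized charge computed two ways, with relative errors
# combined in quadrature (correlations between the inputs are ignored here).
import math

def mul(a, b):
    """Product of two (value, error) pairs, errors combined in quadrature."""
    v = a[0] * b[0]
    return (v, abs(v) * math.hypot(a[1] / a[0], b[1] / b[0]))

def div(a, b):
    """Quotient of two (value, error) pairs, errors combined in quadrature."""
    v = a[0] / b[0]
    return (v, abs(v) * math.hypot(a[1] / a[0], b[1] / b[0]))

# illustrative bare charges and renormalization factors (placeholders)
gA = (1.250, 0.007)          # bare g_A
gV = (1.069, 0.004)          # bare g_V
ZA = (0.96, 0.02)
ZA_over_ZV = (1.05, 0.02)    # fitted ratio; smaller O(4) breaking

direct = mul(ZA, gA)                       # Z_A * g_A
ratio = mul(ZA_over_ZV, div(gA, gV))       # (Z_A/Z_V) * (g_A/g_V)
print(f"Z_A*g_A             = {direct[0]:.3f}({direct[1]:.3f})")
print(f"(Z_A/Z_V)*(g_A/g_V) = {ratio[0]:.3f}({ratio[1]:.3f})")
```

The second construction uses $Z_V g_V^{u-d} = 1$, so some systematics cancel in the double ratio, which is why it is used for the final results below.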
\begin{table*} \centering \begin{ruledtabular} \begin{tabular}{c|cccc|ccc} ID & $Z_A^{u-d}$& $Z_S^{u-d}$& $Z_T^{u-d}$& $Z_V^{u-d}$& $Z_A^{u-d}/Z_V^{u-d}$ & $Z_S^{u-d}/Z_V^{u-d}$ & $Z_T^{u-d}/Z_V^{u-d}$ \\ \hline $a=0.15$~fm & $0.96(2)$ & $0.94(4)$ & $0.95(3)$ & $0.92(2)$ & $1.05(2)$ & $1.02(5)$ & $1.02(3)$ \\ $a=0.12$~fm & $0.95(3)$ & $0.90(4)$ & $0.94(4)$ & $0.91(2)$ & $1.045(09)$ & $0.986(09)$ & $1.034(34)$ \\ $a=0.09$~fm & $0.95(4)$ & $0.88(2)$ & $0.98(4)$ & $0.92(2)$ & $1.034(11)$ & $0.955(49)$ & $1.063(29)$ \\ $a=0.06$~fm & $0.97(3)$ & $0.86(3)$ & $1.04(3)$ & $0.95(1)$ & $1.025(09)$ & $0.908(40)$ & $1.100(25)$ \\ \end{tabular} \end{ruledtabular} \caption{The final mass-independent isovector renormalization constants $Z_A^{u-d}$, $Z_S^{u-d}$, $Z_T^{u-d}$, $Z_V^{u-d}$ and the ratios $Z_A^{u-d}/Z_V^{u-d}$, $Z_S^{u-d}/Z_V^{u-d}$ and $Z_T^{u-d}/Z_V^{u-d}$ in the $\overline{\text{MS}}$ scheme at 2~GeV at the four values of the lattice spacing used in our analysis. Results for the $a=0.12$, $a=0.09$ and $a=0.06$~fm ensembles are reproduced from Ref.~\cite{Bhattacharya:2016zcn}.} \label{tab:Zfinal} \end{table*} \begin{table*} \centering \begin{ruledtabular} \begin{tabular}{c|ccc|ccc|cc} & \multicolumn{3}{c|} {$g_\Gamma^{u-d,{\rm bare}}/g_V^{u-d,{\rm bare}}\times Z_\Gamma^{u-d}/Z_V^{u-d}$} & \multicolumn{3}{c|} {$g_\Gamma^{u-d,{\rm bare}} \times Z_\Gamma^{u-d}$} & \multicolumn{2}{c} {} \\ ID & $g_A^{u-d}$ & $g_S^{u-d}$ & $g_T^{u-d}$ & $g_A^{u-d}$ & $g_S^{u-d}$ & $g_T^{u-d}$ & $g_V^{u-d,{\rm bare}}$ & $Z_V g_V^{u-d,{\rm bare}}$ \\ \hline $a15m310 $ & 1.228(25) & 0.828(049) & 1.069(32) & 1.200(26) & 0.816(044) & 1.065(34) & 1.069(04) & 0.983(22) \\ \hline $a12m310 $ & 1.251(19) & 0.891(045) & 1.035(37) & 1.210(41) & 0.865(058) & 1.001(44) & 1.064(05) & 0.968(22) \\ $a12m220S$ & 1.224(44) & 0.916(233) & 1.019(53) & 1.203(56) & 0.903(237) & 1.001(56) & 1.081(18) & 0.983(27) \\ $a12m220 $ & 1.234(25) & 1.024(086) & 1.011(38) & 1.202(43) & 1.001(096) & 0.985(45) & 1.071(09) 
& 0.975(23) \\ $a12m220L$ & 1.262(17) & 0.807(039) & 1.035(36) & 1.225(41) & 0.786(052) & 1.005(44) & 1.067(04) & 0.971(21) \\ \hline $a09m310 $ & 1.235(15) & 0.936(054) & 1.054(30) & 1.176(50) & 0.893(031) & 1.007(42) & 1.045(03) & 0.962(20) \\ $a09m220 $ & 1.260(19) & 0.958(063) & 1.015(30) & 1.215(53) & 0.926(044) & 0.982(41) & 1.053(03) & 0.969(21) \\ $a09m130 $ & 1.245(32) & 1.050(128) & 0.969(35) & 1.206(57) & 1.019(116) & 0.942(44) & 1.052(08) & 0.969(22) \\ $a09m130W$ & 1.249(21) & 0.952(074) & 1.011(30) & 1.207(53) & 0.923(058) & 0.980(44) & 1.052(06) & 0.968(22) \\ \hline $a06m310 $ & 1.233(30) & 1.090(104) & 1.046(33) & 1.205(46) & 1.065(100) & 1.021(36) & 1.043(06) & 0.991(12) \\ $a06m310W$ & 1.205(24) & 0.984(074) & 1.037(30) & 1.180(42) & 0.964(071) & 1.014(34) & 1.035(11) & 0.983(15) \\ $a06m220 $ & 1.206(21) & 0.959(071) & 1.022(27) & 1.198(41) & 0.953(066) & 1.014(32) & 1.050(07) & 0.997(12) \\ $a06m220W$ & 1.241(26) & 0.672(082) & 1.018(34) & 1.220(45) & 0.661(080) & 1.000(37) & 1.039(09) & 0.987(13) \\ $a06m135 $ & 1.220(27) & 0.876(120) & 1.005(30) & 1.203(45) & 0.864(118) & 0.990(35) & 1.042(10) & 0.990(14) \\ \hline 11-point fit & 1.218(25) & 1.022(80) & 0.989(32) & 1.197(42) & 1.010(74) & 0.966(37) & & \\ $\chi^2/$d.o.f. & 0.21 & 1.43 & 0.10 & 0.05 & 1.12 & 0.20 & & \\ 10-point fit & 1.215(31) & 0.914(108) & 1.000(41) & 1.200(56) & 0.933(108) & 0.994(48) & & \\ $\chi^2/$d.o.f. & 0.24 & 1.30 & 0.09 & 0.06 & 1.15 & 0.09 & & \\ $10^\ast$-point fit & 1.218(25) & 1.021(80) & 0.989(32) & 1.197(43) & 1.009(74) & 0.966(37) & & \\ $\chi^2/$d.o.f. & 0.23 & 1.67 & 0.11 & 0.06 & 1.31 & 0.17 & & \\ $8$-point fit & 1.245(42) & 1.214(130) & 0.977(67) & 1.172(94) & 1.123(105) & 0.899(86) & & \\ $\chi^2/$d.o.f. 
& 0.20 & 1.14 & 0.13 & 0.06 & 0.87 & 0.13 & & \\ \end{tabular} \end{ruledtabular} \caption{Results for the renormalized isovector charges calculated in two ways, $g_\Gamma^{u-d,{\rm bare}}/g_V^{u-d,{\rm bare}} \times Z_\Gamma^{u-d}/Z_V^{u-d}$ and $g_\Gamma^{u-d,{\rm bare}} \times Z_\Gamma^{u-d}$. The errors are obtained by adding in quadrature the errors in the bare matrix elements and in the renormalization constants given in Table~\protect\ref{tab:Zfinal}. The unrenormalized charges are given in Table~\protect\ref{tab:resultsbare}. In the last two columns, we also give the results for the bare, $g_V^{u-d,{\rm bare}}$, and the renormalized, $Z_V g_V^{u-d,{\rm bare}}$, vector charge. The latter should be unity since the vector current is conserved. The deviations are found to be up to 4\%. Results of the four CCFV fits (11-point, 10-point, $10^\ast$-point, and the $8$-point defined in the text) are given in the bottom eight rows. } \label{tab:resultsrenormIV} \end{table*} We are also interested in extracting flavor diagonal charges, which can be written in terms of isovector ($u-d$) and isoscalar ($u+d$) combinations. These combinations renormalize with the corresponding isovector, $Z^{\rm isovector}$, and isoscalar, $Z^{\rm isoscalar}$, factors that are, in general, different~\cite{Bhattacharya:2005rb}.\footnote{In general, one considers the singlet and non-singlet combinations in an $N_f$-flavor theory. In this paper, we are only analyzing the insertions on $u$ and $d$ quarks that are taken to be degenerate, so it is convenient to use the 2-flavor labels, isoscalar ($u+d$) and isovector ($u-d$).} Only the isovector renormalization constants are given in Table~\ref{tab:Zfinal}. In perturbation theory, the difference between $Z^{\rm isovector}$ and $Z^{\rm isoscalar}$ appears at two loops, and is therefore expected to be small.
Explicit calculations in Refs.~\cite{Alexandrou:2017qyt,Alexandrou:2017oeh,Green:2017keo} show that $Z^{\rm isosinglet} \approx Z^{\rm isovector}$ for the axial and tensor charges. Since the two agree to within a percent, we will assume $Z_{A,T}^{\rm isoscalar} = Z_{A,T}^{\rm isovector}$ in this work, and renormalize both isovector ($u-d$) and isoscalar ($u+d$) combinations of charges using $ Z^{\rm isovector}$. In the case of the tensor charges, this approximation is even less significant since the contribution of the disconnected diagrams to the charges is consistent with zero within errors~\cite{Bhattacharya:2015wna}. In the case of the scalar charge, the difference between $Z^{\rm isosinglet}$ and $Z^{\rm isovector}$ can be large due to the explicit breaking of the chiral symmetry in the Wilson-clover action which induces mixing between flavors. This has not been fully analyzed for our clover-on-HISQ formulation, so only the bare results for $g_S^{u-d}$ and $g_S^{u+d}$, and the renormalized results for $g_S^{u-d}$ are presented in this work. 
\begin{table*} \centering \begin{ruledtabular} \begin{tabular}{c|cc|ccc} ID & $g_A^{u}$ & $g_A^{d}$ & $g_T^{u}$ & $g_T^{d}$ & $g_T^{u+d}$ \\ \hline $a15m310 $ & 0.920(19) & $-$0.307(07) & 0.860(26) & $-$0.209(07) & 0.649(21) \\ \hline $a12m310 $ & 0.929(17) & $-$0.322(09) & 0.835(30) & $-$0.200(10) & 0.635(26) \\ $a12m220S$ & 0.904(42) & $-$0.321(27) & 0.781(51) & $-$0.238(33) & 0.543(68) \\ $a12m220 $ & 0.924(24) & $-$0.311(14) & 0.818(32) & $-$0.194(12) & 0.624(30) \\ $a12m220L$ & 0.922(12) & $-$0.340(09) & 0.819(29) & $-$0.216(08) & 0.600(26) \\ \hline $a09m310 $ & 0.928(12) & $-$0.308(05) & 0.845(24) & $-$0.208(07) & 0.637(19) \\ $a09m220 $ & 0.931(15) & $-$0.329(08) & 0.810(24) & $-$0.205(08) & 0.604(20) \\ $a09m130 $ & 0.901(23) & $-$0.344(17) & 0.772(29) & $-$0.198(12) & 0.574(28) \\ $a09m130W$ & 0.919(17) & $-$0.330(09) & 0.806(25) & $-$0.205(09) & 0.601(23) \\ \hline $a06m310 $ & 0.916(27) & $-$0.317(16) & 0.836(29) & $-$0.210(13) & 0.626(31) \\ $a06m310W$ & 0.897(24) & $-$0.307(17) & 0.833(26) & $-$0.204(10) & 0.629(25) \\ $a06m220 $ & 0.890(16) & $-$0.316(13) & 0.816(22) & $-$0.206(11) & 0.609(21) \\ $a06m220W$ & 0.905(25) & $-$0.336(16) & 0.809(30) & $-$0.209(12) & 0.600(30) \\ $a06m135 $ & 0.902(23) & $-$0.318(13) & 0.811(26) & $-$0.193(11) & 0.618(26) \\ \hline 11-point fit & 0.895(21) & $-$0.320(12) & 0.790(27) & $-$0.198(10) & 0.590(25) \\ $\chi^2/$d.o.f. & 0.29 & 0.52 & 0.20 & 0.67 & 0.38 \\ 10-point fit & 0.890(27) & $-$0.324(17) & 0.810(36) & $-$0.201(16) & 0.608(37) \\ $\chi^2/$d.o.f. & 0.33 & 0.59 & 0.12 & 0.77 & 0.37 \\ $10^\ast$-point fit & 0.895(21) & $-$0.319(12) & 0.790(27) & $-$0.197(10) & 0.592(25) \\ $\chi^2/$d.o.f. & 0.34 & 0.57 & 0.09 & 0.57 & 0.16 \\ \end{tabular} \end{ruledtabular} \caption{Results for the renormalized connected part of the flavor diagonal charges, $g_\Gamma^{\rm bare}/g_V^{{u-d},{\rm bare}} \times Z_\Gamma^{u-d}/Z_V^{u-d}$. 
The final errors are obtained by adding in quadrature the errors in estimates of the ratios $g_\Gamma^{\rm bare}/g_V^{{u-d},{\rm bare}}$ to the errors in the ratios of the renormalization constants, $Z_\Gamma^{u-d}/Z_V^{u-d}$, given in Table~\protect\ref{tab:Zfinal}. Results for $g_T^{u+d}$ are presented assuming that the disconnected contributions, shown to be tiny in Ref.~\protect\cite{Bhattacharya:2015wna}, can be neglected. Results of three CCFV fits (the 11-point, the 10-point, and the $10^\ast$-point defined in the text) are given in the bottom six rows. } \label{tab:resultsrenormFD} \end{table*} \section{Continuum, chiral and finite volume fit for the charges $g_A$, $g_S$, $g_T$} \label{sec:results} To obtain estimates of the renormalized charges given in Tables~\ref{tab:resultsrenormIV} and~\ref{tab:resultsrenormFD} in the continuum limit ($a\rightarrow 0$), at the physical pion mass ($M_{\pi^0} = 135$~MeV), and in the infinite volume limit ($L \rightarrow \infty$), we need an appropriate physics-motivated fit ansatz. To parametrize the dependence on $M_\pi$ and the finite volume parameter $M_\pi L$, we resort to results from finite volume chiral perturbation theory ($\chi$PT)~\cite{Bernard:1992qa,Bernard:1995dp,Bernard:2006gx,Bernard:2006te,Khan:2006de,Colangelo:2010ba,deVries:2010ah}. For the lattice discretization effects, the corrections start with the term linear in $a$ since neither the action nor the operators in our clover-on-HISQ formalism are fully $O(a)$ improved. Keeping just the leading correction term in each variable, plus possibly the chiral logarithm term discussed below, our approach is to make a simultaneous fit in the three variables to the data from the eleven ensembles. We call these the CCFV fits.
For the isovector charges and the flavor diagonal axial and tensor charges, the ansatz is \begin{align} g_{A,S,T}^{u-d} (a,M_\pi,L) &= c_1 + c_2a + c_3 M_\pi^2 + c_3^{\rm log} M_\pi^2 \ln \left(\frac{M_\pi}{M_\rho}\right)^2 \nonumber \\ &+ c_4 M_\pi^2 \frac{e^{-M_\pi L}}{X(M_\pi L)} \,, \label{eq:extrapgAST} \end{align} where $M_\rho$ in the chiral logarithm is the renormalization scale. The coefficients $c_3^{\rm log}$ are known in $\chi$PT, and with lattice QCD data at multiple values of $M_\pi$ and at fixed $a$ and $M_\pi L$ one can compare them against the values obtained from the fits. As shown in Fig.~\ref{fig:conUmD-extrap11}, the $M_\pi$ dependence of all three isovector charges is mild and adequately fit by the lowest order term. Since the $c_3^{\rm log}$ predicted by $\chi$PT are large, including this term requires also including still higher order terms in $M_\pi$ to fit the mild dependence. In our case, with data at just three values of $M_\pi$ and the observed mild dependence between 320 and 135~MeV, including more than one free parameter is not justified by the Akaike Information Criterion (AIC), which requires the reduction of $\chi^2$ by two units for each extra parameter. In short, we cannot test the predictions of $\chi$PT. For example, in a fit including the chiral log term and a $M_\pi^3$ term, the two additional terms essentially cancel each other over the range of the data, i.e., between 320--135~MeV. If the large $\chi$PT value for the coefficient $c_3^{\rm log}$ of the chiral log is used as an input, then the fit pushes the coefficient of the $M_\pi^3$ term to also be large to keep the net variation within the interval of the data small. Furthermore, as can be seen from Table~\ref{tab:chiralfit}, even the coefficients of the leading order terms are poorly determined for all three charges. This is because the variations between points and the number of points are both small.
For these reasons, including the chiral logarithm term in the analysis of the current data does not add predictive capability, nor does it provide a credible estimate of the uncertainty due to the fit ansatz, nor does it test the $\chi$PT value of the coefficient $c_3^{\rm log}$. Consequently, the purpose of our chiral fit reduces to getting the value at $M_\pi=135$~MeV. We emphasize that this is obtained reliably with just the leading chiral correction since the fits are anchored by the data from the two physical pion mass ensembles. The finite-volume correction, in general, consists of a number of terms, each with different powers of $M_\pi L$ in the denominator and depending on several low-energy constants (LECs)~\cite{Khan:2006de}. We have symbolically represented these powers of $M_\pi L$ by $X(M_\pi L)$. Since the variation of this factor is small compared to that of the exponential over the range of $M_\pi L$ investigated, we set $X(M_\pi L) = {\rm constant}$ and retain only the appropriate overall factor $M_\pi^2 e^{-M_\pi L}$, common to all the terms in the finite-volume expansion, in our fit ansatz. The {\it a posteriori} justification for this simplification is that no significant finite volume dependence is observed in the data, as shown in Fig.~\ref{fig:conUmD-extrap11}. We have carried out four fits with different selections of the fourteen data points and for the two constructions of the renormalized charges. Starting with the fourteen calculations, we first construct a weighted average of the pairs of points from the three ensembles $a09m130$, $a06m310$ and $a06m220$. For the errors, we adopt the Schmelling procedure~\cite{Schmelling:1994pz}, assuming maximum correlation between the two values from each ensemble. This gives us eleven data points to fit. \begin{itemize} \item The fit with all the data points is called the 11-point fit. This is used to obtain the final results. \item Remove the coarsest $a15m310$ ensemble point from the analysis. This is called the 10-point fit.
\item Remove the $a12m220S$ point as it has the largest errors and the smallest volume. This is called the $10^\ast$-point fit. \item To compare results for $g_A^{u-d}$ with those from the CalLat collaboration~\cite{Chang:2018uxx} (see Sec.~\ref{sec:comparison}), we perform an $8$-point fit that neglects the data from the three $a\approx 0.06$~fm ensembles. \end{itemize} The results from these four fits and for the two ways of constructing the renormalized isovector charges are given in Table~\ref{tab:resultsrenormIV}. We find that the six estimates for $g_A^{u-d}$ and $g_T^{u-d}$ from the 11-point, 10-point and $10^\ast$-point fits with the two ways of renormalization overlap within $1\sigma$. As discussed in Sec.~\ref{sec:comparison}, for $g_A^{u-d}$, the $a15m310$ point plays an important role in the comparison with the CalLat results. For the final results, we use the 11-point fit to the isovector charges renormalized using $g_\Gamma^{\rm bare}/g_V^{\rm bare} \times Z_\Gamma/Z_V$, as some of the systematics cancel in the double ratio. These fits are shown in Fig.~\ref{fig:conUmD-extrap11}. The lattice artifact that has the most impact on the final values is the dependence of $g_A^{u-d}$ and $g_S^{u-d}$ on the lattice spacing $a$. As shown in Fig.~\ref{fig:conUmD-extrap11}, in these cases the CCFV fit coincides with the fit versus just $a$ (the pink and grey bands overlap). On the other hand, one can see from the middle panels, showing the variation versus $M_\pi^2$, that had we analyzed the data only versus $M_\pi^2$ (grey band), we would have obtained a higher value for $g_A^{u-d}$ and a lower one for $g_S^{u-d}$, both with smaller errors. Our conclusion is that, even when the observed variation is small, it is essential to perform a simultaneous CCFV fit to remove the correlated contributions from the three lattice artifacts.
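The averaging-plus-fit pipeline just described can be sketched as follows. The maximal-correlation error formula is one common reading of the Schmelling prescription, the ensemble parameters and noise level are synthetic assumptions rather than our data, and the chiral-log term is dropped ($c_3^{\rm log}=0$) as in our final fits.

```python
# Hedged sketch: (1) combine paired determinations from one ensemble with a
# maximal-correlation error; (2) simultaneous CCFV fit of Eq. (extrapgAST)
# with c3_log = 0, evaluated at a -> 0, Mpi -> 135 MeV, Mpi*L -> infinity.
import numpy as np
from scipy.optimize import curve_fit

def avg_max_corr(pairs):
    """Inverse-variance weighted mean; with rho = 1 the error is a weighted
    mean of the input errors, so it does not shrink like 1/sqrt(2)."""
    w = np.array([1.0 / e**2 for _, e in pairs])
    v = np.array([v for v, _ in pairs])
    e = np.array([e for _, e in pairs])
    return (w @ v) / w.sum(), (w @ e) / w.sum()

# e.g. combine two analyses of the same ensemble (illustrative numbers)
gbar, ebar = avg_max_corr([(1.269, 0.028), (1.271, 0.015)])

def ccfv(X, c1, c2, c3, c4):
    # leading discretization, chiral, and finite-volume corrections
    a, mpi, mpiL = X                       # fm, GeV, dimensionless
    return c1 + c2 * a + c3 * mpi**2 + c4 * mpi**2 * np.exp(-mpiL)

# synthetic "ensembles": (a [fm], Mpi [GeV], Mpi*L); not the paper's data
a    = np.array([0.15, 0.12, 0.12, 0.09, 0.09, 0.09, 0.06, 0.06, 0.06])
mpi  = np.array([0.31, 0.31, 0.22, 0.31, 0.22, 0.135, 0.31, 0.22, 0.135])
mpiL = np.array([3.9, 4.5, 4.4, 4.5, 4.7, 3.9, 4.5, 4.4, 3.7])
rng = np.random.default_rng(1)
err = np.full(a.shape, 0.01)
data = ccfv((a, mpi, mpiL), 1.22, 0.2, -0.1, 0.5) + rng.normal(0, err)

popt, _ = curve_fit(ccfv, (a, mpi, mpiL), data, sigma=err, absolute_sigma=True)

# physical point: a = 0, Mpi = 135 MeV, Mpi*L -> infinity (FV term vanishes)
g_phys = ccfv((0.0, 0.135, np.inf), *popt)
print(f"averaged pair: {gbar:.4f}({ebar:.4f});  g(phys) = {g_phys:.3f}")
```

Because the finite-volume term carries the factor $e^{-M_\pi L}$, it vanishes identically at the physical point, so $c_4$ affects the extrapolated value only through its correlations with the other coefficients.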
The data for $g_T^{u-d}$ continue to show very little sensitivity to the three variables, and the extrapolated value is stable~\cite{Bhattacharya:2016zcn}. A large part of the error in the individual data points, and thus in the extrapolated value, is now due to the poorly behaved two-loop perturbation theory used to match the RI-sMOM and $\overline{\rm MS}$ schemes in the calculation of the renormalization constant $Z_T$. Further precision in $g_T^{u-d}$, therefore, requires developing more precise methods for calculating the renormalization constants. Overall, compared to the results presented in Ref.~\cite{Bhattacharya:2016zcn}, our confidence in the CCFV fits for all three charges has improved with the new higher precision data. The final results for the isovector charges in the $\overline{\rm MS}$ scheme at 2~GeV from the 11-point fit to the data given in Table~\ref{tab:resultsrenormIV} and renormalized using $g_\Gamma^{\rm bare}/g_V^{\rm bare} \times Z_\Gamma/Z_V$ are: \begin{align} g_A^{u-d} &= 1.218(25) \,, \nonumber \\ g_S^{u-d} &= 1.022(80) \,, \nonumber \\ g_T^{u-d} &= 0.989(32) \,. \label{eq:gFinal} \end{align} These results for $g_S^{u-d}$ and $g_T^{u-d}$ meet the target ten-percent uncertainty needed to leverage precision neutron-decay measurements of the helicity-flip parameters $b$ and $b_\nu$ at the $10^{-3}$ level to constrain novel scalar and tensor couplings, $\epsilon_S$ and $\epsilon_T$, arising at the TeV scale~\cite{Bhattacharya:2011qm,Bhattacharya:2016zcn}.
\begin{figure*}[tb] \subfigure{ \includegraphics[width=0.32\linewidth]{fig2/gAovergV_a_lo_fv_hlabel} \includegraphics[width=0.32\linewidth]{fig2/gAovergV_mpisq_lo_fv_nolabel} \includegraphics[width=0.32\linewidth]{fig2/gAovergV_mpiL_lo_fv_nolabel} } \subfigure{ \includegraphics[width=0.32\linewidth]{fig2/gSovergV_a_lo_fv_hlabel} \includegraphics[width=0.32\linewidth]{fig2/gSovergV_mpisq_lo_fv_nolabel} \includegraphics[width=0.32\linewidth]{fig2/gSovergV_mpiL_lo_fv_nolabel} } \subfigure{ \includegraphics[width=0.32\linewidth]{fig2/gTovergV_a_lo_fv_hlabel} \includegraphics[width=0.32\linewidth]{fig2/gTovergV_mpisq_lo_fv_nolabel} \includegraphics[width=0.32\linewidth]{fig2/gTovergV_mpiL_lo_fv_nolabel} } \caption{The 11-point CCFV fit using Eq.~\protect\eqref{eq:extrapgAST} to the data for the renormalized isovector charges $g_A^{u-d}$, $g_S^{u-d}$, and $g_T^{u-d}$ in the $\overline{{\rm MS}}$ scheme at 2~GeV. The result of the simultaneous extrapolation to the physical point defined by $a\rightarrow 0$, $M_\pi \rightarrow M_{\pi^0}^{{\rm phys}}=135$~MeV and $M_\pi L \rightarrow \infty$ are marked by a red star. The pink error band in each panel is the result of the simultaneous fit but shown as a function of a single variable. The overlay in the left (middle) panels with the dashed line within the grey band is the fit to the data versus $a$ ($M_\pi^2$), i.e., neglecting dependence on the other two variables. 
The symbols used to plot the data are defined in the left panels.} \label{fig:conUmD-extrap11} \end{figure*} \begin{figure*}[tb] \subfigure{ \includegraphics[width=0.32\linewidth]{fig3/gAovergV_u_a_lo_fv_hlabel} \includegraphics[width=0.32\linewidth]{fig3/gAovergV_u_mpisq_lo_fv_nolabel} \includegraphics[width=0.32\linewidth]{fig3/gAovergV_u_mpiL_lo_fv_nolabel} } \hspace{0.04\linewidth} \subfigure{ \includegraphics[width=0.32\linewidth]{fig3/gAovergV_d_a_lo_fv_hlabel} \includegraphics[width=0.32\linewidth]{fig3/gAovergV_d_mpisq_lo_fv_nolabel} \includegraphics[width=0.32\linewidth]{fig3/gAovergV_d_mpiL_lo_fv_nolabel} } \caption{The 11-point CCFV fit using Eq.~\protect\eqref{eq:extrapgAST} to the connected data for the flavor diagonal charges $g_A^{u}$ and $g_A^{d}$ renormalized in the $\overline{{\rm MS}}$ scheme at 2~GeV. Only the data for $g_A^u$ show a notable dependence on the lattice spacing $a$. The rest is the same as in Fig.~\protect\ref{fig:conUmD-extrap11}.\looseness-1 \label{fig:extrap-gA-diagonal}} \end{figure*} \begin{figure*}[tb] \subfigure{ \includegraphics[width=0.32\linewidth]{fig4/gTovergV_u_a_lo_fv_hlabel} \includegraphics[width=0.32\linewidth]{fig4/gTovergV_u_mpisq_lo_fv_nolabel} \includegraphics[width=0.32\linewidth]{fig4/gTovergV_u_mpiL_lo_fv_nolabel} } \hspace{0.04\linewidth} \subfigure{ \includegraphics[width=0.32\linewidth]{fig4/gTovergV_d_a_lo_fv_hlabel} \includegraphics[width=0.32\linewidth]{fig4/gTovergV_d_mpisq_lo_fv_nolabel} \includegraphics[width=0.32\linewidth]{fig4/gTovergV_d_mpiL_lo_fv_nolabel} } \caption{The 11-point CCFV fit using Eq.~\protect\eqref{eq:extrapgAST} to the connected data for the flavor diagonal charges $g_T^{u}$ and $g_T^{d}$ renormalized in the $\overline{{\rm MS}}$ scheme at 2~GeV. Only the data for $g_T^u$ show a notable dependence on $M_\pi$. 
The rest is the same as in Fig.~\protect\ref{fig:conUmD-extrap11}.\looseness-1 \label{fig:extrap-gT-diagonal}} \end{figure*} Results of the 11-point, 10-point, and $10^\ast$-point fits to the connected contributions to the flavor-diagonal charges $g_{A,T}^{u,d}$, renormalized using the isovector renormalization factors $Z_{A,T}^{\rm isovector}$, respectively, are given in Table~\ref{tab:resultsrenormFD}. Their behavior versus the lattice spacing and the pion mass is shown in Figs.~\ref{fig:extrap-gA-diagonal} and~\ref{fig:extrap-gT-diagonal} using the 11-point fits, again with $c_3^{\rm log}=0$ in the ansatz given in Eq.~\eqref{eq:extrapgAST}. The data exhibit the following features: \begin{itemize} \item The noticeable variation in the axial charges is in $g_A^u$ versus $a$, which carries over to $g_A^{u-d}$. \item The flavor diagonal charges $g_T^{u,d}$ show little variation except for the small dependence of $g_T^u$ on $M_\pi^2$, which carries over to $g_T^{u-d}$. \end{itemize} Our final results from the 11-point fits for the connected parts of the flavor diagonal charges for the proton are \looseness-1 \begin{align} g_A^{u,{\rm conn}} &= 0.895(21) \qquad\ g_A^{d,{\rm conn}} = -0.320(12) \,, \nonumber \\ g_T^{u,{\rm conn}} &= 0.790(27) \qquad\ g_T^{d,{\rm conn}} = -0.198(10) \,. \label{eq:FDconnected} \end{align} Estimates for the neutron are given by the $u \leftrightarrow d$ interchange. We again remind the reader that the disconnected contributions for the flavor diagonal axial charges are $O(15\%)$ and will be discussed elsewhere. The disconnected contribution to $g_T^{u+d}$ is small (comparable to the statistical errors) and $Z_T^{u-d} \approx Z_T^{u+d}$. Thus, the results for $g_T^{u,d}$ and $g_T^{u+d}$ are a good approximation to the total contribution. The new estimates given here supersede the values presented in Refs.~\cite{Bhattacharya:2015wna,Bhattacharya:2015esa}. 
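As a consistency check, the extrapolated central values can be recovered from the CCFV fit parameters in Table~\ref{tab:chiralfit}. The sketch below is illustrative only: the functional form written in the code (linear in $a$ and $M_\pi^2$, with an $M_\pi^2 e^{-M_\pi L}$ finite-volume term) is our reading of Eq.~\eqref{eq:extrapgAST} with $c_3^{\rm log}=0$, and only the rounded central values from the table are used.

```python
import math

# Central values of the CCFV fit parameters from Table "tab:chiralfit"
# (11-point fits, c3^log = 0).  Units: c2 in fm^-1; c3, c4 in GeV^-2.
fit_params = {
    "gA": dict(c1=1.21, c2=0.41, c3=0.18, c4=-32.0),
    "gS": dict(c1=1.02, c2=-1.57, c3=0.22, c4=24.0),
    "gT": dict(c1=0.98, c2=0.11, c3=0.55, c4=5.0),
}

def ccfv(p, a_fm, Mpi_GeV, MpiL):
    """Lowest-order CCFV ansatz.  The form assumed here,
    c1 + c2*a + c3*Mpi^2 + c4*Mpi^2*exp(-Mpi*L),
    is our reading of Eq. (extrapgAST) with c3^log = 0."""
    return (p["c1"] + p["c2"] * a_fm + p["c3"] * Mpi_GeV**2
            + p["c4"] * Mpi_GeV**2 * math.exp(-MpiL))

# At the physical point (a -> 0, Mpi = 0.135 GeV, Mpi*L -> infinity)
# only c1 + c3*Mpi^2 survives, independent of the exact forms assumed
# for the discretization and finite-volume terms.
for charge, p in fit_params.items():
    print(charge, round(ccfv(p, a_fm=0.0, Mpi_GeV=0.135, MpiL=1e9), 3))
```

Because the $a$ and finite-volume terms vanish at the physical point, this check is insensitive to their assumed forms, and the output reproduces the extrapolated charges in Eq.~\eqref{eq:gFinal} to within the rounding of the tabulated parameters.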
\begin{table*}[tb] \begin{center} \renewcommand{\arraystretch}{1.2} \begin{ruledtabular} \begin{tabular}{c|c|c|c|c|c} & $c_1$ & $c_2$ & $c_3$ & $c_4$ & $g_\Gamma$ \\ & & fm${}^{-1}$ & GeV${}^{-2}$ & GeV${}^{-2}$ & \\ \hline $g_A^{u-d}$ & 1.21(3) & 0.41(26) & 0.18(33) &$-$32(19) & 1.218(25) \\ \hline $g_S^{u-d}$ & 1.02(1) &$-$1.57(75) & 0.22(1.12) & 24(54) & 1.022(80) \\ \hline $g_T^{u-d}$ & 0.98(3) & 0.11(38) & 0.55(45) & 5(29) & 0.989(32) \\ \end{tabular} \end{ruledtabular} \caption{Values of the fit parameters in the CCFV ansatz defined in Eq.~\eqref{eq:extrapgAST} with $c_3^{\rm log}=0$. The results are given for the 11-point fit used to extract the three isovector charges. } \label{tab:chiralfit} \end{center} \end{table*} \section{Assessing additional error due to CCFV fit ansatz} \label{sec:errors} In this section we reassess the estimation of errors from various sources and provide an additional systematic uncertainty in the isovector charges due to using a CCFV ansatz with only the leading order correction terms. We first briefly review the systematics that are already addressed in our analysis leading to the results in Eq.~\eqref{eq:gFinal}: \begin{itemize} \item Statistical and excited-state contamination (SESC): Errors from these two sources are jointly estimated in the 2- and $3^\ast$ state fits. The 2- and $3^\ast$ state fits for $g_A^{u-d}$ and $g_T^{u-d}$ give overlapping results and in most cases the error estimates from the quoted $3^\ast$-state fits are larger. For $g_S^{u-d}$, we compare the 2- and $2^\ast$-state fits. Based on these comparisons, an estimate of the magnitude of possible residual ESC is given in the first row of Table~\ref{tab:errors} for all three charges. \item Uncertainty in the determination of the renormalization constants $Z_\Gamma$: The results for the $Z$'s and an estimate of the possible uncertainty presented in Ref.~\cite{Bhattacharya:2016zcn} have not changed. 
These are reproduced in Tables~\ref{tab:Zfinal} and~\ref{tab:errors}, respectively. With the increase in statistical precision of the bare charges, the uncertainty in the $Z_\Gamma$ is now a significant fraction of the total uncertainty in $g_{A,S,T}^{u-d}$. \item Residual uncertainties due to the three systematics: the extrapolations to $a\to 0$ and $M_\pi L \to \infty$, and the variation with $M_\pi$. Estimates of errors in the simultaneous CCFV fit using the lowest order corrections (see Eq.~\eqref{eq:extrapgAST}) are given in rows 3--5 in Table~\ref{tab:errors}. These are, in most cases, judged to be small because the variation with respect to the three variables, displayed in Fig.~\ref{fig:conUmD-extrap11}, is small. With increased statistics and the second physical mass ensemble, $a06m135$, our confidence in the CCFV fits and the error estimates obtained by keeping only the lowest-order corrections in each variable has increased significantly. The exception is the dependence of $g_S^{u-d}$ on $a$ as highlighted by the dependence of the extrapolated value on whether the $a15m310$ point is included (11-point fit) or excluded (10-point fit). \end{itemize} Adding the guesstimates for these five systematic uncertainties, given in rows 1--5, in quadrature leads to an error estimate given in the sixth row in Table~\ref{tab:errors}. This is consistent with the errors quoted in Eq.~\eqref{eq:gFinal} and reproduced in the seventh row of Table~\ref{tab:errors}. We, therefore, regard the fits and the error estimates given in Eq.~\eqref{eq:gFinal} as adequately capturing the uncertainty due to the five systematics discussed above. The $\chi^2/{\rm d.o.f.}$ of all four fits for the axial and tensor charges given in Table~\ref{tab:resultsrenormIV} are already very small. Therefore, adding higher order terms to the ansatz is not justified as per the Akaike Information Criterion~\cite{Akaike:1100705}. 
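The quadrature combination described above can be checked directly from the central values listed in rows 1--5 of Table~\ref{tab:errors}; a minimal sketch:

```python
import math

# Guesstimated systematic errors from rows 1-5 of Table "tab:errors":
# SESC, Z, continuum (a), chiral, finite volume.
systematics = {
    "gA": [0.02, 0.01, 0.02, 0.01, 0.01],
    "gS": [0.03, 0.04, 0.04, 0.01, 0.01],
    "gT": [0.01, 0.03, 0.01, 0.02, 0.01],
}

def combine_in_quadrature(errors):
    """Combine independent error estimates in quadrature."""
    return math.sqrt(sum(e * e for e in errors))

for charge, errs in systematics.items():
    print(charge, round(combine_in_quadrature(errs), 3))
```

This reproduces the "guesstimate" row of the table: $0.033$, $0.066$, and $0.04$ for $g_A^{u-d}$, $g_S^{u-d}$, and $g_T^{u-d}$, respectively.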
Nevertheless, to be conservative, we quote an additional systematic uncertainty due to the truncation of the CCFV fit ansatz at the leading order in each of the three variables, by examining the variation in the data in Fig.~\ref{fig:conUmD-extrap11}. For $g^{u-d}_A$, the key reason for the difference between our extrapolated value and the experimental results is the data on the $a\approx 0.06$~fm lattices. As discussed in Sec.~\ref{sec:comparison}, an extrapolation in $a$ with and without these ensembles gives $g^{u-d}_A=1.218(25)$ and $g^{u-d}_A=1.245(42)$, respectively. The difference, $0.03$, is roughly half the total spread between the fourteen values of $g^{u-d}_A$ given in Table~\ref{tab:resultsrenormIV}. We, therefore, quote $0.03$ as the additional uncertainty due to the truncation of the fit ansatz. The dominant variation in $g^{u-d}_S$ is again versus $a$, and, as stated above, the result depends on whether the $a15m310$ point is included in the fit. We, therefore, take half the difference, $0.06$, between the 11-point and 10-point fit values as the additional systematic uncertainty. One gets a similar estimate by taking the difference between the fit values at $a=0.06$~fm and $a=0$. For $g^{u-d}_T$, the largest variation is versus $M_\pi^2$. Since we have data from two ensembles at $M_\pi \approx 135$~MeV that anchor the chiral fit, we take half the difference in the fit values at $M_\pi=135$ and $220$~MeV as the estimate of the additional systematic uncertainty.\looseness-1 These error estimates, rounded up to two decimal places, are given in the last row of Table~\ref{tab:errors}. Including them as a second systematic error, our final results for the isovector charges in the $\overline{\rm MS}$ scheme at 2~GeV are: \begin{align} g_A^{u-d} &= 1.218(25)(30) \,, \nonumber \\ g_S^{u-d} &= 1.022(80)(60) \,, \nonumber \\ g_T^{u-d} &= 0.989(32)(10) \,. 
\label{eq:gFinal2} \end{align} Similar estimates of possible extrapolation uncertainty apply also to results for the connected contributions to the flavor diagonal charges presented in Eq.~\eqref{eq:FDconnected}. Their final analysis, including disconnected contributions, will be presented in a separate publication. \begin{table} \centering \begin{ruledtabular} \begin{tabular}{c|ccc} Error From & $g_A^{u-d}$ & $g_S^{u-d}$ & $g_T^{u-d}$ \\ \hline SESC & $0.02$ $\Uparrow$ & $0.03$ $\Uparrow$ & $0.01$ $\Downarrow$ \\ $Z$ & $0.01$ $\Downarrow$ & $0.04$ $\Uparrow$ & $0.03$ $\Downarrow$ \\ $a$ & $0.02$ $\Downarrow$ & $0.04$ $\Uparrow$ & $0.01$ $\Downarrow$ \\ Chiral & $0.01$ $\Uparrow$ & $0.01$ $\Downarrow$ & $0.02$ $\Downarrow$ \\ Finite volume & $0.01$ $\Uparrow$ & $0.01$ $\Uparrow$ & $0.01$ $\Uparrow$ \\ \hline Guesstimate error & $0.033$ & $0.066$ & $0.04$ \\ \hline Error quoted & $0.025$ & $0.080$ & $0.032$ \\ \hline Fit ansatz & $0.03$ & $0.06$ & $0.01$ \\ \end{tabular} \end{ruledtabular} \caption{Estimates of the error budget for the three isovector charges due to each of the five systematic effects described in the text. The symbols $\Uparrow$ and $\Downarrow$ indicate the direction in which a given systematic is observed to drive the central value obtained from the 11-point fit. The sixth row gives a guesstimate of the error obtained by combining these five systematics in quadrature. This guesstimate is consistent with the actual errors obtained from the 11-point fit, quoted in Eq.~\protect\eqref{eq:gFinal}, and reproduced in the seventh row. The last row gives the additional systematic error assigned to account for possible uncertainty due to using the CCFV fit ansatz with just the lowest order correction terms as described in the text. 
} \label{tab:errors} \end{table} Our new estimate $g_S^{u-d}= 1.022(80)(60)$ is in very good agreement with $g_S^{u-d}= 1.02(8)(7)$ obtained by Gonzalez-Alonso and Camalich~\cite{Gonzalez-Alonso:2013ura} using the conserved vector current (CVC) relation $g_S/g_V = (M_N-M_P)^{\rm QCD}/ (m_d-m_u)^{\rm QCD}$ with the FLAG lattice-QCD estimates~\cite{FLAG:2016qm} for the two quantities on the right hand side. The superscript QCD denotes that the results are in a theory with just QCD, i.e., neglecting electromagnetic corrections. Using CVC in reverse, our predictions for $(M_N-M_P)^{\rm QCD}$, using lattice QCD estimates for $m_u$ and $m_d$, are given in Table~\ref{tab:Mn-Mp}. The uncertainty in these estimates is dominated by that in $g_S^{u-d}$.\looseness-1 \begin{table}[ht] \begin{center} \renewcommand{\arraystretch}{1.2} \begin{ruledtabular} \begin{tabular}{c|c|l} $M_N-M_P$ & $N_f$ & $\{m_d,m_u\}^{\rm QCD}$ \\ (MeV) & Flavors & (MeV) \\ \hline $2.58(32)$ & 2+1 & $m_d = 4.68(14)(7),m_u = 2.16(9)(7)$~\protect\cite{FLAG:2016qm} \\ $2.73(44)$ & 2+1+1 & $m_d = 5.03(26),m_u = 2.36(24)$~\protect\cite{FLAG:2016qm} \\ $2.41(27)$ & 2+1 & $m_d - m_u = 2.41(6)(4)(9)$~\protect\cite{Fodor:2016bgu} \\ $2.63(27)$ & 2+1+1 & $m_d = 4.690(54),m_u = 2.118(38)$~\protect\cite{Bazavov:2018omf} \end{tabular} \end{ruledtabular} \caption{Results for the mass difference $(M_N-M_P)^{\rm QCD}$ obtained using the CVC relation with our estimate $g_S^{u-d}= 1.022(80)(60)$ and lattice results for the up and down quark masses from the FLAG review~\cite{FLAG:2016qm} and recent results~\protect\cite{Fodor:2016bgu,Bazavov:2018omf}. } \label{tab:Mn-Mp} \end{center} \end{table} \section{Comparison with Previous Work} \label{sec:comparison} A summary of lattice results for the three isovector charges for $N_f=2$-, 2+1- and 2+1+1-flavors is shown in Figs.~\ref{fig:PASTgA},~\ref{fig:PASTgS} and~\ref{fig:PASTgT}. They show the steady improvement in results from lattice QCD. 
In this section we compare our results with two calculations that were published after the analysis and comparison presented in Ref.~\cite{Bhattacharya:2016zcn} and that include data from physical pion mass ensembles. These are the ETMC~\cite{Alexandrou:2017oeh,Alexandrou:2017qyt,Alexandrou:2017hac} and CalLat results~\cite{Chang:2018uxx}. \begin{figure} \begin{center} \includegraphics[width=.47\textwidth]{figs/gAcomp-explat-2018-06-23-mag} \end{center} \vspace{-0.5cm} \caption{A summary of results for the axial isovector charge, $g_A^{u-d}$, for $N_f=2$-, 2+1-, and 2+1+1-flavors. Note the much finer x-axis scale for the plot showing experimental results for $g_A^{u-d}$. The lattice results (top panel) are from: PNDME'18 (this work); PNDME'16~\protect\cite{Bhattacharya:2016zcn}; CalLat'18~\protect\cite{Chang:2018uxx}; LHPC'14~\protect\cite{Green:2012ud}; LHPC'10~\protect\cite{Bratt:2010jn}; RBC/UKQCD'08~\protect\cite{Lin:2008uz}; Lin/Orginos'07~\protect\cite{Lin:2007ap}; ETMC'17~\protect\cite{Alexandrou:2017oeh,Alexandrou:2017hac}; Mainz'17~\protect\cite{Capitani:2017qpc}; RQCD'14~\protect\cite{Bali:2014nma}; QCDSF/UKQCD'13~\protect\cite{Horsley:2013ayv}; ETMC'15~\protect\cite{Abdel-Rehim:2015owa} and RBC'08~\protect\cite{Yamazaki:2008py}. Phenomenological and other experimental results (middle panel) are from: AWSR'16~\protect\cite{Beane:2016lcm} and COMPASS'15~\protect\cite{Adolph:2015saz}. The results from neutron decay experiments (bottom panel) have been taken from: Brown'17~\protect\cite{Brown:2017mhw}; Mund'13~\protect\cite{Mund:2012fq}; Mendenhall'12~\protect\cite{Mendenhall:2012tz}; Liu'10~\protect\cite{Liu:2010ms}; Abele'02~\protect\cite{Abele:2002wc}; Mostovoi'01~\protect\cite{Mostovoi:2001ye}; Liaud'97~\protect\cite{Liaud:1997vu}; Yerozolimsky'97~\protect\cite{Erozolimsky:1997wi} and Bopp'86~\protect\cite{Bopp:1986rt}. 
The lattice-QCD estimates in red indicate that estimates of excited-state contamination, or discretization errors, or chiral extrapolation were not presented. When available, systematic errors have been added to statistical ones as outer error bars marked with dashed lines. } \label{fig:PASTgA} \end{figure} \begin{figure} \begin{center} \includegraphics[width=.47\textwidth,clip]{figs/gScomp-explat-2018-06-23-mag} \end{center} \vspace{-0.5cm} \caption{A summary of results for the isovector scalar charge, $g_S^{u-d}$, for $N_f=2$-, 2+1-, and 2+1+1-flavors. The lattice results are from: PNDME'18 (this work); PNDME'16~\protect\cite{Bhattacharya:2016zcn}; LHPC'12~\protect\cite{Green:2012ej}; PNDME'11~\protect\cite{Bhattacharya:2011qm}; ETMC'17~\protect\cite{Alexandrou:2017qyt} and RQCD'14~\protect\cite{Bali:2014nma}. The estimates based on the conserved vector current and phenomenology are taken from Gonzalez-Alonso'14~\protect\cite{Gonzalez-Alonso:2013ura} and Adler'75~\protect\cite{Adler:1975he}. The rest is the same as in Fig.~\protect\ref{fig:PASTgA}. } \label{fig:PASTgS} \end{figure} \begin{figure} \begin{center} \includegraphics[width=.47\textwidth,clip]{figs/gTcomp-explat-2018-06-23-mag} \end{center} \vspace{-0.5cm} \caption{A summary of results for the isovector tensor charge, $g_T^{u-d}$, for $N_f=2$-, 2+1-, and 2+1+1-flavors. The lattice and phenomenology results are quoted from: PNDME'18 (this work); PNDME'16~\protect\cite{Bhattacharya:2016zcn}; PNDME'15~\protect\cite{Bhattacharya:2015wna}; LHPC'12~\protect\cite{Green:2012ej}; RBC/UKQCD'10~\protect\cite{Aoki:2010xg}; ETMC'17~\protect\cite{Alexandrou:2017qyt}; RQCD'14~\protect\cite{Bali:2014nma} and RBC'08~\protect\cite{Yamazaki:2008py}. 
The phenomenological estimates are taken from the following sources: Kang'15~\protect\cite{Kang:2015msa}; Goldstein'14~\protect\cite{Goldstein:2014aja}; Pitschmann'14~\protect\cite{Pitschmann:2014jxa}; Anselmino'13~\protect\cite{Anselmino:2013vqa}; Bacchetta'13~\protect\cite{Bacchetta:2012ty} and Fuyuto'13~\protect\cite{Fuyuto:2013gla}. The rest is the same as in Fig.~\protect\ref{fig:PASTgA}. } \label{fig:PASTgT} \end{figure} The ETMC results $g_A^{u-d}=1.212(40)$, $g_S^{u-d}=0.93(33)$ and $g_T^{u-d}=1.004(28)$~\cite{Alexandrou:2017oeh,Alexandrou:2017qyt,Alexandrou:2017hac} were obtained from a single physical mass ensemble generated with 2-flavors of maximally twisted mass fermions with a clover term at $a=0.0938(4)$~fm, $M_\pi=130.5(4)$~MeV and $M_\pi L = 2.98$. Assuming that the number of quark flavors and finite volume corrections do not make a significant difference, one could compare them against our results from the $a09m130W$ ensemble with similar lattice parameters: $g_A^{u-d}=1.249(21)$, $g_S^{u-d}=0.952(74)$ and $g_T^{u-d}=1.011(30)$. We remind the reader that this comparison is at best qualitative since estimates from different lattice actions are only expected to agree in the continuum limit.\looseness-1 Based on the trends observed in our CCFV fits shown in Figs.~\ref{fig:conUmD-extrap11}--\ref{fig:extrap-gT-diagonal}, we speculate where one may expect to see a difference due to the lack of a continuum extrapolation in the ETMC results. The quantities that exhibit a significant slope versus $a$ are $g_A^{u-d}$ and $g_S^{u-d}$. Again, under the assumptions stated above, we would expect the ETMC value $g_A^{u-d}=1.212(40)$ to be larger and $g_S^{u-d}=0.93(33)$ to be smaller than our extrapolated values given in Eq.~\eqref{eq:gFinal}. We find that the scalar charge (ignoring the large error) fits the expected pattern, but the axial charge does not. 
We also point out that the ETMC error estimates are taken from a single ensemble and a single value of the source-sink separation using the plateau method. Our results from the comparable calculation on the $a09m130W$ ensemble with $\tau=14$ (see Figs.~\ref{fig:gA2v3a09} and~\ref{fig:gT2v3a09} and results in Table~\ref{tab:results3bareu-d}) have much smaller errors. The more detailed comparison we make is against the CalLat result $g_A^{u-d} = 1.271(13)$~\cite{Chang:2018uxx} that agrees with the latest experimental average, $g_A^{u-d} = 1.2766(20)$. Since the CalLat calculations were also done using the same 2+1+1-flavor HISQ ensembles, the important question is why the two results differ after the CCFV fits. To understand why the results can be different, we first review the notable differences between the two calculations. CalLat uses (i) M\"obius domain wall versus clover for the valence quark action. This means that their discretization errors start at $a^2$ versus $a$ for PNDME. They also have no uncertainty due to the renormalization factor since $Z_A/Z_V=1$ for the M\"obius domain wall on HISQ formalism. (ii) They use gradient flow smearing with $t_{gf}/a=1$ versus one HYP smearing to smooth high frequency fluctuations in the gauge configurations. This can impact the size of statistical errors. (iii) They use a different construction of the sequential propagator: CalLat inserts a zero-momentum projected axial current simultaneously at all time slices on the lattice to construct the sequential propagator. Their data are, therefore, for the sum of contributions from insertions on {\it all} time slices on the lattice, i.e., including contact terms and insertion on time slices outside the interval between the source and the sink. CalLat fits this summed three-point function versus only the source-sink separation $\tau$ using the 2-state fit ansatz. 
(iv) The ranges of $\tau$ for which the data have the maximum weight in the respective n-state fits are very different in the two calculations. The CalLat results are obtained from data at much smaller values of $\tau$, which accounts for the smaller error estimates in the data for $g_A^{u-d}$. (v) CalLat analyze the coarser $a\approx 0.15$, $0.12$ and $0.09$~fm ensembles. At $a \approx 0.15$~fm, we can only analyze the $a15m310$ ensemble due to the presence of exceptional configurations in the clover-on-HISQ formulation at lighter pion masses. On the other hand, computing resources have so far limited CalLat from analyzing the three fine $a\approx 0.06$~fm and the physical mass $a09m130$ ensembles. A combination of these factors could easily explain the $\approx 5\%$ difference in the final values. The surprising result, shown in Table~\ref{tab:CalLat}, is that estimates on the seven ensembles analyzed by both collaborations are consistent and do not show a systematic difference. (Note again that results from two different lattice formulations are not, {\it a priori}, expected to agree at finite $a$.) These data suggest that differences at the $1\sigma$ level (see also our analysis in Table~\ref{tab:errors}) are conspiring to produce a 5\% difference in the extrapolated value. Thus, one should look for differences in the details of the CCFV fit. We first examine the extrapolation in $a$. A CCFV fit keeping our data from only the eight $a\approx 0.15$, $0.12$ and $0.09$~fm ensembles gives a larger value, $g_A^{u-d} = 1.245(42)$, since the slope versus $a$ changes sign, as is apparent from the data shown in the top three panels of Fig.~\ref{fig:conUmD-extrap11}. Thus the three $a\approx 0.06$~fm ensembles play an important role in our continuum extrapolation. Our initial concern was possible underestimation of statistical errors in results from the $a \approx 0.06$~fm lattices. 
This prompted us to analyze three crucial ensembles, $a09m130$, $a06m310$ and $a06m220$, a second time with different smearing sizes and different random selection of source points. The consistency between the pairs of data points on these ensembles suggests that statistical fluctuations are not a likely explanation for the size of the undershoot in $g_A^{u-d}$. The possibility that these ensembles are not large enough to have adequately explored the phase space of the functional integral, and the results are possibly biased, can only be checked with the generation and analysis of additional lattices. The chiral fits are also different in detail. In our data, the errors in the points at $M_\pi \approx 310$, 220 and 130 MeV are similar; consequently, all points contribute with similar weight in the fits. The errors in the CalLat data from the two physical mass ensembles $a15m130$ and $a12m130$ are much larger and the fits are predominantly weighted by the data at the heavier masses $M_\pi \approx 400$, 350, 310 and 220 MeV. Also, CalLat finds a significant change in the value between the $M_\pi \approx \{400,\ 350,\ 310\}$~MeV and $M_\pi \approx 220$~MeV points, and this concerted change, well within $1\sigma$ errors in individual points, produces a larger dependence on $M_\pi$. In other words, it is the uniformly smaller values on the $M_\pi \approx \{400,\ 350,\ 310\}$~MeV ensembles compared to the data at $M_\pi\approx 220$~MeV that makes the CalLat chiral fits different and the final value of $g_A^{u-d}$ larger. \begin{figure*} \begin{center} \includegraphics[width=.49\textwidth,clip]{figs/fig-ST} \includegraphics[width=.49\textwidth,clip]{figs/plot-LHC} \end{center} \vspace{-0.5cm} \caption{Current and projected $90 \%$ C.L. constraints on $\epsilon_S$ and $\epsilon_T$ defined at 2~GeV in the $\overline{\rm MS}$ scheme. (Left) The beta-decay constraints are obtained from the recent review article Ref.~\protect\cite{Gonzalez-Alonso:2018omy}. 
The current and future LHC bounds are obtained from the analysis of the $pp \to e + MET + X$. We have used the ATLAS results~\protect\cite{Aaboud:2017efa}, at $\sqrt{s} = 13$~TeV and integrated luminosity of 36 fb$^{-1}$. We find that the strongest bound comes from the cumulative distribution with a cut on the transverse mass at 2 TeV. The projected future LHC bounds are obtained by assuming that no events are observed at transverse mass greater than 3 TeV with an integrated luminosity of 300 fb$^{-1}$. (Right) Comparison of current LHC bounds from $pp \to e + MET + X$ versus $pp \to e^+ e^- + X$. } \label{fig:eSeT} \end{figure*} To summarize, the difference between our and CalLat results comes from the chiral fit and the continuum extrapolation. The difference in the chiral fit is a consequence of the ``jump'' in the CalLat data between $M_\pi = \{400,\ 350,\ 310\}$ and the $220$~MeV data. The CalLat data at $M_\pi \approx 130$~MeV do not contribute much to the fit because of the much larger errors. We do not see a similar jump between our $M_\pi \approx 310$ and $220$~MeV or between the 220 and the 130~MeV data as is evident from Fig.~\ref{fig:conUmD-extrap11}. Also, our four data points at $M_\pi \approx 310$~MeV show a larger spread. The difference in the continuum extrapolation is driven by the smaller estimates on all three fine $a \approx 0.06$~fm ensembles that we have analyzed. Unfortunately, neither of these two differences in the fits can be resolved with the current data, especially since the data on 7 ensembles, shown in Table~\ref{tab:CalLat}, agree within $1\sigma$. Our two conclusions are: (i) figuring out why the $a\approx 0.06$~fm ensembles give smaller estimates is crucial to understanding the difference, and (ii) with present data, a total error estimate of $\approx 5\%$ in $g_A^{u-d}$ is realistic. 
\begin{table}[ht] \begin{center} \renewcommand{\arraystretch}{1.2} \begin{ruledtabular} \begin{tabular}{l|c|c} & This Work & CalLat \\ \hline $a15m310$ & 1.228(25) & 1.215(12) \\ $a12m310$ & 1.251(19) & 1.214(13) \\ $a12m220S$ & 1.224(44) & 1.272(28) \\ $a12m220$ & 1.234(25) & 1.259(15) \\ $a12m220L$ & 1.262(17) & 1.252(21) \\ $a09m310$ & 1.235(15) & 1.236(11) \\ $a09m220$ & 1.260(19) & 1.253(09) \\ \end{tabular} \end{ruledtabular} \caption{The data for the renormalized axial charge $g_A^{u-d}$ for the proton on the seven 2+1+1-flavor HISQ ensembles that have been analyzed by us and the CalLat collaboration~\protect\cite{Chang:2018uxx}. The results are consistent within $1\sigma$ in most cases. } \label{tab:CalLat} \end{center} \end{table} Even with the high statistics calculation presented here, the statistical and ESC errors in the calculation of the scalar charge are between 5\%--15\% on individual ensembles. As a result, the error after the continuum extrapolation is about $10\%$. Over time, results for $g_S^{u-d}$, presented in Fig.~\ref{fig:PASTgS}, do show significant reduction in the error with improved higher-statistics calculations. The variation of the tensor charge $g_T^{u-d}$ with $a$ or $M_\pi $ or $M_\pi L$ is small. As a result, the lattice estimates have been stable over time as shown in Fig.~\ref{fig:PASTgT}. The first error estimate in our result, $g_T^{u-d} = 0.989(32)(10) $, is now dominated by the error in $Z_T$. 
\section{Constraining new physics using precision beta decay measurements} \label{sec:est} Nonstandard scalar and tensor charged-current interactions are parametrized by the dimensionless couplings $\epsilon_{S,T}$~\cite{Bhattacharya:2011qm,Cirigliano:2012ab}: \begin{eqnarray} {\cal L}_{\rm CC} &=& - \frac{G_F^{(0)} V_{ud}}{\sqrt{2}} \ \Big[ \ \epsilon_S \ \bar{e} (1 - \gamma_5) \nu_{\ell} \cdot \bar{u} d \nonumber \\ &+ & \epsilon_T \ \bar{e} \sigma_{\mu \nu} (1 - \gamma_5) \nu_{\ell} \cdot \bar{u} \sigma^{\mu \nu} (1 - \gamma_5) d \Big] ~. \end{eqnarray} These couplings can be constrained by a combination of low energy precision beta-decay measurements (of the pion, neutron, and nuclei) combined with our results for the isovector charges $g_{S}^{\rm u-d}$ and $g_T^{\rm u-d}$, as well as at the Large Hadron Collider (LHC) through the reactions $pp \to e \nu + X$ and $pp \to e^+ e^- + X$. The LHC constraint is valid provided the mediator of the new interaction is heavier than a few TeV. In Fig.~\ref{fig:eSeT} (left) we show current and projected bounds on $\{\epsilon_S, \epsilon_T\}$ defined at 2~GeV in the $\overline{\rm MS}$ scheme. The beta-decay constraints are obtained from the recent review article Ref.~\cite{Gonzalez-Alonso:2018omy}. The current analysis includes all existing neutron and nuclear decay measurements, while the future projection assumes measurements of the various decay correlations with fractional uncertainty of $0.1\%$, the Fierz interference term at the $10^{-3}$ level, and the neutron lifetime with uncertainty $\delta \tau_n = 0.1$~s. The current LHC bounds are obtained from the analysis of the reaction $pp \to e + MET + X$, where $MET$ stands for missing transverse energy. We have used the ATLAS results~\cite{Aaboud:2017efa}, at $\sqrt{s} = 13$~TeV and integrated luminosity of 36 fb$^{-1}$. We find that the strongest bound comes from the cumulative distribution with a cut on the transverse mass at 2 TeV. 
The projected future LHC bounds are obtained by assuming that no events are observed at transverse mass greater than 3~TeV with an integrated luminosity of 300 fb$^{-1}$. The LHC bounds become tighter on the inclusion of the $Z$-like mediated process $pp \to e^+ e^- + X$. As shown in Fig.~\ref{fig:eSeT} (right), including both $W$-like and $Z$-like mediated processes, the current LHC bounds are comparable to future low energy ones, motivating more precise low energy experiments. In this analysis we have neglected the NLO QCD corrections~\cite{Alioli:2018ljm}, which would further strengthen the LHC bounds by $O(10\%)$. Similar bounds are obtained using the CMS data~\cite{Sirunyan:2018mpc,Sirunyan:2018exx}. \section{Conclusions} \label{sec:conclusions} We have presented a high-statistics study of the isovector and flavor-diagonal charges of the nucleon using the clover-on-HISQ lattice-QCD formulation. By using the truncated-solver-with-bias-correction error-reduction technique together with the multigrid solver, we have significantly improved the statistical precision of the data. Also, we show stability in the isolation and mitigation of excited-state contamination by keeping up to three states in the analysis of data at multiple values of the source-sink separation $\tau$. Together, these two improvements allow us to demonstrate that the excited-state contamination in the axial and the tensor channels has been reduced to the 1\%--2\% level. The high-statistics analysis of eleven ensembles covering the range 0.15--0.06~fm in the lattice spacing, $M_\pi =$ 135--320~MeV in the pion mass, and $M_\pi L =$ 3.3--5.5 in the lattice size allowed us to analyze the three systematic uncertainties due to lattice discretization, dependence on the quark mass, and finite lattice size by making a simultaneous fit in the three variables $a$, $M_\pi^2$ and $M_\pi L$. Data from the two physical mass ensembles, $a09m130$ and $a06m135$, anchor the improved chiral fit. 
Our final estimates for the isovector charges are given in Eq.~\eqref{eq:gFinal2}. One of the largest sources of uncertainty now is the calculation of the renormalization constants for the quark bilinear operators. These are calculated nonperturbatively in the RI-sMOM scheme over a range of values of the scale $Q^2$. As discussed in Ref.~\cite{Bhattacharya:2016zcn}, the dominant systematics in the calculation of the $Z$'s come from the breaking of the rotational symmetry on the lattice and the 2-loop perturbative matching between the RI-sMOM and the $\overline{\text{MS}}$ schemes. Our estimate $g_A^{u-d}=1.218(25)(30)$ is about $1.5 \sigma$ (about $5\%$) below the experimental value $g_A/g_V = 1.2766(20)$. Such low values are typical of most lattice QCD calculations. The recent calculation by the CalLat collaboration, also using the 2+1+1-flavor HISQ ensembles, gives $g_A^{u-d}=1.271(13)$~\cite{Chang:2018uxx}. A detailed comparison between the two calculations is presented in Sec.~\ref{sec:comparison}. We show in Table~\ref{tab:CalLat} that results from the seven ensembles, which have been analyzed by both collaborations, agree within $1\sigma$ uncertainty. Our analysis indicates that the majority of the difference comes from the chiral and continuum extrapolations, with $1\sigma$ differences in individual points getting amplified. Given that CalLat have not analyzed the fine $0.06$~fm ensembles, and that their data on the two physical pion mass ensembles, $a15m130$ and $a12m130$, have much larger errors and do not contribute significantly to their chiral fit, we conclude that our error estimate is more realistic. Further work is, therefore, required to resolve the difference between the two results. 
Our results for the isovector scalar and tensor charges, $g_S^{u-d}=1.022(80)(60)$ and $g_T^{u-d}=0.989(32)(10)$, have achieved the target accuracy of 10\% needed to put bounds on scalar and tensor interactions, $\epsilon_S$ and $\epsilon_T$, arising at the TeV scale when combined with experimental measurements of $b$ and $b_\nu$ parameters in neutron decay experiments with $10^{-3}$ sensitivity~\cite{Bhattacharya:2011qm}. In Sec.~\ref{sec:est}, we update the constraints on $\epsilon_S$ and $\epsilon_T$ from both low energy experiments combined with our new lattice results on $g_S^{u-d}$ and $g_T^{u-d}$, and from the ATLAS and the CMS experiments at the LHC. We find that the constraints from low energy experiments combined with matrix elements from lattice QCD are comparable to those from the LHC. For the tensor charges, we find that the dependence on the lattice size, the lattice spacing and the light-quark mass is small, and the simultaneous fit in these three variables, keeping just the lowest-order corrections, has improved over that presented in Ref.~\cite{Bhattacharya:2015wna}. We have also updated our estimates for the connected parts of the flavor-diagonal charges. For the tensor charges, the contribution of the disconnected diagram is consistent with zero~\cite{Bhattacharya:2015wna,Bhattacharya:2015esa}, so the connected contribution, $g_T^{u} = 0.790(27)$ and $g_T^{d} = - 0.198(10)$ for the proton, is a good approximation to the full result that will be discussed elsewhere. The extraction of the scalar charge of the proton has larger uncertainty. The statistical errors in the lattice data for $g_S^{u-d}(a, M_\pi, M_\pi L)$ are 3--5 times larger than those in $g_T^{u-d}(a,M_\pi,M_\pi L)$, and the data show significant dependence on the lattice spacing $a$ and a weaker dependence on the pion mass $M_\pi$. 
Our estimate, $g_S^{u-d}=1.022(80)(60)$, is in very good agreement with the estimate $g_S^{u-d}=1.02(8)(7)$ obtained using the CVC relation $g_S/g_V = (M_N-M_P)^{\rm QCD}/ (m_d-m_u)^{\rm QCD}$ in Ref.~\cite{Gonzalez-Alonso:2013ura}. In Table~\ref{tab:Mn-Mp}, we used our new estimate to update the results for the mass difference $(M_N-M_P)^{\rm QCD}$ obtained by using the CVC relation. Taking the recent 2+1 flavor value $m_d - m_u = 2.41(6)(4)(9)$~MeV from the BMW collaboration~\cite{Fodor:2016bgu} gives $(M_N-M_P)^{\rm 2+1QCD} = 2.41(27)$~MeV, while the 2+1+1-flavor estimates $m_u=2.118(38)$~MeV and $m_d=4.690(54)$~MeV from the MILC/Fermilab/TUMQCD collaboration~\cite{Bazavov:2018omf} give $(M_N-M_P)^{\rm 2+1+1QCD} = 2.63(27)$~MeV.
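The error budget of such CVC estimates follows from standard quadrature propagation of relative errors; a sketch (illustrative only: it naively multiplies our $g_S^{u-d}$ by the BMW mass difference with all errors treated as uncorrelated, so the central value need not match the quoted 2.41(27)~MeV, which is based on $g_S/g_V$):

```python
import math

def mult_with_errors(a, a_errs, b, b_errs):
    """Multiply two quantities, combining each one's (assumed uncorrelated)
    error components in quadrature and propagating relative errors."""
    rel_a = math.sqrt(sum(e**2 for e in a_errs)) / a
    rel_b = math.sqrt(sum(e**2 for e in b_errs)) / b
    val = a * b
    return val, val * math.sqrt(rel_a**2 + rel_b**2)

# g_S^{u-d} = 1.022(80)(60) and BMW m_d - m_u = 2.41(6)(4)(9) MeV
val, err = mult_with_errors(1.022, (0.080, 0.060), 2.41, (0.06, 0.04, 0.09))
print(f"(M_N - M_P)^QCD ~ {val:.2f}({err:.2f}) MeV")  # -> 2.46(0.27) MeV
```

The resulting uncertainty, about 0.27~MeV, is dominated by the error on $g_S^{u-d}$.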
\section{Credits} This document has been adapted from the instructions for earlier ACL and NAACL proceedings, including those for NAACL 2019 by Stephanie Lukin and Alla Roskovskaya, ACL 2018 by Shay Cohen, Kevin Gimpel, and Wei Lu, NAACL 2018 by Margaret Mitchell and Stephanie Lukin, 2017/2018 (NA)ACL bibtex suggestions from Jason Eisner, ACL 2017 by Dan Gildea and Min-Yen Kan, NAACL 2017 by Margaret Mitchell, ACL 2012 by Maggie Li and Michael White, those from ACL 2010 by Jing-Shing Chang and Philipp Koehn, those for ACL 2008 by Johanna D. Moore, Simone Teufel, James Allan, and Sadaoki Furui, those for ACL 2005 by Hwee Tou Ng and Kemal Oflazer, those for ACL 2002 by Eugene Charniak and Dekang Lin, and earlier ACL and EACL formats. Those versions were written by several people, including John Chen, Henry S. Thompson and Donald Walker. Additional elements were taken from the formatting instructions of the \emph{International Joint Conference on Artificial Intelligence} and the \emph{Conference on Computer Vision and Pattern Recognition}. \section{Introduction} The following instructions are directed to authors of papers submitted to ACL 2019 or accepted for publication in its proceedings. All authors are required to adhere to these specifications. Authors are required to provide a Portable Document Format (PDF) version of their papers. \textbf{The proceedings are designed for printing on A4 paper.} \section{General Instructions} Manuscripts must be in two-column format. Exceptions to the two-column format include the title, authors' names and complete addresses, which must be centered at the top of the first page, and any full-width figures or tables (see the guidelines in Subsection~\ref{ssec:first}). \textbf{Type single-spaced.} Start all pages directly under the top margin. See the guidelines later regarding formatting the first page. 
The manuscript should be printed single-sided and its length should not exceed the maximum page limit described in Section~\ref{sec:length}. Pages are numbered for initial submission. However, \textbf{do not number the pages in the camera-ready version}. By uncommenting {\small\verb|\aclfinalcopy|} at the top of this document, it will compile to produce an example of the camera-ready formatting; by leaving it commented out, the document will be anonymized for initial submission. When you first create your submission on softconf, please fill in your submitted paper ID where {\small\verb|***|} appears in the {\small\verb|\def\aclpaperid{***}|} definition at the top. The review process is double-blind, so do not include any author information (names, addresses) when submitting a paper for review. However, you should maintain space for names and addresses so that they will fit in the final (accepted) version. The ACL 2019 \LaTeX\ style will create a titlebox space of 2.5in for you when {\small\verb|\aclfinalcopy|} is commented out. \subsection{The Ruler} The ACL 2019 style defines a printed ruler which should be presented in the version submitted for review. The ruler is provided in order that reviewers may comment on particular lines in the paper without circumlocution. If you are preparing a document without the provided style files, please arrange for an equivalent ruler to appear on the final output pages. The presence or absence of the ruler should not change the appearance of any other content on the page. The camera ready copy should not contain a ruler. (\LaTeX\ users may uncomment the {\small\verb|\aclfinalcopy|} command in the document preamble.) Reviewers: note that the ruler measurements do not align well with lines in the paper -- this turns out to be very difficult to do well when the paper contains many figures and equations, and, when done, looks ugly. 
In most cases one would expect that the approximate location will be adequate, although you can also use fractional references (\emph{e.g.}, the first paragraph on this page ends at mark $108.5$). \subsection{Electronically-available resources} ACL provides this description in \LaTeX2e{} (\texttt{\small acl2019.tex}) and PDF format (\texttt{\small acl2019.pdf}), along with the \LaTeX2e{} style file used to format it (\texttt{\small acl2019.sty}) and an ACL bibliography style (\texttt{\small acl\_natbib.bst}) and example bibliography (\texttt{\small acl2019.bib}). These files are all available at \texttt{\small http://acl2019.org/downloads/ acl2019-latex.zip}. We strongly recommend the use of these style files, which have been appropriately tailored for the ACL 2019 proceedings. \subsection{Format of Electronic Manuscript} \label{sect:pdf} For the production of the electronic manuscript you must use Adobe's Portable Document Format (PDF). PDF files are usually produced from \LaTeX\ using the \textit{pdflatex} command. If your version of \LaTeX\ produces Postscript files, you can convert these into PDF using \textit{ps2pdf} or \textit{dvipdf}. On Windows, you can also use Adobe Distiller to generate PDF. Please make sure that your PDF file includes all the necessary fonts (especially tree diagrams, symbols, and fonts with Asian characters). When you print or create the PDF file, there is usually an option in your printer setup to include none, all or just non-standard fonts. Please make sure that you select the option of including ALL the fonts. \textbf{Before sending it, test your PDF by printing it from a computer different from the one where it was created.} Moreover, some word processors may generate very large PDF files, where each page is rendered as an image. Such images may reproduce poorly. In this case, try alternative ways to obtain the PDF. 
One way on some systems is to install a driver for a postscript printer, send your document to the printer specifying ``Output to a file'', then convert the file to PDF. It is of utmost importance to specify the \textbf{A4 format} (21 cm x 29.7 cm) when formatting the paper. When working with \texttt{dvips}, for instance, one should specify \texttt{-t a4}. Alternatively, use the command \verb|\special{papersize=210mm,297mm}| in the \LaTeX\ preamble (directly below the \verb|\usepackage| commands) and then compile with \texttt{dvipdf} and/or \texttt{pdflatex}. Print-outs of the PDF file on A4 paper should be identical to the hardcopy version. If you cannot meet the above requirements about the production of your electronic submission, please contact the publication chairs as soon as possible. \subsection{Layout} \label{ssec:layout} Format manuscripts two columns to a page, in the manner these instructions are formatted. The exact dimensions for a page on A4 paper are: \begin{itemize} \item Left and right margins: 2.5 cm \item Top margin: 2.5 cm \item Bottom margin: 2.5 cm \item Column width: 7.7 cm \item Column height: 24.7 cm \item Gap between columns: 0.6 cm \end{itemize} \noindent Papers should not be submitted on any other paper size. If you cannot meet the above requirements about the production of your electronic submission, please contact the publication chairs above as soon as possible. \subsection{Fonts} For reasons of uniformity, Adobe's \textbf{Times Roman} font should be used. In \LaTeX2e{} this is accomplished by putting \begin{quote} \begin{verbatim} \usepackage{times} \usepackage{latexsym} \end{verbatim} \end{quote} in the preamble. If Times Roman is unavailable, use \textbf{Computer Modern Roman} (\LaTeX2e{}'s default). Note that the latter is about 10\% less dense than Adobe's Times Roman font. \begin{table}[t!] 
\begin{center} \begin{tabular}{|l|rl|} \hline \textbf{Type of Text} & \textbf{Font Size} & \textbf{Style} \\ \hline paper title & 15 pt & bold \\ author names & 12 pt & bold \\ author affiliation & 12 pt & \\ the word ``Abstract'' & 12 pt & bold \\ section titles & 12 pt & bold \\ subsection titles & 11 pt & bold \\ document text & 11 pt &\\ captions & 10 pt & \\ abstract text & 10 pt & \\ bibliography & 10 pt & \\ footnotes & 9 pt & \\ \hline \end{tabular} \end{center} \caption{\label{font-table} Font guide. } \end{table} \subsection{The First Page} \label{ssec:first} Center the title, author's name(s) and affiliation(s) across both columns. Do not use footnotes for affiliations. Do not include the paper ID number assigned during the submission process. Use the two-column format only when you begin the abstract. \textbf{Title}: Place the title centered at the top of the first page, in a 15-point bold font. (For a complete guide to font sizes and styles, see Table~\ref{font-table}.) Long titles should be typed on two lines without a blank line intervening. Approximately, put the title at 2.5 cm from the top of the page, followed by a blank line, then the author name(s), and the affiliation on the following line. Do not use only initials for given names (middle initials are allowed). Do not format surnames in all capitals (\emph{e.g.}, use ``Mitchell'' not ``MITCHELL''). Do not format title and section headings in all capitals either, except for proper names (such as ``BLEU'') that are conventionally in all capitals. The affiliation should contain the author's complete address, and if possible, an electronic mail address. Start the body of the first page 7.5 cm from the top of the page. The title, author names and addresses should be completely identical to those entered into the electronic paper submission website in order to maintain the consistency of author information among all publications of the conference. 
If they are different, the publication chairs may resolve the difference without consulting with you; so it is in your own interest to double-check that the information is consistent. \textbf{Abstract}: Type the abstract at the beginning of the first column. The width of the abstract text should be smaller than the width of the columns for the text in the body of the paper by about 0.6 cm on each side. Center the word \textbf{Abstract} in a 12 point bold font above the body of the abstract. The abstract should be a concise summary of the general thesis and conclusions of the paper. It should be no longer than 200 words. The abstract text should be in 10 point font. \textbf{Text}: Begin typing the main body of the text immediately after the abstract, observing the two-column format as shown in the present document. Do not include page numbers. \textbf{Indent}: Indent when starting a new paragraph, about 0.4 cm. Use 11 points for text and subsection headings, 12 points for section headings and 15 points for the title. \begin{table} \centering \small \begin{tabular}{cc} \begin{tabular}{|l|l|} \hline \textbf{Command} & \textbf{Output}\\\hline \verb|{\"a}| & {\"a} \\ \verb|{\^e}| & {\^e} \\ \verb|{\`i}| & {\`i} \\ \verb|{\.I}| & {\.I} \\ \verb|{\o}| & {\o} \\ \verb|{\'u}| & {\'u} \\ \verb|{\aa}| & {\aa} \\\hline \end{tabular} & \begin{tabular}{|l|l|} \hline \textbf{Command} & \textbf{Output}\\\hline \verb|{\c c}| & {\c c} \\ \verb|{\u g}| & {\u g} \\ \verb|{\l}| & {\l} \\ \verb|{\~n}| & {\~n} \\ \verb|{\H o}| & {\H o} \\ \verb|{\v r}| & {\v r} \\ \verb|{\ss}| & {\ss} \\\hline \end{tabular} \end{tabular} \caption{Example commands for accented characters, to be used in, \emph{e.g.}, \BibTeX\ names.}\label{tab:accents} \end{table} \subsection{Sections} \textbf{Headings}: Type and label section and subsection headings in the style shown on the present document. Use numbered sections (Arabic numerals) in order to facilitate cross references. 
Number subsections with the section number and the subsection number separated by a dot, in Arabic numerals. Do not number subsubsections. \begin{table*}[t!] \centering \begin{tabular}{lll} output & natbib & previous ACL style files\\ \hline \citep{Gusfield:97} & \verb|\citep| & \verb|\cite| \\ \citet{Gusfield:97} & \verb|\citet| & \verb|\newcite| \\ \citeyearpar{Gusfield:97} & \verb|\citeyearpar| & \verb|\shortcite| \\ \end{tabular} \caption{Citation commands supported by the style file. The citation style is based on the natbib package and supports all natbib citation commands. It also supports commands defined in previous ACL style files for compatibility. } \end{table*} \textbf{Citations}: Citations within the text appear in parentheses as~\cite{Gusfield:97} or, if the author's name appears in the text itself, as Gusfield~\shortcite{Gusfield:97}. Using the provided \LaTeX\ style, the former is accomplished using {\small\verb|\cite|} and the latter with {\small\verb|\shortcite|} or {\small\verb|\newcite|}. Collapse multiple citations as in~\cite{Gusfield:97,Aho:72}; this is accomplished with the provided style using commas within the {\small\verb|\cite|} command, \emph{e.g.}, {\small\verb|\cite{Gusfield:97,Aho:72}|}. Append lowercase letters to the year in cases of ambiguities. Treat double authors as in~\cite{Aho:72}, but write as in~\cite{Chandra:81} when more than two authors are involved. Collapse multiple citations as in~\cite{Gusfield:97,Aho:72}. Also refrain from using full citations as sentence constituents. We suggest that instead of \begin{quote} ``\cite{Gusfield:97} showed that ...'' \end{quote} you use \begin{quote} ``Gusfield \shortcite{Gusfield:97} showed that ...'' \end{quote} If you are using the provided \LaTeX{} and Bib\TeX{} style files, you can use the command \verb|\citet| (cite in text) to get ``author (year)'' citations. 
You can use the command \verb|\citealp| (alternative cite without parentheses) to get ``author year'' citations (which is useful for using citations within parentheses, as in \citealp{Gusfield:97}). If the Bib\TeX{} file contains DOI fields, the paper title in the references section will appear as a hyperlink to the DOI, using the hyperref \LaTeX{} package. To disable the hyperref package, load the style file with the \verb|nohyperref| option: \\{\small \verb|\usepackage[nohyperref]{acl2019}|} \textbf{Compilation Issues}: Some of you might encounter the following error during compilation: ``{\em \verb|\pdfendlink| ended up in different nesting level than \verb|\pdfstartlink|.}'' This happens when \verb|pdflatex| is used and a citation splits across a page boundary. To fix this, the style file contains a patch consisting of the following two lines: (1) \verb|\RequirePackage{etoolbox}| (line 454 in \texttt{acl2019.sty}), and (2) A long line below (line 455 in \texttt{acl2019.sty}). If you still encounter compilation issues even with the patch enabled, disable the patch by commenting the two lines, and then disable the \verb|hyperref| package (see above), recompile and see the problematic citation. Next rewrite that sentence containing the citation. (See, {\em e.g.}, {\small\tt http://tug.org/errors.html}) \textbf{Digital Object Identifiers}: As part of our work to make ACL materials more widely used and cited outside of our discipline, ACL has registered as a CrossRef member, as a registrant of Digital Object Identifiers (DOIs), the standard for registering permanent URNs for referencing scholarly materials. As of 2017, we are requiring all camera-ready references to contain the appropriate DOIs (or as a second resort, the hyperlinked ACL Anthology Identifier) to all cited works. Thus, please ensure that you use Bib\TeX\ records that contain DOI or URLs for any of the ACL materials that you reference. 
Appropriate records should be found for most materials in the current ACL Anthology at \url{http://aclanthology.info/}. As examples, we cite \cite{P16-1001} to show you how papers with a DOI will appear in the bibliography. We cite \cite{C14-1001} to show how papers without a DOI but with an ACL Anthology Identifier will appear in the bibliography. As reviewing will be double-blind, the submitted version of the papers should not include the authors' names and affiliations. Furthermore, self-references that reveal the author's identity, \emph{e.g.}, \begin{quote} ``We previously showed \cite{Gusfield:97} ...'' \end{quote} should be avoided. Instead, use citations such as \begin{quote} ``\citeauthor{Gusfield:97} \shortcite{Gusfield:97} previously showed ... '' \end{quote} Any preliminary non-archival versions of submitted papers should be listed in the submission form but not in the review version of the paper. ACL 2019 reviewers are generally aware that authors may present preliminary versions of their work in other venues, but will not be provided the list of previous presentations from the submission form. \textbf{Please do not use anonymous citations} and do not include when submitting your papers. Papers that do not conform to these requirements may be rejected without review. \textbf{References}: Gather the full set of references together under the heading \textbf{References}; place the section before any Appendices. Arrange the references alphabetically by first author, rather than by order of occurrence in the text. By using a .bib file, as in this template, this will be automatically handled for you. See the \verb|\bibliography| commands near the end for more. Provide as complete a citation as possible, using a consistent format, such as the one for \emph{Computational Linguistics\/} or the one in the \emph{Publication Manual of the American Psychological Association\/}~\cite{APA:83}. Use of full names for authors rather than initials is preferred. 
A list of abbreviations for common computer science journals can be found in the ACM \emph{Computing Reviews\/}~\cite{ACM:83}. The \LaTeX{} and Bib\TeX{} style files provided roughly fit the American Psychological Association format, allowing regular citations, short citations and multiple citations as described above. \begin{itemize} \item Example citing an arxiv paper: \cite{rasooli-tetrault-2015}. \item Example article in journal citation: \cite{Ando2005}. \item Example article in proceedings, with location: \cite{borsch2011}. \item Example article in proceedings, without location: \cite{andrew2007scalable}. \end{itemize} See corresponding .bib file for further details. Submissions should accurately reference prior and related work, including code and data. If a piece of prior work appeared in multiple venues, the version that appeared in a refereed, archival venue should be referenced. If multiple versions of a piece of prior work exist, the one used by the authors should be referenced. Authors should not rely on automated citation indices to provide accurate references for prior and related work. \textbf{Appendices}: Appendices, if any, directly follow the text and the references (but see above). Letter them in sequence and provide an informative title: \textbf{Appendix A. Title of Appendix}. \subsection{Footnotes} \textbf{Footnotes}: Put footnotes at the bottom of the page and use 9 point font. They may be numbered or referred to by asterisks or other symbols.\footnote{This is how a footnote should appear.} Footnotes should be separated from the text by a line.\footnote{Note the line separating the footnotes from the text.} \subsection{Graphics} \textbf{Illustrations}: Place figures, tables, and photographs in the paper near where they are first discussed, rather than at the end, if possible. Wide illustrations may run across both columns. Color illustrations are discouraged, unless you have verified that they will be understandable when printed in black ink. 
\textbf{Captions}: Provide a caption for every illustration; number each one sequentially in the form: ``Figure 1. Caption of the Figure.'' ``Table 1. Caption of the Table.'' Type the captions of the figures and tables below the body, using 10 point text. Captions should be placed below illustrations. Captions that are one line are centered (see Table~\ref{font-table}). Captions longer than one line are left-aligned (see Table~\ref{tab:accents}). Do not overwrite the default caption sizes. The acl2019.sty file is compatible with the caption and subcaption packages; do not add optional arguments. \subsection{Accessibility} \label{ssec:accessibility} In an effort to accommodate people who are color-blind (as well as those printing to paper), grayscale readability for all accepted papers will be encouraged. Color is not forbidden, but authors should ensure that tables and figures do not rely solely on color to convey critical distinctions. A simple criterion: All curves and points in your figures should be clearly distinguishable without color. \section{Translation of non-English Terms} It is also advised to supplement non-English characters and terms with appropriate transliterations and/or translations since not all readers understand all such characters and terms. Inline transliteration or translation can be represented in the order of: original-form transliteration ``translation''. \section{Length of Submission} \label{sec:length} The ACL 2019 main conference accepts submissions of long papers and short papers. Long papers may consist of up to eight (8) pages of content plus unlimited pages for references. Upon acceptance, final versions of long papers will be given one additional page -- up to nine (9) pages of content plus unlimited pages for references -- so that reviewers' comments can be taken into account. Short papers may consist of up to four (4) pages of content, plus unlimited pages for references. 
Upon acceptance, short papers will be given five (5) pages in the proceedings and unlimited pages for references. For both long and short papers, all illustrations and tables that are part of the main text must be accommodated within these page limits, observing the formatting instructions given in the present document. Papers that do not conform to the specified length and formatting requirements are subject to be rejected without review. ACL 2019 does encourage the submission of additional material that is relevant to the reviewers but not an integral part of the paper. There are two such types of material: appendices, which can be read, and non-readable supplementary materials, often data or code. Do not include this additional material in the same document as your main paper. Additional material must be submitted as one or more separate files, and must adhere to the same anonymity guidelines as the main paper. The paper must be self-contained: it is optional for reviewers to look at the supplementary material. Papers should not refer, for further detail, to documents, code or data resources that are not available to the reviewers. Refer to Appendix~\ref{sec:appendix} and Appendix~\ref{sec:supplemental} for further information. Workshop chairs may have different rules for allowed length and whether supplemental material is welcome. As always, the respective call for papers is the authoritative source. \section*{Acknowledgments} The acknowledgments should go immediately before the references. Do not number the acknowledgments section. Do not include this section when submitting your paper for review. \\ \noindent \textbf{Preparing References:} \\ Include your own bib file like this: \verb|\bibliographystyle{acl_natbib}| \verb|\bibliography{acl2019}| \section{Results} \subsection{Baseline Comparisons} In our experiments, we evaluate our model under four different feature settings: Text, Text+Meta, Text+Network, Text+Meta+Network. HLPNN-Text is our model using only tweet text as input. 
HLPNN-Meta is the model that combines text and metadata (description, location, name, user language, time zone). HLPNN-Net is the model that combines text and the mention network. HLPNN is our full model that uses text, metadata, and the mention network for Twitter user geolocation. We present comparisons between our model and previous work in Table \ref{result}. As shown in the table, our model outperforms these baselines across three datasets under various feature settings. Using only the text feature from tweets, our model HLPNN-Text performs the best among all these text-based location prediction systems, winning by a large margin. It not only improves prediction accuracy but also greatly reduces the mean error distance. Compared with a strong neural model equipped with local dialects \cite{rahimi2017neural}, it increases Acc@161 by an absolute 4\% and reduces the mean error distance by about 400 kilometers on the challenging Twitter-World dataset, without using any external knowledge. Its mean error distance on Twitter-World is even comparable to some methods using the network feature \cite{do2017multiview}. With text and metadata, HLPNN-Meta correctly predicts the locations of 57.2\% of users in the WNUT dataset, which is even better than location prediction systems that use text, metadata, and network. Because in the WNUT dataset the ground truth location is the closest city's center, our model achieves 0 median error when its accuracy is greater than 50\%. Note that \citet{miura2017unifying} used 279K users with added metadata in their experiments on Twitter-US, while we use all 449K users for training and evaluation, of which only 53\% have metadata, which makes a fair comparison difficult. Adding the network feature further improves our model's performance. It achieves state-of-the-art results combining all features on these three datasets. 
Even though unifying network information is not the focus of this paper, our model still outperforms or matches some well-designed network-based location prediction systems such as \cite{rahimi2018semi}. On the Twitter-US dataset, our model variant HLPNN-Net achieves a 4.6\% increase in Acc@161 over the previous state-of-the-art methods \cite{do2017multiview} and \cite{rahimi2018semi}. The prediction accuracy of HLPNN-Net on the WNUT dataset is similar to \cite{miura2017unifying}, but with a noticeably lower mean error distance. \subsection{Ablation Study} In this section, we provide an ablation study to examine the contribution of each model component. Specifically, we remove the character-level word embedding, the word-level attention, the field-level attention, the transformer encoders, and the country supervision signal one at a time. We run experiments on the WNUT dataset with text features. \iffalse \begin{table}[!h] \centering \resizebox{0.48\textwidth}{!}{ \begin{tabular}{lccc} \hline\hline & Accuracy & Acc@161 & Mean \\ \hline HLPNN & 57.6 & 73.4 & 538.8 \\ w/o Char-CNN & 55.8 & 72.0 & 620.7 \\ w/o Word-Att & 56.3 & 72.4 & 585.7 \\ w/o Field-Att & 57.2 & 72.9 & 563.6 \\ w/o encoders & 56.8 & 72.4 & 613.2 \\ w/o country & 56.7 & 72.6 & 605.97 \\ \hline\hline \end{tabular} } \caption{An ablation study on WNUT dataset. 
The median distance is 0 for all the models, so we do not include it in the table.} \label{ablation} \end{table} \fi \begin{table}[!h] \centering \resizebox{0.48\textwidth}{!}{ \begin{tabular}{lcccc} \hline\hline & Accuracy & Acc@161 & Median & Mean \\ \hline HLPNN & 37.3 & 52.9 & 109.3 & 1289.4 \\ w/o Char-CNN & 36.3 & 51.0 & 130.8 & 1429.9 \\ w/o Word-Att & 36.4 & 51.5 & 130.2 & 1377.5 \\ w/o Field-Att & 37.0 & 52.0 & 121.8 & 1337.5 \\ w/o encoders & 36.8 & 52.5 & 117.4 & 1402.9 \\ w/o country & 36.7 & 52.6 & 124.8 & 1399.2 \\ \hline\hline \end{tabular} } \caption{An ablation study on the WNUT dataset.} \label{ablation} \end{table} The performance breakdown for each model component is shown in Table \ref{ablation}. Compared to the full model, we find that the character-level word embedding layer is especially helpful for dealing with noisy social media text. The word-level attention also provides a performance gain, while the field-level attention provides only a marginal improvement. The reason could be that the multi-head attention layers in the transformer encoders already capture important information among different feature fields. These two transformer encoders learn the correlation between features and decouple the two levels of prediction. Finally, using the country supervision helps the model achieve better performance with a lower mean error distance. \subsection{Country Effect} To directly measure the effect of adding country-level supervision, we define the relative country error as the percentage of city-level predictions located in incorrect countries among all misclassified city-level predictions. \begin{align*} \resizebox{0.48\textwidth}{!}{$\operatorname{relative\ country\ error} = \frac{\operatorname{\#\ of\ incorrect\ country}}{\operatorname{\#\ of\ incorrect\ city}}$} \end{align*} A lower value of this metric means the model is better at predicting the city-level location, at least within the correct country. 
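For concreteness, this metric can be computed from per-user predictions as follows (a sketch with hypothetical variable names, not our actual evaluation code):

```python
def relative_country_error(pred_cities, true_cities, pred_countries, true_countries):
    """Fraction of misclassified city-level predictions that also land
    in the wrong country (lower is better)."""
    wrong_city = wrong_country = 0
    for pc, tc, pk, tk in zip(pred_cities, true_cities, pred_countries, true_countries):
        if pc != tc:                 # city-level prediction is wrong
            wrong_city += 1
            if pk != tk:             # ...and the country is wrong too
                wrong_country += 1
    return wrong_country / wrong_city if wrong_city else 0.0

# Toy example: three wrong cities, one of them in the wrong country
pred_c = ["paris", "lyon", "berlin", "rome"]
true_c = ["paris", "nice", "munich", "oslo"]
pred_k = ["fr",    "fr",   "de",     "it"]
true_k = ["fr",    "fr",   "de",     "no"]
print(relative_country_error(pred_c, true_c, pred_k, true_k))  # -> 0.333...
```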
We vary the weight $\alpha$ of the country-level supervision signal in our loss function from 0 to 20. A larger $\alpha$ makes the country-level supervision more important during optimization. When $\alpha$ equals 0, there is no country-level supervision in our model. As shown in Figure \ref{alpha}, increasing $\alpha$ improves the relative country error from 26.2\% to 23.1\%, which shows that the country-level supervision signal can indeed help our model predict the city-level location towards the correct country. This possibly explains why our model has a lower mean error distance when compared to other methods. \begin{figure}[!h] \centering \includegraphics[width=0.5\textwidth]{./image/alpha.png} \caption{Relative country error with varying $\alpha$ on the test dataset. Experiments were conducted on the WNUT dataset with the text feature.} \label{alpha} \end{figure} \section{Experiment Settings} \subsection{Datasets} To validate our method, we use three widely adopted Twitter location prediction datasets. Table \ref{data} shows a brief summary of these three datasets. They are listed as follows. \textbf{Twitter-US} is a dataset compiled by \citet{roller2012supervised}. It contains 429K training users, 10K development users, and 10K test users in North America. The ground truth location of each user is set to the first geotag of this user in the dataset. We assign the closest city to each user's ground truth location using the city category built by \citet{han2012geolocation}. Since this dataset only covers North America, we change the first-level location prediction from countries to administrative regions (e.g., state or province). The administrative region for each city is obtained from the original city category. \textbf{Twitter-World} is a Twitter dataset covering the whole world, with 1,367K training users, 10K development users, and 10K test users \cite{han2012geolocation}. 
The ground truth location for each user is the center of the closest city to the first geotag of this user. Only English tweets are included in this dataset, which makes it more challenging for a global-level location prediction task. We downloaded these two datasets from GitHub\footnote{https://github.com/afshinrahimi/geomdn}. Each user in these two datasets is represented by the concatenation of their tweets, followed by the geo-coordinates. We queried Twitter's API to add user metadata information to these two datasets in February 2019. We could only obtain metadata for about 53\% and 67\% of users in Twitter-US and Twitter-World respectively. Because of Twitter's privacy policy change, time zone information was no longer available at the time of collection. \textbf{WNUT} was released in the 2nd Workshop on Noisy User-generated Text \cite{han2016twitter}. The original user-level dataset consists of 1 million training users, and 10K users each in the development and test sets. Each user is assigned the closest city center as the ground truth label. Because of Twitter's data sharing policy, only tweet IDs of training and development data are provided. We had to query Twitter's API to reconstruct the training and development datasets. We finished our data collection around August 2017. About 25\% of training and development users' data could not be accessed at that time. The full anonymized test data was downloaded from the workshop website\footnote{https://noisy-text.github.io/2016/geo-shared-task.html}. \subsection{Text Preprocessing \& Network Construction} For all the text fields, we first convert them into lower case, then use a tweet-specific tokenizer from NLTK\footnote{https://www.nltk.org/api/nltk.tokenize.html} to tokenize them. To keep a reasonable vocabulary size, we only keep tokens that appear more than 10 times in our word vocabulary. Our character vocabulary includes characters that appear more than 5 times in the training corpus.
We construct user networks from mentions in tweets. For WNUT, we keep users satisfying one of the following conditions in the mention network: (1) users in the original dataset; (2) users who are mentioned by two different users in the dataset. For Twitter-US and Twitter-World, following previous work \cite{rahimi2018semi}, a uni-directional edge is set if two users in our dataset directly mentioned each other, or they co-mentioned another user. We remove celebrities who are mentioned by more than 10 different users from the mention network. These celebrities are still kept in the dataset, and their network embeddings are set to 0. \subsection{Evaluation Metrics} We evaluate our method using the four commonly used metrics listed below.\\ \textbf{Accuracy}: The percentage of correctly predicted home cities.\\ \textbf{Acc@161}: The percentage of predicted cities that are within a 161 km (100 miles) radius of the true locations, to capture near-misses.\\ \textbf{Median}: The median distance in kilometers from the predicted city to the true location coordinates. \\ \textbf{Mean}: The mean error distance over all predictions. \subsection{Hyperparameter Settings} In our experiments, we initialize word embeddings with the released 300-dimensional GloVe vectors \cite{pennington2014glove}. For words not appearing in the GloVe vocabulary, we randomly initialize them from a uniform distribution U(-0.25, 0.25). We choose a character embedding dimension of 50. The character embeddings are randomly initialized from a uniform distribution U(-1.0, 1.0), as are the timezone embeddings and language embeddings. These embeddings are all learned during training. Because our three datasets are sufficiently large to train our model, training is quite stable and performance does not fluctuate much.
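The four evaluation metrics can be sketched as a small self-contained Python function (our own illustrative implementation, not the evaluation script used in the experiments; distances use the standard haversine formula, and the 161 km radius follows the Acc@161 definition above):

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    # Great-circle distance in kilometers between two (lat, lon) points.
    r = 6371.0  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def geo_metrics(pred_coords, true_coords, pred_cities, true_cities):
    """Compute Accuracy, Acc@161, Median, and Mean error distance."""
    dists = sorted(haversine_km(*p, *t) for p, t in zip(pred_coords, true_coords))
    n = len(dists)
    median = dists[n // 2] if n % 2 else (dists[n // 2 - 1] + dists[n // 2]) / 2
    return {
        "Accuracy": sum(p == t for p, t in zip(pred_cities, true_cities)) / n,
        "Acc@161": sum(d <= 161 for d in dists) / n,
        "Median": median,
        "Mean": sum(dists) / n,
    }
```

For example, a prediction one degree of longitude off at the equator is about 111 km away, so it counts as a near-miss for Acc@161 but as a city-level error for Accuracy.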
Network embeddings are trained using LINE \cite{tang2015line} with parameters of dimension 600, initial learning rate 0.025, order 2, negative sample size 5, and training sample size 10000M. Network embeddings are fixed during training. For users not appearing in the mention network, we set their network embedding vectors as $0$. \begin{table}[!h] \resizebox{0.48\textwidth}{!}{ \begin{tabular}{lccc} \hline \hline & Twitter-US & Twitter-World & WNUT \\ \hline Batch size & 32 & 64 & 64 \\ \hline Initial learning rate & $10^{-4}$ & $10^{-4}$ & $10^{-4}$ \\ \hline \begin{tabular}[c]{@{}l@{}}$D$: Word embedding \\ dimension\end{tabular} & 300 & 300 & 300 \\ \hline \begin{tabular}[c]{@{}l@{}}$d$: Char. embedding\\ dimension\end{tabular} & 50 & 50 & 50 \\ \hline \begin{tabular}[c]{@{}l@{}}$l_c$: filter sizes\\ in Char. CNN\end{tabular} & 3,4,5 & 3,4,5 & 3,4,5 \\ \hline \begin{tabular}[c]{@{}l@{}}Filter number \\ for each size\end{tabular} & 100 & 100 & 100 \\ \hline $h$: number of heads & 10 & 10 & 10 \\ \hline \begin{tabular}[c]{@{}l@{}}$L$: layers of \\ transformer encoder\end{tabular} & 3 & 3 & 3 \\ \hline $\lambda$: initial penalty term & 1 & 1 & 1 \\ \hline \begin{tabular}[c]{@{}l@{}}$\alpha$: weight for country\\ supervision \end{tabular} & 1 & 1 & 1 \\ \hline \begin{tabular}[c]{@{}l@{}}$D_{ff}$: inner \\ dimension of FFN\end{tabular} & 2400 & 2400 & 2400 \\ \hline \begin{tabular}[c]{@{}l@{}}Max number of \\ tweets per user\end{tabular} & 100 & 50 & 20 \\ \hline \hline \end{tabular} } \caption{A summary of hyperparameter settings of our model.} \label{parameters} \end{table} \begin{table*}[!t] \resizebox{\textwidth}{!}{ \begin{tabular}{lcccccccccc} \hline \hline \multirow{2}{*}{} & \multicolumn{3}{c}{Twitter-US} & \multicolumn{3}{c}{Twitter-World} & \multicolumn{4}{c}{WNUT} \\ \cline{2-11} &Acc@161$\uparrow$& Median$\downarrow$ & Mean$\downarrow$ & Acc@161$\uparrow$ & Median$\downarrow$ & Mean$\downarrow$ & Accuracy$\uparrow$ & Acc@161$\uparrow$ & Median$\downarrow$ 
& Mean$\downarrow$ \\ \hline Text & \multicolumn{10}{c}{} \\ \citet{wing2014hierarchical} & 49.2 & 170.5 & 703.6 & 32.7 & 490.0 & 1714.6 & - & - & - & - \\ \citet{rahimi2015exploiting}* & 50 & 159 & 686 & 32 & 530 & 1724 & - & - & - & - \\ \citet{miura2017unifying}-TEXT & 55.6 & 110.5 & 585.1 & - & - & - & 35.4 & 50.3 & 155.8 & 1592.6 \\ \citet{rahimi2017neural} & 55 & 91 & 581 & 36 & 373 & 1417 & - & - & - & - \\ HLPNN-Text &\textbf{57.1}&\textbf{89.92}& \textbf{516.6} &\textbf{40.1}&\textbf{299.1} & \textbf{1048.1}& \textbf{37.3} & \textbf{52.9} & \textbf{109.3} & \textbf{1289.4} \\ \hline Text+Meta & & & & & & & & & & \\ \citet{miura2017unifying}-META &\textbf{67.2} & \textbf{46.8} &\textbf{356.3}& - & - & - & 54.7 & 70.2 & 0 & 825.8 \\ HLPNN-Meta & 61.1 & 64.3 & 454.8 & \textbf{56.4}&\textbf{86.2}&\textbf{762.1}& \textbf{57.2} & \textbf{73.1} & \textbf{0} & \textbf{572.5} \\ \hline Text+Net & \multicolumn{10}{c}{} \\ \citet{rahimi2015twitter}* & 60 & 78 & 529 & 53 & 111 & 1403 & - & - & - & - \\ \citet{rahimi2017neural} & 61 & 77 & 515 & 53 & 104 & 1280 & - & - & - & - \\ \citet{miura2017unifying}-UNET & 61.5 & 65 & 481.5 & - & - & - & \textbf{38.1} & \textbf{53.3} & \textbf{99.9} & 1498.6 \\ \citet{do2017multiview} & 66.2 & 45 & 433 & 53.3 & 118 & 1044 & - & - & - & - \\ \citet{rahimi2018semi}-MLP-TXT+NET & 66 & 56 & 420 & 58 & \textbf{53} & 1030 & - & - & - & - \\ \citet{rahimi2018semi}-GCN & 62 & 71 & 485 & 54 & 108 & 1130 & - & - & - & - \\ HLPNN-Net &\textbf{70.8}&\textbf{31.6}&\textbf{361.5} &\textbf{58.9} & 59.9 & \textbf{827.6} & 37.8 & \textbf{53.3} & 105.26 & \textbf{1297.7} \\ \hline Text+Meta+Net & & & & & & & & & & \\ \citet{miura2016simple} & - & - & - & - & - & - & 47.6 & - & 16.1 & 1122.3 \\ \citet{jayasinghe2016csiro} & - & - & - & - & - & - & 52.6 & - & 21.7 & 1928.8 \\ \citet{miura2017unifying} & 70.1 & 41.9 & 335.7 & - & - & - & 56.4 & 71.9 & \textbf{0} & 780.5 \\ HLPNN & \textbf{72.7} &\textbf{28.2}& \textbf{323.1} &\textbf{68.4} & 
\textbf{6.20} & \textbf{610.0} & \textbf{57.6} & \textbf{73.4} & \textbf{0} & \textbf{538.8} \\ \hline \hline \end{tabular} } \caption{Comparisons between our method and baselines. We report results under four different feature settings: Text, Text+Metadata, Text+Network, Text+Metadata+Network. ``-'' signifies that no results were published for the given dataset; ``*'' denotes that results are cited from \citet{rahimi2017neural}. Note that \citet{miura2017unifying} only used the 279K users with metadata in their experiments on Twitter-US.} \label{result} \end{table*} A brief summary of the hyperparameter settings of our model is shown in Table \ref{parameters}. The initial learning rate is $10^{-4}$. If the validation accuracy on the development set does not increase, we decrease the learning rate to $10^{-5}$ and train the model for 3 additional epochs. Empirically, training terminates within 10 epochs. The penalty $\lambda$ is initialized as $1.0$ and is adapted during training. We apply dropout on the input of the Bi-LSTM layer and the output of the two sub-layers in the transformer encoders, with dropout rates 0.3 and 0.1 respectively. We use the Adam update rule \cite{kingma2014adam} to optimize our model. Gradients are clipped between -1 and 1. The maximum numbers of tweets per user for training and evaluating on Twitter-US are 100 and 200 respectively. We tuned the learning rate and dropout rate of our model only on the development set of WNUT. \section{Introduction} Accurate estimation of user location is an important factor for many online services, such as recommendation systems \cite{quercia2010recommending}, event detection \cite{sakaki2010earthquake}, and disaster management \cite{carley2016crowd}. Though internet service providers can directly obtain users' location information from explicit metadata like IP address and GPS signal, such private information is not available to third-party contributors.
With this motivation, researchers have developed location prediction systems for various platforms, such as Wikipedia \cite{overell2009geographic}, Facebook \cite{backstrom2010find}, and Twitter \cite{han2012geolocation}. In the case of Twitter, due to the sparsity of geotagged tweets \cite{graham2014world} and the unreliability of user self-declared home locations in profiles \cite{hecht2011tweets}, there is a growing body of research trying to determine users' locations automatically. Various methods have been proposed for this purpose. They can be roughly divided into three categories. The first type consists of tweet text-based methods, where the word distribution is used to estimate the geolocations of users \cite{roller2012supervised, wing2011simple}. In the second type, methods combining metadata features such as time zone and profile description are developed to improve performance \cite{han2013stacking}. Network-based methods form the last type. Several studies have shown that incorporating friends' information is very useful for this task \cite{miura2017unifying, ebrahimi2018unified}. Empirically, models enhanced with network information work better than the other two types, but they do not scale well to larger datasets \cite{rahimi2015twitter}. In recent years, neural network based prediction methods have shown great success on the Twitter user geolocation prediction task \cite{rahimi2017neural, miura2017unifying}. However, these neural network based methods largely ignore the hierarchical structure among locations (e.g., country versus city), which has been shown to be very useful in previous studies \cite{mahmud2012tweet, wing2014hierarchical}. In recent work, \citet{huang2017predicting} also demonstrate that country-level location prediction is much easier than city-level location prediction.
It is natural to ask whether we can incorporate the hierarchical structure among locations into a neural network and use the coarse-grained location prediction to guide the fine-grained prediction. Besides, most of this previous work uses word-level embeddings to represent text, which may not be sufficient for noisy text from social media. In this paper, we present a hierarchical location prediction neural network (HLPNN) for user geolocation on Twitter. Our model combines text features, metadata features (personal description, profile location, name, user language, time zone), and network features together. It uses a character-aware word embedding layer to deal with noisy text and capture out-of-vocabulary words. With transformer encoders, our model learns the correlation between different feature fields and outputs two classification representations for country-level and city-level predictions respectively. It first computes the country-level prediction, which is then used to guide the city-level prediction. Our model is flexible in accommodating different feature combinations, and it achieves state-of-the-art results under various feature settings. \section{Related Work} Because of insufficient geotagged data \cite{graham2014world, binxuan2019large}, there is growing interest in predicting Twitter users' locations. Though there are some potential privacy concerns, user geolocation is a key factor for many important applications such as earthquake detection \cite{earle2012twitter}, disaster management \cite{carley2016crowd}, and health management \cite{huang2018location}. Early work tried to identify users' locations by mapping their IP addresses to physical locations \cite{buyukokkten1999exploiting}. However, such private information is only accessible to internet service providers. There is no easy way for a third party to find Twitter users' IP addresses. Later, various text-based location prediction systems were proposed.
\citet{bilhaut2003geographic} utilize a geographical gazetteer as an external lexicon and present a rule-based geographical reference recognizer. \citet{amitay2004web} extract location-related information listed in a gazetteer from web content to identify the geographical regions of webpages. However, as shown in \cite{berggren2016inferring}, the performance of gazetteer-based methods is hindered by the noisy and informal nature of tweets. Moving beyond methods relying on external knowledge sources (e.g., IP addresses and gazetteers), many machine learning based methods have recently been applied to location prediction. Typically, researchers first represent locations as earth grids \cite{wing2011simple, roller2012supervised}, regions \cite{miyazaki2018twitter, qian2017probabilistic}, or cities \cite{han2013stacking}. Then location classifiers are built to categorize users into different locations. \citet{han2012geolocation} first utilize feature selection methods to find location-indicative words, and then use multinomial naive Bayes and logistic regression classifiers to find correct locations. \citet{han2013stacking} further present a stacking-based method that combines tweet text and metadata together. Along with these classification methods, some approaches also try to learn topic regions automatically by topic modeling, but these do not scale well to the magnitude of social media \cite{hong2012discovering, zhang2017rate}. Recently, deep neural network based methods have become popular for location prediction \cite{miura2016simple}. \citet{huang2017predicting} integrate text and user profile metadata into a single model using convolutional neural networks, and their experiments show superior performance over stacked naive Bayes classifiers. \citet{miura2017unifying, ebrahimi2018unified} incorporate user network connection information into their neural models, where they use network embeddings to represent users in a social network.
\citet{rahimi2018semi} also use text and network features together, but their approach is based on graph convolutional neural networks. Similar to our method, some research has tried to predict user location hierarchically \cite{mahmud2012tweet, wing2014hierarchical}. \citet{mahmud2012tweet} develop a two-level hierarchical location classifier which first predicts a coarse-grained location (country, time zone), and then predicts the city label within the corresponding coarse region. \citet{wing2014hierarchical} build a hierarchical tree of earth grids. The probability of a final fine-grained location can be computed recursively from the root node to the leaf node. Both methods have to train one classifier separately for each parent node, which is quite time-consuming when the classifiers are deep neural networks. Additionally, certain coarse-grained locations may not have enough data samples to train a local neural classifier alone. Our hierarchical location prediction neural network overcomes these issues and only needs to be trained once. \section{Conclusion} In this paper, we propose a hierarchical location prediction neural network, which combines text, metadata, and network information for user location prediction. Our model can accommodate various feature combinations. Extensive experiments have been conducted to validate the effectiveness of our model under four different feature settings across three commonly used benchmarks. Our experiments show that our HLPNN model achieves state-of-the-art results on these three datasets. It not only improves the prediction accuracy but also significantly reduces the mean error distance. In our ablation analysis, we show that using character-aware word embeddings is helpful for overcoming noise in social media text. The transformer encoders effectively learn the correlation between different features and decouple the two levels of prediction.
In our experiments, we also analyzed the effect of adding country-level regularization. The country-level supervision effectively guides the city-level prediction towards the correct country, and reduces the errors where users are misplaced in the wrong countries. Though our HLPNN model achieves strong performance under the Text+Net and Text+Meta+Net settings, potential improvements could be made using better graph-level classification frameworks. We currently only use network information to train network embeddings as user-level features. For future work, we would like to explore ways to combine graph-level classification methods with our user-level learning model. Propagating features from connected friends would provide much more information than just using network embedding vectors. Besides, our model assumes that all posts of a user come from one single home location, but ignores dynamic user movement patterns like traveling. We plan to incorporate temporal states to capture location changes in future work. \section{Method} There are seven features we want to utilize in our model --- tweet text, personal description, profile location, name, user language, time zone, and mention network. The first four features are text fields where users can write anything they want. User language and time zone are two categorical features that are selected by users in their profiles. Following previous work \cite{rahimi2018semi}, we construct the mention network directly from mentions in tweets, which is also less expensive to collect than the following network\footnote{https://developer.twitter.com}. \begin{figure*}[!h] \centering \includegraphics[width=0.8\textwidth]{./image/arch2.png} \caption{The architecture of our hierarchical location prediction neural network.} \label{arch} \end{figure*} The overall architecture of our hierarchical location prediction model is shown in Figure \ref{arch}. It first maps the four text features into a word embedding space.
A bidirectional LSTM (Bi-LSTM) neural network \cite{hochreiter1997long} is used to extract location-specific features from these text embedding vectors. Following the Bi-LSTM, we use a word-level attention layer to generate representation vectors for these text fields. Combining all the text representations, a user language embedding, a timezone embedding, and a network embedding, we apply several layers of transformer encoders \cite{vaswani2017attention} to learn the correlation among all the feature fields. The probability for each country is computed after a field-level attention layer. Finally, we use the country probability as a constraint for the city-level location prediction. We elaborate the details of our model in the following sections. \subsection{Word Embedding} Assuming one user has $T$ tweets, there are $T+3$ text fields for this user, including personal description, profile location, and name. We first map each word in these $T+3$ text fields into a low-dimensional embedding space. The embedding vector for word $w$ is computed as $x_w = [E(w),CNN_c(w)]$, where $[,]$ denotes vector concatenation. $E(w)$ is the word-level embedding retrieved directly from an embedding matrix $E\in R^{V\times D}$ by a lookup operation, where $V$ is the vocabulary size, and $D$ is the word-level embedding dimension. $CNN_c(w)$ is a character-level word embedding generated from a character-level convolutional layer. Using character-level word embeddings is helpful for dealing with out-of-vocabulary tokens and overcoming the noisy nature of tweet text. The character-level word embedding generation process is as follows. For a character $c_i$ in the word $w=(c_1,...,c_k)$, we map it into a character embedding space and get a vector $v_{c_i}\in R^{d}$.
In the convolutional layer, each filter $u \in R^{l_c \times d}$ generates a feature vector $\boldsymbol{ \theta }=[\theta_1,\theta_2,...,\theta_{k-l_c+1}]\in R^{k-l_c+1}$, where $\theta_i=\textit{relu}(u \circ v_{c_i:c_{i+l_c-1}}+b)$. Here $b$ is a bias term, and ``$\circ$'' denotes the element-wise inner product between $u$ and the character window $v_{c_i:c_{i+l_c-1}}\in R^{l_c\times d}$. After this convolutional operation, we use a max-pooling operation to select the most representative feature $\hat \theta = max(\boldsymbol{ \theta })$. With $D$ such filters, we get the character-level word embedding $CNN_c(w)\in R^D$. \subsection{Text Representation} After the word embedding layer, every word in these $T+3$ texts is transformed into a $2D$-dimensional vector. Given a text with word sequence $(w_1,...,w_N)$, we get a word embedding matrix $X\in R^{N\times 2D}$ from the embedding layer. We then apply a Bi-LSTM neural network to extract high-level semantic representations from the text embedding matrices. At every time step $i$, a forward LSTM takes the word embedding $x_i$ of word $w_i$ and the previous state $\overrightarrow {h_{i-1}}$ as inputs, and generates the current hidden state $\overrightarrow h_i$. A backward LSTM reads the text from $w_N$ to $w_1$ and generates another state sequence. The hidden state $h_i \in R^{2D}$ for word $w_i$ is the concatenation of $\overrightarrow {h_i}$ and $\overleftarrow {h_i}$. Concatenating all the hidden states, we get a semantic matrix $H\in R^{N\times 2D}$ \begin{equation*} \begin{aligned} & \overrightarrow {h_i}=\overrightarrow{LSTM}(x_i,\overrightarrow {h_{i-1}})\\ &\overleftarrow {h_i}=\overleftarrow{LSTM}(x_i,\overleftarrow {h_{i+1}}) \\ \end{aligned} \end{equation*} Because not all words in a text contribute equally towards location prediction, we further use a multi-head attention layer \cite{vaswani2017attention} to generate a representation vector $f\in R^{2D}$ for each text.
There are $h$ attention heads that allow the model to attend to important information from different representation subspaces. Each head computes a text representation as a weighted average of the word hidden states. The computation steps in a multi-head attention layer are as follows. \begin{equation*} \begin{aligned} & f = \operatorname{MultiHead}(q,H)=[\operatorname{head_1},...,\operatorname{head_h}]W^O \\ & \operatorname{head_i}(q,H) = \operatorname{softmax}(\frac{qW_i^Q\cdot (HW_i^K)^T}{\sqrt{d_k}})HW_i^V \end{aligned} \end{equation*} where $q\in R^{2D}$ is an attention context vector learned during training, $W_i^Q,W_i^K,W_i^V\in R^{2D\times d_k}$ and $W^O \in R^{2D\times 2D}$ are projection parameters, and $d_k=2D/h$. An attention head $head_i$ first projects the attention context $q$ and the semantic matrix $H$ into query and key subspaces by $W_i^Q$ and $W_i^K$ respectively. The matrix product between the query $qW_i^Q$ and the key $HW_i^K$, after softmax normalization, gives attention weights that indicate important words among the projected value vectors $HW_i^V$. Concatenating the $h$ heads together, we get one representation vector $f\in R^{2D}$ after projection by $W^O$ for each text field. \subsection{Feature Fusion} For the two categorical features, we assign an embedding vector with dimension $2D$ to each time zone and language. These embedding vectors are learned during training. We pretrain network embeddings for users involved in the mention network using LINE \cite{tang2015line}. Network embeddings are fixed during training. We get a feature matrix $F\in R^{(T+6)\times 2D}$ by concatenating the text representations of the $T+3$ text fields, the two embedding vectors of the categorical features, and one network embedding vector. We further use several layers of transformer encoders \cite{vaswani2017attention} to learn the correlation between different feature fields. Each layer consists of a multi-head self-attention network and a feed-forward network (FFN).
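The single-query multi-head attention above can be sketched in pure Python with toy dimensions (our own illustrative implementation; the weight matrices and sizes below are hypothetical, and a real model would learn them during training):

```python
import math

def matmul(A, B):
    # Plain list-of-lists matrix product.
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def attention_head(q, H, Wq, Wk, Wv):
    # q: context vector (length 2D), H: word hidden states (N x 2D).
    d_k = len(Wq[0])
    query = matmul([q], Wq)[0]   # 1 x d_k
    keys = matmul(H, Wk)         # N x d_k
    values = matmul(H, Wv)       # N x d_k
    weights = softmax([sum(qi * ki for qi, ki in zip(query, k)) / math.sqrt(d_k)
                       for k in keys])
    # Weighted average of the projected value vectors.
    return [sum(w * v[j] for w, v in zip(weights, values)) for j in range(d_k)]

def multi_head(q, H, heads, Wo):
    # Concatenate h heads (h * d_k = 2D), then project back by Wo (2D x 2D).
    concat = [x for (Wq, Wk, Wv) in heads for x in attention_head(q, H, Wq, Wk, Wv)]
    return matmul([concat], Wo)[0]
```

With identity projections and a one-hot context vector, the output is simply a softmax-weighted average of the word states, which matches the intuition that the layer picks out location-indicative words.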
A transformer encoder layer first uses the input features to attend to important information within the features themselves via a multi-head self-attention sub-layer. Then a position-wise linear transformation sub-layer $FFN$ is applied to each position identically. Similar to \citet{vaswani2017attention}, we employ residual connections \cite{he2016deep} and layer normalization \cite{ba2016layer} around each of the two sub-layers. The output $F_1$ of the first transformer encoder layer is generated as follows. \begin{equation*} \begin{aligned} & F' = \operatorname{LayerNorm}(\operatorname{MultiHead}(F,F)+F) \\ &F_1 = \operatorname{LayerNorm}(\operatorname{FFN}(F')+F')\\ \end{aligned} \end{equation*} where $\operatorname{FFN}(F')=\operatorname{max}(0,F'W_1+b_1)W_2+b_2$, $W_1\in R^{2D\times D_{ff}}$, and $W_2\in R^{D_{ff}\times 2D}$. Since there is no position information in the transformer encoder layer, our model cannot distinguish between different types of features, e.g., tweet text and personal description. To overcome this issue, we add feature type embeddings to the input representations $F$. There are seven features in total. Each of them has a learned feature type embedding with dimension $2D$, so that one feature type embedding and the representation of the corresponding feature can be summed. Because the input and the output of a transformer encoder have the same dimension, we stack $L$ layers of transformer encoders to learn representations for the country-level prediction and the city-level prediction respectively. These two sets of encoders share the same input $F$, but generate different representations $F_{co}^L$ and $F_{ci}^L$ for country and city predictions. The final classification features for the country-level and city-level location predictions are the row-wise weighted averages of $F_{co}^L$ and $F_{ci}^L$. Similar to the word-level attention, we use a field-level multi-head attention layer to select important features from the $T+6$ vectors and fuse them into a single vector.
\begin{equation*} \begin{aligned} F_{co} = \operatorname{MultiHead}(q_{co},F_{co}^L)\\ F_{ci} = \operatorname{MultiHead}(q_{ci},F_{ci}^L) \end{aligned} \end{equation*} where $q_{co}, q_{ci}\in R^{2D}$ are two attention context vectors. \begin{table*}[!h] \centering \resizebox{0.8\textwidth}{!}{ \begin{tabular}{lccccccccc} \hline \hline \multirow{2}{*}{} & \multicolumn{3}{c}{Twitter-US} & \multicolumn{3}{c}{Twitter-World} & \multicolumn{3}{c}{WNUT} \\ \cline{2-10} & Train & Dev. & Test & Train & Dev. & Test & Train & Dev. & Test \\ \hline \# users & 429K & 10K & 10K & 1.37M & 10K & 10K & 742K & 7.46K & 10K \\ \hline \begin{tabular}[c]{@{}l@{}}\# users \\ with meta\end{tabular} & 228K & 5.32K & 5.34K & 917K & 6.50K & 6.48K & 742K & 7.46K & 10K \\ \hline \# tweets & 36.4M & 861K & 831K & 11.2M & 488K & 315K & 8.97M & 90.3K & 99.7K \\ \hline \begin{tabular}[c]{@{}l@{}} \# tweets \\ per user\end{tabular} & 84.60 & 86.14 & 83.12 & 8.16 & 48.83 & 31.59 & 12.09 & 12.10 & 9.97 \\ \hline \hline \end{tabular} } \caption{A brief summary of our datasets. For each dataset, we report the number of users, number of users with metadata, number of tweets, and average number of tweets per user. We collected metadata for 53\% and 67\% of users in Twitter-US and Twitter-World. Time zone information was not available when we collected metadata for these two datasets. About 25\% of training and development users' data was inaccessible when we collected WNUT in 2017.} \label{data} \end{table*} \subsection{Hierarchical Location Prediction} The final probability for each country is computed by a softmax function \begin{equation*} \begin{aligned} P_{co} = \operatorname{softmax}(W_{co}F_{co}+b_{co}) \end{aligned} \end{equation*} where $W_{co} \in R^{M_{co}\times 2D}$ is a linear projection parameter, $b_{co}\in R^{M_{co}}$ is a bias term, and $M_{co}$ is the number of countries. 
After we get the probability for each country, we further use it to constrain the city-level prediction \begin{equation*} \begin{aligned} P_{ci} = &\operatorname{softmax}(W_{ci}F_{ci}+b_{ci} + \lambda P_{co} Bias) \end{aligned} \end{equation*} where $W_{ci} \in R^{M_{ci}\times 2D}$ is a linear projection parameter, $b_{ci}\in R^{M_{ci}}$ is a bias term, and $M_{ci}$ is the number of cities. $Bias\in R^{M_{co}\times M_{ci}}$ is the country-city correlation matrix. If city $j$ belongs to country $i$, then $Bias_{ij}$ is $0$, otherwise $-1$. $\lambda$ is a penalty term learned during training. The larger of $\lambda$, the stronger of the country constraint. In practise, we also experimented with letting the model learn the country-city correlation matrix during training, which yields similar performance. We minimize the sum of two cross-entropy losses for country-level prediction and city-level prediction. \begin{equation*} \begin{aligned} loss = -( Y_{ci}\cdot log P_{ci} + \alpha Y_{co}\cdot log P_{co}) \end{aligned} \end{equation*} where $Y_{ci}$ and $Y_{co}$ are one-hot encodings of city and country labels. $\alpha$ is the weight to control the importance of country-level supervision signal. Since a large $\alpha$ would potentially interfere with the training process of city-level prediction, we just set it as $1$ in our experiments. Tuning this parameter on each dataset may further improve the performance. \section{Credits} This document has been adapted from the instructions for earlier ACL and NAACL proceedings, including those for NAACL 2019 by Stephanie Lukin and Alla Roskovskaya, ACL 2018 by Shay Cohen, Kevin Gimpel, and Wei Lu, NAACL 2018 by Margaret Michell and Stephanie Lukin, 2017/2018 (NA)ACL bibtex suggestions from Jason Eisner, ACL 2017 by Dan Gildea and Min-Yen Kan, NAACL 2017 by Margaret Mitchell, ACL 2012 by Maggie Li and Michael White, those from ACL 2010 by Jing-Shing Chang and Philipp Koehn, those for ACL 2008 by JohannaD. 
\section*{Acknowledgments} This work was supported in part by the Office of Naval Research (ONR) N0001418SB001 and N000141812108. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the ONR. \section{Results} \subsection{Baseline Comparisons} In our experiments, we evaluate our model under four different feature settings: Text, Text+Meta, Text+Network, and Text+Meta+Network. HLPNN-Text is our model using only tweet text as input. HLPNN-Meta is the model that combines text and metadata (description, location, name, user language, time zone). HLPNN-Net is the model that combines text and the mention network.
HLPNN is our full model, which uses text, metadata, and the mention network for Twitter user geolocation. We present comparisons between our model and previous work in Table \ref{result}. As shown in the table, our model outperforms these baselines across the three datasets under various feature settings. Using only text features from tweets, our model HLPNN-Text performs best among all text-based location prediction systems and wins by a large margin. It not only improves prediction accuracy but also greatly reduces the mean error distance. Compared with a strong neural model equipped with local dialects \cite{rahimi2017neural}, it increases Acc@161 by an absolute 4\% and reduces the mean error distance by about 400 kilometers on the challenging Twitter-World dataset, without using any external knowledge. Its mean error distance on Twitter-World is even comparable to that of some methods using network features \cite{do2017multiview}. With text and metadata, HLPNN-Meta correctly predicts the locations of 57.2\% of users in the WNUT dataset, which is better than location prediction systems that use text, metadata, and network features combined. Because the ground-truth location in the WNUT dataset is the closest city center, our model achieves a median error of 0 whenever its accuracy exceeds 50\%. Note that \citet{miura2017unifying} used 279K users with metadata in their experiments on Twitter-US, while we use all 449K users for training and evaluation, only 53\% of whom have metadata, which makes a fair comparison difficult. Adding the network feature further improves our model's performance. It achieves state-of-the-art results on all three datasets when combining all features. Even though unifying network information is not the focus of this paper, our model still outperforms or is comparable to some well-designed network-based location prediction systems such as that of \citet{rahimi2018semi}.
On the Twitter-US dataset, our model variant HLPNN-Net achieves a 4.6\% increase in Acc@161 over the previous state-of-the-art methods of \citet{do2017multiview} and \citet{rahimi2018semi}. The prediction accuracy of HLPNN-Net on the WNUT dataset is similar to that of \citet{miura2017unifying}, but with a noticeably lower mean error distance. \subsection{Ablation Study} In this section, we provide an ablation study to examine the contribution of each model component. Specifically, we remove the character-level word embedding, the word-level attention, the field-level attention, the transformer encoders, and the country supervision signal one at a time. We run experiments on the WNUT dataset with text features. \begin{table}[!h] \centering \resizebox{0.48\textwidth}{!}{ \begin{tabular}{lcccc} \hline\hline & Accuracy & Acc@161 & Median & Mean \\ \hline HLPNN & 37.3 & 52.9 & 109.3 & 1289.4 \\ w/o Char-CNN & 36.3 & 51.0 & 130.8 & 1429.9 \\ w/o Word-Att & 36.4 & 51.5 & 130.2 & 1377.5 \\ w/o Field-Att & 37.0 & 52.0 & 121.8 & 1337.5 \\ w/o encoders & 36.8 & 52.5 & 117.4 & 1402.9 \\ w/o country & 36.7 & 52.6 & 124.8 & 1399.2 \\ \hline\hline \end{tabular} } \caption{An ablation study on the WNUT dataset.} \label{ablation} \end{table} The performance breakdown for each model component is shown in Table \ref{ablation}.
Compared to the full model, we find that the character-level word embedding layer is especially helpful for dealing with noisy social media text. The word-level attention also provides a performance gain, while the field-level attention provides only a marginal improvement. The reason could be that the multi-head attention layers in the transformer encoders already capture important information among different feature fields. These two transformer encoders learn the correlations between features and decouple the two levels of prediction. Finally, using the country supervision helps the model achieve better performance with a lower mean error distance. \subsection{Country Effect} To directly measure the effect of adding country-level supervision, we define the relative country error as the percentage of city-level predictions located in incorrect countries among all misclassified city-level predictions: \begin{align*} \resizebox{0.48\textwidth}{!}{$\operatorname{relative\ country\ error} = \frac{\operatorname{\#\ of\ incorrect\ country}}{\operatorname{\#\ of\ incorrect\ city}}$} \end{align*} A lower value of this metric means the model is better at predicting the city-level location at least within the correct country. We vary the weight $\alpha$ of the country-level supervision signal in our loss function from 0 to 20; a larger $\alpha$ makes the country-level supervision more important during optimization, and when $\alpha$ equals 0 there is no country-level supervision in our model. As shown in Figure \ref{alpha}, increasing $\alpha$ improves the relative country error from 26.2\% to 23.1\%, which shows that the country-level supervision signal can indeed steer our model's city-level predictions towards the correct country.
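The relative country error defined above can be computed directly from the predictions. In this sketch, the dictionary-based city-to-country mapping and the input format are illustrative assumptions:

```python
def relative_country_error(pred_cities, true_cities, city_to_country):
    """Fraction of misclassified city-level predictions that also land
    in the wrong country (lower is better)."""
    wrong_city = [(p, t) for p, t in zip(pred_cities, true_cities) if p != t]
    if not wrong_city:
        return 0.0
    wrong_country = sum(
        city_to_country[p] != city_to_country[t] for p, t in wrong_city)
    return wrong_country / len(wrong_city)

# Hypothetical example: two city-level mistakes, one of which crosses a border.
c2c = {"sf": "us", "la": "us", "paris": "fr"}
err = relative_country_error(["sf", "paris", "la"], ["la", "la", "la"], c2c)
```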
This possibly explains why our model has a lower mean error distance than other methods. \begin{figure}[!h] \centering \includegraphics[width=0.5\textwidth]{./image/alpha.png} \caption{Relative country error with varying $\alpha$ on the test set. Experiments were conducted on the WNUT dataset with text features.} \label{alpha} \end{figure} \section{Experiment Settings} \subsection{Datasets} To validate our method, we use three widely adopted Twitter location prediction datasets, summarized briefly in Table \ref{data}. \textbf{Twitter-US} is a dataset compiled by \citet{roller2012supervised}. It contains 429K training users, 10K development users, and 10K test users in North America. The ground truth location of each user is set to the first geotag of this user in the dataset. We assign the closest city to each user's ground truth location using the city category built by \citet{han2012geolocation}. Since this dataset only covers North America, we change the first-level location prediction from countries to administrative regions (e.g., state or province). The administrative region for each city is obtained from the original city category. \textbf{Twitter-World} is a Twitter dataset covering the whole world, with 1,367K training users, 10K development users, and 10K test users \cite{han2012geolocation}. The ground truth location for each user is the center of the city closest to the user's first geotag. Only English tweets are included in this dataset, which makes it more challenging as a global-level location prediction task. We downloaded these two datasets from GitHub\footnote{https://github.com/afshinrahimi/geomdn}. Each user in these two datasets is represented by the concatenation of their tweets, followed by the geo-coordinates. We queried Twitter's API to add user metadata to these two datasets in February 2019.
We only obtained metadata for about 53\% and 67\% of users in Twitter-US and Twitter-World, respectively. Because of a change in Twitter's privacy policy, we could no longer obtain time zone information at the time of collection. \textbf{WNUT} was released at the 2nd Workshop on Noisy User-generated Text \cite{han2016twitter}. The original user-level dataset consists of 1 million training users, with 10K users each in the development and test sets. Each user is assigned the closest city center as the ground truth label. Because of Twitter's data sharing policy, only the tweet IDs of the training and development data are provided, so we had to query Twitter's API to reconstruct the training and development sets. We finished our data collection around August 2017; about 25\% of the training and development users' data could not be accessed at that time. The full anonymized test data was downloaded from the workshop website\footnote{https://noisy-text.github.io/2016/geo-shared-task.html}. \subsection{Text Preprocessing \& Network Construction} For all the text fields, we first convert them to lower case and then tokenize them with a tweet-specific tokenizer from NLTK\footnote{https://www.nltk.org/api/nltk.tokenize.html}. To keep a reasonable vocabulary size, we only keep tokens that appear more than 10 times in our word vocabulary. Our character vocabulary includes characters that appear more than 5 times in the training corpus. We construct user networks from mentions in tweets. For WNUT, we keep users in the mention network who satisfy one of the following conditions: (1) users in the original dataset; (2) users who are mentioned by two different users in the dataset. For Twitter-US and Twitter-World, following previous work \cite{rahimi2018semi}, a uni-directional edge is set if two users in our dataset directly mentioned each other, or if they co-mentioned another user. We remove celebrities who are mentioned by more than 10 different users from the mention network.
These celebrities are still kept in the dataset, and their network embeddings are set to 0. \subsection{Evaluation Metrics} We evaluate our method using the four commonly used metrics listed below.\\ \textbf{Accuracy}: The percentage of correctly predicted home cities.\\ \textbf{Acc@161}: The percentage of predicted cities that are within a 161 km (100 miles) radius of the true location, to capture near-misses.\\ \textbf{Median}: The median distance, in kilometers, from the predicted city to the true location coordinates. \\ \textbf{Mean}: The mean of the error distances of the predictions. \subsection{Hyperparameter Settings} In our experiments, we initialize word embeddings with the released 300-dimensional GloVe vectors \cite{pennington2014glove}. For words not in the GloVe vocabulary, we randomly initialize embeddings from a uniform distribution U(-0.25, 0.25). We choose a character embedding dimension of 50. The character embeddings are randomly initialized from a uniform distribution U(-1.0, 1.0), as are the time zone and language embeddings; these embeddings are all learned during training. Because our three datasets are sufficiently large to train our model, learning is quite stable and performance does not fluctuate much. Network embeddings are trained using LINE \cite{tang2015line} with parameters of dimension 600, initial learning rate 0.025, order 2, negative sample size 5, and training sample size 10000M. Network embeddings are fixed during training. For users not appearing in the mention network, we set their network embedding vectors to $0$. \begin{table}[!h] \resizebox{0.48\textwidth}{!}{ \begin{tabular}{lccc} \hline \hline & Twitter-US & Twitter-World & WNUT \\ \hline Batch size & 32 & 64 & 64 \\ \hline Initial learning rate & $10^{-4}$ & $10^{-4}$ & $10^{-4}$ \\ \hline \begin{tabular}[c]{@{}l@{}}$D$: Word embedding \\ dimension\end{tabular} & 300 & 300 & 300 \\ \hline \begin{tabular}[c]{@{}l@{}}$d$: Char.
embedding\\ dimension\end{tabular} & 50 & 50 & 50 \\ \hline \begin{tabular}[c]{@{}l@{}}$l_c$: filter sizes\\ in Char. CNN\end{tabular} & 3,4,5 & 3,4,5 & 3,4,5 \\ \hline \begin{tabular}[c]{@{}l@{}}Filter number \\ for each size\end{tabular} & 100 & 100 & 100 \\ \hline $h$: number of heads & 10 & 10 & 10 \\ \hline \begin{tabular}[c]{@{}l@{}}$L$: layers of \\ transformer encoder\end{tabular} & 3 & 3 & 3 \\ \hline $\lambda$: initial penalty term & 1 & 1 & 1 \\ \hline \begin{tabular}[c]{@{}l@{}}$\alpha$: weight for country\\ supervision \end{tabular} & 1 & 1 & 1 \\ \hline \begin{tabular}[c]{@{}l@{}}$D_{ff}$: inner \\ dimension of FFN\end{tabular} & 2400 & 2400 & 2400 \\ \hline \begin{tabular}[c]{@{}l@{}}Max number of \\ tweets per user\end{tabular} & 100 & 50 & 20 \\ \hline \hline \end{tabular} } \caption{A summary of hyperparameter settings of our model.} \label{parameters} \end{table} \begin{table*}[!t] \resizebox{\textwidth}{!}{ \begin{tabular}{lcccccccccc} \hline \hline \multirow{2}{*}{} & \multicolumn{3}{c}{Twitter-US} & \multicolumn{3}{c}{Twitter-World} & \multicolumn{4}{c}{WNUT} \\ \cline{2-11} &Acc@161$\uparrow$& Median$\downarrow$ & Mean$\downarrow$ & Acc@161$\uparrow$ & Median$\downarrow$ & Mean$\downarrow$ & Accuracy$\uparrow$ & Acc@161$\uparrow$ & Median$\downarrow$ & Mean$\downarrow$ \\ \hline Text & \multicolumn{10}{c}{} \\ \citet{wing2014hierarchical} & 49.2 & 170.5 & 703.6 & 32.7 & 490.0 & 1714.6 & - & - & - & - \\ \citet{rahimi2015exploiting}* & 50 & 159 & 686 & 32 & 530 & 1724 & - & - & - & - \\ \citet{miura2017unifying}-TEXT & 55.6 & 110.5 & 585.1 & - & - & - & 35.4 & 50.3 & 155.8 & 1592.6 \\ \citet{rahimi2017neural} & 55 & 91 & 581 & 36 & 373 & 1417 & - & - & - & - \\ HLPNN-Text &\textbf{57.1}&\textbf{89.92}& \textbf{516.6} &\textbf{40.1}&\textbf{299.1} & \textbf{1048.1}& \textbf{37.3} & \textbf{52.9} & \textbf{109.3} & \textbf{1289.4} \\ \hline Text+Meta & & & & & & & & & & \\ \citet{miura2017unifying}-META &\textbf{67.2} & \textbf{46.8} 
&\textbf{356.3}& - & - & - & 54.7 & 70.2 & 0 & 825.8 \\ HLPNN-Meta & 61.1 & 64.3 & 454.8 & \textbf{56.4}&\textbf{86.2}&\textbf{762.1}& \textbf{57.2} & \textbf{73.1} & \textbf{0} & \textbf{572.5} \\ \hline Text+Net & \multicolumn{10}{c}{} \\ \citet{rahimi2015twitter}* & 60 & 78 & 529 & 53 & 111 & 1403 & - & - & - & - \\ \citet{rahimi2017neural} & 61 & 77 & 515 & 53 & 104 & 1280 & - & - & - & - \\ \citet{miura2017unifying}-UNET & 61.5 & 65 & 481.5 & - & - & - & \textbf{38.1} & \textbf{53.3} & \textbf{99.9} & 1498.6 \\ \citet{do2017multiview} & 66.2 & 45 & 433 & 53.3 & 118 & 1044 & - & - & - & - \\ \citet{rahimi2018semi}-MLP-TXT+NET & 66 & 56 & 420 & 58 & \textbf{53} & 1030 & - & - & - & - \\ \citet{rahimi2018semi}-GCN & 62 & 71 & 485 & 54 & 108 & 1130 & - & - & - & - \\ HLPNN-Net &\textbf{70.8}&\textbf{31.6}&\textbf{361.5} &\textbf{58.9} & 59.9 & \textbf{827.6} & 37.8 & \textbf{53.3} & 105.26 & \textbf{1297.7} \\ \hline Text+Meta+Net & & & & & & & & & & \\ \citet{miura2016simple} & - & - & - & - & - & - & 47.6 & - & 16.1 & 1122.3 \\ \citet{jayasinghe2016csiro} & - & - & - & - & - & - & 52.6 & - & 21.7 & 1928.8 \\ \citet{miura2017unifying} & 70.1 & 41.9 & 335.7 & - & - & - & 56.4 & 71.9 & \textbf{0} & 780.5 \\ HLPNN & \textbf{72.7} &\textbf{28.2}& \textbf{323.1} &\textbf{68.4} & \textbf{6.20} & \textbf{610.0} & \textbf{57.6} & \textbf{73.4} & \textbf{0} & \textbf{538.8} \\ \hline \hline \end{tabular} } \caption{Comparisons between our method and baselines. We report results under four different feature settings: Text, Text+Metadata, Text+Network, Text+Metadata+Network. ``-'' signifies that no results were published for the given dataset, ``*'' denotes that results are cited from \citet{rahimi2017neural}. Note that \citet{miura2017unifying} only used 279K users added with metadata in their experiments of Twitter-US.} \label{result} \end{table*} A brief summary of hyperparameter settings of our model is shown in Table \ref{parameters}. 
The initial learning rate is $10^{-4}$. If the validation accuracy on the development set does not increase, we decrease the learning rate to $10^{-5}$ and train the model for 3 additional epochs. Empirically, training terminates within 10 epochs. The penalty $\lambda$ is initialized as $1.0$ and is adapted during training. We apply dropout to the input of the Bi-LSTM layer and to the output of the two sub-layers in the transformer encoders, with dropout rates 0.3 and 0.1 respectively. We use the Adam update rule \cite{kingma2014adam} to optimize our model. Gradients are clipped between -1 and 1. The maximum numbers of tweets per user for training and evaluation on Twitter-US are 100 and 200 respectively. We tuned our model (learning rate and dropout rate) only on the development set of WNUT. \section{Introduction} Accurate estimation of user location is an important factor for many online services, such as recommendation systems \cite{quercia2010recommending}, event detection \cite{sakaki2010earthquake}, and disaster management \cite{carley2016crowd}. Though internet service providers can directly obtain users' location information from explicit metadata such as IP addresses and GPS signals, such private information is not available to third parties. With this motivation, researchers have developed location prediction systems for various platforms, such as Wikipedia \cite{overell2009geographic}, Facebook \cite{backstrom2010find}, and Twitter \cite{han2012geolocation}. In the case of Twitter, due to the sparsity of geotagged tweets \cite{graham2014world} and the unreliability of self-declared home locations in user profiles \cite{hecht2011tweets}, there is a growing body of research trying to determine users' locations automatically. Various methods have been proposed for this purpose; they can be roughly divided into three categories.
The first type consists of tweet text-based methods, where the word distribution is used to estimate users' geolocations \cite{roller2012supervised, wing2011simple}. In the second type, methods combining metadata features such as time zone and profile description are developed to improve performance \cite{han2013stacking}. Network-based methods form the last type. Several studies have shown that incorporating friends' information is very useful for this task \cite{miura2017unifying, ebrahimi2018unified}. Empirically, models enhanced with network information work better than the other two types, but they do not scale well to larger datasets \cite{rahimi2015twitter}. In recent years, neural network based methods have shown great success on the Twitter user geolocation prediction task \cite{rahimi2017neural, miura2017unifying}. However, these neural network based methods largely ignore the hierarchical structure among locations (e.g., country versus city), which has been shown to be very useful in previous studies \cite{mahmud2012tweet, wing2014hierarchical}. In recent work, \citet{huang2017predicting} also demonstrate that country-level location prediction is much easier than city-level location prediction. It is natural to ask whether we can incorporate the hierarchical structure among locations into a neural network and use the coarse-grained location prediction to guide the fine-grained prediction. Moreover, most of this previous work uses word-level embeddings to represent text, which may not be sufficient for noisy text from social media. In this paper, we present a hierarchical location prediction neural network (HLPNN) for user geolocation on Twitter. Our model combines text features, metadata features (personal description, profile location, name, user language, time zone), and network features. It uses a character-aware word embedding layer to deal with noisy text and capture out-of-vocabulary words.
With transformer encoders, our model learns the correlation between different feature fields and outputs two classification representations for country-level and city-level predictions respectively. It first computes the country-level prediction, which is further used to guide the city-level prediction. Our model is flexible in accommodating different feature combinations, and it achieves state-of-the-art results under various feature settings. \section{Related Work} Because of insufficient geotagged data \cite{graham2014world, binxuan2019large}, there is growing interest in predicting Twitter users' locations. Though there are some potential privacy concerns, user geolocation is a key factor for many important applications such as earthquake detection \cite{earle2012twitter}, disaster management \cite{carley2016crowd}, and health management \cite{huang2018location}. Early work tried to identify users' locations by mapping their IP addresses to physical locations \cite{buyukokkten1999exploiting}. However, such private information is only accessible to internet service providers; there is no easy way for a third party to find Twitter users' IP addresses. Later, various text-based location prediction systems were proposed. \citet{bilhaut2003geographic} utilize a geographical gazetteer as an external lexicon and present a rule-based recognizer of geographical references. \citet{amitay2004web} extract location-related information listed in a gazetteer from web content to identify the geographical regions of webpages. However, as shown in \cite{berggren2016inferring}, the performance of gazetteer-based methods is hindered by the noisy and informal nature of tweets. Moving beyond methods relying on external knowledge sources (e.g., IP addresses and gazetteers), many machine learning based methods have recently been applied to location prediction.
Typically, researchers first represent locations as earth grids \cite{wing2011simple, roller2012supervised}, regions \cite{miyazaki2018twitter, qian2017probabilistic}, or cities \cite{han2013stacking}. Then location classifiers are built to categorize users into different locations. \citet{han2012geolocation} first utilize feature selection methods to find location indicative words, then use multinomial naive Bayes and logistic regression classifiers to find correct locations. \citet{han2013stacking} further present a stacking based method that combines tweet text and metadata together. Along with these classification methods, some approaches also try to learn topic regions automatically via topic modeling, but these do not scale well to the magnitude of social media \cite{hong2012discovering, zhang2017rate}. Recently, deep neural network based methods have become popular for location prediction \cite{miura2016simple}. \citet{huang2017predicting} integrate text and user profile metadata into a single model using convolutional neural networks, and their experiments show superior performance over stacked naive Bayes classifiers. \citet{miura2017unifying, ebrahimi2018unified} incorporate user network connection information into their neural models, where they use network embeddings to represent users in a social network. \citet{rahimi2018semi} also use text and network features together, but their approach is based on graph convolutional neural networks. Similar to our method, some research has tried to predict user location hierarchically \cite{mahmud2012tweet, wing2014hierarchical}. \citet{mahmud2012tweet} develop a two-level hierarchical location classifier which first predicts a coarse-grained location (country, time zone), and then predicts the city label within the corresponding coarse region. \citet{wing2014hierarchical} build a hierarchical tree of earth grids.
The probability of a final fine-grained location can be computed recursively from the root node to the leaf node. Both methods have to train one classifier separately for each parent node, which is quite time-consuming when the classifiers are deep neural networks. Additionally, certain coarse-grained locations may not have enough data samples to train a local neural classifier alone. Our hierarchical location prediction neural network overcomes these issues and only needs to be trained once. \section{Conclusion} In this paper, we propose a hierarchical location prediction neural network, which combines text, metadata, and network information for user location prediction. Our model can accommodate various feature combinations. Extensive experiments have been conducted to validate the effectiveness of our model under four different feature settings across three commonly used benchmarks. Our experiments show that our HLPNN model achieves state-of-the-art results on these three datasets. It not only improves the prediction accuracy but also significantly reduces the mean error distance. In our ablation analysis, we show that using character-aware word embeddings is helpful for overcoming noise in social media text. The transformer encoders effectively learn the correlation between different features and decouple the two prediction levels. In our experiments, we also analyze the effect of adding country-level regularization. The country-level supervision effectively guides the city-level prediction towards the correct country and reduces errors where users are misplaced in the wrong country. Though our HLPNN model achieves strong performance under the Text+Net and Text+Meta+Net settings, potential improvements could be made using better graph-level classification frameworks. We currently only use network information to train network embeddings as user-level features.
For future work, we would like to explore ways to combine graph-level classification methods with our user-level learning model. Propagating features from connected friends would provide much more information than just using network embedding vectors. Besides, our model assumes that all posts of a user come from one single home location and ignores dynamic movement patterns such as traveling. We plan to incorporate temporal states to capture location changes in future work. \section{Method} There are seven features we want to utilize in our model --- tweet text, personal description, profile location, name, user language, time zone, and the mention network. The first four features are text fields where users can write anything they want. User language and time zone are two categorical features that are selected by users in their profiles. Following previous work \cite{rahimi2018semi}, we construct the mention network directly from mentions in tweets, which is also less expensive to collect than the following network\footnote{https://developer.twitter.com}. \begin{figure*}[!h] \centering \includegraphics[width=0.8\textwidth]{./image/arch2.png} \caption{The architecture of our hierarchical location prediction neural network.} \label{arch} \end{figure*} The overall architecture of our hierarchical location prediction model is shown in Figure \ref{arch}. It first maps the four text features into a word embedding space. A bidirectional LSTM (Bi-LSTM) neural network \cite{hochreiter1997long} is used to extract location-specific features from these text embedding vectors. Following the Bi-LSTM, we use a word-level attention layer to generate representation vectors for these text fields. Combining all the text representations, a user language embedding, a time zone embedding, and a network embedding, we apply several layers of transformer encoders \cite{vaswani2017attention} to learn the correlation among all the feature fields.
The probability for each country is computed after a field-level attention layer. Finally, we use the country probability as a constraint for the city-level location prediction. We elaborate on the details of our model in the following sections. \subsection{Word Embedding} Assume one user has $T$ tweets; then there are $T+3$ text fields for this user, including the personal description, profile location, and name. We first map each word in these $T+3$ text fields into a low dimensional embedding space. The embedding vector for word $w$ is computed as $x_w = [E(w),CNN_c(w)]$, where $[,]$ denotes vector concatenation. $E(w)$ is the word-level embedding retrieved directly from an embedding matrix $E\in R^{V\times D}$ by a lookup operation, where $V$ is the vocabulary size and $D$ is the word-level embedding dimension. $CNN_c(w)$ is a character-level word embedding generated by a character-level convolutional layer. Using character-level word embeddings is helpful for dealing with out-of-vocabulary tokens and for overcoming the noisy nature of tweet text. The character-level word embedding generation process is as follows. For a character $c_i$ in the word $w=(c_1,...,c_k)$, we map it into a character embedding space and get a vector $v_{c_i}\in R^{d}$. In the convolutional layer, each filter $u \in R^{l_c \times d}$ generates a feature vector $\boldsymbol{ \theta }=[\theta_1,\theta_2,...,\theta_{k-l_c+1}]\in R^{k-l_c+1}$, where $\theta_i=\textit{relu}(u \circ v_{c_i:c_{i+l_c-1}}+b)$. $b$ is a bias term, and ``$\circ$'' denotes the element-wise inner product between $u$ and the character window $v_{c_i:c_{i+l_c-1}}\in R^{l_c\times d}$. After this convolutional operation, we use a max-pooling operation to select the most representative feature $\hat \theta = max(\boldsymbol{ \theta })$. With $D$ such filters, we get the character-level word embedding $CNN_c(w)\in R^D$.
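To make the convolution-and-max-pool step concrete, here is a minimal pure-Python sketch of $CNN_c(w)$; the dimensions are shrunk for readability, and the random character embeddings and filters stand in for trained parameters:

```python
import random

def relu(x):
    return max(0.0, x)

def char_cnn_embedding(word, char_emb, filters, l_c):
    """CNN_c(w): one convolution + max-pool feature per filter."""
    chars = [char_emb[c] for c in word]     # v_{c_1}, ..., v_{c_k}
    k, d = len(chars), len(chars[0])
    out = []
    for u, b in filters:                    # u: l_c x d filter, b: scalar bias
        feats = []
        for i in range(k - l_c + 1):        # slide over character windows
            window = chars[i:i + l_c]
            s = sum(u[r][c] * window[r][c] for r in range(l_c) for c in range(d))
            feats.append(relu(s + b))       # theta_i = relu(u o window + b)
        out.append(max(feats))              # max-pooling: theta_hat
    return out                              # one feature per filter

random.seed(0)
d, l_c, n_filters = 4, 3, 5                 # toy sizes (the model uses d = 50, D filters)
char_emb = {c: [random.uniform(-1, 1) for _ in range(d)] for c in "geolocation"}
filters = [([[random.uniform(-0.25, 0.25) for _ in range(d)] for _ in range(l_c)], 0.0)
           for _ in range(n_filters)]
vec = char_cnn_embedding("location", char_emb, filters, l_c)
```

With $D$ filters instead of 5, `vec` corresponds to the $D$-dimensional $CNN_c(w)$ above.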
\subsection{Text Representation} After the word embedding layer, every word in these $T+3$ texts is transformed into a $2D$-dimensional vector. Given a text with word sequence $(w_1,...,w_N)$, we get a word embedding matrix $X\in R^{N\times 2D}$ from the embedding layer. We then apply a Bi-LSTM neural network to extract high-level semantic representations from the text embedding matrices. At every time step $i$, a forward LSTM takes the word embedding $x_i$ of word $w_i$ and the previous state $\overrightarrow {h_{i-1}}$ as inputs, and generates the current hidden state $\overrightarrow h_i$. A backward LSTM reads the text from $w_N$ to $w_1$ and generates another state sequence. The hidden state $h_i \in R^{2D}$ for word $w_i$ is the concatenation of $\overrightarrow {h_i}$ and $\overleftarrow {h_i}$. Concatenating all the hidden states, we get a semantic matrix $H\in R^{N\times 2D}$: \begin{equation*} \begin{aligned} & \overrightarrow {h_i}=\overrightarrow{LSTM}(x_i,\overrightarrow {h_{i-1}})\\ &\overleftarrow {h_i}=\overleftarrow{LSTM}(x_i,\overleftarrow {h_{i+1}}) \\ \end{aligned} \end{equation*} Because not all words in a text contribute equally towards location prediction, we further use a multi-head attention layer \cite{vaswani2017attention} to generate a representation vector $f\in R^{2D}$ for each text. There are $h$ attention heads that allow the model to attend to important information from different representation subspaces. Each head computes a text representation as a weighted average of the word hidden states. The computation steps in a multi-head attention layer are as follows.
\begin{equation*} \begin{aligned} & f = \operatorname{MultiHead}(q,H)=[\operatorname{head_1},...,\operatorname{head_h}]W^O \\ & \operatorname{head_i}(q,H) = \operatorname{softmax}(\frac{qW_i^Q\cdot (HW_i^K)^T}{\sqrt{d_k}})HW_i^V \end{aligned} \end{equation*} where $q\in R^{2D}$ is an attention context vector learned during training, $W_i^Q,W_i^K,W_i^V\in R^{2D\times d_k}$ and $W^O \in R^{2D\times 2D}$ are projection parameters, and $d_k=2D/h$. An attention head $head_i$ first projects the attention context $q$ and the semantic matrix $H$ into query and key subspaces by $W_i^Q$ and $W_i^K$ respectively. The matrix product between the query $qW_i^Q$ and the key $HW_i^K$, after softmax normalization, gives attention weights that indicate important words among the projected value vectors $HW_i^V$. Concatenating the $h$ heads and projecting by $W^O$, we get one representation vector $f\in R^{2D}$ for each text field. \subsection{Feature Fusion} For the two categorical features, we assign an embedding vector of dimension $2D$ to each time zone and language. These embedding vectors are learned during training. We pretrain network embeddings for users involved in the mention network using LINE \cite{tang2015line}. Network embeddings are fixed during training. We get a feature matrix $F\in R^{(T+6)\times 2D}$ by concatenating the text representations of the $T+3$ text fields, the two embedding vectors of the categorical features, and one network embedding vector. We further use several layers of transformer encoders \cite{vaswani2017attention} to learn the correlation between different feature fields. Each layer consists of a multi-head self-attention network and a feed-forward network (FFN). One transformer encoder layer first uses the input features to attend to important information within themselves via a multi-head self-attention sub-layer. Then a linear transformation sub-layer $FFN$ is applied to each position identically.
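To make the pooling role of an attention head concrete, here is a toy single-head version in plain Python with identity projections (so the learned $W_i^Q, W_i^K, W_i^V, W^O$ are dropped); the numbers are illustrative only:

```python
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention_pool(q, H, d_k):
    """One attention head with identity projections: weights are
    softmax(q . h_i / sqrt(d_k)); output is the weighted average of rows of H."""
    scores = [sum(qi * hi for qi, hi in zip(q, h)) / math.sqrt(d_k) for h in H]
    w = softmax(scores)
    dim = len(H[0])
    return [sum(w[i] * H[i][j] for i in range(len(H))) for j in range(dim)]

# Toy example: 3 word hidden states of dimension 4 and one context vector q
H = [[1.0, 0.0, 0.0, 0.0],
     [0.0, 1.0, 0.0, 0.0],
     [0.0, 0.0, 1.0, 1.0]]
q = [0.0, 0.0, 2.0, 2.0]   # q is aligned with the third word's state
f = attention_pool(q, H, d_k=4)
# f is dominated by the third row, i.e., the word the context vector attends to
```

The full layer runs $h$ such heads in projected subspaces and concatenates their outputs.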
Similar to \citet{vaswani2017attention}, we employ a residual connection \cite{he2016deep} and layer normalization \cite{ba2016layer} around each of the two sub-layers. The output $F_1$ of the first transformer encoder layer is generated as follows. \begin{equation*} \begin{aligned} & F' = \operatorname{LayerNorm}(\operatorname{MultiHead}(F,F)+F) \\ &F_1 = \operatorname{LayerNorm}(\operatorname{FFN}(F')+F')\\ \end{aligned} \end{equation*} where $\operatorname{FFN}(F')=\operatorname{max}(0,F'W_1+b_1)W_2+b_2$, $W_1\in R^{2D\times D_{ff}}$, and $W_2\in R^{D_{ff}\times 2D}$. Since there is no position information in the transformer encoder layer, our model cannot distinguish between different types of features, e.g., tweet text and personal description. To overcome this issue, we add feature type embeddings to the input representations $F$. There are seven features in total. Each of them has a learned feature type embedding with dimension $2D$, so that one feature type embedding and the representation of the corresponding feature can be summed. Because the input and the output of a transformer encoder layer have the same dimension, we stack $L$ layers of transformer encoders to learn representations for country-level prediction and city-level prediction respectively. These two sets of encoders share the same input $F$, but generate different representations $F_{co}^L$ and $F_{ci}^L$ for country and city predictions. The final classification features for country-level and city-level location predictions are the row-wise weighted averages of $F_{co}^L$ and $F_{ci}^L$. Similar to the word-level attention, we use a field-level multi-head attention layer to select important features from the $T+6$ vectors and fuse them into a single vector. \begin{equation*} \begin{aligned} F_{co} = \operatorname{MultiHead}(q_{co},F_{co}^L)\\ F_{ci} = \operatorname{MultiHead}(q_{ci},F_{ci}^L) \end{aligned} \end{equation*} where $q_{co}, q_{ci}\in R^{2D}$ are two attention context vectors.
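The sub-layer wiring (residual connection followed by layer normalization, for both the attention and FFN sub-layers) can be sketched as follows; the `self_attn` and `ffn` callables are trivial stand-ins rather than the model's trained sub-layers, and the learned LayerNorm gain and bias are omitted:

```python
import math

def layer_norm(x, eps=1e-6):
    """Normalize a vector to zero mean and unit variance (learned gain/bias omitted)."""
    mu = sum(x) / len(x)
    var = sum((v - mu) ** 2 for v in x) / len(x)
    return [(v - mu) / math.sqrt(var + eps) for v in x]

def encoder_layer(x, self_attn, ffn):
    """Sub-layer order used above: LayerNorm(sublayer(x) + x), applied twice."""
    h = layer_norm([a + b for a, b in zip(self_attn(x), x)])  # attention sub-layer
    return layer_norm([a + b for a, b in zip(ffn(h), h)])     # FFN sub-layer

# Trivial stand-in sub-layers, just to show the wiring
out = encoder_layer([1.0, 2.0, 3.0, 4.0],
                    self_attn=lambda v: v,                    # placeholder attention
                    ffn=lambda v: [max(0.0, u) for u in v])   # ReLU-style placeholder
```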
\begin{table*}[!h] \centering \resizebox{0.8\textwidth}{!}{ \begin{tabular}{lccccccccc} \hline \hline \multirow{2}{*}{} & \multicolumn{3}{c}{Twitter-US} & \multicolumn{3}{c}{Twitter-World} & \multicolumn{3}{c}{WNUT} \\ \cline{2-10} & Train & Dev. & Test & Train & Dev. & Test & Train & Dev. & Test \\ \hline \# users & 429K & 10K & 10K & 1.37M & 10K & 10K & 742K & 7.46K & 10K \\ \hline \begin{tabular}[c]{@{}l@{}}\# users \\ with meta\end{tabular} & 228K & 5.32K & 5.34K & 917K & 6.50K & 6.48K & 742K & 7.46K & 10K \\ \hline \# tweets & 36.4M & 861K & 831K & 11.2M & 488K & 315K & 8.97M & 90.3K & 99.7K \\ \hline \begin{tabular}[c]{@{}l@{}} \# tweets \\ per user\end{tabular} & 84.60 & 86.14 & 83.12 & 8.16 & 48.83 & 31.59 & 12.09 & 12.10 & 9.97 \\ \hline \hline \end{tabular} } \caption{A brief summary of our datasets. For each dataset, we report the number of users, number of users with metadata, number of tweets, and average number of tweets per user. We collected metadata for 53\% and 67\% of users in Twitter-US and Twitter-World. Time zone information was not available when we collected metadata for these two datasets. About 25\% of training and development users' data was inaccessible when we collected WNUT in 2017.} \label{data} \end{table*} \subsection{Hierarchical Location Prediction} The final probability for each country is computed by a softmax function \begin{equation*} \begin{aligned} P_{co} = \operatorname{softmax}(W_{co}F_{co}+b_{co}) \end{aligned} \end{equation*} where $W_{co} \in R^{M_{co}\times 2D}$ is a linear projection parameter, $b_{co}\in R^{M_{co}}$ is a bias term, and $M_{co}$ is the number of countries. 
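To illustrate how this country distribution acts as a soft constraint on the city scores through the country-city matrix $Bias$ (defined in the next step), consider this toy sketch; the logits, the matrix, and $\lambda$ are made up for illustration:

```python
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

# Toy setup: 2 countries, 3 cities; cities 0 and 1 lie in country 0, city 2 in country 1.
# Bias[i][j] = 0 if city j belongs to country i, otherwise -1.
bias = [[0, 0, -1],
        [-1, -1, 0]]

p_co = softmax([2.2, 0.0])      # country-level prediction, confident about country 0
lam = 5.0                       # penalty term (learned during training in the model)
city_logits = [1.0, 1.0, 1.0]   # stand-in for W_ci F_ci + b_ci

# lam * (P_co @ Bias) penalizes cities outside the likely countries
penalty = [sum(p_co[i] * bias[i][j] for i in range(2)) for j in range(3)]
p_ci = softmax([city_logits[j] + lam * penalty[j] for j in range(3)])
# p_ci puts almost no mass on city 2, the city in the unlikely country
```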
After we get the probability for each country, we further use it to constrain the city-level prediction \begin{equation*} \begin{aligned} P_{ci} = &\operatorname{softmax}(W_{ci}F_{ci}+b_{ci} + \lambda P_{co} Bias) \end{aligned} \end{equation*} where $W_{ci} \in R^{M_{ci}\times 2D}$ is a linear projection parameter, $b_{ci}\in R^{M_{ci}}$ is a bias term, and $M_{ci}$ is the number of cities. $Bias\in R^{M_{co}\times M_{ci}}$ is the country-city correlation matrix. If city $j$ belongs to country $i$, then $Bias_{ij}$ is $0$, otherwise $-1$. $\lambda$ is a penalty term learned during training. The larger $\lambda$ is, the stronger the country constraint. In practice, we also experimented with letting the model learn the country-city correlation matrix during training, which yielded similar performance. We minimize the sum of two cross-entropy losses for country-level prediction and city-level prediction. \begin{equation*} \begin{aligned} loss = -( Y_{ci}\cdot log P_{ci} + \alpha Y_{co}\cdot log P_{co}) \end{aligned} \end{equation*} where $Y_{ci}$ and $Y_{co}$ are one-hot encodings of the city and country labels. $\alpha$ is a weight controlling the importance of the country-level supervision signal. Since a large $\alpha$ would potentially interfere with the training of the city-level prediction, we simply set it to $1$ in our experiments. Tuning this parameter on each dataset may further improve the performance. \section*{Acknowledgments} This work was supported in part by the Office of Naval Research (ONR) N0001418SB001 and N000141812108. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the ONR.
\section{Introduction} Recently, there has been considerable interest in studying correlations between separated systems in `no signaling' theories, which include subquantum (e.g., classical), quantum, and superquantum theories. The primary foundational aim is to characterize quantum mechanics, i.e., to identify physical principles that distinguish quantum mechanics from other theories that satisfy a `no signaling' principle (see below). From an information-theoretic standpoint, a classical state space has the structure of a simplex. An $n$-simplex is a particular sort of convex set: a convex polytope generated by $n+1$ vertices that are not confined to any $(n-1)$-dimensional subspace (e.g., a triangle as opposed to a rectangle). The simplest classical state space is the 1-bit space (1-simplex), consisting of two pure or extremal deterministic states, $\mathbf{0} = \left ( \begin{array}{c}1 \\ 0 \end{array} \right )$ and $\mathbf{1} = \left ( \begin{array}{c}0 \\ 1 \end{array} \right )$, represented by the vertices of the simplex, with mixtures---convex combinations of pure states---represented by the line segment between the two vertices: $\mathbf{p} = p\,\mathbf{0} + (1-p)\,\mathbf{1}$, for $0 \leq p \leq 1$. A simplex has the rather special property that a mixed state can be represented in one and only one way as a mixture of pure states, the vertices of the simplex. No other state space has this feature: if the state space is not a simplex, the representation of mixed states as convex combinations of pure states is not unique. The state space of classical mechanics is an infinite-dimensional simplex, where the pure states are all deterministic states, with enough structure to support transformations acting on the vertices that include the canonical transformations generated by Hamiltonians. The simplest quantum system is the qubit, whose state space as a convex set has the structure of a sphere (the Bloch sphere), which is not a simplex. 
The non-unique decomposition of mixtures into pure states underlies the impossibility of a universal cloning operation for pure states in nonclassical theories or, more generally, the impossibility of a universal broadcasting operation for an arbitrary set of states, and the monogamy of nonclassical correlations, which are generic features of non-simplex theories \cite{Masanes2006}. The space of `no signaling' probability distributions is a convex polytope that is not a simplex (see \cite{JonesMasanes2005}, \cite{BLMPP2005}, \cite{BarrettPironio2005}). Some of the vertices of this polytope are non-deterministic Popescu-Rohrlich (PR) boxes \cite{PopescuRohrlich94}, or generalizations of PR-boxes. A PR-box is a hypothetical device or nonlocal information channel that is more nonlocal than quantum mechanics, in the sense that the correlations between outputs of the box for given inputs violate the Tsirelson bound \cite{Tsirelson1980}. A PR-box is defined as follows: there are two inputs, $x \in \{0,1\}$ and $y\in \{0,1\}$, and two outputs, $a\in \{0,1\}$ and $b\in \{0,1\}$. The box is bipartite and nonlocal in the sense that the $x$-input and $a$-output can be separated from the $y$-input and $b$-output by any distance without altering the correlations. For convenience, we can think of the $x$-input as controlled by Alice, who monitors the $a$-output, and the $y$-input as controlled by Bob, who monitors the $b$-output.
Alice's and Bob's inputs and outputs are then required to be correlated according to: \begin{equation} a\oplus b = x.y \label{eqn:PRbox} \end{equation} where $\oplus$ is addition mod 2, i.e., \begin{itemize} \item[(i)] same outputs (i.e., 00 or 11) if the inputs are 00 or 01 or 10 \item[(ii)] different outputs (i.e., 01 or 10) if the inputs are 11 \end{itemize} The `no signaling' condition is a requirement on the marginal probabilities: the marginal probability of Alice's outputs do not depend on Bob's input, i.e., Alice cannot tell what Bob's input was by looking at the statistics of her outputs, and conversely. Formally: \begin{eqnarray} \sum_{b\in\{0,1\}}p(a,b|x,y) = p(a|x), \, a, x, y \in\{0,1\} \\ \sum_{a\in\{0,1\}}p(a,b|x,y) = p(b|y), \, b, x, y \in\{0,1\} \end{eqnarray} The correlations (\ref{eqn:PRbox}) together with the `no signaling' condition entail that the marginals are equal to 1/2 for all inputs $x, y\in\{0,1\}$ and all outputs $a,b\in\{0,1\}$: \begin{equation} p(a=0|x) = p(a=1|x) = p(b=0|y) = p(b=1|y) = 1/2 \end{equation} A PR-box can be defined equivalently in terms of the joint probabilities for all inputs and all outputs, as in Table 1. For bipartite probability distributions, with two input values and two output values, the vertices of the `no signaling' polytope are all PR-boxes (differing only with respect to permutations of the input values and/or output values) or deterministic boxes. \begin{table}[h!] 
\begin{center} \begin{tabular}{|ll||ll|ll|} \hline &$x$&$0$ & &$1$&\\ $y$&&&&&\\\hline\hline $0$ &&$p(00|00) = 1/2$&$ p(10|00) = 0$ & $p(00|10) = 1/2$&$ p(10|10) = 0$ \\ &&$p(01|00) = 0$&$p(11|00) = 1/2$ & $p(01|10)=0$&$ p(11|10) = 1/2$ \\\hline $1$&&$p(00|01)=1/2$&$ p(10|01)=0$ & $p(00|11)=0$&$ p(10|11)=1/2$ \\ &&$p(01|01)=0$&$ p(11|01)=1/2$ & $p(01|11)=1/2$&$ p(11|11)=0$ \\\hline \end{tabular} \end{center} \caption{Joint probabilities for the PR-box} \end{table} Consider the problem of simulating a PR-box: how close can Alice and Bob come to simulating the correlations of a PR-box for random inputs if they are limited to certain resources? In units where $a = \pm 1, b = \pm 1$, \begin{equation} \langle 00\rangle = p(\mbox{same output}|00) - p(\mbox{different output}|00) \end{equation} so: \begin{eqnarray} p(\mbox{same output}|00) & = & \frac{1 + \langle 00\rangle}{2} \\ p(\mbox{different output}|00) & = & \frac{1-\langle 00\rangle}{2} \end{eqnarray} and similarly for input pairs 01, 10, 11. It follows that the probability of a successful simulation is given by: \begin{eqnarray} \mbox{prob(successful sim)} & = & \frac{1}{4}(p(\mbox{same output}|00) + p(\mbox{same output}|01) + \nonumber \\ & & p(\mbox{same output}|10) + p(\mbox{different output}|11)) \\ & = & \frac{K}{8} + \frac{1}{2} \end{eqnarray} where $K = \langle 00\rangle + \langle 01\rangle + \langle 10\rangle - \langle 11\rangle$ is the Clauser-Horne-Shimony-Holt (CHSH) correlation. Bell's locality argument \cite{BellEPR} in the CHSH version \cite{CHSH} shows that if Alice and Bob are limited to classical resources, i.e., if they are required to reproduce the correlations on the basis of shared randomness or common causes established before they separate (after which no communication is allowed), then $K_{C} \leq 2$, so the optimal probability of success is 3/4.
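The entries of Table 1 are exactly the distributions that place probability $1/2$ on each output pair satisfying $a \oplus b = x\cdot y$. A short sanity check (in Python, purely for illustration) confirms normalization and the `no signaling' marginal conditions:

```python
from itertools import product

def pr_box(a, b, x, y):
    """Joint probability p(a, b | x, y) of the PR-box (Table 1):
    probability 1/2 on each output pair with a XOR b = x AND y, else 0."""
    return 0.5 if (a ^ b) == (x & y) else 0.0

for x, y in product((0, 1), repeat=2):
    # Each conditional distribution is normalized
    total = sum(pr_box(a, b, x, y) for a in (0, 1) for b in (0, 1))
    assert abs(total - 1.0) < 1e-12
    # `No signaling': Alice's marginal is independent of Bob's input (and vice versa)
    for a in (0, 1):
        assert sum(pr_box(a, b, x, y) for b in (0, 1)) == 0.5
    for b in (0, 1):
        assert sum(pr_box(a, b, x, y) for a in (0, 1)) == 0.5
```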
If Alice and Bob are allowed to base their strategy on shared entangled states prepared before they separate, then the Tsirelson inequality requires that $K_{Q} \leq 2\sqrt{2}$, so the optimal probability of success limited by quantum resources is approximately .85. For the PR-box, $K = 4$, so the probability of success is, of course, 1. It is easy to show that the correlations of a PR-box are monogamous and that the pure states, defined as above, cannot be cloned \cite{Masanes2006}. In a recent paper \cite{SkrzypczykBrunnerPopescu2008}, the authors introduce a dynamics for PR-boxes and show that the Tsirelson bound defines the limit of nonlocality swapping for noisy PR-boxes. This is a remarkable result about the nonlocality of nonclassical `no signaling' theories. Before Schr\"{o}dinger \cite[p. 555]{Schr1} characterized nonlocal entanglement as `\textit{the} characteristic trait of quantum mechanics, the one that enforces its entire departure from classical lines of thought,' another feature of quantum mechanics, emphasized by Bohr, was generally regarded as the distinguishing feature of quantum systems: the apparent dependence of measured values on the local experimental context. This contextuality is exhibited in various ways---noncommutativity, the uncertainty principle, the impossibility of assigning values to all observables of a quantum system simultaneously, while requiring the functional relationships between observables to hold for the corresponding values (so, e.g., the value assigned to the square of an observable should be the square of the value assigned to the observable)---but for our purposes here the relevant result is the theorem by Kochen and Specker \cite{KochenSpecker}.
Kochen and Specker identified a finite noncommuting set of 1-dimensional projection operators on a 3-dimensional Hilbert space, in which an individual projection operator can belong to different orthogonal triples of projection operators representing different bases or contexts, such that no assignment of 0 and 1 values to the projection operators is possible that is both (i) \emph{noncontextual} (i.e., each projection operator is assigned one and only one value, independent of context), and (ii) \emph{respects the orthogonality relations }(so that the assignment of the value 1 to a 1-dimensional projection operator $P$ requires the assignment of 0 to any projection operator orthogonal to $P$). A quantum system associated with a 3-dimensional Hilbert space is only required to produce a value for an observable represented by a 1-dimensional projection operator $P$ with respect to the context defined by $P$ and its orthogonal complement $P^{\perp}$ in a nonmaximal measurement, or with respect to a context defined by a particular orthogonal triple of projection operators in a maximal measurement. Unlike the situation in classical mechanics, different maximal measurement contexts for a quantum system are exclusive, or `incompatible' in Bohr's terminology: they cannot all be embedded into one context. In this sense, measurement in quantum mechanics is contextual, and the distribution of measurement outcomes for a quantum state cannot be simulated by a noncontextual assignment of values to all observables, or even to certain finite sets of observables, by the Kochen-Specker theorem. Note that the Kochen-Specker result does not justify the claim that the outcome of a measurement of an observable would have been different if the observable had been measured with respect to a different context. 
This is a counterfactual statement concerning an unperformed measurement, and---as Asher Peres was fond of repeating---unperformed measurements have no results: there is, in principle, no way to check this claim. Note also that the contextuality of individual measurement outcomes is masked by the statistics, which is noncontextual: the probability that a measurement of an observable corresponding to a projection operator $P$ yields the value 1 in a quantum state $\ket{\psi}$ is the same, irrespective of the measurement context, i.e., irrespective of what other projection operators are measured together with $P$ in the state $\ket{\psi}$. Similarly, the effect of nonlocality in quantum mechanics is not directly represented in the statistics: there is no violation of the `no signaling' principle---Alice's statistics is unaffected by Bob's measurements. Locality in Bell's sense is a probabilistic noncontextuality constraint with respect to \emph{remote} contexts. Specifically, in terms of the inputs and outputs of a PR-box, locality is the requirement that (I) the probability of a given output $a$ for input $x$, \emph{conditional on a shared random variable for the two inputs $x$ and $y$,} is independent of the remote $y$-context, and also (II) independent of the outcome $b$ for a given remote $y$-context (and conversely). Note that (I) is not the same as the `no signaling' condition, which is (I) without the qualification `conditional on a shared random variable.' That is, the `no signaling' condition refers to `surface probabilities,' while the condition (I) refers to `hidden probabilities' (to use a terminology due to van Fraassen \cite{Fraassen1982}).
Shimony \cite{Shimony2006} calls conditions (I) and (II) `parameter independence' and `outcome independence,' respectively.\footnote{This formulation of Bell's locality condition as the conjunction of two independent conditions was first proposed by Jarrett \cite{Jarrett1984}, who called (I) `locality' and (II) `completeness.'} In the following, we define a family of bipartite boxes with $n$ possible input values instead of two, i.e., $x \in \{1, \ldots, n\}, y \in \{1, \ldots, n\}$, and binary outputs $a \in \{0,1\}, b \in \{0,1\}$, which allows the consideration of a range of nonlocal contexts defined by pairs of inputs to the box. In a quantum simulation based on a strategy that exploits shared entangled states to reproduce the correlations, the inputs are associated with measurements of specific observables, and the nonlocal box contexts can be associated with different local measurement contexts that share a common element corresponding to an input value. The correlational constraints are motivated by a version of the Kochen-Specker theorem due to Klyachko \cite{Klyachko2002,Klyachko2007}, so we call such boxes Kochen-Specker-Klyachko boxes or, briefly, KS-boxes. The family of KS-boxes is parametrized by the marginal probability $p$ for the output 1, where $0 \leq p \leq 1/2$. The marginals cover a range of cases, from those that can be simulated classically to the superquantum correlations that saturate the Clauser-Horne-Shimony-Holt inequality, when the KS-box is a generalization of a PR-box and hence a vertex of the `no signaling' polytope. For certain marginal probabilities, a KS-box can display correlations that are no more nonlocal than classical correlations, as measured by the CHSH correlation, even though a perfect simulation of the correlations for all inputs with classical or quantum resources is impossible. We sketch Klyachko's version of the Kochen-Specker theorem in \S 2. 
The defining correlations of the KS-box are set out in \S 3, where we consider the issue of simulating a KS-box with classical or quantum resources. In \S 4, we consider simulating a PR-box with a KS-box and show that, for a marginal probability $p = 1/3$, a KS-box is no better than shared randomness as a resource in simulating the correlations of a PR-box, even though the KS-box cannot be perfectly simulated by classical or quantum resources for all inputs. In \S 5, we drop the marginal constraint and consider the behavior of a KS-box for all marginal probabilities $0 \leq p \leq 1/2$. We conclude in \S 6 with some remarks commenting on the significance of these results for contextuality and nonlocality in `no signaling' theories. \section{Klyachko's Version of the Kochen-Specker Theorem} Consider a unit sphere and imagine a circle $\Sigma_{1}$ on the equator of the sphere with an inscribed pentagon and pentagram, with the vertices of the pentagram labelled in order 1, 2, 3, 4, 5 (see Fig. 1).\footnote{The following formulation of Klyachko's proof owes much to a discussion with Ben Toner and differs from the analysis in \cite{Klyachko2002,Klyachko2007}.} Note that the angle subtended at the center $O$ by adjacent vertices of the pentagram defining an edge (e.g., 1 and 2) is $\theta = 4\pi/5$, which is greater than $\pi/2$. It follows that if the radii linking O to the vertices are pulled upwards towards the north pole of the sphere, the circle with the inscribed pentagon and pentagram will move up on the sphere towards the north pole. Since $\theta = 0$ when the radii point to the north pole (and the circle vanishes), $\theta$ must pass through $\pi/2$ before the radii point to the north pole, which means that it is possible to draw a circle $\Sigma_{2}$ with an inscribed pentagon and pentagram on the sphere at some point between the equator and the north pole, \emph{such that the angle subtended at $O$ by an edge of the pentagram is $\pi/2$}. 
We label the centre of this circle $P$ (see Fig. 2; note that the line OP is orthogonal to the circle $\Sigma_{2}$ and is not in the plane of the pentagram). \begin{figure}[!ht] \begin{picture}(300,180)(-65,0) \begin{tikzpicture} \tikzstyle vertex=[circle,draw,fill=black,inner sep=1pt] \path (0,0) coordinate (O); \path (3*72+18:3cm) coordinate (P1); \path (18:3cm) coordinate (P2); \path (2*72+18:3cm) coordinate (P3); \path (4*72+18:3cm) coordinate (P4); \path (72+18:3cm) coordinate (P5); \path (3*72+18:.4cm) coordinate (Q); \node at (.25,.3) {O}; \node at (-20:.3cm) {$\theta$}; \path (3*72+18:3.5cm) node {$1$}; \path (18:3.5cm) node {$2$}; \path (2*72+18:3.5cm) node {$3$}; \path (4*72+18:3.5cm) node {$4$}; \path (72+18:3.5cm) node {$5$}; \draw[very thick] (P1) -- (P2) -- (P3) -- (P4) -- (P5) -- cycle; \draw (P1) -- (P4) (P4) -- (P2) (P2) -- (P5) (P5) -- (P3) (P3) -- (P1); \draw (P1) -- (O) (O) -- (P2); \draw (Q) arc (-125:10:.5cm); \draw (0,0) circle (3cm); \node[vertex] at (0,0) {}; \node[vertex] at (P1) {}; \node[vertex] at (P2) {}; \node[vertex] at (P3) {}; \node[vertex] at (P4) {}; \node[vertex] at (P5) {}; \end{tikzpicture} \label{fig1} \end{picture} \caption{Circle $\Sigma_{1}$ with inscribed pentagram} \end{figure} One can therefore define five orthogonal triples of vectors, i.e., five bases in a 3-dimensional Hilbert space $\hil{H}_{3}$, representing five different measurement contexts: \[ \begin{array}{lll} \ket{1},&\ket{2},&\ket{v} \\ \ket{2},& \ket{3}, & \ket{w} \\ \ket{3}, & \ket{4}, & \ket{x} \\ \ket{4}, & \ket{5}, & \ket{y} \\ \ket{5}, & \ket{1}, & \ket{z} \end{array} \] Here $\ket{v}$ is orthogonal to $\ket{1}$ and $\ket{2}$, etc. Note that each vector $\ket{1},\ket{2}, \ket{3}, \ket{4}, \ket{5}$ belongs to two different contexts. The vectors $\ket{v}, \ket{w}, \ket{x}, \ket{y}, \ket{z}$ play no role in the following analysis, and we can take a context as defined by an edge of the pentagram in the circle $\Sigma_{2}$.
Consider, now, assigning 0's and 1's to all the vertices of the pentagram in $\Sigma_{2}$ noncontextually (i.e., each vertex is assigned a value independently of the edge to which it belongs), in such a way as to satisfy the orthogonality constraint that at most one 1 can be assigned to the two vertices of an edge. It is obvious by inspection that the orthogonality constraint can be satisfied noncontextually by assignments of zero 1's, one 1, or two 1's (but not by three 1's, four 1's, or five 1's). Call such assignments `charts.' We say that the constraints can be satisfied by charts of type $C_{0}, C_{1}, C_{2}$. It follows that for such charts, where $v(i)$ is the value assigned to the vertex $i$: \begin{equation} \sum_{i=1}^{5}v(i) \leq 2 \end{equation} If we label the possible charts with a hidden variable $\lambda \in \Lambda$, and average over $\Lambda$, then the probability of a vertex being assigned the value 1 is given by: \begin{equation} p(v(i) =1) = \sum_{\Lambda} v(i|\lambda)p(\lambda) \end{equation} so: \begin{eqnarray} \sum_{i=1}^{5}p(v(i)=1) & = & \sum_{i=1}^{5}\sum_{\Lambda} v(i|\lambda)p(\lambda)\nonumber \\ & = & \sum_{\Lambda} (\sum_{i=1}^{5} v(i|\lambda))p(\lambda) \leq 2 \label{Klyachko} \end{eqnarray} We have shown that the sum of the probabilities assigned to the vertices of the pentagram on the circle $\Sigma_{2}$ must be less than or equal to 2, if the selection of a vertex (denoted by the assignment of 1) is made noncontextually in such a way as to satisfy the orthogonality constraint. Note that this Klyachko inequality follows without any assumption about the relative weighting of the charts. 
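The chart-counting step can be verified by brute force. The following Python sketch (illustrative, with the five vertices indexed 0--4 and pentagram edges joining the orthogonal pairs) enumerates all noncontextual assignments respecting the orthogonality constraint:

```python
from itertools import product

# Pentagram edges: the orthogonal pairs (1,2), (2,3), (3,4), (4,5), (5,1),
# here 0-indexed. At most one vertex of each edge may be assigned a 1.
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]

valid_sums = set()
for chart in product((0, 1), repeat=5):
    if all(chart[i] + chart[j] <= 1 for i, j in edges):
        valid_sums.add(sum(chart))

print(sorted(valid_sums))  # [0, 1, 2]: only charts C_0, C_1, C_2 are admissible
```

Since every admissible chart has at most two 1's, any mixture of charts gives $\sum_{i=1}^{5}p(v(i)=1) \leq 2$, which is the Klyachko inequality above.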
\begin{figure}[!ht] \begin{picture}(300,250)(-65,0) \begin{tikzpicture} \tikzstyle vertex=[circle,draw,fill=black,inner sep=1pt] \path (0,0) coordinate (P); \path (0,-5) coordinate (O); \path (3*72+18:3cm) coordinate (P1); \path (18:3cm) coordinate (P2); \path (2*72+18:3cm) coordinate (P3); \path (4*72+18:3cm) coordinate (P4); \path (72+18:3cm) coordinate (P5); \path (0,-4.2) coordinate (Q); \node at (.25,.3) {P}; \node at (.3,-4.8) {O}; \node at (-.2,-4.4) {$\phi$}; \path (3*72+18:3.5cm) node {$1$}; \path (3*72+10:1.2cm) node {$s$}; \path (18:3.5cm) node {$2$}; \path (25:1.5cm) node {$s$}; \path (-94:2cm) node {$r$}; \path (-40:.65cm) node {$\sqrt{2}$}; \path (2*72+18:3.5cm) node {$3$}; \path (4*72+18:3.5cm) node {$4$}; \path (72+18:3.5cm) node {$5$}; \draw[very thick] (P1) -- (P2) -- (P3) -- (P4) -- (P5) -- cycle; \draw (P1) -- (P4) (P4) -- (P2) (P2) -- (P5) (P5) -- (P3) (P3) -- (P1); \draw (O) -- (P) (O) -- (P1) (P) -- (P2) (P) -- (P1); \draw (Q) arc (90:125:.8cm); \draw (0,0) circle (3cm); \node[vertex] at (0,0) {}; \node[vertex] at (P1) {}; \node[vertex] at (P2) {}; \node[vertex] at (P3) {}; \node[vertex] at (P4) {}; \node[vertex] at (P5) {}; \node[vertex] at (O) {}; \end{tikzpicture} \end{picture} \caption{Circle $\Sigma_{2}$ with inscribed pentagram} \label{fig2} \end{figure} Now consider a quantum system in the state defined by a unit vector that passes through the north pole of the sphere. This vector passes through the point $P$ in the center of the circle $\Sigma_{2}$. Call this state $\ket{\psi}$. A simple geometric argument shows that if probabilities are assigned to the states or 1-dimensional projectors defined by the vertices of the pentagram on $\Sigma_{2}$ by the state $\ket{\psi}$, then the sum of the probabilities is greater than 2! 
To see this, note that the probability assigned to a vertex, say the vertex 1, is: \begin{equation} |\langle 1|\psi\rangle|^{2} = \cos^{2} \phi \end{equation} where $\ket{1}$ is the unit vector defined by the radius from O to the vertex 1. Since the lines from the center $O$ of the sphere to the vertices of an edge of the pentagram on $\Sigma_{2}$ are radii of length 1 subtending a right angle, each edge of the pentagram has length $\sqrt{2}$. The angle subtended at $P$ by the lines joining $P$ to the two vertices of an edge is $4\pi/5$, so the length, $s$, of the line joining $P$ to a vertex of the pentagram is: \begin{equation} s = \frac{1}{\sqrt{2}\cos \frac{\pi}{10}} \end{equation} Now, $\cos \phi = r$, where $r$ is the length of the line $OP$, and $r^{2} + s^{2} = 1$, so: \begin{equation} \cos^{2} \phi = r^{2} = 1-s^{2} = \frac{\cos \frac{\pi}{5}}{1+\cos\frac{\pi}{5}} = 1/\sqrt{5} \end{equation} (because $\cos\pi/5 = \frac{1}{4}(1+\sqrt{5})$), and so: \begin{equation} \sum_{i=1}^{5} p(v(i) =1) = 5 \times 1/\sqrt{5} = \sqrt{5} > 2 \end{equation} \section{The KS-Box} \label{sec:KS box} We define a KS-box as follows: The box has two inputs, $x, y \in\{1, \ldots, n\}$, and two outputs, $a, b \in \{0,1\}$. We call $n$ the dimension of the KS-box. As with a PR-box, we suppose that the $x$-input and $a$-output can be separated by any distance from the $y$-input and $b$-output without affecting the correlations, which are required to be: \begin{enumerate} \item[(i)] if $x = y$, then $a = b$ \item[(ii)] if $x \neq y$, then $a\cdot b = 0$ \end{enumerate} That is, if the inputs are the same, the outputs are the same; if the inputs are different, at least one output is 0 (i.e., both outputs cannot be 1). The marginal probabilities are required to satisfy the `no signaling' constraint. We shall consider KS-boxes with various marginals and show that they have different properties. We call a KS-box with a marginal probability of $p$ for the output 1 a KS$_{p}$-box. 
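Before turning to simulations, the geometric computation behind Klyachko's argument in \S 2 can be checked numerically; an illustrative Python sketch:

```python
import math

# Distance from the centre P of Sigma_2 to a pentagram vertex, for edges
# subtending a right angle at O: s = 1/(sqrt(2) * cos(pi/10)).
s = 1 / (math.sqrt(2) * math.cos(math.pi / 10))

# cos^2(phi) = r^2 = 1 - s^2 is the quantum probability at each vertex.
cos2_phi = 1 - s ** 2

print(abs(cos2_phi - 1 / math.sqrt(5)) < 1e-12)  # True: equals 1/sqrt(5)
print(5 * cos2_phi)                              # ~2.236 = sqrt(5) > 2
```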
In this section, we consider the problem of simulating a 5-dimensional KS-box with classical and quantum resources. For reasons that will become clear below (see the discussion in \S 6), $n=5$ is the smallest number of inputs for which the wider range of nonlocal contexts defined by input pairs $x, y$ precludes a perfect classical or quantum simulation for certain marginal probabilities. We also initially require the marginal constraint $p = 1/3$, but we will eventually drop this constraint and consider the behavior of a KS$_{p}$-box for all marginal probabilities $0 \leq p \leq 1/2$ (while requiring, of course, `no signaling'). For convenience, we shall refer to the condition (ii)---if $x \neq y$ then $a\cdot b=0$---as the `$\perp$' constraint, since it is motivated by the Kochen and Specker orthogonality condition requiring that an assignment of 1's and 0's to the 1-dimensional projection operators of a maximal context defined by a basis in Hilbert space should respect the orthogonality relations. Consider now the problem of simulating a $5$-dimensional KS$_{p}$-box, with $p = 1/3$, with classical resources: to what extent can Alice and Bob simulate the correlations of the KS-box for random inputs if their only allowed resource is shared randomness? The requirement of perfect correlation if the inputs are the same forces local noncontextuality, i.e., Alice and Bob will have to base their strategy on shared charts selected by a shared random variable. The pentagon edges and pentagram edges exhaust all possible input pairs $\{x,y\}$ for a 5-dimensional KS-box. For a marginal probability $p = 1/5$, a perfect classical simulation of a 5-dimensional KS-box can be achieved with shared charts $C_{1}$, in which only a single vertex is assigned a 1.
For marginals $p \leq 1/5$, a perfect classical simulation can be achieved if Alice and Bob mix the strategy for $p = 1/5$ and output 0 simultaneously and randomly for a certain fraction of agreed-upon rounds (i.e., before separation, they generate a random bit string with the appropriate probability of 0's, which they share, and they associate successive rounds of the simulation---successive input pairs---with elements of the string; when the shared bit is 0, they both output 0 independently of the input or, equivalently, they use chart $C_{0}$). In other words, they mix the above strategy with the strategy: `output 0 for any input,' with the appropriate mixture probabilities. Clearly, however, it is impossible to generate a marginal probability $p > 1/5$ without using charts $C_{2}$ as well. To satisfy the marginal constraint $p = 1/3$, Alice and Bob will have to adopt a strategy in which the output for a given input is based on either of the following two mixtures of shared charts, $M_{1}$ or $M_{2}$, selected by a shared random variable: \begin{enumerate} \item[$M_{1}$:] 2/3 $C_{2}$, 1/3 $C_{1}$ \item[$M_{2}$:] 5/6 $C_{2}$, 1/6 $C_{0}$ \end{enumerate} or on mixtures of these two mixtures. We now observe that for charts $C_{2}$, the `$\perp$' constraint can be satisfied either for pentagon edges or for pentagram edges, but not both. See Fig. 3, where a chart $C_{2}$, indicated by the circled 0's and 1's, satisfies the `$\perp$' constraint for the pentagram edges. If the assigned value 1 is moved from the vertex 4 to the vertex 2, for example, the chart satisfies the `$\perp$' constraint for the pentagon edges, but violates the constraint for the pentagram edges. (For charts $C_{3}, C_{4}, C_{5}$, both pentagon edges and pentagram edges violate the `$\perp$' constraint.) 
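That the mixtures $M_{1}$ and $M_{2}$ reproduce the marginal $p = 1/3$ is a short computation; in Python, using exact rationals:

```python
from fractions import Fraction as F

# Probability of outputting 1 on a uniformly random input, for each chart type:
# C_0 assigns no 1's, C_1 one 1, C_2 two 1's, out of five vertices.
marginal = {0: F(0), 1: F(1, 5), 2: F(2, 5)}

M1 = F(2, 3) * marginal[2] + F(1, 3) * marginal[1]  # mixture M_1: 2/3 C_2, 1/3 C_1
M2 = F(5, 6) * marginal[2] + F(1, 6) * marginal[0]  # mixture M_2: 5/6 C_2, 1/6 C_0

print(M1, M2)  # both 1/3, matching the marginal constraint
```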
\begin{figure}[!ht] \begin{picture}(300,240)(-65,0) \begin{tikzpicture} \tikzstyle vertex=[circle,draw,fill=black,inner sep=1pt] \path (0,0) coordinate (O); \path (3*72+18:3cm) coordinate (P1); \path (18:3cm) coordinate (P2); \path (2*72+18:3cm) coordinate (P3); \path (4*72+18:3cm) coordinate (P4); \path (72+18:3cm) coordinate (P5); \node at (.3,.2) {O}; \path (3*72+18:3.5cm) node {$1$}; \path (3*72+18:4.2cm) node[draw,shape=circle] {$1$}; \path (18:3.5cm) node {$2$}; \path (18:4.2cm) node[draw,shape=circle] {$0$}; \path (2*72+18:3.5cm) node {$3$}; \path (2*72+18:4.2cm) node[draw,shape=circle] {$0$}; \path (4*72+18:3.5cm) node {$4$}; \path (4*72+18:4.2cm) node[draw,shape=circle] {$1$}; \path (72+18:3.5cm) node {$5$}; \path (72+18:4.2cm) node[draw,shape=circle] {$0$}; \draw[very thick] (P1) -- (P2) -- (P3) -- (P4) -- (P5) -- cycle; \draw (P1) -- (P4) (P4) -- (P2) (P2) -- (P5) (P5) -- (P3) (P3) -- (P1); \draw (0,0) circle (3cm); \node[vertex] at (0,0) {}; \node[vertex] at (P1) {}; \node[vertex] at (P2) {}; \node[vertex] at (P3) {}; \node[vertex] at (P4) {}; \node[vertex] at (P5) {}; \end{tikzpicture} \end{picture} \caption{Chart $C_{2}$ satisfying `$\perp$' constraint for pentagram edges} \label{fig3} \end{figure} The probability of a successful simulation of the KS-box for random inputs $x = 1, \ldots, 5, y = 1, \ldots, 5$ is: \begin{eqnarray} \mbox{prob(successful sim)} & = & \frac{1}{25}(\sum_{x=y}p(a=b|x,y) \nonumber \\ & + & \sum_{\mbox{\small{p-gram edges}}}p(a\cdot b= 0|x, y) \nonumber \\ & + & \sum_{\mbox{\small{p-gon edges}}}p(a\cdot b = 0|x, y)) \end{eqnarray} So for the two mixtures, $M_{1}$ and $M_{2}$: \begin{eqnarray} \mbox{prob(successful sim)}_{M_{1}} &= &\frac{1}{25} (5 + 10 + 10[1-(\frac{2}{3}\cdot \frac{1}{5} + \frac{1}{3} \cdot 0)]) \nonumber \\ & = & 1 - \frac{1}{25} \cdot \frac{4}{3} \end{eqnarray} \begin{eqnarray} \mbox{prob(successful sim)}_{M_{2}} &= &\frac{1}{25} (5 + 10 + 10[1-(\frac{5}{6}\cdot \frac{1}{5} + \frac{1}{6} \cdot 0)]) \nonumber 
\\ & = & 1 - \frac{1}{25} \cdot \frac{5}{3} \end{eqnarray} Assuming the `$\perp$' constraint is satisfied for the pentagram edges, the first term in the sum refers to the five possible pairs of the same input for Alice and Bob, and the second term refers to the ten possible pairs of inputs corresponding to pentagram edges, where the probability of successful simulation is 1 in both cases. The third term refers to the ten possible pairs of inputs corresponding to pentagon edges, where the probability of failure is 1/5 in the case of $C_{2}$ charts and 0 in the case of $C_{1}$ or $C_{0}$ charts. So the \emph{optimal} probability of a successful simulation with classical resources is: \begin{eqnarray} \mbox{optimal prob(successful sim)}_{C} & = & 1 - \frac{1}{25}\cdot\frac{4}{3} \nonumber \\ & \approx & .94667 \label{eq:optimal classical} \end{eqnarray} We now show that if Alice and Bob are allowed quantum resources, i.e., shared entangled states, they can achieve a greater probability of successful simulation of a 5-dimensional KS$_{p}$-box with $p = 1/3$ than the optimal classical strategy. First note that, analogously to the classical case, a perfect quantum simulation can be achieved for a marginal probability $p= 1/5$ if Alice and Bob initially (before separation) share copies of the maximally entangled state: \begin{equation} \frac{1}{\sqrt{5}} \sum_{i=1}^{5}\ket{i}\ket{i} \in \hil{H}_{5}\otimes\hil{H}_{5} \label{eq:biorthog5} \end{equation} where $\{\ket{i}, i = 1, \ldots, 5\}$ is an orthogonal quintuple of states, i.e., a basis, in $\hil{H}_{5}$. The strategy is for Alice and Bob to produce outputs for given inputs $x = 1, \ldots, 5, y = 1, \ldots, 5$ via local measurements in this basis on their respective Hilbert spaces.
The form of the biorthogonal representation with equal coefficients (\ref{eq:biorthog5}) guarantees that the outputs for the same inputs $x = y$ will satisfy the perfect correlation constraint (i) with $p=1/5$, and that the outputs for different inputs $x \neq y$ will satisfy the `$\perp$' constraint (ii)---which is simply an orthogonality constraint in this case---with $p=1/5$. For marginals $0 \leq p \leq 1/5$, Alice and Bob can mix this strategy for $p = 1/5$ with the strategy `output 0 for any input,' with the appropriate mixture probabilities, as in the classical case. For a marginal probability $p > 1/5$, a quantum simulation will have to adopt a different strategy. For the marginal $p = 1/3$, suppose Alice and Bob initially (before separation) share many copies of the maximally entangled state \begin{equation} \frac{1}{\sqrt{3}} \sum_{i = 1}^{3} \ket{\alpha_{i}}\ket{\alpha_{i}} \in \hil{H}_{3}\otimes\hil{H}_{3} \label{eq:biorthog3} \end{equation} where $\{\ket{\alpha_{1}}, \ket{\alpha_{2}}, \ket{\alpha_{3}}\}$ is an orthogonal basis in $\hil{H}_{3}$. A biorthogonal representation with equal coefficients takes the same form for any basis. The strategy, for inputs $x = 1,\ldots,5, y = 1, \ldots, 5$, is for Alice, on input $i$, to measure in \emph{any} basis containing the state $\ket{i}$ and for Bob, on input $j$, to measure in \emph{any} basis containing the state $\ket{j}$, and to output the measurement outcome, where the states $\ket{i}$ and $\ket{j}$ are defined by the vertices of the pentagram/pentagon on the circle $\Sigma_{2}$. The form of the biorthogonal decomposition (\ref{eq:biorthog3}) now guarantees that the outputs for the same inputs $x = y$ will be perfectly correlated, but the outputs for any two different inputs $x \neq y$ will satisfy the `$\perp$' constraint for the \emph{pentagram} edges, which represent orthogonal pairs of states, but not for the \emph{pentagon} edges, which represent non-orthogonal pairs of states (as we have labeled the edges).
The angle, $\chi$, subtended at $O$ by two non-orthogonal states corresponding to two radii of the unit sphere subtending an edge of the \emph{pentagon} (see Fig. 4) is given by:\footnote{This is the inverse of the golden ratio, the limit of the ratio of successive terms in the Fibonacci series: $\tau = \frac{\sqrt{5} + 1}{2}$: $1/\tau = \tau -1$.} \begin{equation} \cos \chi = \frac{\sqrt{5} - 1}{2} \end{equation} To see this, note that: \begin{equation} \sin \frac{\chi}{2} = s \sin \frac{\pi}{5} = \frac{\sin \frac{\pi}{5}}{\sqrt{2}\cos \frac{\pi}{10}} = \sqrt{2} \sin\frac{\pi}{10} = \sqrt{2}\frac{\sqrt{5}-1}{4} \end{equation} \begin{figure}[!ht] \begin{picture}(300,240)(-65,0) \begin{tikzpicture} \tikzstyle vertex=[circle,draw,fill=black,inner sep=1pt] \path (0,0) coordinate (P); \path (0,-5) coordinate (O); \path (3*72+18:3cm) coordinate (P1); \path (18:3cm) coordinate (P2); \path (2*72+18:3cm) coordinate (P3); \path (4*72+18:3cm) coordinate (P4); \path (72+18:3cm) coordinate (P5); \path (3*72+38:3.3cm) coordinate (Q); \node at (.25,.3) {P}; \node at (.3,-4.8) {O}; \node at (-.9,-3.5) {$\chi$}; \path (3*72+18:3.5cm) node {$1$}; \path (3*72+10:1.2cm) node {$s$}; \path (18:3.5cm) node {$2$}; \path (2*72+18:3.5cm) node{$3$}; \path (2*72+10:1.2cm) node{$s$}; \path (4*72+18:3.5cm) node {$4$}; \path (72+18:3.5cm) node {$5$}; \draw[very thick] (P1) -- (P2) -- (P3) -- (P4) -- (P5) -- cycle; \draw (P1) -- (P4) (P4) -- (P2) (P2) -- (P5) (P5) -- (P3) (P3) -- (P1); \draw (O) -- (P3); \draw (O) -- (P1); \draw (O) -- (P) (P) -- (P3) (P) -- (P1); \draw (Q) arc (145:153:2cm); \draw (0,0) circle (3cm); \node[vertex] at (0,0) {}; \node[vertex] at (P1) {}; \node[vertex] at (P2) {}; \node[vertex] at (P3) {}; \node[vertex] at (P4) {}; \node[vertex] at (P5) {}; \node[vertex] at (O) {}; \end{tikzpicture} \end{picture} \caption{Pentagram on $\Sigma_{2}$ showing angle $\chi$ between states $\ket{1}$ and $\ket{3}$} \label{fig4} \end{figure} It follows that the probability of success for a 
quantum simulation based on this strategy is given by: \begin{eqnarray} \mbox{prob(successful sim)}_{Q} & = & \frac{1}{25} (5 + 10 + 10[1-\frac{1}{3} (\frac{\sqrt{5} - 1}{2})^{2}]) \nonumber \\ & = & 1 - \frac{1}{25}\epsilon \end{eqnarray} where \[ \epsilon = 10\cdot\frac{1}{3}\left(\frac{\sqrt{5}-1}{2}\right)^{2} \approx 10 \times .12732 < \frac{4}{3} \] i.e., a quantum simulation strategy based on shared maximally entangled states in $\hil{H}_{3}\otimes\hil{H}_{3}$ has a greater probability of success than the optimal classical strategy: \begin{equation} \mbox{prob(successful sim)}_{Q} \approx .94907 > \mbox{optimal prob(successful sim)}_{C} \end{equation} \section{Simulating a PR-box with a KS-box} As we have seen, a 5-dimensional KS$_{p}$-box with $p=1/3$ is nonclassical, so we expect the correlations to be monogamous. It is easy to see that they must be monogamous to avoid the possibility of signaling. For example, suppose Alice could share the KS-correlations with Bob and also with Charles. (We do not suppose that Bob and Charles share the KS-correlations.) Suppose Alice, Bob, and Charles all input 1. Then Bob's output must be the same as Charles' output, which means that: \begin{equation} p_{BC}(01|\mbox{Alice's input = 1}) = p_{BC}(10|\mbox{Alice's input = 1}) = 0 \end{equation} where $p_{BC}(01|\mbox{Alice's input = 1}), p_{BC}(10|\mbox{Alice's input = 1})$ are the joint probabilities of different outputs for Bob and Charles, given that Alice inputs 1. Now suppose that Alice changes her input to 2. In this case, if Alice's output is 1 (which occurs with probability 1/3), Bob's output and Charles' output must both be 0.
If Alice's output is 0 (which occurs with probability 2/3), Bob and Charles can jointly output 00 or 01 or 10 or 11, each with equal probability 1/6, i.e., \begin{equation} p_{BC}(01|\mbox{Alice's input = 2}) = p_{BC}(10|\mbox{Alice's input = 2}) = 1/6 \end{equation} So if Alice could share the KS-correlations with Bob and also with Charles, then Bob and Charles could detect the change in probability from 0 to 1/6 (the first measurement of a difference in their outputs would indicate this), and Alice could signal to Bob and Charles, i.e., `no signaling' entails monogamy. Consider, now, the problem of simulating a PR-box with a KS-box. That is, suppose Alice and Bob are equipped with 5-dimensional KS$_{p}$-boxes with $p = 1/3$ as communication channels. To what extent can they successfully simulate the correlations of a PR-box for random inputs 0 and 1? The following strategy has a probability of 3/4 for successful simulation: \begin{itemize} \item Alice inputs 2 for PR-box input 0, and 1 for PR-box input 1 \item Bob inputs 3 for PR-box input 0, and 1 for PR-box input 1 \end{itemize} To get the PR-box marginals of 1/2 for the outputs 0 and 1, Alice and Bob simultaneously flip their outputs randomly for half the input pairs (i.e., before separation, they generate a random bit string, with equal probabilities for 0 and 1, which they share, and they associate successive rounds of the simulation---successive input pairs---with elements of the string; when the shared bit is 1, they both flip the output). 
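As a check on this strategy, the success probability can be computed by enumerating the KS$_{1/3}$-box outcome distributions; a Python sketch (note that the joint randomizing flips preserve $a \oplus b$, so the raw distributions suffice):

```python
from fractions import Fraction as F

# Raw KS-box distributions for p = 1/3: if x == y the outputs agree, with
# marginal 1/3 for the output 1; if x != y the pair 11 is forbidden and
# 00, 01, 10 are equiprobable.
same_input = {(0, 0): F(2, 3), (1, 1): F(1, 3)}
diff_input = {(0, 0): F(1, 3), (0, 1): F(1, 3), (1, 0): F(1, 3)}

# PR inputs map to KS inputs (Alice: 0 -> 2, 1 -> 1; Bob: 0 -> 3, 1 -> 1), so
# the KS inputs coincide only for PR inputs 11. Bob flips his output each round.
total = F(0)
for x, y in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    dist = same_input if (x, y) == (1, 1) else diff_input
    for (a, b), pr in dist.items():
        if a ^ (1 - b) == (x & y):  # PR-box condition, with Bob's flip applied
            total += F(1, 4) * pr

print(total)  # 3/4
```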
Then: \begin{itemize} \item inputs 00 (i.e., KS-inputs 23) $\rightarrow$ outputs (00 or 11), 01, 10 \item inputs 01 (i.e., KS-inputs 21) $\rightarrow$ outputs (00 or 11), 01, 10 \item inputs 10 (i.e., KS-inputs 13) $\rightarrow$ outputs (00 or 11), 01, 10 \item inputs 11 (i.e., KS-inputs 11) $\rightarrow$ outputs 00, 11 \end{itemize} with equal probability for each possibility, i.e., 1/3 for each of the outcomes (00 or 11), 01, 10 in the case of inputs 00, 01, 10, and 1/2 for each of the outcomes 00, 11 in the case of inputs 11. If, in addition, Bob flips his output each round, then: \begin{itemize} \item inputs 00 (i.e., KS-inputs 23) $\rightarrow$ outputs (01 or 10), 00, 11 \item inputs 01 (i.e., KS-inputs 21) $\rightarrow$ outputs (01 or 10), 00, 11 \item inputs 10 (i.e., KS-inputs 13) $\rightarrow$ outputs (01 or 10), 00, 11 \item inputs 11 (i.e., KS-inputs 11) $\rightarrow$ outputs 01, 10 \end{itemize} The (01 or 10) outputs for the input pairs 00, 01, 10 represent failures, and these occur with probability $3/4 \times 1/3 = 1/4$, so: \[ \mbox{prob(successful sim)} = 3/4 \] It is clear that there is no way of reducing the failure rate, so this is in fact the optimal strategy. It follows that a 5-dimensional KS$_{p}$-box with $p = 1/3$, which exhibits superquantum correlations, is classical with respect to nonlocality. That is, \emph{such a KS-box adds nothing to shared randomness as a resource in simulating the superquantum nonlocal correlations of a PR-box.} This is confirmed by noting that, for any pair of inputs for Alice, and any pair of inputs for Bob, the CHSH inequality is satisfied by the correlations of a 5-dimensional KS$_{p}$-box with $p=1/3$, i.e., the maximum value of the correlation is equal to 2. To compare with the units in terms of which the CHSH inequality is usually expressed, where the observables take the values $\pm 1$, let $a = \pm 1$, $b = \pm 1$. 
Then for inputs $x = 1, \ldots, 5, y = 1, \ldots, 5$: \begin{eqnarray} \langle xy\rangle_{x=y}& = & 1 \\ \langle xy\rangle_{x \neq y} & = & -1/3 \end{eqnarray} and for any $2 \times 2$ pairs of input values: \begin{equation} K = \langle xy\rangle + \langle xy'\rangle +\langle x'y\rangle - \langle x'y'\rangle \leq 2 \end{equation} since at most two of these terms can be equal to 1, in which case the remaining two terms are each equal to -1/3. It follows that the correlations for any two inputs for $x$ and any two inputs for $y$ can be recovered from a local hidden variable theory, but there is no product space that will generate the correlations between outputs for all possible input values to a 5-dimensional KS$_{p}$-box with $p = 1/3$, if the output values are required to be noncontextual, i.e., edge-independent (because the probability of successfully simulating such a KS-box with only shared randomness as a resource is less than .95, as we saw in \S 3). \section{Dropping the Marginal Constraint} For the marginal constraint: \begin{equation} p = 1/5 \end{equation} we saw in \S 3 that a perfect classical simulation of a 5-dimensional KS-box can be achieved with charts $C_{1}$. Similarly, a perfect quantum simulation can be achieved if Alice and Bob share copies of the maximally entangled state: \begin{equation} \frac{1}{\sqrt{5}} \sum_{i=1}^{5}\ket{i}\ket{i} \end{equation} where $\{\ket{i}, i = 1, \ldots, 5\}$ is a basis in $\hil{H}_{5}$. If \begin{equation} 0 \leq p \leq 1/5 \end{equation} a perfect simulation can be achieved if Alice and Bob mix either of the above strategies with the strategy: `output 0 for any input,' with the appropriate mixture probabilities. If \begin{equation} 1/5 \leq p \leq 1/3 \end{equation} Alice and Bob can mix the strategy for $p = 1/3$ in \S 3 with the strategy for $p = 1/5$, with appropriate mixture probabilities.
A perfect simulation is impossible, but a quantum simulation is superior to the optimal classical simulation in this case, because a classical strategy will have to use $C_{2}$ charts as well as $C_{1}$ and $C_{0}$ charts. For \begin{equation} p = 1/2 \end{equation} if the inputs are the same, the outputs are required to be the same, with the output pairs 00 and 11 each occurring with probability 1/2; if the inputs are different, then---since the output pair 11 has zero probability---it follows that the output pairs 01 and 10 must occur with equal probability 1/2 (i.e., the output pair 00 has zero probability). So, for this case, the correlations of a 5-dimensional KS-box become: \begin{itemize} \item if $x=y$, then $a=b$ \item if $x \neq y$, then $a \neq b$ \end{itemize} It is now apparent that, for the marginal $p = 1/2$, and for pairs of inputs like $x \in \{1, 2\}, y \in \{1, 3\}$, i.e., where one of the inputs for Alice and Bob is the same and the other two different, \emph{a 5-dimensional KS-box is equivalent to a PR-box.} If we interpret the KS-inputs $x = 2, 1$ as corresponding to the PR-inputs $x = 0,1$, respectively, and the KS-inputs $y = 3, 1$ as corresponding to the PR-inputs $y=0,1$, respectively, and Bob always flips his outputs, then the CHSH inequality is saturated and the correlations are precisely those of a PR-box, with the same marginals: \begin{equation} \langle 23\rangle + \langle 21\rangle + \langle 13\rangle - \langle 11\rangle = K_{PR} = 4 \end{equation} If \begin{equation} 1/3 \leq p \leq 1/2 \end{equation} a perfect simulation is impossible, but a quantum simulation is superior to a classical simulation. The CHSH inequality is violated: \begin{equation} 2 < K_{KS} \leq 4\, \mbox{ when }\, 1/3 < p \leq 1/2 \end{equation} The marginal probability $p = \frac{1+\sqrt{2}}{6}$ yields the Tsirelson bound $2\sqrt{2}$.
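These CHSH values can be verified by brute force. In $\pm 1$ units (output 0 $\mapsto$ $+1$), the KS$_{p}$-box correlations are $\langle xy \rangle = 1$ on the diagonal and $1 - 4p$ off it; the sketch below (a check we added, assuming this correlation table) maximizes $|K|$ over all $2 \times 2$ input subsets and all placements of the minus sign:

```python
import itertools, math

def corr(x, y, p):
    """<ab> in +/-1 units (output 0 -> +1): 1 if x = y, else 1 - 4p."""
    return 1.0 if x == y else 1.0 - 4.0 * p

def max_chsh(p):
    """Max |CHSH| over all 2x2 input subsets of the 5-dim KS_p-box."""
    best = 0.0
    for xs in itertools.combinations(range(1, 6), 2):
        for ys in itertools.combinations(range(1, 6), 2):
            c = [corr(x, y, p) for x in xs for y in ys]
            for k in range(4):       # each placement of the minus sign
                best = max(best, abs(sum(c) - 2 * c[k]))
    return best

assert abs(max_chsh(1 / 3) - 2.0) < 1e-12                # classical bound
assert abs(max_chsh(1 / 2) - 4.0) < 1e-12                # PR-box value
assert abs(max_chsh((1 + math.sqrt(2)) / 6)
           - 2 * math.sqrt(2)) < 1e-12                   # Tsirelson bound
```

The maximizing subsets are those in which one of Alice's and one of Bob's inputs coincide, giving $|K| = 12p - 2$ for $p \geq 1/3$.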
Note, however, that a perfect quantum simulation of all the correlations of a 5-dimensional KS-box with this marginal is impossible, even though for any two inputs $x$ and any two inputs $y$, the KS-box is no more nonlocal than quantum mechanics---just as the correlations of the $p = 1/3$ case are superquantum, while being no more nonlocal than a classical theory for any two inputs $x$ and any two inputs $y$. As we noted in \S1, the space of `no signaling' bipartite probability distributions, with arbitrary inputs $x \in \{1, \ldots, n\}, y \in \{1, \ldots, n\}$ and binary outputs, 0 or 1, has the form of a convex polytope, with the vertices representing generalized PR-boxes (which differ only with respect to permutations of the inputs and/or outputs), or deterministic boxes, or (in the case $n > 2$) combinations of these. A 5-dimensional KS$_{p}$-box can be defined in terms of its joint probabilities as in Table 2. For $p=1/2$, the KS$_{p}$-box is a generalized PR-box, with $p(00|xy) = p(11|xy) = 1/2$ in the diagonal cells, and $p(01|xy) = p(10|xy) = 1/2$ in the off-diagonal cells. Permuting the outputs for $y$ yields a box with $p(01|xy) = p(10|xy) = 1/2$ in the diagonal cells, and $p(00|xy) = p(11|xy) = 1/2$ in the off-diagonal cells, in which case the probabilities for $x=2,1; y = 3,1$ are as in the definition of a PR-box in \S1 (effectively a permutation of the inputs, with $x=2$ representing the PR-input $x=0$ and $y=3$ representing the PR-input $y=0$). It is now easy to see that the probabilities of a KS$_{p}$-box for $p < 1/2$ can be generated by mixing the extremal KS-box with $p = 1/2$ and the extremal deterministic box with $p(00|xy) = 1$ in each of the cells, in the ratio $2p: 1-2p$, so these KS-boxes lie inside the `no signaling' polytope. \begin{table}[h!]
\begin{center} \begin{tabular}{|ll||ll|ll|ll|lll|} \hline &$x$&$1$& &$2$ & &$\hdots$ & &$5$&&\\ y&&&&&&&&&&\\ \hline\hline $1$&&$1-p$ &$0$ &$1-2p$ &$p$ & $\ddots$ &&$1-2p$&$p$& \\ &&$0$ &$p$ &$p$ &$0$ & & &$p$&$0$&\\\hline $2$&&$1-2p$ &$p$ &$1-p$ &$0$ & $\ddots$& &$1-2p$&$p$&\\ &&$p$ &$0$ &$0$ &$p$& & &$p$&$0$&\\\hline $\vdots$& & $\ddots$ && $\ddots$&& $\ddots$ && $\ddots$&&\\\hline $5$&&$1-2p$ &$p$ &$1-2p$ &$p$ & $\ddots$& &$1-p$&$0$&\\ &&$p$ &$0$ &$p$ &$0$& & &$0$&$p$&\\\hline \hline \end{tabular} \end{center} \caption{Joint probabilities for a 5-dimensional KS$_{p}$-box.} \end{table} \section{Commentary} An $n$-dimensional KS$_{p}$-box with marginal $p = 1/n$ can be perfectly simulated by a quantum simulation in which Alice and Bob share copies of the maximally entangled state $\frac{1}{\sqrt{n}} \sum\ket{i}\ket{i} \in \hil{H}_{n}\otimes\hil{H}_{n}$ and produce outputs for given inputs via local measurements in the same basis $\{\ket{i}, i = 1, \ldots, n\}$ on their respective Hilbert spaces, where the $n$ orthogonal basis states are associated with the inputs $x = 1, \ldots, n, y = 1, \ldots, n$. The perfect correlation constraint (i) for the same inputs $x, y$ will be satisfied, and the `$\perp$' constraint (ii) for different inputs will be satisfied as a quantum orthogonality constraint. Similarly, a perfect classical simulation can be achieved if Alice and Bob share classical charts with $n$ vertices selected by a shared random variable, in which a single vertex is pre-assigned the value 1 and the remaining $n-1$ vertices are pre-assigned the value 0. A perfect quantum or classical simulation can also be achieved for $0 \leq p \leq 1/n$ by mixing the strategy for $p = 1/n$ with the strategy `output 0 for any input' with the appropriate mixture probabilities, as we saw in \S 3 for the case $p = 1/5$. 
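As a consistency check on Table 2, the sketch below (our own check, transcribing the tabulated cells) verifies that the joint probabilities are nonnegative and satisfy `no signaling'---each party's marginal for output 1 equals $p$, independently of the other party's input---for representative values of $p$:

```python
from fractions import Fraction

def ks_cell(x, y, p):
    """Cell (x, y) of Table 2: joint probabilities {(a, b): prob}."""
    if x == y:
        return {(0, 0): 1 - p, (0, 1): 0, (1, 0): 0, (1, 1): p}
    return {(0, 0): 1 - 2 * p, (0, 1): p, (1, 0): p, (1, 1): 0}

def check_no_signaling(n, p):
    for x in range(1, n + 1):        # Alice's marginal independent of y
        assert {sum(pr for (a, b), pr in ks_cell(x, y, p).items() if a == 1)
                for y in range(1, n + 1)} == {p}
    for y in range(1, n + 1):        # Bob's marginal independent of x
        assert {sum(pr for (a, b), pr in ks_cell(x, y, p).items() if b == 1)
                for x in range(1, n + 1)} == {p}
    # all entries nonnegative (requires p <= 1/2)
    assert all(pr >= 0 for x in range(1, n + 1) for y in range(1, n + 1)
               for pr in ks_cell(x, y, p).values())

for p in (Fraction(1, 5), Fraction(1, 3), Fraction(1, 2)):
    check_no_signaling(5, p)
```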
For $p > 1/n$, however, this is not possible, and a quantum simulation will have to adopt a strategy in which Alice and Bob produce outputs for given inputs on the basis of local measurements on copies of a shared entangled state $\frac{1}{\sqrt{m}} \sum_{i=1}^{m}\ket{i}\ket{i} \in \hil{H}_{m}\otimes\hil{H}_{m}$ with $m < n$ to generate the marginal probability. Then different input pairs $x,y$; $x',y'$ can be associated with different local measurement contexts defined by different bases in $\hil{H}_{m}$, and it is possible that the same input can be associated with two or more \emph{incompatible} local measurement contexts (as we saw for $p = 1/5$ in \S 3, where the input corresponding to a vertex of the pentagram could be associated with two contexts associated with two bases in $\hil{H}_{3}$ represented by the two edges of the pentagram intersecting in the vertex). A little reflection shows that if each input can be associated with two or more incompatible local measurement contexts associated with different bases, then $n \geq 5$. If $n=2$, a perfect quantum simulation is possible for all marginal probabilities $0 \leq p \leq 1/2$ if Alice and Bob share copies of the maximally entangled state $\frac{1}{\sqrt{2}} \sum\ket{i}\ket{i}$ in $\hil{H}_{2}$. There can only be one local measurement context associated with each input, because the state $\ket{i} \in \hil{H}_{2}$ corresponding to the input $i$ cannot belong to two different bases in $\hil{H}_{2}$. If $n=3$, there are three possible local measurement contexts represented by the input pairs 12, 13; 21, 23; 31, 32 (we take permutations of contexts such as 12 and 21 as equivalent). The orthogonality relations of the three contexts are represented by the edges of a triangle, in which each vertex (corresponding to an input of the KS-box) is associated with two contexts. 
Clearly, in $\hil{H}_{3}$, the three contexts can be embedded into a single context associated with a basis in $\hil{H}_{3}$ (since the triangle also represents the orthogonality relations of the three basis states). In order for each input to be associated with two incompatible local contexts in a quantum simulation, the two contexts would have to be represented by orthogonal bases with a common basis state in a proper subspace of $\hil{H}_{3}$, i.e., in a 2-dimensional Hilbert space, which is impossible. If $n=4$, there are six possible local measurement contexts represented by the input pairs 12, 13, 14; 21, 23, 24; 31, 32, 34; 41, 42, 43. The orthogonality relations of the six contexts are represented by the edges and diagonals of a square, in which each vertex is associated with three contexts. Again, the six contexts can be embedded into a single context associated with a basis in $\hil{H}_{4}$ (since the square with diagonals also represents the orthogonality relations of the four basis states). In order for each input to be associated with at least two incompatible local contexts in a quantum simulation, the two contexts would have to be represented by different bases with a common basis state in a proper subspace of $\hil{H}_{4}$, i.e., in a 3-dimensional Hilbert space (since this is impossible on $\hil{H}_{2}$). If we remove two edges of the square with diagonals, in such a way that each vertex is associated with two contexts, the orthogonality relations in $\hil{H}_{3}$ are inconsistent with the assumption that there are four distinct vertices, each associated with two incompatible local contexts. 
\begin{figure}[!ht] \begin{picture}(300,80)(-15,0) \begin{tikzpicture} \tikzstyle vertex=[circle,draw,fill=black,inner sep=1pt] \path (0,0) coordinate (O); \path (3*72+18:1cm) coordinate (P1); \path (18:1cm) coordinate (P2); \path (2*72+18:1cm) coordinate (P3); \path (4*72+18:1cm) coordinate (P4); \path (72+18:1cm) coordinate (P5); \path (-4.4,-.8) coordinate (S1); \path (-2.5,-.8) coordinate (S2); \path (-2.5,.9) coordinate (S3); \path (-4.4,.9) coordinate (S4); \path (-8,-.8) coordinate (T1); \path (-6,-.8) coordinate (T2); \path (-7,.9) coordinate (T3); \path (-9.6,-.8) coordinate (L1); \path (-9.6,.9) coordinate (L2); \path (-9.6,-1.1) node {$1$}; \path (-9.6,1.2) node {$2$}; \path (-8,-1.1) node {$1$}; \path (-6,-1.1) node {$2$}; \path (-7,1.2) node {$3$}; \path (-4.4,-1.1) node {$1$}; \path (-2.5,-1.1) node {$2$}; \path (-2.5,1.2) node {$3$}; \path (-4.4,1.2) node {$4$}; \path (3*72+18:1.3cm) node {$1$}; \path (18:1.3cm) node {$2$}; \path (2*72+18:1.3cm) node {$3$}; \path (4*72+18:1.3cm) node {$4$}; \path (72+18:1.3cm) node {$5$}; \draw (S1) -- (S2) -- (S3) -- (S4) -- cycle; \draw (S1) -- (S3); \draw (S2) -- (S4); \draw (T1) -- (T2) -- (T3) -- cycle; \draw (L1) -- (L2); \draw (P1) -- (P2) -- (P3) -- (P4) -- (P5) -- cycle; \draw (P1) -- (P4) (P4) -- (P2) (P2) -- (P5) (P5) -- (P3) (P3) -- (P1); \node[vertex] at (L1) {}; \node[vertex] at (L2) {}; \node[vertex] at (T1) {}; \node[vertex] at (T2) {}; \node[vertex] at (T3) {}; \node[vertex] at (S1) {}; \node[vertex] at (S2) {}; \node[vertex] at (S3) {}; \node[vertex] at (S4) {}; \node[vertex] at (P1) {}; \node[vertex] at (P2) {}; \node[vertex] at (P3) {}; \node[vertex] at (P4) {}; \node[vertex] at (P5) {}; \end{tikzpicture} \label{fig5} \end{picture} \caption{Basis orthogonality relations in $\hil{H}_{n}$, for $n= 2, 3, 4, 5$} \end{figure} For example, suppose we remove the two diagonals of the square. 
The vertices 1 and 3 are represented by 1-dimensional projection operators in $\hil{H}_{3}$ that are both orthogonal to the plane defined by the 1-dimensional projectors representing vertices 2 and 4, which requires that 1 and 3 are represented by the same 1-dimensional projector. (See Fig. 5.) If $n=5$, there are ten possible local measurement contexts. The orthogonality relations are represented by the edges and diagonals of a pentagon, i.e., a pentagon with an inscribed pentagram, in which each vertex is associated with four contexts. The ten contexts can be embedded into a single context associated with a basis in $\hil{H}_{5}$ (since the pentagon and pentagram edges also represent the orthogonality relations of the five basis states). If we remove the diagonals (the edges of the pentagram), or if we remove the edges of the pentagon, each vertex is associated with two contexts. As we showed in \S 3, the orthogonality relations of the pentagram (or, equivalently, the pentagon, but not both) can be implemented in $\hil{H}_{3}$, in such a way that each vertex is associated with two incompatible local contexts. Note that in considering a classical or quantum simulation of an $n$-dimensional KS$_{p}$-box with $p > 1/n$, there is a trade-off between satisfying the marginal constraint and perfectly simulating the correlations. For example, in the case of the 5-dimensional KS$_{p}$-box with $p = 1/3$, one could always adopt a strategy for simulating the $p = 1/3$ case by mixing the strategy for $p = 1/5$ with the classical or quantum strategy for $p = 1/3$ considered in \S 3. The simulation will fail on two counts: with respect to meeting the marginal constraint, and with respect to recovering the correlations. But such a strategy will do better at recovering the correlations than the strategy for $p = 1/3$, and will also achieve a marginal probability for the output 1 that is closer to the value $p = 1/3$ than the strategy for $p = 1/5$. 
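The realization of the pentagram orthogonality relations in $\hil{H}_{3}$ referred to above can be made concrete with the standard KCBS-type vectors. In the parametrization below (our assumption; the construction in \S 3 may differ in labeling), the five unit vectors form a 5-cycle of orthogonality---the pentagram, after relabeling the pentagon vertices---while non-adjacent pairs are non-orthogonal:

```python
import numpy as np

# Five unit vectors on a cone about the z-axis, at azimuthal steps of
# 4*pi/5; the cone angle is tuned so that successive vectors are orthogonal.
c = np.cos(np.pi / 5)
theta = np.arccos(np.sqrt(c / (1 + c)))
u = np.array([[np.sin(theta) * np.cos(4 * np.pi * k / 5),
               np.sin(theta) * np.sin(4 * np.pi * k / 5),
               np.cos(theta)] for k in range(5)])

for k in range(5):
    assert abs(u[k] @ u[k] - 1) < 1e-12          # unit vectors
    assert abs(u[k] @ u[(k + 1) % 5]) < 1e-12    # cycle edges: orthogonal
    assert abs(u[k] @ u[(k + 2) % 5]) > 0.1      # non-edges: not orthogonal
```

Since the remaining pairs are non-orthogonal, only one of the two 5-cycles (pentagram or pentagon) is implemented as a quantum orthogonality constraint, consistent with the "but not both" qualification in the text.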
What is clear, though, is that the closer a simulation strategy approximates the correlation constraint, the more the value of $p$ decreases from the required value of 1/3 (where the probability of meeting the correlation constraint is less than .95) to 1/5 (where the probability of meeting the correlation constraint is 1). In the preceding discussion, we opted to consider the question of simulating the correlations of an $n$-dimensional KS$_{p}$-box under the assumption that the simulation meets the marginal constraint. The space of `no signaling' bipartite theories for two binary-valued observables for each party---equivalently the space of `no signaling' bipartite probability distributions with binary-valued inputs and binary-valued outputs---can be divided into a classical region bounded by the value 2 for the CHSH correlation $K = \langle 00\rangle + \langle 01\rangle + \langle 10\rangle - \langle 11\rangle$, a quantum region bounded by the Tsirelson bound $2\sqrt{2}$, and a superquantum region between the Tsirelson bound and the maximum value $K = 4$ attained by a PR-box: \begin{eqnarray} K_{C} & \leq & 2 \label{eq:CHSH(C)} \\ K_{Q} & \leq & 2\sqrt{2} \label{eq:CHSH(Q)} \\ K_{PR} & = & 4 \label{eq:CHSH(PR)} \end{eqnarray} Any probability distribution in the classical region can be represented as a unique mixture (convex combination) of bipartite pure states that are locally deterministic for each party, represented by vertices of the classical polytope, which is a simplex. It follows that the distribution can be generated by a random variable shared between the two parties, where the values label local deterministic states assigning values to given inputs. Probability distributions in the region outside the classical simplex exhibit correlations that are more nonlocal than classical correlations. 
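For reference, the Tsirelson value $2\sqrt{2}$ in equation (\ref{eq:CHSH(Q)}) is attained by the textbook measurement angles on a maximally entangled qubit pair, for which the correlation is $E(a,b) = \cos(a-b)$ (sign conventions vary; this is one standard form). A minimal numerical check:

```python
import math

def E(a, b):
    """Correlation for a maximally entangled qubit pair, with both
    measurement directions lying in a common plane."""
    return math.cos(a - b)

a, ap = 0.0, math.pi / 2            # Alice's two measurement angles
b, bp = math.pi / 4, -math.pi / 4   # Bob's two measurement angles
K = E(a, b) + E(a, bp) + E(ap, b) - E(ap, bp)
assert abs(K - 2 * math.sqrt(2)) < 1e-12   # Tsirelson bound attained
```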
Each probability distribution in the nonclassical region can be represented non-uniquely as a mixture of pure or extremal states represented by vertices of the `no signaling' polytope, which is not a simplex. A KS-box, as a hypothetical superquantum information channel, reveals a further dimension of structure in the information-theoretic properties of `no signaling' theories, having to do with the contextuality of theories outside the classical simplex. To reveal this structure requires considering theories with more than two observables for each party. Consider a 5-dimensional KS-box with $p=1/3$. Referring to the discussion in \S 3, let \begin{equation} Z = |\sum_{\mbox{\small{p-gram edges}}}p(a\cdot b= 0|x, y) - \sum_{\mbox{\small{p-gon edges}}}p(a\cdot b = 0|x, y)| \end{equation} and define the correlation: \begin{equation} \mathcal{K} = \sum_{x=y}p(a=b|x,y) - Z \end{equation} It follows from equation (\ref{eq:optimal classical}) in \S 3 that the optimal classical value for $\mathcal{K}$ is: \begin{equation} \mathcal{K}_{C} = 5 - \frac{4}{3} \end{equation} This expresses a constraint on the probabilities derived from a noncontextual assignment of 0's and 1's to the inputs $1,\ldots,5$ satisfying the orthogonality constraint, either for the pentagram edges or for the pentagon edges, where local noncontextuality in satisfying the orthogonality constraint is forced by the requirement of perfect correlation for the same inputs $x=y$. For a 5-dimensional KS-box, we have: \begin{equation} \mathcal{K}_{KS}=5 \end{equation} Since $\mathcal{K}_{C} < \mathcal{K}_{KS}$, the correlations of a 5-dimensional KS-box with $p = 1/3$ cannot be recovered from a probability distribution that lies inside the classical simplex. However, for any subset of $2 \times 2$ input pairs, $K \leq 2$, so the correlations for any particular subset of $2 \times 2$ input pairs can be recovered from a probability distribution that lies inside the classical simplex.
In other words, if Alice and Bob are told in advance that they will be required to simulate the correlations of a particular subset of $2 \times 2$ input pairs to a 5-dimensional KS-box with $p = 1/3$, there is a local strategy based on shared randomness that will enable them to do so. What is significant here is that the different classical local `contexts' defined by the classical simplices associated with the different subsets of $2 \times 2$ input pairs cannot be embedded into the classical simplex for all $5 \times 5$ input pairs. Note that the lattice of subspaces of a simplex is a Boolean algebra, with a 1-1 correspondence between the vertices and the facets (the $(n-1)$-dimensional faces). So the `contexts' defined by these classical simplices are Boolean algebras. For the maximally entangled quantum state in $\hil{H}_{3}\otimes\hil{H}_{3}$, we obtain: \begin{equation} \sum_{x=y}p(a=b|x,y) - Z = 5 - \epsilon \end{equation} where $\epsilon < \frac{4}{3}$. We conjecture that this is the optimal quantum value $\mathcal{K}_{Q}$. The fact that the quantum bound exceeds the classical bound reflects a feature of quantum probability assignments that is not shared by classical probability assignments: not only is the `$\perp$' constraint satisfied as an orthogonality constraint by 0, 1 probabilities for the orthogonal pentagram edges, but the `$\perp$' constraint is satisfied for the non-orthogonal pentagon edges probabilistically, in the sense that the probability that two non-orthogonal vertices are both assigned the value 1 decreases continuously with the square of the cosine of the angle between the vertices, as the angle varies between 0 and orthogonality.
The inequality \begin{equation} \mathcal{K}_{C} < \mathcal{K}_{Q} < \mathcal{K}_{KS} \end{equation} then expresses the relative extent to which the correlations of each type of theory are contextual, in the sense that the correlations for all inputs (or all observables) cannot be derived from a joint probability distribution for all pairs of inputs in the classical simplex, even though the correlations for every subset of $2 \times 2$ inputs can be derived from a joint probability distribution that lies inside the corresponding classical simplex---i.e., these classical local `contexts' cannot be embedded into the classical simplex for the full set of joint probabilities. A KS-box can be superquantum with respect to contextuality as measured by the correlation $\mathcal{K}$, while being no more nonlocal than a classical theory, as measured by the CHSH correlation $K$ for any subset of $2 \times 2$ input pairs. Similarly, since the Tsirelson bound can be attained by the correlations for certain subsets of $2 \times 2$ input pairs to a 5-dimensional KS$_{p}$-box with $p > 1/3$, while a perfect quantum simulation for all pairs of inputs is impossible, it follows that a KS-box can be superquantum with respect to contextuality, as measured by the correlation $\mathcal{K}$, while being no more nonlocal than quantum mechanics, as measured by the CHSH correlation $K$ for any subset of $2 \times 2$ input pairs. \section*{Acknowledgements} Jeffrey Bub acknowledges support from the University of Maryland Institute for Physical Science and Technology and informative discussions with Daniel Rohrlich. Allen Stairs acknowledges support from NSF grant no. 0822545. \bibliographystyle{plain}
0903.2272
\section{Introduction} \label{sec:intro} \vspace{0.0cm} \PARstart{D}{igital} cameras use image-processing tools, e.g., interpolation techniques, such as those used in analog camcorders, in order to achieve good quality images. One big difference between digital cameras and analog camcorders is that digital cameras store digital data in flash memories. Because the data is stored digitally, functionalities such as image editing and enhancement can be added. But the price of flash memories is still very high, so that low image bit-rates are required in order to enable storage of a large number of images at a reasonable cost. To achieve this, most digital cameras use lossy compression schemes like JPEG~\cite{JPEGBook} to store the images. In this paper, we focus on the color interpolation process that is used in many cameras, and specifically on how this interpolation should be taken into consideration when designing image compression for digital cameras. In order to produce full color images, most digital cameras place color filters on monochrome sensors. While some high-end digital cameras use three CCD plates to get full color images, where each plate takes one color component, most digital cameras use a single CCD plate, with several different color filters, and produce full color images by using an interpolation technique. Although there are several different color filter arrays (CFA)~\cite{Yamada00SSC} \cite{Yamanaka77P}, in this paper, we focus on the Bayer CFA, which is the most widely used in digital cameras. The Bayer CFA, as shown in Fig.~\ref{fig:1}, uses 2 by 2 repeating patterns (RP) in which there are two green pixels, one red and one blue. There is only one color component in each pixel, so the other two color components for a given pixel have to be interpolated using neighboring pixel information.
For example, in a bilinear interpolation method, the red (blue) color component on a green pixel in Fig.~\ref{fig:1} is produced by the average value of two adjacent red (blue) pixels. Although there are several possible interpolation algorithms~\cite{Ramanath02JEI} \cite{Longere02PIEEE} \cite{Trussell02IP} \cite{Adams98ICIP} \cite{Kimmel99IP} \cite{Kehtarnavaz03JEI}, it is clear that from an information theoretic viewpoint they all result in an increase of redundancy. \begin{figure}[tb] \centering \includegraphics[width=3.5cm]{ccd} \caption{Bayer color filter array. Each letter indicates the position of a different color filter. R, G and B are for Red, Green and Blue, respectively. The gray block indicates 2 by 2 repeating pattern.} \label{fig:1} \end{figure} \begin{figure}[tb] \centering \begin{minipage}[b]{1.0\linewidth} \centering \includegraphics[width=8.8cm]{diagram1}\\ \vspace{0.0cm} \centerline{(a)}\smallskip \end{minipage} \begin{minipage}[b]{1.0\linewidth} \centering \includegraphics[width=8.8cm]{diagram2}\\ \vspace{0.0cm} \centerline{(b)} \end{minipage} \caption{Block diagrams of (a) the conventional method and (b) the proposed method. In (a) an image processing stage is followed by a compression stage. In (b) interpolation and post-processing in an image processing stage are done after compression and decompression.} \label{fig:diagram} \vspace{0.0cm} \end{figure} In a conventional method, as shown in Fig.~\ref{fig:diagram} (a), after finishing the image processing stage, a lossless or lossy image compression algorithm is used before storing the image. Although in theory one could achieve the same compression with or without the interpolation, to do so would require exploiting the specific characteristics of the interpolation technique within the compression algorithm, which is clearly not easy to achieve if we wish to use a standard compliant compression method without modification. 
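To make the bilinear interpolation just described concrete, the sketch below (our own illustration; function names are not from any camera pipeline) demosaics an RGGB Bayer mosaic: measured samples are kept, and each missing color component is the average of the available samples of that color in the 3-by-3 neighborhood, which reproduces the two- and four-neighbor averages described above:

```python
import numpy as np

def bayer_masks(h, w):
    """Boolean masks for R, G, B sample positions (RGGB pattern)."""
    r = np.zeros((h, w), bool); r[0::2, 0::2] = True
    b = np.zeros((h, w), bool); b[1::2, 1::2] = True
    return [r, ~(r | b), b]

def bayer_mosaic(rgb):
    """Keep one color sample per pixel, as a single-CCD sensor would."""
    h, w, _ = rgb.shape
    mosaic = np.zeros((h, w))
    for ch, m in enumerate(bayer_masks(h, w)):
        mosaic[m] = rgb[..., ch][m]
    return mosaic

def box3(a):
    """Sum over the (border-truncated) 3x3 neighborhood of each pixel."""
    p = np.pad(a, 1)
    h, w = a.shape
    return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3))

def bilinear_demosaic(mosaic):
    """Missing components = average of that color's samples in the 3x3
    window; measured samples are kept unchanged."""
    h, w = mosaic.shape
    out = np.empty((h, w, 3))
    for ch, m in enumerate(bayer_masks(h, w)):
        s = box3(np.where(m, mosaic, 0.0))
        n = box3(m.astype(float))
        out[..., ch] = np.where(m, mosaic, s / n)
    return out
```

On a green pixel this averages the two adjacent red (blue) samples, and on a red or blue pixel the four adjacent green samples; a flat color image passes through the mosaic/demosaic round trip unchanged.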
For this reason, in this paper we propose image transform algorithms to encode the image {\em before} interpolation, so that interpolation is performed only after decoding. We call this approach Interpolation after Decoding (IAD), see Fig.~\ref{fig:diagram} (b). There are some other functions, such as white balancing and color correction, that are performed in the image processing stage. These are shown as pre- and post-processing in the figure. In our proposed approach less data needs to be encoded, since only one color is available at each pixel position before interpolation. The main challenge is then how to organize the available data for encoding in order to best exploit the spatial redundancy for compression. Methods that increase image quality by exploiting the redundancy of interpolation in post- and pre-processing stages, and that use encoder characteristics during interpolation, have been studied. In~\cite{Herley98ICIP}, under the assumption of a fixed interpolation algorithm, the quantization noise is reduced by using an iterative method that incorporates information about the interpolation algorithm. By contrast, our approach assumes only a specific CFA and can operate with any interpolation technique. The main difference is that our algorithms compress non-interpolated images without introducing the redundancy of interpolation, whereas the algorithm in~\cite{Herley98ICIP} improves image quality by exploiting the redundancy of interpolation. In~\cite{Baharav02SPIE}, under a given compression method, minimizing the error between a decoded full color source and a decoded image after interpolation is studied, but the complexity required is too high for use in digital cameras. In~\cite{Koh03ICIP}, as a modification of our previous work in~\cite{Lee01ICIP}, CFA data compression with a different format conversion is proposed, but the method is limited to working with bilinear interpolation.
Since interpolation is not involved in the encoding and decoding processes of our algorithm, the algorithm itself is independent of the interpolation method. Also, interpolation is done on the decoded pixels, so any interpolation method that is not sensitive to the coding error can be applied. As an image coder, JPEG is widely used in digital cameras because it is relatively simple and provides good performance, especially when the compression ratio is low. JPEG is a block discrete cosine transform (DCT) based coder and the blocking artifacts can become severe as the compression ratio becomes higher. Discrete wavelet transform (DWT) based coders such as EZW~\cite{Shapiro93SP}, SPIHT~\cite{Said96CSVT} and EBCOT~\cite{Taubman00IP} (adopted in JPEG2000~\cite{JPEG2000Book}) are also used as image coders. A DWT based coder does not produce blocking artifacts and it provides good performance at high compression ratios. In this paper, we use JPEG and SPIHT as representative of the DCT and DWT based approaches, respectively. Our proposed algorithms are tested under both of these coding techniques. Although we focus on standard compression methods, interpolation-aware compression methods can provide better performance, especially when the interpolation method used is sensitive to the coding error of standard compression methods. In this paper, extending our previous work in~\cite{Lee01ICIP}, we propose several different algorithms to transform the non-interpolated images before compression. We provide performance results of the proposed algorithms with different coders (JPEG and SPIHT) and interpolation methods (bilinear and adaptive interpolation). Also, using a simple example based on one-dimensional data, we propose an analysis to provide some intuition about why our approach outperforms conventional compression-after-interpolation (CAI) methods.
In our problem, an original full color image is not available, since cameras are assumed to capture images with single color pixels. Thus, for the purpose of comparison we use as a reference a full color image obtained by interpolating the original (uncompressed) captured image. Therefore our problem will be to find coding schemes that are optimized in terms of minimizing the error with respect to that original interpolated image. The experimental results show that the proposed algorithms outperform the conventional method in the full range of compression ratios for JPEG coding with bilinear interpolation, and up to a 20:1 or 40:1 compression ratio for SPIHT coding, depending on the interpolation methods used. Thus, in both cases, our proposed techniques are superior in the range of compression ratios that are used in practical digital cameras (i.e., those corresponding to high quality images). This paper is organized as follows: in Section~\ref{sec:toy}, the theoretical rate distortion performance of the CAI and IAD approaches is analyzed by using a 1-D sequence and DPCM encoding. Proposed image transform algorithms are addressed in Section~\ref{sec:algorithm}. Experimental results are provided as a demonstration of the validity of our algorithms in Sections~\ref{sec:capt_results} and~\ref{sec:adaptive}. Finally, the conclusion of this work is in Section~\ref{sec:capt_conclusion}. \section{Performance comparison using one dimensional sources} \label{sec:toy} The main difference between the CAI and IAD methods is the order of compression and interpolation. In this section we propose an analysis to provide some intuition about why an IAD method can theoretically outperform a CAI method by considering differential pulse code modulation (DPCM) compression of a one dimensional first order autoregressive (AR) process.
DPCM exploits the correlation between two adjacent pixels to reduce the residual energy coded, whereas transform coding used in standard image coding exploits spatial correlation to pack a large fraction of its total energy in relatively few transform coefficients. Therefore DPCM compression has a coding process (including decorrelation, quantization and entropy coding) similar to that of standard image coding, although the decorrelation in DPCM is weaker than that in transform coding. First, we compare the R-D performance of DPCM and DPCM after interpolation (DPCMI). Then, we show that the IAD method outperforms the CAI method under the following assumptions: (i) DPCM coding is used, (ii) the interpolated sequence is divided into two sub-sequences during coding, and (iii) the distortion is measured after interpolation. Although open loop DPCM is not generally used due to the error propagation in the decoded sequence, its analysis is easier, given that the difference sequence has an explicit theoretical R-D curve when the source is a Gaussian AR process. \begin{figure}[tb] \centering \begin{tabular}{cc} \includegraphics[width=4.8cm]{dpcm1} & \includegraphics[width=4.8cm]{dpcm2} \\ \vspace{0.2cm} (a) & (b)\\ \end{tabular} \caption{Gray and white boxes indicate original and interpolated samples, respectively. In (a), $\{Z_n\}$ is a differential sequence of the original sequence taken from sensors and in (b), $\{T_n\}$ and $\{S_n\}$ indicate a differential sequence of the interpolated sequence.} \label{fig:DPCM} \end{figure} Therefore we consider open loop DPCM of one dimensional first order zero mean Gaussian AR processes.
Let \setlength{\arraycolsep}{0.0cm} \begin{equation}\label{eq:AR} X_n{}={}\rho X_{n-1}{}+{}W_n, ~ n = 1, 2, \cdots , \end{equation} \setlength{\arraycolsep}{5pt} denote the process, where $\{W_n\}$ is a zero-mean sequence of independent and identically distributed random variables with $W_n \sim N(0,\sigma_W^2)$, and $\rho$ is the correlation coefficient ($0\leq \rho < 1$). Then from the probability distribution of $W_n$, the probability distribution of $X_n$ is $N(0, \sigma_W^2 /(1-\rho^2))$. We assume that the initial state $X_0$ is given and we are interested in the source outputs for $n \geq 1$. We define the differential sequence of $\{X_n\}$ as $\{Z_n\}$: \setlength{\arraycolsep}{0.0em} \begin{equation}\label{eq:DPCM} Z_n \triangleq X_n - X_{n-1} {}={} (\rho -1)X_{n-1}{}+{}W_n~. \end{equation} \setlength{\arraycolsep}{5pt} Since $X_{n-1}$ and $W_n$ are independent, $Z_n$ also has a Gaussian distribution ($Z_n \sim N(0,~2 \sigma_W^2 /(1+\rho))$). The rate distortion (R-D) function for a Gaussian source with mean square error (MSE) distortion can be written in closed form~\cite{cover91info} and therefore the R-D function of ${Z_n}$ is \begin{equation}\label{eq:RD_1} R_1(D) = \frac{1}{2} \log_2 (\frac{2 \sigma_W^2}{(1+\rho)D}),~~~~ \text{for }0 \leq D <\frac{2 \sigma_W^2}{1+\rho}~, \end{equation} where $D$ denotes the average distortion of the coded data. The distortion of interpolated pixels is addressed in the last part of this section. Next, we double the number of samples by using a linear interpolation method and define this new sequence $Y_n$ as: \begin{equation}\label{eq:INT} \begin{cases} Y_{2n} \triangleq X_n, \\ Y_{2n+1} \triangleq (X_n + X_{n+1})/2. \end{cases} \end{equation} From this sequence, as shown in Fig.~\ref{fig:DPCM} (b), two differential sequences $\{T_n\}$ and $\{S_n\}$ can be defined as \begin{equation}\label{eq:INT2} \begin{cases} T_n \triangleq Y_{2n-1}-Y_{2n-2} \\ S_n \triangleq Y_{2n}- Y_{2n-1}.
\end{cases} \end{equation} Note that because of the chosen interpolation mechanism, $T_n$ is identical to $S_n$ (i.e., $T_n = S_n = (X_n - X_{n-1})/2$) and the probability distribution of $T_n$ (or $S_n$) is $N(0,\sigma_W^2/(2(1+\rho)))$. Since both $\{T_n\}$ and $\{S_n\}$ are Gaussian sources, their R-D functions are: \begin{equation}\label{eq:RD_T} R_T(D) = R_S(D) = \frac{1}{2} \log_2 (\frac{\sigma_W^2}{2(1+\rho)D}), ~\text{for } 0 \leq D < \frac{\sigma_W^2}{2(1+\rho)}~. \end{equation} In this example, since $S_n$ and $T_n$ are the same, there is no need to encode $S_n$ if $T_n$ is available. However, in our original problem, the differences between neighboring pixels of interpolated images are not identical, since more than two pixels are involved in 2-D interpolation. Also, a standard compliant compression algorithm cannot employ additional information related to interpolation. Therefore, although $S_i$ and $T_j$ can be spatially correlated, we assume that $S_i$ and $T_j$ are encoded independently for all $i$ and $j$. Then the rate distortion function of DPCM for $Y_n$, i.e., the R-D function for the DPCMI approach, is: \begin{equation}\label{eq:RD_2} R_2(D) = R_T(D) + R_S(D) = \log_2 (\frac{\sigma_W^2}{2(1+\rho)D}), ~\text{for } 0 \leq D < \frac{\sigma_W^2}{2(1+\rho)}~. \end{equation} \begin{figure}[tb] \centering \includegraphics[width=11cm]{AR_RD} \caption{The upper (lower) graph is for $\rho = 0.9$ ($\rho = 0.1$). Solid lines indicate the R-D curve of the differential sequence ($\{Z_n\}$) and dotted lines indicate the R-D curve of the differential sequences after interpolation ($\{S_n\}$ and $\{T_n\}$).} \label{fig:AR_RD} \end{figure} The main difference between the above two methods (DPCM and DPCMI) is the number of samples to be coded and the variance of the respective sequences. The number of samples encoded by DPCM is half the number encoded by DPCMI, while DPCM encodes a sequence with 4 times larger variance than that encoded by DPCMI.
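As a sanity check on these variances, the following short Python sketch (with illustrative parameters $\rho = 0.9$ and $\sigma_W = 1$, chosen to match the upper curve of Fig.~\ref{fig:AR_RD}) simulates the AR process and compares the empirical variances of $\{Z_n\}$ and $\{T_n\}$ against the closed forms used in (\ref{eq:RD_1}) and (\ref{eq:RD_T}):

```python
# Monte Carlo check: Var(Z_n) = 2*sigma_W^2/(1+rho), and T_n = Z_n/2
# so Var(T_n) = Var(Z_n)/4. Parameters rho and sigma_w are illustrative.
import numpy as np

rng = np.random.default_rng(0)
rho, sigma_w, n = 0.9, 1.0, 300_000

w = rng.normal(0.0, sigma_w, n)
x = np.empty(n)
x[0] = rng.normal(0.0, sigma_w / np.sqrt(1 - rho**2))  # stationary start
for i in range(1, n):
    x[i] = rho * x[i - 1] + w[i]        # X_n = rho*X_{n-1} + W_n

z = x[1:] - x[:-1]   # differential sequence Z_n of the original samples
t = z / 2.0          # differences T_n = S_n after linear interpolation

print(np.var(z), 2 * sigma_w**2 / (1 + rho))  # empirical vs. theoretical
print(np.var(t), np.var(z) / 4)
```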
There is a clear trade-off between the two methods, since reducing the number of samples tends to reduce the rate, while increasing the variance tends to increase the rate per sample. Fig.~\ref{fig:AR_RD} compares the performance of the two methods with different AR coefficients. In the figure, the performance of DPCM is better than that of DPCMI at higher rates but is worse at lower rates. This shows that at high rates having to encode fewer samples (DPCM) is better, even though the variance of those samples is higher. The trade-off is reversed at low bit rates. Intuitively, DPCM starts to have an error at a lower bit-rate than DPCMI due to the smaller number of samples, but its error increases faster than that of DPCMI due to the larger variance. Therefore DPCM performs better at higher rates and can be worse at lower rates. Instead of theoretical R-D curves, we now consider operational R-D curves of general DPCM coders that use a uniform quantizer and an entropy coder. We assume that a given quantizer has $N$ quantization bins. In the DPCM system, let each bin size be $\Delta$ (except the top and bottom bins, assuming that the range of the source is infinite), let the average MSE be $d$ and, after entropy coding, let the average rate be $r$. Then, in the DPCMI system, each bin size can be $\Delta / 2$ and the average MSE is $d/4$ for $T_n$, since the maximum sample value of $T_n$ is half the maximum value for $Z_n$. This is because $T_n = (X_n-X_{n-1})/2 = Z_n/2$. However the number of samples in a given bin is exactly the same as that in the corresponding bin of $Z_n$, so the rate is still $r$ after applying the same entropy coder. Therefore, if the R-D curve of DPCM passes through a point $(r,d)$ then that of DPCMI passes through the point $(2r,d/4)$ (note that $T_n$ and $S_n$ each have rate $r$ in the DPCMI system).
This relation is formulated as \begin{equation}\label{eq:RD_F} G(d) = \frac {1}{2} H(\frac {d}{4}), \end{equation} where $G$ and $H$ are the R-D functions of DPCM and DPCMI, respectively. This also indicates that in general there will be a trade-off point between the CAI and IAD approaches. In (\ref{eq:RD_1}), the R-D function is determined by ${Z_n}$ (i.e., the coefficients in a DPCM domain corresponding to a non-interpolated sequence) and this is not equivalent to the R-D function of the IAD method (in which the R-D function is determined by the interpolated sequence after decompression). In order to evaluate the performance of the IAD method, we first need to know the R-D performance of the source sequence. The R-D performance of the difference sequences (generated by the open loop DPCM system) may not be the same as that of the source sequences, since the decoder only has a quantized version of the previous sample values. But, in the orthogonal (or approximately orthogonal) transform coding case, which is more relevant for image coding (i.e., DCT or DWT), the distortion in the transform domain is the same as or close to that in the pixel domain, depending on the transform. Therefore the approximation used in open loop DPCM is not required and the analysis approximately holds. The following shows that in the IAD method, the average MSE can be decreased after interpolation.
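The trade-off point between the two theoretical curves can also be located explicitly. Setting $R_1(D) = R_2(D)$ gives $D^* = \sigma_W^2/(8(1+\rho))$, with DPCM cheaper at distortions below $D^*$ (high rates) and DPCMI cheaper above it. A small Python sketch (the values $\rho = 0.9$, $\sigma_W = 1$ are illustrative assumptions) verifies this:

```python
# Crossing point of the theoretical R-D curves R1 (DPCM) and R2 (DPCMI).
import numpy as np

rho, sigma_w = 0.9, 1.0
a = sigma_w**2 / (1 + rho)

def r1(d):                      # rate of the DPCM difference sequence
    return 0.5 * np.log2(2 * a / d)

def r2(d):                      # combined rate of T_n and S_n (DPCMI)
    return np.log2(a / (2 * d))

d_star = a / 8                  # analytic solution of r1(d) = r2(d)
print(r1(d_star), r2(d_star))   # equal rates at the crossover
print(r1(a / 16) < r2(a / 16))  # True: DPCM needs fewer bits at small D
print(r2(a / 4) < r1(a / 4))    # True: DPCMI needs fewer bits at large D
```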
Let us assume that the average MSE of the reconstructed sequence ($\{\hat{X}_n\}$) before interpolation is $d$; then the sample interpolated between $\hat{X}_n$ and $\hat{X}_{n+1}$ is $(\hat{X}_n+\hat{X}_{n+1})/2$, and its average MSE satisfies \setlength{\arraycolsep}{0.0em} \begin{eqnarray}\label{eq:MSEI} E(\frac{\hat{X}_n+\hat{X}_{n+1}}{2}-\frac{X_n+X_{n+1}}{2})^2 &&{}={}E(\frac{(\hat{X}_n-X_n)^2+(\hat{X}_{n+1}-X_{n+1})^2}{4})\nonumber\\ &&~~~~{+}E(\frac{(\hat{X}_n-X_n)\cdot(\hat{X}_{n+1}-X_{n+1})}{2})\nonumber\\ &&{}\leq{} \frac{E(\hat{X}_n-X_n)^2+E(\hat{X}_{n+1}-X_{n+1})^2}{2}\nonumber\\ &&{}={}d~\text{.} \end{eqnarray} \setlength{\arraycolsep}{5pt} In (\ref{eq:MSEI}), the inequality comes from \setlength{\arraycolsep}{0.0em} \begin{eqnarray} &&E((\hat{X}_n-X_n)\cdot(\hat{X}_{n+1}-X_{n+1})) {}\leq{} \frac{E(\hat{X}_n-X_n)^2+E(\hat{X}_{n+1}-X_{n+1})^2}{2}~\text{.} \end{eqnarray} \setlength{\arraycolsep}{5pt} Equality holds only when $E((\hat{X}_n-X_n)-(\hat{X}_{n+1}-X_{n+1}))^2 = 0$ (i.e., if the errors of the two reconstructed pixels are the same then the error of the interpolated pixel equals that of the reconstructed pixels). If the errors of $\hat{X}_n$ (i.e., $\hat{X}_n -X_n$) and $\hat{X}_{n+1}$ are uncorrelated and have zero mean then the average MSE of the interpolated pixel is $d/2$. Therefore the average distortion can be decreased after interpolation. This means that interpolated pixels need not be considered separately in the performance analysis (since their distortion is always smaller than or equal to that of the pixels coded), and so the analysis based on $R_{1}(D)$ and $R_{2}(D)$ is still valid for the interpolated sequence. Also, under the assumption that the R-D functions in the DPCM and pixel domains are similar, the IAD method provides better performance over a larger range of compression ratios. In the 2-D case, ${X_n}$ and ${Y_n}$ can be considered as non-interpolated and interpolated images, respectively.
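Inequality (\ref{eq:MSEI}) and the uncorrelated-error case can be checked numerically. The Python sketch below draws hypothetical zero-mean reconstruction errors with variance $d = 1$ (the error model is an assumption for illustration only):

```python
# Monte Carlo check of (eq:MSEI): the MSE of a linearly interpolated sample
# never exceeds the average MSE d of its two neighbors, and equals about d/2
# when the two reconstruction errors are uncorrelated.
import numpy as np

rng = np.random.default_rng(1)
n, d = 1_000_000, 1.0
e0 = rng.normal(0.0, np.sqrt(d), n)   # errors of X_hat_n
e1 = rng.normal(0.0, np.sqrt(d), n)   # independent errors of X_hat_{n+1}

mse_interp = np.mean(((e0 + e1) / 2) ** 2)
print(mse_interp)                     # close to d/2 = 0.5

mse_equal_errors = np.mean(((e0 + e0) / 2) ** 2)
print(mse_equal_errors)               # equality case: matches the MSE of e0
```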
Since transforms, rather than DPCM, are typically used for images, we can consider ${Z_n}$ and ${T_n}$ (and ${S_n}$) to represent the coefficients in the transform domain of the non-interpolated and interpolated images, respectively. Although the order of the processing steps is the same in the 1-D and 2-D cases, our analysis of the 1-D sequence may not directly apply to the 2-D case, since the functions involved in each step are not the same. However, we can expect the IAD method to outperform the CAI method at very high rates (at least as long as the rates are higher than those required for the IAD method without compression), since the IAD method uses around half the amount of data. Also, in the IAD method, images may have larger variance, since they have weaker spatial correlation due to the larger distance between adjacent pixels. So, similar to the result in the 1-D case (shown in Fig.~\ref{fig:AR_RD}), the R-D curve of the IAD method drops more sharply than that of the CAI method. Therefore, the R-D curves of the IAD and CAI methods may cross as the bit-rate is decreased and, depending on the location of the crossing point, the IAD method can provide better performance in practical applications such as digital cameras. \section{IMAGE TRANSFORM TO REDUCE REDUNDANCY} \label{sec:algorithm} In order to demonstrate a practical IAD scheme our first goal is to transform the non-interpolated input data into a format suitable for general image coders. The input data of the IAD method consists of only one color value for each pixel, while in the CAI method there are three color values for each pixel, obtained by interpolation. In general image coders, it is assumed that the incoming data is uniform (i.e., all pixels have the same color components) and that the image has a rectangular shape. Our goal is then to design a reversible image transform that can produce image data suitable for coding (without increasing the amount of data to be coded).
A detailed diagram of the encoding and decoding blocks of the IAD method is shown in Fig.~\ref{fig:diagnew}. \begin{figure}[tb] \centering \includegraphics[width=11cm]{diagram_new} \caption{Detailed diagram of the encoding and decoding parts of the proposed IAD method. Luminance (Y) data needs several transforms due to data location after format conversion, whereas chrominance (Cb/Cr) data can be coded directly.} \label{fig:diagnew} \end{figure} First, we propose a color format conversion algorithm, since image coders normally use the YCbCr format. After format conversion, luminance (Y) data is not available at every pixel position. In order to make the Y data compact, we propose a transform that relocates pixels and removes the positions for which no Y data is available. Then we show how to encode the resulting data, which no longer has a rectangular shape. \subsection{Color format conversion} \label{sec:format_conversion} In the CAI method, the data to be compressed is in RGB format (obtained by interpolating the CFA data). This data is converted to YCbCr format before compression. In JPEG, normally 4:2:2 or 4:2:0 sampling is used. In JPEG2000, chrominance coefficients in high frequency bands after the wavelet transform are not coded, since the human visual system is less sensitive to chrominance data. In the IAD method, to avoid increasing the redundancy, the number of pixels should not be increased after color format conversion. While there are several different methods to achieve this, we choose a method such that 2 green, 1 red and 1 blue pixels are converted to 2 Y, 1 Cb and 1 Cr pixel values. This is reasonable since luminance data is more important than chrominance data and the format conversion can be reversible. We first propose a simple and fast method based on 2 by 2 blocks and then propose more complex methods that provide better performance.
\subsubsection{Format conversion based on 2 by 2 blocks} \begin{figure}[tb] \begin{minipage}[b]{1.0\linewidth} \centering \includegraphics[width=9cm]{2by2} \centerline{(a) \hspace{2.7cm}(b)\hspace{2.7cm}(c)}\vspace{-0.0cm} \end{minipage} \caption{The gray region in (a) indicates the possible location of Y data after the format conversion. (b) shows the distance between two green (or luminance) pixels. (c) shows the location of Y and Cr (Cb) data in a 2 by 2 block.} \label{fig:distance} \vspace{0.0cm} \end{figure} In this format conversion, each 4 pixel block contains 2 green, 1 red and 1 blue pixels. Then two luminance and two chrominance values (i.e., Cb and Cr) are obtained by using one green pixel for each of the two luminances, and using the average of the two green pixels for the chrominance calculation. This operation can be represented as follows: \begin{equation}\label{eq:fc} \left[\hspace{-0.1cm} \begin{array}{c} Y^{ul} \\ Y^{lr} \\ Cb \\ Cr \end{array} \hspace{-0.1cm} \right] = \left[\hspace{-0.1cm} \begin{array}{clcr} a_{11} & a_{12} & 0 & a_{13} \\ a_{11} & 0 & a_{12} & a_{13} \\ a_{21} & \frac{a_{22}}{2} & \frac{a_{22}}{2} & a_{23} \\ a_{31} & \frac{a_{32}}{2} & \frac{a_{32}}{2} & a_{33} \end{array} \hspace{-0.1cm}\right]\hspace{-0.2cm} \left[\hspace{-0.1cm} \begin{array}{c} R \\ G^{ul} \\ G^{lr} \\ B \end{array} \hspace{-0.1cm}\right] + \left[\hspace{-0.1cm} \begin{array}{c} 0 \\ 0 \\ 128 \\ 128 \end{array} \hspace{-0.1cm}\right], \end{equation} where, as shown in Fig.~\ref{fig:distance} (b) and (c), the superscripts $ul$ and $lr$ indicate the upper left and lower right positions in a 2 by 2 CFA block, respectively.
The coefficients $a_{ij}$ are the $(i,j)$ entries of the standard RGB to YCbCr conversion matrix~\cite{Hamilton92JFIF} defined as follows: \begin{equation}\label{eq:rgb2ycbcr} \left[\hspace{-0.1cm} \begin{array}{c} Y \\ Cb \\ Cr \end{array} \hspace{-0.1cm}\right] = \left[\hspace{-0.2cm} \begin{array}{clcr} 0.299 & 0.587 & 0.114 \\ -0.169 & -0.331 & 0.500 \\ 0.500 & -0.419 & -0.081 \end{array} \hspace{-0.2cm}\right]\hspace{-0.2cm} \left[\hspace{-0.1cm} \begin{array}{c} R \\ G \\ B \end{array} \hspace{-0.1cm}\right] + \left[\hspace{-0.2cm} \begin{array}{c} 0 \\ 128 \\ 128 \end{array} \hspace{-0.2cm}\right] \end{equation} We now need to decide what the location of these $Y^{ul}Y^{lr}CbCr$ pixels should be. For the $Cb$ and $Cr$ data, each component could be located in any fixed position in the 2 by 2 block, since only one value of each chrominance is generated for the block. In the Y data case, however, one ($Y^{ul}$) should be located in the upper left region, since $Y^{ul}$ is the weighted average of $G^{ul}$, $R$ and $B$ (as shown in Fig.~\ref{fig:distance} (a)), and the other ($Y^{lr}$) should be located in the lower right region of the block. In our algorithm, we put the Y data at each green pixel position because green contributes roughly 60\% of the Y data (the shape of the Y image is shown in Fig.~\ref{fig:capt_3} (a)). The location of the Y data is important, since improperly located Y data induces artificial high frequency components which can degrade the coding performance. This method is simple and fast, but the YCbCr data of each 2 by 2 block depends only on the RGB data in that 2 by 2 block. Therefore, the YCbCr data potentially has more high frequency components than that generated by using bilinear interpolation (because each block is treated independently, while in the bilinear interpolation case each Y term is obtained from a larger set of pixels).
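The reversibility claimed for this conversion can be illustrated in Python: build the 4 by 4 matrix of (\ref{eq:fc}) from the standard coefficients of (\ref{eq:rgb2ycbcr}) and check that a block of sample RGB values (the values 100, 150, 160, 90 are arbitrary) survives a round trip. This is a sketch under those assumptions, not the paper's implementation:

```python
# Round-trip check of the 2-by-2 block format conversion (eq:fc).
import numpy as np

a = np.array([[ 0.299,  0.587,  0.114],    # standard RGB -> YCbCr
              [-0.169, -0.331,  0.500],    # coefficients a_ij
              [ 0.500, -0.419, -0.081]])

M = np.array([[a[0,0], a[0,1],    0.0,       a[0,2]],
              [a[0,0], 0.0,       a[0,1],    a[0,2]],
              [a[1,0], a[1,1]/2,  a[1,1]/2,  a[1,2]],
              [a[2,0], a[2,1]/2,  a[2,1]/2,  a[2,2]]])
offset = np.array([0.0, 0.0, 128.0, 128.0])

rgb = np.array([100.0, 150.0, 160.0, 90.0])   # (R, G_ul, G_lr, B)
ycc = M @ rgb + offset                        # (Y_ul, Y_lr, Cb, Cr)
rgb_back = np.linalg.solve(M, ycc - offset)   # reverse format conversion
print(np.allclose(rgb_back, rgb))             # True: conversion is reversible
```

The 4 by 4 matrix is nonsingular, which is what makes the reverse format conversion in the decoder well defined.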
\subsubsection{Format conversion based on larger blocks} In order to generate smoother YCbCr data, we can consider a whole image as a block. After generating the RGB data for each pixel by using bilinear interpolation, Y, Cb and Cr can be calculated from the RGB data on green, blue and red pixels respectively. These positions are chosen according to the degree of influence of each color (i.e., the dominant color components of Cb and Cr are blue and red, respectively). Although each pixel has RGB data after interpolation, the amount of YCbCr data is not increased, since each pixel position has only one component, either Y, Cb or Cr. This format conversion is also simple but the reverse format conversion is more complex due to bilinear interpolation. For example, as in Fig.~\ref{fig:1}, we consider that the image size is 4 by 4. Then $Y_{22}$ (the luminance value of the $G_{22}$ position) is calculated as \begin{equation}\label{eq:YonG} \begin{split} Y_{22} &= \left[0~~\frac{a_{11}}{2}~~0~~0~~\frac{a_{13}}{2}~~a_{12}~~\frac{a_{13}}{2}~~0~~0~\frac{a_{11}}{2}~~0~~0~~0~~0~~0~~0 \right] \\ & \quad \cdot \left[G_{11}~R_{12}~G_{13}~R_{14}~B_{21}~G_{22}~B_{23}~G_{24}~G_{31}~R_{32}~G_{33}~R_{34}~B_{41}~G_{42}~B_{43}~G_{44} \right]^T~\text{.} \end{split} \end{equation} But, in the reverse format conversion, $G_{22}$ is calculated as \begin{equation}\label{eq:GonY} \begin{split} G_{22} &= \left[\text{\small{-1243 ~-3832~-1079~217~-1763~14895~-1736~ -135~-974~-3658~-868~136~35~-638~26~37}} \right] \cdot 10^{-4} \\ & \quad \cdot \left[Y_{11}~Cr_{12}~Y_{13}~Cr_{14}~Cb_{21}~Y_{22}~Cb_{23}~Y_{24}~Y_{31}~Cr_{32}~Y_{33}~Cr_{34}~Cb_{41}~Y_{42}~Cb_{43}~Y_{44} \right]^T~\text{.} \end{split} \end{equation} From the example of (\ref{eq:YonG}) and (\ref{eq:GonY}), we can see that in general the forward format conversion may be based on a few neighboring pixels but reverse conversion could require all pixels in the same block. 
Therefore, in order to generate the original RGB data from the YCbCr data, a $w\cdot h$ by $w\cdot h$ reverse format conversion matrix is needed, where $w$ and $h$ are the width and height of the image respectively. Although the decoding process (including reverse format conversion) can be done in a system with high computing power (e.g., personal computers), the matrix is too large and the reverse conversion may still be too time consuming. In order to reduce the computational complexity, the above format conversion method can be applied to blocks generated by dividing the source image. Since interpolation is done using only the pixels in the block, the number of columns (and rows) of the reverse format conversion matrix is reduced to $W\cdot H$, where $W$ and $H$ are the width and height of a block respectively. In this block based format conversion, the bilinear interpolation needs to be modified since only the information of pixels in the same block can be used for interpolation. For example, the green value at position $(i,j)$ (which corresponds to a red or blue pixel position) can be calculated as \begin{equation}\label{eq:GonR} G_{ij} = ~\frac{G'_{(i-1)j} + G'_{(i+1)j} + G'_{i(j-1)}+ G'_{i(j+1)} }{I_{(i-1)j}+I_{(i+1)j}+I_{i(j-1)}+I_{i(j+1)}}, \end{equation} where $G'_{kl} = I_{kl} \cdot G_{kl}$, and $I_{kl}$ is an indicator function defined as follows. \begin{equation}\label{eq:I} I_{kl} = \begin{cases} 1,&~\text{if $kl$ and $ij$ are in the same block} \\ 0,&~\text{otherwise.}\\ \end{cases} \end{equation} Then $G_{ij}$ is used to calculate $Cb_{ij}$ (or $Cr_{ij}$) at the blue (or red) position. Note that, in the decoding process, an approximation method which uses only neighboring pixels can be used, since more distant pixels do not have a significant influence (see the coefficients in (\ref{eq:GonY})). But in this paper, we focus on a block-based method in order to avoid the effect of error in format conversion.
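The indicator-weighted average of (\ref{eq:GonR}) can be sketched in Python; the helper name `green_at` and the toy 4 by 4 block below are ours, for illustration only:

```python
# Block-limited bilinear interpolation of green (eq:GonR): average only the
# green neighbors whose positions fall inside the same block (indicator I_kl).
import numpy as np

def green_at(g, i, j, top, left, h, w):
    """g: 2-D array, NaN where no green sample exists; the block spans
    rows [top, top+h) and columns [left, left+w)."""
    num, den = 0.0, 0
    for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        k, l = i + di, j + dj
        in_block = top <= k < top + h and left <= l < left + w
        if in_block and not np.isnan(g[k, l]):    # I_kl = 1
            num += g[k, l]
            den += 1
    return num / den

# toy 4x4 block with green samples on a quincunx lattice
g = np.full((4, 4), np.nan)
for k in range(4):
    for l in range(4):
        if (k + l) % 2 == 0:
            g[k, l] = 10.0 * k + l

# at (0, 1) the neighbor (-1, 1) lies outside the block and is skipped
print(green_at(g, 0, 1, 0, 0, 4, 4))   # (0 + 11 + 2) / 3
```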
The performance comparison among format conversions with different block sizes is given in Section~\ref{sec:results_format_conversion}. \subsection{Nonlinear transform to compact luminance data} \label{sec:nonlinear} \begin{figure}[tb] \begin{minipage}[b]{1.0\linewidth} \centering \includegraphics[width=9.5cm]{y_tr} \\ \centerline{(a) \hspace{2.4cm}(b) \hspace{2.4cm}(c) }\vspace{-0.0cm} \end{minipage} \caption{Transform of the Y (luminance) image. In the figure, dark and light gray pixels indicate Y data and white pixels indicate empty positions. (a) indicates the quincunx located Y image after format conversion; (b) and (c) indicate the Y image after transform. In (b), each odd column is shifted to the even column on its left and in (c), each pixel is rotated 45 degrees clockwise. } \label{fig:capt_3} \end{figure} After the color format conversion, the Y values are not available at all the original pixel positions (since the Y data is located only at the positions of the green pixels), so general image compression methods cannot be directly applied to compress the Y image. Therefore another reversible transform is needed to change the Y pixels located on a quincunx lattice (see Fig.~\ref{fig:capt_3} (a)) into a conventional rectangular arrangement (i.e., so that we obtain a Y image with no blank pixels). As in Fig.~\ref{fig:capt_3} (b), one possible simple transform is a horizontal pixel shift where pixels in odd columns are shifted to the even column on their left and all odd columns are removed. This transform can be formulated as \begin{equation}\label{eq:shift} if~ x+y = odd,~ \left[ \begin{array}{c} X\\Y \end{array} \right] = \begin{cases} ~\left[~~ \begin{array}{c} {\frac{x}{2}}\\y \end{array} ~~\right],~if~ x = even,\\ ~\left[ \begin{array}{c} {\frac{x-1}{2}}\\y \end{array} \right],~if~ x = odd, \end{cases} \end{equation} where $(x,y)$ and $(X,Y)$ are the pixel positions in the images before and after the transform, respectively. Here we assume that the origin is the lower left corner of the image.
A vertical shift transform can be similarly defined, but we focus here on the horizontal shift transform. After the transform is performed, a vertical edge (e.g., Fig.~\ref{fig:capt_3} (b)) leads to artificial high frequency components in the vertical direction, which degrades coding performance. This also happens when the edge is vertically biased (i.e., steeper than a 45 degree line). Note that if a vertical shift had been chosen the same problem would arise with respect to horizontal (and horizontally biased) edges. Under JPEG coding with a high compression ratio, most of this artificial high frequency information may then be lost. The weak spatial correlation also contributes to the degraded results. If the distance between adjacent pixels in a row (or a column) of the CFA is assumed to be 1 then, after horizontal shifting, the vertical and horizontal distances of adjacent pixels in the Y data are $\sqrt{2}$ and $2$ respectively (see Fig.~\ref{fig:distance} (b)). An alternative simple transform to remove blank pixels among the Y data, which does not pose these problems, is a 45 degree rotation formulated as \begin{equation}\label{eq:rotation} \begin{split} \left[ \begin{array}{c} X \\ Y \end{array} \right] = \frac{1}{2} \left( \left[ \begin{array}{clcr} 1 & 1 \\ -1 & 1 \end{array} \right] \left[ \begin{array}{c} x\\y \end{array} \right] + \left[ \begin{array}{c} -1 \\ w-1 \end{array} \right] \right),\\ ~~\text{for }x + y = odd, \end{split} \end{equation} where $w$ indicates the image width. As shown in Fig.~\ref{fig:capt_3} (c), after rotation, the Y data is concentrated in the center of the image with an oblique rectangular shape. This transform does not induce artificial high frequencies and the distances between adjacent pixels in a row or column are now $\sqrt{2}$. But since the data no longer occupies a standard rectangular area, some redundancy is added when the boundary pixels are coded.
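Both relocation mappings are easy to verify in code. The Python sketch below (an illustration, assuming an 8 by 8 image with the lower-left origin used in the text) checks that neither the shift (\ref{eq:shift}) nor the rotation (\ref{eq:rotation}) maps two quincunx positions to the same target, i.e., that both transforms are reversible:

```python
# Injectivity check for the shift (eq:shift) and 45-degree rotation
# (eq:rotation) of the quincunx Y lattice (pixels with x + y odd).
def shift(x, y):
    return (x // 2, y)          # x/2 for even x, (x-1)/2 for odd x

def rotate(x, y, w):
    return ((x + y - 1) // 2, (y - x + w - 1) // 2)

w = h = 8
quincunx = [(x, y) for x in range(w) for y in range(h) if (x + y) % 2 == 1]

for name, f in (("shift", lambda p: shift(*p)),
                ("rotate", lambda p: rotate(*p, w))):
    targets = [f(p) for p in quincunx]
    assert len(set(targets)) == len(quincunx), name  # no collisions
print("both transforms are one-to-one on the quincunx lattice")
```

For the rotation the inverse is explicit: $x = X - Y + w/2$ and $y = X + Y - w/2 + 1$.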
This is addressed in the next section, and the performance comparison between the shift and rotation transforms is presented in Section~\ref{sec:results_nonlinear_transform}. As shown in (\ref{eq:shift}) and (\ref{eq:rotation}), the complexity of both methods is low. The rotation method needs 1 comparison, 2.5 additions and 1 shift operation per pixel, whereas the shift method needs 1.5 comparisons, 1.25 additions and 0.5 shift operations. \subsection{Data cropping for images obtained by the rotation transform} \label{sec:cropping} After the horizontal shift transform (see Fig.~\ref{fig:capt_3} (b)) the Y data can be directly encoded. But the shape of the Y data after the rotation transform is not rectangular (see Fig.~\ref{fig:capt_3} (c)) and thus coding the shape bounding box (i.e., the whole rectangular region that includes the oblique rectangular shape of the Y data) would result in some coding inefficiency. Therefore a proper cropping method is needed to remove the data outside of the oblique rectangular area containing the Y data. \subsubsection{Data cropping for JPEG (DCT based coders)} In JPEG, the size of a DCT block is 8 by 8 and blocks that consist of blank pixels only (blank blocks) do not need to be coded. In addition, we do not need to send any side information about the location of the Y data, since it can be calculated at the decoder given the size of the original image. As shown in Fig.~\ref{fig:capt_3} (c), the number of blank blocks depends on the width and height of the image, and 6 bits (2 bits for a zero DC value and 4 bits for EOB (end-of-block)) are needed to code a blank block when the standard Huffman tables of JPEG are employed. In the case of 512 by 512 images, 1984 of the 4096 blocks are blank and, by not coding blank blocks, we can save 1488 bytes. The blocks containing boundary pixels of the Y data (boundary blocks) also contain blank pixels, since the Y data has an oblique rectangular shape.
As a result, compared to the shift method, the number of blocks to be coded is increased by $(w+h)/16$ when the width and height are multiples of 16. Proper padding methods are needed for boundary blocks, since the discontinuity between blank and data pixels in a block creates artificial edges that require a significant coding rate. Because boundary blocks have Y data only in the positions of an upper or lower triangular region, padding can be simply done by diagonal mirroring using a data copy, where the source data position is determined by table look-up. Four different look-up tables are needed, since each boundary requires a different copy pattern. Better performance can be achieved by using low-pass extrapolation (LPE)~\cite{Kaup99CSVT} or the shape adaptive DCT (SA-DCT)~\cite{Sikora95SP}\cite{Kaup99CSVT}\cite{Stas99CSVT}. LPE is relatively simple and provides good R-D performance, whereas SA-DCT provides better performance but is more complex. \subsubsection{Data cropping for SPIHT (DWT based coders)} Contrary to JPEG, SPIHT is not a block based coder and so the method used with JPEG cannot be applied. Therefore we need to introduce a new coding method in order to code the Y data in the oblique rectangular area only. In the still image coding of MPEG-4, arbitrarily shaped objects are coded by using the shape adaptive DWT (SA-DWT)~\cite{Li00CSVT}. SA-DWT uses a length adaptive 1-D DWT after finding the first non-blank pixel in each line, and the low-pass (high-pass) wavelet coefficients are placed into the corresponding locations in the low-pass (high-pass) band (i.e., the shape is preserved in each band after the transform). One of the good features of SA-DWT is that the number of coefficients after SA-DWT is identical to the number of data pixels. In order to code data pixels only, we employ SPIHT with SA-DWT. But without modifying the entropy coding in SPIHT, some redundancy is still added, since SPIHT uses a two by two block arithmetic coding algorithm.
\begin{figure}[tb] \centering \includegraphics[width=8.5cm]{mask} \caption{The coefficient map after SA-DWT (when the width and height of the image are 512). Gray regions indicate meaningful coefficients and black and white regions indicate blank coefficients.} \label{fig:mask} \end{figure} In Fig.~\ref{fig:mask}, only the gray regions contain meaningful coefficients after SA-DWT. Out of the 16 two by two blocks in the lowest frequency band, only the 4 blocks located in the corners consist of blank coefficients. Since all descendants of these blocks (white regions in the figure) are blank coefficients, these regions are not coded. But blank coefficients in the black regions in the figure are involved in coding due to the entropy coding scheme of SPIHT, and since they are not skipped some redundancy is introduced. The complexity of the SA-DWT is no more than that of a conventional DWT applied to an image the size of the shape bounding box~\cite{Li00CSVT}. Here, the width and height of the shape bounding box are $(w+h)/2$, but the complexity is much lower since the shape is at most half the size of the shape bounding box. Also, a simpler SA-DWT method can be applied since the shape is convex. In fact, after finding the non-blank data positions (which can be calculated from the width and height of the image), the complexity is about half that of the DWT for interpolated images. Also, in the entropy coding of SPIHT, the white region in Fig.~\ref{fig:mask} is not coded and the tree related to the black region is terminated early when all the descendants are blank coefficients. \subsection{Influence of chrominance data over luminance data } In the IAD algorithms, one Cb (Cr) value is chosen out of 4 CFA pixels and the width and height of the Cb (Cr) image are $w/2$ and $h/2$, respectively. Therefore the data size is reduced to a quarter of that in the conventional technique, whereas the pixel distance is doubled.
But contrary to the CAI method, in which the coding results of luminance and chrominance data are fully separated, chrominance data with large distortion can add distortion to luminance data after interpolation, and vice versa. For the case of format conversion with 2 by 2 blocks, the coding error in the RGB data is calculated as follows. \begin{equation}\label{eq:error_influence} \left[\hspace{-0.1cm} \begin{array}{c} e(R) \\ e(G^{ul}) \\ e(G^{lr}) \\ e(B) \end{array} \hspace{-0.1cm}\right] = \left[\hspace{-0.1cm} \begin{array}{clcr} a_{11} & a_{12} & 0 & a_{13} \\ a_{11} & 0 & a_{12} & a_{13} \\ a_{21} & \frac{a_{22}}{2} & \frac{a_{22}}{2} & a_{23} \\ a_{31} & \frac{a_{32}}{2} & \frac{a_{32}}{2} & a_{33} \end{array} \hspace{-0.1cm}\right]^{-1} \left[ \hspace{-0.1cm}\begin{array}{c} e(Y^{ul}) \\ e(Y^{lr}) \\ e(Cb) \\ e(Cr) \end{array} \hspace{-0.1cm}\right], \end{equation} where $e(\cdot)$ is the error of each component due to lossy coding. Since the final Y data (after interpolation) is calculated from the distorted RGB data, i.e., from distorted YCbCr data, the error in the final Y data depends on the quantization errors in both the Y and Cb (Cr) data. For example, after applying bilinear interpolation, the error of the final Y on $G_{33}$ (in Fig.~\ref{fig:1}) is calculated as follows. \begin{equation}\label{eq:error_influence_ex} \begin{split} e(\tilde{Y_{33}}) &= \frac{a_{11}}{2}(e(R_{32})+e(R_{34}))+a_{12}~e(G_{33})+\frac{a_{13}}{2}(e(B_{23})+e(B_{43}))\\ &=0.897~e(Y_{33})+0.075~(e(Y_{31})+e(Y_{42}))+0.029~(e(Y_{13})+e(Y_{24}))-0.103~e(Y_{44})\\ &~~+0.210~(e(Cr_{32})-e(Cr_{34}))+0.101~(e(Cb_{43})-e(Cb_{23})), \end{split} \end{equation} where $\tilde{Y}$ is the final Y, and this comes from (\ref{eq:YonG}) and (\ref{eq:error_influence}). In (\ref{eq:error_influence_ex}), the error in the final Y depends not only on the error in Y but also on the difference between the errors of the two Cb (Cr) values involved in the interpolation.
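The cross-component leakage of (\ref{eq:error_influence}) can be illustrated numerically. In the Python sketch below (the error values 2.0 and $-1.0$ are hypothetical, and the matrix uses the standard conversion coefficients), quantization errors confined to Cb and Cr propagate into all four RGB components of the block:

```python
# Error propagation through the inverse 2x2-block conversion
# (eq:error_influence): chroma-only errors leak into every RGB value.
import numpy as np

a = np.array([[ 0.299,  0.587,  0.114],
              [-0.169, -0.331,  0.500],
              [ 0.500, -0.419, -0.081]])
M = np.array([[a[0,0], a[0,1],    0.0,       a[0,2]],
              [a[0,0], 0.0,       a[0,1],    a[0,2]],
              [a[1,0], a[1,1]/2,  a[1,1]/2,  a[1,2]],
              [a[2,0], a[2,1]/2,  a[2,1]/2,  a[2,2]]])

e_ycc = np.array([0.0, 0.0, 2.0, -1.0])   # errors only in Cb and Cr
e_rgb = np.linalg.solve(M, e_ycc)         # resulting (e(R), e(G_ul), e(G_lr), e(B))
print(e_rgb)
```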
Therefore, in order to maximize the quality of the final Y and Cb (Cr) data under a given bit budget, the bit allocation between the Y and Cb (Cr) data needs to be considered. In SPIHT, each component is coded separately and there is no explicit mechanism for bit allocation, whereas in JPEG the bit allocation to each component cannot be explicitly controlled and is determined by the chosen quantization tables and the data characteristics. Therefore in SPIHT, it is necessary to determine the bit allocation between luminance and chrominance data based on human visual sensitivity to each component. Moreover, in the proposed methods, the bit-rate of one component affects the quality of the other components, so the overall performance changes depending on the bit allocation. Here, we simply consider bit allocation based on the quality of the luminance data, since the human visual system is more sensitive to luminance data. Fig.~\ref{fig:bit_allocation} (a) shows the quality change of the luminance data after interpolation depending on the overall bit-rate and the bit-rate of the luminance data. The intersection of the curves in the figure shows that a larger bit budget for Y does not guarantee a higher PSNR of Y. Also, we find that the PSNR of Y is maximized when the bit budget of Y is roughly $80\%$ of the overall bit budget. Similarly, Fig.~\ref{fig:bit_allocation} (b) shows the quality change of the chrominance data after interpolation. Although the PSNR decreases sharply, this happens in the range where the bit-rate of the luminance data is lower than that of the chrominance data. In general, the bit-rate of luminance data is higher than that of chrominance data and this drop does not have a significant effect. This also means that we can focus on the quality of the luminance data. Since the R-D characteristics are different for each image, we fixed the bit-rate of the Cb (Cr) data to be a quarter of that of the Y data (i.e., Y receives $66.7\%$ of the overall budget).
\begin{figure}[tb] \centering \begin{tabular}{cc} \includegraphics[width=7.6cm]{color_inf3} \\ (a)\\ \includegraphics[width=7.9cm]{color_inf4} \\ (b)\\ \end{tabular} \caption{The curves indicate (a) luminance and (b) chrominance PSNR after interpolation depending on the overall bit-rate. SPIHT is used as a compression method and PSNR is calculated from the distortion between the interpolated image before compression and the final output image of the proposed methods. The bit-rates shown in the box correspond to (a) luminance and (b) chrominance data.} \label{fig:bit_allocation} \end{figure} \section{Experimental results and discussion} \label{sec:capt_results} In order to confirm the validity of the IAD algorithms, we implemented these algorithms (horizontal shift transform with 2 by 2 block format conversion and rotation transform with 2 by 2 and 64 by 64 block format conversion) and compared the results with those obtained with CAI (JPEG with 4:2:2 format and SPIHT) methods. Due to the lack of CFA raw data, we generate CFA raw data by using test images such as ``Baboon'', ``Lenna'' and ``Macaw'' (H : 512, W : 512, 24 bit color, 786.432KB). In fact, what we obtain is not CFA raw data, since in these images all image processing functions, except interpolation, have already been applied. Since our main focus has been on interpolation and compression methods, other image processing parts are not considered. Our results are the same as the results achieved when all image processing functions are done before compression and interpolation. As we mentioned in Section~\ref{sec:toy}, in order to compare the performance, we consider interpolated images without compression as the reference images. The results of the proposed and conventional algorithms are compared using the PSNR of luminance (and chrominance) data and the average $\Delta{E}$ in CIELAB color space at each target bit-rate. Parts of the experimental results are also shown on a webpage~\cite{Lee04Data}.
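The PSNR figure of merit used in these comparisons can be computed as below (a minimal sketch assuming 8-bit samples, with the interpolated image before compression as the reference, as described above):

```python
import numpy as np

def psnr(reference, test, peak=255.0):
    """PSNR in dB between a reference image (here, the interpolated
    image before compression) and the final output image."""
    mse = np.mean((np.asarray(reference, dtype=float)
                   - np.asarray(test, dtype=float)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(peak ** 2 / mse)
```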
In the webpage, visual comparison is provided under a fixed compression ratio ($15:1$) and the difference of bit-rate under near-lossless coding is provided by using a fixed PSNR (48 dB). The compression ratio used in digital cameras is about $3:1$ to $20:1$ depending on the vendor and user settings~\cite{HP}\cite{KODAK}, and the visual difference of the results is not clearly noticeable although the IAD method provides large PSNR gain in this low compression range. But with the IAD method, the same quality can be achieved at a much lower bit-rate. We first compare the performance of the different color format conversions and of the nonlinear transform with cropping proposed in Sections~\ref{sec:format_conversion} and~\ref{sec:nonlinear} (\ref{sec:cropping}), respectively. After that, we compare the overall performance of the IAD method to that of the CAI method. \subsection{Color format conversion} \label{sec:results_format_conversion} \begin{figure}[tb] \begin{minipage}[b]{1.0\linewidth} \centering \includegraphics[width=8.8cm]{lenna_block} \centerline{(a) \hspace{3.4cm} (b)} \end{minipage} \caption{(a) Coding performance comparison of Lenna image using different color format conversion methods. (b) Coding gain of the format conversion using larger blocks against the format conversion with 2 by 2 blocks. Luminance data are coded by using SPIHT with shape adaptive DWT (SA-DWT) after rotation transform and the PSNR is calculated with $Y$ and $\hat{Y}$ in Fig.~\ref{fig:diagnew}.} \label{fig:block} \end{figure} The coding performance of the format conversion with different block sizes is shown in Fig.~\ref{fig:block}. Since the interpolated data at boundary pixels of each block is less smooth, a format conversion with a smaller fraction of boundary pixels can give a better result. For example, 75\% of Y data are on block boundaries when 4 by 4 blocks are used whereas 12.3\% are on block boundaries when 64 by 64 blocks are used.
Therefore as shown in Fig.~\ref{fig:block}, the format conversion with larger blocks gives better results than that with smaller blocks although the complexity of decoding is higher. \subsection{Nonlinear transform with cropping} \label{sec:results_nonlinear_transform} \begin{figure}[tb] \begin{minipage}[b]{1.0\linewidth} \centering \includegraphics[width=8.6cm]{tran_diff} \centerline{(a) \hspace{3.4cm} (b)} \end{minipage} \caption{Luminance PSNR difference between the rotation and horizontal shift methods after compression by using (a) JPEG and (b) SPIHT. 2 by 2 block format conversion is used in both cases.} \label{fig:tran_diff} \end{figure} The performance of horizontal shift and rotation methods after coding is shown in Fig.~\ref{fig:tran_diff}. Since, in JPEG coding (shown in (a)), high frequency components introduce more errors due to the larger quantization step sizes chosen for them, the horizontal shift method, which introduces more high frequency components, results in worse performance. Also, in JPEG coding an image is efficiently coded by using the end-of-block (EOB) symbol, so that increased high frequency energy translates into more bits, as the EOB occurs later on average in a zigzag scan. Thus, as we expected, the horizontal shift transform, which generates more high frequency components, gives a worse result. But for the ``Baboon'' image, the image itself contains large high frequency components and most coefficients cannot be coded as EOB, therefore the high frequency components induced by the shift method have less of an impact than for other images. In this case, the added redundancy coming from the data shape of the rotation method may lead to worse performance than the horizontal shifting, given that the effect of additional high frequencies is not as significant for these images. Unlike JPEG coding, SPIHT does not quantize high frequency components with larger quantization step sizes.
Instead, it uses bit-plane encoding at all frequencies and the number of bit-planes transmitted at a given bit-rate is roughly the same at all frequencies. Therefore the coding performance of the shift method is comparable to that of the rotation method. But if the source is simple (i.e., not having large energy in high frequency bands) then the shift method provides worse energy compaction and so the result is worse (as shown in the case of the ``Lenna'' image). Figs.~\ref{fig:tran_diff} (a) and (b) show that the coding gain of the rotation method decreases as the bit-rate increases, except when the bit-rate is less than 10KB. In the low bit-rate region, small coefficients are quantized to zero, therefore most coefficients are not transmitted because an EOB has been reached (in JPEG coding) or only a small number of coefficients is coded (in SPIHT coding). But in the shift method, many coefficients are large and cannot be quantized to zero. Therefore higher coding gain is achieved with the rotation method. As quantization values become smaller, the coefficients of the rotation method (which are quantized to zero in a low bit-rate region) are no longer quantized to zero and the bit-rate increases sharply. Instead, most coefficients of the shift method are already non-zero (in the low bit-rate region) and the bit-rate is increased more slowly. Therefore the coding gain of the rotation method is reduced as the bit-rate becomes higher. \subsection{Overall performance} As shown in Figs.~\ref{fig:final_result} (a),(c) and (e), the IAD algorithms achieve better luminance PSNR except for low bit-rates, depending on format conversion and compression methods.
With JPEG compression, the PSNR of the shift method drops sharply and the performance of this method is worse than that of CAI methods when the bit-rate is below approximately 50KB (i.e., the compression ratio is roughly $15:1$) whereas the rotation methods outperform the CAI method under all other compression ratios used in~\cite{JPEGSW}. With SPIHT compression, the performance of shift and rotation with 2 by 2 block format conversion methods is similar (see Fig.~\ref{fig:tran_diff} (b)) and they outperform the CAI method when the bit-rate is over 20KB or 25KB (i.e., compression ratios of $39:1$ or $31:1$) as shown in Fig.~\ref{fig:diff_l}. As expected, the rotation with 64 by 64 block transform method gives a better result than the other proposed methods. At the same bit-rate, IAD algorithms can assign more bits to each pixel since the IAD algorithms only use approximately half of the luminance data. This is the reason why the IAD methods outperform the CAI method. Also, as shown in our analysis (see Fig.~\ref{fig:AR_RD}), IAD algorithms outperform over a wider range of PSNR with the ``Baboon'' image (low spatial correlation) than with the ``Lenna'' and ``Macaw'' images (high spatial correlation). In the chrominance data cases, as in Figs.~\ref{fig:final_result} (b),~(d) and (f), the PSNR gain is even higher (though PSNR is not so meaningful in color components). In the CAI algorithm, if the 4:2:2 format is used for JPEG compression then two adjacent pixels use the same color data and some color information is lost. But in the IAD algorithm, color format conversion is reversible and all color information can be preserved. Although, even in the CAI method, there is no color information loss if JPEG with 4:4:4 format is used, the bit-rate for the color information is increased and so, by using this increased bit-rate, a lower compression ratio can be applied in the IAD algorithm.
In our experiments, chrominance data compression with 4:4:4 format is tested with SPIHT compression. Since the size of the chrominance data of the CAI algorithm is 4 times larger than that of the proposed ones, the bit budget per pixel of the IAD algorithms is 4 times larger than that of the conventional one and this gives a large PSNR gain. Although shift and rotation with 2 by 2 block transform provide exactly the same chrominance data (since both transforms use the same 2 by 2 color format conversion), the chrominance PSNRs of the two algorithms are not identical. This shows that the luminance distortion also affects the quality of chrominance data. In order to compare the error in a perceptually uniform color space, average $\Delta{E}$ in the CIELAB space is used as a second measure. As shown in Fig.~\ref{fig:error_bi}, IAD methods (rotation with 64 by 64 block transform) provide smaller errors than CAI methods with both JPEG (under all compression ratios considered) and SPIHT (up to more than $50:1$). In addition to the higher PSNR gain, lower average $\Delta{E}$ and lower complexity, the IAD algorithms have other advantages such as lower blocking artifacts after JPEG coding and fast consecutive capturing. Reduction of blocking artifacts is achieved because a lower compression ratio is used at a given rate. Additionally, because interpolation is done after decompression, it can reduce blocking artifacts similarly to de-blocking post-processing after JPEG decompression. Moreover, luminance data and chrominance data use different block shapes, which may also help to reduce blocking artifacts. Figs. \ref{fig:lenna_comp} (b) and (c) show the results of the CAI and IAD methods, respectively. As expected, the result of the CAI method shows more blocking artifacts.
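The average $\Delta{E}$ measure can be sketched as follows; this assumes the CIE76 color-difference formula (the simplest $\Delta{E}$ definition) and inputs already converted to CIELAB, since the text does not specify the exact variant:

```python
import numpy as np

def average_delta_e(lab_ref, lab_test):
    """Mean CIE76 color difference between two images given as
    (..., 3) arrays of CIELAB (L*, a*, b*) values."""
    diff = np.asarray(lab_ref, dtype=float) - np.asarray(lab_test, dtype=float)
    return float(np.mean(np.sqrt(np.sum(diff ** 2, axis=-1))))
```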
Finally, fast consecutive capturing is possible since the compression time is shorter because only around half of Y data has to be encoded, while interpolation (in case of 2 by 2 format conversion) and post processing functions are not needed during the capture process. \begin{figure*}[tb] \centering \begin{tabular}{cc} \includegraphics[width=7.8cm]{final_b_l} & \includegraphics[width=7.8cm]{final_b_c} \vspace{-0.3cm}\\ (a)&(b)\\ \includegraphics[width=7.8cm]{final_l_l} & \includegraphics[width=7.8cm]{final_l_c}\vspace{-0.3cm} \\ (c)&(d)\\ \includegraphics[width=7.8cm]{final_m_l} & \includegraphics[width=7.8cm]{final_m_c} \vspace{-0.3cm} \\ (e)&(f)\\ \end{tabular} \caption{The curves indicate the luminance and chrominance PSNR after applying overall coding schemes. \textit{shift} and \textit{rotation} (2x2 and 64x64) indicate non-linear transform used in the IAD method and \textit{conventional} indicates the CAI method.} \label{fig:final_result} \end{figure*} \begin{figure}[tb] \centering \includegraphics[width=8.8cm]{final_diff_spiht} \caption{The PSNR gain of different proposed methods against the conventional method. Vertical and horizontal axes indicate the luminance PSNR gain and overall bit-rate respectively and SPIHT is used as a compression method. } \label{fig:diff_l} \end{figure} \begin{figure*}[tb] \centering \includegraphics[width=15cm]{error_CIELab_bilinear} \caption{The curves indicate the average $\Delta{E}$ of IAD and CAI methods with bilinear interpolation.} \label{fig:error_bi} \end{figure*} \section{Comparison with adaptive interpolation} \label{sec:adaptive} In the proposed methods, interpolation is done after decompression, so more complex interpolation can be applied without increasing the complexity of the encoding system. Therefore, with lower complexity than that of bilinear interpolation, we can achieve visually better results. 
Since coding errors can negatively affect the performance of interpolation, in this section we compare the performance of the proposed methods with more complex interpolation methods. From the compression viewpoint, bilinear interpolation is a good method because it results in smoother (and thus easier to compress) images. Also, in the IAD algorithms, the color components generated by using the bilinear interpolation have an error that results from averaging the error of neighboring pixels, so that the average distortion of interpolated color components can be lower than that of coded color components (similar to the 1-D case shown in (\ref{eq:MSEI})). This is also confirmed by the experimental results shown in Fig.~\ref{fig:block} (a) and Fig.~\ref{fig:final_result} (c) (SPIHT). Note that the two figures have different horizontal axes and the rate used in Fig.~\ref{fig:final_result} is 1.5 times larger than that of Fig.~\ref{fig:block}. The result verifies that PSNR is increased after interpolation except at high bit-rates (where round-off error plays an important role because the coding errors are relatively small). Although bilinear interpolation is simple and fast, it acts like a low-pass filter and smooths edges. To preserve more edge information, several different adaptive interpolation algorithms have been proposed \cite{Gunturk05SP}. Depending on the local information, adaptive interpolation algorithms choose different interpolation methods and use the correlation of different color components. After applying the adaptive interpolation, the interpolated image has more edge information (i.e., more high frequency components) and it cannot be easily compressed. In this sense the IAD algorithms have an advantage. Note that the IAD algorithms perform interpolation after decoding, so the coded data is independent of interpolation algorithms. But due to lossy compression, IAD and CAI algorithms have different data before the interpolation.
Therefore it could happen that they have different edge information and choose different directional interpolation methods for pixels at the same position. This results in high distortion in the generated color components. Also, the error of one color component is involved in the interpolation of other color components and the distortion of generated pixels can be increased. As a result, by using the adaptive interpolation, the IAD algorithms achieve some gains from data smoothness before compression (especially in the case of rotation with 64 by 64 block format conversion) but may lose in performance from choosing different directions during interpolation due to distorted data. Therefore when error-sensitive interpolation methods (which are very sensitive to the quantization noise of existing compression methods) are required, interpolation aware compression methods (which can keep more of the information used in the interpolation) are needed. But in this paper, we mainly focus on the performance comparison between IAD and CAI algorithms, and we use existing compression methods with minor modifications. To verify the performance of the IAD algorithms with the adaptive interpolation, we consider three different adaptive interpolation algorithms, namely, constant hue-based, gradient based and median-based interpolation~\cite{Ramanath02JEI}. Constant hue-based interpolation was proposed by Cok~\cite{Cok87P} and Kimmel~\cite{Kimmel99IP}, where hue is defined by a vector of ratios as ($R/G,~B/G$). In this algorithm, the green color component is used as a denominator and a small error of the green component may induce a large error in hue, especially when green values are small. Therefore the IAD algorithms do not provide good performance when this interpolation is applied. Gradient based interpolation was proposed by Laroche and Prescott~\cite{Laroche94P}.
In this algorithm, at first, green components on blue (red) pixel positions are determined by using directional bilinear interpolation, where the direction is selected by the gradient of neighboring blue (red) components. After determining green components, blue (red) components are interpolated from the differences between blue (red) and green components. Fig.~\ref{fig:final_result_laro} shows the coding results of the IAD algorithms. With JPEG compression, the performance of IAD algorithms (except rotation with 64 by 64 block format conversion) is worse than that of the CAI algorithm, since different directions are selected owing to the large errors in high frequency components and blocking artifacts, and the error in the green components also affects the red and blue components. But with SPIHT compression, the coding error is evenly distributed and mismatched directional decisions are reduced. Therefore, as shown in Fig.~\ref{fig:final_result_laro} (d), the IAD algorithms outperform the CAI algorithm although the gain is smaller than when bilinear interpolation is used (shown in Fig.~\ref{fig:diff_l}). The performance is also tested with median-based interpolation (proposed by Freeman~\cite{Freeman88P}) which employs a two-step process: the first pass is bilinear interpolation and the second pass selects the median of the color differences of neighboring pixels. Fig.~\ref{fig:final_result_free} shows the coding results of the IAD algorithms with a 3 by 3 median filter. Similar to the gradient-based interpolation, the IAD algorithms provide worse results when JPEG is applied. But with SPIHT, IAD algorithms still provide better results up to $20:1$ or $40:1$ compression ratio depending on the format conversion methods. Fig.~\ref{fig:error_free} shows the average $\Delta{E}$ in the CIELAB space. IAD algorithms provide better results up to more than $20:1$ compression ratio and then the average $\Delta{E}$ of the two algorithms becomes similar.
Fig.~\ref{fig:lenna_comp2} shows the results after applying bilinear and Freeman interpolation. As expected, the image with Freeman interpolation (Fig.~\ref{fig:lenna_comp2} (c)) is sharper and closer to the original image (Fig.~\ref{fig:lenna_comp} (a)). As a result, the IAD algorithms with SPIHT provide better results with the gradient based and median-based interpolation. But due to coding inefficiency, irregular coding errors and blocking effects, the adaptive interpolation chosen for a given pixel may differ depending on whether quantized or unquantized data are used, resulting in potential degradation after interpolation when quantized data are used for the interpolation of each pixel. Therefore the performance of the IAD algorithms with JPEG is worse. \begin{figure*}[tb] \centering \begin{tabular}{cc} \includegraphics[width=7.8cm]{final_b_l_laro} & \includegraphics[width=7.8cm]{final_l_l_laro} \\ (a)&(b)\\ \includegraphics[width=7.8cm]{final_m_l_laro} & \includegraphics[width=7.8cm]{final_diff_spiht_laro} \\ (c)&(d)\\ \end{tabular} \caption{The curves in (a), (b) and (c) indicate the luminance PSNR after applying overall coding schemes with gradient based interpolation. The curve in (d) indicates the PSNR gain of different IAD methods against the CAI method with SPIHT. } \label{fig:final_result_laro} \end{figure*} \begin{figure*}[tb] \centering \begin{tabular}{cc} \includegraphics[width=7.8cm]{final_b_l_free} & \includegraphics[width=7.8cm]{final_l_l_free} \\ (a)&(b)\\ \includegraphics[width=7.8cm]{final_m_l_free} & \includegraphics[width=7.8cm]{final_diff_spiht_free} \\ (c)&(d)\\ \end{tabular} \caption{The curves in (a), (b) and (c) indicate the luminance PSNR after applying overall coding schemes with median-based interpolation and SPIHT.
The curve in (d) indicates the PSNR gain of different IAD methods against the CAI method with SPIHT.} \label{fig:final_result_free} \end{figure*} \begin{figure*}[tb] \centering \includegraphics[width=15cm]{error_CIELab_freeman} \caption{The curves indicate the average $\Delta{E}$ of IAD and CAI methods with median-based interpolation and SPIHT.} \label{fig:error_free} \vspace{10cm} \end{figure*} \section{CONCLUSION} \label{sec:capt_conclusion} \vspace{0.0cm} In this paper, we investigated a redundancy-reducing method that merges an image processing stage and an image compression stage. Several color format conversion algorithms and shift and rotation transforms were introduced to compress CFA images before full color images are produced by interpolation. We showed that the proposed algorithms outperform the conventional method in the full range of compression ratios for JPEG coding with bilinear interpolation and up to $20:1$ or $40:1$ compression ratio (depending on the color format conversion and interpolation methods) for SPIHT coding when the bilinear, gradient based and median-based interpolation are applied. We also analyzed the PSNR gain and explained why it becomes higher as the compression ratio becomes lower, also showing a 1D DPCM sequence example to provide some intuition. Because the proposed algorithms use only around half the amount of Y data and only need an additional simple transform, the computational complexity can be decreased. Also, adaptive interpolation methods can be applied without increasing the encoder complexity, so fast consecutive capturing can be achieved with visually better results. In this paper, we tried to minimize the changes to existing compression methods in order to focus on the performance of changing the encoding order (i.e., the order of interpolation and compression). The performance of the IAD method can be improved with different entropy coding (in SPIHT).
Also, in the rotation method, a new quantization table may be useful for a JPEG codec, due to the directional difference of human visual sensitivity. \begin{figure}[tb] \begin{minipage}[b]{1.0\linewidth} \centering \includegraphics[angle=-90,width=16.5cm]{lenna_comparison1} \centerline{(a) \hspace{4.9cm} (b) \hspace{4.9cm} (c)} \end{minipage} \caption{Comparison of blocking artifacts. (a) is the original image, and (b) and (c) are the images after applying CAI and IAD, respectively. Bilinear interpolation and JPEG compression are used, where the compression ratio is $35.2:1$.} \label{fig:lenna_comp} \end{figure} \begin{figure}[tb] \begin{minipage}[b]{1.0\linewidth} \centering \includegraphics[angle=-90,width=16.5cm]{lenna_comparison2} \centerline{(a) \hspace{4.9cm} (b) \hspace{4.9cm} (c)} \end{minipage} \caption{Comparison of different interpolation methods. (a) and (b) are the images after applying CAI and IAD, respectively. Bilinear interpolation and SPIHT compression are used. (c) is the image after applying CAI with Freeman interpolation. The compression ratio used is $16:1$. Note that the encoding of (b) and (c) is identical.} \label{fig:lenna_comp2} \end{figure} \bibliographystyle{IEEEtran} \input{single_final.bbl} \end{document}
\section{Introduction} The characterization of variability timescales can provide information on the sizes and locations of the emission regions in active galactic nuclei. Although Doppler boosted emission from a relativistic jet provides a very reasonable explanation for the non-thermal spectra and small-scale radio morphology of the BL Lacertae objects and flat spectrum radio quasars that are now usually called blazars (e.g., Blandford \& Rees 1978; Urry \& Padovani 1995), the question of just where in such jets the emission at different wavelengths arises remains somewhat uncertain (e.g., Marscher et al.\ 2008). While one of the defining characteristics of blazars is extreme variability, periodic or quasi-periodic contributions to the electromagnetic emission have not been clearly detected, or even claimed to be present, in the vast majority of blazars, although they certainly have been searched for. Probably the best case for such, albeit impermanent, special variations is S5 0716$+$714, which once showed quasi-periodic variations on the timescale of 1 day, followed by a weaker indication of a variable component of about 7 days, over the course of an intensive month-long monitoring program. Quite remarkably, these fluctuations were present simultaneously in an optical and a radio band (Quirrenbach et al.\ 1991). On another occasion, quasi-periodicity with a time scale of 4 days seemed to be present in the optical band (Heidt \& Wagner 1996). Five major optical outbursts between 1995 and 2007 seem to occur at intervals of $\sim 3.0 \pm 0.3$ years (e.g., Raiteri et al.\ 2003; Foschini et al.\ 2006; Gupta et al.\ 2008a, and references therein). Very recently, Gupta, Srivastava \& Wiita (2009) performed a wavelet analysis on the 20 best nights of over 100 high quality optical data sets taken by Montagni et al.\ (2006). 
They found very high probabilities that S5 0716$+$714 had quasi-periodic components to its intra-night variability on time scales from $\sim$25 to $\sim$73 minutes on several different nights. Only one other blazar, OJ 287 (0851$+$203), seems to have shown periodic variations in its light curves over a range of time scales comparable to that for S5 0716$+$714. A 15.7 min periodicity in 37 GHz radio observations was reported by Valtaoja et al.\ (1985) for OJ 287. In optical bands, a 23 min periodicity was claimed by Carrasco, Dultzin-Hacyan, \& Cruz-Gonzalez (1985) and short-lived 32 min periodicity was reported by Carini et al.\ (1992). Long term optical data on OJ 287 have shown a periodicity of $\sim$11.7 years; detailed analyses in this case support the hypothesis that this source contains a binary system of supermassive black holes (SMBHs) and the major flares arise when the less massive SMBH passes through an accretion disk surrounding the bigger one (e.g., Sillanp{\"a}{\"a} et al.\ 1996; Valtonen et al.\ 2008). A few other blazars may have shown significant periodicity in their flux variations. In the blazar PKS 2155$-$304, a quasi-periodicity around 0.7 days seemed to be present in 5 days of observations at UV and optical wavelengths (Urry et al.\ 1993), and there was a hint that simultaneous x-ray observations were well correlated with them (Brinkmann et al.\ 1994). One of four $\gtrsim$60 ks x-ray observations of the quasar 3C 273 by the {\it XMM--Newton} satellite also appears to have a quasi-periodic component with a time scale of about 3.3 ks (Espaillat et al.\ 2008). A recent analysis of a 91 ks {\it XMM--Newton} observation indicated the presence of a $\sim$ 1 hour periodicity in the narrow line Seyfert 1 galaxy RE J1034$+$396 (Gierlinski et al.\ 2008). 
Using long term (and, unfortunately, very inhomogeneous) optical data on 10 radio-selected blazars, Fan et al.\ (2002) have used the Jurkevich method to claim detection of quasi-periodicity in 9 of them, with putative periods in the range of 1.4 to 17.9 years. In \S 2 we discuss the x-ray data for 24 blazars stretching over more than 12 years. The structure function analyses of these data are given in \S 3.1 and they yield possible quasi-periodicities for 20 of those objects; however, the great majority of those periods are either $\sim$1 year or harmonics of an annual period and must be assumed to be observational artifacts. The structure functions of four objects showed periods significantly different from a year and we performed additional analyses of these data (\S \S 3.2 and 3.3) which strongly support the presence of a quasi-period of $\sim$17 days for AO 0235$+$164 and of $\sim$420 days for 2321$+$419. These results provide the first good evidence for a nearly periodic component to x-ray blazar variability longer than a few hours. In \S 4 we discuss our results in terms of several mechanisms that could produce nearly periodic fluctuations and we obtain estimates for the central black hole masses in these blazars in the rather unlikely case that the observed fluctuations are fundamentally related to orbits of emission regions at the inner edges of accretion disks. Our conclusions are in \S 5. \section{Data} We extracted one day average x-ray fluxes in the 1.5--12 keV energy range from the All Sky Monitor (ASM) instrument on board the {\it Rossi X-ray Timing Explorer} satellite\footnote{ASM/RXTE Website: http://xte.mit.edu/ASM\_lc.html} (RXTE) for the 24 blazars listed in Table 1, which gives their names and coordinates in the first three columns. This data covers the period January 1, 1996 through September 1, 2008.
These objects are all of the blazars in the list of Nieppola, Tornikoski \& Valtaoja (2006) for which data is available on the RXTE web-site; all of them are low- and intermediate-energy peaked blazars. A description of the ASM and how light curves are obtained from it is given in Levine et al.\ (1996). In these lengthy data sets, we found that the source flux counts were given as negative on many days of observations, indicating that the source fluxes were then below the detection threshold of the ASM/RXTE. Such negative flux counts, or upper limits, were omitted from our analysis. We first converted the ASM/RXTE fluxes, given in counts/sec, into a Crab flux unit, using the relation 1 Crab = 75 counts/sec; then the flux of the source is converted into Janskys, using 1 Crab = 2.4 $\times$ 10$^{-11}$ W m$^{-2}$. The x-ray light curves for four of the blazars are presented in Fig.\ 1. Since the ASM is a survey instrument, and none of these blazars are usually among the brightest of x-ray sources, the typical S/N ($\sim 1.5-3$) is rather poor for each daily data point, though the S/N is usually substantially higher during the times when the fluxes are near their peaks. Still, the ASM is a unique x-ray instrument, in that it can provide a multi-year light curve of any reasonably strong source. Our analysis turns out to yield significant, non-artifactual, periods for only two of the 24 blazars, AO 0235$+$164 and 1ES 2321$+$419 (\S 3). Recently, it has been noticed that a small number of ASM sources have their nominal intensity modulated by the emission of a nearby galactic X-ray source (e.g., Kaur et al.\ 2007). However, the nearest other source to AO 0235$+$164 observed by ASM/RXTE\footnote{http://xte.mit.edu/lcextrct/asmsel.html} is separated by $\simeq 4^{\circ}$, which is far enough away that it is highly unlikely to contaminate the flux from 0235$+$164.
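The unit conversion and screening of below-threshold days described above can be sketched as follows (the calibration factors are those stated in the text):

```python
# 1 Crab = 75 counts/s and 1 Crab = 2.4e-11 W m^-2 (1.5-12 keV), as stated.
CRAB_COUNT_RATE = 75.0
CRAB_FLUX = 2.4e-11

def counts_to_flux(count_rates):
    """Convert daily ASM count rates to fluxes in W m^-2, dropping
    negative (below-threshold) measurements."""
    return [c / CRAB_COUNT_RATE * CRAB_FLUX for c in count_rates if c > 0]

fluxes = counts_to_flux([75.0, -3.0, 37.5])  # the negative day is discarded
```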
In the case of 2321$+$419 the nearest ASM source is separated by over $16^{\circ}$ and so is definitely too far away to cause a contamination problem. \section{Analyses and Results} \subsection{Structure Functions} Ordinary Fourier transform methods are not optimal in a search for periodicity in these blazar light curves because the samplings of these light curves are not exactly uniform. Nor can simple periodograms give useful results. Under these circumstances, a structure function (SF) analysis is the best way to quantitatively determine any time scale of variation on unevenly sampled data sets, as these ASM measurements have become once we chose to discard the days with ``negative'' fluxes. The first order SF is related to the power spectrum density (PSD) and discrete correlation function (DCF) and is thus a powerful tool to search for periodicities and time scales in time series data (e.g., Simonetti, Cordes \& Heeschen 1985; Gupta et al.\ 2008b and references therein). The first order SF for a data set, $a$, having uniformly sampled points is defined as \begin{eqnarray} D^{1}_{a}(k) = {\frac{1}{N^{1}_{a}(k)}} {\sum_{i=1}^N} w(i)w(i+k){[a(i+k) - a(i)]}^{2}, \end{eqnarray} where $k$ is the time lag, ${N^{1}_{a}(k)} = \sum w(i)w(i+k)$, and the weighting factor $w(i)$ is 1 if a measurement exists for the $i^{th}$ interval, and 0 otherwise. Since the data in our case is quasi-uniform, we first calculated the differences squared for all pairs of data points and then averaged the samples into bins of one or a few days; measurements are taken not to exist for negative flux values. Simply summarized, the behavior of the first order SF will at first rise with time lag (after a possible plateau arising from noise).
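A minimal sketch of this first order SF for a quasi-uniformly sampled series, with gaps encoded by the weights $w(i)$ (an illustrative implementation, not the authors' code):

```python
import numpy as np

def structure_function(a, w, max_lag):
    """First order SF: D(k) = sum_i w(i) w(i+k) [a(i+k) - a(i)]^2 / N(k),
    with N(k) = sum_i w(i) w(i+k); w(i) is 1 where a measurement exists."""
    a = np.asarray(a, dtype=float)
    w = np.asarray(w, dtype=float)
    sf = np.full(max_lag + 1, np.nan)
    for k in range(1, max_lag + 1):
        ww = w[:-k] * w[k:]
        n = ww.sum()
        if n > 0:
            sf[k] = np.sum(ww * (a[k:] - a[:-k]) ** 2) / n
    return sf

# A strictly periodic signal yields a dip (here, essentially zero) at lags
# equal to multiples of its period -- the signature searched for below.
t = np.arange(200)
x = np.sin(2 * np.pi * t / 20.0)
sf = structure_function(x, np.ones_like(x), 40)
```

For the ASM light curves, $w(i)=0$ would mark the days with discarded negative fluxes.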
Following this rising portion, the SF will then fall into one of the following classes: (i) if no plateau exists, any time scale of variability exceeds the length of the data train; (ii) if there are one or more plateaus, each one indicates a time scale of variability; and (iii) if a plateau is followed by a dip in the SF, the lag corresponding to the minimum of that dip indicates a possible periodic cycle (unless such a dip is seen at a lag close to the maximum length of the data train, when it is probably an artifact). Structure function analyses have been employed for quite some time in examining the nature of AGN variability. For example, using long term radio observations of a sample of over 50 radio loud AGN, Hughes, Aller \& Aller (1992) reported that most of them showed some plateau in their SFs; the mean time scale they found for BL Lacs was 1.95 yr while that for quasars was 2.35 yr. A recent extension of this analysis using SFs and other techniques also examined higher-frequency radio data and found that small flux density variations were often present on 1 to 2 year time scales but larger outbursts were much rarer; no significant differences between AGN classes were detected (Hovatta et al.\ 2007). In a different band, SFs were calculated from a large number of intra-night optical light curves for several blazars (Sagar et al.\ 2004) and for a group of both radio loud and radio quiet quasars as well as blazars (Stalin et al.\ 2005). Indications of preferred observed time scales of a few hours were found for some objects in each AGN class, and hints of quasi-periods were found for the BL Lac 0851$+$202 and the core-dominated quasars 0846$+$513 and 1216$+$010; however, in none of these cases were more than two dips seen in the SF, so no confident claim of quasi-periodicity could be made based on those data sets and those SF analyses alone.
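The binned first order SF of Eq.\ (1) for an unevenly sampled light curve can be computed as follows (a minimal sketch of the pairwise-binning procedure described above; function and parameter names are ours):

```python
import numpy as np

def structure_function(t, f, lags, bin_width=2.0):
    """First order structure function of an unevenly sampled series.

    Squared differences of all pairs of points are averaged in bins of
    width `bin_width` days around each requested lag, which is the
    quasi-uniform binning strategy described in the text.  A dip in the
    result at some lag suggests a possible periodic cycle.
    """
    t = np.asarray(t, float)
    f = np.asarray(f, float)
    dt = np.abs(t[:, None] - t[None, :])     # all pairwise time lags
    df2 = (f[:, None] - f[None, :]) ** 2     # all squared differences
    sf = []
    for lag in lags:
        mask = (dt > lag - bin_width / 2) & (dt <= lag + bin_width / 2)
        sf.append(df2[mask].mean() if mask.any() else np.nan)
    return np.array(sf)
```

For a strictly periodic signal the SF falls to zero at the period and peaks near the half-period, which is the dip structure sought in Figs.\ 2--4.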
The nominal periodicities and approximate standard errors obtained from the SF analyses for the x-ray light curves of each blazar in our sample are given in the fourth column of Table 1 and the percentage of negative points is listed in the last column. We wish to stress that for no blazar have we found a dominant component with a precise period, but we will henceforth use the words ``period'' and ``periodicity'' to denote the strongest nearly periodic signals seen in these data sets where the upper-limits are not included in the analysis. Most of these SF-indicated periods are found as averages from at least two dips and hence standard errors can be obtained, but the period given for OQ 530 is estimated from a single dip and so no errors can be quoted. The great majority of the periods in Table 1 are very close to one year. This is not surprising, as there has been a previous report of annual, as well as daily, satellite orbital period (96 minutes) and satellite precession period (53 days) variations found to be imposed on the ASM fluxes of binary X-ray sources (Wen et al.\ 2006). Detection of these periods presumably can be attributed to windowing effects arising from the satellite and will not be investigated further. The SF of the entire RXTE light curve of the blazar AO 0235$+$164 is plotted in the lower-left panel of Fig.\ 2, binned in 2 day intervals. This SF shows several significant dips, with the first at about 17 days and the second at about 34 days, providing a hint of a periodic component of about 17 days. The displayed binning values were chosen for each source so that peaks and dips would be clearest, but they remain visible when different bin sizes are used. The lower-left panel of Fig.\ 3 displays the SF for 2321$+$419, binned in 4 day intervals; following an almost flat region consistent with noise out to about 50 days, the deepest dips are at about 425 and 850 days but are rather broad.
To see whether this hint of a period is genuine, we performed other analyses on the same data set (\S \S 3.2--3.4). In contrast to those two cases, the lower-left panel of Fig.\ 4 shows the SF for 3C 454.3 (binned over 8 days), for which the only periodicity indicated by multiple clear dips in the SF is at essentially one year. This is typical of the SFs of 13 of the 24 cases, where the only clearly indicated period is 365 days within a 1$\sigma$ error; in two other cases, S5 0454$+$844 and 3C 273, the best value of any period is essentially within 2$\sigma$ of a year and in one case, BL 1320$+$084, it is roughly 3$\sigma$ away. All of these nominal periodicities are almost certainly instrumental `windowing' effects, along with the essentially one-half year periodicities detected for AO 0235$+$164 and OJ 287. In four other cases the SF yielded no plausible periodic signal. In the last four, most interesting, cases, periods not comparable to one year (or one-half year) were found. Table 2 lists these four blazars with the most plausible real periods in column 1 and gives their SF-identified periodicity(ies) in column 2. Their partial light curves are those shown in Fig.\ 1. \subsection{Discrete Correlation Functions} The discrete correlation function (DCF) method was first introduced by Edelson \& Krolik (1988) and was later generalized to provide better error estimates (Hufnagel \& Bregman 1992). The DCF is suitable for unevenly sampled data, which is the case in most astronomical observations. In our case, as we have discarded the nominally negative counts, the data become unevenly sampled. Here we give only a brief introduction to the method; for details refer to Hovatta et al.\ (2007) and references therein.
The first step is to calculate the unbinned discrete correlation function (UDCF) using the given time series through (e.g., Hovatta et al.\ 2007) \begin{equation} UDCF_{ij} = {\frac{(a(i) - \bar{a})(b(j) - \bar{b})}{\sqrt{\sigma_a^2 \sigma_b^2}}}, \end{equation} where $a(i)$ and $b(j)$ are the individual points in the time series $a$ and $b$, respectively, $\bar{a}$ and $\bar{b}$ are respectively the means of the time series and $\sigma_a^2$ and $\sigma_b^2$ are their variances. The correlation function is binned after calculation of the UDCF. The DCF method does not automatically define a bin size, so several values need to be tried. If the bin size is too big, useful information is lost, but if the bin size is too small, a spurious correlation can be found. For example, we have found that a bin size of 10 days is good for 2321$+$419 but the minimum bin of 1 day length was best for AO 0235$+$164 while 15 days was appropriate for 3C 454.3. Taking $\tau$ as the center of each time bin and $n$ as the number of points in that bin, the DCF is found from the UDCF via \begin{equation} DCF(\tau) = {\frac{1}{n}} \sum ~UDCF_{ij}(\tau) . \end{equation} The error for each bin can be calculated using \begin{equation} \sigma_{\mathrm{dcf}}(\tau) = {\frac{1}{n-1}} \Bigl\{ \sum ~\bigl[ UDCF_{ij} - DCF(\tau) \bigr]^2 \Bigr\}^{0.5} . \end{equation} A DCF analysis is frequently used for finding the correlation and possible lags between multi-frequency AGN data where different data trains are used in the calculation (e.g., Villata et al.\ 2004; Raiteri et al.\ 2003; Hovatta et al.\ 2007 and references therein). When the same data train is used, there is obviously a peak at zero lag indicating that there is no time lag between the two, but any other strong peaks in the DCF can indicate a periodicity. A disadvantage of this method is that it does not give an exact probability that a resulting peak actually represents a periodicity.
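Eqs.\ (2) and (3) can be sketched directly (a minimal illustration of the UDCF/DCF binning, with our own function names; applying it to a single data train, $a=b$, gives the autocorrelation whose secondary peaks can indicate a periodicity):

```python
import numpy as np

def dcf(t, a, b, lags, bin_width=10.0):
    """Discrete correlation function following Eqs. (2)-(3) above.

    The unbinned UDCF is formed for every pair of points and then
    averaged in lag bins of width `bin_width` centered on each
    requested lag.
    """
    t = np.asarray(t, float)
    a = np.asarray(a, float)
    b = np.asarray(b, float)
    udcf = ((a[:, None] - a.mean()) * (b[None, :] - b.mean())
            / np.sqrt(a.var() * b.var()))
    dt = t[None, :] - t[:, None]           # lag of each pair of points
    out = []
    for tau in lags:
        mask = np.abs(dt - tau) <= bin_width / 2
        out.append(udcf[mask].mean() if mask.any() else np.nan)
    return np.array(out)
```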
The only way to investigate the internal reliability of the DCF method is to use simulations; however, we have not done so, instead verifying the DCF analysis by cross-checking the results with the SF and Lomb-Scargle periodogram (\S 3.3) methods. The results of the DCF analysis for AO 0235$+$164, 1ES 2321$+$419 and 3C 454.3 are shown in the upper-right panels of Figs.\ 2--4, respectively. The maximum values of the DCF lags plotted were chosen so as to avoid crowded points, but the same features are present in DCFs extending to the full lengths of the datasets. The resulting periods for these blazars, along with the negative DCF results for OQ 530 and BL Lac, are given in the fourth column of Table 2. \subsection{Lomb-Scargle Periodograms} The Lomb-Scargle periodogram (LSP) is another useful technique for searching time series for periodic patterns. This method has a good tolerance for missing values (e.g., Glynn, Chen \& Mushegian 2006), so it does not require any special treatment for gaps in the data and is thus quite suitable for non-uniform data trains. Therefore the LSP method is frequently used by astronomers and has found use in other fields as well (e.g., Glynn et al.\ 2006). It also has the advantage of providing a $p$-value which specifies the significance of a peak. The LSP was first introduced by Lomb (1976) and later extended by Scargle (1982); somewhat later a more practical mathematical formulation was found (Press \& Rybicki 1989). Here we briefly describe the method and formulae. We used a publicly available R language code for Lomb-Scargle periodograms\footnote{http://research.stowers-institute.org/efg/2005/LombScargle}.
If $N$ is the total number of observations, the LSP is defined at a frequency $\omega_j$ as (Press \& Rybicki 1989; Glynn et al.\ 2006) \begin{displaymath} P(\omega_j) = {\frac{1}{2 \sigma^2}} \Bigl\{ \frac {(\sum^N_{i=1} [a(t_i)-\bar{a}] \cos[\omega_j(t_i-\tau)])^2}{\sum^N_{i=1} \cos^2[\omega_j(t_i-\tau)]} \end{displaymath} \begin{equation} + \frac {(\sum^N_{i=1} [a(t_i)-\bar{a}] \sin[\omega_j(t_i-\tau)])^2} {\sum^N_{i=1} \sin^2[\omega_j(t_i-\tau)]} \Bigr\}. \end{equation} Here $j = 1 \dots M$, where $\tau$ is defined by \begin{equation} \tan(2 \omega_j \tau) = \frac{\sum_{i=1}^N \sin(2 \omega_j t_i)}{\sum_{i=1}^N \cos(2 \omega_j t_i)}, \end{equation} and $M$ depends on the number of independent frequencies, $N_0$, through $M = N_0 \approx -6.363 + 1.193N +0.00098N^2$ (Press et al.\ 2002). The LSP also provides the ability to test for the presence of more than a single frequency. We can define a range of frequencies to be tested in the R code for the LSP, and it yields the most significant peak and its significance level. In searching for periodic behavior in a data set, we actually test, at each frequency, the null hypothesis that the given data train is non-periodic. If $x$ is the peak value of the LSP, the $p$-value, or false-alarm probability that a peak this large would arise by chance under the null hypothesis, is (e.g., Glynn et al.\ 2006) \begin{equation} p = 1-(1-e^{-x})^M. \end{equation} The smaller the $p$-value for a given peak, the higher its significance; the largest value that can reasonably be specified for a significant $p$-value is 0.05, i.e., only peaks having $p$-values smaller than 0.05 are considered significant. Two difficulties usually arise when using such periodograms (e.g., Scargle 1982); the first is statistical and the other is spectral leakage.
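Eqs.\ (4)--(7) can be implemented directly (a sketch in plain NumPy rather than the R code used for the analysis; function and variable names are ours):

```python
import numpy as np

def lomb_scargle(t, a, omegas):
    """Normalized Lomb-Scargle periodogram, Eqs. (4)-(6) above."""
    t = np.asarray(t, float)
    a = np.asarray(a, float) - np.mean(a)   # subtract the mean
    var = np.var(a)
    power = []
    for w in omegas:
        # Phase offset tau of Eq. (6).
        tau = np.arctan2(np.sum(np.sin(2 * w * t)),
                         np.sum(np.cos(2 * w * t))) / (2 * w)
        c = np.cos(w * (t - tau))
        s = np.sin(w * (t - tau))
        power.append((np.dot(a, c) ** 2 / np.dot(c, c)
                      + np.dot(a, s) ** 2 / np.dot(s, s)) / (2 * var))
    return np.array(power)

def false_alarm_probability(peak, n):
    """p-value of Eq. (7), with M from the empirical relation above."""
    m = -6.363 + 1.193 * n + 0.00098 * n ** 2
    return 1.0 - (1.0 - np.exp(-peak)) ** m
```

A strong strictly periodic signal in a few hundred unevenly spaced points yields a peak power far above the noise level and hence a negligible false-alarm probability.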
The statistical difficulty is mitigated by using large sample sizes, which improves the S/N for possible period detection, as this S/N grows with the number of data points, and here we use thousands of points encompassing many cycles of each putative periodicity. Spectral leakage, or aliasing, involves leakage of power to some other frequencies that are actually not present in the data. Even a small amount of unevenness in the data spacing substantially reduces aliasing, and astronomical data is typically irregular enough that aliasing is effectively eliminated. However, if the sampling is semi-regular (intermediate between randomly and evenly spaced) significant leakage of periodogram power to the side-lobes can occur. The usual way to minimize both statistical and leakage problems is to window or taper the data by smoothing in the spectral domain. But the disadvantage of smoothing is that the spectral values at different frequencies are no longer independent and hence, the joint statistical properties become more complicated. Since the unevenness in our x-ray data (after rejection of negative data points) can be best characterized as random, any leakages of power are expected to be small, and we need not smooth our data. The results of our LSP analysis, showing the peaks of the normalized power spectral densities for AO 0235$+$164, 1ES 2321$+$419, and 3C 454.3, are shown in the bottom-right panels of Figs.\ 2--4, respectively. The resulting periods for the first two blazars, along with the negative results for OQ 530 and BL Lac and their false alarm probabilities, $p$, are given in the third column of Table 2. \subsection{Nearly Periodic Variations in Two Blazars} For all of our sources we have apparent periodic variations present for several (at least five) cycles. We first performed the SF and DCF analyses for the light curves of all the sources using unbinned data.
Since the data length is large, on these original plots of the SF and DCF the points were very crowded. To reduce the data crowding, we binned the data in a variety of ways and chose to display the results for lengthy portions of the data trains and for binning values that provide good clarity in each of the plots. The same features remain if the entire non-negative data trains and different bin sizes are employed. To make clearer the scatter about the nearly periodic components of the light curves, we have also plotted folded light curves based on the partial light curves in Fig.\ 1 for AO 0235$+$164 and 1ES 2321$+$419 and on four cycles of the quasi-annual variation seen in 3C 454.3. They are displayed in the upper-left panels of Figs.\ 2--4. With a nominal period of 17 days the date of the zero phase on the plot for AO 0235$+$164 is MJD 50095; with a period of 423 days the plot zero phase for 1ES 2321$+$419 is MJD 50287; using a period of 369 days the zero phase for 3C 454.3 is MJD 50347. It would be very interesting to see if subsequent observations detect fluctuations with the same, or similar, periodicities. Because of the large number of upper-limits in all of the data sets, it must be noted that our neglecting those values in our analyses is problematic. For the DCF analysis an alternative, albeit still somewhat arbitrary, approach would be to include all such points but to set their values equal to zero. We did perform such analyses and in all cases no significant peaks other than at lags of zero were seen and so any information on periodicity was lost to this form of DCF. It is to a large extent because of the uncertainties induced by the many upper-limits and poor individual S/N values that we consider our claimed variations to be nearly periodic and not overwhelmingly convincing. 
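The phase folding used for the upper-left panels of Figs.\ 2--4 can be sketched as follows (a minimal illustration with our own function name; the period and zero-epoch values are those quoted above):

```python
import numpy as np

def fold(mjd, flux, period, mjd0):
    """Fold a light curve on a trial period.

    Returns the phase in [0, 1) of each point relative to the zero
    epoch mjd0 (e.g., MJD 50095 with a 17 day period for AO 0235+164),
    together with the fluxes sorted by phase, as used for the folded
    plots described above.
    """
    phase = ((np.asarray(mjd, float) - mjd0) / period) % 1.0
    order = np.argsort(phase)
    return phase[order], np.asarray(flux, float)[order]
```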
\section{Discussion} \subsection{Properties of the Blazars with Periodic Variability} The blazar AO 0235$+$164 has a redshift of $z = 0.94$ based on detection of emission lines (Nilsson et al.\ 1996). Since it was among the first objects to be classified as a BL Lac (Spinrad \& Smith 1975) and is a highly variable and rather bright source, it has been extensively studied, with over 640 papers mentioning this source. Its fractional polarization is up to $\sim 40$\% in both the visible and IR bands (e.g., Impey et al.\ 1982) and it is significantly variable from the radio to the x-ray bands on timescales ranging from less than an hour to many years (e.g., Ghosh \& Soundararajaperumal 1995; Heidt \& Wagner 1996; Fan \& Lin 1999; Romero, Cellone \& Combi 2000; Webb et al.\ 2000; Raiteri et al.\ 2001; Padovani et al.\ 2004; Sagar et al.\ 2004; Gupta et al.\ 2008a). Some of this fast variability is probably due to gravitational microlensing (e.g., Webb et al.\ 2000), as there are foreground absorbing systems at $z = 0.524$ and $z = 0.851$ (Burbidge et al.\ 1976). Raiteri et al.\ (2001) used 25 years of radio and optical data on AO 0235$+$164 to argue that it seemed to have a long quasi-period of $\sim$5.7 years, but the predicted outburst in 2004 was not detected (e.g., Raiteri et al.\ 2006). These authors then suggested that a $\sim 8$ year periodicity might be a better fit to the data. More recent optical observations provide some measure of support for that suggestion (Gupta et al.\ 2008a). Our analysis of the archival RXTE/ASM data provides the first claim of an x-ray periodicity for this popular blazar. Using scaling relations between low-frequency extended radio emission and high frequency beamed emission (Giovannini et al.\ 2001), Wu et al.\ (2007) have determined a rough value of the Doppler factor for the relativistic jet of AO 0235$+$164 of $\delta \simeq 10.5$.
The other blazar that seems to show a periodic component to the variability revealed by the RXTE/ASM data set is 1ES 2321$+$419 ($z = 0.059$; Padovani \& Giommi 1995). This source is significantly fainter than AO 0235$+$164 in the optical band and has therefore received much less attention. Still, it has been studied since its x-ray detection (Elvis et al.\ 1992) in the optical (e.g., Falomo \& Kotilainen 1999) and radio (e.g., Kollgaard et al.\ 1996) bands and a spectral energy distribution is available (Nieppola et al.\ 2006). There have not been any sustained efforts to look for variability in this blazar in any waveband. A rather low estimate of $\delta \simeq 1.7$ is available (Wu et al.\ 2007). \subsection{Unlikely Explanations for Periodic Variability} The simplest explanation for such nearly periodic x-ray variability in most AGN might be that the flux arises from hot spots, spiral shocks or other non-axisymmetric phenomena related to orbital motions very close to the innermost stable circular orbit around a supermassive black hole (SMBH) (e.g., Zhang \& Bao 1991; Chakrabarti \& Wiita 1993; Mangalam \& Wiita 1993). In the case of AO 0235$+$164 a 17 day period at the inner edge of a disk corresponds to a SMBH mass of $1.7 \times 10^{9} M_{\odot}$ for a non-rotating BH and $1.1 \times 10^{10} M_{\odot}$ for a maximally rotating BH (e.g., Gupta et al.\ 2009). While the latter mass is quite high, the former is a reasonable value, so it is conceivable that a temporary hot spot in the inner region of an accretion disk is somehow responsible for the observed quasi-periodic variations. However, for 2321$+$419 a 420 day period yields a SMBH mass of $7.5 \times 10^{10} M_{\odot}$ for a non-rotating BH and $4.8 \times 10^{11} M_{\odot}$ for a maximally rotating BH.
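The non-rotating masses quoted here can be reproduced with a short calculation, a sketch assuming a Keplerian orbital period at the Schwarzschild ISCO radius $6GM/c^2$ and a $(1+z)$ time-dilation correction of the observed period (the constant values and names below are ours, not those of Gupta et al.\ 2009):

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8        # speed of light, m s^-1
M_SUN = 1.989e30   # solar mass, kg
DAY = 86400.0      # seconds per day

def isco_mass(p_obs_days, z):
    """SMBH mass (in solar masses) whose Schwarzschild ISCO orbital
    period matches the observed period after removing cosmological
    time dilation.

    For a circular orbit at r = 6GM/c^2 the period is
    T = 2*pi*sqrt(r^3/(GM)) = 2*pi*6^{3/2}*GM/c^3.
    """
    p_em = p_obs_days * DAY / (1.0 + z)
    return p_em * C ** 3 / (2.0 * math.pi * 6.0 ** 1.5 * G) / M_SUN
```

With the observed 17 day period at $z=0.94$ this gives $\approx 1.7 \times 10^{9} M_{\odot}$, and the 420 day period at $z=0.059$ gives $\approx 7.5 \times 10^{10} M_{\odot}$, in agreement with the values quoted above; the maximally rotating cases require the smaller Kerr ISCO radius.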
In this case, in order to reduce the SMBH mass to a quite high, but conceivable, value of $3 \times 10^9 M_{\odot}$, in either case of BH spin the dominant hot spot would need to be located at a distance of $\sim 51 GM/c^2$ from the SMBH, which seems to be rather far away for a hot spot that contributes significant flux. Another reason to discount this hot spot scenario for both blazars is that blazar disks are almost certainly close to face-on but large hot spot amplification arises from near-field gravitational lensing and that is strong only if the observer's line-of-sight is close to the disk plane (e.g., Bao, Wiita \& Hadrava 1996). A somewhat related possibility is that we are seeing the interaction between a second black hole and the disk surrounding the primary one, as seems to be the case for OJ 287 (e.g.\ Valtonen et al.\ 2008). However, such an orbital cycle should probably yield a more precise and long-lived periodicity than we have found for the x-ray emission of both AO 0235$+$164 and 2321$+$419, so we believe this hypothesis is quite unlikely. In addition, for AO 0235$+$164 the rather short period strongly disfavors the binary black hole hypothesis. The microlensing hypothesis does not appear to be able to produce a quasi-periodic component to the variability, so even if it does play a role in producing some variability in the observed flux of AO 0235$+$164, it probably is irrelevant to the fluctuations of interest here. \subsection{More Likely Explanations} As the preponderance of other evidence has the x-rays seen from blazars, particularly in active phases, emerge from their jets, and not their putative accretion disks or coronae, it makes sense to examine how such quasi-periods could be related to jet structures. Turbulence behind a shock propagating down a jet (e.g., Marscher, Gear \& Travis 1992) is a very logical, but not yet carefully treated, way to produce variability. 
For such turbulent flows the dominant eddies' turnover times should yield short-lived, quasi-periodic, but probably modest, fluctuations in emissivity. Regions at different distances behind the shock will emit preferentially at different wavelengths. But because Doppler boosting provides great amplification (roughly by a factor of $\delta^2$ to $\delta^3$, e.g., Blandford \& Rees 1978), even weak intrinsic flux variations produced by small changes in the magnetic field strength or relativistic electron density can be raised to the level at which they can be detected (e.g., Qian et al.\ 1991). This same Doppler boosting reduces the time-scale at which these fluctuations are observed by a factor of $\delta$ compared to the time-scale they possess in the emission frame. Although it is difficult to quantify these effects precisely, this mechanism does seem to provide an excellent way to understand the optical intra-night variability with quasi-periods of tens of minutes that are only occasionally seen and that have timescales that vary from night to night in the blazar S5 0716$+$714 (Gupta et al.\ 2009). The same turbulent shocked-jet scenario could be playing out in a blazar such as AO 0235$+$164, where the fairly high value of $\delta \approx 10$ allows modest fluctuations to become easily visible; however, the observed period of $\sim 17$ days converts into an eddy turnover time of $\sim 87$ days in the rest frame for such a Doppler factor. This would require a much larger, but still reasonably sized, eddy to be involved. For 2321$+$419 this turbulent jet explanation is somewhat less likely to work if the Doppler factor is only $\sim 1.7$, as it would then produce amplifications of 5 or less. Moreover, the nominal rest-frame eddy turnover time would be $\sim 675$ days, which implies quite a large eddy and that the x-ray variations were arising at distances $> 1$ pc from the nucleus.
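The rest-frame time scales quoted in this paragraph follow from combining the Doppler compression by $\delta$ with the cosmological stretch by $(1+z)$; a one-line sketch (the function name is ours, and $\delta \approx 10$ is used for AO 0235$+$164 as in the estimate above):

```python
def rest_frame_timescale(t_obs, delta, z):
    """Emission-frame time scale from an observed one.

    Doppler boosting compresses observed time scales by a factor
    delta, while cosmological expansion stretches them by (1 + z), so
    t_em = t_obs * delta / (1 + z).
    """
    return t_obs * delta / (1.0 + z)
```

This reproduces the $\sim 87$ day eddy turnover time for AO 0235$+$164 ($t_{\rm obs}=17$ d, $\delta=10$, $z=0.94$) and the $\sim 675$ day value for 2321$+$419 ($t_{\rm obs}=420$ d, $\delta=1.7$, $z=0.059$).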
It is quite likely that blazar jets will possess some essentially helical structure, such as can be easily induced by magnetohydrodynamical instabilities in a magnetized jet (e.g., Hardee \& Rosen 1999) or through precession. Indeed, in the few cases where the innermost portions of radio jets can be resolved transversely using VLBI, edge-brightened and non-axisymmetric structures are seen (e.g., M87, Ly, Walker \& Junor 2007; Cen A, Bach et al.\ 2008; Mkn 501, Piner et al.\ 2009). A relativistic shock propagating down such a perturbed jet will induce significantly increased emission at the locations where the shock intersects a region of enhanced magnetic field and/or electron density corresponding to such a non-axisymmetric structure. Thanks to the extreme sensitivity of Doppler boosting to viewing angle, very substantial changes in the amplitude (and polarization) of radio and optical jet emission will be seen by an observer at fixed angle to the jet axis as the most strongly emitting region effectively swings past the observer (e.g., Camenzind \& Krockenberger 1992; Gopal-Krishna \& Wiita 1992). There is no reason why the intersection of a relativistic shock with a quasi-helical perturbation would not perform similar feats for the x-ray emission, even though these high energy photons are unlikely to emerge from exactly the same jet regions as the optical and radio photons (e.g., Marscher et al.\ 2008) and should have somewhat different temporal dependences. Because of the apparently large Doppler factor of the jet of AO 0235$+$164, the observed substantial and nearly periodic components in its x-ray light curve can be naturally attributed to the intersections of a relativistic shock with successive twists of a non-axisymmetric jet structure. 
The apparently modest Doppler factor of the jet in 2321$+$419 makes this explanation less immediately attractive; however, all other hypotheses work even less well for this source if the estimated low Doppler factor is correct. So the intersection of a shock with a non-axisymmetric jet structure also seems to be the most plausible explanation for the behavior of this blazar. \section{Conclusions} We searched the RXTE/ASM light curves of 24 blazars, extending over 12 years, for possible periodic variations using structure functions. Many of them showed apparent periods, but the majority of these were close to one year and presumably not real. The four blazars that showed indications of non-artifactual periods were examined further using discrete correlation functions and Lomb-Scargle periodograms. Two blazars showed nearly periodic components to their x-ray variability, common to all three methods, and had low ($<0.03$) false alarm probabilities according to the LSP method: AO 0235$+$164 shows an observed period of $\sim 17$ days while 1ES 2321$+$419 has one of $\sim 420$ days. It is quite unlikely that these nearly periodic fluctuations are caused by orbiting hot spots on or above accretion disks or by a companion black hole crashing through an accretion disk on each orbit. It is even less likely that these fluctuations are produced by microlensing. Turbulence behind a shock moving through a relativistic jet may provide an adequate explanation of our results if the variations are dominated by large-scale eddies moving into and out of our line of sight. Still, the most attractive hypothesis to explain these variations appears to be the intersection of a shock with an essentially helical structure wrapping around the relativistic jet. In this case, x-ray polarization variations should be correlated with the flux changes (e.g., Gopal-Krishna \& Wiita 1992) and might eventually provide a way to distinguish between these different possible explanations.
\acknowledgments We thank the referee for several suggestions that improved the presentation of the results. PJW's work is supported in part by a subcontract to GSU from NSF grant AST 05-07529 to the University of Washington. This research has made use of data obtained through the High Energy Astrophysics Science Archive Research Center Online Service, provided by NASA's Goddard Space Flight Center (GSFC). ASM results were provided by the ASM and RXTE teams at MIT and at the RXTE SOF and GOF at NASA's GSFC. This research has made use of the NASA/IPAC Extragalactic Database (NED) which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with NASA.
\section{Introduction} A component of particle motion in magnetized plasmas is the stochastic electric drift produced by the electric field of the turbulence and by the confining magnetic field. This drift determines a trapping effect or eddy motion in turbulence with slow time variation \cite{kraichnan}. Typical particle trajectories show sequences of trapping events (trajectory winding on almost closed paths) and long jumps. Numerical simulations have shown that the trapping process completely changes the statistical properties of the trajectories. Particle motion in a stochastic potential was extensively studied \cite{mccomb}-\cite{Bcarte}, but the process of trapping was not described until recently. New statistical methods were developed \cite{V98}, \cite{VS04} that made it possible to determine the effects of trapping. These are semi-analytical methods based on a set of deterministic trajectories obtained from the Eulerian correlation of the stochastic velocity. It was shown that trapping determines memory effects, quasi-coherent behavior and non-Gaussian distributions \cite{VS04}. The trapped trajectories have quasi-coherent behavior and they form structures similar to fluid vortices. The diffusion coefficients decrease due to trapping and their scaling in the parameters of the stochastic field is modified. We have shown that anomalous diffusion appears due to collisions and average flows. A review of the effects of trapping on test particle statistics and on turbulent transport is presented in the first part of this paper. The effects of trajectory trapping on the nonlinear dynamics of the test modes for the drift turbulence are presented in the second part of the paper. The semi-analytical methods developed for test particles are extended to test mode evolution in turbulent magnetized plasmas. Test modes are usually studied for modelling wave-wave interaction in turbulent plasmas \cite{K02}.
A different perspective is developed here by considering test modes in turbulent plasmas. They are described by nonlinear equations with the advection term containing the stochastic $\mathbf{E}\times \mathbf{B}$ drift described by a stochastic field with known statistical characteristics. The growth rate of the test modes is determined as a function of these statistical parameters. We develop a Lagrangian approach of the type introduced by Dupree \cite{D66}, \cite{D72}. The difference is that in Dupree's method the stochastic trapping of trajectories was neglected, and consequently the results apply only to quasilinear turbulence. Our method takes into account the trapping and the non-standard trajectory statistics that it yields, and it is thus able to describe the nonlinear effects appearing in strong turbulence. The paper is organized as follows. The test particle model is presented in Section 2. Section 3 contains a short description of the statistical methods. The nonlinear effects of trajectory trapping on test particle statistics and transport are presented in Section 4. The general physical explanation for the anomalous diffusion regimes appearing in the presence of trajectory trapping and the formation of trajectory structures are discussed in this section. The problem of test modes in turbulent plasmas for the case of drift turbulence is presented in Section 5, where the growth rate and the frequency are determined as functions of the statistical characteristics of the turbulence. The complex effects of trajectory trapping on the drift modes are analyzed in Section 6. The conclusions are summarized in Section 7. \section{Test particle model} The test particle studies rely on known statistical characteristics of the stochastic field. They are determined from experimental studies or numerical simulations. The main aim of these studies is to determine the diffusion coefficients.
The statistics of test particle trajectories provides the transport coefficients in turbulent plasmas without approaching the very complicated problem of self-consistent turbulence, which explains the detailed mechanism of generation and saturation of the turbulent potential. The possible diffusion regimes can be obtained by considering various models for the statistics of the stochastic field. We consider in slab geometry an electrostatic turbulence represented by an electrostatic potential $\phi ^{e}(\mathbf{x},t),$ where $\mathbf{x}\equiv (x_{1},x_{2})$ are the Cartesian coordinates in the plane perpendicular to the confining magnetic field directed along the $z$ axis, $\mathbf{B}=B\mathbf{e}_{z}$. The test particle motion in the guiding center approximation is determined by \begin{equation} \frac{d\mathbf{x}(t)}{dt}=\mathbf{v}(\mathbf{x},t)\equiv -\mathbf{\nabla }\phi (\mathbf{x},t)\times \mathbf{e}_{z}, \label{ec1} \end{equation} where $\mathbf{x}(t)$ represents the trajectory of the particle guiding center, $\mathbf{\nabla }$ is the gradient in the $(x_{1},x_{2})$ plane and $\phi (\mathbf{x},t)=\phi ^{e}(\mathbf{x},t)/B$. The electrostatic potential $\phi (\mathbf{x},t)$ is considered to be a stationary and homogeneous Gaussian stochastic field, with zero average. It is completely determined by the two-point Eulerian correlation function (EC), $E(\mathbf{x},t),$ defined by \begin{equation} E(\mathbf{x},t)\equiv \left\langle \phi (\mathbf{x}^{\prime },t^{\prime })\,\phi (\mathbf{x}^{\prime }+\mathbf{x},t^{\prime }+t)\right\rangle . \label{pec} \end{equation} The average $\left\langle ...\right\rangle $ is the statistical average over the realizations of $\phi (\mathbf{x},t),$ or the space and time average over $\mathbf{x}^{\prime }$ and $t^{\prime }$.
This function evidences three parameters that characterize the (isotropic) stochastic field: the amplitude $\Phi =\sqrt{E(\mathbf{0},0)}$, the correlation time $\tau _{c},$ which is the decay time of the Eulerian correlation, and the correlation length $\lambda _{c},$ which is the characteristic decay distance. These three parameters combine into the dimensionless Kubo number
\begin{equation}
K=\tau _{c}/\tau _{fl} \label{K}
\end{equation}
where $\tau _{fl}=\lambda _{c}/V$ is the time of flight of the particles over the correlation length and $V=\Phi /\lambda _{c}$ is the amplitude of the stochastic velocity. The diffusion coefficient is determined as (see \cite{Taylor})
\begin{equation}
D_{i}(t)=\int_{0}^{t}d\tau \;L_{ii}(\tau ) \label{D}
\end{equation}
where
\begin{equation}
L_{ij}(t)\equiv \left\langle v_{i}(\mathbf{0},0)\,v_{j}(\mathbf{x}(t),t)\right\rangle \label{CL}
\end{equation}
is the correlation of the Lagrangian velocity (LVC). It is obtained using the decorrelation trajectory method, a semi-analytical approach presented below. Equation (\ref{ec1}) represents the nonlinear kernel of the test particle problem. The statistical methods will be presented for Eq. (\ref{ec1}) for simplicity. They were developed to include complex models with other components of the motion (particle collisions, average flows, motion along the confining magnetic field, etc.). The effects of these components on the transport will be discussed in Section 4.

\section{The nested subensemble approach}

Test particle transport in magnetized plasmas in the nonlinear regime characterized by trajectory trapping was analytically studied only in the last decade, when the decorrelation trajectory method (DTM) \cite{V98} and the nested subensemble approach (NSA) \cite{VS04} were developed. Trajectory trapping is essentially related to the invariance of the Lagrangian potential.
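The invariance of the Lagrangian potential that underlies trapping can be checked directly on Eq. (\ref{ec1}). The following minimal sketch (illustrative only: a hypothetical single-mode potential $\phi =\sin x_{1}\sin x_{2}$, not one of the stochastic realizations considered here) integrates a trajectory with a fourth-order Runge-Kutta scheme and verifies that $\phi $ stays constant along the orbit:

```python
import numpy as np

# Illustrative check of Eq. (ec1): dx/dt = -grad(phi) x e_z in a static,
# single-mode potential phi(x, y) = sin(x) sin(y) (an assumed toy field).
# In a static field the trajectory follows a contour line of phi, so the
# Lagrangian potential is invariant and the orbit is trapped in one cell.

def phi(p):
    return np.sin(p[0]) * np.sin(p[1])

def velocity(p):
    # v = (-dphi/dy, +dphi/dx), the E x B drift of Eq. (ec1)
    return np.array([-np.sin(p[0]) * np.cos(p[1]),
                     np.cos(p[0]) * np.sin(p[1])])

def rk4_step(p, h):
    k1 = velocity(p)
    k2 = velocity(p + 0.5 * h * k1)
    k3 = velocity(p + 0.5 * h * k2)
    k4 = velocity(p + h * k3)
    return p + (h / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

p = np.array([1.0, 1.2])        # starting point inside a potential cell
phi0 = phi(p)
drift = 0.0
for _ in range(4000):           # integrate over several turnover times
    p = rk4_step(p, 0.01)
    drift = max(drift, abs(phi(p) - phi0))

print(drift)                    # numerical drift of the invariant: tiny
```

Deterministic trajectories of this kind, computed from the subensemble-averaged potential instead of a single mode, are the building blocks of the decorrelation trajectory computation described below.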
Thus, a statistical method is adequate for the study of this process if it is compatible with the invariance of the potential. The NSA is the development of the DTM into a systematic expansion that obtains much more statistical information. The main idea in the NSA is to study the stochastic equation (\ref{ec1}) in subensembles of realizations of the stochastic field. First, the whole set of realizations $R$ is separated into subensembles $(S1),$ which contain all realizations with given values of the potential and of the velocity at the starting point of the trajectories, $\mathbf{x=0}$, $t=0$:
\begin{equation}
(S1):\quad \phi (\mathbf{0},0)=\phi ^{0},\quad \mathbf{v}(\mathbf{0},0)=\mathbf{v}^{0}. \label{2}
\end{equation}
Then, each subensemble $(S1)$ is separated into subensembles $(S2)$ corresponding to fixed values of the second derivatives of the potential at $\mathbf{x=0}$, $t=0$
\begin{equation}
(S2):\quad \phi _{ij}(\mathbf{0},0)\equiv \left. \frac{\partial ^{2}\phi (\mathbf{x},t)}{\partial x_{i}\partial x_{j}}\right| _{\mathbf{x}=\mathbf{0},t=0}=\phi _{ij}^{0} \label{s2}
\end{equation}
where $ij=11,12,22.$ Continuing this procedure up to an order $n$, a system of nested subensembles is constructed. The stochastic (Eulerian) potential and velocity in a subensemble are Gaussian fields, but non-stationary and non-homogeneous, with space and time dependent averages and correlations. The correlations are zero at $\mathbf{x=0}$, $t=0$ and increase with the distance and time. The averages of the potential and velocity in a subensemble depend on the parameters of that subensemble and of the subensembles that include it. They are determined by the Eulerian correlation of the potential (see \cite{VS04} for details).
The stochastic equation (\ref{ec1}) is studied in each highest order subensemble $(Sn).$ The average Eulerian velocity determines an average motion in each $(Sn).$ Neglecting the fluctuations of the trajectories, the average trajectory in $(Sn),$ $\mathbf{X}(t;Sn),$ is obtained from
\begin{equation}
\frac{dX_{i}(t;Sn)}{dt}=\varepsilon _{ij}\frac{\partial \Phi (\mathbf{X};Sn)}{\partial X_{j}}, \label{xmed}
\end{equation}
where $\varepsilon _{ij}$ is the antisymmetric tensor and $\Phi (\mathbf{X};Sn)$ is the average potential in $(Sn)$. This approximation consists in neglecting the fluctuations of the trajectories in the subensemble $(Sn).$ It is rather good because it is performed in the subensemble $(Sn),$ where the trajectories are similar due to the fact that they are super-determined. Besides the necessary and sufficient initial condition $\mathbf{x}(0)=\mathbf{0,}$ they have supplementary initial conditions determined by the definition (\ref{2})-(\ref{s2}) of the subensembles. The strongest condition is the initial potential $\phi (\mathbf{0},0)=\phi ^{0},$ which is a conserved quantity in the static case and determines comparable sizes of the trajectories in a subensemble. Moreover, the amplitude of the velocity fluctuations in $(Sn)$, the source of the trajectory fluctuations, is zero at the starting point of the trajectories and reaches the value corresponding to the whole set of realizations only asymptotically. This reduces the differences between the trajectories in $(Sn)$ and thus their fluctuations. The statistics of trajectories for the whole set of realizations (in particular the LVC) is obtained as weighted averages of these trajectories $\mathbf{X}(t;Sn).$ The weighting factor is the probability that a realization belongs to the subensemble $(Sn);$ it is analytically determined. Essentially, this method reduces the problem of determining the statistical behavior of the stochastic trajectories to the calculation of weighted averages of some smooth, deterministic trajectories determined from the EC of the stochastic potential.
This semi-analytical statistical approach (the nested subensemble method) is a systematic expansion that satisfies at each order $n>1$ all the statistical conditions required by the invariance of the Lagrangian potential in the static case. The order $n=1$ corresponds to the decorrelation trajectory method introduced in \cite{V98}. In this case only the average potential is conserved. The nested subensemble method is quickly convergent. This is a consequence of the fact that the mixing of periodic trajectories, which characterizes this nonlinear stochastic process, is directly described at each order of our approach. The results obtained in first order (the decorrelation trajectory method) for $D(t)$ are practically not modified in the second order \cite{VS04}. Thus, the decorrelation trajectory method is a good approximation for determining diffusion coefficients. The second order nested subensemble method is important because it provides detailed statistical information on trajectories: the probability of the displacements and of the distance between neighboring trajectories, in the whole ensemble of realizations and also in the subensembles $(S1).$ A high degree of coherence is thus evidenced in the stochastic motion of trapped trajectories.

\section{Trapping effects on test particles}

\subsection{Trajectory structures}

Detailed statistical information about particle trajectories was obtained using the nested subensemble method \cite{VS04}. This method determines the statistics of the trajectories that start in points with given values of the potential. This makes it possible to evidence the high degree of coherence of the trapped trajectories. The trapped trajectories correspond to large absolute values of the initial potential, while the trajectories starting from points with the potential close to zero perform long displacements before decorrelation. These two types of trajectories have completely different statistical characteristics \cite{VS04}.
The trapped trajectories have a quasi-coherent behavior. Their average displacement, dispersion and probability distribution function saturate in a time $\tau _{s}$. The time evolution of the square distance between two trajectories is very slow, showing that neighboring particles have a coherent motion for a long time, much longer than $\tau _{s}$. They are characterized by a strong clump effect, with an increase of the average square distance that is slower than the Richardson law. These trajectories form structures, which are similar to fluid vortices and represent eddying regions. The size and the build-up time of the structures depend on the value of the initial potential. Trajectory structures appear with all sizes, but their characteristic formation time increases with the size. These structures or eddying regions are permanent in static stochastic potentials. The saturation time $\tau _{s}$ represents the average time necessary for the formation of the structure. In time dependent potentials the structures with $\tau _{s}>\tau _{c}$ are destroyed and the corresponding trajectories contribute to the diffusion process. These free trajectories have a continuously growing average displacement and dispersion. They have incoherent behavior and the clump effect is absent. The probability distribution functions for both types of trajectories are non-Gaussian. The average size of the structures $S(K)$ in a time dependent potential is plotted in Figure 1. One can see that for $K<1$ the structures are absent ($S\cong 0$) and that they appear for $K>1$ and continuously grow as $K$ increases. The dependence on $K$ is a power law with the exponent dependent on the EC of the potential.
The exponent is 0.19 for the Gaussian EC and 0.35 for a large EC that decays as $1/r^{2}.$

\begin{center}
\resizebox{3.7in}{!}{\includegraphics{./figure2.eps}}

{\small Figure 1: The average size of the trajectory structures for Gaussian EC (dashed line) and for an EC that decays as $1/r^{2}$ (continuous line). }
\end{center}

\subsection{Anomalous diffusion regimes}

Test particle studies connected with experimental measurements of the statistical properties of the turbulence provide the transport coefficients, with the condition that there is space-time scale separation between the fluctuations and the average quantities. Particle density advected by the stochastic $\mathbf{E\times B}$ drift in turbulent plasmas leads in these conditions to a diffusion equation for the average density, with the diffusion coefficient given by the asymptotic value of Eq. (\ref{D}). Recent numerical simulations \cite{BJ03} confirm a close agreement between the diffusion coefficient obtained from the density flux and the test particle diffusion coefficient. Experiment-based studies of test particle transport make it possible to strongly simplify the complicated self-consistent problem of turbulence and to model the transport coefficients by means of test particle stochastic advection. The running diffusion coefficient $D(t)$ is defined as the time derivative of the mean square displacement of test particles and is determined, according to Eq. (\ref{D}), as the time integral of the Lagrangian velocity correlation (LVC). Thus, the test particle approach is based on the evaluation of the LVC for a given EC of the fluctuating potential. The turbulent transport in magnetized plasmas is a strongly nonlinear process. It is characterized by the trapping of the trajectories, which has a strong influence on the transport coefficient and on the statistical characteristics of the trajectories.
The transport induced by the $\mathbf{E\times B}$ stochastic drift in electrostatic turbulence \cite{V04} (including effects of collisions \cite{V00}, average flows \cite{V01}, motion along the magnetic field \cite{V02}, effect of magnetic shear \cite{P07}) and the transport in magnetic turbulence \cite{V03}, \cite{N04} were studied in a series of papers using the decorrelation trajectory method. It was also shown that a direct transport (an average velocity) appears in turbulent magnetized plasmas due to the inhomogeneity of the magnetic field \cite{V06}-\cite{V08}. This statistical method was developed for the study of complex processes such as the zonal flow generation \cite{B03}, \cite{B05}. The results of all these studies are rather unexpected when the nonlinear effects are strong. The diffusion coefficients are completely different from those obtained in quasilinear conditions. A rich class of anomalous diffusion regimes is obtained, for which the dependence on the parameters is completely different from the scaling obtained in quasilinear turbulence. All the components of particle motion (parallel motion, collisions, average flows, etc.) have a strong influence on the diffusion coefficients in the nonlinear regimes characterized by the presence of trajectory trapping. The reason for these anomalous transport regimes can be understood by analyzing the shape of the correlation of the Lagrangian velocity for particles moving by the $\mathbf{E}\times \mathbf{B}$ drift in a static potential \cite{VPS}. In the absence of trapping, the typical LVC for a static field is a function that decays to zero in a time of the order of $\tau _{fl}=\lambda _{c}/V$. This leads to Bohm type asymptotic diffusion coefficients $D_{B}=V^{2}\tau _{fl}=V\lambda _{c}$. Only a constant $c$ is influenced by the EC of the stochastic field, and the diffusion coefficient is $D=cD_{B}$ for all EC's.
In the case of the $\mathbf{E\times B}$ drift, a completely different shape of the LVC is obtained for static potentials due to trajectory trapping. A typical example of the LVC is presented in Figure 2. This function decays to zero in a time of the order of $\tau _{fl},$ but at later times it becomes negative, reaches a minimum and then decays to zero with a long, negative tail. The tail has a power law decay with an exponent that depends on the EC of the potential \cite{V04}. The positive and negative parts compensate such that the integral of $L(t)$, the running diffusion coefficient $D(t)$, decays to zero. The transport in a static potential is thus subdiffusive. The long time tail of the LVC shows that the stochastic trajectories in a static potential have a long time memory. This stochastic process is unstable in the sense that any weak perturbation produces a strong influence on the transport. A perturbation represents a decorrelation mechanism and its strength is characterized by a decorrelation time $\tau _{d}$. Weak perturbations correspond to long decorrelation times, $\tau _{d}>\tau _{fl}.$ In the absence of trapping, such a weak perturbation does not produce a modification of the diffusion coefficient because the LVC is zero at $t>\tau _{fl}.$ In the presence of trapping, which is characterized by a long time LVC as in Figure 2, such a perturbation influences the tail of the LVC and destroys the equilibrium between the positive and the negative parts. Consequently, the diffusion coefficient is a \textit{decreasing function of} $\tau _{d}.$ This means that when the decorrelation mechanism becomes stronger ($\tau _{d}$ decreases) the transport increases. This is a consequence of the fact that the long time LVC is negative. This behavior is completely different from that obtained in stochastic fields that do not produce trapping. In that case, the transport is stable to weak perturbations.
An influence of the decorrelation can appear only when the latter is strong, such that $\tau _{d}<\tau _{fl},$ and it determines an increase of the diffusion coefficient with the increase of $\tau _{d}$. The inverse behavior appearing in the presence of trapping is determined by the fact that a stronger perturbation (with smaller $\tau _{d}$) liberates a larger number of trajectories, which contribute to the diffusion.

\begin{center}
\resizebox{3.7in}{!}{\includegraphics{./figure1.eps}}

{\small Figure 2: Typical Lagrangian velocity correlation in static potential.}
\end{center}

\vspace*{0.1in}

The decorrelation can be produced, for instance, by the time variation of the stochastic potential, which produces the decay of both Eulerian and Lagrangian correlations after the correlation time $\tau _{c}$. The decorrelation time in this case is $\tau _{c},$ and it is usually represented by a dimensionless parameter, the Kubo number defined by Eq. (\ref{K}). The transport becomes diffusive with an asymptotic diffusion coefficient that scales as $D_{tr}=cV\lambda _{c}K^{\gamma }$, with $\gamma $ in the interval $[-1,0]$ (trapping scaling \cite{V04}). The diffusion coefficient is a decreasing function of $\tau _{c}$ in the nonlinear regime $K>1$. For other types of perturbations, their interaction with the trapping process produces more complicated nonlinear effects. For instance, particle collisions lead to the generation of a positive bump on the tail of the LVC \cite{V00}, due to the property of the 2-dimensional Brownian motion of returning to already visited places. Other decorrelation mechanisms appearing in plasmas are average components of the velocity, like the poloidal rotation \cite{V01}, or the parallel motion, which determines decorrelation when the potential has a finite correlation length along the confining magnetic field. The effects of an average component of the velocity are discussed in Section 5.1 in connection with drift turbulence.
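The sensitivity of the transport to weak decorrelation can be illustrated with a toy Lagrangian velocity correlation (an assumed analytic form, $L(t)=(1-t)e^{-t}$ in units of $\tau _{fl}$, not the LVC computed by the DTM): its positive and negative parts compensate, so the running diffusion coefficient decays to zero, while an exponential decorrelation factor $\exp (-t/\tau _{d})$ cuts the negative tail and yields a finite asymptotic coefficient that decreases as $\tau _{d}$ increases.

```python
import numpy as np

# Toy model (assumed form, not the paper's computed LVC): a trapping-type
# Lagrangian velocity correlation with a compensating negative tail,
# L(t) = (1 - t) exp(-t), whose integral over [0, infinity) vanishes.

t = np.linspace(0.0, 60.0, 60001)
dt = t[1] - t[0]
L = (1.0 - t) * np.exp(-t)

# Running diffusion coefficient D(t) = int_0^t L(tau) dtau -> 0:
# subdiffusive transport in the unperturbed (static) case.
D_run = np.cumsum(L) * dt

def D_asymptotic(tau_d):
    """Asymptotic D when a decorrelation with time tau_d is superposed."""
    return float(np.sum(L * np.exp(-t / tau_d)) * dt)

D_weak = D_asymptotic(8.0)      # weak perturbation, long tau_d
D_strong = D_asymptotic(2.0)    # stronger perturbation, shorter tau_d
print(D_run[-1], D_weak, D_strong)
```

For this model the integral can be done exactly, $D(\tau _{d})=\tau _{d}/(1+\tau _{d})^{2}$, so in the weak-perturbation range $\tau _{d}>\tau _{fl}$ the diffusion coefficient indeed decreases as the decorrelation weakens.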
\section{Test modes on drift turbulence}

Test particle trajectories are strongly related to plasma turbulence. The dynamics of the plasma basically results from the Vlasov-Maxwell system of equations, which represents the conservation laws for the distribution functions along particle trajectories. Studies of plasma turbulence based on trajectories were initiated by Dupree \cite{D66}, \cite{D72} and developed especially in the seventies (see the review paper \cite{K02} and references therein). These methods do not account for trajectory trapping, and thus they apply to the quasilinear regime or to unmagnetized plasmas. A very important problem that has to be understood is the effect of the non-standard statistical characteristics of the test particle trajectories on the evolution of the instabilities and of the turbulence in magnetized plasmas. We extend the Lagrangian methods of the type of \cite{D72}, \cite{SV88}, \cite{VSM} to the nonlinear regime characterized by trapping. We study linear modes on a turbulent plasma with the statistical characteristics of the turbulence considered known. The dispersion relation for such test modes is determined as a function of the characteristics of the turbulence. We consider the drift instability in slab geometry with constant magnetic field. The combined effect of the parallel motion of the electrons (non-adiabatic response) and of the finite Larmor radius of the ions destabilizes the drift waves. The gyrokinetic equations are not linearized around the unperturbed state as in the linear theory, but around a turbulent state with known spectrum. The perturbations of the electron and ion distribution functions are obtained from the gyrokinetic equation using the method of characteristics, as integrals along test particle trajectories of the source terms determined by the average density gradient. The background turbulence produces two modifications of the equation for the linear modes.
One consists in the stochastic $\mathbf{E\times B}$ drift that appears in the trajectories, and the other is the fluctuation of the diamagnetic velocity. Both effects are important for the ions, while the response of the electrons is approximately the same as in a quiescent plasma. They depend on the parameters of the turbulence.

\subsection{The statistics of the characteristics}

The solution for the potential in the zero Larmor radius limit is
\begin{equation}
\phi (\mathbf{x},z,t)=\phi _{0}(\mathbf{x-V}_{\ast }t,z), \label{s0}
\end{equation}
where $\phi _{0}$ is the initial condition and $\mathbf{V}_{\ast }$ is the diamagnetic velocity. This shows that the potential is not changed but displaced with the diamagnetic velocity. The finite Larmor radius effects consist in the modification of the amplitude and of the shape of the potential, but this appears on a much slower time scale. The ordering of the characteristic times for the drift turbulence is
\begin{equation}
\tau _{\parallel }^{e}\ll \tau _{\ast }\ll \tau _{c}\ll \tau _{\parallel }^{i}, \label{ord}
\end{equation}
where $\tau _{\parallel }^{e},\tau _{\parallel }^{i}$ are the parallel decorrelation times for electrons and ions ($\tau _{\parallel }^{e,i}=\lambda _{\parallel }/v_{th}^{e,i},$ with $\lambda _{\parallel }$ the parallel correlation length and $v_{th}^{e,i}$ the thermal velocities), $\tau _{\ast }=\lambda _{c}/V_{\ast }$ is the characteristic time for the potential drift and $\tau _{c}$ is the correlation time of the potential. The linear and nonlinear regimes are determined by the position of the time of flight in this ordering.
The time of flight is much smaller than $\tau _{c}$ and much larger than $\tau _{\parallel }^{e}.$ The statistical characteristics of the trajectories essentially depend on the ratio $\tau _{\ast }/\tau _{fl}.$ The quasilinear case corresponds to $\tau _{\ast }/\tau _{fl}\ll 1$ ($V/V_{\ast }\ll 1$), which means turbulence with the amplitude of the $\mathbf{E}\times \mathbf{B}$ drift smaller than the diamagnetic velocity. The motion of the potential produces in this case a fast decorrelation and trapping does not appear. The probability of displacements is Gaussian and the diffusion coefficient is $D_{ql}=V^{2}\tau _{\ast }.$ The nonlinear case corresponds to $\tau _{\ast }/\tau _{fl}>1$ ($V/V_{\ast }>1$). The motion of the potential is slow, and trajectory structures produced by trapping exist in this case. The test particle motion in a drifting potential is obtained by a Galilean transformation from the motion produced by a stochastic $\mathbf{E}\times \mathbf{B}$ drift and an average velocity $V_{d}.$ This process was studied in \cite{V03}. It was shown that strips of open contour lines of the effective potential $\phi +xV_{d}$ appear due to an average velocity $V_{d}$ and that the width of these strips increases with $V_{d}$ until they completely eliminate the closed contour lines (for $V_{d}>V$). The Lagrangian correlation of the velocity in the presence of an average velocity $V_{d}<V$ does not decay to zero as in Figure 2, but has a positive asymptotic value at $t\rightarrow \infty .$ Consequently, the transport along the average velocity is superdiffusive in the static potential, and diffusive with a large diffusion coefficient (proportional to the average velocity) in the time dependent case. A part of the particles are trapped, and the others move on the strips of open contour lines of the effective potential.
The invariance of the distribution of the Lagrangian velocity shows that the average velocity of the free particles $V_{fr}^{\prime }$ fulfils the condition
\begin{equation}
n_{fr}V_{fr}^{\prime }=V_{d}, \label{vcons}
\end{equation}
and thus it is larger than the average velocity ($V_{fr}^{\prime }>V_{d}$). Here $n_{tr}$ is the fraction of trapped trajectories and $n_{fr}$ is the fraction of free trajectories at a given moment ($n_{tr}+n_{fr}=1$). This physical image leads, by changing the reference frame, to the following paradigm for the statistics of trajectories produced by the $\mathbf{E}\times \mathbf{B}$ drift in a moving potential. The trapped particles (structures) are advected by the moving potential, while the other particles have an average motion in the opposite direction with a velocity $V_{fr}$ such that
\begin{equation}
n_{fr}V_{fr}+n_{tr}V_{\ast }=0, \label{vcons2}
\end{equation}
which is the equivalent of Eq. (\ref{vcons}). This shows that there are particle flows in opposite directions induced by the drifting potential if the amplitude of the stochastic $\mathbf{E}\times \mathbf{B}$ velocity is larger than the velocity of the potential. This determines a splitting of the probability of displacements into two parts: the probability of the trapped and the probability of the free particles. The first is a peaked function that has constant width and moves with the velocity $V_{\ast }.$ The second is a Gaussian-like function with an average displacement $\left\langle x_{2}\right\rangle _{fr}=V_{fr}t=-V_{\ast }t\;n_{tr}/n_{fr}.$ The probability of displacements at $t<\tau _{c}$ is modeled by
\begin{equation}
P(x,y,t)=n_{tr}G(x,y-V_{\ast }t;S_{x},S_{y})+n_{fr}G(x,y-V_{fr}t;S_{x}+2D_{x}t,S_{y}+2D_{y}t), \label{prob}
\end{equation}
where $G(x,y;S_{x},S_{y})$ is the 2-dimensional Gaussian distribution with dispersions $S_{x},S_{y}.$ We have considered for simplicity the distribution of trapped particles as a Gaussian function, but with small (fixed) dispersion.
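The flow balance (\ref{vcons2}) and the two-component model (\ref{prob}) can be made concrete with a short sketch (the values of $n_{tr}$, of the dispersions and of the diffusion coefficient below are illustrative assumptions, not the paper's data):

```python
import numpy as np

# Sketch of the trapped/free counter-flows, Eq. (vcons2): the trapped
# fraction n_tr is advected with the potential at V_star while the free
# fraction moves backward at V_fr, so the total average velocity is zero.
# All parameter values are illustrative.

V_star = 1.0                     # velocity of the moving potential
n_tr = 0.3                       # assumed trapped fraction
n_fr = 1.0 - n_tr

V_fr = -V_star * n_tr / n_fr     # from n_fr*V_fr + n_tr*V_star = 0

def gaussian(y, mean, S):
    return np.exp(-(y - mean) ** 2 / (2.0 * S)) / np.sqrt(2.0 * np.pi * S)

def pdf_y(y, t, S0=0.05, D=0.1):
    """Marginal displacement pdf along the drift, modeled as in Eq. (prob):
    a narrow peak moving with the potential plus a spreading Gaussian."""
    trapped = gaussian(y, V_star * t, S0)              # fixed dispersion
    free = gaussian(y, V_fr * t, S0 + 2.0 * D * t)     # diffusive spread
    return n_tr * trapped + n_fr * free

y = np.linspace(-20.0, 20.0, 40001)
dy = y[1] - y[0]
p = pdf_y(y, t=5.0)
mean_y = np.sum(y * p) * dy
print(V_fr, mean_y)   # opposite flows compensate: mean displacement ~ 0
```

The first moment of the mixture is $n_{tr}V_{\ast }t+n_{fr}V_{fr}t=0$ by construction, which is the numerical counterpart of the compensation of the two flows.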
The shape of this function does not change these estimations much. The free trajectories have a dispersion that grows linearly in time.

\subsection{The growth rate of drift modes in turbulent plasma}

The average propagator for a mode with frequency $\omega $ and wave number $\mathbf{k}=\left( k_{1},k_{2}\right) $ is evaluated using the above results on trajectory statistics. It depends on the size $S(K)$ of the structures and on the fractions of trapped and free particles:
\begin{eqnarray}
&&\int_{-\infty }^{t}d\tau \left\langle \exp \left( -i\mathbf{k}\cdot \mathbf{x}(\tau )\right) \right\rangle \exp \left( i\omega (t-\tau )\right)  \notag \\
&=&i\exp \left( -k_{i}^{2}S_{i}^{2}\right) \left[ \frac{n_{tr}}{\omega +k_{y}V_{\ast }}+\frac{n_{fr}}{\omega +k_{y}V_{fr}+ik_{i}^{2}D_{i}}\right]  \label{prop}
\end{eqnarray}
where $\mathbf{x}(\tau )$ is the trajectory in the moving potential, integrated backward in time with the condition that it reaches $\mathbf{x}$ at time $t.$ The solution of the dispersion relation is obtained as
\begin{equation}
\omega =k_{y}V_{\ast }^{eff} \label{omeg}
\end{equation}
\begin{equation}
V_{\ast }^{eff}=V_{\ast }\frac{\Gamma _{0}\mathcal{F}(n_{fr}-n_{tr})+2n_{tr}}{2-\Gamma _{0}\mathcal{F}} \label{veff}
\end{equation}
\begin{equation}
\mathcal{F}=\exp \left( -\frac{1}{2}k_{i}^{2}S^{2}\right) \label{fstr}
\end{equation}
\begin{equation}
\gamma =\frac{\sqrt{\pi }}{\left| k_{z}\right| v_{Te}}\frac{k_{y}^{2}\left( V_{\ast }-V_{\ast }^{eff}\right) \left( V_{\ast }^{eff}-\frac{n_{tr}}{n_{fr}}V_{\ast }\right) }{2-\Gamma _{0}\mathcal{F}}-k_{i}^{2}D_{i}\frac{2-\Gamma _{0}\mathcal{F}n_{tr}}{2-\Gamma _{0}\mathcal{F}}+k_{i}k_{j}R_{ij}V_{\ast }^{eff} \label{gam}
\end{equation}
where $\Gamma _{0}=\exp (-b)I_{0}(b),$ $b=k_{\perp }^{2}\rho _{L}^{2}/2$ and $\rho _{L}$ is the ion Larmor radius.
The tensor $R_{ij}$ has the dimension of a length and is defined by
\begin{equation}
R_{ji}(\tau ,t)\equiv \int_{\tau }^{t}d\theta ^{\prime }\int_{-\infty }^{\tau -\theta ^{\prime }}d\theta \;M_{ji}(\left| \theta \right| ) \label{rji}
\end{equation}
where $M_{ij}$ is the Lagrangian correlation
\begin{equation}
M_{ji}(\left| \theta ^{\prime }-\theta \right| )\equiv \left\langle v_{j}\left( \mathbf{x}^{i}(\theta ^{\prime }),z,\theta ^{\prime }\right) \;\partial _{2}v_{i}\left( \mathbf{x}^{i}(\theta ),z,\theta \right) \right\rangle , \label{mji}
\end{equation}
and $v_{j}$ is the $\mathbf{E\times B}$ drift velocity. Several effects appear in the test mode characteristics due to the background turbulence. The spreading of the ion trajectories produces the diffusion $D_{i}$ that influences the growth rate (\ref{gam}) in both linear and nonlinear conditions. This term is similar to the result of Dupree, but the value of $D_{i}$ is influenced by trapping. Besides this, there are several influences that appear only in the nonlinear regime. The first is the factor $\mathcal{F}$ given by Eq. (\ref{fstr}), which is produced by the trajectory structures. It determines essentially the modification of the mode frequency. The flows of the ions induced by the drifting potential are represented by the fractions $n_{tr}$ and $n_{fr}.$ The tensor $R_{ij}$ is determined by the fluctuations of the diamagnetic velocity due to the background turbulence. We will analyze each of these processes in the next section.

\section{Trapping effects on the test modes}

Trajectory trapping has a complex influence on the modes. This can be understood by considering the evolution of the drift turbulence starting from a stochastic potential with very small amplitude, as can be deduced from the growth rates of the test modes.
The trajectories are Gaussian, there is no trapping in such a potential, and the only effect of the background turbulence is the diffusion of the ion trajectories, which produces resonance broadening. The well known results for drift modes in quasilinear turbulence are obtained
\begin{equation}
\omega =k_{y}V_{\ast }\frac{\Gamma _{0}}{2-\Gamma _{0}},\quad \gamma =\frac{\sqrt{\pi }}{\left| k_{z}\right| v_{Te}}\frac{\left( k_{y}V_{\ast }-\omega \right) \omega }{2-\Gamma _{0}}-k_{i}^{2}D_{ql}, \label{ql}
\end{equation}
where $D_{x}=D_{y}=D_{ql}=V^{2}\lambda _{c}/V_{\ast }.$ This shows that the modes with large $k$ are damped due to the ion trajectory diffusion as the amplitude of the potential increases. The maximum of the spectrum is for $\omega =k_{y}V_{\ast }/2$ and corresponds to $k_{\perp }\rho _{L}\sim 1.$ When the nonlinear stage is attained, for $V>V_{\ast },$ the first effect, produced while the fraction of trapped trajectories is still small, is due to the quasi-coherent component of the ion motion. The structures of ion trajectories determine the $\mathcal{F}$ factor (\ref{fstr}), which modifies the effective diamagnetic velocity (\ref{veff}) and the frequency $\omega $. At this stage the flows can be neglected ($n_{tr}\cong 0,$ $n_{fr}\cong 1$) and $R_{ji}\cong 0$ in Eqs. (\ref{omeg})-(\ref{gam}), so that only the factor $\mathcal{F}$ is important. It is interesting to note that this factor appears in Eqs. (\ref{omeg})-(\ref{gam}) multiplying $\Gamma _{0},$ although they come from different sources ($\mathcal{F}$ from the propagator and $\Gamma _{0}$ from the gyro-average of the mode potential). This shows that the trapping or eddying motion has the same attenuation effect as the gyro-average.
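The combined effect of the structure factor $\mathcal{F}$ and of the ion flows can be sketched numerically from Eqs. (\ref{omeg})-(\ref{gam}) (illustrative parameters; the diamagnetic-fluctuation term $R_{ij}$ is neglected and the prefactor $\sqrt{\pi }/\left| k_{z}\right| v_{Te}$ is set to one):

```python
import numpy as np

# Numerical sketch of Eqs. (omeg), (veff), (fstr) and of the resonant
# (first) term of Eq. (gam), with R_ij = 0 and illustrative values for
# k, S and rho_L (assumptions, not the paper's data).

def Gamma0(b):
    # Gamma_0 = exp(-b) I_0(b), the gyro-average factor
    return np.exp(-b) * np.i0(b)

def mode(n_tr, k=1.0, V_star=1.0, S=0.8, rho_L=1.0):
    n_fr = 1.0 - n_tr
    GF = Gamma0(0.5 * (k * rho_L) ** 2) * np.exp(-0.5 * (k * S) ** 2)
    # effective diamagnetic velocity, Eq. (veff), and frequency, Eq. (omeg)
    V_eff = V_star * (GF * (n_fr - n_tr) + 2.0 * n_tr) / (2.0 - GF)
    omega = k * V_eff
    # resonant term of the growth rate, Eq. (gam), prefactor set to one
    resonant = (k ** 2 * (V_star - V_eff)
                * (V_eff - n_tr / n_fr * V_star) / (2.0 - GF))
    return float(V_eff), float(omega), float(resonant)

V1, w1, g1 = mode(n_tr=0.05)   # early nonlinear stage: few trapped ions
V2, w2, g2 = mode(n_tr=0.5)    # equal trapped and free fractions
print(g1, g2)                  # g1 > 0 (driven), g2 < 0 (flows damp mode)
```

As the trapped fraction grows, $V_{\ast }^{eff}$ increases toward $V_{\ast }$ and the resonant term changes sign at $n_{tr}=n_{fr}$, reproducing the damping by the ion flows discussed below.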
The maximum of the spectrum, which appears for $\omega =k_{y}V_{\ast }/2,$ is obtained for smaller $k_{\perp },$ of the order of the inverse size of the trajectory structures, $k_{\perp }S\sim 1.$ This means that the unstable range of wave numbers is displaced toward small values. The maximum growth rate is not changed, but is displaced to wave numbers of the order $1/S.$ Consequently, in this stage both the amplitude of the turbulence and its correlation length increase. At larger amplitude of the background potential, when the fraction of trapped ions becomes comparable with the fraction of free ions, the ion flows induced by the moving potential become important. These flows determine the increase of the effective diamagnetic velocity (\ref{veff}) toward the diamagnetic velocity and the modification of the growth rate of the drift modes. The latter decreases, and for $n_{tr}=n_{fr}$ it is negative. The evolution of the amplitude becomes slower, and eventually the growth rate vanishes and changes sign. Thus, the flows of the ions induced by the moving potential produce the damping of the drift modes. The fluctuations of the diamagnetic velocity due to the background turbulence determine a direct contribution to the growth rate (the tensor $R_{ij}$). This term is zero for homogeneous and isotropic turbulence and strongly depends on the parameters of the anisotropy. The $i=j=1$ component corresponds to zonal flows. Preliminary results show that it appears for trapped particles due to the anisotropy induced by the ion flows associated with the moving potential.

\section{Summary and conclusions}

We have discussed the problem of stochastic advection of test particles by the $\mathbf{E\times B}$ drift in turbulent plasmas. We have shown that trajectory trapping or eddying has complex nonlinear effects on the statistical characteristics of the trajectories and on the transport. The nonlinear effects are very strong in the case of static potentials.
The trajectories are non-Gaussian; they show statistical memory and coherence, and they form structures. These properties persist if the system is weakly perturbed by the time variation of the potential or by other components of the motion (collisions, poloidal rotation, parallel motion). The memory effect (the long tail of the LVC) determines anomalous diffusion regimes. The process of trajectory trapping also influences the evolution of the turbulence. Recent results on test modes on turbulent plasmas are presented. They are based on a Lagrangian method that takes into account the trapping or eddying of the ions. The growth rate and the frequency of the drift modes on turbulent plasmas are estimated as functions of the characteristics of the turbulence. The effects of the background turbulence appear in the particle trajectories (the characteristics of the Vlasov equations) and in the fluctuations of the diamagnetic velocity produced by the density fluctuations. We show that the nonlinear process of trapping, which determines non-standard statistical properties of the trajectories, has a very strong and complex influence on the evolution of the turbulence. It appears when the amplitude of the $\mathbf{E}\times \mathbf{B}$ drift becomes larger than the diamagnetic velocity. A different physical perspective on the nonlinear evolution of drift waves is obtained. The main role is played by the trapping of the ions in the stochastic potential that moves with the diamagnetic velocity. We show that the moving potential determines flows of the ions when the amplitude of the $\mathbf{E}\times \mathbf{B}$ velocity is larger than the diamagnetic velocity. A part of the ions are trapped and move with the potential, while the other ions drift in the opposite direction. These opposite (zonal) flows compensate such that the average velocity is zero.
The evolution of the turbulence toward large wavelengths (the inverse cascade) is determined by ion trapping, which averages the potential and determines a smaller effective diamagnetic velocity. The ion flows produced by the moving potential determine the decay of the growth rate and eventually the damping of the drift modes, and they generate zonal flows through their nonlinear interaction with the fluctuations of the diamagnetic velocity.